Another reftable test has been ported to use the unit test framework.
* cp/unit-test-reftable-merged:
t-reftable-merged: add test for REFTABLE_FORMAT_ERROR
t-reftable-merged: use reftable_ref_record_equal to compare ref records
t-reftable-merged: add tests for reftable_merged_table_max_update_index
t-reftable-merged: improve the const-correctness of helper functions
t-reftable-merged: improve the test t_merged_single_record()
t: harmonize t-reftable-merged.c with coding guidelines
t: move reftable/merged_test.c to the unit testing framework
"git checkout --ours" (no other arguments) complained that the
option is incompatible with branch switching, which is technically
correct, but found confusing by some users. It now says that the
user needs to give pathspec to specify what paths to checkout.
* jc/checkout-no-op-switch-errors:
checkout: special case error messages during noop switching
"git add -p" by users with diff.suppressBlankEmpty set to true
failed to parse the patch that represents an unmodified empty line
with an empty line (not a line with a single space on it), which
has been corrected.
* pw/add-patch-with-suppress-blank-empty:
add-patch: use normalize_marker() when recounting edited hunk
add-patch: handle splitting hunks with diff.suppressBlankEmpty
Many Porcelain commands that internally use the merge machinery
were taught to consistently honor the diff.algorithm configuration.
* ad/merge-with-diff-algorithm:
merge-recursive: honor diff.algorithm
Unit test clean-up.
* rs/t-strvec-use-test-msg:
t-strvec: fix type mismatch in check_strvec
t-strvec: improve check_strvec() output
t-strvec: use test_msg()
The comment surrounding check_pointer_eq() should explain what
this function does instead of explaining check_int(). Correct this.
Signed-off-by: Kousik Sanagavarapu <five231003@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Repacking a repository with multi-pack index started making stupid
pack selections in Git 2.45, which has been corrected.
* ds/midx-write-repack-fix:
midx-write: revert use of --stdin-packs
t5319: add failing test case for repack/expire
When "add -p" parses diffs, it looks for context lines starting with a
single space. But when diff.suppressBlankEmpty is in effect, an empty
context line will omit the space, giving us a true empty line. This
confuses the parser, which is unable to split based on such a line.
It's tempting to say that we should just make sure that we generate a
diff without that option. However, although we do not parse hunks that
the user has manually edited with parse_diff(), we do allow the user
to split such hunks. As POSIX calls the decision of whether to print the
space here "implementation-defined", we need to handle edited hunks where
empty context lines omit the space.
So let's handle both cases: whether a context line starts with a space
or consists of a totally empty line, normalize its first character to a
space when we parse it. Normalizing the first character rather than
changing the code to check for a space or newline will hopefully
future-proof against introducing similar bugs if the code is changed.
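To illustrate the breakage this fixes, a minimal reproduction might look
like this (the file name and contents are only an example, not taken
from the actual tests):
git config diff.suppressBlankEmpty true
printf 'a\n\nb\n' >file && git add file && git commit -m base
printf 'A\n\nB\n' >file
git add -p
Before this change, answering 's' at the prompt could not split the hunk
at the empty context line; now it can.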
Reported-by: Ilya Tumaykin <itumaykin@gmail.com>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This reverts b7d6f23a17 (midx-write.c: use `--stdin-packs` when
repacking, 2024-04-01) and then marks the test created in the previous
change as passing.
The fundamental issue with the reverted change is that the focus on
pack-files separates the object selection from how the multi-pack-index
selects a single pack-file for an object ID with multiple copies among
the tracked pack-files.
The change was made with the intention of improving delta compression in
the resulting pack-file, but that can be resolved with the existing
object list mechanism. There are other potential pitfalls of doing an
object walk at this time if the repository is a blobless partial clone,
and that will require additional testing on top of the test changed
here.
Signed-off-by: Derrick Stolee <stolee@gmail.com>
Acked-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Git 2.45.0 included the change b7d6f23a17 (midx-write.c: use
`--stdin-packs` when repacking, 2024-04-01) which caused the 'git
multi-pack-index repack' command to use 'git pack-objects --stdin-packs'
instead of listing the objects to repack. While this change was
motivated by efficient cross-process communication and the ability to
improve delta compression, it breaks a fundamental function of the
'incremental-repack' task that is enabled by default in Scalar clones or
Git repositories that run 'git maintenance start'.
The 'incremental-repack' task performs a two-step process of the
'expire' and 'repack' subcommands of the 'git multi-pack-index' builtin.
The 'expire' command removes any pack-files listed in the
multi-pack-index but without any referenced objects. The 'repack' task
then finds a batch of pack-files to repack and sends their objects to
'git pack-objects'. Both the pack-files chosen for the batch and the
objects chosen to repack are based on the ones that the multi-pack-index
references. Objects that appear in a pack-file but have a duplicate copy
in a newer pack-file are not considered in this case. Since the
multi-pack-index references only the newest copy of an object, this
allows the next 'incremental-repack' task to remove the pack-files in
the next 'expire' task. This delay is intentional because Windows may
block deletion of files that still have open read handles.
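For reference, the two steps of the task correspond roughly to the
following invocations (the batch size shown here is only illustrative):
git multi-pack-index expire
git multi-pack-index repack --batch-size=2g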
However, the mentioned commit changed this behavior to divorce the
object selection from the set of objects referenced by the
multi-pack-index and instead use a set of "included" and "excluded"
pack-files in the 'git pack-objects' builtin. When a pack-file is
selected as "included", only the objects it contains that are not in any
"excluded" pack-file are considered for repacking.
This has led to client repositories failing to remove old pack-files as
they still have some referenced objects. This grows over time until the
point that Git is trying to repack the same pack-files over and over.
For now, create a test case that demonstrates the expected behavior, but
also fails in its final line. The setup here is attempting to recreate a
typical situation for a repository that uses a blobless partial clone.
There would be a large initial pack-file from the clone that is never
selected in the 'repack' batch. There are other pack-files that have a
combination of new objects from incremental fetches and possibly blobs
that are not connected to those incremental fetches; these blobs could
be filled in from commands like 'git checkout' or 'git blame'. The
pack-files also have some overlap on purpose, so test-1 has some
duplicates in test-2 and test-2 has some duplicates in test-3.
At the end of the test, the test-2 pack-file still exists though it
should have been expired. This test will pass when reverting the
offending commit.
Signed-off-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git var GIT_SHELL_PATH" should report the path to the shell used
to spawn external commands, but it didn't do so on Windows, which
has been corrected.
* js/var-git-shell-path:
var(win32): do report the GIT_SHELL_PATH that is actually used
run-command: declare the `git_shell_path()` function globally
run-command(win32): resolve the path to the Unix shell early
mingw(is_msys2_sh): handle forward slashes in the `sh.exe` path, too
win32: override `fspathcmp()` with a directory separator-aware version
strvec: declare the `strvec_push_nodup()` function globally
run-command: refactor getting the Unix shell path into its own function
"git push '' HEAD:there" used to hit a BUG(); it has been corrected
to die with "fatal: bad repository ''".
* kn/push-empty-fix:
builtin/push: call set_refspecs after validating remote
The test framework learned to take the test body not as a single
string but as a here-document.
* jk/test-body-in-here-doc:
t/.gitattributes: ignore whitespace in chainlint expect files
t: convert some here-doc test bodies
test-lib: allow test snippets as here-docs
chainlint.pl: add tests for test body in heredoc
chainlint.pl: recognize test bodies defined via heredoc
chainlint.pl: check line numbers in expected output
chainlint.pl: force CRLF conversion when opening input files
chainlint.pl: do not spawn more threads than we have scripts
chainlint.pl: only start threads if jobs > 1
chainlint.pl: add test_expect_success call to test snippets
Tests that use the GIT_TEST_SANITIZE_LEAK_LOG feature got their exit
status inverted, which has been corrected.
* rj/test-sanitize-leak-log-fix:
test-lib: GIT_TEST_SANITIZE_LEAK_LOG enabled by default
test-lib: fix GIT_TEST_SANITIZE_LEAK_LOG
Commit 852a171018 (am: let command-line options override saved options,
2015-08-04) redirected a few "git am" invocations from /dev/zero, even
though it did not expect "am" to read the input. This was necessary at
the time because those tests used test_terminal, and as described in
18d8c26930 (test_terminal: redirect child process' stdin to a pty,
2015-08-04):
Note that due to the way the code is structured, the child's stdin
pseudo-tty will be closed when we finish reading from our stdin. This
means that in the common case, where our stdin is attached to /dev/null,
the child's stdin pseudo-tty will be closed immediately. Some operations
like isatty(), which git-am uses, require the file descriptor to be
open, and hence if the success of the command depends on such functions,
test_terminal's stdin should be redirected to a source with large amount
of data to ensure that the child's stdin is not closed, e.g.
test_terminal git am --3way </dev/zero
But we later dropped the use of test_terminal in 53ce2e3f0a (am: add
explicit "--retry" option, 2024-06-06). That commit dropped one of the
redirections from /dev/zero but not the other.
In theory the remaining one should not cause any problems, but it turns
out that at least one platform (NonStop) does not have /dev/zero at all.
We never noticed before because it also did not pass the TTY prereq,
meaning these tests were not run at all there until 53ce2e3f0a.
So let's drop the useless /dev/zero mention. There are others in the
test suite, but they are run only for tests marked with EXPENSIVE (so
not typically by default).
Reported-by: Randall S. Becker <rsbecker@nexbridge.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The http transport can now be told to send requests with
authentication material without first getting a 401 response.
* bc/http-proactive-auth:
http: allow authenticating proactively
A new warning message is issued when a command has to expand a
sparse index to handle working tree cruft that is outside of the
sparse checkout.
* ds/advice-sparse-index-expansion:
advice: warn when sparse index expands
Address-looking strings found in the trailer are now placed on the
Cc: list by "git send-email" after being run through sanitize_address.
* cb/send-email-sanitize-trailer-addresses:
git-send-email: use sanitized address when reading mbox body
The "ort" merge backend saw one bugfix for a crash that happens
when inner merge gets killed, and assorted code clean-ups.
* en/ort-inner-merge-error-fix:
merge-ort: fix missing early return
merge-ort: convert more error() cases to path_msg()
merge-ort: upon merge abort, only show messages causing the abort
merge-ort: loosen commented requirements
merge-ort: clearer propagation of failure-to-function from merge_submodule
merge-ort: fix type of local 'clean' var in handle_content_merge ()
merge-ort: maintain expected invariant for priv member
merge-ort: extract handling of priv member into reusable function
Cast i from size_t to uintmax_t to match the format string.
Reported-by: Jeff King <peff@peff.net>
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A test in reftable library has been rewritten using the unit test
framework.
* cp/unit-test-reftable-record:
t-reftable-record: add tests for reftable_log_record_compare_key()
t-reftable-record: add tests for reftable_ref_record_compare_name()
t-reftable-record: add index tests for reftable_record_is_deletion()
t-reftable-record: add obj tests for reftable_record_is_deletion()
t-reftable-record: add log tests for reftable_record_is_deletion()
t-reftable-record: add ref tests for reftable_record_is_deletion()
t-reftable-record: add comparison tests for obj records
t-reftable-record: add comparison tests for index records
t-reftable-record: add comparison tests for ref records
t-reftable-record: add reftable_record_cmp() tests for log records
t: move reftable/record_test.c to the unit testing framework
"git push" that pushes only deletion gave an unnecessary and
harmless error message when push negotiation is configured, which
has been corrected.
* jc/disable-push-nego-for-deletion:
push: avoid showing false negotiation errors
The test suite has been taught not to unnecessarily rely on DNS failing
to resolve a bogus external name.
* jk/tests-without-dns:
t/lib-bundle-uri: use local fake bundle URLs
t5551: do not confirm that bogus url cannot be used
t5553: use local url for invalid fetch
An existing test of oidmap API has been rewritten with the
unit-test framework.
* gt/unit-test-oidmap:
t: migrate helper/test-oidmap.c to unit-tests/t-oidmap.c
"git describe --dirty --broken" forgot to refresh the index before
seeing if there is any chang, ("git describe --dirty" correctly did
so), which has been corrected.
* as/describe-broken-refresh-index-fix:
describe: refresh the index when 'broken' flag is used
The macro check_strvec calls the function check_strvec_loc(), which
performs the actual checks. They report the line number inside that
function on error, which is not very helpful. Before the previous
patch half of them triggered an assertion that reported the caller's
line number using a custom message, which was more useful, but a bit
awkward.
Improve the output by getting rid of check_strvec_loc() and performing
all checks within check_strvec, as they then report the line number of
the call site, aiding in finding the broken test. Determine the number
of items and check it up front to avoid having to do them both in the
loop and at the end.
Sanity check the expected items to make sure there are any and that the
last one is NULL, as the compiler no longer does that for us with the
removal of the function attribute LAST_ARG_MUST_BE_NULL.
Use only the actual strvec name passed to the macro, the internal
"expect" array name and an index "i" in the output, for clarity. While
"expect" does not exist at the call site, it's reasonably easy to infer
that it's referring to the NULL-terminated list of expected strings,
converted to an array.
Here's the output with fewer items than expected in the strvec before:
# check "vec->nr > nr" failed at t/unit-tests/t-strvec.c:19
# left: 1
# right: 1
... and with the patch:
# check "(&vec)->nr == ARRAY_SIZE(expect) - 1" failed at t/unit-tests/t-strvec.c:53
# left: 1
# right: 2
With too many items in the strvec we got before:
# check "vec->nr == nr" failed at t/unit-tests/t-strvec.c:34
# left: 1
# right: 0
# check "vec->v[nr] == NULL" failed at t/unit-tests/t-strvec.c:36
# left: 0x6000004b8010
# right: 0x0
... and with the patch:
# check "(&vec)->nr == ARRAY_SIZE(expect) - 1" failed at t/unit-tests/t-strvec.c:53
# left: 1
# right: 0
A broken alloc value was reported like this:
# check "vec->alloc > nr" failed at t/unit-tests/t-strvec.c:20
# left: 0
# right: 0
... and with the patch:
# check "(&vec)->nr <= (&vec)->alloc" failed at t/unit-tests/t-strvec.c:56
# left: 2
# right: 0
An unexpected string value was reported like this:
# check "!strcmp(vec->v[nr], str)" failed at t/unit-tests/t-strvec.c:24
# left: "foo"
# right: "bar"
# nr: 0
... and with the patch:
# check "!strcmp((&vec)->v[i], expect[i])" failed at t/unit-tests/t-strvec.c:53
# left: "foo"
# right: "bar"
# i: 0
If the strvec is not NULL terminated, we got:
# check "vec->v[nr] == NULL" failed at t/unit-tests/t-strvec.c:36
# left: 0x102c3abc8
# right: 0x0
... and with the patch we get the line number of the caller:
# check "!strcmp((&vec)->v[i], expect[i])" failed at t/unit-tests/t-strvec.c:53
# left: "bar"
# right: NULL
# i: 1
check_strvec calls without a trailing NULL were detected at compile time
before:
t/unit-tests/t-strvec.c:71:2: error: missing sentinel in function call [-Werror,-Wsentinel]
... and with the patch it's only found at runtime:
# check "expect[ARRAY_SIZE(expect) - 1] == NULL" failed at t/unit-tests/t-strvec.c:53
# left: 0x100e5a663
# right: 0x0
We can let check_strvec add the terminating NULL for us and remove it
from callers, making it impossible to forget. Leave that conversion for
a future patch, though, since this reimplementation is already intrusive
enough.
Reported-by: Jeff King <peff@peff.net>
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The documentation claims that "recursive defaults to the diff.algorithm
config setting", but this is currently not the case. This fixes it,
ensuring that diff.algorithm is used when -Xdiff-algorithm is not
supplied. This affects the following porcelain commands: "merge",
"rebase", "cherry-pick", "pull", "stash", "log", "am" and "checkout".
It also affects the "merge-tree" ancillary interrogator.
This change refactors the initialization of merge options to introduce
two functions, "init_merge_ui_options" and "init_merge_basic_options"
instead of just one "init_merge_options". This design follows the
approach used in diff.c, providing initialization methods for
porcelain and plumbing commands respectively. Thanks to that, the
"replay" and "merge-recursive" plumbing commands remain unaffected by
diff.algorithm.
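As a quick illustration of the intended behavior (not a test from this
series), the configured algorithm now carries over to a plain merge:
git config diff.algorithm histogram
git merge topic
while passing -Xdiff-algorithm on the command line still overrides the
configuration.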
Signed-off-by: Antonin Delpeuch <antonin@delpeuch.eu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
On Windows, Unix-like paths like `/bin/sh` make very little sense. In
the best case, they simply don't work; in the worst case, they are
misinterpreted as absolute paths that are relative to the drive
associated with the current directory.
To that end, Git does not actually use the path `/bin/sh` that is
recorded e.g. when `run_command()` is called with a Unix shell
command-line. Instead, as of 776297548e (Do not use SHELL_PATH from
build system in prepare_shell_cmd on Windows, 2012-04-17), it
re-interprets `/bin/sh` as "look up `sh` on the `PATH` and use the
result instead".
This is the logic users expect to be followed when running `git var
GIT_SHELL_PATH`.
However, when 1e65721227 (var: add support for listing the shell,
2023-06-27) introduced support for `git var GIT_SHELL_PATH`, Windows was
not special-cased as above, which is why it outputs `/bin/sh` even
though that disagrees with what Git actually uses.
Let's fix this by using the exact same logic as `prepare_shell_cmd()`,
adjusting the Windows-specific `git var GIT_SHELL_PATH` test case to
verify that it actually finds a working executable.
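With this change, the variable reports the resolved path on Windows as
well, e.g. (the exact path shown here is illustrative and depends on the
installation):
$ git var GIT_SHELL_PATH
C:/Program Files/Git/usr/bin/sh.exe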
Reported-by: Phillip Wood <phillip.wood123@gmail.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When calling reftable_new_merged_table(), if the hash ID of the
passed reftable_table parameter doesn't match the passed hash_id
parameter, a REFTABLE_FORMAT_ERROR is thrown. This case is
currently left unexercised, so add a test for the same.
Mentored-by: Patrick Steinhardt <ps@pks.im>
Mentored-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Chandra Pratap <chandrapratap3519@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In the test t_merged_single_record() defined in t-reftable-merged.c,
the 'input' and 'expected' ref records are checked for equality
by comparing their update indices. It is very much possible for
two different ref records to have the same update indices. Use
reftable_ref_record_equal() instead for a stronger check.
Mentored-by: Patrick Steinhardt <ps@pks.im>
Mentored-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Chandra Pratap <chandrapratap3519@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
reftable_merged_table_max_update_index() as defined by reftable/
merged.{c, h} returns the maximum update index in a merged table.
Since this function is currently unexercised, add tests for it.
Mentored-by: Patrick Steinhardt <ps@pks.im>
Mentored-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Chandra Pratap <chandrapratap3519@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In t-reftable-merged.c, a number of helper functions used by the
tests can be re-defined with parameters made 'const', which makes
it easier to understand whether they're read-only or not. Re-define
these functions along these lines.
Mentored-by: Patrick Steinhardt <ps@pks.im>
Mentored-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Chandra Pratap <chandrapratap3519@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In t-reftable-merged.c, the test t_merged_single_record() ensures
that a ref ('a') which occurs in only one of the records ('r2')
can be retrieved. Improve this test by adding another record 'r3'
to ensure that ref 'a' only occurs in 'r2' and that merged tables
don't simply read the last record.
Mentored-by: Patrick Steinhardt <ps@pks.im>
Mentored-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Chandra Pratap <chandrapratap3519@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Harmonize the newly ported test unit-tests/t-reftable-merged.c
with the following guidelines:
- Single line control flow statements like 'for' and 'if'
must omit curly braces.
- Structs must be 0-initialized with '= { 0 }' instead of '= { NULL }'.
- Array indices should preferably be of type 'size_t', not 'int'.
- It is fine to use C99 initial declaration in 'for' loop.
While at it, use 'ARRAY_SIZE(x)' to store the number of elements
in an array instead of hardcoding them.
Mentored-by: Patrick Steinhardt <ps@pks.im>
Mentored-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Chandra Pratap <chandrapratap3519@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
reftable/merged_test.c exercises the functions defined in
reftable/merged.{c, h}. Migrate reftable/merged_test.c to the unit
testing framework. Migration involves refactoring the tests
to use the unit testing framework instead of reftable's test
framework and renaming the tests according to unit-tests' naming
conventions.
Also, move strbuf_add_void() and noop_flush() from
reftable/test_framework.c to the ported test. This is because
both these functions are used in the merged tests and
reftable/test_framework.{c, h} is not #included in the ported test.
Mentored-by: Patrick Steinhardt <ps@pks.im>
Mentored-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Chandra Pratap <chandrapratap3519@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When an end-user runs "git push" with an empty string for the remote
repository name, e.g.
$ git push '' main
"git push" fails with a BUG(). Even though this is a nonsense request
that we want to fail, we shouldn't hit a BUG(). Instead we want to give
a sensible error message, e.g., "bad repository ''".
This is because since 9badf97c42 (remote: allow resetting url list,
2024-06-14), we reset the remote URL if the provided URL is empty. When
a user of 'remotes_remote_get' tries to fetch a remote with an empty
repo name, the function initializes the remote via 'make_remote'. But
the remote is still not a valid remote, since the URL is empty, so it
tries to add the URL alias using 'add_url_alias'. This in turn will call
'add_url', but since the URL is empty we call 'strvec_clear' on the
`remote->url`. Back in 'remotes_remote_get', we again check if the
remote is valid, which fails, so we return 'NULL' for the 'struct
remote *' value.
The 'builtin/push.c' code calls 'set_refspecs' before validating the
remote. This worked with empty repo names earlier since we would get a
remote, albeit with an empty URL. With the new changes, we get a 'NULL'
remote value, which causes the check for the remote to fail and raises
the BUG in 'set_refspecs'.
Do a simple fix by doing remote validation first. Also add a test to
validate the bug fix. With this, we can also now directly pass the
remote to 'set_refspecs' instead of having it lazily obtain one itself.
Helped-by: Jeff King <peff@peff.net>
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Karthik Nayak <karthik.188@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
As we currently describe in t/README, it can happen that:
Some tests run "git" (or "test-tool" etc.) without properly checking
the exit code, or git will invoke itself and fail to ferry the
abort() exit code to the original caller.
Therefore, GIT_TEST_SANITIZE_LEAK_LOG=true needs to be set to
capture all memory leaks triggered by our tests.
It seems unnecessary to force users to remember this option, as
forgetting it could lead to missed memory leaks.
We could solve the problem by making it "true" by default, but that
might suggest we think "false" makes sense, which isn't the case.
Therefore, the best approach is to remove the option entirely while
maintaining the capability to detect memory leaks in blind spots of our
tests.
Signed-off-by: Rubén Justo <rjusto@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The ".expect" files in t/chainlint/ are snippets of expected output from
the chainlint script, and do not necessarily conform to our usual code
style. Especially with the recent change to retain line numbers, blank
lines in the input script end up with trailing whitespace as we print
"3 " for line 3, for example. The point of these files is to match the
output verbatim, so let's not complain about the trailing spaces.
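The fix amounts to unsetting the whitespace attribute for those files in
t/.gitattributes, along the lines of (the exact pattern shown here is
illustrative):
chainlint/*.expect -whitespace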
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The t1404 script checks a lot of output from Git which contains single
quotes. Because the test snippets are themselves wrapped in the same
single-quotes, we have to resort to using $SQ to match them. This is
error-prone and makes the tests harder to read.
Instead, let's use the new here-doc feature added in the previous
commit, which lets us write anything in the test body we want (except
the here-doc end marker on a line by itself, of course).
Note that we do use "\" in our marker to avoid interpolation (which is
the whole point). But we don't use "<<-", as we want to preserve
whitespace in the snippet (and running with "-v" before and after shows
that we produce the exact same output, except with the ugly $SQ
references fixed).
I just converted every test here, even though only some of them use
$SQ. But it would be equally correct to mix-and-match styles if we don't
mind the inconsistency.
I've also converted a few tests in t0600 which were moved from t1404 (I
had written this patch before they were moved, but it seemed worth
porting over the changes rather than losing them).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Most test snippets are wrapped in single quotes, like:
test_expect_success 'some description' '
do_something
'
This sometimes makes the snippets awkward to write, because you can't
easily use single quotes within them. We sometimes work around this with
$SQ, or by loosening regexes to use "." instead of a literal quote, or
by using double quotes when we'd prefer to use single-quotes (and just
adding extra backslash-escapes to avoid interpolation).
This commit adds another option: feeding the snippet via the function's
stdin. This doesn't conflict with anything the snippet would want to do,
because we always redirect its stdin from /dev/null anyway (which we'll
continue to do).
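For example, a snippet that wants literal single quotes can now be
written as follows (a hypothetical test, shown only to illustrate the
calling convention):
test_expect_success 'single quotes are no problem' - <<\EOT
echo "it's easy now" >expect &&
echo "it's easy now" >actual &&
test_cmp expect actual
EOT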
A few notes on the implementation:
- it would be nice to push this down into test_run_, but we can't, as
test_expect_success and test_expect_failure want to see the actual
script content to report it for verbose-mode. A helper function
limits the amount of duplication in those callers here.
- The helper function is a little awkward to call, as you feed it the
name of the variable you want to set. The more natural thing in
shell would be command substitution like:
body=$(body_or_stdin "$2")
but that loses trailing whitespace. There are tricks around this,
like:
body=$(body_or_stdin "$2"; printf .)
body=${body%.}
but we'd prefer to keep such tricks in the helper, not in each
caller.
- I implemented the helper using a sequence of "read" calls. Together
with "-r" and unsetting the IFS, this preserves incoming whitespace.
An alternative is to use "cat" (which then requires the gross "."
trick above). But this saves us a process, which is probably a good
thing. The "read" builtin does use more read() syscalls than
necessary (one per byte), but that is almost certainly a win over a
separate process.
Both are probably slower than passing a single-quoted string, but
the difference is lost in the noise for a script that I converted as
an experiment.
- I handle test_expect_success and test_expect_failure here. If we
like this style, we could easily extend it to other spots (e.g.,
lazy_prereq bodies) on top of this patch.
- even though we are using "local", we have to be careful about our
variable names. Within test_expect_success, any variable we declare
with local will be seen as local by the test snippets themselves (so
it wouldn't persist between tests like normal variables would).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The chainlint.pl script recently learned about the upcoming:
test_expect_success 'some test' - <<\EOT
TEST_BODY
EOT
syntax, where TEST_BODY should be checked in the usual way. Let's make
sure this works by adding a few tests. The "here-doc-body" file tests
the basic syntax, including an embedded here-doc which we should still
be able to recognize.
Likewise the "here-doc-body-indent" checks the same thing, but using the
"<<-" operator. We wouldn't expect this to be used normally, but we
would not want to accidentally miss a body that uses it. The
"pathological" variant checks the opposite: we don't get confused by an
indented tag within the here-doc body.
The "here-doc-double" tests the handling of two here-doc tags on the
same line. This is not something we'd expect anybody to do in practice,
but the code was written defensively to handle this, so let's make sure
it works.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In order to check tests for semantic problems, chainlint.pl scans test
scripts, looking for tests defined as:
test_expect_success [prereq] title '
body
'
where `body` is a single string which is then treated as a standalone
chunk of code and "linted" to detect semantic issues. (The same happens
for `test_expect_failure` definitions.)
The introduction of test definitions in which the test body is instead
presented via a heredoc rather than as a single string creates a blind
spot in the linting process since such invocations are not recognized by
chainlint.pl.
Prepare for this new style by also recognizing tests defined as:
test_expect_success [prereq] title - <<\EOT
body
EOT
A minor complication is that chainlint.pl has never considered heredoc
bodies significant since it doesn't scan them for semantic problems,
thus it has always simply thrown them away. However, with the new
`test_expect_success` calling sequence, heredoc bodies become
meaningful, thus need to be captured.
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
While working on chainlint.pl recently, we introduced some bugs that
showed incorrect line numbers in the output. But it was hard to notice,
since we sanitize the output by removing all of the line numbers! It
would be nice to retain these so we can catch any regressions.
The main reason we sanitize is for maintainability: we concatenate all
of the test snippets into a single file, so it's hard for each ".expect"
file to know at which offset its test input will be found. We can handle
that by storing the per-test line numbers in the ".expect" files, and
then dynamically offsetting them as we build the concatenated test and
expect files together.
The changes to the ".expect" files look like tedious boilerplate, but it
actually makes adding new tests easier. You can now just run:
perl chainlint.pl chainlint/foo.test |
tail -n +2 >chainlint/foo.expect
to save the output of the script minus the comment headers (after
checking that it is correct, of course). Whereas before you had to strip
the line numbers. The conversions here were done mechanically using
something like the script above, and then spot-checked manually.
It would be possible to do all of this in shell via the Makefile, but it
gets a bit complicated (and requires a lot of extra processes). Instead,
I've written a short perl script that generates the concatenated files
(we already depend on perl, since chainlint.pl uses it). Incidentally,
this improves a few other things:
- we incorrectly used $(CHAINLINTTMP_SQ) inside a double-quoted
string. So if your test directory required quoting, like:
make "TEST_OUTPUT_DIRECTORY=/tmp/h'orrible"
we'd fail the chainlint tests.
- the shell in the Makefile didn't handle &&-chaining correctly in its
loops (though in practice the "sed" and "cat" invocations are not
likely to fail).
- likewise, the sed invocation to strip numbers was hiding the exit
code of chainlint.pl itself. In practice this isn't a big deal;
since there are linter violations in the test files, we expect it to
exit non-zero. But we could later use exit codes to distinguish
serious errors from expected ones.
- we now use a constant number of processes, instead of scaling with
the number of test scripts. So it should be a little faster (on my
machine, "make check-chainlint" goes from 133ms to 73ms).
There are some alternatives to this approach, but I think this is still
a good intermediate step:
1. We could invoke chainlint.pl individually on each test file, and
compare it to the expected output (and possibly using "make" to
avoid repeating already-done checks). This is a much bigger change
(and we'd have to figure out what to do with the "# LINT" lines in
the inputs). But in this case we'd still want the "expect" files to
be annotated with line numbers. So most of what's in this patch
would be needed anyway.
2. Likewise, we could run a single chainlint.pl and feed it all of the
scripts (with "--jobs=1" to get deterministic output). But we'd
still need to annotate the scripts as we did here, and we'd still
need to either assemble the "expect" file, or break apart the
script output to compare to each individual ".expect" file.
So we may pursue those in the long run, but this patch gives us more
robust tests without too much extra work or moving in a useless
direction.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The lexer in chainlint.pl can't handle CRLF line endings; it complains
about an internal error in scan_token() if we see one. For example, in
our Windows CI environment:
$ perl chainlint.pl chainlint/for-loop.test | cat -v
Thread 2 terminated abnormally: internal error scanning character '^M'
This doesn't break "make check-chainlint" (yet), because we assemble a
concatenated input by passing the contents of each file through "sed".
And the "sed" we use will strip out the CRLFs. But the next patch is
going to rework this a bit, which does break check-chainlint on Windows.
Plus it's probably nicer to folks on Windows who might work on chainlint
itself and write new tests.
In theory we could fix the parser to handle this, but it's not really
worth the trouble. We should be able to ask the input layer to translate
the line endings for us. In fact, I'd expect this to happen by default,
as perl's documentation claims Win32 uses the ":unix:crlf" PERLIO layer
by default ("unix" here just refers to using read/write syscalls, and
then "crlf" layers the translation on top). However, this doesn't seem
to be the case in our Windows CI environment. I didn't dig into the
exact reason, but it is perhaps because we are using an msys build of
perl rather than a "true" Win32 build.
At any rate, it is easy-ish to just ask explicitly for the conversion.
In the above example, setting PERLIO=crlf in the environment is enough
to make it work. Curiously, though, this doesn't work when invoking
chainlint via "make". Again, I didn't dig into it, but it may have to do
with msys programs calling Windows programs or vice versa.
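That workaround looks like this when running the script by hand (it
illustrates the observation above; the actual fix described below lives
in the script itself):
PERLIO=crlf perl chainlint.pl chainlint/for-loop.test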
We can make it work consistently by just explicitly asking for CRLF
translation when we open the files. This will even work on non-Windows
platforms, though we wouldn't really expect to find CRLF files there.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The chainlint.pl script spawns worker threads to check many scripts in
parallel. This is good if you feed it a lot of scripts. But if you give
it few (or one), then the overhead of spawning the threads dominates. We
can easily notice that we have fewer scripts than threads and scale back
as appropriate.
This patch reduces the time to run:
time for i in chainlint/*.test; do
perl chainlint.pl $i
done >/dev/null
on my system from ~4.1s to ~1.1s, where I have 8+8 cores.
As with the previous patch, this isn't the usual way we run chainlint
(we feed many scripts at once, which is why it supports threading in the
first place). So this won't make a big difference in the real world, but
it may help us out in the future, and it makes experimenting with and
debugging the chainlint tests a bit more pleasant.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If the system supports threads, chainlint.pl will always spawn worker
threads to do the real work. But when --jobs=1, this is pointless, since
we could just do the work in the main thread. And spawning even a single
thread has a high overhead. For example, on my Linux system, running:
for i in chainlint/*.test; do
perl chainlint.pl --jobs=1 $i
done >/dev/null
takes ~1.7s without this patch, and ~1.1s after. We don't usually spawn
a bunch of individual chainlint.pl processes (instead we feed several
scripts at once, and the parallelism outweighs the setup cost). But it's
something we've considered doing, and since we already have fallback
code for systems without thread support, it's pretty easy to make this
work.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>