odb: guard against data loss checking out a huge file

This introduces an additional guard for platforms where `unsigned long`
and `size_t` are not of the same size. If the size of an object in the
database would overflow `unsigned long`, we now exit with an error
instead.

A complete fix will have to update _many_ other functions throughout the
codebase to use `size_t` instead of `unsigned long`; that conversion
will have to happen at some stage.

This commit puts in a stop-gap for the time being.
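
To make the described behavior concrete, here is a minimal,
self-contained sketch of the kind of checked conversion this stop-gap
relies on. It is modelled on the `cast_size_t_to_ulong()` call visible
in the diff below; the `die()` stand-in, the message text, and the demo
in `main()` are illustrative assumptions, not the codebase's actual
definitions.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <inttypes.h>

/* Hypothetical stand-in for git's die(): report a fatal error and exit. */
static void die(const char *msg, uintmax_t value)
{
	fprintf(stderr, "fatal: %s: %" PRIuMAX "\n", msg, value);
	exit(128);
}

/*
 * Checked narrowing conversion: where unsigned long is narrower than
 * size_t (e.g. 64-bit Windows), a huge object size would silently lose
 * its upper bits; refuse and exit with an error instead.
 */
static inline unsigned long cast_size_t_to_ulong(size_t a)
{
	if (a != (unsigned long)a)
		die("object too large to read on this platform", (uintmax_t)a);
	return (unsigned long)a;
}

int main(void)
{
	size_t huge = (size_t)1 << (sizeof(size_t) * 8 - 1);

	/* Dies where unsigned long is narrower than size_t; prints otherwise. */
	printf("%lu\n", cast_size_t_to_ulong(huge));
	return 0;
}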

Helped-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Matt Cooper <vtbassmatt@gmail.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Author: Matt Cooper, 2021-11-02 15:46:10 +00:00
Committed by: Junio C Hamano
Commit: d6a09e795d (parent: e2ffeae3f6)
3 changed files with 9 additions and 9 deletions

@@ -90,15 +90,15 @@ static inline unsigned long get_delta_hdr_size(const unsigned char **datap,
 					       const unsigned char *top)
 {
 	const unsigned char *data = *datap;
-	unsigned long cmd, size = 0;
+	size_t cmd, size = 0;
 	int i = 0;
 	do {
 		cmd = *data++;
-		size |= (cmd & 0x7f) << i;
+		size |= st_left_shift(cmd & 0x7f, i);
 		i += 7;
 	} while (cmd & 0x80 && data < top);
 	*datap = data;
-	return size;
+	return cast_size_t_to_ulong(size);
 }
 
 #endif
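
For reference, the checked shift in the hunk above can be read as
follows. This is only a plausible sketch of what `st_left_shift()`
might look like (git's existing `st_add()`/`st_mult()` helpers follow
the same die-on-overflow pattern); the exact definition, the `die()`
stand-in, and the demo delta header are assumptions for illustration.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Hypothetical stand-in for git's die(). */
static void die(const char *msg)
{
	fprintf(stderr, "fatal: %s\n", msg);
	exit(128);
}

/*
 * Plausible sketch of a checked left shift: shifting a 7-bit chunk of
 * the delta header must not push set bits off the top of size_t,
 * otherwise the accumulated size silently wraps. Die instead.
 */
static inline size_t st_left_shift(size_t a, unsigned shift)
{
	if (shift >= sizeof(size_t) * 8 || a > (SIZE_MAX >> shift))
		die("size_t overflow in left shift");
	return a << shift;
}

int main(void)
{
	/* Decode a varint the same way get_delta_hdr_size() does. */
	const unsigned char hdr[] = { 0xff, 0xff, 0xff, 0xff, 0x7f }; /* 2^35 - 1 */
	const unsigned char *data = hdr, *top = hdr + sizeof(hdr);
	size_t cmd, size = 0;
	int i = 0;

	do {
		cmd = *data++;
		size |= st_left_shift(cmd & 0x7f, i);
		i += 7;
	} while (cmd & 0x80 && data < top);

	/* Prints 34359738367 with a 64-bit size_t; dies in the shift with a 32-bit one. */
	printf("decoded size: %zu\n", size);
	return 0;
}

Compared to the original `(cmd & 0x7f) << i`, the checked variant turns
a silent wrap-around into a hard error, matching the commit's goal of
failing loudly rather than corrupting a checkout.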