On Wed, Oct 31, 2018 at 01:03:39AM -0400, Jeff King wrote:

> Phew. I almost just deleted all of the above, because now I think I'm
> ready to write that comment you asked for. ;) But I left it since maybe
> it makes sense to follow my thought process.

So here it is in a more succinct form.

-Peff

-- >8 --
Subject: [PATCH] read_istream_pack_non_delta(): document input handling

Twice now we have scratched our heads about why the loose streaming code
needs the protection added by 692f0bc7ae (avoid infinite loop in
read_istream_loose, 2013-03-25), but the similar code in its pack
counterpart does not.

The short answer is that use_pack() will die before it lets us run out
of bytes. Note that this could mean reading garbage (including the
trailing hash) from the packfile in some cases of corruption, but that's
OK; zlib will notice and complain (and if not, certainly the end result
will not match the object hash we expect).

Let's leave a comment this time to document our findings.

Signed-off-by: Jeff King <p...@peff.net>
---
 streaming.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/streaming.c b/streaming.c
index d1e6b2dce6..ac7c7a22f9 100644
--- a/streaming.c
+++ b/streaming.c
@@ -408,6 +408,15 @@ static read_method_decl(pack_non_delta)
                        st->z_state = z_done;
                        break;
                }
+
+               /*
+                * Unlike the loose object case, we do not have to worry here
+                * about running out of input bytes and spinning infinitely. If
+                * we get Z_BUF_ERROR due to too few input bytes, then we'll
+                * replenish them in the next use_pack() call when we loop. If
+                * we truly hit the end of the pack (i.e., because it's corrupt
+                * or truncated), then use_pack() catches that and will die().
+                */
                if (status != Z_OK && status != Z_BUF_ERROR) {
                        git_inflate_end(&st->z);
                        st->z_state = z_error;
-- 
2.19.1.1298.g19f18f2a22
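
P.S. For reference, here is a tiny standalone zlib program (a sketch of my
own, not Git code; compile with -lz) illustrating the Z_BUF_ERROR behavior
that the commit message and the new comment rely on: inflate() reports
Z_BUF_ERROR when it cannot make progress for lack of input, and calling it
again with more input lets it continue. In the pack case that "more input"
arrives via the next use_pack() call; here we just hand over the second
half of a buffer ourselves.

#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void)
{
	const char msg[] = "an example payload that we deflate and then "
			   "inflate again, feeding the input in two halves";
	unsigned char zbuf[256], out[256];
	uLongf zlen = sizeof(zbuf);
	z_stream z;
	int status;

	/* Compress the payload so we have a valid zlib stream to feed. */
	if (compress(zbuf, &zlen, (const Bytef *)msg, sizeof(msg)) != Z_OK)
		return 1;

	memset(&z, 0, sizeof(z));
	if (inflateInit(&z) != Z_OK)
		return 1;
	z.next_out = out;
	z.avail_out = sizeof(out);

	/* Feed only the first half of the compressed bytes... */
	z.next_in = zbuf;
	z.avail_in = zlen / 2;
	status = inflate(&z, Z_NO_FLUSH);
	printf("first half:  status=%d (Z_OK is %d)\n", status, Z_OK);

	/* ...then call again with no input at all: no progress possible. */
	status = inflate(&z, Z_NO_FLUSH);
	printf("no input:    status=%d (Z_BUF_ERROR is %d)\n", status, Z_BUF_ERROR);

	/* Supplying the rest of the input lets inflate() finish normally. */
	z.next_in = zbuf + zlen / 2;
	z.avail_in = zlen - zlen / 2;
	status = inflate(&z, Z_NO_FLUSH);
	printf("second half: status=%d (Z_STREAM_END is %d)\n", status, Z_STREAM_END);

	inflateEnd(&z);
	return 0;
}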
