On Mon, Jun 04, 2018 at 12:44:09AM -0400, Jeff King wrote:

Thanks for the comments. I will do what you proposed, or at
least try to, and get back to you if there are any issues.
Some notes below.

> On Sun, Jun 03, 2018 at 12:27:49AM +0300, Max Kirillov wrote:
> Since this is slightly less efficient, and because it only matters if
> the web server does not already close the pipe, should this have a
> run-time configuration knob, even if it defaults to
> safe-but-slightly-slower?

Personally, I would rather not add a knob. I also don't think
the difference is noticeable, but one can never be sure without
trying, so I'll try to measure some numbers.

>> +            if (write_in_full(out, buf, n) < 0)
>> +                    die_errno("%s aborted reading request", prog_name);
> 
> We don't necessarily know why the write failed. If it's EPIPE, then yes,
> the program probably did abort. But all we know is that write() failed.
> We should probably say something more generic like:
> 
>   die_errno("unable to write to '%s'");
> 
> or similar.

Actually, this is already the third occurrence of the same
error in this file; it probably deserves some refactoring. I
will change the message as well.

>> +test_expect_success 'setup repository' '
>> +    test_commit c0 &&
>> +    test_commit c1
>> +'
>> +
>> +hash_head=$(git rev-parse HEAD)
>> +hash_prev=$(git rev-parse HEAD~1)
> 
> We generally prefer to have all commands, even ones we don't expect to
> fail, inside test_expect blocks (e.g., with a "setup" description).

Will the variables defined there still be visible in the
following tests? I'll try to do as you describe.
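
For reference, here is roughly what I understand you mean
(assuming the test bodies are eval'ed in the same shell, so the
variables stay visible to the later tests):

	test_expect_success 'setup repository' '
		test_commit c0 &&
		test_commit c1 &&
		hash_head=$(git rev-parse HEAD) &&
		hash_prev=$(git rev-parse HEAD~1)
	'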

>> +cat >fetch_body <<EOF
>> +0032want $hash_head
>> +00000032have $hash_prev
>> +0009done
>> +EOF
> 
> This depends on the size of the hash. That's always 40 for now, but is
> something that may change soon.
> 
> We already have a packetize() helper; could we use it here?

Could you point me to it? I cannot find it.

My understanding is that the current protocol assumes a
40-character hash, so another hash length would effectively be
another protocol, and since the body is forged by hand here it
would have to be changed anyway.
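
But if such a helper exists and it just prefixes its argument
with the 4-hex-digit pkt-line length, the body could indeed be
built without hard-coding the lengths, roughly like this (a
sketch, assuming the parser does not care about the optional
trailing LF):

	{
		packetize "want $hash_head" &&
		printf 0000 &&
		packetize "have $hash_prev" &&
		packetize done
	} >fetch_body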

>> +test_expect_success 'fetch plain truncated' '
>> +    test_http_env upload \
>> +            "$TEST_DIRECTORY"/t5562/invoke-with-content-length.pl 
>> fetch_body.trunc git http-backend >act.out 2>act.err &&
>> +    test_must_fail verify_http_result "200 OK"
>> +'
> 
> Usually test_must_fail on a checking function like this is a sign that
> the check is not as robust as we'd like. If the function checks two
> things "A && B", then checking test_must_fail will only let us know
> "!A || !B", but you probably want to check both.

Well, here I just want to know that the request has failed. We
already know that it can fail in different ways, but the test is
not meant to differentiate between those ways.

> (We'd also generally not use test_must_fail with a non-git command, and
> just use a simple "! verify_http_result"; that would apply equally if
> gets split into two commands).

Will use ! there.
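
I.e. the test would become something like:

	test_expect_success 'fetch plain truncated' '
		test_http_env upload \
			"$TEST_DIRECTORY"/t5562/invoke-with-content-length.pl \
			fetch_body.trunc git http-backend >act.out 2>act.err &&
		! verify_http_result "200 OK"
	'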

>> +sleep 1; # is interrupted by SIGCHLD
>> +if (!$exited) {
>> +        close($out);
>> +        die "Command did not exit after reading whole body";
>> +}

...

> Also, do we need to protect ourselves against other signals being
> delivered? E.g., if I resize my xterm and this process gets SIGWINCH, is
> it going to erroneously end the sleep and say "nope, no exited signal"?

I'll check, but what could I do about it? Should I block other
signals there?
