Hi,

On Wed, Jul 28, 2021 at 9:21 AM Kazuho Oku <[email protected]> wrote:

> Tatsuhiro, thank you for raising the issue.
>
> I actually wonder if the example being provided is the tip of the iceberg -
> is the problem related to FIN at all?
>
> Let's consider the following pattern:
>
> * client sends a request
> * server starts sending response, alongside a PUSH_PROMISE
> * in addition, server initiates a push stream and starts sending data
> * server-sent packets carrying the PUSH_PROMISE frame are lost
> * client decides to cancel the request and sends RESET_STREAM &
> STOP_SENDING
> * server continues sending the contents of the push stream
>
> I could well be missing some aspects of push, but to me, the problem looks
> like the lack of a delivery guarantee for PUSH_PROMISE frames.
>
>
My initial example is intentionally narrowed down to a specific case in
which the client does not send STOP_SENDING, but yes, essentially you are
correct: the inherent problem is that the server push design relies on the
PUSH_PROMISE frame, which can be lost.
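
To illustrate, here is a minimal sketch of the client-side bookkeeping
involved (the names are made up for illustration, not taken from any
particular stack):

#include <stdint.h>
#include <stdio.h>

#define MAX_PROMISES 16

/* Push IDs for which a PUSH_PROMISE has actually been processed. */
static uint64_t seen_promises[MAX_PROMISES];
static size_t num_seen;

/* Called when the HTTP/3 layer processes a PUSH_PROMISE frame. */
static void on_push_promise(uint64_t push_id) {
  if (num_seen < MAX_PROMISES)
    seen_promises[num_seen++] = push_id;
}

static int promise_seen(uint64_t push_id) {
  for (size_t i = 0; i < num_seen; ++i)
    if (seen_promises[i] == push_id)
      return 1;
  return 0;
}

/* Called when a unidirectional push stream arrives carrying push_id. */
static void on_push_stream(uint64_t push_id) {
  if (promise_seen(push_id))
    printf("push %llu: deliver to application\n",
           (unsigned long long)push_id);
  else
    /* The PUSH_PROMISE was lost, or was discarded along with the
     * request stream; all the client can do is park the stream and
     * wait for a PUSH_PROMISE that will never arrive. */
    printf("push %llu: buffered indefinitely\n",
           (unsigned long long)push_id);
}

int main(void) {
  on_push_promise(3); /* normal case: the promise was processed */
  on_push_stream(3);
  /* Failure case: the packets carrying PUSH_PROMISE for push 0 were
   * lost (or the stream was discarded), so on_push_promise(0) is never
   * called, yet the push stream still arrives. */
  on_push_stream(0);
  return 0;
}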

Best regards,
Tatsuhiro Tsujikawa




>
> On Tue, Jul 27, 2021 at 5:56 PM Tatsuhiro Tsujikawa <[email protected]> wrote:
>
>> Hi,
>>
>> On Tue, Jul 27, 2021 at 3:31 PM Martin Thomson <[email protected]> wrote:
>>
>>> This is a case where QUIC processing something doesn't imply HTTP/3
>>> processing something.  QUIC read the data and "processed" it.  HTTP/3
>>> deliberately decided not to handle it, and the PUSH_PROMISE fell through
>>> the cracks.
>>>
>>> One "solution" is to insist that endpoints attempt to process streams
>>> for this stuff if they decide to discard responses.  This is like how in
>>> HTTP/2 you still have to update the HPACK table after resetting a stream.
>>> It's awkward, but I think that it would work if data is available.  I don't
>>> think that there is any case in which data is unavailable to the client but
>>> the server doesn't receive STOP_SENDING.
>>>
>>>
>> It is indeed awkward that the QUIC stack has to keep passing stream data
>> to the HTTP/3 client all the way to the end of the stream (or until it
>> sees RESET_STREAM, or the stream is closed), even after the client has
>> asked to stop reading.  And the HTTP/3 client has to process that data
>> just for PUSH_PROMISE.  Does any implementation do this?
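>>
>> Concretely, the client would have to keep a frame parser running on a
>> stream it no longer cares about, purely to catch PUSH_PROMISE.  A rough
>> sketch of that loop (a canned frame list stands in for the QUIC layer,
>> and the names are hypothetical):
>>
>> #include <stdint.h>
>> #include <stdio.h>
>>
>> /* HTTP/3 frame type codes; PUSH_PROMISE is 0x05. */
>> enum { H3_FRAME_DATA = 0x00, H3_FRAME_HEADERS = 0x01,
>>        H3_FRAME_PUSH_PROMISE = 0x05 };
>>
>> /* Frames still unread on the abandoned stream; in a real stack they
>>  * keep coming from the QUIC layer until FIN or RESET_STREAM. */
>> static const uint64_t remaining[] = {
>>   H3_FRAME_HEADERS, H3_FRAME_PUSH_PROMISE, H3_FRAME_DATA,
>> };
>>
>> static void drain_abandoned_stream(void) {
>>   for (size_t i = 0; i < sizeof(remaining) / sizeof(remaining[0]); ++i) {
>>     if (remaining[i] == H3_FRAME_PUSH_PROMISE)
>>       printf("record push ID from PUSH_PROMISE\n");
>>     else
>>       printf("frame 0x%llx discarded unread\n",
>>              (unsigned long long)remaining[i]);
>>   }
>> }
>>
>> int main(void) { drain_abandoned_stream(); return 0; }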
>>
>> Sending STOP_SENDING alone is not enough.  As I wrote in the previous
>> post, by the time STOP_SENDING is received by the HTTP/3 server, it has
>> finished processing the pushed stream and forgotten it.  I imagine this
>> happens if the QUIC stack provides a BSD socket-like interface: the
>> HTTP/3 server writes all the data and can then release its state without
>> waiting for an acknowledgement.
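>>
>> A tiny model of that ordering (all names hypothetical):
>>
>> #include <stdio.h>
>>
>> static int push_state_alive = 1;
>>
>> static void server_finish_push(void) {
>>   /* With a socket-like API the server hands the whole pushed response
>>    * plus FIN to the QUIC stack and may immediately drop its own record
>>    * of the push; retransmission is the QUIC stack's job. */
>>   printf("server: push data + FIN handed to QUIC stack\n");
>>   push_state_alive = 0;
>> }
>>
>> static void server_recv_stop_sending(void) {
>>   if (push_state_alive)
>>     printf("server: could still cancel or re-announce the push\n");
>>   else
>>     /* Too late: nothing maps the request stream back to the push. */
>>     printf("server: no push state left, nothing to act on\n");
>> }
>>
>> int main(void) {
>>   server_finish_push();       /* completes first for a short response */
>>   server_recv_stop_sending(); /* arrives at least an RTT later */
>>   return 0;
>> }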
>>
>> Best regards,
>> Tatsuhiro Tsujikawa
>>
>>
>>> On Tue, Jul 27, 2021, at 13:01, Tatsuhiro Tsujikawa wrote:
>>> > Hi,
>>> >
>>> > It looks like in certain conditions, the client is unable to process a
>>> > pushed stream and leaves it in an unprocessable state indefinitely.
>>> >
>>> > Consider that the client opens a bidi stream, and the server sends a
>>> > PUSH_PROMISE and completes the response body, which is very short (just
>>> > a single packet or two).  For some reason, the client has decided to
>>> > stop reading the response, but the FIN has already been seen ("Data
>>> > Recvd" state), so it does not send STOP_SENDING, which is not required
>>> > (the RFC notes that there is little value in sending STOP_SENDING in
>>> > the "Data Recvd" state, and that it is unnecessary).  The client
>>> > discards all stream data without handing it over to the application, so
>>> > the PUSH_PROMISE is not processed and the client never learns the push
>>> > ID.  Because STOP_SENDING is not sent, the server has no signal
>>> > indicating that the PUSH_PROMISE was not processed, and it opens a
>>> > pushed stream.  The client receives the pushed stream but is unable to
>>> > find the corresponding PUSH_PROMISE.  It holds the pushed stream until
>>> > it sees the PUSH_PROMISE, but it never comes.  This causes the pushed
>>> > stream to be held by the client indefinitely.
>>> >
>>> > Even if the client sends STOP_SENDING on the bidi stream, it might not
>>> > work if the server finishes sending the stream data and a pushed stream
>>> > and forgets them before receiving STOP_SENDING.
>>> >
>>> > Best regards,
>>> > Tatsuhiro Tsujikawa
>>>
>>>
>
> --
> Kazuho Oku
>
