> Am 17.01.2017 um 20:52 schrieb Kyriakos Zarifis <kyr.zari...@gmail.com>:
> 
> Hi Stefan,
> 
> Sorry for the delay, I just got back from traveling. I just tried your new 
> patch and indeed it gets rid of the 100ms delay: the server now serves the 
> high-priority object only ~5-20ms (across a few runs) after it receives its 
> request, sending only 5-6 lower-prio frames in between!

Happy to hear that!

> That's a dramatic improvement compared to what I was observing in the 
> first experiments (~500ms delay), and I think it affects not only the 
> scenario I was testing, but any scenario where objects of different 
> priorities conflict. To verify this, I also tested another simple 
> scenario in which I aggressively server-push several big objects when the 
> server gets the base HTML file. Without the patch, objects embedded in the 
> HTML (requested normally) are queued behind a large fraction of the pushed 
> payload and delayed considerably (500ms). With the patch this is avoided: 
> embedded objects are served within a few ms after their request arrives, 
> preempting the pushed objects.
> 
> If you are interested, I have logs comparing the v1.8.8 performance to the 
> baseline for both scenarios (1: "prefetched" objects triggered at the 
> end of a page load delaying normal objects from the next navigation, and 2: 
> "server-pushed" objects conflicting with embedded objects on the current page).

Very interested. I'd like to add a page over at https://icing.github.io/mod_h2/ 
about it, so that people can easily grasp what the advantages are. For that, 
your numbers (do you have screenshots of browser timelines, maybe?) would be 
very welcome. Also, the fact that someone besides the module author has 
measured it adds credibility. :-)

If you write yourself somewhere about it, I am happy to link that.

> Would this patch eventually make it upstream? I'd be very interested in some 
> details on what was causing this and how you resolved it. 

This version sits in the 2.4.x maintenance branch and will be part of the next 
server release. We target releases every quarter and the last one was only 
recently, so it might take a while.

The details of what happened are:
 - the callback writing out on the client connection needs to tell the nghttp2 
library explicitly to stop; otherwise it will be called over and over again, as 
long as DATA is available (sketched below).
 - once the write-out stops, mod_http2 checks if new responses have become 
available or if streams need to be resumed. It also checks if there are new 
frames from the client to read. So, basically, there are three events that need 
to be served/checked. I have ideas to make that more event-triggered, but it is 
basically polling now, with timed waits in between if nothing is happening.
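
To illustrate the first point, here is a minimal sketch of the nghttp2 side. 
This is not the actual mod_http2 code; conn_ctx and its fields are invented 
for illustration:

#include <nghttp2/nghttp2.h>
#include <sys/socket.h>

typedef struct {
    int fd;                    /* client connection socket */
    volatile int new_events;   /* hypothetical: responses/reads are pending */
} conn_ctx;

static ssize_t send_cb(nghttp2_session *ngh2, const uint8_t *data,
                       size_t length, int flags, void *user_data)
{
    conn_ctx *ctx = user_data;
    ssize_t n;
    (void)ngh2; (void)flags;

    /* Without a stop condition, nghttp2 keeps invoking this callback as
     * long as DATA is available. Returning NGHTTP2_ERR_WOULDBLOCK makes
     * nghttp2_session_send() return, so the session loop gets a chance to
     * check the other events (new responses, resumed streams, client frames). */
    if (ctx->new_events) {
        return NGHTTP2_ERR_WOULDBLOCK;
    }
    n = send(ctx->fd, data, length, 0);
    if (n < 0) {
        return NGHTTP2_ERR_WOULDBLOCK;   /* try again on the next pass */
    }
    return n;                            /* bytes the socket accepted */
}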

The first change I made was to never write for more than 100ms before checking 
the other events again. That is what you saw in v1.8.7.
In v1.8.8 there is a new atomic flag that signals that new response data has 
become available. It is checked every time before a new frame is assembled for 
writing out. Once the write-out itself is called, there is at the moment no 
more interruption until it is done. That explains the 5-20ms delay you observe 
now.
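
The idea of the flag, sketched with APR atomics (the names are invented for 
illustration; this is not the actual diff):

#include <apr_atomic.h>

static volatile apr_uint32_t responses_ready;

/* Worker thread: a (possibly higher-priority) response became available. */
static void on_response_ready(void)
{
    apr_atomic_set32(&responses_ready, 1);
}

/* Session thread: called before each new frame is assembled for write-out.
 * If the flag was set, clear it and report 1, so the caller can interrupt
 * the current DATA streaming and submit the new response first. */
static int check_responses_ready(void)
{
    /* swap 1 -> 0 atomically; apr_atomic_cas32 returns the previous value */
    return apr_atomic_cas32(&responses_ready, 0, 1) == 1;
}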

Ideally, this would all run poll-triggered, and I have some ideas how to do 
that. Now I need to find the time to do it.

Nevertheless, thanks to your investigation, we now have much nicer behaviour 
here and a more responsive server!

Cheers,

Stefan
 

> On Fri, Jan 13, 2017 at 8:43 AM, Stefan Eissing 
> <stefan.eiss...@greenbytes.de> wrote:
> Hi Kyriakos,
> 
> maybe you can give https://github.com/icing/mod_h2/releases/tag/v1.8.8 a try 
> in your setup? I would be interested to hear whether it gets rid of the 100ms 
> delay in response processing. Thanks!
> 
> Cheers,
> 
> Stefan
> 
> > Am 04.01.2017 um 19:27 schrieb Kyriakos Zarifis <kyr.zari...@gmail.com>:
> >
> > Hi Stefan,
> >
> > Yes, this is making a big, observable difference!
> >
> > Specifically, in all 3 repeats, the high-priority stream is now served 
> > 100ms after its request was received, with ~100 frames (~1.6MB) of the 
> > currently served, lower-priority stream written in between. (was: 500ms, 
> > ~500 frames (~7.5MB))
> >
> > In more detail, after the high-prio request is received, 20 more low-prio 
> > frames are served before the h2_task for it logs that it opens the output 
> > for the new stream. Then another 80 low-prio frames are served before the 
> > high-prio reply is written. (relevant logs below)
> >
> > This already has an observable impact on the transition to the next page 
> > the moment I click on the link (it goes from 1.5 sec to less than 500ms), 
> > which I think is great, because this tweak is relevant not just to this 
> > scenario but to any higher-priority stream that begins while lower-priority 
> > ones are being served, even within a single page.
> >
> > I'm wondering if the change you made can be pushed harder to make the 
> > switch to the new stream even faster, e.g. avoiding even those 100 frames?
> >
> >
> > Thanks,
> > Kyriakos
> >
> >
> >
> > [Wed Jan 04 10:14:48.577687 2017] [http2:debug] [pid 24864] 
> > h2_stream.c(213): [client] AH03082: h2_stream(0-19): opened
> >
> > [Wed Jan 04 10:14:48.577758 2017] [http2:debug] [pid 24864] 
> > h2_session.c(452): [client] AH03066: h2_session(0): recv 
> > FRAME[HEADERS[length=39, hend=1, stream=19, eos=1]], frames=13/1486 (r/s)
> >
> > 20 x lower-prio frames:
> >
> > [Wed Jan 04 10:14:48.577864 2017] [http2:debug] [pid 24864] 
> > h2_session.c(685): [client] AH03068: h2_session(0): sent 
> > FRAME[DATA[length=16275, flags=0, stream=5, padlen=0]], frames=16/1486 (r/s)
> >
> > [Wed Jan 04 10:14:48.578775 2017] [http2:debug] [pid 24864] h2_task.c(106): 
> > [client] AH03348: h2_task(0-19): open output to GET 204.57.7.200 
> > /preposition/nextnav.html
> >
> > 80 x lower-prio frames:
> > [Wed Jan 04 10:14:48.578790 2017] [http2:debug] [pid 24864] 
> > h2_session.c(685): [client] AH03068: h2_session(0): sent 
> > FRAME[DATA[length=16275, flags=0, stream=5, padlen=0]], frames=16/1504 (r/s)
> >
> > [Wed Jan 04 10:14:48.682168 2017] [http2:debug] [pid 24864] 
> > h2_session.c(685): [client] AH03068: h2_session(0): sent 
> > FRAME[HEADERS[length=87, hend=1, stream=19, eos=0]], frames=16/1587 (r/s)
> >
> >
> > [Wed Jan 04 10:14:48.682186 2017] [http2:debug] [pid 24864] 
> > h2_session.c(685): [client] AH03068: h2_session(0): sent 
> > FRAME[DATA[length=456, flags=1, stream=19, padlen=0]], frames=16/1588 (r/s)
> >
> >
> > On Wed, Jan 4, 2017 at 9:28 AM, Stefan Eissing 
> > <stefan.eiss...@greenbytes.de> wrote:
> > Hi Kyriakos,
> >
> > sorry for not replying earlier. I was able to find the issue you ran into, 
> > namely that mod_http2 is obsessed with the streams it already has and does 
> > not submit ready responses until the existing streams are done or pause.
> >
> > I hope that the new release works much more nicely for you. You can find 
> > it at https://github.com/icing/mod_h2/releases/tag/v1.8.7
> >
> > Thanks,
> >
> > Stefan
> >
> > > Am 02.01.2017 um 23:33 schrieb Kyriakos Zarifis <kyr.zari...@gmail.com>:
> > >
> > > Thanks Stefan!
> > >
> > > I just tried the tweaked version. I think I am seeing similar behavior, 
> > > i.e. the higher-prio HTML reply is sent ~500ms after its request is 
> > > received, with ~500 lower-prio DATA frames (~7.5MB) written in the meantime.
> > >
> > > Before any conclusions, I wanted to make sure I compiled/used the tweaked 
> > > module properly with my existing Apache/2.4.25 on Ubuntu, since I hadn't 
> > > done the process before: I couldn't find details on the right way to swap 
> > > module versions in and out, so I ended up compiling v1.8.6 and pointing to 
> > > the created mod_http2.so in "/etc/apache2/mods-enabled/http2.load", but 
> > > I'm really not sure that's the right way. The only way I verified it was 
> > > by seeing this in /var/log/apache2/error.log:
> > >
> > > "[http2:info] [pid 24935] AH03090: mod_http2 (v1.8.6-git, 
> > > feats=CHPRIO+SHA256+INVHD, nghttp2 1.17.0), initializing..."
> > >
> > >
> > > Assuming this is an acceptable way to use the tweaked version of the 
> > > module (please let me know if not), where should I share the two Apache 
> > > log files (one trace for each module version) so you can verify what I see?
> > >
> > >
> > >
> > >
> > > A few relevant lines from the v1.8.6 run (similar to the stable module, 
> > > AFAICT):
> > >
> > > [Mon Jan 02 13:59:59.636519 2017] [http2:debug] [pid 26718] 
> > > h2_session.c(439): [client ] AH03066: h2_session(0): recv 
> > > FRAME[HEADERS[length=39, hend=1, stream=19, eos=1]], frames=13/1721 (r/s)
> > > [Mon Jan 02 13:59:59.637099 2017] [http2:debug] [pid 26718] 
> > > h2_task.c(106): [client ] AH03348: h2_task(0-19): open output to GET  
> > > /preposition/nextnav.html
> > >
> > > [ ... continue sending ~500 DATA frames for streams 7-11 ...]
> > >
> > > [Mon Jan 02 14:00:00.177350 2017] [http2:debug] [pid 26718] 
> > > h2_session.c(661): [client ] AH03068: h2_session(0): sent 
> > > FRAME[HEADERS[length=87, hend=1, stream=19, eos=0]], frames=16/2209 (r/s)
> > > [Mon Jan 02 14:00:00.177366 2017] [http2:debug] [pid 26718] 
> > > h2_session.c(661): [client ] AH03068: h2_session(0): sent 
> > > FRAME[DATA[length=456, flags=1, stream=19, padlen=0]], frames=16/2210 
> > > (r/s)
> > >
> > > [ ... continue sending streams 11 onwards ...]
> > >
> > > Thanks!
> > >
> > > On Sat, Dec 31, 2016 at 5:43 AM, Stefan Eissing 
> > > <stefan.eiss...@greenbytes.de> wrote:
> > > Hi Kyriakos,
> > >
> > > have a look at https://github.com/icing/mod_h2/releases/tag/v1.8.6
> > >
> > > That version flushes when at least 2 TLS records are ready to send. Also, 
> > > frame sizes are now aligned to TLS record sizes. So they are influenced 
> > > by the H2TLSWarmUpSize and H2TLSCoolDownSecs settings.
> > >
> > > Additionally, and highly experimental, I added H2TLSFlushCount to 
> > > configure the number of records to flush. You may play around with it 
> > > (default is 2) in your scenarios.
> > >
> > > I hope that this reduces buffering and makes the server more responsive 
> > > to stream changes. Please let me know if that had any effect on your 
> > > tests.
> > >
> > > Thanks,
> > >
> > > Stefan
> > >
> > > > Am 29.12.2016 um 12:40 schrieb Kyriakos Zarifis <kyr.zari...@gmail.com>:
> > > >
> > > > That means the images should get a minimum of ~30% of the available 
> > > > bandwidth as long as they have data. My reading.
> > > >
> > > > Right. Makes sense.
> > >
> >
> 

Stefan Eissing

<green/>bytes GmbH
Hafenstrasse 16
48155 Münster
www.greenbytes.de
