On 12/7/11 2:44 AM, Pierre Ossman wrote:
>>> Annoying. I did some work at making the thing more asynchronous, but
>>> more might be needed. If the problem is getting the data on the wire
>>> rather than the actual encoding, then a quick fix is increasing the
>>> outgoing buffer size. As long as your entire update fits in there, then
>>> X won't be throttled (and the update timer should also be more precise).
>>
>> Not sure if I follow.  It's not the send delay that's the problem; it's
>> the time the CPU takes to encode the framebuffer.
> 
> Just wanted to double check that you've looked at actual CPU time, and
> not the wall time it spends in writeFramebufferUpdate(). The latter
> includes both CPU time and the time needed to drain the socket buffer
> (if it fills).

This is an Amdahl's Law thing.  The frame rate is capped at 1 /
({deferred update time} + {FBU processing time}).  Whether the FBU
processing time is all CPU, or some CPU and some I/O, is irrelevant.
As long as the server can't receive new X updates during that time, the
effect is the same.


>> Now, with the latest TigerVNC code, my understanding of it is that
>> FBURs no longer result in an immediate FBU unless there is no deferred
>> update currently in progress.  Thus, almost all updates will be deferred
>> updates now.  That means that we're always going to incur the overhead
>> of the deferred update timer on every frame.
> 
> Indeed. The purpose of the deferred updates is to aggregate X11 stuff,
> and so should be fairly independent of what the VNC clients are up to.
> Note that FBURs still have some influence here as it will continue to
> aggregate stuff (and not reset the timer) if there is no client ready.
> 
> As to a solution, the only "proper" one is to reduce the time spent
> encoding stuff. We can't really do it in the background as we can't

No, because, per above, if we speed up processing, then the DUT delay
will have more of a relative effect, not less.  If it takes 10 ms on
average to process every update, then the difference between a 1 ms and
a 10 ms DUT is a 45% throughput loss.  If it takes 100 ms on average to
process every update, then the difference between a 1 ms and a 10 ms DUT
is only an 8% throughput loss.
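To make the arithmetic above concrete, here is a quick sketch (plain Python, numbers taken from this thread) of the frame-rate cap and the resulting throughput loss:

```python
# Frame rate is capped at 1 / (DUT + FBU processing time), so the
# throughput loss from raising the DUT from a to b at a given
# processing time p (all in ms) is 1 - (p + a) / (p + b).

def throughput_loss(proc_ms, dut_low_ms, dut_high_ms):
    """Fractional frame-rate loss from increasing the deferred
    update timer from dut_low_ms to dut_high_ms."""
    fps_low = 1.0 / (dut_low_ms + proc_ms)
    fps_high = 1.0 / (dut_high_ms + proc_ms)
    return 1.0 - fps_high / fps_low

# 10 ms average processing time, 1 ms vs. 10 ms DUT:
print(round(throughput_loss(10, 1, 10) * 100))   # -> 45

# 100 ms average processing time, same DUT change:
print(round(throughput_loss(100, 1, 10) * 100))  # -> 8
```

So the faster the encoder gets, the larger the relative penalty of a long DUT, which is the point above.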


> allow the framebuffer to be modified whilst we're encoding. Double
> buffering is one way to go, but I wouldn't be surprised if the copying
> between the buffers would eat up any performance gain.

It wouldn't eat up "any" performance gain.  memcpy()ing large blocks is
typically very quick, and in fact the CUT is already double buffering.
However, double buffering does eat up a lot of memory.  I looked into
double buffering with TurboVNC, in an attempt to figure out how to
create a separate compress/send thread for each client and do flow
control that way (the way VirtualGL does it).  Ultimately, I figured out
that it was possible, but it would require maintaining an intermediary
buffer for each client, and you can imagine that with the 4-megapixel
sessions that TurboVNC users commonly run, if they try to collaborate
with 5 people, suddenly they have a 100 MB VNC process on their hands.
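The 100 MB figure is easy to reconstruct as a back-of-the-envelope sketch (32-bit pixels assumed below, which is an assumption on my part but the common case):

```python
# Rough memory estimate for per-client double buffering, assuming
# 4-byte (32-bit) pixels -- an assumption, but the usual session depth.
PIXELS = 4_000_000       # 4-megapixel session
BYTES_PER_PIXEL = 4
CLIENTS = 5              # collaborating viewers

framebuffer = PIXELS * BYTES_PER_PIXEL   # main framebuffer: 16 MB
per_client = PIXELS * BYTES_PER_PIXEL    # one intermediary buffer per client
total = framebuffer + CLIENTS * per_client

print(total / 1_000_000)  # -> 96.0 (MB), i.e. roughly 100 MB
```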


> That said, perhaps we can consider treating this as a corner case and
> dial back the aggregation when we are hitting the CPU wall.

Well, ultimately the purpose of
aggregation/coalescence/whatever-you-want-to-call-it is increased
performance, so I'm assuming there is data to show that 10 ms performs
better than 1 ms under certain conditions?  I have never seen it have
any effect other than a negative one, either on a WAN or a LAN.  We
really need to figure out a way to address these issues quantitatively.
I feel like I have a lot of data to show what does and doesn't work for
3D and video apps, but there is not the same data to show what does and
doesn't work for Firefox or OpenOffice or whatever, nor a reproducible
way to measure the effect of certain design decisions on such apps.

I will also say that LAN usage is not a corner case, and if we treat it
as such, it will become a self-fulfilling prophecy, because no one will
want to use TigerVNC on a LAN if it's slow.

_______________________________________________
Tigervnc-devel mailing list
Tigervnc-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/tigervnc-devel
