One particularly stressful scenario consists of many independent tasks
all competing for GPU time and waiting upon the results (e.g. realtime
transcoding of many, many streams). One bottleneck in particular is that
each client waits on its own results, but every client is woken up after
every batch
On 06/06/16 11:14, Chris Wilson wrote:
On Mon, May 23, 2016 at 09:53:40AM +0100, Tvrtko Ursulin wrote:
On 20/05/16 13:19, Chris Wilson wrote:
On Fri, May 20, 2016 at 01:04:13PM +0100, Tvrtko Ursulin wrote:
+	p = &b->waiters.rb_node;
+	while (*p) {
+		parent = *p;
+		if (wait->seqno == to_wait(parent)->seqno) {
+			/* We have multiple waiters on the same seqno, select
On 19/05/16 12:32, Chris Wilson wrote:
One particularly stressful scenario consists of many independent tasks
all competing for GPU time and waiting upon the results (e.g. realtime
transcoding of many, many streams). One bottleneck in particular is that
each client waits on its own results, but every client is woken up after
every batch