On Wed, Sep 18, 2013 at 09:23:09AM +0100, Alex Bligh wrote:
> Paolo,
>
> On 18 Sep 2013, at 08:57, Paolo Bonzini wrote:
>
> > Il 17/09/2013 19:32, Alex Bligh ha scritto:
> >>
> >> On 17 Sep 2013, at 18:04, Paolo Bonzini wrote:
> >>
> >>> Alex, what's missing before block.c and QED can use aio_timer_new on
> >>> the main AioContext, instead of timer_new?
On Wed, Sep 18, 2013 at 11:25:25AM +0200, Paolo Bonzini wrote:
> Il 18/09/2013 11:02, Alex Bligh ha scritto:
> > Paolo,
> >
> > On 18 Sep 2013, at 09:23, Alex Bligh wrote:
> >
> >>> Yes, that was my understanding too. Can we do it for 1.7?
> >
> > Whilst we are changing the calling semantics, do you think
> > qemu_coroutine_yield() should also run the timers for the aio_context?
Il 18/09/2013 11:02, Alex Bligh ha scritto:
> Paolo,
>
> On 18 Sep 2013, at 09:23, Alex Bligh wrote:
>
>>> Yes, that was my understanding too. Can we do it for 1.7?
>
> Whilst we are changing the calling semantics, do you think
> qemu_coroutine_yield() should also run the timers for the
> aio_context?
Paolo,
On 18 Sep 2013, at 09:23, Alex Bligh wrote:
>> Yes, that was my understanding too. Can we do it for 1.7?
Whilst we are changing the calling semantics, do you think
qemu_coroutine_yield() should also run the timers for the
aio_context? IE should timers always be deferred to the
next qemu_
Paolo,
On 18 Sep 2013, at 08:57, Paolo Bonzini wrote:
> Il 17/09/2013 19:32, Alex Bligh ha scritto:
>>
>> On 17 Sep 2013, at 18:04, Paolo Bonzini wrote:
>>
>>> Alex, what's missing before block.c and QED can use aio_timer_new on
>>> the main AioContext, instead of timer_new?
>>
>> If we assume at this stage the threading is no different, very little I think.
Il 17/09/2013 19:32, Alex Bligh ha scritto:
>
> On 17 Sep 2013, at 18:04, Paolo Bonzini wrote:
>
>> Alex, what's missing before block.c and QED can use aio_timer_new on
>> the main AioContext, instead of timer_new?
>
> If we assume at this stage the threading is no different, very little
> I think. Off the top of my head it should be a case of:
On 17 Sep 2013, at 18:04, Paolo Bonzini wrote:
> Alex, what's missing before block.c and QED can use aio_timer_new on
> the main AioContext, instead of timer_new?
If we assume at this stage the threading is no different, very little
I think. Off the top of my head it should be a case of:
1. Aud
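To make the question concrete: a minimal sketch of what the switch could look like for a driver-owned timer, assuming the aio_timer_new() helper this series introduces takes the AioContext plus the usual timer_new()-style arguments; the callback, opaque pointer and 100ms timeout are placeholders, not actual block.c or QED code.

    #include "block/aio.h"
    #include "qemu/timer.h"

    /* Placeholder callback standing in for the driver's periodic work. */
    static void my_timer_cb(void *opaque)
    {
        /* ... e.g. QED's need-check processing ... */
    }

    static QEMUTimer *attach_timer(AioContext *ctx, void *opaque)
    {
        /* Before: timer_new(QEMU_CLOCK_REALTIME, SCALE_NS, my_timer_cb, opaque),
         * which always lands on the main loop.  With aio_timer_new() the timer
         * belongs to the given AioContext and fires from its aio_poll(). */
        QEMUTimer *t = aio_timer_new(ctx, QEMU_CLOCK_REALTIME, SCALE_NS,
                                     my_timer_cb, opaque);
        timer_mod(t, qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + 100 * SCALE_MS);
        return t;
    }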
On 2013-09-17 18:38, Alex Bligh wrote:
>
> On 17 Sep 2013, at 17:19, Paolo Bonzini wrote:
>
>> Il 17/09/2013 18:09, Jan Kiszka ha scritto:
>>> On 2013-08-13 16:22, Stefan Hajnoczi wrote:
On Tue, Aug 13, 2013 at 03:45:44PM +0200, Jan Kiszka wrote:
> Yeah:
>
> -    /* No AIO operations? Get us out of here */
Il 17/09/2013 18:50, Jan Kiszka ha scritto:
> On 2013-09-17 18:38, Alex Bligh wrote:
>>
>> On 17 Sep 2013, at 17:19, Paolo Bonzini wrote:
>>
>>> Il 17/09/2013 18:09, Jan Kiszka ha scritto:
On 2013-08-13 16:22, Stefan Hajnoczi wrote:
> On Tue, Aug 13, 2013 at 03:45:44PM +0200, Jan Kiszka wrote:
On 17 Sep 2013, at 17:50, Jan Kiszka wrote:
>>
>> If
>> that's true, I think we /shouldn't/ return. Equally if there
>> are no timers but something is genuinely attempting to wait
>> on an aio_notify, I don't think we should return.
>>
>
> In any case, test-aio seems to stress that if clause.
On 17 Sep 2013, at 17:19, Paolo Bonzini wrote:
> Il 17/09/2013 18:09, Jan Kiszka ha scritto:
>> On 2013-08-13 16:22, Stefan Hajnoczi wrote:
>>> On Tue, Aug 13, 2013 at 03:45:44PM +0200, Jan Kiszka wrote:
Yeah:
-    /* No AIO operations? Get us out of here */
-    if (!busy) {
Il 17/09/2013 18:09, Jan Kiszka ha scritto:
> On 2013-08-13 16:22, Stefan Hajnoczi wrote:
>> On Tue, Aug 13, 2013 at 03:45:44PM +0200, Jan Kiszka wrote:
>>> Yeah:
>>>
>>> -    /* No AIO operations? Get us out of here */
>>> -    if (!busy) {
>>> +    /* early return if we only have the aio_notify() fd */
On 2013-08-13 16:22, Stefan Hajnoczi wrote:
> On Tue, Aug 13, 2013 at 03:45:44PM +0200, Jan Kiszka wrote:
>> On 2013-08-13 15:39, Alex Bligh wrote:
>>> Jan,
>>>
>>> On 13 Aug 2013, at 14:25, Jan Kiszka wrote:
>>>
To my understanding, the use case behind the current behavior is
qemu_aio_wait() which is only supposed to block when there are pending
requests for the main aio context.
On 15 Aug 2013, at 13:40, Stefan Hajnoczi wrote:
> This is looking pretty good. Jan, Ping Fan, and I have already worked
> on top of this series. I'd like to merge it soon.
>
> Are you ready to roll v11?
The only things I have queued for v11 are:
* Disentangle typedef struct vs struct in head
On Sun, Aug 11, 2013 at 05:42:54PM +0100, Alex Bligh wrote:
> [ This patch set is available from git at:
>https://github.com/abligh/qemu/tree/aio-timers10
> As autogenerated patch 30 of the series is too large for the mailing list. ]
>
> This patch series adds support for timers attached to an AioContext clock
> which get called within aio_poll.
On Tue, Aug 13, 2013 at 03:26:53PM +0100, Alex Bligh wrote:
>
> On 13 Aug 2013, at 15:22, Stefan Hajnoczi wrote:
>
> > We can change the semantics of aio_poll() so long as we don't break
> > existing callers and tests. It would make sense to do that after
> > merging the io_flush and AioContext timers series.
On 2013-08-13 16:26, Alex Bligh wrote:
>
> On 13 Aug 2013, at 15:22, Stefan Hajnoczi wrote:
>
>> We can change the semantics of aio_poll() so long as we don't break
>> existing callers and tests. It would make sense to do that after
>> merging the io_flush and AioContext timers series.
>
> Whilst I think we should wait until your 'drop ioflush'
On 13 Aug 2013, at 15:22, Stefan Hajnoczi wrote:
> We can change the semantics of aio_poll() so long as we don't break
> existing callers and tests. It would make sense to do that after
> merging the io_flush and AioContext timers series.
Whilst I think we should wait until your 'drop ioflush'
On Tue, Aug 13, 2013 at 03:45:44PM +0200, Jan Kiszka wrote:
> On 2013-08-13 15:39, Alex Bligh wrote:
> > Jan,
> >
> > On 13 Aug 2013, at 14:25, Jan Kiszka wrote:
> >
> >> To my understanding, the use case behind the current behavior is
> >> qemu_aio_wait() which is only supposed to block when there are pending
> >> requests for the main aio context.
On 13 Aug 2013, at 14:45, Jan Kiszka wrote:
> Yeah:
>
> -    /* No AIO operations? Get us out of here */
> -    if (!busy) {
> +    /* early return if we only have the aio_notify() fd */
> +    if (ctx->pollfds->len == 1) {
>          return progress;
>      }
>
> So this is even worse for my use case.
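For context, the hunk above sits in aio-posix.c's aio_poll(); a simplified sketch (from memory, not the literal tree) of the surrounding logic shows why Jan's blocking use case suffers: with only the context's own aio_notify() EventNotifier registered, the function returns before ever polling.

    /* Simplified sketch of aio_poll() around the contested early return;
     * helper details elided, names as in the hunk quoted above. */
    bool aio_poll(AioContext *ctx, bool blocking)
    {
        bool progress = false;

        /* ... run bottom halves, dispatch ready handlers, fill ctx->pollfds ... */

        /* Old form: 'if (!busy)', where io_flush handlers set 'busy'.
         * New form: bail out when the aio_notify() fd is the only pollfd,
         * so a blocking caller with nothing but timers never actually waits. */
        if (ctx->pollfds->len == 1) {
            return progress;
        }

        /* ... g_poll() on ctx->pollfds, then dispatch what became ready ... */
        return progress;
    }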
On 2013-08-13 15:39, Alex Bligh wrote:
> Jan,
>
> On 13 Aug 2013, at 14:25, Jan Kiszka wrote:
>
>> To my understanding, the use case behind the current behavior is
>> qemu_aio_wait() which is only supposed to block when there are pending
>> requests for the main aio context. We should be able to address this
>> scenario also in a different way.
Jan,
On 13 Aug 2013, at 14:25, Jan Kiszka wrote:
> To my understanding, the use case behind the current behavior is
> qemu_aio_wait() which is only supposed to block when there are pending
> requests for the main aio context. We should be able to address this
> scenario also in a different way.
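For reference, qemu_aio_wait() at this point is essentially just a blocking aio_poll() on the global context, so its "only block when requests are pending" behaviour is really a property of aio_poll() itself. Roughly (simplified, from memory):

    /* Sketch of the wrapper; qemu_get_aio_context() returns the main loop's
     * global AioContext. */
    bool qemu_aio_wait(void)
    {
        return aio_poll(qemu_get_aio_context(), true);
    }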
On 2013-08-13 15:12, Alex Bligh wrote:
>
> On 13 Aug 2013, at 13:22, Jan Kiszka wrote:
>
>> Another trick necessary to make this work is the following:
>>
>> static int rtc_aio_flush_true(EventNotifier *e)
>> {
>>     return 1;
>> }
>>
>> ...
>>     s->aio_ctx = aio_context_new();
>>     aio_set_event_notifier(s->aio_ctx, &s->aio_ctx->notifier,
On 13 Aug 2013, at 13:22, Jan Kiszka wrote:
> Another trick necessary to make this work is the following:
>
> static int rtc_aio_flush_true(EventNotifier *e)
> {
>     return 1;
> }
>
> ...
>     s->aio_ctx = aio_context_new();
>     aio_set_event_notifier(s->aio_ctx, &s->aio_ctx->notifier,
>
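Filling in the elided registration: against the pre-io_flush-removal aio_set_event_notifier() signature (ctx, notifier, io_read, io_flush), the trick presumably looks roughly like this; the read handler name is made up, and the essential part is only the flush callback that always returns 1, which makes aio_poll() treat the context as busy and therefore really block.

    static int rtc_aio_flush_true(EventNotifier *e)
    {
        return 1;                   /* pretend there is always pending work */
    }

    static void rtc_notify_read(EventNotifier *e)   /* illustrative name */
    {
        event_notifier_test_and_clear(e);
    }

    /* ... during device init ... */
    s->aio_ctx = aio_context_new();
    aio_set_event_notifier(s->aio_ctx, &s->aio_ctx->notifier,
                           rtc_notify_read, rtc_aio_flush_true);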
On 2013-08-13 14:44, Alex Bligh wrote:
>
> On 13 Aug 2013, at 13:22, Jan Kiszka wrote:
>
>> With tweaking I mean:
>>
>> bool aio_poll(AioContext *ctx, bool blocking,
>>               void (*blocking_cb)(bool, void *),
>>               void *blocking_cb_opaque);
>>
>> i.e. adding a callback that aio_poll will invoke before and right after
>> waiting
On 13 Aug 2013, at 13:22, Jan Kiszka wrote:
> With tweaking I mean:
>
> bool aio_poll(AioContext *ctx, bool blocking,
>               void (*blocking_cb)(bool, void *),
>               void *blocking_cb_opaque);
>
> i.e. adding a callback that aio_poll will invoke before and right after
> waiting
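A hypothetical caller of that proposed variant, only to make the shape of the API concrete (that 'true' means "about to block" is my assumption): the callback drops a lock for the duration of the wait and retakes it before handlers are dispatched.

    #include "qemu/thread.h"

    /* Callback for Jan's proposed aio_poll(ctx, blocking, cb, opaque). */
    static void my_blocking_cb(bool about_to_block, void *opaque)
    {
        QemuMutex *lock = opaque;

        if (about_to_block) {
            qemu_mutex_unlock(lock);    /* let other threads in while we wait */
        } else {
            qemu_mutex_lock(lock);      /* re-acquire before dispatch */
        }
    }

    /* ... with 'lock' held ... */
    while (aio_poll(ctx, true, my_blocking_cb, &lock)) {
        /* keep going while progress is being made */
    }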
On 2013-08-11 18:42, Alex Bligh wrote:
> [ This patch set is available from git at:
>https://github.com/abligh/qemu/tree/aio-timers10
> As autogenerated patch 30 of the series is too large for the mailing list. ]
>
> This patch series adds support for timers attached to an AioContext clock
> which get called within aio_poll.
[ This patch set is available from git at:
https://github.com/abligh/qemu/tree/aio-timers10
As autogenerated patch 30 of the series is too large for the mailing list. ]
This patch series adds support for timers attached to an AioContext clock
which get called within aio_poll.
In doing so it re
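In outline (a simplified sketch, not the literal patches): aio_poll() bounds its poll timeout by the nearest deadline of the timers attached to the AioContext and runs any expired ones after the wait, which is how the timers "get called within aio_poll". The timerlistgroup_* helpers and the ctx->tlg timer list group are the ones the series introduces.

    bool aio_poll(AioContext *ctx, bool blocking)
    {
        bool progress = false;
        int64_t timeout;

        /* ... run bottom halves and any handlers that are already ready ... */

        /* Block no longer than the nearest AioContext timer deadline
         * (-1 means no timer, i.e. wait indefinitely). */
        timeout = blocking ? timerlistgroup_deadline_ns(&ctx->tlg) : 0;

        /* ... poll the context's fds with that timeout, using the
         * nanosecond-precision ppoll helper the series adds ... */

        /* Fire timers attached to this AioContext that have now expired. */
        progress |= timerlistgroup_run_timers(&ctx->tlg);

        return progress;
    }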