On 12/10/06, Zhu Han <[EMAIL PROTECTED]> wrote:
Kevin, please see the comments inside the body.
On 12/11/06, Kevin Sanders <[EMAIL PROTECTED]> wrote:

> One of my coworkers recently observed that handles associated with an
> IOCP seem to have CPU affinity, at least sometimes.  In a read
> completion callback, he posted another read (which is fine and
> encouraged) and then went off and did a lot of processing, which
> prevented it from calling GQCS for about 20 seconds (very bad).  Even
> though there were 3 other threads waiting on GQCS, none of them could
> pop the completion status for that read from the IOCP, even though the
> read had completed.  Finally, as soon as the original thread came back
> around and called GQCS, it popped the completion instead of the other
> threads which had been waiting the whole time.
>
> This makes sense, because a running thread that is reading & writing
> would suffer a CPU cache flush if it changed CPUs.  This was on a true
> dual-CPU box, not a dual-core or hyperthreaded one.  I've never read
> anything that confirms this, but we did see it in this case.
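
If I understand correctly, the pattern is roughly the following (a
hypothetical C sketch of what you describe, not your coworker's actual
code; names like HeavyWork are made up):

#include <string.h>
#include <windows.h>

static char buf[4096];

/* Stand-in for the ~20 seconds of processing (error handling omitted
 * throughout -- this is only a sketch). */
static void HeavyWork(DWORD nbytes) { (void)nbytes; Sleep(20000); }

DWORD WINAPI Worker(LPVOID arg)
{
    HANDLE port = (HANDLE)arg;   /* several threads share this port */
    DWORD nbytes;
    ULONG_PTR key;               /* completion key = the I/O handle here */
    OVERLAPPED *ov;

    for (;;) {
        /* Block until an overlapped read on the port completes. */
        if (!GetQueuedCompletionStatus(port, &nbytes, &key, &ov, INFINITE))
            continue;

        /* Post the next read right away... */
        memset(ov, 0, sizeof(*ov));
        ReadFile((HANDLE)key, buf, sizeof(buf), NULL, ov);

        /* ...then stay away from GQCS for ~20 seconds.  The claim is
         * that the completion of the read above stayed queued until
         * THIS thread returned to GQCS, even with 3 other threads
         * blocked in GQCS on the same port. */
        HeavyWork(nbytes);
    }
}

Is that right?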

What concurrency limit did you set for the IOCP?  If you choose 0 for it
and the platform is UP, only one thread at a time can be running for the
IOCP.

I'm not sure which limit you're asking about.  Are you talking about the
GQCS milliseconds timeout value?  In any case I'm not sure; he was
using his own IOCP code.
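
To be clearer, by "limit" I mean the NumberOfConcurrentThreads argument
of CreateIoCompletionPort, not the GQCS timeout.  A minimal sketch:

#include <windows.h>

int main(void)
{
    /* The 4th argument, NumberOfConcurrentThreads, caps how many threads
     * the kernel lets run completion packets from this port at once.
     * Passing 0 means "one per processor", so on a UP machine only one
     * thread at a time can be running for the port. */
    HANDLE port = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);
    if (port == NULL)
        return 1;
    CloseHandle(port);
    return 0;
}

On your true dual-CPU box the default would be 2, so this alone shouldn't
explain what you saw, but it's worth checking what his code passes there.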

I can hardly believe what you have observed.  Are you sure the 2nd read
operation completed before the 1st thread got back to GQCS?

That's what we kept thinking.  Hard to believe.  At one point he ran
the thread in an infinite loop after posting the second read, and the
completion never popped out of the IOCP on another thread/CPU.

Do you mean only the 2nd read operation's result can't be retrieved by
GQCS?  Can the other I/O operations that completed during that time be
retrieved by GQCS?

Oh, I'm not saying this is fact, only something we observed that we
can't explain.  The handle appeared to have some kind of affinity for
that thread in his code, on that machine, that day.  I had actually
forgotten about this until this discussion; tomorrow I'll try to
reproduce it with my code.

Kevin
_______________________________________________
Libevent-users mailing list
Libevent-users@monkey.org
http://monkey.org/mailman/listinfo/libevent-users
