Phil Muldoon wrote:
Roland McGrath wrote:
I think of the interface as asynchronous at base.  There may at some
point be some synchronous calls to optimize the round trips.  But we
know that by its nature an interface for handling many threads at once
has to be asynchronous, because an event notification can always be
occurring while you are doing an operation on another thread.  So what
keeping it simple means for the first version is that we only worry
about the asynchronous model.  Think of it like a network protocol
between the application and the "tracing server" inside the kernel.
(Don't get hung up on that metaphor if it sounds like a complication to
you.)

Hi. I'm absorbing the email you wrote, but I keep coming back to this issue:

When mixing asynchronous and synchronous requests to utrace in one event loop, how are these events handled? If the client sends five asynchronous requests (and has a thread waiting on replies), and then sends one synchronous request, does the synchronous request always return before the previous five? Is it basically a blocking call? And are the events returned asynchronously in a guaranteed order?

I'm not sure what Roland has in mind, but the way utracer worked was that every attached task had two debugger threads associated with it, what I called the "control" and "response" threads. All requests, whether synchronous or asynchronous, originated in the control thread and were non-blocking. Synchronous requests were, by definition, "trivial" in the sense that they could only ask for information immediately available to the kernel.

Since the threads were independent, the answer to both questions is no: there was no guarantee that the synchronous request would return on the control thread before any of the prior asynchronous requests generated responses on the response thread, nor that the asynchronous requests would produce results in any particular order.
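For concreteness, here's a minimal sketch in C of that two-thread layout. Everything in it (the device paths, ioctl number, and struct layout) is a hypothetical placeholder, not the real utracer interface; it's only meant to show the control-thread/response-thread split and why ordering can't be guaranteed.

#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

#define UTRACER_CTL_QUIESCE 0x4001         /* hypothetical request code */

struct utracer_response {                  /* hypothetical response record */
        long pid;
        int  event;
        long data;
};

/* Response thread: blocking read()s deliver asynchronous results
   whenever the traced task generates them, in no guaranteed order. */
static void *response_loop (void *arg)
{
        int fd = *(int *) arg;
        struct utracer_response r;

        while (read (fd, &r, sizeof r) == (ssize_t) sizeof r)
                printf ("event %d from pid %ld\n", r.event, r.pid);
        return NULL;
}

int main (void)
{
        /* Hypothetical control and response descriptors. */
        int ctl_fd  = open ("/proc/utrace/ctl",  O_WRONLY);
        int resp_fd = open ("/proc/utrace/resp", O_RDONLY);
        pthread_t tid;

        if (ctl_fd < 0 || resp_fd < 0)
                return 1;

        pthread_create (&tid, NULL, response_loop, &resp_fd);

        /* Control thread: the request itself never blocks; any
           resulting report shows up later on the response thread,
           possibly before or after other outstanding replies.  */
        ioctl (ctl_fd, UTRACER_CTL_QUIESCE, (long) 1234 /* target pid */);

        pthread_join (tid, NULL);
        return 0;
}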

At least in utracer, asynch requests were typically task-control commands like a quiesce request, which could generate an asynch response if the appropriate utrace report_* had been enabled to do that. Asynch ops could also enable syscall-entry, syscall-exit, or signal reports, resulting in responses at arbitrary times and in no deterministic order. I also had a built-in memory-access request (like /proc/$pid/mem) that could generate an asynch response, depending, e.g., on paging.

Under the covers, all requests, whether for synchronous or asynchronous results, are made via ioctl()s that pass a pointer to a user-space struct containing not only a precise specification of the command or data request but also pointers to user-space memory into which the module can store synchronous results. Asynchronous results are delivered through blocking read()s.
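Again purely as an illustration (the struct, command names, and ioctl number are my placeholders, not the actual utracer definitions), the request path might look roughly like this:

#include <stdint.h>
#include <stddef.h>
#include <sys/ioctl.h>

#define UTRACER_REQ 0x4002                 /* hypothetical ioctl number */

enum utracer_cmd {                         /* hypothetical command set */
        CMD_GET_REGS,                      /* synchronous: data is available now   */
        CMD_READ_MEM,                      /* may complete asynchronously (paging) */
        CMD_ENABLE_SYSCALL_REPORTS,        /* later reports arrive via read()      */
};

struct utracer_request {
        enum utracer_cmd cmd;
        long             pid;
        uint64_t         addr;             /* target address, for CMD_READ_MEM     */
        size_t           len;
        void            *result;           /* user buffer for synchronous results  */
};

/* A synchronous query: the module fills *regbuf before ioctl() returns. */
static int get_regs (int ctl_fd, long pid, void *regbuf, size_t len)
{
        struct utracer_request req = {
                .cmd = CMD_GET_REGS, .pid = pid, .result = regbuf, .len = len,
        };
        return ioctl (ctl_fd, UTRACER_REQ, &req);
}

Asynchronous results for commands like CMD_READ_MEM or the enabled syscall reports would then come back on the response descriptor via the blocking read() loop sketched earlier.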

No guarantees, of course--the next-generation stuff may do things differently--but even as I type this, I'm hacking together a boilerplate framework that does exactly as described above.


Feedback welcome.

cm



--
Chris Moller

 I know that you believe you understand what you think I said, but
 I'm not sure you realize that what you heard is not what I meant.
     -- Robert McCloskey

