Chris Moller wrote:


Phil Muldoon wrote:
Roland McGrath wrote:
I think of the interface as asynchronous at base.  There may at some
point be some synchronous calls to optimize the round trips.  But we
know that by its nature an interface for handling many threads at once
has to be asynchronous, because an event notification can always be
occurring while you are doing an operation on another thread.  So what
keeping it simple means for the first version is that we only worry
about the asynchronous model.  Think of it like a network protocol
between the application and the "tracing server" inside the kernel.
(Don't get hung up on that metaphor if it sounds like a complication to
you.)

Hi. I'm absorbing the email you wrote, but I keep coming back to this issue:

When mixing asynchronous and synchronous requests to utrace in one event loop, how are these events handled? If the client sends five asynchronous requests (and has a thread waiting on replies), and then sends one synchronous request, does the synchronous request always return before the previous five? Is it basically a blocking call? And are the events returned asynchronously in a guaranteed order?

I'm not sure what Roland has in mind, but the way utracer worked was that every attached task had two debugger threads associated with it, what I called the "control" and "response" threads. All requests, whether synchronous or asynchronous, originated in the control thread and were non-blocking. Synchronous requests were, by definition, "trivial" in the sense that they could only request information immediately available to the kernel.

Since the threads were independent, in answer to the questions: there was no guarantee either that the synchronous request would return on the control thread before any of the prior asynchronous requests generated responses on the response thread, or that the asynchronous requests would generate results in any particular order.

At least in utracer, asynchronous requests were typically task-control commands like a quiesce request, which could generate an asynchronous response if the appropriate utrace report_* callback had been enabled to do so. Asynchronous ops could also enable syscall entry/exit or signal reports, resulting in responses at arbitrary times, in no deterministic order. I also had a built-in memory-access request (like /proc/$pid/mem) that could generate an asynchronous response, depending, e.g., on paging.

Under the covers, all requests, whether for synchronous or asynchronous results, are made via ioctl()s that can pass a pointer to a user-space struct. That struct contains not only a precise specification of the command or data request; it can also contain pointers to user-space memory into which the module can store synchronous results. Asynchronous results are implemented as blocking read()s.

No guarantees, of course--the next-generation stuff may do things differently--but even as I type this, I'm hacking together a boilerplate framework that does exactly as described above.

Thanks for the description, Chris. From an ntrace client implementation point of view, non-ordered replies to asynchronous requests present an ordering conundrum in the client, especially as it scales to many inferiors. But without knowing how events are structured and how they can be paired with the original requester in the client, I'm not sure how much of an issue it will be. Frysk uses the observer design pattern for a lot of these types of requests to solve this problem. Do you think this will be a similar pattern? Anyway, not a big deal; it is a known problem with known solutions. Tom mentioned that the X protocol has something similar in this regard.

To look a little closer at your synchronous event description, I'm struggling to classify what types of events would be synchronous. I don't want to get tied up here too much, as both you and Roland mention this is all fluid. But for the sake of understanding and documentation, can you give me a use case where a synchronous request would be needed/used, and also an asynchronous one?

Regards

Phil
