Hi Tobias,
thank you for the preview of your weekly status report. I just want to
add some of our discussion points for the sake of completeness.
On 25.06.2012 15:21, Tobias Börtitz wrote:
> Hi,
>
> sorry for not responding for such a long time.
> Thank you for all the comments. After reading them and discussing the
> issues with Stefan we concluded what the solutions would look like:
> - To keep it simple every communication will be made through kernel IPC
> mechanisms. Therefore every syscall will contain the specific thread-id
> the message is addressed to as one of the free arguments.
> - Since Genode is using a Messagebuffer for communicating, every
> communication will be made through the syscall sending complex data.
This is not fully true. We agreed that the baseline implementation only
uses the so-called "long" version of IPC. Nevertheless, the message
buffer used in Genode does not limit you to that. Depending on how much
data the message buffer contains, you can dynamically decide to use
another kind of IPC that transfers fewer items.
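As a side note, that decision can be taken right before each call. A
minimal sketch of it, assuming a payload limit of five machine words
for the short flavour ('Msgbuf', 'send_short' and 'send_long' are
made-up placeholders, not actual Genode or Spartan identifiers):

#include <cstdint>
#include <cstdio>
#include <cstddef>

enum { SHORT_IPC_WORDS = 5 };     /* assumed register-payload limit */

struct Msgbuf {
    std::uint64_t words[64];
    std::size_t   used;           /* payload words currently in use */
};

/* stand-ins for the two kernel bindings */
void send_short(int thread_id, std::uint64_t const *w, std::size_t n) {
    std::printf("short IPC to %d: %zu words\n", thread_id, n);
}
void send_long(int thread_id, void const *buf, std::size_t size) {
    std::printf("long IPC to %d: %zu bytes\n", thread_id, size);
}

void send(int thread_id, Msgbuf const &msg)
{
    if (msg.used <= SHORT_IPC_WORDS)
        /* payload fits into the registers of a short call */
        send_short(thread_id, msg.words, msg.used);
    else
        /* fall back to the long variant carrying complex data */
        send_long(thread_id, msg.words,
                  msg.used * sizeof(std::uint64_t));
}

int main()
{
    Msgbuf msg = {};
    msg.used = 3;
    send(7, msg);    /* fits, short flavour  */
    msg.used = 12;
    send(7, msg);    /* exceeds limit, long  */
}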
> - There will be one manager thread per task for incoming calls. It will
> dispatch those calls to thread specific call queues. The manager thread
> will be started as soon as the first worker thread is waiting for an
> incoming call (e.g. when a service is registered or a callback
> connection requested).
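Just to make this dispatching scheme a bit more tangible, a rough
sketch of the manager thread's loop ('Call', 'wait_for_incoming_call'
and the queue layout are hypothetical placeholders, not actual Genode
or Spartan interfaces):

#include <map>
#include <queue>
#include <mutex>

struct Call { int dst_thread_id; /* ... payload ... */ };

/* one call queue per worker thread of the task */
static std::map<int, std::queue<Call> > call_queues;
static std::mutex                       queues_mutex;

/* stub standing in for the blocking receive from the kernel */
Call wait_for_incoming_call() { return Call(); }

/* body of the manager thread, started as soon as the first worker
 * thread waits for an incoming call */
void manager_thread_entry()
{
    for (;;) {
        Call call = wait_for_incoming_call();

        /* dispatch to the queue of the addressed worker thread */
        std::lock_guard<std::mutex> guard(queues_mutex);
        call_queues[call.dst_thread_id].push(call);
    }
}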
> - The Core task in Genode will be acting as an initial naming service.
> When a new task is spawned it will ask Core to connect to its parent.
> Therefore Core has to keep track of the task hierarchy.
Correct, we discussed that core will be the initial name service for
all tasks. Because core already knows the parent of each task, no
additional tracking effort is needed.
Moreover, we discussed a two-stage approach as a possible solution for
how a connection to a service might get established. Assume we have the
following situation:
  Service   Client
        \   /
         Init
          |
        (Core)  - first name service
The lines are already established connections. The client would first
send an IPC request to its parent (here Init); this is how Genode's
session mechanism works in general. Init, depending on its policy,
changes the session parameters and informs one of its children (here
Service), again via normal IPC. As a result, a Genode::Capability is
returned, containing a unique number. On the capability's way back to
the client, every node saves the correlation of unique id, requesting
party, and destination. During the second stage, Spartan's
connection-request mechanism ('connect_me_to', I believe) is used with
the capability's unique id as argument. The dispatcher thread of every
node on the request's way (Core - Init - Service) asks its database,
via the unique id, whether it has to forward the request again, accept
the connection, or deny it. After that, the database entry gets
removed. At the end, the new connection is established.
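To illustrate the bookkeeping behind the two stages, here is a rough
sketch of what each node's dispatcher might keep in its database and
how it could react to a connection request (all names are illustrative
assumptions, not actual Genode or Spartan identifiers):

#include <map>

enum Action { FORWARD, ACCEPT, DENY };

struct Route {
    int  requester;    /* party that requested the session        */
    int  destination;  /* next hop on the way towards the service */
    bool terminal;     /* true at the node that owns the service  */
};

/* stage 1: while the capability travels back to the client, every
 * node records which unique id maps to which requester/destination */
static std::map<unsigned, Route> routes;

void record(unsigned unique_id, Route const &route)
{
    routes[unique_id] = route;
}

/* stage 2: the dispatcher receives the connection request (e.g.
 * 'connect_me_to') carrying the capability's unique id and consults
 * its database to forward, accept, or deny the request */
Action handle_connect(unsigned unique_id)
{
    std::map<unsigned, Route>::iterator it = routes.find(unique_id);
    if (it == routes.end())
        return DENY;              /* unknown id: refuse the request */

    Action action = it->second.terminal ? ACCEPT : FORWARD;
    routes.erase(it);             /* database entry gets removed    */
    return action;
}

int main()
{
    Route route = { 1 /* requester */, 2 /* destination */, false };
    record(42, route);
    handle_connect(42);   /* forwards once, then drops the entry */
}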
But this proposed solution can only be used for new Genode sessions.
In general, it is also possible in Genode to share a capability (and
thereby a connection) with another task via IPC, with the capability
as argument. This use case isn't covered by the proposed solution.
Therefore, Tobias wanted to investigate how the cloning of a
connection works, and whether it gives him the right tool to close
that gap.
Best regards
Stefan
--
Stefan Kalkowski
Genode Labs
http://www.genode-labs.com/ · http://genode.org/
Genode Labs GmbH · Amtsgericht Dresden · HRB 28424 · registered office Dresden
Managing directors: Dr.-Ing. Norman Feske, Christian Helmuth