Hello Tobias,

Thanks for your detailed report. I believe Jakub has already commented on most parts of it, so let me just add a few points (and perhaps reiterate some, since I had already started drafting my email before Jakub's arrived :)).

|        HelenOS provides        |
|--------------------------------|
| between tasks only             |
| (separate address spaces)      |
| - identified by their id       |

While the kernel does identify the communicating parties by task IDs, that doesn't mean that a task can connect to any task just by knowing its task ID. A new IPC connection can only be created via an already existing connection, as we discussed previously.
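Just to make this concrete, here is a rough sketch of how a connection is obtained in practice (the libc wrapper name and signature are approximate for this era of HelenOS, and SERVICE_EXAMPLE is a made-up identifier):

    /* A task never connects by raw task ID: it sends
     * IPC_M_CONNECT_ME_TO over a phone it already has -- here
     * phone 0, which every task traditionally has connected to the
     * naming service -- and the receiving side may forward the
     * request to the real destination. */
    #include <ipc/ipc.h>

    #define SERVICE_EXAMPLE 42   /* hypothetical service identifier */

    int connect_to_service(void)
    {
        int phone = ipc_connect_me_to(0, SERVICE_EXAMPLE, 0, 0);
        if (phone < 0) {
            /* refused, or the naming service does not know the target */
        }
        return phone;
    }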

  b) Using syscalls for every communication, even for communication
between threads in the same address space.

I back Jakub's opinion here: this should be your initial baseline implementation. Use the same communication mechanism (the kernel IPC) for all cases.

You can optimize the communication mechanism for threads of a single task later on, but don't optimize prematurely. First make sure that you have a sound and universal solution before tweaking it for performance.

  a) Create one thread per task which is dedicated to fetching messages
from the answerbox and forwarding the fetched messages to the addressed
thread within the address space.

Again, the dispatcher thread method should be your initial baseline implementation. We use it in the async framework and it simply works.
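Just to make the baseline concrete, the dispatcher loop boils down to roughly the following (loosely modeled on what the async framework does internally; names and header paths are approximate, and route_to_thread() is hypothetical glue):

    #include <ipc/ipc.h>

    /* Hypothetical: hands the message to the addressed thread within
     * the address space, e.g. via a per-thread queue and a condvar. */
    extern void route_to_thread(ipc_callid_t callid, ipc_call_t *call);

    static void dispatcher_loop(void)
    {
        while (1) {
            ipc_call_t call;

            /* Block until a message arrives in the task's answerbox. */
            ipc_callid_t callid = ipc_wait_for_call(&call);

            /* Decide which local thread the message addresses (the
             * addressing scheme is part of your design) and hand it over. */
            route_to_thread(callid, &call);
        }
    }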

The other proposed variant (where all threads are allowed to receive messages) is actually quite similar to the dispatcher thread from the design point of view (it can be called "wandering dispatcher"), but it is technically more complicated.

This would require some relatively deep
digging into the Genode code since the task creation process would have
to be altered.

Modifications to the Genode task creation process will be inevitable one way or another, due to all the specifics of SPARTAN.

For instance, SPARTAN cannot start new user space tasks other than the init tasks. It can only create new instances of the "loader" init task (which is then responsible for transforming itself into the target task).

The kernel also does not manage any parent-child relation between tasks; it really only spawns new instances of the loader. As you will probably need this parent-child relation for various purposes, you will have to keep track of it in the naming service (as this is the only task in the system to which all other tasks are connected).
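To sketch what I mean (all names here are hypothetical, this is not existing code; the point is only that the naming service is the natural place for such a table):

    #include <stddef.h>
    #include <task.h>    /* for task_id_t; header path approximate */

    #define TASK_RELATION_MAX 1024   /* hypothetical fixed-size table */

    typedef struct {
        task_id_t child;
        task_id_t parent;
    } task_relation_t;

    static task_relation_t relation_table[TASK_RELATION_MAX];
    static size_t relation_count;

    /* Called when a parent reports that it has spawned a new loader
     * instance on behalf of a child. */
    void ns_register_child(task_id_t parent, task_id_t child)
    {
        relation_table[relation_count].parent = parent;
        relation_table[relation_count].child = child;
        relation_count++;
    }

    /* Called when a newly started task asks who its parent is. */
    task_id_t ns_get_parent(task_id_t child)
    {
        for (size_t i = 0; i < relation_count; i++) {
            if (relation_table[i].child == child)
                return relation_table[i].parent;
        }
        return 0;   /* unknown */
    }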

  a) A newly created task would have to ask the first loaded task to
connect it to its parent. The first loaded task would therefore have
to know, in some way, where the parent is located in order to forward
the request.

As Jakub suggested, you can use the connection to the task loader for this.

My alternative suggestion would be to use the naming service for this, similarly to the way the HelenOS Naming Service is used to store and retrieve tasks' return values.

  (2) - Problem 2: The first loaded task in the Genode Framework is by
definition the "core" task. As far as I can judge, core has nothing at
all to do with IPC mechanisms. In the current state I do not know of
any solution to this problem, since I do not have such deep knowledge
of the Genode system.

Well, clearly this is the time when you have to start thinking out of the box. Genode needs the "core" task; SPARTAN needs a naming service. You can either make the "core" task the naming service (but then it will inevitably do some IPC), or you can have a dedicated naming service as the first task, with the "core" task becoming the second task.

In both cases you are just breaking a convention, but it is a technically feasible and sound step required to make the worlds of Genode and SPARTAN work together.

  (4) - It has to be considered carefully which syscall is used for
which task, since the syscalls differ in execution speed (how much do
they actually differ?). Which one to use for which task will become
clear in the course of the port.

As Jakub already tried to explain, you choose the proper IPC syscall variant not according to the task, but according to the size and complexity of the payload you want to send, for each individual message.

If the payload fits SYS_IPC_CALL_ASYNC_FAST, then this is obviously the best (and fastest) option. If it doesn't, you need to use SYS_IPC_CALL_ASYNC_SLOW, which is slower due to the copying of some of the arguments, but not by much.
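In pseudo-C, the selection boils down to something like this (the real libc wrappers are ipc_call_async_fast() and ipc_call_async_slow(), but their exact parameter lists are elided here, so send_fast()/send_slow() are hypothetical stand-ins, and FAST_PAYLOAD_WORDS is my assumption about how many payload words the FAST variant passes directly):

    #include <stddef.h>

    #define FAST_PAYLOAD_WORDS 4   /* assumption, see above */

    extern void send_fast(int phone, unsigned method, unsigned *args, size_t nargs);
    extern void send_slow(int phone, unsigned method, unsigned *args, size_t nargs);

    void send_message(int phone, unsigned method, unsigned *args, size_t nargs)
    {
        if (nargs <= FAST_PAYLOAD_WORDS)
            send_fast(phone, method, args, nargs);   /* SYS_IPC_CALL_ASYNC_FAST */
        else
            send_slow(phone, method, args, nargs);   /* SYS_IPC_CALL_ASYNC_SLOW */
    }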

If the payload does not fit into 6 integers, you have to use the IPC_M_DATA_WRITE, IPC_M_DATA_READ, IPC_M_SHARE_OUT and IPC_M_SHARE_IN messages to copy the data between address spaces or create a shared memory area.

The shared memory approach is obviously quite efficient for data of any size, but it requires the IPC handshake to establish the memory sharing and further IPC messages for synchronization. Thus the goal is to amortize this overhead as much as possible, for example by not establishing and tearing down the shared memory area for each individual message.
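A rough sketch of the amortized scheme (as_area_create() and async_share_out_start() are HelenOS libc calls of this era, but their exact signatures are approximate here; NOTIFY_DATA_READY and SHARED_SIZE are hypothetical protocol constants):

    #include <as.h>       /* header paths approximate */
    #include <async.h>
    #include <string.h>

    #define SHARED_SIZE       (16 * 1024)
    #define NOTIFY_DATA_READY 100

    static void *shared_buf;   /* mapped once, reused for all messages */

    void connection_init(int phone)
    {
        /* One-time handshake: create an area and share it out
         * (this is the IPC_M_SHARE_OUT negotiation). */
        shared_buf = as_area_create((void *) -1, SHARED_SIZE,
            AS_AREA_READ | AS_AREA_WRITE);
        async_share_out_start(phone, shared_buf,
            AS_AREA_READ | AS_AREA_WRITE);
    }

    void send_payload(int phone, const void *data, size_t size)
    {
        /* No per-message handshake: just copy and send one cheap
         * FAST-path synchronization message. */
        memcpy(shared_buf, data, size);
        async_msg_1(phone, NOTIFY_DATA_READY, size);
    }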


M.D.
