Hello,

On 2020-12-03 03:51, Andrew Warkentin wrote:
> On 12/2/20, Indan Zupancic [] wrote:

Please don't quote full e-mail addresses on public mailing
lists; bots can find those.


>> The main problem is that you don't want VM user space to block
>> the whole VM by doing a blocking seL4 call. Currently it does
>> not provide a good way to receive notifications, only to send
>> them. And interaction with IRQ delivery is tricky.
>>
>> Because of the above limitation it's unclear if it's worth it
>> for our use case, so I haven't actually tried it out yet.


> That would basically render it useless for my purposes. I was thinking
> of an interface that would vector incoming messages to an interrupt.

That seems possible to implement. The challenge is doing it
without increasing seL4's complexity and without slowing down
all notification and IPC system calls, since it would add a
couple of special checks somewhere. It would require new system
calls, e.g. seL4_Notification_SetIRQHandler() and
seL4_IPC_SetIRQHandler(), or it could be a TCB/VCPU binding
instead.
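
To make this concrete, here is a rough sketch of what the VMM
side might look like. Both prototypes and their argument lists
are made up for illustration; nothing like this exists in
mainline seL4 today:

#include <assert.h>
#include <sel4/sel4.h>

/* Hypothetical prototypes -- not part of the real seL4 API.
 * The idea: once bound, a pending signal on 'ntfn' (or message
 * on 'endpoint') is delivered to the guest as virtual IRQ
 * 'virq' on 'vcpu' instead of requiring a blocking receive. */
seL4_Error seL4_Notification_SetIRQHandler(seL4_CPtr ntfn,
                                           seL4_CPtr vcpu,
                                           seL4_Word virq);
seL4_Error seL4_IPC_SetIRQHandler(seL4_CPtr endpoint,
                                  seL4_CPtr vcpu,
                                  seL4_Word virq);

/* VMM-side setup: route notification signals to guest IRQ 42,
 * an arbitrary number agreed on with the guest kernel. */
static void route_ntfn_to_guest(seL4_CPtr ntfn, seL4_CPtr vcpu)
{
    seL4_Error err = seL4_Notification_SetIRQHandler(ntfn, vcpu, 42);
    assert(err == seL4_NoError);
}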

It does not solve blocking outgoing IPC calls, but I assume you
don't mind those. Interrupts must be kept disabled for the VM
during such calls.
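
As a sketch of what that means for a Linux guest (the
sel4_hvc_call() helper is made up here, see the send side
below):

#include <linux/irqflags.h>

/* Hypothetical guest-kernel helper that traps straight into
 * seL4 via hvc; see the send-side sketch further down. */
extern unsigned long sel4_hvc_call(unsigned long ep,
                                   unsigned long info);

/* A blocking outgoing call blocks the whole VCPU, so seL4
 * cannot inject virtual IRQs in the meantime. Keeping local
 * IRQs off for the duration makes that explicit to the guest. */
static unsigned long guest_sel4_call(unsigned long ep,
                                     unsigned long info)
{
    unsigned long flags, reply;

    local_irq_save(flags);
    reply = sel4_hvc_call(ep, info);
    local_irq_restore(flags);
    return reply;
}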

> It also has to work on x86 as well.

The VM receive side needs special seL4 kernel support to generate
interrupts when there are pending notifications or messages. This
is architecture independent.
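
On the guest side that could look roughly like this (again a
sketch: the IRQ number and the sel4_hvc_poll() helper are made
up):

#define SEL4_NTFN_VIRQ 42 /* must match what the VMM registered */

/* Hypothetical non-blocking wrapper around seL4_Poll() that
 * traps to seL4 via hvc and returns the accumulated badge
 * bits of the notification. */
extern unsigned long sel4_hvc_poll(unsigned long ntfn_cptr);

/* Guest-specific dispatch, e.g. waking waiting guest tasks. */
extern void handle_sel4_badges(unsigned long badges);

static unsigned long sel4_ntfn_cptr; /* handed over at boot */

/* IRQ handler for SEL4_NTFN_VIRQ: the interrupt only means
 * "something is pending", so drain the notification word
 * without ever blocking inside seL4. */
static void sel4_ntfn_irq_handler(void)
{
    handle_sel4_badges(sel4_hvc_poll(sel4_ntfn_cptr));
}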

The VM send side needs a way to directly make calls into the seL4
kernel instead of going via the VMM. This needs VM kernel support,
because such instructions are usually privileged. All VM <-> VMM
communication happens via the seL4 kernel already, so all that is
needed is for seL4 to know whether it is a VMM call or a system
call. On ARM this is done by checking whether the trap was an
svc or an hvc instruction. It seems likely that x86 can do
something similar.
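
For aarch64 the guest-kernel end of this could be as small as
the sketch below. The register layout is the normal seL4
syscall convention (syscall number in x7, capability in x0,
message info in x1); the only difference is "hvc #0" where a
native thread would execute "svc #0":

#include <sel4/sel4.h>

/* Guest-kernel trap into seL4 (aarch64). Same registers as a
 * native seL4 system call, but the hvc trap lets seL4 tell a
 * direct guest call apart from a normal VMM exit. */
static inline seL4_Word guest_sel4_syscall(seL4_Word sys,
                                           seL4_Word dest,
                                           seL4_Word info)
{
    register seL4_Word x0 asm("x0") = dest;
    register seL4_Word x1 asm("x1") = info;
    register seL4_Word x7 asm("x7") = sys;

    /* x2-x5 are seL4 message registers and may be clobbered */
    asm volatile("hvc #0"
                 : "+r"(x0), "+r"(x1)
                 : "r"(x7)
                 : "x2", "x3", "x4", "x5", "memory");

    return x0; /* badge/result, as in the normal return path */
}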

For people who are wondering what use this has: the advantage
of taking the VMM out of the loop for seL4 <-> VM communication
is not only a possible performance increase, but also a simpler
VMM that is more agnostic about the system as a whole. The
system design becomes simpler because you can use the normal
seL4 API everywhere, including in VM user space. That makes it
easier to move code between VM user space and native seL4.

The downside is that the VM kernel needs to support seL4 and
marshal the calls. And the whole VM has only one TCB, which
severely limits usability. It moves complexity from the VMM to
the VM.

Greetings,

Indan