Several years ago I worked extensively on a KVM version of Qubes and I am
interested in contributing again.

On Fri, Nov 17, 2023 at 11:27 AM Andrew “Arthur” Summers <
arthur.summ...@gmail.com> wrote:

> I know this is rather old, so I apologize for the necro. Has there been
> any progress on this effort? There is a lot of flexibility that KVM would
> bring (especially for passthrough support), and I would be interested in
> helping test. I'm a web developer, so my ability to contribute is
> somewhat limited, but I'll do what I can!
>
> On Saturday, August 1, 2020 at 6:57:01 PM UTC-5 Demi M. Obenour wrote:
>
>> On 2020-08-01 18:02, Marek Marczykowski-Górecki wrote:
>> >> In most KVM setups that I know of, the kernel network stack is
>> >> considered trusted. That’s a reasonable assumption for production
>> >> servers, which have server-grade NICs and are behind enterprise
>> >> routers, but not for Qubes.
>> >
>> > TBH I don't think "behind enterprise routers" really helps them. In
>> > many cases (all the cloud providers) the untrusted traffic comes
>> > from within the VMs, not only from outside.
>>
>> That is true. Nevertheless, RCEs in the Linux network stack are
>> *rare*, especially in LTS kernels. Other than GRO Packet of Death
>> (which didn’t affect any LTS kernel), I am not aware of a single
>> such RCE in the last decade. There are likely some vulnerabilities in
>> exotic protocols like SCTP and AX.25, but exploiting them requires
>> the relevant modules to be loaded, which they usually are not.
>> I believe most cloud hosts run very few (if any) network services
>> that are exposed to untrusted guests, so the userspace attack surface
>> is likely small as well. Furthermore, cloud hosts don’t need to
>> implement Wi-Fi or Bluetooth.
>>
>> >>> One idea is to use a socket netdev to connect two VMs directly,
>> >>> but I worry about the performance...
>> >
>> >> We could also reimplement the Xen netfront/netback protocols on top
>> >> of KVM shared memory. Future versions of KVM might even have direct
>> >> support for Xen paravirtualized drivers.
>> >
>> > While this would be a nice development, I think it is *way more*
>> > complex than what is realistic in the short term, absent specific,
>> > directed funding for it.
>> >
>> > As for KVM upstream support for Xen PV drivers, all the plans I've
>> > seen focus on compatibility alone (running unmodified Xen VMs on
>> > KVM), ignoring Xen's superior security model. Specifically, they
>> > implement backend drivers with the assumption of full guest memory
>> > access, and have them only on the host system. There is also a dual
>> > topic under discussion - virtio support on Xen - which suffers from
>> > the same issue. Everyone agrees that avoiding full guest memory
>> > access (and being able to put the backend in another VM) would be
>> > nice to have, but no one is willing to work on it :/
>>
>> I am not familiar with virtio internals, but the protocol might be
>> inherently incompatible with a de-privileged backend. If there is no
>> explicit setup of backend-writable memory mappings, it probably is.
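>>
>> (For illustration, a simplified sketch of the split-ring descriptor
>> from the virtio spec, modeled on struct vring_desc. The addr field is
>> an arbitrary guest-physical address chosen by the guest, which is why
>> a classical virtio backend assumes it can map *all* guest memory
>> rather than an explicit, bounded set of shared pages:
>>
>>     #include <stdint.h>
>>
>>     /* Simplified virtio split-ring descriptor. A backend following a
>>      * descriptor chain must dereference addr, so it needs a mapping
>>      * of whatever guest memory the guest chose to point at. */
>>     struct vring_desc {
>>         uint64_t addr;  /* guest-physical buffer address */
>>         uint32_t len;   /* buffer length in bytes */
>>         uint16_t flags; /* VRING_DESC_F_NEXT, VRING_DESC_F_WRITE, ... */
>>         uint16_t next;  /* index of the next descriptor when chained */
>>     };
>>
>> A Xen-style frontend instead passes grant references to specific
>> pages, so the backend can only map what it was explicitly given.)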
>>
>> It is almost certainly possible to implement Xen PV protocols on top
>> of KVM, but I wonder if it would be better to implement KVM-specific
>> protocols that map better to KVM’s primitives, in order to reduce
>> attack surface in the host. Nevertheless, I believe that Xen is
>> likely to continue to be a better fit in the future. Xen is focusing
>> on the security-critical embedded space, whereas KVM is mostly used
>> in datacenters.
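>>
>> (As a sketch of what "mapping to KVM's primitives" could look like: a
>> doorbell analogous to a Xen event channel can be built from KVM's
>> ioeventfd mechanism, which turns a guest write to a chosen address
>> into an eventfd signal without a round trip through userspace device
>> emulation. DOORBELL_GPA and register_doorbell() are hypothetical
>> names for this example:
>>
>>     #include <stdint.h>
>>     #include <sys/eventfd.h>
>>     #include <sys/ioctl.h>
>>     #include <linux/kvm.h>
>>
>>     /* Hypothetical guest-physical address the guest writes to "kick"
>>      * the backend. */
>>     #define DOORBELL_GPA 0xfe000000ULL
>>
>>     /* Register an eventfd that fires on 4-byte guest writes to the
>>      * doorbell; vm_fd is an already-created KVM VM file descriptor.
>>      * Returns the eventfd on success, -1 on error. */
>>     static int register_doorbell(int vm_fd)
>>     {
>>         int efd = eventfd(0, EFD_NONBLOCK);
>>         struct kvm_ioeventfd ioev = {
>>             .addr = DOORBELL_GPA,
>>             .len  = 4,
>>             .fd   = efd,
>>         };
>>         if (efd < 0 || ioctl(vm_fd, KVM_IOEVENTFD, &ioev) < 0)
>>             return -1;
>>         return efd; /* a backend, wherever it runs, polls this fd
>>                        much like an event channel */
>>     }
>>
>> The reverse direction, host-to-guest notification, is the matching
>> irqfd mechanism.)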
>>
>> >>> One thing to consider is also enabling memory deduplication in KVM
>> >>> (KSM). This should nicely save memory when running multiple
>> >>> similar VMs, but at the same time it is risky in light of
>> >>> speculative execution and also rowhammer-style attacks.
>> >
>> >> Honestly, I don’t think that deduplication is worth it, especially
>> >> in light of TRRespass. It also makes side-channel attacks far, *far*
>> >> easier to exploit.
>> >
>> >> What *might* be safe is mapping read-only data (such as dom0-provided
>> >> kernels) into multiple VMs.
>> >
>> > I don't think the gain from just this is worth the effort.
>>
>> Agreed.
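>>
>> (For context on the mechanism being discussed: KSM only scans memory
>> a process has explicitly opted in with madvise(2), and merging also
>> requires /sys/kernel/mm/ksm/run to be enabled. A minimal userspace
>> sketch:
>>
>>     #include <stddef.h>
>>     #include <sys/mman.h>
>>
>>     /* Allocate an anonymous mapping and mark it mergeable, so the
>>      * KSM daemon may deduplicate identical pages within it. */
>>     static void *alloc_mergeable(size_t len)
>>     {
>>         void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
>>                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
>>         if (buf == MAP_FAILED)
>>             return NULL;
>>         madvise(buf, len, MADV_MERGEABLE); /* opt into KSM scanning */
>>         return buf;
>>     }
>>
>> This is also why the attacks above apply: merged pages are shared
>> copy-on-write across VMs, and the latency of the copy on write leaks
>> whether another VM had an identical page.)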
>>
>> >>> Yes, no issue, as stubdomains do not exist on KVM.
>> >
>> >> This could be a nasty issue for HVMs,
>> >
>> > You mean any VM. If not emulation, then PV backends.
>>
>> No, I meant HVMs specifically. PV backends can run in full domains,
>> as they do now.
>>
>> >> as without stubdomains, all
>> >> emulation must be done in dom0. Google and Amazon both solved this
>> >> by writing their own VMMs. At some point, KVM might be able to move
>> >> the instruction emulation into userspace, which might be a
>> >> significant win.
>> >
>> >> (What I *really* want is a version of QubesOS based on seL4.
>> >> But seL4 didn’t support 64-bit VMs last I checked, and I am not
>> >> aware of any tooling for creating and destroying VMs at run-time.
>> >> Most of the userspace tooling around seL4 is based on CAmkES, which
>> >> requires that every component be statically known at compile-time.
>> >> Furthermore, getting seL4’s hypervisor and IOMMU support verified
>> >> would either require someone to fund the project, or someone who has
>> >> sufficient skill with the Isabelle proof assistant to do it
>> >> themselves.
>> >> Without verification, the benefits of using seL4 are significantly
>> >> diminished.)
>> >
>> > Well, the seL4 kernel is still significantly smaller and delegates
>> > many more tasks to less trusted components. But I agree - for the
>> > current features of Qubes, the static nature of seL4 is quite
>> > limiting.
>>
>> seL4 is actually fully dynamic ― it is the userspace tooling that
>> is static. The seL4 developers have expressed interest in using seL4
>> in dynamic systems, and there are others who are working on building
>> dynamic systems on top of seL4, so I suspect this limitation will be
>> lifted someday. QubesOS is explicitly listed on the seL4 website as an
>> area where seL4 is a much better fit than Xen, so the seL4 developers
>> might be willing to cooperate on a seL4-based version of Qubes.
>>
>> The SMP version of seL4 also has a global kernel lock. This should
>> not be much of a problem, as seL4 system calls are *very* fast. The
>> one exception is object destruction, which makes explicit checks to
>> allow preemption. The lock would be a problem on large NUMA systems,
>> but my understanding is that QubesOS is rarely used on such systems.
>> Future versions of seL4 will support a multikernel mode of operation
>> that avoids this limitation.
>>
>> Sincerely,
>>
>> Demi
>>