I have begun work on porting Qubes to run within a KVM host.  I need a 
development environment that can utilize the CUDA cores on a secondary 
Nvidia RTX GPU, and I would also prefer to be able to use the card for 
graphics.

For several weeks I attempted, without success, to get the GPU to pass 
through to a Qubes virtual machine using the Nvidia drivers.  I can get it 
to work in dom0, but of course dom0 is no place to do actual work.

I looked into KVM and discovered I can pass the GPU through with no issues; 
it just works.

So now I am in a dilemma.  I love Qubes, I have been using it since 2014, 
and I had a hard time coming to terms with having to move to a plain KVM 
environment.  For a while it seemed that moving to KVM was the only 
option; saying goodbye to Qubes was difficult to deal with, so I again 
spent way too much time trying to get the Nvidia GPU working in Qubes.

I then looked into alternatives to prevent my complete departure from 
Qubes.  Marek told me about DomB, which is now in its design stages.  It 
would allow me to statically partition my machine (like having 2 dom0 VMs - 
remember the Nvidia GPU with nvidia drivers works in dom0), but there is no 
experimental code ready yet.  So I then attempted to run Qubes nested 
within KVM.  I ran into display issues and decided that instead of running 
Qubes nested, I should just get Qubes itself to run within KVM.  That's 
where we are today: no need to leave Qubes, invite Qubes over to KVM!

*GOALS*
The final goal is to support all Qubes features and apps.

*STAGE 1*
The initial goal is for Qubes to be able to manage the virtual machines 
(start, stop, etc.) using the 'qvm-*' tools and *Qubes Manager*.  Seamless 
VM video and audio will not be implemented in stage 1, so either a GPU will 
need to be passed through to the VM (which can also provide HDMI audio), or 
the VM will be accessed over SPICE or VNC.  Stage 1 goals include the 
following:

   - Use the same template system Qubes currently uses, including settings 
   like *qvm-prefs*, *features*, *tags*, etc.
   - Support PCI pass-through using Nvidia drivers for the RTX GPU
   - Support qrexec communication between host and VM
   - Lock down the KVM host
   - Secure the network: look into the ability to enable *sys-net* and 
   *sys-firewall*
      
*FUTURE*

   - Seamless windows 
      - Audio
      - Encrypted memory within each VM (AMD processors)
      

*BUILD STATUS*

I have modified all Qubes source repos where necessary to allow building 
for KVM within a Fedora 32 host and guest.  All build modifications use 
conditional tests based on the 'BACKEND_VMM' build configuration option, 
which is set to 'kvm'.  When 'BACKEND_VMM' is set to 'xen', everything 
builds as normal.

   - *vmm-xen*:  I still include this package to allow booting into KVM or 
   Xen.  There is also one dependency on it I still need to remove.
   - *core-libvirt*:  Configured to also compile the KVM modules and any 
   other modules provided within the Fedora 32 distribution packages.
   - *core-vchan-xen*:  Not required.  Components that require it use the 
   'BACKEND_VMM' build variable.  Nice forward thinking from the Qubes 
   developers!
   - *core-vchan-libkvmchan*: Packaged *libkvmchan 
   <https://github.com/shawnanastasio/libkvmchan>* code based on the work 
   completed by @shawnanastasio <https://github.com/shawnanastasio>.
   - *qubes-core-vchan-kvm*: Packaged *qubes-core-vchan-kvm 
   <https://github.com/shawnanastasio/qubes-core-vchan-kvm>* code based on 
   the work completed by @shawnanastasio <https://github.com/shawnanastasio>
   .
   - *linux-utils*: Removed *qmemman* for the KVM build.  Not sure if it 
   can be adapted for KVM.  Will revisit a KVM alternative later.
   - *core-admin*:
      - Added KVM *libvirt* template
      - Added additional conditional 'BACKEND_VMM' for Xen specific build 
      and depends
      - Still installs the *qubes-qmemman.service* unit files.  Not sure if 
      they can be adapted to KVM.
      - Changes to the *qubes* Python package (see *WIP* below)
   - *other*: Minor changes here and there.


*INSTALL STATUS*

   - *dom0*:
      - All dom0 packages install without error (minus vmm-stubdom and iso 
      related packages)
      - All Qubes services start successfully on boot
      

   - *template*:
      - qubes-template-fedora-32 installs within the KVM host.  A few 
      manual modifications were made to qubes.xml to facilitate this.
   

*WIP*

   - *core-admin*
      - *qubes python package*
         - Added a 'hypervisor' module to detect the hypervisor type (xen, 
         kvm, etc.).  It replaces checks like the following, which assume 
         the hypervisor is Xen whenever 'xen.lowlevel' can be imported; in 
         my case the Xen module is installed since I also have a Xen boot 
         option:
            - *qubes.app.VMMConnection.init_vmm_connection* change:
               - old: if 'xen.lowlevel.{xs,xc}' in sys.modules:
               - new: if hypervisor_type('xen'):
         - There are a few remaining dependencies on Xen, such as 
         *qubes.app.xs* (xenstore).  I was hoping that *xenstore* could be 
         used as a standalone application (without Xen being active).  I 
         have not yet looked at the source code, but I tried starting the 
         *xenstore* service and it failed because the '/proc/xen' directory 
         does not exist.  I am wondering whether the store would run without 
         Xen if I created a *procfs* entry for '/proc/xen'.

         If *xenstore* won't work without Xen, then I need to determine the 
         best alternative: convert *xenstore* to work without Xen, or find 
         some other solution?
            
            - *qubes.app.xs (xenstore)* call sites:
               - *qubes.ext.pci.attached_devices*:  ls('', 'backend/pci'), 
               ls('', 'backend/pci' + domid), read('', devpath + '/domain'), 
               read('', devpath + '/num_devs'), read('', devpath + '/dev-' 
               + str(dev))
               - *qubes.vm.qubesvm.stubdom_xid*:  if xs is None: return -1  
               # No issue
               - *qubes.vm.qubesvm.start_time*:  read('', 
               '/vm/{}/start_time'.format(self.uuid))
               - *qubes.vm.qubesvm.create_qdb_entries*:  set_permissions('', 
               '/local/domain/{}/memory'.format(self.xid), [{'dom': 
               self.xid}])
               - *qubes.vm.qubesvm.get_prefmem*:  read('', 
               '/local/domain/{}/memory/meminfo'.format(self.xid))
            - *qubes.app.xc (xen connection)* call sites:
               - *qubes.app.QubesHost.get_free_xen_memory*:  
               physinfo()['free_memory']
               - *qubes.app.QubesHost.is_iommu_supported*:  
               physinfo()['virt_caps']
               - *qubes.app.QubesHost.get_vm_stats*:  domain_getinfo(int, 
               int)['domid', 'mem_kb', 'cpu_time', 'online_vcpus']
      - Added a 'hypervisor' script to '/usr/lib/qubes' for use by other 
      scripts like 'startup-misc.sh':
         - if /usr/lib/qubes/hypervisor_type xen; then ...
      

*CURRENT ISSUES TO RESOLVE*

   - *xenstore*:  Will it work without Xen?  If not, convert it so it will, 
   or provide another alternative?
   - *qmemman*:
      - Provide KVM alternative
      - Should components like linux-utils that provide Xen-only utilities 
      have those utilities split into another repo like 'linux-utils-xen'?  
      Then when a KVM alternative is available, it could be placed in 
      'linux-utils-kvm'.
   - *Qubes python packages*:
      - Not yet sure how much of it relies on any Xen packages.  For now I 
      will continue using the hypervisor check, and once all Python 
      packages are functioning correctly with KVM we can look into better 
      ways to handle Xen vs. KVM or other hypervisors.
   - *qubes-builder*:
      - For some reason I cannot build with 'DIST_BUILD_TOOLS=1' (standard 
      Qubes Xen components).  I always get an error when building dom0-fc32 
      of "sudo: unrecognized option 
      '--resultdir=/home/user/qubes/chroot-dom0-fc32/home/user/qubes-src/vmm-xen/pkgs/dom0-fc32'".  
      Am I missing another config option?
      - Libvirt often fails to compile using 32 cores, giving an error 
      about a file that does not exist (when it fails, it always fails at 
      the same spot with the same error message).  It seems to be compiling 
      too fast, or maybe it has something to do with using the BTRFS 
      filesystem.  The rpm spec for libvirt uses the number of processors 
      available (make -j32 V=1).  It will build without errors if I add a 
      '.rpmmacros' file containing '%_smp_mflags -j10' to the 
      'chroot-dom0-fc32/home/user' directory.  Just wondering if there is a 
      way to set the number of jobs per component, or maybe switching to 
      using 'DIST_BUILD_TOOLS' will help.
         

Comments welcome,

Jason
