[qubes-users] Re: SUCCESS: GPU passthrough on Qubes 3.1 (Xen 4.6.1) / Radeon 6950 / Win 7 & Win 8.1 (TUTORIAL + HCL)

2016-08-04 Thread Marcus at WetwareLabs
On Monday, August 1, 2016 at 10:24:32 PM UTC+3, tom...@gmail.com wrote:
> Hi Marcus,
> 
> I'm bit confused with this
> > Edit /etc/default/grub and add following options (change the pci address if 
> > needed)
> 
> Which version of Qubes is this? Aint 3.1 EFI-only?
> And EFI version of kernel args are to be passed via /boot/efi/EFI/qubes 
> (kernel=)?
> 
> regards,
>   Tom

Hi Tom, 

I use 3.1 and 3.2 rc2. Actually, I hadn't thought about this before. It seems 
that on my system the default boot mode is 'BIOS compatibility mode', even though 
it's a new motherboard running UEFI firmware. As for the partition table on my 
SSD, it has always been a 'dos'-type MBR and was never converted to GPT by the 
Qubes installer. 

I'm not familiar with configuring the EFI bootloader, but it seems editing 
/boot/efi/EFI/qubes/xen.cfg should work. There's a lot of discussion about it 
here: https://github.com/QubesOS/qubes-issues/issues/794
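
For reference, a rough sketch of what the relevant xen.cfg entry tends to look 
like on an EFI install (the kernel version, root device and option values below 
are illustrative only; use whatever your installer generated):

```
[global]
default=4.1.13-9.pvops.qubes.x86_64

[4.1.13-9.pvops.qubes.x86_64]
# Xen (hypervisor) options go on the options= line:
options=console=none dom0_mem=min:1024M dom0_mem=max:4096M
# dom0 kernel parameters are appended to the kernel= line:
kernel=vmlinuz-4.1.13-9.pvops.qubes.x86_64 root=/dev/mapper/qubes_dom0-root rd.qubes.hide_pci=03:00.0,03:00.1
ramdisk=initramfs-4.1.13-9.pvops.qubes.x86_64.img
```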



[qubes-users] Re: SUCCESS: GPU passthrough on Qubes 3.1 (Xen 4.6.1) / Radeon 6950 / Win 7 & Win 8.1 (TUTORIAL + HCL)

2016-07-14 Thread Marcus at WetwareLabs
Some more experimentation with GTX980:

- Tried Core2Duo CPUID from KVM VM
- Ported NoSnoop patch from KVM

Sadly, neither of these helped with the BSODs / Code 43 errors.

I posted the results (with patches and more detailed information) on Xen-devel 
(https://lists.xenproject.org/archives/html/xen-devel/2016-07/msg01713.html). I 
hope the experts there might have more suggestions.



[qubes-users] Re: SUCCESS: GPU passthrough on Qubes 3.1 (Xen 4.6.1) / Radeon 6950 / Win 7 & Win 8.1 (TUTORIAL + HCL)

2016-07-09 Thread Marcus at WetwareLabs
On Saturday, July 9, 2016 at 5:57:42 PM UTC+3, Marcus at WetwareLabs wrote:
> Here's the patch.

Forgot to add: if spoofing is turned on for an already-installed Windows VM, 
there was a BSOD during boot (Windows really doesn't like it when the hypervisor 
suddenly disappears..). Re-installing Windows (with spoofing on) fixes this 
(repairing the installation with a rescue CD might also work, but I did not test 
that).

 



[qubes-users] Re: SUCCESS: GPU passthrough on Qubes 3.1 (Xen 4.6.1) / Radeon 6950 / Win 7 & Win 8.1 (TUTORIAL + HCL)

2016-07-09 Thread Marcus at WetwareLabs
Here's the patch.

diff -ur -x .cproject -x .project -x '*.swp' xen-4.6.1/tools/firmware/hvmloader/hvmloader.c xen-4.6.1-new/tools/firmware/hvmloader/hvmloader.c
--- xen-4.6.1/tools/firmware/hvmloader/hvmloader.c	2016-02-09 16:44:19.0 +0200
+++ xen-4.6.1-new/tools/firmware/hvmloader/hvmloader.c	2016-07-04 23:31:32.81500 +0300
@@ -127,9 +127,11 @@
 
         if ( !strcmp("XenVMMXenVMM", signature) )
             break;
+        if ( !strcmp("ZenZenZenZen", signature) )
+            break;
     }
 
-    BUG_ON(strcmp("XenVMMXenVMM", signature) || ((eax - base) < 2));
+    BUG_ON( (strcmp("XenVMMXenVMM", signature) && strcmp("ZenZenZenZen", signature) ) || ((eax - base) < 2));
 
     /* Fill in hypercall transfer pages. */
     cpuid(base + 2, &eax, &ebx, &ecx, &edx);
diff -ur -x .cproject -x .project -x '*.swp' xen-4.6.1/tools/libxl/libxl_create.c xen-4.6.1-new/tools/libxl/libxl_create.c
--- xen-4.6.1/tools/libxl/libxl_create.c	2016-07-09 16:47:05.18100 +0300
+++ xen-4.6.1-new/tools/libxl/libxl_create.c	2016-07-04 23:49:54.80200 +0300
@@ -284,6 +284,8 @@
         libxl_defbool_setdefault(&b_info->u.hvm.acpi_s4,        true);
         libxl_defbool_setdefault(&b_info->u.hvm.nx,             true);
         libxl_defbool_setdefault(&b_info->u.hvm.viridian,       false);
+        libxl_defbool_setdefault(&b_info->u.hvm.spoof_viridian, false);
+        libxl_defbool_setdefault(&b_info->u.hvm.spoof_xen,      false);
         libxl_defbool_setdefault(&b_info->u.hvm.hpet,           true);
         libxl_defbool_setdefault(&b_info->u.hvm.vpt_align,      true);
         libxl_defbool_setdefault(&b_info->u.hvm.nested_hvm,     false);
@@ -1263,6 +1265,11 @@
         libxl__device_console_add(gc, domid, &console, state, &device);
         libxl__device_console_dispose(&console);
 
+        LOG(DEBUG, "wetware - checking spoofing for guest (domid %d): xen %d, vir %d", domid,
+            libxl_defbool_val(d_config->b_info.u.hvm.spoof_xen),
+            libxl_defbool_val(d_config->b_info.u.hvm.spoof_viridian)
+            );
+
         dcs->dmss.dm.guest_domid = domid;
         if (libxl_defbool_val(d_config->b_info.device_model_stubdomain))
             libxl__spawn_stub_dm(egc, &dcs->dmss);
diff -ur -x .cproject -x .project -x '*.swp' xen-4.6.1/tools/libxl/libxl_dom.c xen-4.6.1-new/tools/libxl/libxl_dom.c
--- xen-4.6.1/tools/libxl/libxl_dom.c	2016-07-09 16:47:05.21200 +0300
+++ xen-4.6.1-new/tools/libxl/libxl_dom.c	2016-07-04 23:31:32.81900 +0300
@@ -287,6 +287,10 @@
                      libxl_defbool_val(info->u.hvm.nested_hvm));
     xc_hvm_param_set(handle, domid, HVM_PARAM_ALTP2M,
                      libxl_defbool_val(info->u.hvm.altp2m));
+    xc_hvm_param_set(handle, domid, HVM_PARAM_SPOOF_XEN,
+                     libxl_defbool_val(info->u.hvm.spoof_xen));
+    xc_hvm_param_set(handle, domid, HVM_PARAM_SPOOF_VIRIDIAN,
+                     libxl_defbool_val(info->u.hvm.spoof_viridian));
 }
 
 int libxl__build_pre(libxl__gc *gc, uint32_t domid,
diff -ur -x .cproject -x .project -x '*.swp' xen-4.6.1/tools/libxl/libxl_types.idl xen-4.6.1-new/tools/libxl/libxl_types.idl
--- xen-4.6.1/tools/libxl/libxl_types.idl	2016-02-09 16:44:19.0 +0200
+++ xen-4.6.1-new/tools/libxl/libxl_types.idl	2016-07-09 16:31:16.18100 +0300
@@ -468,6 +468,8 @@
                                        ("viridian",         libxl_defbool),
                                        ("viridian_enable",  libxl_bitmap),
                                        ("viridian_disable", libxl_bitmap),
+                                       ("spoof_viridian",   libxl_defbool),
+                                       ("spoof_xen",        libxl_defbool),
                                        ("timeoffset",       string),
                                        ("hpet",             libxl_defbool),
                                        ("vpt_align",        libxl_defbool),
diff -ur -x .cproject -x .project -x '*.swp' xen-4.6.1/tools/libxl/xl_cmdimpl.c xen-4.6.1-new/tools/libxl/xl_cmdimpl.c
--- xen-4.6.1/tools/libxl/xl_cmdimpl.c	2016-07-09 16:47:05.02700 +0300
+++ xen-4.6.1-new/tools/libxl/xl_cmdimpl.c	2016-07-04 23:32:38.04600 +0300
@@ -1507,6 +1507,10 @@
         xlu_cfg_get_defbool(config, "hpet", &b_info->u.hvm.hpet, 0);
         xlu_cfg_get_defbool(config, "vpt_align", &b_info->u.hvm.vpt_align, 0);
 
+        xlu_cfg_get_defbool(config, "spoof_xen", &b_info->u.hvm.spoof_xen, 0);
+ 

[qubes-users] Re: SUCCESS: GPU passthrough on Qubes 3.1 (Xen 4.6.1) / Radeon 6950 / Win 7 & Win 8.1 (TUTORIAL + HCL)

2016-07-09 Thread Marcus at WetwareLabs
I've continued experimenting with GTX 980 passthrough on Arch Linux. I 
noticed that xf86-video-nouveau does NOT in fact have Maxwell support. One 
would think otherwise looking at their Feature Matrix here: 
https://nouveau.freedesktop.org/wiki/FeatureMatrix/
(NV110 is the Maxwell family, which includes the GTX 980.) But the modesetting 
driver can be used instead, so *I finally got GTX 980 PT working in Arch Linux*:

Add this as /etc/X11/xorg.conf.d/20-nouveau.conf:
```
Section "Device"
  Identifier "NVidia Card"
  Driver "modesetting"
  BusID "PCI:0:5:0"
EndSection
```

Note that the PCI address is the address the GPU has inside the VM (run lspci 
INSIDE the VM to find it). Also, "pci_msitranslate=0" has to be set in the VM 
configuration, otherwise the VM will hang when X is started.
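
For clarity, the relevant guest-config lines look roughly like this (a sketch; 
the BDFs are the host-side addresses from my setup, adjust them to yours):

```
pci = [ '03:00.0', '03:00.1' ]   # GPU and its HDMI audio function
pci_msitranslate = 0             # without this the VM hangs when X starts
```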

This was tested with Arch Linux (up to date as of 8.7.2016), Linux 4.6.3-1-ARCH, 
the modesetting driver and X.Org 1.18.3.

-

Ok, now that it's proven that newer Nvidia cards CAN in fact be passed through 
in Xen, I tried the official NVidia binary driver, but it failed with error 
message "The NVIDIA GPU at PCI:0:5:0 is not supported by the 367.27 NVIDIA 
driver".

I think that's the proprietary driver refusing to work when it detects that 
it's running under a hypervisor (the Code 43 issue in Windows). KVM has for a 
while supported hiding both the "KVMKVMKVMKVM" signature (with the "-cpu kvm=off" 
flag) and the Viridian hypervisor signature (the "-cpu hv_vendor_id=..." flag), 
but currently there's no such functionality in Xen, so I patched it in quite a 
similar way to what Alex Williamson did for KVM.

Attached is a patch for Xen 4.6.1 that spoofs the Xen signature ("XenVMMXenVMM" 
to "ZenZenZenZen") and the Viridian signature ("Microsoft Hv" to "Wetware Labs") 
when "spoof_xen=1" and "spoof_viridian=1" are added to the VM configuration file.

The signatures are hard-coded, and currently there's no way to modify them 
(beyond re-compiling Xen), since HVMLoader also uses a hard-coded string to 
detect Xen and there's (understandably) no API to change that signature at 
run time.

WARNING! If you try the patch, you MUST also re-compile and install the 
core-libvirt package (in addition to vmm-xen). Otherwise starting any DomU 
will fail! You have been warned :)

-

With this patch, the *NVidia binary driver (version 367.27) also works on Arch 
Linux* :)

However, this was not enough on Windows: the 7 and 8.1 VMs (driver version 
368.39) still report Code 43 :(

I would love it if others could test this as well. Maybe the Windows driver 
uses some other method to check for the hypervisor, or maybe it's not a 
spoofing issue at all.

More investigation coming in..






Re: [qubes-users] GPU Passthrough Question

2016-07-03 Thread Marcus at WetwareLabs


On Saturday, July 2, 2016 at 6:12:57 PM UTC+3, foss-...@isvanmij.nl wrote:
>
> Interesting, I haven't noticed the thread you mentioned. The thread I 
> referenced to was more than a year ago.
>
> So if I read through it quickly, this guy had succeeded in passing through 
> his GTX980, but it went wrong on the driver installation (code 43). This is 
> expected, as nVidia disables the card automatically when a virtualised 
> environment is found. Solution is to hide the virtualisation extensions 
> inside the VM, more info on this matter here:
>
> https://lime-technology.com/forum/index.php?topic=38664.0
>
> Marcus, are you reading?
>
>
Hi, 

passthrough now works with QEMU running in dom0, but as that is inherently 
quite unsafe, we are working on the passthrough issues when QEMU runs in a 
stubdom (a separate "helper VM" only for QEMU), which is the default 
configuration for HVMs created with Qubes VM Manager (see the discussion here: 
https://github.com/QubesOS/qubes-issues/issues/1659 ). 
It's currently still broken, but some progress has been made.

Yes, the issue with Nvidia cards (Code 43) could be related to the driver 
detecting that it's running inside a VM. The link you provided describes a 
solution that's specific to KVM (the -cpu kvm=off flag), and there's not yet 
a way to hide the hypervisor in Xen (AFAIK).  There's also the newer KVM patch 
to spoof the hypervisor vendor id (hv_vendor_id) that supposedly solved the 
remaining problems.  It would be awesome if Xen could have these patches 
ported over from KVM!  My Oculus Rift should arrive in a few weeks, so I'm 
very anxious to get the GTX 980 working before that :)

Note that on many occasions I also had BSODs during boot (and not just Code 
43) when testing with the GTX 980 drivers installed. There were similar issues 
with the Radeon 6950, but the reset patch (see here 
https://groups.google.com/d/msg/qubes-users/zHmaZ3dbus8/4ZfZf6BmCAAJ) 
seemed to solve those, and I haven't had a BSOD since (with the Radeon; I 
haven't tested the reset patch with the Nvidia cards yet).

Best regards,
Marcus


 



[qubes-users] Re: SUCCESS: GPU passthrough on Qubes 3.1 (Xen 4.6.1) / Radeon 6950 / Win 7 & Win 8.1 (TUTORIAL + HCL)

2016-06-26 Thread Marcus at WetwareLabs


On Wednesday, June 22, 2016 at 6:26:50 PM UTC+3, Marcus at WetwareLabs 
wrote:
>
> Hello all,
>
> I've been tinkering with GPU passthrough these couple of weeks and I 
> thought I should now share some of my findings. It's not so much unlike the 
> earlier report on GPU passthrough here (
> https://groups.google.com/forum/#!searchin/qubes-users/passthrough/qubes-users/cmPRMOkxkdA/gIV68O0-CQAJ
> ).
>
> I started with *Nvidia GTX 980*, but I had no luck with ANY of the Xen 
> hypervisors or Qubes versions. Please see my other thread for more 
> information (
> https://groups.google.com/forum/#!searchin/qubes-users/passthrough/qubes-users/PuZLWxhTgM0/pWe7LXI-AgAJ
> ).
>
> However after I switched to *Radeon 6950*, I've had success with all the 
> Xen versions. So I guess it's a thing with Nvidia driver initialization. On 
> a side note, someone should really test this with Nvidia Quadros that are 
> officially supported to be used in VMs. (And of course, there are the hacks 
> to convert older Geforces to Quadros..)
>
> Anyway, here's a quick and most likely incomplete list (for most users) 
> for getting GPU passthrough working on Win 8.1 VM. (works identically on 
> Win7)
>
> Enclosed are the VM configuration file and HCL file for information about 
> my hardware setup (feel free to add this to HW compatibility list!)
>
> TUTORIAL
>
>
>- *Check which PCI addresses correspond to your GPU (and optionally, 
>USB host) with lspci.*
>
> Here's mine:
> ...
> # lspci
> 
> 03:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] 
> Cayman XT [Radeon HD 6970]
> 03:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Cayman/
> Antilles HDMI Audio [Radeon HD 6900 Series]
>
> Note that you have to pass both of these devices if you have similar GPU 
> with dual functionality.
>
>
>- *Edit /etc/default/grub and add following options *(change the pci 
>address if needed*):*
>
> GRUB_CMDLINE_LINUX=" rd.qubes.hide_pci=03:00.0,03:00.1 
> modprobe=xen-pciback.passthrough=1 xen-pciback.permissive"
> GRUB_CMDLINE_XEN_DEFAULT="... dom0_mem=min:1024M dom0_mem=max:4096M"
>
>
> For extra logging:
> GRUB_CMDLINE_XEN_DEFAULT="... apic_verbosity=debug loglvl=all 
> guest_loglvl=all iommu=verbose"
>
>
> There are many other options available, but I didn't see any difference in 
> success rate. See here:
> http://xenbits.xen.org/docs/unstable/misc/xen-command-line.html
> http://wiki.xenproject.org/wiki/Xen_PCI_Passthrough
> http://wiki.xenproject.org/wiki/XenVGAPassthrough
>
>
>- *Update grub:*
>
> # grub2-mkconfig -o /boot/grub2/grub.cfg
>
>
>- *Reboot. Check that VT-t is enabled:*
>
> # xl dmesg
> ...
> (XEN) Intel VT-d iommu 0 supported page sizes: 4kB, 2MB, 1GB.
> (XEN) Intel VT-d iommu 1 supported page sizes: 4kB, 2MB, 1GB.
> (XEN) Intel VT-d Snoop Control not enabled.
> (XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
> (XEN) Intel VT-d Queued Invalidation enabled.
> (XEN) Intel VT-d Interrupt Remapping enabled.
> (XEN) Intel VT-d Shared EPT tables enabled.
> (XEN) I/O virtualisation enabled
> (XEN)  - Dom0 mode: Relaxed
>
>
>- *Check that pci devices are available to be passed:*
>
> # xl pci-assignable list
> :03:00.0
> :03:00.1
>
>
>- *Create disk images:*
>
> # dd if=/dev/zero of=win8.img bs=1M count=3
> # dd if=/dev/zero of=win8-user.img bs=1M count=3
>
>
>- *Install VNC server into Dom0*
>
> # qubes-dom0-update vnc
>
>
>- *Modify the win8.hvm:*
>
>
>-  Check that the disk images and Windows installation CDROM image are 
>   correct, and that the IP address does not conflict with any other VM (I 
>   haven't figured out yet how to set up dhcp)
>   -  Check that 'pci = [  ]' is commented for now
>
>
>- *Start the VM ( -V option runs automatically VNC client)*
>
> # xl create win8.hvm -V
>
>
> If you happen to close the client (but VM is still running), start it 
> again with
> # xl vncviewer win8
>
> Note that I had success starting the VM only as root. Also killing the VM 
> with 'xl destroy win8' would leave the qemu process lingering if not done 
> as root (if that occurs, you have to kill that process manually)
>
>- *Install Windows*
>- *Partition the user image using 'Disk Manager'*
>- *Download signed paravirtualized drivers here* (Qubes PV drivers 
>work only in Win 7):
>
>
> http://apt.univention.de/download/addons/gplpv-drivers/gplpv_Vista2008x64_signed_0.11.0.373.msi
> Don't min

[qubes-users] Re: USB mouse jumping in 3.2 installer

2016-06-26 Thread Marcus at WetwareLabs


On Friday, June 24, 2016 at 11:10:58 AM UTC+3, Salmiakki wrote:
>
> I tried installing 3.2 this morning but in the installer whenever I click 
> the left mouse button the cursor jumps to the left edge of the screen.
>
> I can still click buttons by clicking+holding then moving back to the 
> button and releasing but for now I decided to not continue with the 
> installation.
>
> Any advice on how to debug or ideas what might be wrong?
>

It's this bug: https://bbs.archlinux.org/viewtopic.php?id=204830
 
I just bypassed it by plugging in another mouse, but let us know if those 
tricks (as discussed there) work out for you.




Re: [qubes-users] Re: SUCCESS: GPU passthrough on Qubes 3.1 (Xen 4.6.1) / Radeon 6950 / Win 7 & Win 8.1 (TUTORIAL + HCL)

2016-06-23 Thread Marcus at WetwareLabs


On Thursday, June 23, 2016 at 10:17:59 PM UTC+3, Marek Marczykowski-Górecki 
wrote:
>
> -BEGIN PGP SIGNED MESSAGE- 
> Hash: SHA256 
>
> On Thu, Jun 23, 2016 at 12:07:29PM -0700, 
> '01938'019384'091843'091843'09183'04918'029348'019 wrote: 
> > Hello, 
> > 
> > wow cool. 
> > 
> > 
> > Would this mean, I can in some way (extra manual work) use the full GPU 
> power in a WindowsVM or a LinuxVM, without security issues for the hole 
> QubesOS System? 
> > (Or should I first use this setup on a separate machine or some 
> Qubes-Qubes Dual boot machine). 
>
> I haven't reviewed the instruction details, but it most likely involve 
> running qemu process in dom0, which is a huge security drawback for the 
> whole system. 
>
> - -- 
> Best Regards, 
> Marek Marczykowski-Górecki 
> Invisible Things Lab 
> A: Because it messes up the order in which people normally read text. 
> Q: Why is top-posting such a bad thing? 
> -BEGIN PGP SIGNATURE- 
> Version: GnuPG v2 
>
> iQEcBAEBCAAGBQJXbDXdAAoJENuP0xzK19csZJkH/0eH6sttRaGVL5FWbPrWkEN8 
> BrhB/9WA6fI/c0pVkNAQI0uzZwRlL+yQuKzI6Epi08kQXgO8AK/sUnc8C5l8u+jX 
> 0Gv0fDwG9vEAsmMfCBkAnPun509JUjMonKgxE5KBb4mrz+3/KlLjj40+djRSDxRg 
> vr5U96EMeqDfLr7ikx1CMUSTGAAypQFXE7YyGKW+q9z/6mO3ya7bM7DVZhZEzBy7 
> vbK4Kau27ycpGCgWZ/T7ftQsrLbxC2O6fHHdl9AEeRBWPtiMfKktRa3QfoHwF7wc 
> xWDliQy7bQ3ieAd7n+lfbXd0Nxtu/Kv3UwQVJXOLSYrmc9/YkzMafAzR6rQPd6A= 
> =m+57 
> -END PGP SIGNATURE- 
>

Hi Marek,

you're right, it's using qemu-xen-traditional and qemu is running in dom0, so 
it's inherently more exposed than running VMs in a stub domain.  

In the end, a rigorous risk-vs-benefit analysis should be done on which 
programs are allowed to run there. Personally, I use it only for those few 
applications that I really need (Office, Visual Studio, Atmel Studio, Diptrace) 
and deem "safe". Networking is also disabled by default. Another Windows VM 
(without GPU passthrough) runs with a stubdom, to be used for those occasional 
needs of trying out miscellaneous less-trusted programs that require an 
internet connection.

Continuing on this matter, what is your personal opinion on the security 
of the following scenarios:
- VM running in Dom0 (on Xen)
- VM running in Dom0 (on KVM) (I assume this is the default case, or does 
KVM have its own version of a stubdom?)
- Dual booting Qubes and Windows, without AEM

BTW, I saw you found the culprit for PCI passthrough not working in 
stubdom! (https://github.com/QubesOS/qubes-issues/issues/1659)  Congrats!  
Finally we may be getting closer to making Qubes both secure AND usable 
for the larger masses  :)

Best regards,
Marcus



Re: [qubes-users] SUCCESS: GPU passthrough on Qubes 3.1 (Xen 4.6.1) / Radeon 6950 / Win 7 & Win 8.1 (TUTORIAL + HCL)

2016-06-23 Thread Marcus at WetwareLabs


On Wednesday, June 22, 2016 at 11:33:50 PM UTC+3, Ilpo Järvinen wrote:
>
> Great to hear you got it working! I've done some googling related to 
> techniques you mention below and I want to share some thoughts / 
> information related to them. 
>
> On Wed, 22 Jun 2016, Marcus at WetwareLabs wrote: 
>
> > If you still don't get passthrough working, make sure that it is even 
> > possible with you current hardware. Most of the modern (<3 years old) 
> > working GPU PT installations seem to using KVM (I got even my grumpy 
> NVidia 
> > GTX 980 functional!), so you should at least try creating bare-metal 
> Arch 
> > Linux installion and then following instructions here: 
> > https://bufferoverflow.io/gpu-passthrough/ 
> > or Arch wiki entry here: 
> > https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF 
> > or a series of tutorials here: 
> http://vfio.blogspot.se/2015/05/vfio-gpu-how-to-series-part-1-hardware.html 
> > 
> > 
> > Most of the instructions are KVM specific, but there's lot of great 
> > non-hypervisor specific information there as well, especially in the 
> latter 
> > blog. Note that all the info about VFIO and IOMMU groups can be 
> misleading 
> > since they are KVM specific functionality and not part of Xen (don't ask 
> me 
> > how much time I spent time figuring out why I can't seem to find IOMMU 
> group 
> > entries in /sys/bus/pci/ under Qubes...) 
>
> This contradicts what I've understood about PCI ACS functionality. 
>
> IOMMU groups may be named differently for Xen or not exist (I don't know, 
> it's news to me that they don't exist), but lack of PCI ACS functionality 
> is still a HW thing and according to my understanding the same limit on 
> isolation applies regardless of hypervisor. ACS support relates how well, 
> that is, how fine-grained, those "IOMMU groups" were partitioned. Each 
> different group indicates a boundary were IOMMU is truly able separate 
> PCIe devices and are based on HW limitation not on a hypervisor feature. 
> Unfortunately mostly high-end, server platforms have true support of ACS 
> (some consumer oriented ones support it only inofficially, see 
> drivers/pci/quirks.c for the most current known to support list). 
>

Moi, Ilpo!

And thanks for chiming in. 

Yes, you're right about ACS being a hardware capability. What I've 
understood is that IOMMU groups and VFIO are software features (developed by 
the folks at Red Hat specifically for KVM) in the kernel / hypervisor that in 
turn rely on ACS (but please correct me if I'm wrong). On Arch Linux / KVM I 
checked that the GPU was alone (together with its combined sound device) in 
its own IOMMU group, so passing those two through together should be safe 
(safe as in "no accidental memory access violations via peer-to-peer 
transactions"). However, I'm not sure how this (conforming to the restrictions 
implied by IOMMU groups while passing through) translates into isolation in 
Xen. Is ACS turned on by default, and is the isolation as good as with KVM and 
its IOMMU groups? 
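
(For anyone wanting to repeat that check on the KVM side: the grouping is 
plainly visible in sysfs, e.g.

# find /sys/kernel/iommu_groups/ -type l | sort

which lists every device under the group it belongs to, so you can see whether 
the GPU and its audio function sit alone in their group.)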

In my setup I can see these log entries in the kernel messages:
pci 0000:00:1c.0: Intel PCH root port ACS workaround enabled
pci 0000:00:1c.3: Intel PCH root port ACS workaround enabled

Those devices are the X99-series chipset PCI Express root ports.

And in the kernel's drivers/pci/quirks.c there's an entry also for X99 (along 
with a few other Intel chipsets): 

/*
 * Many Intel PCH root ports do provide ACS-like features to disable peer
 * transactions and validate bus numbers in requests, but do not provide an
 * actual PCIe ACS capability.  This is the list of device IDs known to fall
 * into that category as provided by Intel in Red Hat bugzilla 1037684.
 */
(see http://lxr.free-electrons.com/source/drivers/pci/quirks.c#L3877)

This relates to this patch:
https://patchwork.kernel.org/patch/6312441/

So I guess (for X99) this should be supported from Linux 4.0 onwards.  But I'm 
not certain how well this is actually enforced. I should try to pass through a 
device belonging to a group that has other PCI devices as well and see if it's 
denied.  


> Lack of ACS may not be a big deal to many. But it may limit isolation in 
> some cases, most notably having storage on PCIe slot connected SSDs and 
> GPU passthrough. And passing through more than a single G

[qubes-users] Re: Downgrade Xen / switch to KVM? (for GPU passthrough experimentation)

2016-06-22 Thread Marcus at WetwareLabs
An update on the sluggishness of Win 7 on Xen 4.6.1: disabling MSI translation 
by setting "pci_msitranslate = 0" in the VM config file resolves it. So both 
Win 7 and 8.1 seem to work fine on the newer Qubes OS, and thus there's no 
need to mess with Xen 4.3 :) 



[qubes-users] SUCCESS: GPU passthrough on Qubes 3.1 (Xen 4.6.1) / Radeon 6950 / Win 7 & Win 8.1 (TUTORIAL + HCL)

2016-06-22 Thread Marcus at WetwareLabs
Hello all,

I've been tinkering with GPU passthrough for the past couple of weeks and I 
thought I should now share some of my findings. It's not unlike the earlier 
report on GPU passthrough here (
https://groups.google.com/forum/#!searchin/qubes-users/passthrough/qubes-users/cmPRMOkxkdA/gIV68O0-CQAJ
).

I started with *Nvidia GTX 980*, but I had no luck with ANY of the Xen 
hypervisors or Qubes versions. Please see my other thread for more 
information (
https://groups.google.com/forum/#!searchin/qubes-users/passthrough/qubes-users/PuZLWxhTgM0/pWe7LXI-AgAJ
).

However, after I switched to a *Radeon 6950*, I've had success with all the 
Xen versions. So I guess it's a thing with Nvidia driver initialization. On 
a side note, someone should really test this with the Nvidia Quadros that are 
officially supported for use in VMs. (And of course, there are the hacks 
to convert older Geforces to Quadros..)

Anyway, here's a quick and most likely incomplete list (for most users) for 
getting GPU passthrough working on a Win 8.1 VM (it works identically on Win 7).

Enclosed are the VM configuration file and HCL file for information about 
my hardware setup (feel free to add this to HW compatibility list!)

TUTORIAL


   - *Check which PCI addresses correspond to your GPU (and optionally, USB 
   host) with lspci.*

Here's mine:
...
# lspci

03:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] 
Cayman XT [Radeon HD 6970]
03:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Cayman/Antilles 
HDMI Audio [Radeon HD 6900 Series]

Note that you have to pass both of these devices if you have a similar GPU 
with dual functions (VGA plus HDMI audio).


   - *Edit /etc/default/grub and add following options *(change the pci 
   address if needed*):*

GRUB_CMDLINE_LINUX=" rd.qubes.hide_pci=03:00.0,03:00.1 
modprobe=xen-pciback.passthrough=1 xen-pciback.permissive"
GRUB_CMDLINE_XEN_DEFAULT="... dom0_mem=min:1024M dom0_mem=max:4096M"


For extra logging:
GRUB_CMDLINE_XEN_DEFAULT="... apic_verbosity=debug loglvl=all 
guest_loglvl=all iommu=verbose"


There are many other options available, but I didn't see any difference in 
success rate. See here:
http://xenbits.xen.org/docs/unstable/misc/xen-command-line.html
http://wiki.xenproject.org/wiki/Xen_PCI_Passthrough
http://wiki.xenproject.org/wiki/XenVGAPassthrough


   - *Update grub:*

# grub2-mkconfig -o /boot/grub2/grub.cfg


   - *Reboot. Check that VT-d is enabled:*

# xl dmesg
...
(XEN) Intel VT-d iommu 0 supported page sizes: 4kB, 2MB, 1GB.
(XEN) Intel VT-d iommu 1 supported page sizes: 4kB, 2MB, 1GB.
(XEN) Intel VT-d Snoop Control not enabled.
(XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
(XEN) Intel VT-d Queued Invalidation enabled.
(XEN) Intel VT-d Interrupt Remapping enabled.
(XEN) Intel VT-d Shared EPT tables enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed


   - *Check that pci devices are available to be passed:*

# xl pci-assignable-list
0000:03:00.0
0000:03:00.1


   - *Create disk images:*

# dd if=/dev/zero of=win8.img bs=1M count=3
# dd if=/dev/zero of=win8-user.img bs=1M count=3


   - *Install a VNC viewer into Dom0*

# qubes-dom0-update vnc


   - *Modify the win8.hvm:*


   -  Check that the disk images and the Windows installation CDROM image are 
  correct, and that the IP address does not conflict with any other VM (I 
  haven't figured out yet how to set up DHCP)
  -  Check that 'pci = [  ]' is commented out for now (a rough, illustrative 
  sketch of such a config is shown just below)
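
A hedged sketch of what a minimal win8.hvm could look like (this is NOT the 
enclosed config; the paths, sizes and PCI addresses are illustrative only, and 
the network settings are omitted here):

```
name = "win8"
builder = "hvm"
memory = 4096
vcpus = 4
# qemu runs in dom0 in this setup:
device_model_version = "qemu-xen-traditional"
disk = [ 'file:/var/lib/qubes/appvms/win8/win8.img,hda,w',
         'file:/var/lib/qubes/appvms/win8/win8-user.img,hdb,w',
         'file:/home/user/isos/win8.iso,hdc:cdrom,r' ]
vnc = 1
# leave these commented out until Windows and the PV drivers are installed:
# pci = [ '03:00.0', '03:00.1' ]
# pci_msitranslate = 0
```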
   

   - *Start the VM ( -V option runs automatically VNC client)*

# xl create win8.hvm -V


If you happen to close the client (but VM is still running), start it again 
with
# xl vncviewer win8

Note that I only had success starting the VM as root. Also, killing the VM 
with 'xl destroy win8' would leave the qemu process lingering if not done 
as root (if that occurs, you have to kill that process manually).

   - *Install Windows*
   - *Partition the user image using 'Disk Manager'*
   - *Download signed paravirtualized drivers here* (Qubes PV drivers work 
   only in Win 7):

http://apt.univention.de/download/addons/gplpv-drivers/gplpv_Vista2008x64_signed_0.11.0.373.msi
Don't mind the name, it works on Win 8.1 as well.
For more info: 
http://wiki.univention.com/index.php?title=Installing-signed-GPLPV-drivers


   - *Move the drivers inside user image partition* (shut down VM first):

# losetup   (Check for free loop device)
# losetup -P /dev/loop10 win8-user.img   (Setup loop device and scan 
partition. Assuming loop10 is free)
# mount /dev/loop10p1 /mnt/removable  ( Mount the first partition )
- copy the driver there and unmount.
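
When you're done copying, the cleanup is simply (assuming the same loop device 
as above):
# umount /mnt/removable
# losetup -d /dev/loop10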


   - *Reboot VM, install paravirtual drivers and reboot again*


   - *Create this script inside sys-firewall* (check that the *sys-net* vm 
   ip address 10.137.1.1 is correct though):

fwcfg.sh:
#!/bin/bash
   vmip=$1

iptables -A FORWARD -s $vmip -p udp -d 10.137.1.1   --dport 53 -j ACCEPT
iptables -A FORWARD -s $vmip -p udp -d 10.137.1.254 --dport 53

[qubes-users] Re: Downgrade Xen / switch to KVM? (for GPU passthrough experimentation)

2016-06-20 Thread Marcus at WetwareLabs
Ok, so here are the results of the time-consuming process of patching, 
building, fixing, rebuilding, installing and testing (ad nauseam) various 
builds of Qubes to get GPU passthrough working :)

In a nutshell, I have now tried both Win7 Pro and Win 8.1 with these Xen 
versions:
- 4.6.1
- 4.6.0
- 4.4.4
- 4.4.2
- 4.3.4
- 4.3.2
I started with an *Nvidia GTX 980*, but I had no luck with ANY of the Xen 
hypervisors or Qubes versions. All hang during the Nvidia driver installation 
(or BSOD during boot, if the driver is first extracted and then manually 
installed via Device Manager).  Then I tried an Arch Linux VM: the text-mode 
framebuffer could find and initialize the GTX 980 and survive a few DomU 
boots, but then it would not work again until Dom0 was rebooted. Then I 
tried Xorg, but the Nvidia open-source driver still doesn't seem to support 
the newest GPUs and, frankly, at this point I was so tired that I didn't even 
try the binary blob driver from Nvidia.

Then I switched the GPU to my old *Radeon 6950* (Cayman) and had much 
better success.
- *Xen 4.6.1 & 4.6.0 (Qubes 3.1)*: 
Win 8.1: Passthrough works via manual driver installation. Catalyst 
installation hangs when the devices are being detected.  DomU survives 
multiple boots.
Win 7: Passthrough works, but the whole system is very sluggish. The mouse 
moves quickly but there is about a 1-second lag before windows are redrawn. 
Not very usable.
Arch Linux: The open-source ATI driver works fine. DomU survives multiple boots.
Curiously, Win 8.1 can be booted multiple times without problems with 
passthrough, but after Win 7 or Arch Linux is started even once (after a Dom0 
boot), a BSOD (or Code 43) occurs when booting Win 8.1. So the Win 7 and ATI 
Xorg drivers leave the GPU in a state that the Win 8.1 driver cannot recover 
from. A Dom0 reboot then fixes this so that Win 8.1 can be started again.
Surprisingly, the same occurs with Arch Linux: the DomU survives multiple 
boots, but the GPU state is garbled if a Win 8.1 (or Win 7) VM has been started 
even once. Xorg starts, but half of the screen is messed up.

- *Xen 4.3.4 (Qubes 3.0 patched), 4.4.2, 4.4.4 (Qubes 3.0 vanilla)*:  Win 
8.1 works fine as above. Win 7 works without the sluggishness. The Win 7 DomU 
also survives a few boots, but hopping between Win 7 and 8.1 always causes 
BSODs or Code 43 (device could not be initialised), which won't go away 
until Dom0 is restarted.

For Win 7 HVMs, the 4.3.4 version seemed to work quite nicely, so if 
someone wants to experiment, you can find my adaptation of the 4.3 branch for 
Qubes here: https://github.com/WetwareLabs/qubes-vmm-xen  . Note that this 
version can be built only on top of Qubes 3.0, since 3.1 has already 
progressed a lot and quite a few Qubes-related patches would have to be 
adapted from Xen 4.6 to 4.3. I'm not saying it isn't possible, just very 
time consuming :)

For Win 8.1 HVMs, I suggest using 4.6.0/4.6.1 since they have all the 
newest security updates. But like Marek said above, there's much more 
attack surface available for malware and whatnot, since qemu runs in dom0 
when using the qemu-xen-traditional mode. But again, it's up to the user to 
evaluate the risks and benefits.

I'll write another post later that includes basic tutorial and HCL.

Best regards,
Marcus



[qubes-users] Re: Qubes 3.2 rc1 has been released!

2016-06-19 Thread Marcus at WetwareLabs


On Saturday, June 18, 2016 at 11:49:02 AM UTC+3, Marek Marczykowski-Górecki 
wrote:
>
> -BEGIN PGP SIGNED MESSAGE- 
> Hash: SHA256 
>
> Details here: 
>
> https://www.qubes-os.org/news/2016/06/18/qubes-OS-3-2-rc1-has-been-released/ 
>
> As usual, you can download new image from: 
> https://www.qubes-os.org/downloads/ 
>
> Keep in mind it is only release candidate, so not recommended for daily 
> use. 
>
> - -- 
> Best Regards, 
> Marek Marczykowski-Górecki 
> Invisible Things Lab 
> A: Because it messes up the order in which people normally read text. 
> Q: Why is top-posting such a bad thing? 
> -BEGIN PGP SIGNATURE- 
> Version: GnuPG v2 
>
> iQEcBAEBCAAGBQJXZQr2AAoJENuP0xzK19csn7UH/jCj+lfb6i9FGWXvWZi+2f1j 
> 9Jg+LUNzJmKFtcvUqmzkN75tJ4ErSGPJsOBLZef4b1d0y9xR8Xcv4tfnG09fv+xe 
> lQM+BY0VZ2vWjwyjrKZKyvOA5aDyjA73NFeOW1XFojafl7m3ykef7M2j6cW8eyEz 
> VXq+IetkSFzvGW9yAA3qwxi8QuytbAvih9qPqqqzKLGIPF6bauXxoLNgm4Vqjy37 
> fq91hYD9+/DK3yGN0SlYQv3mojlrKQ+yBSA8S74dRPHeNp/laL/P/zWLWVFKqDng 
> 9e0TZSDAg4igKHlKJ7il9X8A72LnHG3OAIRpAgVTyJ1OwTG3f2KxDxANM7zRgjg= 
> =5mIU 
> -END PGP SIGNATURE- 
>


Hi,

I gave it a quick spin. Here are a few issues that I noticed:
- There's a problem with the mouse cursor jumping left on every click 
(Logitech G600).  It's the same problem discussed here:  
https://bbs.archlinux.org/viewtopic.php?id=204830. And here's the bug 
report. I never had the same problem with previous Qubes versions, so it 
must be the new Xorg version. I could continue the installation by plugging 
in another mouse. The solution is also discussed there in the first link, 
and the problem can be fixed later after installation.
- During package installation there's a notice: "You have specified that 
package 'chrony' should be installed. This package does not exist".
- Error during first boot: "Failed to start “Load kernel modules”". It 
recommends checking "systemctl status systemd-modules-load.service", 
but there's nothing obviously wrong there, and at the end of the listing 
there's "Started Load Kernel Modules."
- When creating VMs there's an error: ['/usr/bin/qvm-prefs' '--force-root' 
'--set' 'sys-firewall' 'netvm' 'sys-net'] failed: stdout: "" stderr: "A VM 
with the name 'sys-firewall' does not exist in the system."
  Nothing related on the console screen though. So it's the same problem 
Mr. "fake" noticed earlier.




[qubes-users] Downgrade Xen / switch to KVM? (for GPU passthrough experimentation)

2016-06-09 Thread Marcus at WetwareLabs
Hello everyone!

What would be the steps for installing and trying out different Xen 
versions in Qubes 3.1? Or even switching to KVM? Shouldn't HAL make this 
possible on Qubes 3.0+ ?

I'm mainly interested in testing the Xen 4.3 branch, since there's anecdotal 
evidence that something might have broken GPU passthrough between Xen 
versions 4.3 and 4.4, and I have not seen any success stories of passthrough 
after 4.3. 
http://www.gossamer-threads.com/lists/xen/users/349649
https://lime-technology.com/forum/index.php?topic=36101.0
https://www.linuxserver.io/index.php/2013/09/12/xen-4-3-windows-8-with-vga-passthrough-on-arch-linux/
The only exception is here:
https://groups.google.com/forum/#!topic/qubes-users/cmPRMOkxkdA
with Qubes 3.0 RC2, but he seems to be using an AMD GPU & CPU whereas I'm on 
Intel and Nvidia.

Personally, I've been trying to get GPU passthrough (as a secondary GPU) 
working for the past two weeks now, without luck. It's always the same 
result: Windows BSODs during the first boot after driver installation. I've 
tried Windows 7 Pro SP1 and Windows 8.1 and both act the same way. I know 
it's not a hardware problem, since GPU passthrough using *KVM on Arch Linux* 
*works without a hiccup*. The same BSOD also happens with Xen on Arch Linux, 
so I know that *it's not restricted to just Qubes*. It's also not the 
well-known "BSOD after 2nd boot" problem, since with KVM I could boot the 
DomU many times flawlessly without any need to reboot Dom0 (to reset 
the GPU).

I've tried out these OS's with stock Xen versions:
Arch Linux, Xen 4.6.1: BSOD on DomU boot
Qubes 3.1, Xen 4.6.0: BSOD on DomU boot
Qubes 3.0 RC 2, Xen 4.4.2: BSOD on DomU boot
Qubes 2.0, Xen 4.1.6:  Sadly BSOD on DomU boot here as well..  

My current HW is:
Intel I7-5820K
Asrock X99 WS
EVGA GTX 980 (passthrough GPU)
Asus Radeon R5 230 (dom0 GPU)

I've also tried the Radeon as the passthrough GPU on Xen 4.6.0 with many 
driver versions (Win 7 Pro), but with the same results.

I would be very interested in hearing what kind of results others have 
achieved!



[qubes-users] Re: Some Progress: Windows 7 (with Qubes Windows Tools) in Qubes OS 3.1 (Full Desktop Mode works)

2016-05-29 Thread Marcus at WetwareLabs
Piit,

thanks for your investigation and nice tutorial!

Have you managed to get a Windows HVM into real full-screen mode without 
installing the Windows Tools GUI agent? Setting *allow_fullscreen=true* in 
*/etc/qubes/guid.conf* does not seem to enable it anyway (it stays grayed out 
in the window bar -> More actions menu). 
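
For reference, this is the kind of fragment I mean; guid.conf uses a 
libconfig-style syntax, and the section names below are from memory, so treat 
it as a sketch rather than a verified config:

```
global: {
  # applies to all VMs
  allow_fullscreen = true;
};

VM: {
  win7: {
    allow_fullscreen = true;
  };
};
```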

I have a dual-monitor setup (1920x1200 on DisplayPort-0 and 1920x1080 on 
DVI-0), but I've managed to install the GUI tools (as you did) only by setting 
the dom0 resolution the same as in Windows and detaching the other monitor. As 
soon as I add the other monitor, Windows won't start, no matter whether 
seamless mode is on or off.   HOWEVER, if I add the 2nd monitor only AFTER 
starting Windows, it works nicely and I can use Windows both in full-screen 
mode (on the 1st or 2nd monitor) and in seamless mode. Also, switching between 
seamless and normal mode from VM Manager is possible in real time :)

Some things I noticed (with only 1 monitor in use):
- Some of the Windows Updates have incompatibilities with QWT. Whether the 
Network Tools are installed or not, DHCP would not work, but manual 
configuration resolves the situation. 
- Seamless mode starts anyway, even if it is not enabled in VM Manager, when 
the VM is booted for the first time after the GUI tools installation. After 
that, it boots as configured in VM Manager.
- Starting in seamless mode is very erratic (about 50% of the time the 
startup fails and a restart is needed).
- Starting without seamless mode works slightly better (it starts in full 
screen by default but works windowed as well), but can also get stuck 
during boot.
- Shutdown from VM Manager does not work (seamless or not), but "shutdown 
/s /t 0" from the Windows command line works even in seamless mode.

After changing between seamless and normal modes and booting a few times, the 
VM would not start anymore (stuck at the Starting Windows screen). Only after 
many restarts alternating between normal and safe mode (which works) would 
normal mode boot again..

Here are attached logs from one of the times it was stuck during boot (one 
monitor):
http://txt.do/5bcf7
http://txt.do/5bcf2
http://txt.do/5bcfv
http://txt.do/5bcfn
http://txt.do/5bcfw
http://txt.do/5bcfb
http://txt.do/5bcf3
http://txt.do/5bcfi

and logs when starting with two monitors (always gets stuck):
http://txt.do/5bcp6 
http://txt.do/5bcpj 
http://txt.do/5bcp4 
http://txt.do/5bcpl 
http://txt.do/5bcpq 
http://txt.do/5bcpc 
http://txt.do/5bcpm 
http://txt.do/5bcph 
http://txt.do/5bcpx 
http://txt.do/5bcpu 
http://txt.do/5bcpf 
http://txt.do/5bcpz 

I noticed there were 2 different log files for qga and 4 for qrexec-wrapper, 
but I deleted all logs in safe mode before starting the VM in normal mode, so 
these logs should be from only one run.

I noticed that the behaviour got more erratic after setting more verbose log 
levels (5), and sometimes it would BSOD as well (SYSTEM_SERVICE_EXCEPTION, 
STOP: 0x003B (...) ). Maybe a race condition? I can also send dump files and 
Xen logs somewhere if you're interested.

Using Q3.1 with stock kernel 4.1.13-9 and latest QWT (3.0.4-1). HW: Intel 
i7-5820K with Asrock X99 WS. GFX card is GTX980 with stock Nouveau driver.




On Wednesday, March 16, 2016 at 12:26:49 AM UTC+2, piitb...@gmail.com wrote:
>
> Hello, 
>
> after spending some more hours trying to figure out what breaks my windows 
> 7 VM as soon as I install the Qubes Windows Tools, I would like to share 
> what I found out so far. 
>
> In short: 
> It seems that the installation of the "Qubes GUI Agent" within the Qubes 
> Windows Tools create some kind of problem on the Windows 7 VM, as this 
> triggers if I can use the VM or not. 
>
> More detailed description to revalidate my experience. 
> (as this is targeted also at newbies, which will find this via Google, 
> while troubleshooting, I have included every step), sorr :-) 
>
>
> Install a Win7 VM in Qubes OS 3.1 
> = 
>
> - Create Windows VM 
>   qvm-create win7 --hvm --label orange 
>
> - Increase initial RAM and max RAM to 4GB (4096) 
>
> - Install Windows from .ISO 
>   qvm-start win7 --cdrom=/home/piit/Downloads/windows_7-Pro-64bit-DE.iso 
>
> - Several reboots (=restarts) until Windows is running 
>
> - Start VM as final test -> GUI available? 
>   qvm-start --debug win7 
>
> - Clone VM to have a working copy if you screw up during the next steps 
> (win7-plain) 
>
> - Install qubes-windows-tools 
>   qubes-dom0-update --enablerepo=qubes*testing qubes-windows-tools 
>   or: 
>   sudo qubes-dom0-update --enablerepo=qubes-dom0-current-testing 
> qubes-windows-tools 
>
> - Disable User Login 
>   Start > cmd.exe (as Administrator) 
>   netplwiz 
>   Uncheck "Users must enter a user name and password to use this computer" 
>   Double check

[qubes-users] Re: "failed to prepare PCI device 07:00.0" error when trying to start netvm

2016-05-29 Thread Marcus at WetwareLabs
Hello,

I can confirm that this bug with VM Manager is still lurking somewhere. 
I've been using Qubes 3.1 for a few days now and suddenly, after a boot, 
sys-net won't start (Error starting VM 'sys-net': PCI device 08:00.0 does 
not exist (domain sys-net)).  Funny thing, I don't even have a PCI device 
with that address..  

In Qubes VM Manager's Devices tab there's 00:19.0 (the Intel network adapter) 
attached, but no 08:00.0. But listing the attached devices with
qvm-pci -l sys-net
shows
['00:19.0' , '08:00.0']
Then
qvm-pci -d sys-net 08:00.0
resolves the situation.

So at least there's still the bug that VM Manager does not list attached 
(but non-existent) devices, and that was now preventing a VM from starting.

I don't know what caused this. The only thing that comes to mind is that 
I had tried the PCI passthrough reset relaxation for the USB VM (explained 
here https://www.qubes-os.org/doc/assigning-devices/ ), but that would not 
work, so I looked at the Xen pages, and their example lists the command
echo 0000:08:00.0 > /sys/bus/pci/drivers/pciback/permissive
instead of 
echo 0000:04:00.0 > /sys/bus/pci/drivers/pciback/permissive
so I modified qubes-pre-netvm.service accordingly.

At first I thought these were some kind of general configuration flags for 
setting permissive mode, but then I realised these are actually just 
example PCI addresses and have to be changed to reflect the address of the 
actual USB controller! :) However, at that point I had already tried setting 
pci_strictreset to false with qvm-prefs and that worked, so I forgot about 
the pre-netvm service until this happened. Interestingly, just disabling 
the problem-causing service was not enough to resolve the situation; I had 
to use qvm-pci manually (only once) to remove the attachment. So somehow 
these settings survive a restart..?


On Sunday, December 7, 2014 at 5:49:07 AM UTC+2, Eric Smith wrote:
>
> Yeah, I am wondering what this means.  I can start appvms but can't start 
> any that depend on netvm.  
> Then, to get more info, I tried a qubes-dom0-update in Konsole.  This 
> time, it told me that:
> "/usr/lib/qubes/unbind-pci-device.sh:  line 47: echo: write error: No such 
> device"
> So I looked at the code and appearent it didn't like what was in the $BIND 
> variable and appearently
> could not find the driver whose name was stored in the $BIND variable 
> appearently.
> This problem started after I had to turn off the computer manually because 
> it froze.  
> I rebooted about 4 times but the problem persists.  I can start the usbvm 
> because it's a Standalone
> vm and does not depend on netvm, but not the others.  I wonder how I could 
> fix this.
>   
>
