Re: [qubes-users] GPU vs NIC: firmware security

2019-04-27 Thread taii...@gmx.com
On 04/15/2019 12:28 AM, demioben...@gmail.com wrote:
> My laptop (Lenovo P51) works fantastically with QubesOS.
>
> It has two GPUs: Intel integrated graphics and a discrete NVIDIA card.  For 
> gaming, I am interested in pass-through of one (NOT both) to a VM.

Impossible.

Optimus works by muxing the dGPU signal through the iGPU, so there is no
separate output path to hand to a VM; you would only be able to do the same
muxing with an eGPU if you have one set up, and otherwise only the iGPU
drives the display.

>
> I believe that the integrated graphics controls the internal monitor, and 
> that all external monitors are connected to the dedicated graphics card.  Can 
> someone confirm this, and can this be changed?
>
> I will not give another VM control of my primary display, for obvious 
> reasons.  I also consider the VM that I would like to give GPU access to to 
> be highly untrustworthy and potentially compromised, since it will be running 
> untrustworthy games.  My current plan is to give the gaming VM access to one 
> monitor, while I use the other monitor for normal operation of QubesOS.
>
> My main questions are:
>
> * How feasible are firmware attacks on the graphics card,

Expert level: it is not easy to do and still have the device keep working as
a graphics card.

You probably don't have anything that valuable to steal or hack.

I have only heard of hacked NICs, serial cards and other simpler devices, not
GPUs.

Messing around with the option ROM is a lot easier, but you can set the VMM
to not pass that memory region through, so AFAIK it can't be flashed.
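
If you want a baseline to compare against, Linux exposes the option ROM
through sysfs, so you can dump and hash it before and after lending the card
to a VM. A minimal sketch in Python (assumes dom0/root and that the card
exposes a ROM BAR; and keep in mind a truly compromised card could lie about
its own ROM contents when read back this way):

    #!/usr/bin/env python3
    # Sketch: dump a PCI device's option ROM via sysfs and hash it, so it
    # can be diffed against a known-good copy later. Assumes Linux and root.
    import hashlib
    import sys

    BDF = sys.argv[1] if len(sys.argv) > 1 else "0000:01:00.0"  # example BDF
    rom_path = f"/sys/bus/pci/devices/{BDF}/rom"

    # The kernel requires writing "1" to the rom file before it can be read.
    with open(rom_path, "w") as f:
        f.write("1")
    try:
        with open(rom_path, "rb") as f:
            rom = f.read()
        print(BDF, len(rom), "bytes, sha256",
              hashlib.sha256(rom).hexdigest())
    finally:
        with open(rom_path, "w") as f:
            f.write("0")  # disable ROM access again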

> if I choose the NVIDIA card?  I trust that the IOMMU will keep me safe from a 
> compromised card.

Not on a system with black boxes and proprietary firmware: for DRM reasons
the iGPU and dGPU are tightly linked to the ME, and the ME is not subject to
IOMMU controls.

No new x86 hardware is owner controlled, so your libre-firmware IOMMU options
are limited to older x86 hardware from the narrow window between the IOMMU
becoming available and AMD closing up their firmware, or to OpenPOWER (like
the Blackbird/Talos). Unfortunately there aren't many POWER games right now,
so that remains a workstation/server platform.

> but only if the compromise does not persist across reboots.  In the case of
> the integrated graphics, the GPU has no persistent storage, but I am nervous
> about possible compromise of the internal display, which would be fatal.  For
> the dedicated graphics, I am worried that the graphics card’s firmware could
> be overwritten.  Is this possible without PCI configuration space access?
>
> Finally, can NVIDIA cards work with PCI pass-through?

Yeah, but it's far more difficult and finicky than with AMD.

Laptop gaming sucks anyway; just pick up a KCMA-D8, an Opteron 4386
(microcode update required, otherwise a 4284), 32GB of RAM and an RX 590 8GB,
then install coreboot-libre and play games at max settings.

This is a very affordable libre-firmware gaming setup that can play games in
a VM at max settings at 1080p with smooth FPS, as long as they can use all 8
cores - which almost everything new can. Ironically, new titles like GTA5 run
better than old ones, because they max out all 8 cores.

Since you would have PCIe slots to spare, you can also pop in another
single-slot GPU for your primary desktop, since the onboard graphics are weak.

The D8 has dual onboard USB controllers and can be obtained used for $50-100
on fleabay; the 4386 is the best C32 CPU and is $50-100 as well.

You also need at least a 3U (preferably 4U) tower cooler for it. Let me know
if you can't find one and I can help (some Socket F coolers are compatible).



Re: [qubes-users] GPU vs NIC: firmware security

2019-04-16 Thread 'awokd' via qubes-users

demioben...@gmail.com wrote on 4/15/19 4:28 AM:

My laptop (Lenovo P51) works fantastically with QubesOS.

It has two GPUs: Intel integrated graphics and a discrete NVIDIA card.  For 
gaming, I am interested in pass-through of one (NOT both) to a VM.

I believe that the integrated graphics controls the internal monitor, and that 
all external monitors are connected to the dedicated graphics card.  Can 
someone confirm this, and can this be changed?

I will not give another VM control of my primary display, for obvious reasons.  
I also consider the VM that I would like to give GPU access to to be highly 
untrustworthy and potentially compromised, since it will be running 
untrustworthy games.  My current plan is to give the gaming VM access to one 
monitor, while I use the other monitor for normal operation of QubesOS.

My main questions are:

* How feasible are firmware attacks on the graphics card, if I choose the 
NVIDIA card?  I trust that the IOMMU will keep me safe from a compromised card, 
but only if the compromise does not persist across reboots.  In the case of the 
integrated graphics, the GPU has no persistent storage, but I am nervous about 
possible compromise of the internal display, which would be fatal.  For the 
dedicated graphics, I am worried that the graphics card’s firmware could be 
overwritten.  Is this possible without PCI configuration space access?

Finally, can NVIDIA cards work with PCI pass-through?

From what I understand about NVIDIA and Qubes, passing them through is 
not possible. You can limit firmware attacks to some extent by hiding 
devices from Xen/Qubes so it doesn't process them.
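
For reference, the hiding mechanism in Qubes 4, as I understand it, is the
rd.qubes.hide_pci= option on the dom0 kernel command line (e.g.
rd.qubes.hide_pci=01:00.0 in GRUB), which stops dom0 from ever binding a
driver to the device. A small sketch to list what is currently hidden - the
option name is per my reading of the Qubes docs, so verify it on your version:

    #!/usr/bin/env python3
    # Sketch: list PCI devices hidden from dom0 via rd.qubes.hide_pci=.
    # Run in dom0; only reads /proc/cmdline.
    with open("/proc/cmdline") as f:
        args = f.read().split()

    hidden = []
    for arg in args:
        if arg.startswith("rd.qubes.hide_pci="):
            hidden += arg.split("=", 1)[1].split(",")

    if hidden:
        print("Hidden from dom0:", ", ".join(hidden))
    else:
        print("Nothing hidden; dom0 drivers may touch every device.")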




[qubes-users] GPU vs NIC: firmware security

2019-04-14 Thread demiobenour
My laptop (Lenovo P51) works fantastically with QubesOS.

It has two GPUs: Intel integrated graphics and a discrete NVIDIA card.  For 
gaming, I am interested in pass-through of one (NOT both) to a VM.

I believe that the integrated graphics controls the internal monitor, and that 
all external monitors are connected to the dedicated graphics card.  Can 
someone confirm this, and can this be changed?

I will not give another VM control of my primary display, for obvious reasons.  
I also consider the VM that I would like to give GPU access to to be highly 
untrustworthy and potentially compromised, since it will be running 
untrustworthy games.  My current plan is to give the gaming VM access to one 
monitor, while I use the other monitor for normal operation of QubesOS.

My main questions are:

* How feasible are firmware attacks on the graphics card, if I choose the 
NVIDIA card?  I trust that the IOMMU will keep me safe from a compromised card, 
but only if the compromise does not persist across reboots.  In the case of the 
integrated graphics, the GPU has no persistent storage, but I am nervous about 
possible compromise of the internal display, which would be fatal.  For the 
dedicated graphics, I am worried that the graphics card’s firmware could be 
overwritten.  Is this possible without PCI configuration space access?

Finally, can NVIDIA cards work with PCI pass-through?



Re: [qubes-users] GPU Passthrough Status - (Purely a meta-discussion, no specifics)

2018-02-07 Thread Alex Dubois
On Sunday, 17 December 2017 12:16:04 UTC, Tom Zander  wrote:
> On Saturday, 16 December 2017 03:25:46 CET Yuraeitha wrote:
> > Initially, this is all the reasons I can think of for wanting V-GPU.
> ...
> > - Extending a single Qubes machine around the house or company, using
> > multiple of screens, keyboards/mouses or other thinkable means.
> 
> This sounds inherently unsafe.
> Not sure what your use case is, but there has to be a better way than
> allowing a multitude of foreign, not-directly-connected hardware to
> access various very security-sensitive channels.
> 
> ...
> > - Cryptocoin miners who wish to utilize a single machine
> > for all round purposes. 
> 
> To build a proper crypto-mining rig based on GPUs, you would not run an OS
> on the machine. It literally drains money out of your system to use it on
> the same hardware as your main desktop.
> If you install 8 GPUs on a mainboard, you have to realize that the mainboard
> ends up costing a fraction of the total.
> Reusing it for non-mining purposes (while mining) just doesn't make any
> sense, both from an economics as well as a security point of view.

I think it makes sense if you are on a budget. But you do not need GPU
passthrough, you only need the CUDA interface, so I believe it is already
feasible today.

Use the integrated GPU for Dom0 and all your work VMs.
Have a MiningVM with all the other GPUs attached to it.
However, you probably want a KVM switch to distance yourself from your new
radiator and noise generator.
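
As a rough sketch of that setup in dom0 on Qubes 4 (untested; the qube name
and device addresses are made-up examples - find the real ones with qvm-pci
or lspci - and GPUs often need the no-strict-reset opt-out, which slightly
weakens isolation):

    #!/usr/bin/env python3
    # Sketch: attach secondary GPUs to a dedicated mining qube via qvm-pci.
    import subprocess

    MINING_VM = "mining-vm"                  # hypothetical qube name
    GPUS = ["dom0:01_00.0", "dom0:02_00.0"]  # example device addresses

    for dev in GPUS:
        subprocess.run(
            ["qvm-pci", "attach", "--persistent",
             "-o", "no-strict-reset=True", MINING_VM, dev],
            check=True,
        )
        print("attached", dev, "to", MINING_VM)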
> 
> -- 
> Tom Zander
> Blog: https://zander.github.io
> Vlog: https://vimeo.com/channels/tomscryptochannel



Re: [qubes-users] GPU?

2018-01-20 Thread Demi Obenour
Another thought I had was to do binary translation of GPU instructions
and/or Software Fault Isolation a la NaCl.

On Jan 20, 2018 10:29 AM, "Vít Šesták" <
groups-no-private-mail--contact-me-at--contact.v6ak@v6ak.com> wrote:

> When Qubes gets a separate GUIVM, the risks of GUI virtualization could
> become lower, because the GUIVM is expected to be more up-to-date (and thus
> have recent security updates for the drivers) than the current dom0.
>
> GUI virtualization should be optional (so the user can choose a reasonable
> tradeoff). This can actually be good for security, provided that the choice
> is informed. A user who wants to run GPU-intensive tasks will currently
> probably choose Ubuntu (or dualboot) over Qubes, and neither of those is a
> better choice than allowing some risk for some VMs.
>
> Before the GUIVM is implemented, it probably does not make much sense to
> implement GPU virtualization, because it would create additional maintenance
> effort for ITL.
>
> GPU passthrough (which can also be used with some less secure approach to
> GPU virtualization) might be a reasonable addition for some people, but not
> a general solution for all Qubes users, because external monitors are often
> connected to the dedicated GPU*. Not to mention laptops with just one GPU.
> (Those may be more common among Linux and Qubes users.)
>
> I foresee a GPUVM in VM settings (like today's NetVM in VM settings).
>
> Regards,
> Vít Šesták 'v6ak'
>
>
> *) I honestly don't know the reason for that. In the past, I had a laptop
> with three graphical outputs (screen, VGA and HDMI). Since the old
> integrated GPU could drive only two of them, it made sense that one of the
> outputs went through the dedicated card. The last time I checked, however,
> this no longer seems to be a problem: today's Intel CPUs often support
> three displays (quickly verified on Intel ARK for a few random CPUs), while
> today's laptops tend to have just two outputs (internal and HDMI).



Re: [qubes-users] GPU?

2018-01-20 Thread Vít Šesták
When Qubes gets a separate GUIVM, the risks of GUI virtualization could become 
lower, because the GUIVM is expected to be more up-to-date (and thus have 
recent security updates for the drivers) than the current dom0.

GUI virtualization should be optional (so the user can choose a reasonable
tradeoff). This can actually be good for security, provided that the choice is
informed. A user who wants to run GPU-intensive tasks will currently probably
choose Ubuntu (or dualboot) over Qubes, and neither of those is a better
choice than allowing some risk for some VMs.

Before the GUIVM is implemented, it probably does not make much sense to
implement GPU virtualization, because it would create additional maintenance
effort for ITL.

GPU passthrough (which can also be used with some less secure approach to GPU
virtualization) might be a reasonable addition for some people, but not a
general solution for all Qubes users, because external monitors are often
connected to the dedicated GPU*. Not to mention laptops with just one GPU.
(Those may be more common among Linux and Qubes users.)

I foresee a GPUVM in VM settings (like today's NetVM in VM settings).

Regards,
Vít Šesták 'v6ak'


*) I honestly don't know the reason for that. In the past, I had a laptop with
three graphical outputs (screen, VGA and HDMI). Since the old integrated GPU
could drive only two of them, it made sense that one of the outputs went
through the dedicated card. The last time I checked, however, this no longer
seems to be a problem: today's Intel CPUs often support three displays
(quickly verified on Intel ARK for a few random CPUs), while today's laptops
tend to have just two outputs (internal and HDMI).



Re: [qubes-users] GPU?

2018-01-20 Thread Foppe de Haan
On Saturday, January 20, 2018 at 2:53:38 PM UTC+1, Alex Dubois wrote:
> On Saturday, 20 January 2018 09:40:36 UTC, Foppe de Haan  wrote:
> > On Saturday, January 20, 2018 at 9:38:06 AM UTC+1, Alex Dubois wrote:
> > > On Thursday, 18 January 2018 22:56:10 UTC, Tom Zander  wrote:
> > > > On Sunday, 14 January 2018 08:12:24 CET r...@tuta.io wrote:
> > > > Is Qubes able to use the computing power of the GPU, or is the type
> > > > of GPU installed wasted in this regard?
> > > > 
> > > > Relevant here is an email I wrote recently;
> > > > https://groups.google.com/forum/#!msg/qubes-devel/40ImS390sAw/Z7M0E8RiAQAJ
> > > 
> > > I'll reply in that thread about this to stay on topic.
> > > 
> > > But in a few words: not possible to have a trustworthy solution until
> > > GPU virtualization.
> > > 
> > > > 
> > > > The context is a GSoC proposal to modernize the painting
> > > > pipeline of Qubes.
> > > > 
> > > > Today, GL-using software uses [llvmpipe] to compile and render GL
> > > > inside a Qube, completely in software, and then pushes the 2D image
> > > > to dom0. This indeed wastes the GPU.
> > > > 
> > > > 
> > > > [llvmpipe]: 
> > > > https://groups.google.com/forum/#!msg/qubes-devel/40ImS390sAw/Z7M0E8RiAQAJ
> > > > 
> > > > -- 
> > > > Tom Zander
> > > > Blog: https://zander.github.io
> > > > Vlog: https://vimeo.com/channels/tomscryptochannel
> > 
> > Since I am unable to estimate the security aspects of any given approach,
> > and you can, have you seen this approach?
> > https://forum.level1techs.com/t/looking-glass-guides-help-and-support/122387
> 
> I am not a member of the Qubes core team, just an avid user/developer and
> believer :) so my view is only mine...
> The project you mention is doing a great job (for a VMware Workstation type
> set-up), however as far as I understood the copy is from/to the same GPU.
> This is what I am NOT comfortable with. As explained, the client VM would
> issue processing requests to the GPU (and may abuse it).
> 
> However, using their work to copy from one GPU (assigned to ONE VM) to the
> Dom0 GPU could be good. You still have the problem of bandwidth on the bus
> (luckily, depending on your hardware build, the two cards may sit on two
> different PCIe links). You will not get 144Hz, but 60Hz is within reach.
> The temptation to compress the stream will be there, and the decompression
> code will be in the attack surface.

Thanks for looking at it, and your thoughts. :)

To clarify: their idea is indeed to use two GPUs, since SR-IOV support simply
isn't an option for regular users (due to artificial market segmentation), and
according to them, any dom0 GPU that supports PCIe gen3 x4 can handle up to
4k60 at least.
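
The back-of-envelope arithmetic behind that claim, for anyone curious
(assumes uncompressed 32-bit frames and the usual ~3.94 GB/s usable figure
for gen3 x4; real-world overhead will eat into it):

    #!/usr/bin/env python3
    # Can PCIe gen3 x4 move uncompressed frames at a given display mode?
    GB = 1e9
    PCIE_GEN3_X4 = 3.94 * GB  # ~0.985 GB/s per lane after 128b/130b encoding

    def frame_rate_bytes(w, h, fps, bpp=4):
        """Bytes per second needed to copy raw frames of the given mode."""
        return w * h * bpp * fps

    for name, (w, h, fps) in {"1080p60": (1920, 1080, 60),
                              "4k60": (3840, 2160, 60),
                              "4k144": (3840, 2160, 144)}.items():
        rate = frame_rate_bytes(w, h, fps)
        verdict = "fits" if rate < PCIE_GEN3_X4 else "exceeds"
        print(f"{name}: {rate / GB:.2f} GB/s ({verdict} gen3 x4)")

4k60 comes out to about 2 GB/s, so raw copies fit with headroom, which is why
compression (and its attack surface) is avoidable at 60Hz.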



Re: [qubes-users] GPU?

2018-01-20 Thread Alex Dubois
On Saturday, 20 January 2018 09:40:36 UTC, Foppe de Haan  wrote:
> On Saturday, January 20, 2018 at 9:38:06 AM UTC+1, Alex Dubois wrote:
> > On Thursday, 18 January 2018 22:56:10 UTC, Tom Zander  wrote:
> > > On Sunday, 14 January 2018 08:12:24 CET r...@tuta.io wrote:
> > > > Is Qubes able to use the computing power of the GPU, or is the type
> > > > of GPU installed wasted in this regard?
> > > 
> > > Relevant here is an email I wrote recently;
> > > https://groups.google.com/forum/#!msg/qubes-devel/40ImS390sAw/Z7M0E8RiAQAJ
> > 
> > I'll reply in that thread about this to stay on topic.
> > 
> > But in a few words: not possible to have a trustworthy solution until GPU
> > virtualization.
> > 
> > > 
> > > The context is a GSoC proposal to modernize the painting
> > > pipeline of Qubes.
> > > 
> > > Today, GL-using software uses [llvmpipe] to compile and render GL inside
> > > a Qube, completely in software, and then pushes the 2D image to dom0.
> > > This indeed wastes the GPU.
> > > 
> > > 
> > > [llvmpipe]: 
> > > https://groups.google.com/forum/#!msg/qubes-devel/40ImS390sAw/Z7M0E8RiAQAJ
> > > 
> > > -- 
> > > Tom Zander
> > > Blog: https://zander.github.io
> > > Vlog: https://vimeo.com/channels/tomscryptochannel
> 
> Since I am unable to estimate the security aspects of any given approach,
> and you can, have you seen this approach?
> https://forum.level1techs.com/t/looking-glass-guides-help-and-support/122387

I am not a member of the Qubes core team, just an avid user/developer and
believer :) so my view is only mine...
The project you mention is doing a great job (for a VMware Workstation type
set-up), however as far as I understood the copy is from/to the same GPU. This
is what I am NOT comfortable with. As explained, the client VM would issue
processing requests to the GPU (and may abuse it).

However, using their work to copy from one GPU (assigned to ONE VM) to the
Dom0 GPU could be good. You still have the problem of bandwidth on the bus
(luckily, depending on your hardware build, the two cards may sit on two
different PCIe links). You will not get 144Hz, but 60Hz is within reach. The
temptation to compress the stream will be there, and the decompression code
will be in the attack surface.



Re: [qubes-users] GPU?

2018-01-20 Thread 'Tom Zander' via qubes-users
On Saturday, 20 January 2018 10:40:36 CET Foppe de Haan wrote:
> Since I am unable to estimate the security aspects of any given approach,
> and you do, have you seen this approach?
> https://forum.level1techs.com/t/looking-glass-guides-help-and-support/122
> 387

That looks exactly like the approach my (very naive) proposal was thinking 
of; but these guys actually seem to know their GL and went ahead
and did it :)

Their proof-of-concept showing that the result is *faster* (much less 
bandwidth) than the Qubes approach is very exciting.

Thanks for the link!
-- 
Tom Zander
Blog: https://zander.github.io
Vlog: https://vimeo.com/channels/tomscryptochannel




Re: [qubes-users] GPU?

2018-01-20 Thread Foppe de Haan
On Saturday, January 20, 2018 at 9:38:06 AM UTC+1, Alex Dubois wrote:
> On Thursday, 18 January 2018 22:56:10 UTC, Tom Zander  wrote:
> > On Sunday, 14 January 2018 08:12:24 CET r...@tuta.io wrote:
> > > Is Qubes able to use the computing power of the GPU, or is the type of
> > > GPU installed wasted in this regard?
> > 
> > Relevant here is an email I wrote recently;
> > https://groups.google.com/forum/#!msg/qubes-devel/40ImS390sAw/Z7M0E8RiAQAJ
> 
> I'll reply in that thread about this to stay on topic.
> 
> But in a few words: not possible to have a trustworthy solution until GPU
> virtualization.
> 
> > 
> > The context is a GSoC proposal to modernize the painting
> > pipeline of Qubes.
> > 
> > Today, GL-using software uses [llvmpipe] to compile and render GL inside
> > a Qube, completely in software, and then pushes the 2D image to dom0.
> > This indeed wastes the GPU.
> > 
> > 
> > [llvmpipe]: 
> > https://groups.google.com/forum/#!msg/qubes-devel/40ImS390sAw/Z7M0E8RiAQAJ
> > 
> > -- 
> > Tom Zander
> > Blog: https://zander.github.io
> > Vlog: https://vimeo.com/channels/tomscryptochannel

Since I am unable to estimate the security aspects of any given approach, and
you can, have you seen this approach?
https://forum.level1techs.com/t/looking-glass-guides-help-and-support/122387



Re: [qubes-users] GPU?

2018-01-20 Thread Alex Dubois
On Thursday, 18 January 2018 22:56:10 UTC, Tom Zander  wrote:
> On Sunday, 14 January 2018 08:12:24 CET r...@tuta.io wrote:
> > Is Qubes able to use the computing power of the GPU, or is the type of
> > GPU installed wasted in this regard?
> 
> Relevant here is an email I wrote recently;
> https://groups.google.com/forum/#!msg/qubes-devel/40ImS390sAw/Z7M0E8RiAQAJ

I'll reply in that thread about this to stay on topic.

But in a few words: not possible to have a trustworthy solution until GPU
virtualization.

> 
> The context is a GSoC proposal to modernize the painting
> pipeline of Qubes.
> 
> Today, GL-using software uses [llvmpipe] to compile and render GL inside of
> a Qube, completely in software, and then pushes the 2D image to dom0.
> This indeed wastes the GPU.
> 
> 
> [llvmpipe]: 
> https://groups.google.com/forum/#!msg/qubes-devel/40ImS390sAw/Z7M0E8RiAQAJ
> 
> -- 
> Tom Zander
> Blog: https://zander.github.io
> Vlog: https://vimeo.com/channels/tomscryptochannel



Re: [qubes-users] GPU?

2018-01-19 Thread Demi Obenour
I think that Qubes needs 3 things to really take off:

1. It Just Works.  Even on new systems with new hardware.  That means an
up-to-date kernel and drivers.  Probably not an LTS.  It also means getting
UEFI to work out of the box — it doesn't for me.  That also means recent
installers that are aware of the quirks of different kinds of firmware.

2. GPU acceleration.  A big use for Qubes IMO is running games in a
sandboxed environment.  But games need hardware-accelerated graphics.  In
fact, recent games often require dedicated graphics cards to get acceptable
performance.  That means GPU virtualization for ALL GPUs.  Not just Intel
integrated graphics.

And it's not just games.  Firefox’s WebRender makes heavy use of the GPU.
So does Qt 5.  And I suspect Chromium will follow suit.  GPUs are quickly
becoming a requirement, not an option.

I think that the solution is to implement OpenGL on WebGL inside the VMs,
and expose WebGL from GUIVM.  That's what browsers do.

3. Windows support that Just Works.  One should not need to know anything
about Linux or Xen to use Qubes.  Even though they are what Qubes is built
on, they should be implementation details that one need not be familiar
with.

On Jan 18, 2018 5:56 PM, "'Tom Zander' via qubes-users" <
qubes-users@googlegroups.com> wrote:

On Sunday, 14 January 2018 08:12:24 CET r...@tuta.io wrote:
> Is Qubes able to use the computing power of the GPU, or is the type of GPU
> installed wasted in this regard?

Relevant here is an email I wrote recently;
https://groups.google.com/forum/#!msg/qubes-devel/40ImS390sAw/Z7M0E8RiAQAJ

The context is a GSoC proposal to modernize the painting
pipeline of Qubes.

Today, GL-using software uses [llvmpipe] to compile and render GL inside of
a Qube, completely in software, and then pushes the 2D image to dom0.
This indeed wastes the GPU.


[llvmpipe]: https://groups.google.com/forum/#!msg/qubes-devel/40ImS390sAw/Z7M0E8RiAQAJ

--
Tom Zander
Blog: https://zander.github.io
Vlog: https://vimeo.com/channels/tomscryptochannel





Re: [qubes-users] GPU?

2018-01-18 Thread 'Tom Zander' via qubes-users
On Sunday, 14 January 2018 08:12:24 CET r...@tuta.io wrote:
> Is Qubes able to use the computing power of the GPU, or is the type of GPU
> installed wasted in this regard?

Relevant here is an email I wrote recently;
https://groups.google.com/forum/#!msg/qubes-devel/40ImS390sAw/Z7M0E8RiAQAJ

The context is a GSoC proposal to modernize the painting
pipeline of Qubes.

Today, GL-using software uses [llvmpipe] to compile and render GL inside of
a Qube, completely in software, and then pushes the 2D image to dom0.
This indeed wastes the GPU.


[llvmpipe]: 
https://groups.google.com/forum/#!msg/qubes-devel/40ImS390sAw/Z7M0E8RiAQAJ
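
You can see this for yourself from inside a qube by asking GL which renderer
it is using (needs the glxinfo tool from the mesa-utils/mesa-demos package;
the "llvmpipe" marker string is an assumption based on common Mesa output):

    #!/usr/bin/env python3
    # Sketch: report whether this qube is stuck on llvmpipe software
    # rendering, by grepping glxinfo's brief output for the renderer string.
    import subprocess

    out = subprocess.run(["glxinfo", "-B"], capture_output=True,
                         text=True).stdout
    for line in out.splitlines():
        if "OpenGL renderer string" in line:
            print(line.strip())
            if "llvmpipe" in line:
                print("-> software rendering; the GPU is unused")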

-- 
Tom Zander
Blog: https://zander.github.io
Vlog: https://vimeo.com/channels/tomscryptochannel




[qubes-users] GPU?

2018-01-13 Thread Rory
Is Qubes able to use the computing power of the GPU, or is the type of GPU
installed wasted in this regard?



Re: [qubes-users] GPU Passthrough Status - (Purely a meta-discussion, no specifics)

2017-12-17 Thread 'Tom Zander' via qubes-users
On Saturday, 16 December 2017 03:25:46 CET Yuraeitha wrote:
> Initially, this is all the reasons I can think of for wanting V-GPU.
...
> - Extending a single Qubes machine around the house or company, using
> multiple of screens, keyboards/mouses or other thinkable means.

This sounds inherently unsafe.
Not sure what your use case is, but there has to be a better way than
allowing a multitude of foreign, not-directly-connected hardware to
access various very security-sensitive channels.

...
> - Cryptocoin miners who wish to utilize a single machine
> for all round purposes. 

To build a proper crypto-mining rig based on GPUs, you would not run an OS
on the machine. It literally drains money out of your system to use it on
the same hardware as your main desktop.
If you install 8 GPUs on a mainboard, you have to realize that the mainboard
ends up costing a fraction of the total.
Reusing it for non-mining purposes (while mining) just doesn't make any
sense, both from an economics as well as a security point of view.

-- 
Tom Zander
Blog: https://zander.github.io
Vlog: https://vimeo.com/channels/tomscryptochannel



Re: [qubes-users] GPU Passthrough Status - (Purely a meta-discussion, no specifics)

2017-12-17 Thread 'Tom Zander' via qubes-users
On Sunday, 17 December 2017 11:59:26 CET Yuraeitha wrote:
> I'm not a code developer myself, but from what I understand, complex
> software is hard to make secure, compared to well-made hardware minimizing
> use of software. If Qubes hypothetically were to adopt these, would the
> hardware approach be more secure here?

The question isn't really about software vs hardware.
The overall design and concept is what matters more.
The actual approach of how to do this makes or breaks the security model.
From that approach follows which parts are required to be in hardware (to
still be fast and secure).

I claim no expertise in the domain you address in this thread, so apologies 
for the generic answer.
-- 
Tom Zander
Blog: https://zander.github.io
Vlog: https://vimeo.com/channels/tomscryptochannel



Re: [qubes-users] GPU Passthrough Status - (Purely a meta-discussion, no specifics)

2017-12-17 Thread Yuraeitha
On Saturday, December 16, 2017 at 4:47:24 PM UTC+1, awokd wrote:
> On Sat, December 16, 2017 2:25 am, Yuraeitha wrote:
> > Aight, so the idea of this thread is to get an overview of where we
> > stand; that is, how far away are we from achieving GPU Passthrough on
> > Qubes.
> 
> If you look at how the "competition" is approaching it, you need GPU
> hardware capable of virtualization such as Nvidia Grid, Radeon Sky(?),
> Intel GVT-g and hypervisor support.
> 
> https://www.nvidia.com/object/grid-technology.html
> https://www.amd.com/en-us/innovations/software-technologies/sky
> https://01.org/igvt-g
> https://code.vmware.com/article-detail/-/asset_publisher/8n011DnrSCHt/content/vsga-datasheet
> https://docs.citrix.com/content/dam/docs/en-us/xenserver/xenserver-7-0/downloads/xenserver-7-0-configuring-graphics.pdf
> 
> Not something I've ever played with, but it seems kind of like IOMMU to
> me. You could write a software layer to provide slow virtualized GPUs, or
> use hardware for faster ones.
> 
> Of these, it seems like Intel's approach is the most open source friendly.
> XenGT has working code. No idea how hard it would be to integrate with
> Qubes, though.
> 

That's a very interesting perspective, bringing the market movements and
other open source developments into the discussion as well, possibly spotting
work that might fit together with Qubes. The competition also seems to be
getting fiercer as virtual and augmented reality become bigger. That's a very
good direction for the discussion too, I agree. You raise interesting
questions, for example pondering how far these developments can be put
together with Qubes using our current or emerging means.

Between software and hardware controlled IOMMU graphics, maybe the question
for Qubes is which one of them is more secure, though? I'm not a code
developer myself, but from what I understand, complex software is hard to
make secure, compared to well-made hardware minimizing use of software. If
Qubes hypothetically were to adopt these, would the hardware approach be more
secure here? Or maybe one could even use software controlled IOMMU in a less
secure stub-domain, for less important things? Kind of like a Qubes opt-in
feature? I wonder how feasible this would be, but it sounds really attractive
to have user choices like these.

I haven't read through all the links and their interconnected topics yet, but
plan to do that over the next couple of days as I have more time. The ones I
have read were quite interesting already.

> > I must be tired, I initially wrote 'qubestions' instead of 'questions'
> > here... aight, so possible questions for the discussion.
> 
> I like it! Let's rename the FAQ to Frequently Asked Qubestions.

huehue, mistakes when tired (or even when high) can lead to some interesting 
places sometimes :-)



Re: [qubes-users] GPU Passthrough Status - (Purely a meta-discussion, no specifics)

2017-12-16 Thread 'awokd' via qubes-users
On Sat, December 16, 2017 2:25 am, Yuraeitha wrote:
> Aight, so the idea of this thread is to get an overview of where we
> stand; that is, how far away are we from achieving GPU Passthrough on
> Qubes.

If you look at how the "competition" is approaching it, you need GPU
hardware capable of virtualization such as Nvidia Grid, Radeon Sky(?),
Intel GVT-g and hypervisor support.

https://www.nvidia.com/object/grid-technology.html
https://www.amd.com/en-us/innovations/software-technologies/sky
https://01.org/igvt-g
https://code.vmware.com/article-detail/-/asset_publisher/8n011DnrSCHt/content/vsga-datasheet
https://docs.citrix.com/content/dam/docs/en-us/xenserver/xenserver-7-0/downloads/xenserver-7-0-configuring-graphics.pdf

Not something I've ever played with, but it seems kind of like IOMMU to
me. You could write a software layer to provide slow virtualized GPUs, or
use hardware for faster ones.

Of these, it seems like Intel's approach is the most open source friendly.
XenGT has working code. No idea how hard it would be to integrate with
Qubes, though.
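
Tangentially: before looking at any of these vGPU schemes, it's worth
checking that your Xen reports IOMMU-backed passthrough at all. A quick
sketch for dom0 (parses `xl info`; hvm_directio in virt_caps is the flag
that indicates VT-d/AMD-Vi is active, as I understand it):

    #!/usr/bin/env python3
    # Sketch: check whether Xen reports IOMMU passthrough support in dom0.
    import subprocess

    info = subprocess.run(["xl", "info"], capture_output=True,
                          text=True).stdout
    for line in info.splitlines():
        if line.startswith("virt_caps"):
            caps = line.split(":", 1)[1].split()
            print("virt_caps:", " ".join(caps))
            print("IOMMU passthrough:",
                  "available" if "hvm_directio" in caps else "missing")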

> I must be tired, I initially wrote 'qubestions' instead of 'questions'
> here... aight, so possible questions for the discussion.

I like it! Let's rename the FAQ to Frequently Asked Qubestions.



[qubes-users] GPU Passthrough Status - (Purely a meta-discussion, no specifics)

2017-12-15 Thread Yuraeitha
Aight, so the idea of this thread is to get an overview of where we stand;
that is, how far away are we from achieving GPU Passthrough on Qubes.

The underlying reason it's currently not working appears to be that, in its
current state, a virtual GPU for a specific VM would require direct access to
dom0. This is deemed a serious security threat, breaking a central pillar of
what Qubes is all about: isolating dom0 as far as possible. Therefore, from
what I can gather, what we need is a virtual GPU operating from an underlying
DomU stub-domain, preferably one separated from another DomU stub-domain that
holds the important and critical VM data and user operations. Thereby it's
not only about single virtualization anymore, but also about segmenting and
isolating entire groups of VMs into separate stub-domains, so that one group
of VMs is isolated from another. Please correct me if I'm wrong here; it's
great for the discussion to have the most accurate information.

Here is a scenario that stresses the above:
https://groups.google.com/forum/#!topic/qubes-users/cmPRMOkxkdA
Managing to make GPU passthrough work, but only by passing it directly to Xen
instead of libvirt, which in turn exposes dom0.

Initially, these are all the reasons I can think of for wanting V-GPU.
- Heavy graphic design jobs or hobbies (movies, animations, etc.).
- Running Qubes on many screens at a desk.
- Extending a single Qubes machine around the house or company, using multiple
screens, keyboards/mice or other thinkable means.
- Gamers who take security and privacy seriously (there are surprisingly many
of them out there).
- Cryptocoin miners who wish to utilize a single machine for all-round purposes.
- Using a qube as a streaming TV, wanting good graphics for the specific
TV-VM. For example 4k or even 8k+ on multiple screens tied together.

Some of these are exotic and probably not in wide use; others, however, are
quite common. Whichever the case, all of these scenarios share a common
problem. The point here is to underpin the possible use-cases.



I must be tired, I initially wrote 'qubestions' instead of 'questions' here... 
aight, so possible questions for the discussion.

- What would it take for Qubes to obtain stub-domains in a feasible way that
allows safe GPU passthrough?
- Are there other problems that need solving too? If so, which ones?
- What is the grand big-picture status between the above two questions?
- Are there currently any plans for any of these required implementations? For
example Qubes stub-domains in Qubes 4.1? Qubes 5? Or are they still unplanned?
If planned, or partly planned, like only halfway there, then what are these
plans? Please elaborate.
- Other possible questions you can think of.


I'm sure there are aspects I did not think of, but that's fine; after all, this
is a discussion, and this initial post is just to kick it off. The purpose is
to combine the information that a few selected individuals might be sitting on
with the many users who do not know about the current state, thereby building
community awareness of the situation. Whatever you have to say, or ask, about
GPU passthrough, this thread can be used for that! The only limitation is that
it is a discussion, not a place to ask how to get your own specific case of
GPU passthrough to work. It's a general, meta discussion.



Re: [qubes-users] GPU is deal-breaker

2017-08-22 Thread Matty South
On Tuesday, August 22, 2017 at 4:10:57 AM UTC-5, cdga...@gmail.com wrote:
> > > Summary: Deal-breaker probably is down to getting VLC working
> > > properly
> > >
> >
> > Did you try switching the video output?
> > I would start with X11 instead of automatic.
> >
> 
> Not sure what you mean, but have other pressing projects to work on right 
> now. Will look into it further in the future when I have the time available.
> 
> As per above, VLC is make or break for me - but others (e.g. gamers) would
> benefit from detaching the GPU from Dom0 and attaching it to their games
> domain qube, if it meant that OpenGL could then be available to the attached
> qube.

When you have more time to look into this again: it looks like this guy was
able to pass his GPU through to his Windows HVM:
https://www.reddit.com/r/Qubes/comments/66wk4q/gpu_passthrough/

Might be an option for you. You could Skype/VLC in your Windows VM. I use my
WinVM to do SharePoint and MS Office stuff and it works pretty well. Good luck!



Re: [qubes-users] GPU is deal-breaker

2017-08-22 Thread cdgamlin
> > Summary: Deal-breaker probably is down to getting VLC working
> > properly
> >
>
> Did you try switching the video output?
> I would start with X11 instead of automatic.
>

Not sure what you mean, but have other pressing projects to work on right now. 
Will look into it further in the future when I have the time available.

As per above, VLC is make or break for me - but others (e.g. gamers) would
benefit from detaching the GPU from Dom0 and attaching it to their games
domain qube, if it meant that OpenGL could then be available to the attached
qube.



Re: [qubes-users] GPU is deal-breaker

2017-08-22 Thread cdgamlin
> > Summary: Deal-breaker probably is down to getting VLC working
> > properly
> > 
> 
> Did you try switching the video output?
> I would start with X11 instead of automatic.
> 

Not sure what you mean, but have other pressing projects to work on right now. 
Will look into it further in the future when I have the time available.



Re: [qubes-users] GPU is deal-breaker

2017-08-22 Thread Zrubi

On 08/22/2017 10:18 AM, cdgam...@gmail.com wrote:

> Summary: Deal-breaker probably is down to getting VLC working
> properly
> 

Did you try switching the video output?
I would start with X11 instead of automatic.



-- 
Zrubi



Re: [qubes-users] GPU is deal-breaker

2017-08-22 Thread cdgamlin
My laptop specs (if it helps): https://support.hp.com/au-en/document/c03146718

My situation: Don't have funds to get a new computer (for hardware compliance 
or multiple GPUs) or mobile phone (for Skype), and can't use an alternative to 
Skype (not my choice and beyond my control).

Screen-shooting rather than screen-sharing for Skype seems reasonable, as Skype
shouldn't be on Dom0. I think that would solve my issues with Skype (I'd have
to reinstall Qubes+Skype and check it out) - but I am still stuck with VLC
video glitching up, even if the audio keeps playing well.

Summary: Deal-breaker probably is down to getting VLC working properly



Re: [qubes-users] GPU is deal-breaker

2017-08-21 Thread cdgamlin
My situation: Don't have funds to get a new computer (for hardware compliance) 
or mobile phone (for Skype), and can't use an alternative to Skype (not my 
choice and beyond my control)



Re: [qubes-users] GPU is deal-breaker

2017-08-21 Thread Sandy Harris
On Mon, Aug 21, 2017 at 8:54 AM, Matty South  wrote:

> On Monday, August 21, 2017 at 7:14:29 AM UTC-5, Francesco wrote:

>> On Mon, Aug 21, 2017 at 12:38 AM,   wrote:

>> *** TL;DR: Would the option to attach the GPU to a single qube be feasible? 
>> ***

> I can't really speak to the GPU, but for screen sharing with Skype, that will
> not be a possibility on Qubes. Dom0 controls the GUI/desktop and you can't
> (nor would you ever want to) install an insecure MS product on Dom0.

If you have multiple video devices, can you use one for Dom0 and put
another under direct control of a guest OS?



Re: [qubes-users] GPU is deal-breaker

2017-08-21 Thread Matty South
On Monday, August 21, 2017 at 7:14:29 AM UTC-5, Francesco wrote:
> Hello
> 
> On Mon, Aug 21, 2017 at 12:38 AM,   wrote:
> > Hi!
> > 
> > *** TL;DR: Would the option to attach the GPU to a single qube be
> > feasible? ***
> > 
> > Recently tried out Q3.2 and Q4.0-rc1. Pretty happy with most of it, and
> > have some ideas on what might make it better (if those ideas are
> > plausible) - but the GPU seems to be the deal breaker.
> > 
> > On LinuxMint, I like using the VLC video player to watch lectures, using
> > its option to speed up without altering pitch. On both versions of Q,
> > video on VLC behaved badly (often freezing up). Audio was good, so can
> > only think it is a GPU issue.
> 
> This is not normal, probably an issue with your hardware. Look whether your
> computer is on the HCL.
> 
> > I also use Skype a fair bit on LinuxMint, and find the "share screen"
> > mode useful to show stuff. Video on Skype also performed badly on both
> > versions of Q, and "share screen" wouldn't work at all. Again, I can only
> > think this is GPU.
> 
> For the video it is the same as above, but for Skype and VOIP in general I
> find it much more practical to use my cellphone.

I can't really speak to the GPU, but for screen sharing with Skype, that will
not be a possibility on Qubes. Dom0 controls the GUI/desktop and you can't
(nor would you ever want to) install an insecure MS product on Dom0.
For me, I just send screenshots now instead of screen sharing. It's a little
less convenient, but I'm happy to trade a little convenience for security.



Re: [qubes-users] GPU is deal-breaker

2017-08-21 Thread Franz
Hello

On Mon, Aug 21, 2017 at 12:38 AM,  wrote:

> Hi!
>
> *** TL;DR: Would the option to attach the GPU to a single qube be
> feasible? ***
>
> Recently tried out Q3.2 and Q4.0-rc1. Pretty happy with most of it, and
> have some ideas on what might make it better (if those ideas are plausible)
> - but the GPU seems to be the deal breaker.
>
> On LinuxMint, I like using the VLC video player to watch lectures, using its
> option to speed up without altering pitch. On both versions of Q, video on
> VLC behaved badly (often freezing up). Audio was good, so can only think it
> is a GPU issue.
>
>
This is not normal, probably an issue with your hardware. Look whether your
computer is on the HCL.


> I also use Skype a fair bit on LinuxMint, and find the "share screen" mode
> useful to show stuff. Video on Skype also performed badly on both
> versions of Q, and "share screen" wouldn't work at all. Again, I can only
> think this is GPU.
>
>
For the video it is the same as above, but for Skype and VOIP in general I
find it much more practical to use my cellphone.



[qubes-users] GPU is deal-breaker

2017-08-20 Thread cdgamlin
Hi!

*** TL;DR: Would the option to attach the GPU to a single qube be feasible? ***

Recently tried out Q3.2 and Q4.0-rc1. Pretty happy with most of it, and have 
some ideas on what might make it better (if those ideas are plausible) - but 
the GPU seems to be the deal breaker.

On LinuxMint, I like using VLC video player to watch lectures, using its 
option to speed up without altering pitch. On both versions of Q, video on VLC 
behaved badly (often freezing up). Audio was good, so I can only think it is a 
GPU issue.

I also use Skype a fair bit on LinuxMint, and find the "share screen" mode 
useful to show stuff. Video on Skype also performed badly on both versions 
of Q, and "share screen" wouldn't work at all. Again, I can only think this is 
GPU.

While I've read that OpenGL doesn't work on Q (obviously important for gamers 
using Q), I don't know the status of OpenCL - but programming OpenCL on the GPU 
is another use case for me (for scientific computing), which I'm guessing Q 
will also have an issue with.

From what I've read, it seems that Dom0 has the GPU, and won't share it with 
others because that could create a security issue (my guesses: from sharing 
memory, or sharing processes, or both). My suggestion (if feasible) is for 
Dom0 not to use the GPU (CPU-only should make sense, as Dom0 should be as 
lightweight as possible), and to have the GPU as a device that can be deployed 
to any one qube the same way as other devices (such as the microphone) - if 
GPU assignment feasibly works without security issues. Reading that some 
successful experiments have been done with "GPU passthrough" sounds like 
assignment of the GPU to a qube may be feasible (am I right??)
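
For non-GPU devices this kind of per-qube assignment already exists; a rough 
sketch of what I have in mind, reusing the existing qvm-pci tool (the qube 
name and PCI address are just placeholders - whether a GPU survives this is 
exactly the open question):

    # dom0, Qubes 3.2 syntax: attach the PCI device at 00:02.0 to one qube
    qvm-pci -a gaming-qube 00:02.0

    # dom0, Qubes 4.0 syntax (device identifiers use underscores)
    qvm-pci attach --persistent gaming-qube dom0:00_02.0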

Love the Qubes concept. Hoping the GPU issue can be sorted, opening up the OS 
for a wider user base.

Cheers!



[qubes-users] GPU passthrough: 2000 USD bounty

2017-04-21 Thread Stickstoff

Hello everyone,

I would like to be able to do a little gaming on my regular computer
from time to time, for sanity reasons. I use Qubes OS on a dual-GPU
notebook. I don't want to compromise security with unsafe code in dom0,
nor by dual booting. My budget for this is up to 2000 USD.

Options I can think of (ordered by preference):

- put 2000 USD toward a bounty for programming of general (secondary) GPU
passthrough to an app-VM (including consumer Nvidia GPUs)

- replace my computer with an Nvidia Quadro-equipped computer, and put
what's left of the 2000 USD toward a bounty to get ATI and Nvidia Quadro
GPUs working (apparently both are easier to do than consumer Nvidia GPUs)

- buy an additional computer and stream the gaming via VNC or the like
to a Qubes app-VM

- buy and use an additional computer


Gaming on Qubes is a niche and unrelated to its real goal. Still, it
would open new possibilities for running different OSes in VMs with
hardware acceleration, from gaming to graphics rendering to video
editing to scientific computing. It would be a big step towards
one-system-fits-all for the security conscious.
If some universally usable code came out of this, it would make
migration from Windows to "regular" Linux distros much easier for a
lot of people who still need some GPU-dependent Windows functionality.

I understand that 2000 USD is probably too little for a project of
this magnitude. Maybe it's the start of a bounty that becomes big enough
for it.

What do you people think? Or am I overlooking other options?
Kernel 4.10 adds "virtual GPU support" [1]; will that make things
easier?

Cheers,

Stickstoff


[1]
http://news.softpedia.com/news/linux-kernel-4-10-officially-released-with-virtual-gpu-support-many-features-513077.shtml
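
For reference, the kernel 4.10 feature is Intel's GVT-g mediated vGPUs; a
minimal sketch of how a vGPU instance is created on a KVM host (the PCI path
and type name are examples and vary by hardware - whether Xen/Qubes can use
the same mechanism is exactly my question):

    # list the vGPU types the integrated GPU offers (i915 GVT-g enabled)
    ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/

    # create one vGPU instance of a given type, identified by a fresh UUID
    echo $(uuidgen) > \
        /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/create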



Re: [qubes-users] GPU Passthrough Question

2016-07-03 Thread Marcus at WetwareLabs


On Saturday, July 2, 2016 at 6:12:57 PM UTC+3, foss-...@isvanmij.nl wrote:
>
> Interesting, I hadn't noticed the thread you mentioned. The thread I 
> referenced was more than a year old.
>
> So if I read through it quickly, this guy had succeeded in passing through 
> his GTX980, but it went wrong at the driver installation (code 43). This is 
> expected, as Nvidia disables the card automatically when a virtualised 
> environment is detected. The solution is to hide the virtualisation 
> extensions inside the VM; more info on this matter here:
>
> https://lime-technology.com/forum/index.php?topic=38664.0
>
> Marcus, are you reading?
>
>
Hi, 

passing through now works with QEMU running in dom0, but as that is 
inherently quite unsafe, we are working on the passthrough issues when QEMU 
runs in a stubdom (a separate "helper VM" only for QEMU), which is the 
default configuration of HVMs created with Qubes VM Manager (see 
discussion here: https://github.com/QubesOS/qubes-issues/issues/1659). 
Currently it's broken, but some progress has been made.

Yes, the issue with Nvidia cards (code 43) could be related to the driver 
detecting that it's running inside a VM. The link you provided describes a 
solution that's specific to KVM (the -cpu kvm=off flag), and there's not yet 
a way to hide the hypervisor in Xen (AFAIK). There's also a new patch in 
KVM to spoof the hypervisor vendor ID (hv_vendor_id) that has supposedly 
solved the remaining problems. It would be awesome if Xen could have these 
patches ported from KVM! My Oculus Rift should arrive in a few weeks, so I'm 
very anxious to get the GTX980 working before that :)
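
For anyone following along on plain KVM, a sketch of the two knobs mentioned 
above (the vendor-ID string and PCI address are placeholders, and the command 
assumes 01:00.0 is already bound to vfio-pci; disk and install options are 
omitted):

    # hide the KVM signature and spoof the Hyper-V vendor ID so the
    # Nvidia driver doesn't bail out with code 43
    qemu-system-x86_64 -enable-kvm -m 8G \
        -cpu host,kvm=off,hv_vendor_id=123456789ab \
        -device vfio-pci,host=01:00.0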

Note that on many occasions I also had BSODs during boot (and not just code 
43) when testing with the GTX980 drivers installed. There were also similar 
issues with a Radeon 6950, but the reset patch (see here: 
https://groups.google.com/d/msg/qubes-users/zHmaZ3dbus8/4ZfZf6BmCAAJ) 
seemed to solve those, and I haven't had a BSOD since (regarding the Radeon; 
I haven't tested the reset patch with Nvidia cards yet).
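
As an aside, you can check whether a card even advertises function-level 
reset (the capability whose absence the reset patch works around) straight 
from lspci; the address below is just an example:

    # "FLReset+" in DevCap means the GPU supports Function Level Reset;
    # "FLReset-" means it can't be reset cleanly between VM boots
    sudo lspci -vvs 01:00.0 | grep -i flreset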

Best regards,
Marcus


 



Re: [qubes-users] GPU Passthrough Question

2016-07-02 Thread foss-groups
Interesting, I hadn't noticed the thread you mentioned. The thread I
referenced was more than a year old. 

So if I read through it quickly, this guy had succeeded in passing
through his GTX980, but it went wrong at the driver installation (code
43). This is expected, as Nvidia disables the card automatically when a
virtualised environment is detected. The solution is to hide the virtualisation
extensions inside the VM; more info on this matter here: 

https://lime-technology.com/forum/index.php?topic=38664.0 

Marcus, are you reading? 

On 2016-07-02 16:11, Andrew David Wong wrote:

> 
> On 2016-07-02 06:24, foss-gro...@isvanmij.nl wrote: 
> 
>> Hi All,
>> 
>> Is there any update on GPU-passthrough support, since it is on the 
>> roadmap?
>> 
>> I need to use Windows for some tasks heavily relying on GPU power, 
>> but rebooting every time isn't ideal, to say the least.
>> 
>> My system has an Intel Core i5 2400S processor, an Nvidia GTX680 GPU 
>> and a chipset/BIOS supporting VT-d. Now ideally, VT-d just works 
>> out of the box. But the VT-d implementation found on desktop processors
>> is a bit different from the one found on server processors,
>> especially when it comes to the handling of IOMMU groups. AFAIK,
>> Xeon E5 and E7 handle them properly out of the box. But with my
>> processor, I needed a patched VFIO kernel (ACS patch, Intel VGA
>> arbiter patch). I have successfully passed my GPU through to a
>> QEMU VM, but on Xen it doesn't work. Some Xen devs said that this is
>> because QEMU has a lot of device-specific quirks.
>> 
>> However, working around this IOMMU grouping bug imposes a security 
>> risk. That needs to be addressed, of course. But for now I want
>> to get things working and increase my productivity.
>> 
>> I read some guys had success passing through an Nvidia card using 
>> Xen 4.6.1, which is the default in the just-released Fedora 24.
> 
> Is this the thread you're referring to?
> 
> https://groups.google.com/d/topic/qubes-users/zHmaZ3dbus8/discussion
> 
> If so, I'm not aware of more recent news on this topic than what's
> contained in that thread (which, as you can see, is quite recent).
> 
>> Qubes devs, will this version find its way into 3.2?
> 
> I don't think so, but I'll leave it to Marek to answer.
> 
>> Anyway, the other option was to use a method called 
>> "qubes-qvm-traditional", but I couldn't find any information about 
>> it, nor how to configure it.
>> 
>> The information about GPU passthrough on the Xen wiki is outdated.
>> 
>> Any ideas or news on this?
> 
> -- 
> Andrew David Wong (Axon)
> Community Manager, Qubes OS
> https://www.qubes-os.org

  



Re: [qubes-users] GPU Passthrough Question

2016-07-02 Thread Andrew David Wong

On 2016-07-02 06:24, foss-gro...@isvanmij.nl wrote:
> Hi All,
> 
> Is there any update on GPU-passthrough support, since it is on the 
> roadmap?
> 
> I need to use Windows for some tasks heavily relying on GPU power, 
> but rebooting every time isn't ideal, to say the least.
> 
> My system has an Intel Core i5 2400S processor, an Nvidia GTX680 GPU 
> and a chipset/BIOS supporting VT-d. Now ideally, VT-d just works 
> out of the box. But the VT-d implementation found on desktop processors
> is a bit different from the one found on server processors,
> especially when it comes to the handling of IOMMU groups. AFAIK,
> Xeon E5 and E7 handle them properly out of the box. But with my
> processor, I needed a patched VFIO kernel (ACS patch, Intel VGA
> arbiter patch). I have successfully passed my GPU through to a
> QEMU VM, but on Xen it doesn't work. Some Xen devs said that this is
> because QEMU has a lot of device-specific quirks.
> 
> However, working around this IOMMU grouping bug imposes a security 
> risk. That needs to be addressed, of course. But for now I want
> to get things working and increase my productivity.
> 
> I read some guys had success passing through an Nvidia card using 
> Xen 4.6.1, which is the default in the just-released Fedora 24.

Is this the thread you're referring to?

https://groups.google.com/d/topic/qubes-users/zHmaZ3dbus8/discussion

If so, I'm not aware of more recent news on this topic than what's
contained in that thread (which, as you can see, is quite recent).

> Qubes devs, will this version find its way into 3.2?

I don't think so, but I'll leave it to Marek to answer.

> Anyway, the other option was to use a method called 
> "qubes-qvm-traditional", but I couldn't find any information about 
> it, nor how to configure it.
> 
> The information about GPU passthrough on the Xen wiki is outdated.
> 
> Any ideas or news on this?
> 

-- 
Andrew David Wong (Axon)
Community Manager, Qubes OS
https://www.qubes-os.org



[qubes-users] GPU Passthrough Question

2016-07-02 Thread foss-groups
Hi All, 

Is there any update on GPU-passthrough support, since it is on the
roadmap? 

I need to use Windows for some tasks heavily relying on GPU power, but
rebooting every time isn't ideal, to say the least. 

My system has an Intel Core i5 2400S processor, an Nvidia GTX680 GPU and a
chipset/BIOS supporting VT-d. Now ideally, VT-d just works out of the
box. But the VT-d implementation found on desktop processors is a bit different
from the one found on server processors, especially when it comes to the
handling of IOMMU groups. AFAIK, Xeon E5 and E7 handle them properly
out of the box. But with my processor, I needed a patched VFIO kernel
(ACS patch, Intel VGA arbiter patch). I have successfully passed
my GPU through to a QEMU VM, but on Xen it doesn't work. Some Xen devs said
that this is because QEMU has a lot of device-specific quirks. 
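
For anyone who wants to check how their chipset groups devices before
resorting to the ACS patch, a small sketch that walks the standard sysfs
layout and prints every PCI device with its IOMMU group (nothing
Qubes-specific here):

    #!/bin/sh
    # A GPU sharing a group with unrelated devices cannot be passed
    # through in isolation without splitting the group (e.g. ACS patch).
    for dev in /sys/kernel/iommu_groups/*/devices/*; do
        group=${dev#/sys/kernel/iommu_groups/}
        group=${group%%/*}
        printf 'IOMMU group %s\t' "$group"
        lspci -nns "${dev##*/}"
    done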

However, working around this IOMMU grouping bug imposes a security risk.
That needs to be addressed, of course. But for now I want to get things
working and increase my productivity. 

I read some guys had success passing through an Nvidia card using Xen
4.6.1, which is the default in the just-released Fedora 24. Qubes
devs, will this version find its way into 3.2? Anyway, the other option
was to use a method called "qubes-qvm-traditional", but I couldn't find
any information about it, nor how to configure it. 
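
If that name refers to the traditional QEMU device model, the equivalent
knob in a plain Xen domain config would look like the sketch below - that's
an assumption on my part, not documented Qubes behaviour (the PCI address
is a placeholder):

    # xl domain config: select the older device model, which historically
    # handled some passthrough quirks differently than qemu-xen
    builder = "hvm"
    device_model_version = "qemu-xen-traditional"
    pci = [ "01:00.0" ]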

The information about GPU passthrough on the Xen wiki is outdated. 

Any ideas or news on this?

  
