On 03/22/2016 07:03 PM, John Ferlan wrote:
> On 03/14/2016 03:41 PM, Laine Stump wrote:
> > Suggested by Alex Williamson.
> >
> > If you plan to assign a GPU to a virtual machine, but that GPU happens
> > to be the host system console, you likely want it to start out using
> > the host driver (so that boot messages/etc will be displayed), then
On Tue, 22 Mar 2016 19:04:31 +0100
Andrea Bolognani wrote:
> On Tue, 2016-03-22 at 09:04 -0600, Alex Williamson wrote:
> > > Could this be controlled by a kernel parameter? So you
> > > can just add something like
> > >
> > > vfio-pci.devices=:02:00.0,:03:00.0
> > >
> > > to your bootloader configuration and be sure that the
> > > devices you've listed will
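[The `vfio-pci.devices=` parameter quoted above was a suggestion at the time, not an existing interface; what the upstream vfio-pci module does accept is an `ids=` option keyed on vendor:device ID pairs as reported by `lspci -nn`. A minimal bootloader sketch, with placeholder IDs:]

```
# /etc/default/grub -- vfio-pci.ids takes vendor:device pairs, not PCI
# addresses; 10de:13c2 and 10de:0fbb are placeholders for a GPU and its
# HDMI audio function, taken from `lspci -nn` on the target host
GRUB_CMDLINE_LINUX="... vfio-pci.ids=10de:13c2,10de:0fbb"
```

[Note that because this matches by ID rather than by PCI address, it claims every identical device in the system, which may not be what you want on a multi-GPU host.]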
On Tue, 22 Mar 2016 12:54:12 +0100
Andrea Bolognani wrote:
> On Fri, 2016-03-18 at 11:03 -0600, Alex Williamson wrote:
> > > Anyway, after reading your explanation I'm wondering if we
> > > shouldn't always recommend a setup where devices that are going
> > > to be assigned to guests are just never bound to any host driver,
> > > as that sounds like it would have
On Thu, Mar 17, 2016 at 12:18:49PM -0600, Alex Williamson wrote:
> On Thu, 17 Mar 2016 17:59:53 +
> "Daniel P. Berrange" wrote:
>
> >
> > I don't think it is a significant burden really. Apps which want this
> > blacklisted forever likely want to setup the modprobe
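[The "blacklisted forever" modprobe setup mentioned above can be sketched as a modprobe.d fragment; the IDs are hypothetical placeholders, not values from the thread:]

```
# /etc/modprobe.d/vfio.conf -- illustrative example
# Have vfio-pci claim devices 10de:13c2/10de:0fbb (placeholder IDs)
options vfio-pci ids=10de:13c2,10de:0fbb
# Make sure vfio-pci is loaded before the host GPU driver (here nouveau)
# so it wins the binding race at boot
softdep nouveau pre: vfio-pci
```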
Sorry, apparently missed this reply previously
On Wed, 16 Mar 2016 11:19:38 +0100
Andrea Bolognani wrote:
> On Tue, 2016-03-15 at 13:31 -0600, Alex Williamson wrote:
> > So we have all sorts of driver issues that are sure to come and go over
> > time and all sorts of use
On Thu, Mar 17, 2016 at 05:37:28PM -0400, Laine Stump wrote:
> On 03/17/2016 02:32 PM, Daniel P. Berrange wrote:
> >On Thu, Mar 17, 2016 at 12:18:49PM -0600, Alex Williamson wrote:
> >>On Thu, 17 Mar 2016 17:59:53 +
> >>"Daniel P. Berrange" wrote:
> >>
> >>>I don't think
Hi,
> So what do you do after shutting down the guest? Your host
> ends up having no usable GPU, so you have to access it using
> some other means (e.g. ssh) and reboot it, right?
If I need a console, yes, I have to reboot.
Usually I just start another guest though.
> My point is that if your
Hi,
> For ad-hoc usage such as with desktop virt, then I think users would
> typically want to have PCI devices re-assigned to the host at shutdown
> for the most part.
Unfortunately that tends to not work very well for some kinds of
devices.
I have an explicit "managed=no" in my configs,
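[The `managed='no'` configuration referred to here tells libvirt that the device is already bound to vfio-pci (e.g. detached earlier with `virsh nodedev-detach`, or claimed at boot), so libvirt neither detaches it at guest start nor re-attaches it to the host driver at shutdown. A sketch of such a `<hostdev>` entry, with a placeholder PCI address:]

```
<!-- illustrative only: 0000:02:00.0 is a placeholder address -->
<hostdev mode='subsystem' type='pci' managed='no'>
  <source>
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```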
On Tue, 2016-03-15 at 13:31 -0600, Alex Williamson wrote:
> So we have all sorts of driver issues that are sure to come and go over
> time and all sorts of use cases that seem difficult to predict. If we
> know we're in a ovirt/openstack environment, managed='detach' might
> actually be a more
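[For context, `managed='detach'` is the mode being proposed in this thread, shown here as a sketch of the proposed syntax rather than settled upstream behavior: detach the device from the host driver and bind it to vfio-pci at guest start, as `managed='yes'` does, but skip the re-attach to the host driver when the guest stops. The PCI address is a placeholder:]

```
<!-- proposed syntax from this thread, for illustration -->
<hostdev mode='subsystem' type='pci' managed='detach'>
  <source>
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```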
Suggested by Alex Williamson.
If you plan to assign a GPU to a virtual machine, but that GPU happens
to be the host system console, you likely want it to start out using
the host driver (so that boot messages/etc will be displayed), then
later have the host driver replaced with vfio-pci for
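[The boot-on-host-driver-then-hand-over flow described above can be performed by hand today; the proposed mode would automate it. A hypothetical transcript, where `pci_0000_02_00_0` and `fedora23` are placeholder names:]

```
# Host boots with the GPU on its normal driver, so console messages show.
# Before starting the guest, hand the device to vfio-pci:
virsh nodedev-detach pci_0000_02_00_0

# Start a guest whose XML references the device with managed='no':
virsh start fedora23

# After guest shutdown, deliberately do NOT run `virsh nodedev-reattach`,
# leaving the device on vfio-pci for the next guest run.
```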