Re: [libvirt] [Qemu-devel] Modern CPU models cannot be used with libvirt

2012-03-13 Thread Ayal Baron


- Original Message -
> On 03/12/2012 10:19 PM, Ayal Baron wrote:
> >
> >
> > - Original Message -
> >> On 03/12/2012 02:12 PM, Itamar Heim wrote:
> >>> On 03/12/2012 09:01 PM, Anthony Liguori wrote:
> >>>>
> >>>> It's a trade off. From a RAS perspective, it's helpful to have
> >>>> information about the host available in the guest.
> >>>>
> >>>> If you're already exposing a compatible family, exposing the
> >>>> actual
> >>>> processor seems to be worth the extra effort.
> >>>
> >>> only if the entire cluster is (and will be?) identical cpu.
> >>
> >> At least in my experience, this isn't unusual.
> >
> > I can definitely see places choosing homogeneous hardware and
> > upgrading every few years.
> > Giving them max capabilities for their cluster sounds logical to
> > me.
> > Esp. cloud providers.
> 
> they would get the same performance as from the matching "cpu family".
> the only difference would be if the guest knows the name of the host cpu.
> 
> >
> >>
> >>> or if you don't care about live migration i guess, which could be
> >>> the case for
> >>> clouds, then again, not sure a cloud provider would want to
> >>> expose
> >>> the physical
> >>> cpu to the tenant.
> >>
> >> Depends on the type of cloud you're building, I guess.
> >>
> >
> > Wouldn't this affect a simple startup of a VM with a different CPU
> > (if the motherboard changed as well, that could cause reactivation
> > issues in Windows and fun things like that)?
> 
> that's an interesting question. I have to assume this works, though,
> since we haven't seen issues with changing the cpu family for guests so
> far.
> 

assumption... :)
I'd try changing twice in a row (run VM, stop, change family, restart VM, stop,
change family, restart VM).

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [Qemu-devel] Modern CPU models cannot be used with libvirt

2012-03-12 Thread Ayal Baron


- Original Message -
> On 03/12/2012 02:12 PM, Itamar Heim wrote:
> > On 03/12/2012 09:01 PM, Anthony Liguori wrote:
> >>
> >> It's a trade off. From a RAS perspective, it's helpful to have
> >> information about the host available in the guest.
> >>
> >> If you're already exposing a compatible family, exposing the
> >> actual
> >> processor seems to be worth the extra effort.
> >
> > only if the entire cluster is (and will be?) identical cpu.
> 
> At least in my experience, this isn't unusual.

I can definitely see places choosing homogeneous hardware and upgrading every 
few years. 
Giving them max capabilities for their cluster sounds logical to me.
Esp. cloud providers.

> 
> > or if you don't care about live migration i guess, which could be
> > the case for
> > clouds, then again, not sure a cloud provider would want to expose
> > the physical
> > cpu to the tenant.
> 
> Depends on the type of cloud you're building, I guess.
> 

Wouldn't this affect a simple startup of a VM with a different CPU (if the
motherboard changed as well, that could cause reactivation issues in Windows and
fun things like that)?
Even if the cloud doesn't support live migration, they don't pin VMs to a host.
A user could shut one down and start it up again, and it might run on a
different node.  Your ephemeral storage would be lost, but persistent image
storage could still contain OS info pertinent to the CPU type.
Btw, I don't see why they would not support live migration internally, e.g.
when they need to put a host into maintenance. Live storage migration could
take care of the ephemeral storage if that's the issue (albeit it would take a
million years to finish).

> >>> ovirt allows setting "cpu family" per cluster. assume tomorrow it
> >>> could do it in an even more granular way.
> >>> it could also do it automatically based on a subset of flags on all
> >>> hosts - but would it really make sense to expose a set of
> >>> capabilities which doesn't exist in the real world (which, iiuc, is
> >>> pretty much what the cpu families are aligned with?), and that users
> >>> understand?
> >>
> >> No, I think the lesson we've learned in QEMU (the hard way) is that
> >> exposing a CPU that never existed will cause something to break.
> >> Often, that something is glibc or GCC, which tends to be rather epic
> >> in terms of failure.
> >
> > good to hear - I think this is the important part.
> > so from that perspective, cpu families sound like the right
> > abstraction for the general use case to me.
> > for ovirt, we could improve with smaller/dynamic subsets of migration
> > domains rather than the current clusters,
> > and it sounds like you would want to see "expose host cpu for
> > non-migratable guests, or for identical clusters".
> 
> Would it be possible to have a "best available" option in
> oVirt-engine that
> would assume that all processors are of the same class and fail an
> attempt to
> add something that's an older class?
> 
> I think that most people probably would start with "best available"
> and then
> after adding a node fails, revisit the decision and start lowering
> the minimum
> CPU family (I'm assuming that it's possible to modify the CPU family
> over time).

But then they'd already have VMs that were started with the better CPU - would
it now change under their feet? Or would we start them up with the best and
then fail to start these VMs on the newly added hosts, which have the lower cpu
family/type?
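The "best available" policy being debated boils down to taking the newest family that every host in the cluster supports, and rejecting (or downgrading for) hosts below it. A rough sketch in Python - the function names and the family ordering are illustrative, not oVirt-engine's actual list or API:

```python
# Families ordered oldest to newest; names are illustrative, not
# necessarily the exact set oVirt ships.
FAMILIES = ["Conroe", "Penryn", "Nehalem", "Westmere", "SandyBridge"]

def best_available(host_families):
    """Return the newest family that every host in the cluster supports."""
    if not host_families:
        raise ValueError("cluster has no hosts")
    # The oldest family present is the best the whole cluster can offer.
    return min(host_families, key=FAMILIES.index)

def can_add_host(cluster_family, host_family):
    """A host older than the cluster's current family must be rejected
    (or the cluster family lowered, which affects already-running VMs)."""
    return FAMILIES.index(host_family) >= FAMILIES.index(cluster_family)
```

Lowering the family after VMs were started with the better CPU is exactly the "change under their feet" problem: existing VMs either keep the old model and cannot start on the weaker host, or see a different CPU on next boot.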

> 
> From a QEMU perspective, I think that means having per-family CPU
> options and then Alex's '-cpu best'.  But presumably it's also
> necessary to be able to figure out in virsh capabilities what '-cpu
> best' would be.
> 
> Regards,
> 
> Anthony Liguori
> 
> > ___
> > Arch mailing list
> > a...@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/arch
> >
> 

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] RFC [3/3]: Lock manager usage scenarios

2010-09-16 Thread Ayal Baron

- "Daniel P. Berrange"  wrote:

> On Tue, Sep 14, 2010 at 05:03:21PM -0400, Ayal Baron wrote:
> > 
> > - "Daniel P. Berrange"  wrote:
> > 
> > > 
> > > That is probably possible with the current security driver
> > > implementations, but more generally I think it will still hit some
> > > trouble. Specifically, one of the items on our todo list is a new
> > > security driver that makes use of Linux container namespace
> > > functionality to isolate the VMs, so they can't even see other
> > > resources / processes on the host. This may well prevent the sync
> > > manager wrapper talking to a central sync manager process.
> > > The general rule we aim for is that once libvirtd has spawned a VM,
> > > they are completely isolated with the exception of any disks marked
> > > with 
> > > In other words, any communications channels must be
> > > initiated/established by the mgmt layer to the VM process, with
> > > nothing to be established in the reverse direction.
> > Correct me if I'm wrong, but the security limitations (selinux
> > context) would only take effect after the "exec", no? So the process
> > could still communicate with the daemon, open an FD and then exec.
> > After exec, the VM would be locked down but the daemon could still
> > wait on the FD to see whether the VM has died.
> 
> It depends on which exec you are talking about here. If the comms to
> the daemon are done straight from the libvirtd plugin, then it would
> still be unrestricted. If the comms were done from the supervisor
> process, it would be restricted.
> 
> Daniel
I'm talking about the supervisor.  You said you spoke to Dan Walsh and that the 
supervisor and qemu processes could get different contexts.  Now you're saying 
the supervisor would be restricted nonetheless.  What am I missing?

> -- 
> |: Red Hat, Engineering, London    -o-   http://people.redhat.com/berrange/ :|
> |: http://libvirt.org -o- http://virt-manager.org -o- http://deltacloud.org :|
> |: http://autobuild.org        -o-         http://search.cpan.org/~danberr/ :|
> |: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] RFC [3/3]: Lock manager usage scenarios

2010-09-14 Thread Ayal Baron

- "Daniel P. Berrange"  wrote:

> On Mon, Sep 13, 2010 at 03:49:38PM +0200, Saggi Mizrahi wrote:
> > On Mon, 2010-09-13 at 14:29 +0100, Daniel P. Berrange wrote:
> > > On Mon, Sep 13, 2010 at 03:20:13PM +0200, Saggi Mizrahi wrote:
> > > > On Mon, 2010-09-13 at 13:35 +0100, Daniel P. Berrange wrote:
> > > > > > 
> > > > > > Overall, this looks workable to me.  As proposed, this
> assumes a 1:1 
> > > > > > relation between LockManager process and managed VMs.  But I
> guess you 
> > > > > > can still have a central manager process that manages all
> the VMs, by 
> > > > > > having the lock manager plugin spawn a simple shim process
> that does all 
> > > > > > the communication with the central lock manager.
> > > > > 
> > > > > I could have designed it such that it didn't assume the
> > > > > presence of an angel process around each VM, but I think it is
> > > > > easier to be able to presume that there is one. It can be an
> > > > > incredibly thin stub if desired, so I don't think it'll be too
> > > > > onerous on implementations.
> > > > 
> > > > We are looking into the possibility of not having a process
> > > > manage a VM but rather having the sync_manager process register
> > > > with a central daemon and exec into qemu (or anything else), so
> > > > assuming there is a process per VM is essentially false. But the
> > > > verb could be used for "unregistering" the current instance with
> > > > the main manager, so the verb does have its use.
> > > > 
> > > > Furthermore, even if we decide to leave the current
> > > > 'sync_manager process per child process' system as is for now,
> > > > the general direction is a central daemon per host for managing
> > > > all the leases and guarding all processes. So be sure to keep
> > > > that in mind while assembling the API.
> > > 
> > > Having a single daemon per host that exec's the VMs is explicitly
> > > *not* something we intend to support, because the QEMU process
> > > needs to inherit its process execution state from libvirtd. It is
> > > fundamental to the security architecture that processes are
> > > completely isolated the moment that libvirtd has spawned them. We
> > > don't want to offload all the security driver setup into a central
> > > lock manager daemon. Aside from this, we also pass open file
> > > descriptors down from libvirtd to the QEMU process.
> > My explanation might have been confusing or ill-phrased. I'll try
> > again. The suggestion was:
> > instead of libvirt running sync_manager, which forks off and runs
> > qemu, libvirt would run a sync_manager wrapper that registers with
> > the central daemon, waits for it to acquire leases, and then execs
> > into qemu (in process). From that moment the central daemon monitors
> > the process and frees its leases when it quits.
> > This way we still keep all the context stuff from libvirt and have
> > only 1 process managing the leases.
> > But, as I said, this is only a suggestion and is still in very early
> > stages. We might not implement it in the initial version and leave
> > the current forking method.
> 
> That is probably possible with the current security driver
> implementations, but more generally I think it will still hit some
> trouble. Specifically, one of the items on our todo list is a new
> security driver that makes use of Linux container namespace
> functionality to isolate the VMs, so they can't even see other
> resources / processes on the host. This may well prevent the sync
> manager wrapper talking to a central sync manager process.
> The general rule we aim for is that once libvirtd has spawned a VM,
> they are completely isolated with the exception of any disks marked
> with 
> In other words, any communications channels must be
> initiated/established by the mgmt layer to the VM process, with
> nothing to be established in the reverse direction.
Correct me if I'm wrong, but the security limitations (selinux context) would
only take effect after the "exec", no? So the process could still communicate
with the daemon, open an FD and then exec.  After exec, the VM would be locked
down but the daemon could still wait on the FD to see whether the VM has died.
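The mechanism described here - open an FD to the daemon before exec, let the daemon treat EOF on that FD as the process dying - can be sketched as follows. This is only a minimal illustration of FD-survives-exec behaviour (using a socketpair and /bin/true as a stand-in for qemu), not sync_manager's actual protocol; the SELinux transition happens at exec time, so the FD is opened under the pre-exec context yet remains open afterwards:

```python
import os
import socket

def watch_via_inherited_fd(argv):
    """Fork a child that execs argv while holding one end of a socketpair.
    exec replaces the process image, but the open FD carries across; when
    the exec'ed program exits, the FD is closed and the watcher sees EOF."""
    watcher_end, child_end = socket.socketpair()
    child_end.set_inheritable(True)  # Python FDs are close-on-exec by default
    pid = os.fork()
    if pid == 0:
        watcher_end.close()
        try:
            os.execvp(argv[0], argv)  # child_end stays open across exec
        finally:
            os._exit(127)             # only reached if exec fails
    child_end.close()                  # watcher keeps only its own end
    os.waitpid(pid, 0)
    return watcher_end.recv(1)         # b'' == EOF: the process has died
```

For example, `watch_via_inherited_fd(["/bin/true"])` returns `b''` once true exits, which is the cue a central daemon would use to free the leases.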

> 
> Daniel
> -- 
> |: Red Hat, Engineering, London    -o-   http://people.redhat.com/berrange/ :|
> |: http://libvirt.org -o- http://virt-manager.org -o- http://deltacloud.org :|
> |: http://autobuild.org        -o-         http://search.cpan.org/~danberr/ :|
> |: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|
> 

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list