At CERN, we run KVM and Hyper-V. Both work fine.
Depending on the size of your cluster, you may have other factors to consider
such as monitoring and configuration management. We use Puppet to configure
both environments.
Images are tagged with a hypervisor_type property, which the scheduler uses
to place each instance on the matching hypervisor.
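A minimal sketch of that mechanism, assuming the stock ImagePropertiesFilter
(the property name and filter below are the standard upstream ones, not
necessarily our exact setup):

    # Tag the image so the scheduler only places it on Hyper-V nodes
    glance image-update --property hypervisor_type=hyperv <image-id>

    # nova.conf on the scheduler node: ImagePropertiesFilter must be listed
    [DEFAULT]
    scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ImagePropertiesFilter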
That is interesting, Tim.
Why Hyper-V if I may ask? Why not stick just with KVM?
Maish
On 19/03/15 08:22, Tim Bell wrote:
At CERN, we run KVM and Hyper-V. Both work fine.
Depending on the size of your cluster, you may have other factors to
consider such as monitoring and configuration
Hello,
I just resolved an issue where migrating instances with iSCSI volumes would
occasionally fail. There's a bug report here:
https://bugs.launchpad.net/nova/+bug/1423772
The root cause turned out to be libvirt transferring the volume paths
verbatim. For example, take the situation where:
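(hypothetical reconstruction - the portal IPs, IQN, and volume ID below are
invented for illustration) The source host attached the Cinder volume and
recorded the local iSCSI by-path device in the domain XML:

    <disk type='block' device='disk'>
      <source dev='/dev/disk/by-path/ip-10.0.0.5:3260-iscsi-iqn.2010-10.org.openstack:volume-1234abcd-lun-1'/>
      <target dev='vdb' bus='virtio'/>
    </disk>

On the destination host the same volume can appear under a different by-path
name (different portal IP or LUN number), so carrying this XML over verbatim
points the guest at a device node that does not exist there.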
Hey Tim,
Which networking mode do you use with Hyper-V? We wanted to use it with
nova-network HA mode but found it would not work with that configuration.
We are considering Neutron HA-DVR.
-Ben
On Thu, Mar 19, 2015 at 12:59 AM, Tim Bell tim.b...@cern.ch wrote:
We had a Hyper-V based
We're running nova-network flat (with some CERN specifics for legacy network
integration). We're looking at Neutron, but migration will require careful
testing.
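For anyone unfamiliar with that mode, flat nova-network comes down to a few
nova.conf lines. A generic sketch with placeholder interface names, not our
exact production config:

    [DEFAULT]
    network_manager = nova.network.manager.FlatDHCPManager
    flat_network_bridge = br100
    flat_interface = eth1
    public_interface = eth0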
Tim
From: Ben Hines [mailto:bhi...@gmail.com]
Sent: 19 March 2015 09:18
To: Tim Bell
Cc: maishsk+openst...@maishsk.com;
I get what you are saying. That makes sense.
On Thu, Mar 19, 2015 at 12:44 PM, Fox, Kevin M kevin@pnnl.gov wrote:
I don't believe they do, but it's not about that; it's about capacity. To
get the most out of your really expensive Hyper-V Datacenter license, you
should load it up with as many Windows VMs as you can.
Apologies. I was waiting for one more changeset to merge.
Please try oslo.messaging master branch
https://github.com/openstack/oslo.messaging/commits/master/
(you need at least up to Change-Id: I4b729ed1a6ddad2a0e48102852b2ce7d66423eaa).
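One quick way to test it, assuming a pip-managed environment (adjust for
your packaging):

    # Grab master and check that the required change is present
    git clone https://github.com/openstack/oslo.messaging.git
    cd oslo.messaging
    git log | grep I4b729ed1a6ddad2a0e48102852b2ce7d66423eaa

    # Install it into your environment
    pip install -U .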
On 03/19/2015 10:33 AM, Fox, Kevin M wrote:
We're running it both ways. We have clouds with dedicated storage nodes, and
clouds sharing storage/compute.
The storage/compute solution with Ceph is working OK for us. But that
particular cloud is 1-gigabit only and seems very slow compared to our
I would avoid co-locating Ceph and compute processes. Memory on compute
nodes is a scarce resource if you're not running with any overcommit, which
you shouldn't be. Ceph requires a fair amount (2GB per OSD, to be safe) of
guaranteed memory to deal with recovery. You can certainly overload memory
and
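To put rough numbers on it, a back-of-the-envelope sketch (node size and OSD
count are hypothetical; reserved_host_memory_mb is the standard nova knob for
holding memory back from instances):

    # Hypothetical hyper-converged node: 128 GB RAM, 12 OSDs
    #   12 OSDs x 2 GB = 24 GB guaranteed to Ceph for recovery headroom
    #   plus ~4 GB for the host OS and agents
    #   leaves ~100 GB actually available for instances

    # nova.conf on that compute node
    [DEFAULT]
    reserved_host_memory_mb = 28672    # (24 + 4) GB, in MB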
On 3/19/15 10:15 AM, Davanum Srinivas wrote:
Apologies. I was waiting for one more changeset to merge.
Please try oslo.messaging master branch
https://github.com/openstack/oslo.messaging/commits/master/
(you need at least up to Change-Id:
I4b729ed1a6ddad2a0e48102852b2ce7d66423eaa - the Change-Id is
On 3/19/15 9:08 AM, Jared Cook wrote:
Hi, I'm starting to see a number of vendors push hyper-converged
OpenStack solutions where compute and Ceph OSD nodes are one and the
same. In addition, Ceph monitors are placed on OpenStack controller
nodes in these architectures.
Recommendations I have
At the Operators’ midcycle meetup in Philadelphia recently there was a lot of
operator interest[1] in the idea behind this patch:
https://review.openstack.org/#/c/146047/
Operators may want to take note that it merged yesterday. Happy testing!
[1] See bottom of
Hi, I'm starting to see a number of vendors push hyper-converged OpenStack
solutions where compute and Ceph OSD nodes are one and the same. In
addition, Ceph monitors are placed on OpenStack controller nodes in these
architectures.
Recommendations I have read in the past have been to keep these
I was under the impression Hyper-V didn't charge a per-seat license on
non-Windows instances?
On Thu, Mar 19, 2015 at 12:05 PM, Fox, Kevin M kevin@pnnl.gov wrote:
So, in the pets vs cattle cloud philosophy, you want to be able to have
as many cattle as you need, rather than limit the sets
I have been working with dims and sileht on testing this patch in one of
our pre-prod environments. There are still issues with RabbitMQ behind
HAProxy that we are working through. However, in testing, if you are using
a list of hosts you should see significantly better detection and recovery
from faults.
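For reference, the multi-host part is roughly the standard oslo.messaging
rabbit settings of that era; hostnames below are placeholders, and the
section name varies by release ([DEFAULT] historically,
[oslo_messaging_rabbit] from Kilo on):

    rabbit_hosts = rabbit01:5672,rabbit02:5672,rabbit03:5672
    rabbit_retry_interval = 1
    rabbit_retry_backoff = 2
    rabbit_max_retries = 0       # 0 = retry forever
    rabbit_ha_queues = true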
I don't believe they do, but it's not about that; it's about capacity. To get the
most out of your really expensive Hyper-V Datacenter license, you should load it
up with as many Windows VMs as you can. A physical machine can only handle a
fixed number of VMs max. If you put a Linux VM on it,