On 05/12/2015 12:06 AM, Arne Wiebalck wrote:
Here’s Dan’s answer for the exact procedure (he replied, but it bounced):
We have two clusters with mons behind two DNS aliases:
cephmon.cern.ch: production cluster with five mons A, B, C, D, E
cephmond.cern.ch: testing cluster with five mons X, Y, Z
The procedure was:
1. Stop mon on host
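As an aside on the alias setup above: resolving the alias directly shows which mon IPs a client would pick up at connect time. The snippet below is only an illustration (the alias is the one from the message, and what it returns obviously depends on local DNS); the relevant point for this thread is that anything which stores the resolved IPs instead of the alias keeps pointing at the old mons after a swap.

    import socket

    # Resolve the mon alias to the individual monitor addresses.
    # "cephmon.cern.ch" is the alias mentioned above; the result is whatever
    # DNS returns at that moment, so this is purely illustrative.
    alias = "cephmon.cern.ch"
    name, cnames, addresses = socket.gethostbyname_ex(alias)
    print(f"{alias} resolves to: {addresses}")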
On 05/08/2015 12:41 AM, Arne Wiebalck wrote:
Hi Josh,
In our case adding the monitor hostnames (alias) would have made only a slight difference: as we moved the servers to another cluster, the client received an authorisation failure rather than a connection failure and did not try to fail over to the next IP in the list. So, adding the a
Josh,
Certainly in our case the monitor hosts (in addition to IPs) would have made a difference.
On Thu, May 7, 2015 at 3:21 PM, Josh Durgin wrote:
Hey folks, thanks for filing a bug for this:
https://bugs.launchpad.net/cinder/+bug/1452641
Nova stores the volume connection info in its db, so updating that
would be a workaround to allow restart/migration of vms to work.
Otherwise running vms shouldn't be affected, since they'll notice any ne
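For what it's worth, a rough sketch of what updating that stored info could look like: rewriting the monitor addresses inside the connection_info JSON that nova keeps for each attached volume. The table/column names (block_device_mapping.connection_info) and the data['hosts']/data['ports'] layout reflect how rbd attachments are typically stored, but treat them as assumptions and try it against a copy of the db first; the credentials and new mon addresses below are placeholders.

    import json
    import pymysql  # any MySQL client would do; pymysql is just an example

    # Placeholders: new monitor addresses and db credentials for your setup.
    NEW_MON_HOSTS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
    NEW_MON_PORTS = ["6789"] * len(NEW_MON_HOSTS)

    db = pymysql.connect(host="nova-db", user="nova", password="secret", database="nova")
    with db.cursor() as cur:
        cur.execute("SELECT id, connection_info FROM block_device_mapping "
                    "WHERE deleted = 0 AND connection_info IS NOT NULL")
        for row_id, raw in cur.fetchall():
            info = json.loads(raw)
            if info.get("driver_volume_type") != "rbd":
                continue  # only touch rbd attachments
            data = info.setdefault("data", {})
            data["hosts"] = NEW_MON_HOSTS   # replace the stale mon IPs
            data["ports"] = NEW_MON_PORTS
            cur.execute("UPDATE block_device_mapping SET connection_info = %s "
                        "WHERE id = %s", (json.dumps(info), row_id))
    db.commit()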
Hi Arne,
We've had this EXACT same issue.
I don't know of a way to force an update as you are basically pulling the
rug out from under a running instance. I don't know if it is
possible/feasible to update the virsh xml in place and then migrate to get
it to actually use that data. (I think we tri
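On the virsh xml idea: for an rbd-backed disk the monitor addresses sit as <host> elements under the disk's <source protocol='rbd'> in the domain XML, so updating it in place would mean rewriting those and redefining the domain; the running guest only picks the change up after a restart or migration, which matches the "migrate to get it to actually use that data" idea. Below is a sketch with the libvirt python bindings; the domain name and mon addresses are placeholders and this is untested against a live instance.

    import xml.etree.ElementTree as ET
    import libvirt  # libvirt-python bindings

    NEW_MONS = [("10.0.0.1", "6789"), ("10.0.0.2", "6789"), ("10.0.0.3", "6789")]
    DOMAIN = "instance-00000042"  # placeholder libvirt domain name

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName(DOMAIN)
    root = ET.fromstring(dom.XMLDesc(0))

    for source in root.findall(".//disk/source[@protocol='rbd']"):
        # Drop the stale monitor entries and add the new ones.
        for host in source.findall("host"):
            source.remove(host)
        for name, port in NEW_MONS:
            ET.SubElement(source, "host", {"name": name, "port": port})

    # Redefining only updates the persistent config; the running guest keeps
    # using its old connection until it is restarted or migrated.
    conn.defineXML(ET.tostring(root, encoding="unicode"))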
Hi,
As we swapped a fraction of our Ceph mon servers between the pre-production and production cluster (something we considered to be transparent, as the Ceph config points to the mon alias), we ended up in a situation where VMs with volumes attached were not able to boot (with a probability th