[ovirt-users] Disaster recovery failed with direct lun 4.4.10

2023-01-21 Thread jaime luque
Good evening,

I am configuring disaster recovery with the following environment:

Ovirt 4.4.10

Site A

An external manager
A hypervisor server

Site B

An external manager
A hypervisor server

The volumes are being replicated from one storage array to the other, under the conditions
mentioned in the active-passive guide.

In the file disaster_recovery_vars.yml the mapping of the virtual machines is
defined; one of these VMs has direct LUNs attached.

When I execute the failover, the virtual machines are not imported at the
secondary site.

If I comment out the mapping of the direct LUNs, it is possible to import the
virtual machines and they keep working correctly.

NOTE: The direct LUN attached to the VM has no filesystem or partitions yet, since I am
still in the testing phase.

Can you please give me ideas of what to check?
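
A first sanity check, assuming the standard dr_lun_mappings section that the
ovirt.ovirt disaster_recovery role writes into disaster_recovery_vars.yml (the
key name, playbook name, tag and paths below are assumptions taken from the
upstream DR examples; adjust them to your generated files):

# 1. Confirm which secondary-side LUN IDs the mapping file expects.
grep -A8 'dr_lun_mappings' disaster_recovery_vars.yml

# 2. From the secondary hypervisor, confirm those targets/LUNs are reachable;
#    if the secondary host cannot see a mapped LUN, the import of the VM that
#    uses it is expected to fail while VMs without direct LUNs import fine.
iscsiadm -m discovery -t sendtargets -p <secondary-portal-ip>:3260
multipath -ll

# 3. Re-run the failover verbosely and keep the log to find the import error.
ansible-playbook dr_failover.yml --tags fail_over -vvv | tee /tmp/dr_failover.log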


[ovirt-users] Disaster Recovery using Ansible roles

2021-01-14 Thread Henry lol
Hi,

I have 2 questions about Disaster Recovery with Ansible.

1)
Regarding Ansible failover: AFAIK, a mapping file must be generated before
running a failover.
Then, should I periodically generate a new mapping file to reflect the
latest structure of the environment?
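
One way to keep it current, sketched on the assumption that the mapping file is
produced by the generate step of the ovirt.ovirt disaster_recovery role (the
playbook name, paths and schedule below are illustrative, not the role's
documented interface):

# Example cron entry on the Ansible machine: regenerate the mapping file
# nightly so it tracks the current primary-site layout, keeping the previous
# copy around for comparison.
0 2 * * * cd /opt/ovirt-dr && cp -f disaster_recovery_vars.yml disaster_recovery_vars.yml.prev && ansible-playbook dr_generate.yml >> /var/log/ovirt-dr-generate.log 2>&1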

2)
I guess Ansible failover uses OVF_STOREs to restore the environment, and
OVF_STOREs in storage domains are updated every 1 min by default.
Then, how can the changes made in the meantime be recovered?
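
For reference, the OVF_STORE update interval is exposed as an engine-config
option, so it can be checked and tightened if the default window is too wide
for your RPO (key name as I recall it; verify with engine-config -l on your
engine):

# On the engine machine: show, then lower, the OVF update interval (in minutes).
engine-config -g OvfUpdateIntervalInMinutes
engine-config -s OvfUpdateIntervalInMinutes=15
systemctl restart ovirt-engine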


[ovirt-users] Disaster recovery active-active question

2019-09-17 Thread wodel youchi
Hi,

Regarding active-active DR, the documentation says:
"You require replicated storage that is writeable on both sites to allow
virtual machines to migrate between sites and continue running on the
site’s storage."

I know that it's not a direct oVirt question, but if someone has
implemented this ...

Which type of storage can offer both real-time synchronization and
read/write access on both ends of the replicated volumes?


Regards.


[ovirt-users] disaster recovery, connecting to existing storage domains from new engine

2017-06-05 Thread Thomas Wakefield
Is it possible to force a connection to existing storage domains from a new
engine? I have a cluster on which I can't get the web console to restart, but
everything else is working. So I want to know if it's possible to just remount
those storage domains on the new engine?

Also, how do I clear an export domain that complains that it's still attached
to another pool? I ask because I could also just restore most of my VMs from the
export domain, but it is still locked to the old engine.

Thanks!
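
A commonly cited manual workaround for the second question, assuming an NFS
export domain and treating it as an unsupported, back-up-first operation (the
mount point and UUID below are placeholders):

# Detach the export domain from its old data center by hand: blank the pool
# reference and drop the checksum line in the domain metadata, after which the
# new engine should accept it via Import Domain.
cd /path/to/export-mount/<export-domain-uuid>/dom_md
cp metadata metadata.bak
sed -i -e 's/^POOL_UUID=.*/POOL_UUID=/' -e '/^_SHA_CKSUM=/d' metadata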



Re: [ovirt-users] Disaster Recovery Testing

2017-02-16 Thread Fred Rolland
Gary,

See this wiki page; it explains how to import storage domains:
http://www.ovirt.org/develop/release-management/features/storage/importstoragedomain/

Regards,

Fred

On Wed, Feb 15, 2017 at 10:15 PM, Nir Soffer  wrote:

> On Wed, Feb 15, 2017 at 9:30 PM, Gary Lloyd  wrote:
> > Hi Nir thanks for the guidance
> >
> > We started to use ovirt a good few years ago now (version 3.2).
> >
> > At the time iscsi multipath wasn't supported, so we made our own
> > modifications to vdsm and this worked well with direct lun.
> > We decided to go with direct lun in case things didn't work out with
> OVirt
> > and in that case we would go back to using vanilla kvm / virt-manager.
> >
> > At the time I don't believe that you could import iscsi data domains that
> > had already been configured into a different installation, so we
> replicated
> > each raw VM volume using the SAN to another server room for DR purposes.
> > We use Dell Equallogic and there is a documented limitation of 1024 iscsi
> > connections and 256 volume replications. This isn't a problem at the
> moment,
> > but the more VMs that we have the more conscious I am about us reaching
> > those limits (we have around 300 VMs at the moment and we have a vdsm
> hook
> > that closes off iscsi connections if a vm is migrated /powered off).
> >
> > Moving to storage domains keeps the number of iscsi connections /
> replicated
> > volumes down and we won't need to make custom changes to vdsm when we
> > upgrade.
> > We can then use the SAN to replicate the storage domains to another data
> > centre and bring that online with a different install of OVirt (we will
> have
> > to use these arrays for at least the next 3 years).
> >
> > I didn't realise that each storage domain contained the configuration
> > details/metadata for the VMs.
> > This to me is an extra win as we can recover VMs faster than we can now
> if
> > we have to move them to a different data centre in the event of a
> disaster.
> >
> >
> > Are there any maximum size / vm limits or recommendations for each
> storage
> > domain ?
>
> The recommended limit in RHEL 6 was 350 LVs per storage domain. We believe this
> limit is not correct for RHEL 7 and recent oVirt versions. We are currently
> testing 1000 LVs per storage domain, but we have not finished testing yet, so I
> cannot say what the recommended limit is.
>
> A preallocated disk has one LV; if you have a thin disk, you have one LV
> per snapshot.
>
> There is no practical limit to the size of a storage domain.
>
> > Does oVirt support moving VMs between different storage domain types, e.g.
> > iSCSI to Gluster?
>
> Sure, you can move vm disks from any storage domain to any storage domain
> (except ceph).
>
> >
> >
> > Many Thanks
> >
> > Gary Lloyd
> > 
> > I.T. Systems:Keele University
> > Finance & IT Directorate
> > Keele:Staffs:IC1 Building:ST5 5NB:UK
> > +44 1782 733063
> > 
> >
> > On 15 February 2017 at 18:56, Nir Soffer  wrote:
> >>
> >> On Wed, Feb 15, 2017 at 2:32 PM, Gary Lloyd 
> wrote:
> >> > Hi
> >> >
> >> > We currently use direct lun for our virtual machines and I would like
> to
> >> > move away from doing this and move onto storage domains.
> >> >
> >> > At the moment we are using an iSCSI SAN and we use replicas created on
> >> > the SAN for disaster recovery.
> >> >
> >> > As a test I thought I would replicate an existing storage domain's
> >> > volume
> >> > (via the SAN) and try to mount again as a separate storage domain
> (This
> >> > is
> >> > with ovirt 4.06 (cluster mode 3.6))
> >>
> >> Why do you want to replicate a storage domain and connect to it?
> >>
> >> > I can log into the iscsi disk but then nothing gets listed under
> Storage
> >> > Name / Storage ID (VG Name)
> >> >
> >> >
> >> > Should this be possible or will it not work due to the UIDs being
> >> > identical?
> >>
> >> Connecting 2 storage domains with same uid will not work. You can use
> >> either
> >> the old or the new, but not both at the same time.
> >>
> >> Can you explain how replicating the storage domain volume is related to
> >> moving from direct luns to storage domains?
> >>
> >> If you want to move from direct lun to storage domain, you need to
> create
> >> a new disk on the storage domain, and copy the direct lun data to the
> new
> >> disk.
> >>
> >> We don't support this yet, but you can copy manually like this:
> >>
> >> 1. Find the lv of the new disk
> >>
> >> lvs -o name --select "lv_tags = {IU_<image-uuid>}" vg-name
> >>
> >> 2. Activate the lv
> >>
> >> lvchange -ay vg-name/lv-name
> >>
> >> 3. Copy the data from the lun
> >>
> >> qemu-img convert -p -f raw -O raw -t none -T none
> >> /dev/mapper/xxxyyy /dev/vg-name/lv-name
> >>
> >> 4. Deactivate the disk
> >>
> >> lvchange -an vg-name/lv-name
> >>
> >> Nir
> >
> >

Re: [ovirt-users] Disaster Recovery Testing

2017-02-15 Thread Nir Soffer
On Wed, Feb 15, 2017 at 9:30 PM, Gary Lloyd  wrote:
> Hi Nir thanks for the guidance
>
> We started to use ovirt a good few years ago now (version 3.2).
>
> At the time iscsi multipath wasn't supported, so we made our own
> modifications to vdsm and this worked well with direct lun.
> We decided to go with direct lun in case things didn't work out with OVirt
> and in that case we would go back to using vanilla kvm / virt-manager.
>
> At the time I don't believe that you could import iscsi data domains that
> had already been configured into a different installation, so we replicated
> each raw VM volume using the SAN to another server room for DR purposes.
> We use Dell Equallogic and there is a documented limitation of 1024 iscsi
> connections and 256 volume replications. This isn't a problem at the moment,
> but the more VMs that we have the more conscious I am about us reaching
> those limits (we have around 300 VMs at the moment and we have a vdsm hook
> that closes off iscsi connections if a vm is migrated /powered off).
>
> Moving to storage domains keeps the number of iscsi connections / replicated
> volumes down and we won't need to make custom changes to vdsm when we
> upgrade.
> We can then use the SAN to replicate the storage domains to another data
> centre and bring that online with a different install of OVirt (we will have
> to use these arrays for at least the next 3 years).
>
> I didn't realise that each storage domain contained the configuration
> details/metadata for the VMs.
> This to me is an extra win as we can recover VMs faster than we can now if
> we have to move them to a different data centre in the event of a disaster.
>
>
> Are there any maximum size / vm limits or recommendations for each storage
> domain ?

The recommended limit in RHEL 6 was 350 LVs per storage domain. We believe this
limit is not correct for RHEL 7 and recent oVirt versions. We are currently
testing 1000 LVs per storage domain, but we have not finished testing yet, so I
cannot say what the recommended limit is.

A preallocated disk has one LV; if you have a thin disk, you have one LV
per snapshot.

There is no practical limit to the size of a storage domain.
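
If it helps to keep an eye on this, a quick way to see how many LVs each block
storage domain currently holds (on block storage the VG name is the storage
domain UUID):

vgs -o vg_name,lv_count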

> Does oVirt support moving VMs between different storage domain types, e.g.
> iSCSI to Gluster?

Sure, you can move vm disks from any storage domain to any storage domain
(except ceph).

>
>
> Many Thanks
>
> Gary Lloyd
> 
> I.T. Systems:Keele University
> Finance & IT Directorate
> Keele:Staffs:IC1 Building:ST5 5NB:UK
> +44 1782 733063
> 
>
> On 15 February 2017 at 18:56, Nir Soffer  wrote:
>>
>> On Wed, Feb 15, 2017 at 2:32 PM, Gary Lloyd  wrote:
>> > Hi
>> >
>> > We currently use direct lun for our virtual machines and I would like to
>> > move away from doing this and move onto storage domains.
>> >
>> > At the moment we are using an iSCSI SAN and we use replicas created on
>> > the SAN for disaster recovery.
>> >
>> > As a test I thought I would replicate an existing storage domain's
>> > volume
>> > (via the SAN) and try to mount again as a separate storage domain (This
>> > is
>> > with ovirt 4.06 (cluster mode 3.6))
>>
>> Why do you want to replicate a storage domain and connect to it?
>>
>> > I can log into the iscsi disk but then nothing gets listed under Storage
>> > Name / Storage ID (VG Name)
>> >
>> >
>> > Should this be possible or will it not work due to the UIDs being
>> > identical?
>>
>> Connecting 2 storage domains with same uid will not work. You can use
>> either
>> the old or the new, but not both at the same time.
>>
>> Can you explain how replicating the storage domain volume is related to
>> moving from direct luns to storage domains?
>>
>> If you want to move from direct lun to storage domain, you need to create
>> a new disk on the storage domain, and copy the direct lun data to the new
>> disk.
>>
>> We don't support this yet, but you can copy manually like this:
>>
>> 1. Find the lv of the new disk
>>
>> lvs -o name --select "lv_tags = {IU_<image-uuid>}" vg-name
>>
>> 2. Activate the lv
>>
>> lvchange -ay vg-name/lv-name
>>
>> 3. Copy the data from the lun
>>
>> qemu-img convert -p -f raw -O raw -t none -T none
>> /dev/mapper/xxxyyy /dev/vg-name/lv-name
>>
>> 4. Deactivate the disk
>>
>> lvchange -an vg-name/lv-name
>>
>> Nir
>
>


Re: [ovirt-users] Disaster Recovery Testing

2017-02-15 Thread Gary Lloyd
Hi Nir thanks for the guidance

We started to use ovirt a good few years ago now (version 3.2).

At the time iscsi multipath wasn't supported, so we made our own
modifications to vdsm and this worked well with direct lun.
We decided to go with direct lun in case things didn't work out with OVirt
and in that case we would go back to using vanilla kvm / virt-manager.

At the time I don't believe that you could import iscsi data domains that
had already been configured into a different installation, so we replicated
each raw VM volume using the SAN to another server room for DR purposes.
We use Dell Equallogic and there is a documented limitation of 1024 iscsi
connections and 256 volume replications. This isn't a problem at the
moment, but the more VMs that we have the more conscious I am about us
reaching those limits (we have around 300 VMs at the moment and we have a
vdsm hook that closes off iscsi connections if a vm is migrated /powered
off).

Moving to storage domains keeps the number of iscsi connections /
replicated volumes down and we won't need to make custom changes to vdsm
when we upgrade.
We can then use the SAN to replicate the storage domains to another data
centre and bring that online with a different install of OVirt (we will
have to use these arrays for at least the next 3 years).

I didn't realise that each storage domain contained the configuration
details/metadata for the VMs.
This to me is an extra win as we can recover VMs faster than we can now if
we have to move them to a different data centre in the event of a disaster.


Are there any maximum size / VM limits or recommendations for each storage
domain?
Does oVirt support moving VMs between different storage domain types, e.g.
iSCSI to Gluster?


Many Thanks

*Gary Lloyd*

I.T. Systems:Keele University
Finance & IT Directorate
Keele:Staffs:IC1 Building:ST5 5NB:UK
+44 1782 733063


On 15 February 2017 at 18:56, Nir Soffer  wrote:

> On Wed, Feb 15, 2017 at 2:32 PM, Gary Lloyd  wrote:
> > Hi
> >
> > We currently use direct lun for our virtual machines and I would like to
> > move away from doing this and move onto storage domains.
> >
> > At the moment we are using an iSCSI SAN and we use replicas created on
> > the SAN for disaster recovery.
> >
> > As a test I thought I would replicate an existing storage domain's volume
> > (via the SAN) and try to mount again as a separate storage domain (This
> is
> > with ovirt 4.06 (cluster mode 3.6))
>
> Why do you want to replicate a storage domain and connect to it?
>
> > I can log into the iscsi disk but then nothing gets listed under Storage
> > Name / Storage ID (VG Name)
> >
> >
> > Should this be possible or will it not work due to the UIDs being
> > identical?
>
> Connecting 2 storage domains with same uid will not work. You can use
> either
> the old or the new, but not both at the same time.
>
> Can you explain how replicating the storage domain volume is related to
> moving from direct luns to storage domains?
>
> If you want to move from direct lun to storage domain, you need to create
> a new disk on the storage domain, and copy the direct lun data to the new
> disk.
>
> We don't support this yet, but you can copy manually like this:
>
> 1. Find the lv of the new disk
>
> lvs -o name --select "lv_tags = {IU_<image-uuid>}" vg-name
>
> 2. Activate the lv
>
> lvchange -ay vg-name/lv-name
>
> 3. Copy the data from the lun
>
> qemu-img convert -p -f raw -O raw -t none -T none
> /dev/mapper/xxxyyy /dev/vg-name/lv-name
>
> 4. Deactivate the disk
>
> lvchange -an vg-name/lv-name
>
> Nir
>


Re: [ovirt-users] Disaster Recovery Testing

2017-02-15 Thread Nir Soffer
On Wed, Feb 15, 2017 at 2:32 PM, Gary Lloyd  wrote:
> Hi
>
> We currently use direct lun for our virtual machines and I would like to
> move away from doing this and move onto storage domains.
>
> At the moment we are using an iSCSI SAN and we use replicas created on
> the SAN for disaster recovery.
>
> As a test I thought I would replicate an existing storage domain's volume
> (via the SAN) and try to mount again as a separate storage domain (This is
> with ovirt 4.06 (cluster mode 3.6))

Why do you want to replicate a storage domain and connect to it?

> I can log into the iscsi disk but then nothing gets listed under Storage
> Name / Storage ID (VG Name)
>
>
> Should this be possible or will it not work due to the UIDs being identical?

Connecting 2 storage domains with the same UID will not work. You can use either
the old or the new, but not both at the same time.

Can you explain how replicating the storage domain volume is related to
moving from direct luns to storage domains?

If you want to move from direct lun to storage domain, you need to create
a new disk on the storage domain, and copy the direct lun data to the new
disk.

We don't support this yet, but you can copy manually like this:

1. Find the lv of the new disk

lvs -o name --select "lv_tags = {IU_<image-uuid>}" vg-name

2. Activate the lv

lvchange -ay vg-name/lv-name

3. Copy the data from the lun

qemu-img convert -p -f raw -O raw -t none -T none
/dev/mapper/xxxyyy /dev/vg-name/lv-name

4. Deactivate the disk

lvchange -an vg-name/lv-name

Nir
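
One addition worth making between steps 3 and 4, assuming qemu-img is recent
enough to have the compare subcommand (device paths as in the steps above):

# Verify the copy matches the source LUN before detaching the direct LUN.
qemu-img compare -f raw -F raw /dev/mapper/xxxyyy /dev/vg-name/lv-name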


[ovirt-users] Disaster Recovery Testing

2017-02-15 Thread Gary Lloyd
Hi

We currently use direct lun for our virtual machines and I would like to
move away from doing this and move onto storage domains.

At the moment we are using an iSCSI SAN and we use replicas created on
the SAN for disaster recovery.

As a test I thought I would replicate an existing storage domain's volume
(via the SAN) and try to mount again as a separate storage domain (This is
with ovirt 4.06 (cluster mode 3.6))

I can log into the iscsi disk but then nothing gets listed under Storage
Name / Storage ID (VG Name)


Should this be possible or will it not work due to the UIDs being
identical?


Many Thanks

*Gary Lloyd*

I.T. Systems:Keele University
Finance & IT Directorate
Keele:Staffs:IC1 Building:ST5 5NB:UK
+44 1782 733063



Re: [ovirt-users] disaster recovery

2015-06-21 Thread Nathanaël Blanchet
Sure, oVirt is not directly concerned, but this was done following the Foreman
integration feature, so a warning about doing this in an FC environment could be
very useful as prevention.
My question was rather: is there now a possibility to recover this disk? After
all, it is only an LVM disk with one VG per LUN and as many LVs as there are
existing VMs. Would recovering the LVM metadata be enough?
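
For what it is worth, a heavily hedged sketch of what an LVM metadata restore
could look like, assuming the only damage on the oVirt LUN is the overwritten
PV label/VG metadata and that a text backup of the old VG still exists under
/etc/lvm/backup or /etc/lvm/archive on one of the hosts (VG name, PV UUID and
device path are placeholders; this is destructive last-resort work, ideally
done against a SAN snapshot of the LUN):

# 1. Find an archived copy of the oVirt VG metadata that predates the accident.
vgcfgrestore --list <ovirt-vg-name>
# 2. Recreate the PV label with its old UUID from that backup file
#    (may need pvcreate -ff, and the LUN must first be taken out of the
#    accidental lv_home VG).
pvcreate --uuid <old-pv-uuid> --restorefile /etc/lvm/archive/<ovirt-vg-name>_<NNNNN>.vg /dev/mapper/<lun-wwid>
# 3. Restore the VG metadata itself and check that the image LVs are back.
vgcfgrestore -f /etc/lvm/archive/<ovirt-vg-name>_<NNNNN>.vg <ovirt-vg-name>
lvs <ovirt-vg-name>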


On 21/06/2015 20:23, Dan Yasny wrote:


Anaconda and Kickstart are dangerous in FC environments and have been 
known to wipe all kinds of data if zoning wasn't properly done prior 
to os deployment to new hosts.


Nothing to do with oVirt, it's a common mistake people make at least 
once before they step on this specific rake.


On Jun 21, 2015 2:09 PM, "Nathanaël Blanchet" <blanc...@abes.fr> wrote:


In a traditional installation of an ovirt host, it seems that
provisionning the OS is the first step before talking about vdsm.
I didn't go further, the problem is that this host saw the
production LUN and the kickstart took all the lun into the same VG
with a XFS format.
Yes it is an error from myself for not having dissociated the lun.
Yes it is an error from myself for not having taken care of having
snapshot lun level backup.
But now what should I do?
My idea was to use pvreduce to unlabel the LUN, and then using the
/etc/lvm/backup/lun for recovering physical volume metadata on
that lun (http://www.tldp.org/HOWTO/LVM-HOWTO/recovermetadata.html)

On 21/06/2015 12:02, mots wrote:

Let me get this straight:

You added a new host and instead of letting VDSM manage the VM
storage you added it manually, completely independent from
oVirt, during an unattended installation of the host OS? In a
production environment?

Why?
  -Original Message-

From: Nathanaël Blanchet <blanc...@abes.fr>
Sent: Fri 19 June 2015 14:09
To: users@ovirt.org
Subject: [ovirt-users] disaster recovery

Hello all,

Here is what can happen as a nightmare for a sysadmin:

I installed a new host to my pool of pre existing ovirt
hosts with a
kickstart file and a default partitionning. I used to do
this with vms
that usually get one local disk.
But..
This time this host was attached to several existing ovirt
(production)
lun and anaconda gathered all existing disk (local and
lun) into the
same VG (lv_home) and formatted them with XFS.
In the webadmin, the domain storage became unavailable,
vms were still
up (thankgoodness), but it was impossible to interact with
them. If I
stopped them, it was impossible to reboot them. If I
launch lvs command,
some hosts can still see the ovirt LV, but other see only
the lv_home
while /dev/[idofvms] are still present.
So it was my chance that vms were still present, and I
began to export
them with a tar at file system level so as to import them
in a new
domain storage.

It seems that data are still present because the vms are
still running.
So my question is really : instead of this difficult
export step, is
there a way to recover the initial format of the ovirt lun
so as to make
the lvm index come back on the disk?

Any help or disaster experience would be much appreciated.

-- 
Nathanaël Blanchet


Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr




--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr



Re: [ovirt-users] disaster recovery

2015-06-21 Thread Dan Yasny
Anaconda and Kickstart are dangerous in FC environments and have been known
to wipe all kinds of data if zoning wasn't properly done prior to os
deployment to new hosts.

Nothing to do with oVirt, it's a common mistake people make at least once
before they step on this specific rake.
On Jun 21, 2015 2:09 PM, "Nathanaël Blanchet"  wrote:

> In a traditional installation of an ovirt host, it seems that
> provisionning the OS is the first step before talking about vdsm. I didn't
> go further, the problem is that this host saw the production LUN and the
> kickstart took all the lun into the same VG with a XFS format.
> Yes it is an error from myself for not having dissociated the lun.
> Yes it is an error from myself for not having taken care of having
> snapshot lun level backup.
> But now what should I do?
> My idea was to use pvreduce to unlabel the LUN, and then using the
> /etc/lvm/backup/lun for recovering physical volume metadata on that lun (
> http://www.tldp.org/HOWTO/LVM-HOWTO/recovermetadata.html)
>
> On 21/06/2015 12:02, mots wrote:
>
>> Let me get this straight:
>>
>> You added a new host and instead of letting VDSM manage the VM storage
>> you added it manually, completely independent from oVirt, during an
>> unattended installation of the host OS? In a production environment?
>>
>> Why?
>>   -Original Message-
>>
>>> From: Nathanaël Blanchet 
>>> Sent: Fri 19 June 2015 14:09
>>> To: users@ovirt.org
>>> Subject: [ovirt-users] disaster recovery
>>>
>>> Hello all,
>>>
>>> Here is what can happen as a nightmare for a sysadmin:
>>>
>>> I installed a new host to my pool of pre existing ovirt hosts with a
>>> kickstart file and a default partitionning. I used to do this with vms
>>> that usually get one local disk.
>>> But..
>>> This time this host was attached to several existing ovirt (production)
>>> lun and anaconda gathered all existing disk (local and lun) into the
>>> same VG (lv_home) and formatted them with XFS.
>>> In the webadmin, the domain storage became unavailable, vms were still
>>> up (thankgoodness), but it was impossible to interact with them. If I
>>> stopped them, it was impossible to reboot them. If I launch lvs command,
>>> some hosts can still see the ovirt LV, but other see only the lv_home
>>> while /dev/[idofvms] are still present.
>>> So it was my chance that vms were still present, and I began to export
>>> them with a tar at file system level so as to import them in a new
>>> domain storage.
>>>
>>> It seems that data are still present because the vms are still running.
>>> So my question is really : instead of this difficult export step, is
>>> there a way to recover the initial format of the ovirt lun so as to make
>>> the lvm index come back on the disk?
>>>
>>> Any help or disaster experience would be much appreciated.
>>>
>>> --
>>> Nathanaël Blanchet
>>>
>>> Supervision réseau
>>> Pôle Infrastrutures Informatiques
>>> 227 avenue Professeur-Jean-Louis-Viala
>>> 34193 MONTPELLIER CEDEX 5
>>> Tél. 33 (0)4 67 54 84 55
>>> Fax  33 (0)4 67 54 84 14
>>> blanc...@abes.fr
>>>


Re: [ovirt-users] disaster recovery

2015-06-21 Thread Nathanaël Blanchet
In a traditional installation of an oVirt host, it seems that
provisioning the OS is the first step before talking about vdsm. I
didn't go further; the problem is that this host saw the production LUNs
and the kickstart took all of them into the same VG with an XFS format.

Yes, it was my error not to have dissociated the LUNs.
Yes, it was my error not to have taken care of having
LUN-level snapshot backups.

But now what should I do?
My idea was to use pvreduce to unlabel the LUN, and then use the
/etc/lvm/backup file to recover the physical volume metadata on that LUN
(http://www.tldp.org/HOWTO/LVM-HOWTO/recovermetadata.html).


On 21/06/2015 12:02, mots wrote:

Let me get this straight:

You added a new host and instead of letting VDSM manage the VM storage you 
added it manually, completely independent from oVirt, during an unattended 
installation of the host OS? In a production environment?

Why?
  
-Original Message-

From: Nathanaël Blanchet 
Sent: Fri 19 June 2015 14:09
To: users@ovirt.org
Subject: [ovirt-users] disaster recovery

Hello all,

Here is what can happen as a nightmare for a sysadmin:

I installed a new host to my pool of pre existing ovirt hosts with a
kickstart file and a default partitionning. I used to do this with vms
that usually get one local disk.
But..
This time this host was attached to several existing ovirt (production)
lun and anaconda gathered all existing disk (local and lun) into the
same VG (lv_home) and formatted them with XFS.
In the webadmin, the domain storage became unavailable, vms were still
up (thankgoodness), but it was impossible to interact with them. If I
stopped them, it was impossible to reboot them. If I launch lvs command,
some hosts can still see the ovirt LV, but other see only the lv_home
while /dev/[idofvms] are still present.
So it was my chance that vms were still present, and I began to export
them with a tar at file system level so as to import them in a new
domain storage.

It seems that data are still present because the vms are still running.
So my question is really : instead of this difficult export step, is
there a way to recover the initial format of the ovirt lun so as to make
the lvm index come back on the disk?

Any help or disaster experience would be much appreciated.

--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr



Re: [ovirt-users] disaster recovery

2015-06-21 Thread mots
Let me get this straight:

You added a new host and instead of letting VDSM manage the VM storage you 
added it manually, completely independent from oVirt, during an unattended 
installation of the host OS? In a production environment?

Why?
 
-Ursprüngliche Nachricht-
> Von:Nathanaël Blanchet 
> Gesendet: Fre 19 Juni 2015 14:09
> An: users@ovirt.org
> Betreff: [ovirt-users] disaster recovery
> 
> Hello all,
> 
> Here is what can happen as a nightmare for a sysadmin:
> 
> I installed a new host to my pool of pre existing ovirt hosts with a 
> kickstart file and a default partitionning. I used to do this with vms 
> that usually get one local disk.
> But..
> This time this host was attached to several existing ovirt (production) 
> lun and anaconda gathered all existing disk (local and lun) into the 
> same VG (lv_home) and formatted them with XFS.
> In the webadmin, the domain storage became unavailable, vms were still 
> up (thankgoodness), but it was impossible to interact with them. If I 
> stopped them, it was impossible to reboot them. If I launch lvs command, 
> some hosts can still see the ovirt LV, but other see only the lv_home 
> while /dev/[idofvms] are still present.
> So it was my chance that vms were still present, and I began to export 
> them with a tar at file system level so as to import them in a new 
> domain storage.
> 
> It seems that data are still present because the vms are still running.
> So my question is really : instead of this difficult export step, is 
> there a way to recover the initial format of the ovirt lun so as to make 
> the lvm index come back on the disk?
> 
> Any help or disaster experience would be much appreciated.
> 
> -- 
> Nathanaël Blanchet
> 
> Supervision réseau
> Pôle Infrastrutures Informatiques
> 227 avenue Professeur-Jean-Louis-Viala
> 34193 MONTPELLIER CEDEX 5 
> Tél. 33 (0)4 67 54 84 55
> Fax  33 (0)4 67 54 84 14
> blanc...@abes.fr  
> 




[ovirt-users] disaster recovery

2015-06-19 Thread Nathanaël Blanchet

Hello all,

Here is what can happen as a nightmare for a sysadmin:

I installed a new host into my pool of pre-existing oVirt hosts with a
kickstart file and default partitioning. I usually do this with VMs
that get one local disk.

But..
This time this host was attached to several existing oVirt (production)
LUNs, and Anaconda gathered all existing disks (local and LUN) into the
same VG (lv_home) and formatted them with XFS.
In the webadmin, the storage domains became unavailable; the VMs were still
up (thank goodness), but it was impossible to interact with them. If I
stopped them, it was impossible to reboot them. If I run the lvs command,
some hosts can still see the oVirt LVs, but others see only the lv_home
while /dev/[idofvms] are still present.
Luckily the VMs were still present, and I began to export
them with tar at the filesystem level so as to import them into a new
storage domain.


It seems the data is still present because the VMs are still running.
So my question really is: instead of this difficult export step, is
there a way to recover the initial format of the oVirt LUN so as to make
the LVM metadata come back on the disk?


Any help or disaster experience would be much appreciated.

--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr  

