Re: [Gluster-users] how to restore snapshot LV's

2017-05-29 Thread Mohammed Rafi K C
Did you mount the snapshot bricks after you reconfigured the VGs?
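A quick way to check is to compare `gluster snapshot status` with what is actually mounted. A rough sketch (the snaps directory and VG name are taken from earlier in this thread; the LV name placeholders are assumptions -- use the names `lvs` reports on your node):

```shell
# Sketch: verify the snapshot brick LVs are activated and mounted.
lvs vg_cluster                           # snapshot LVs should be listed and active
mount | grep /var/run/gluster/snaps      # each snapshot brick path should appear here
# If a brick path is missing, activate and mount its LV, e.g.:
#   lvchange -ay -K vg_cluster/<snap_lv>
#   mount /dev/vg_cluster/<snap_lv> /var/run/gluster/snaps/<snap>/brick1
```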


Regards

Rafi KC


On 05/29/2017 01:08 PM, WoongHee Han wrote:
> Right, I had reconfigured the VG on one node, activated the brick
> path, and then restored the snapshot.
>
>
> 2017-05-29 15:54 GMT+09:00 Mohammed Rafi K C:
>
>
> On 05/27/2017 09:22 AM, WoongHee Han wrote:
>> Hi, I'm sorry for my late reply.
>>
>> I tried to solve it using your answer, and it worked, thanks; the
>> snapshot was activated.
>> I then restored the snapshot.
>>
>> But after I restored the snapshot, there was nothing (no files) in
>> the volume.
>> Can't it recover automatically?
> I remember you said you had reconfigured the VGs. Did you have the
> mount for the snapshot brick path active?
>
> Rafi KC
>
>
>
>>
>> Thank you again for your answer.
>>
>>
>> Best regards
>>
>>
>>
>> 2017-05-19 15:34 GMT+09:00 Mohammed Rafi K C:
>>
>> I do not know how you ended up in this state. This usually
>> happens when there is a commit failure. To recover from this
>> state, you can change the value of "status" in the file
>>
>> /var/lib/glusterd/snaps///info .
>> In that file, change the status to 0 on the nodes where the
>> value is 1, then restart glusterd on the nodes you changed
>> manually.
>>
>> Then try to activate it.
>>
>>
>> Regards
>>
>> Rafi KC
>>
>>
>> On 05/18/2017 09:38 AM, Pranith Kumar Karampuri wrote:
>>> +Rafi, +Raghavendra Bhat
>>>
>>> On Tue, May 16, 2017 at 11:55 AM, WoongHee Han wrote:
>>>
>>> Hi, all!
>>>
>>> I erased the VG holding the snapshot LVs related to my gluster
>>> volumes, and then I tried to restore the volume:
>>>
>>> 1. vgcreate vg_cluster /dev/sdb
>>> 2. lvcreate --size=10G --type=thin-pool -n tp_cluster
>>> vg_cluster
>>> 3. lvcreate -V 5G --thinpool vg_cluster/tp_cluster -n
>>> test_vol vg_cluster
>>> 4. gluster v stop test_vol
>>> 5. getfattr -n trusted.glusterfs.volume-id
>>> /volume/test_vol ( in other node)
>>> 6. setfattr -n trusted.glusterfs.volume-id -v
>>>  0sKtUJWIIpTeKWZx+S5PyXtQ== /volume/test_vol (already
>>> mounted)
>>> 7. gluster v start test_vol
>>> 8. restart glusterd
>>> 9. lvcreate -s vg_cluster/test_vol --setactivationskip=n
>>> --name 6564c50651484d09a36b912962c573df_0
>>> 10. lvcreate -s vg_cluster/test_vol
>>> --setactivationskip=n
>>> --name ee8c32a1941e4aba91feab21fbcb3c6c_0
>>> 11. lvcreate -s vg_cluster/test_vol
>>> --setactivationskip=n
>>> --name bf93dc34233646128f0c5f84c3ac1f83_0 
>>> 12. reboot
>>>
>>> It works, but the bricks for the snapshot are not working.
>>>
>>> 
>>> --
>>> ~]# gluster snapshot status
>>> Brick Path:   192.225.3.35:/var/run/gluster/snaps/bf93dc34233646128f0c5f84c3ac1f83/brick1
>>> Volume Group  :   vg_cluster
>>> Brick Running :   No
>>> Brick PID :   N/A
>>> Data Percentage   :   0.22
>>> LV Size   :   5.00g
>>>
>>>
>>> Brick Path:   192.225.3.36:/var/run/gluster/snaps/bf93dc34233646128f0c5f84c3ac1f83/brick2
>>> Volume Group  :   vg_cluster
>>> Brick Running :   No
>>> Brick PID :   N/A
>>> Data Percentage   :   0.22
>>> LV Size   :   5.00g
>>>
>>>
>>> Brick Path:   192.225.3.37:/var/run/gluster/snaps/bf93dc34233646128f0c5f84c3ac1f83/brick3
>>> Volume Group  :   vg_cluster
>>> Brick Running :   No
>>> Brick PID :   N/A
>>> Data Percentage   :   0.22
>>> LV Size   :   5.00g
>>>
>>>
>>> Brick Path:   192.225.3.38:/var/run/gluster/snaps/bf93dc34233646128f0c5f84c3ac1f83/brick4
>>> Volume Group  :   vg_cluster
>>> Brick Running :   Yes
>>> Brick PID :   N/A
>>> Data Percentage   :   0.22
>>> LV Size   :   5.00g
>>>
>>> ~]# gluster snapshot deactivate t3_GMT-2017.05.15-08.01.37
>>> Deactivating snap will make its data inaccessible. Do
>>> you want to continue? (y/n) y
>>> snapshot deactivate: failed: Pre Validation failed on
>>> 192.225.3.36. Snapshot t3_GMT-2017.05.15-08.01.37 is already
>>> deactivated.
>>> Snapshot command failed

Re: [Gluster-users] how to restore snapshot LV's

2017-05-28 Thread Mohammed Rafi K C

On 05/27/2017 09:22 AM, WoongHee Han wrote:
> Hi, I'm sorry for my late reply.
>
> I tried to solve it using your answer, and it worked, thanks; the
> snapshot was activated.
> I then restored the snapshot.
>
> But after I restored the snapshot, there was nothing (no files) in
> the volume.
> Can't it recover automatically?
I remember you said you had reconfigured the VGs. Did you have the
mount for the snapshot brick path active?

Rafi KC



Re: [Gluster-users] how to restore snapshot LV's

2017-05-18 Thread Mohammed Rafi K C
I do not know how you ended up in this state. This usually happens when
there is a commit failure. To recover from this state, you can change
the value of "status" in the file

/var/lib/glusterd/snaps///info . In that file, change the status to 0
on the nodes where the value is 1, then restart glusterd on the nodes
you changed manually.

Then try to activate it.
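As a concrete sketch of that edit (the path below is a placeholder for your snapshot's info file under /var/lib/glusterd/snaps/, and the key=value layout of the file is an assumption, not confirmed in this thread):

```shell
# Hedged sketch: set status=0 in a snapshot info file, then restart
# glusterd on that node. INFO is a hypothetical example path.
INFO=/var/lib/glusterd/snaps/mysnap/info   # placeholder; use your snap's info file
sed -i 's/^status=1$/status=0/' "$INFO"    # flip the flag only where it is 1
systemctl restart glusterd                 # repeat on every node you edited
```

Run it on each node where the status value is 1, then retry the activate.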


Regards

Rafi KC



___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] how to restore snapshot LV's

2017-05-17 Thread Pranith Kumar Karampuri
+Rafi, +Raghavendra Bhat

On Tue, May 16, 2017 at 11:55 AM, WoongHee Han  wrote:

> Hi, all!
>
> I erased the VG holding the snapshot LVs related to my gluster
> volumes, and then I tried to restore the volume:
>
> 1. vgcreate vg_cluster /dev/sdb
> 2. lvcreate --size=10G --type=thin-pool -n tp_cluster vg_cluster
> 3. lvcreate -V 5G --thinpool vg_cluster/tp_cluster -n test_vol vg_cluster
> 4. gluster v stop test_vol
> 5. getfattr -n trusted.glusterfs.volume-id /volume/test_vol ( in other
> node)
> 6. setfattr -n trusted.glusterfs.volume-id -v  0sKtUJWIIpTeKWZx+S5PyXtQ== 
> /volume/test_vol
> (already mounted)
> 7. gluster v start test_vol
> 8. restart glusterd
> 9. lvcreate -s vg_cluster/test_vol --setactivationskip=n --name
> 6564c50651484d09a36b912962c573df_0
> 10. lvcreate -s vg_cluster/test_vol --setactivationskip=n --name
> ee8c32a1941e4aba91feab21fbcb3c6c_0
> 11. lvcreate -s vg_cluster/test_vol --setactivationskip=n --name
> bf93dc34233646128f0c5f84c3ac1f83_0
> 12. reboot
>
> It works, but the bricks for the snapshot are not working.
>
> 
> 
> --
> ~]# gluster snapshot status
> Brick Path:   192.225.3.35:/var/run/gluster/snaps/bf93dc34233646128f0c5f84c3ac1f83/brick1
> Volume Group  :   vg_cluster
> Brick Running :   No
> Brick PID :   N/A
> Data Percentage   :   0.22
> LV Size   :   5.00g
>
>
> Brick Path:   192.225.3.36:/var/run/gluster/snaps/bf93dc34233646128f0c5f84c3ac1f83/brick2
> Volume Group  :   vg_cluster
> Brick Running :   No
> Brick PID :   N/A
> Data Percentage   :   0.22
> LV Size   :   5.00g
>
>
> Brick Path:   192.225.3.37:/var/run/gluster/snaps/bf93dc34233646128f0c5f84c3ac1f83/brick3
> Volume Group  :   vg_cluster
> Brick Running :   No
> Brick PID :   N/A
> Data Percentage   :   0.22
> LV Size   :   5.00g
>
>
> Brick Path:   192.225.3.38:/var/run/gluster/snaps/bf93dc34233646128f0c5f84c3ac1f83/brick4
> Volume Group  :   vg_cluster
> Brick Running :   Yes
> Brick PID :   N/A
> Data Percentage   :   0.22
> LV Size   :   5.00g
>
> ~]# gluster snapshot deactivate t3_GMT-2017.05.15-08.01.37
> Deactivating snap will make its data inaccessible. Do you want to
> continue? (y/n) y
> snapshot deactivate: failed: Pre Validation failed on 192.225.3.36.
> Snapshot t3_GMT-2017.05.15-08.01.37 is already deactivated.
> Snapshot command failed
>
> ~]# gluster snapshot activate t3_GMT-2017.05.15-08.01.37
> snapshot activate: failed: Snapshot t3_GMT-2017.05.15-08.01.37 is already
> activated
>
> 
> 
> --
>
>
> How can I restore the snapshot LVs?
>
> My cluster consists of four nodes, distributed-replicated (2x2).
>
>
> thank you.
>
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>



-- 
Pranith
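One detail worth noting about steps 9-11 in the quoted recovery: thin snapshot LVs carry the activation-skip flag by default, which is why those steps pass --setactivationskip=n at creation. A hedged sketch of one such step, using the LV name from the thread (if the flag is left at its default, `-K` is needed when activating):

```shell
# Sketch: recreate one snapshot LV as in step 9, then activate it.
lvcreate -s vg_cluster/test_vol --setactivationskip=n \
         --name 6564c50651484d09a36b912962c573df_0
lvchange -ay -K vg_cluster/6564c50651484d09a36b912962c573df_0  # -K ignores activation skip
lvs -o lv_name,lv_attr vg_cluster    # a 'k' in lv_attr means skip is still set
```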
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users