Re: [ceph-users] RBD Mirror DR Testing

2019-11-25 Thread Vikas Rana
…, -Vikas
-----Original Message----- From: Jason Dillaman  Sent: Thursday, November 21, 2019 10:24 AM  To: Vikas Rana  Cc: dillaman; ceph-users  Subject: Re: [ceph-users] RBD Mirror DR Testing
On Thu, Nov 21, 2019 at 10:16 AM Vikas Rana wrote:
> Thanks Jason.
> We are just mounting and …

Re: [ceph-users] RBD Mirror DR Testing

2019-11-22 Thread Vikas Rana
… something wrong? Thanks, -Vikas
-----Original Message----- From: Jason Dillaman  Sent: Thursday, November 21, 2019 10:24 AM  To: Vikas Rana  Cc: dillaman; ceph-users  Subject: Re: [ceph-users] RBD Mirror DR Testing
On Thu, Nov 21, 2019 at 10:16 AM Vikas Rana wrote:
> Thanks Jason.
> We …

Re: [ceph-users] RBD Mirror DR Testing

2019-11-21 Thread Vikas Rana
… Sent: Thursday, November 21, 2019 9:58 AM  To: Vikas Rana  Cc: ceph-users  Subject: Re: [ceph-users] RBD Mirror DR Testing
On Thu, Nov 21, 2019 at 9:56 AM Jason Dillaman wrote:
> On Thu, Nov 21, 2019 at 8:49 AM Vikas Rana wrote:
> > Thanks Jason for such a quick response. …

Re: [ceph-users] RBD Mirror DR Testing

2019-11-21 Thread Vikas Rana
… Rana  Cc: ceph-users  Subject: Re: [ceph-users] RBD Mirror DR Testing
On Thu, Nov 21, 2019 at 8:29 AM Vikas Rana wrote:
> Hi all,
> We have a 200TB RBD image which we are replicating using RBD mirroring.
> We want to test the DR copy and make sure that w…

[ceph-users] RBD Mirror DR Testing

2019-11-21 Thread Vikas Rana
Hi all, We have a 200TB RBD image which we are replicating using RBD mirroring. We want to test the DR copy and make sure that we have a consistent copy in case the primary site is lost. We did this previously by promoting the DR copy, which broke replication from the primary, and we had to resync …
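For reference, the journal-based mirroring workflow for a controlled DR test is demote-then-promote rather than a forced promote; a forced promote while the original is still primary splits the copies and forces a full resync. A minimal sketch, assuming cluster names ceph/cephdr and the nfs/dir_research pool/image that appear in later messages:

  # planned failover: demote on the primary, then promote on DR
  rbd --cluster ceph   mirror image demote  nfs/dir_research
  rbd --cluster cephdr mirror image promote nfs/dir_research
  # ... validate the DR copy ...
  # failback: demote the DR copy, promote the original again
  rbd --cluster cephdr mirror image demote  nfs/dir_research
  rbd --cluster ceph   mirror image promote nfs/dir_research
  # if the copies diverged (e.g. after a forced promote), discard the
  # non-primary side's changes and pull a fresh copy
  rbd --cluster cephdr mirror image resync  nfs/dir_research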

Re: [ceph-users] Ceph Replication not working

2019-04-08 Thread Vikas Rana
…mirror --cluster=cephdr" Thanks, -Vikas
-----Original Message----- From: Jason Dillaman  Sent: Monday, April 8, 2019 9:30 AM  To: Vikas Rana  Cc: ceph-users  Subject: Re: [ceph-users] Ceph Replication not working
The log appears to be missing all the librbd log messages. The process seems to …

[ceph-users] Ceph Replication not working

2019-04-05 Thread Vikas Rana
Hi there, We are trying to set up rbd-mirror replication; after the setup everything looks good, but images are not replicating. Can someone please help? Thanks, -Vikas
root@remote:/var/log/ceph# rbd --cluster cephdr mirror pool info nfs
Mode: pool
Peers:
  UUID …
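The truncated "Peers" section is the first thing to check: if no peer is listed, the DR pool has nowhere to pull from. A hedged sketch of verifying and registering peers, assuming client.admin and cluster names ceph (primary) and cephdr (DR):

  # each side should list the other cluster under "Peers"
  rbd --cluster ceph   mirror pool info nfs
  rbd --cluster cephdr mirror pool info nfs
  # register a missing peer (user and cluster names are assumptions)
  rbd --cluster cephdr mirror pool peer add nfs client.admin@ceph
  rbd --cluster ceph   mirror pool peer add nfs client.admin@cephdr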

[ceph-users] RBD Mirror Image Resync

2019-03-12 Thread Vikas Rana
Hi there, We are replicating an RBD image from the primary to the DR site using RBD mirroring. The primary was running 10.2.10; the DR site is Luminous. We promoted the DR copy to test a failure, and everything checked out. Now we are trying to restart replication, and we did the demote …
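For reference, restarting replication after a test promotion generally means demoting the DR copy back to non-primary and then requesting a resync; a sketch, assuming the pool/image names used elsewhere in these threads:

  rbd --cluster cephdr mirror image demote nfs/dir_research
  rbd --cluster cephdr mirror image resync nfs/dir_research
  # watch it go back to up+syncing / up+replaying
  rbd --cluster cephdr mirror image status nfs/dir_research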

[ceph-users] mirroring global id mismatch

2018-12-14 Thread Vikas Rana
Hi there, We are replicating an RBD image from the primary to the DR site using RBD mirroring. We were using 10.2.10. We decided to upgrade the DR site to Luminous; the upgrade went fine and the mirroring status was also good. We then promoted the DR copy to test a failure. Everything checked out. The …
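A "global id mismatch" is the typical symptom once the two copies no longer refer to the same mirrored image, for example after a forced promote; the ids can be compared from either cluster, and the usual recovery is to pick the copy to keep and resync the other. A sketch with assumed names:

  # the global_id line should match on both clusters
  rbd --cluster ceph   mirror image status nfs/dir_research
  rbd --cluster cephdr mirror image status nfs/dir_research
  # keep the primary-site copy and rebuild the DR copy from it
  rbd --cluster cephdr mirror image resync nfs/dir_research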

Re: [ceph-users] Mounting DR copy as Read-Only

2018-12-12 Thread Vikas Rana
… 2018 at 1:08 PM Vikas Rana wrote:
> To give more output. This is an XFS FS.
> root@vtier-node1:~# rbd-nbd --read-only map testm-pool/test01
> 2018-12-12 13:04:56.674818 7f1c56e29dc0 -1 asok(0x560b19b3bdf0) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to …

Re: [ceph-users] Mounting DR copy as Read-Only

2018-12-12 Thread Vikas Rana
… superblock
root@vtier-node1:~# mount -o ro,norecovery /dev/nbd0 /mnt
mount: /dev/nbd0: can't read superblock
root@vtier-node1:~# fdisk -l /dev/nbd0
root@vtier-node1:~#
Thanks, -Vikas
On Wed, Dec 12, 2018 at 10:44 AM Vikas Rana wrote:
> Hi,
> We are using Luminous and copying a 100TB …

[ceph-users] Mounting DR copy as Read-Only

2018-12-12 Thread Vikas Rana
Hi, We are using Luminous and copying a 100TB RBD image to the DR site using RBD mirroring. Everything seems to work fine. The question is: can we mount the DR copy read-only? We can do that on NetApp, and we are trying to figure out whether we can mount it read-only on the DR site; then we could do backups at …
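One approach that fits this question (a sketch, not a tested recipe) is to map the non-primary DR image read-only with rbd-nbd and mount the XFS filesystem with norecovery and nouuid, since a read-only device cannot replay the XFS log and the copy carries the same filesystem UUID as the original. The pool/image names below are assumptions carried over from the other threads:

  # on the DR site, using the DR cluster's configuration
  rbd-nbd --read-only map nfs/dir_research
  mount -o ro,norecovery,nouuid /dev/nbd0 /mnt
  # ... run the backup ...
  umount /mnt
  rbd-nbd unmap /dev/nbd0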

[ceph-users] CEPH DR RBD Mount

2018-11-27 Thread Vikas Rana
Hi there, We are replicating a 100TB RBD image to the DR site. Replication works fine.
rbd --cluster cephdr mirror pool status nfs --verbose
health: OK
images: 1 total
    1 replaying
dir_research:
  global_id:   11e9cbb9-ce83-4e5e-a7fb-472af866ca2d
  state:       up+replaying …
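"up+replaying" means the DR copy is healthy but still non-primary; whether it can be mapped and mounted depends on that primary flag. A sketch of checking the per-image detail, assuming the same names:

  rbd --cluster cephdr mirror image status nfs/dir_research
  # "mirroring primary: false" here means the image is read-only on this cluster
  rbd --cluster cephdr info nfs/dir_research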

[ceph-users] ceph-deploy error

2018-10-19 Thread Vikas Rana
Hi there, While upgrading from Jewel to Luminous, all packages were upgraded, but adding an MGR with cluster name CEPHDR fails. It works with the default cluster name CEPH.
root@vtier-P-node1:~# sudo su - ceph-deploy
ceph-deploy@vtier-P-node1:~$ ceph-deploy --ceph-conf /etc/ceph/cephdr.conf mgr …
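ceph-deploy normally selects a non-default cluster through its --cluster flag (which also picks up the matching /etc/ceph/<cluster>.conf and keyrings) rather than only --ceph-conf. A hedged sketch, assuming the installed ceph-deploy release still accepts that flag, since support for custom cluster names was reduced in later versions:

  # node name taken from the prompt above; flag support is an assumption
  ceph-deploy --cluster cephdr mgr create vtier-P-node1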

Re: [ceph-users] RBD Mirror Question

2018-10-04 Thread Vikas Rana
…:
> On Thu, Oct 4, 2018 at 10:27 AM Vikas Rana wrote:
> > On the primary site, we have OSDs running on 192.168.4.x addresses.
> > Similarly, on the secondary site, we have OSDs running on 192.168.4.x addresses. 192.168.3.x is the old MON network on both sites, which was non…

Re: [ceph-users] RBD Mirror Question

2018-10-04 Thread Vikas Rana
… Now the primary and secondary can see each other. Do the OSD daemons on the primary and secondary have to talk to each other? We have the same non-routed networks for the OSDs. Thanks, -Vikas
On Thu, Oct 4, 2018 at 10:13 AM Jason Dillaman wrote:
> On Thu, Oct 4, 2018 at 10:10 AM Vikas Rana wr…

Re: [ceph-users] RBD Mirror Question

2018-10-04 Thread Vikas Rana
…it's trying to connect to the 192.x address instead of the 165.x.y address? I could run ceph -s from both sides and they can see each other. Only the rbd command is having issues. Thanks, -Vikas
On Tue, Oct 2, 2018 at 5:14 PM Jason Dillaman wrote:
> On Tue, Oct 2, 2018 at 4:47 PM Vikas Rana wrote …

[ceph-users] RBD Mirror Question

2018-10-02 Thread Vikas Rana
Hi, We have a 3-node Ceph cluster at the primary site. We created an RBD image, and the image holds about 100TB of data. Now we have installed another 3-node cluster at a secondary site. We want to replicate the image from the primary site to this new cluster at the secondary site. As per the documentation, we enabled …
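For context, the documented one-way setup is roughly: enable mirroring on the pool on both clusters, make sure the image has exclusive-lock and journaling, register the peers, and run an rbd-mirror daemon on the secondary. A sketch with assumed pool/image/user names:

  # on both clusters (pool mode mirrors every journaled image in the pool)
  rbd --cluster ceph   mirror pool enable nfs pool
  rbd --cluster cephdr mirror pool enable nfs pool
  # on the primary: the image needs exclusive-lock and journaling
  rbd --cluster ceph feature enable nfs/dir_research exclusive-lock
  rbd --cluster ceph feature enable nfs/dir_research journaling
  # register the primary as a peer of the secondary's pool
  rbd --cluster cephdr mirror pool peer add nfs client.admin@ceph
  # run the rbd-mirror daemon on the secondary (unit instance name is an assumption)
  systemctl start ceph-rbd-mirror@admin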

Re: [ceph-users] rbd-nbd map question

2018-09-21 Thread Vikas Rana
…].
> On Wed, Sep 19, 2018 at 2:49 PM Vikas Rana wrote:
> > Hi there,
> > With the default cluster name "ceph" I can map rbd-nbd without any issue.
> > But for a different cluster name, I'm not able…

[ceph-users] rbd-nbd map question

2018-09-19 Thread Vikas Rana
Hi there, With the default cluster name "ceph" I can map rbd-nbd without any issue. But for a different cluster name, I'm not able to map an image using rbd-nbd and I get:
root@vtier-P-node1:/etc/ceph# rbd-nbd --cluster cephdr map test-pool/testvol
rbd-nbd: unknown command: --cluster
I looked at …
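Since --cluster is essentially shorthand for picking /etc/ceph/<cluster>.conf and the matching keyring, one possible workaround (an assumption about this rbd-nbd build, which clearly rejects --cluster before the subcommand) is to point it at the cephdr files explicitly:

  # hypothetical workaround; assumes this rbd-nbd accepts generic Ceph
  # options such as --conf/--keyring after the map subcommand
  rbd-nbd map test-pool/testvol --conf /etc/ceph/cephdr.conf --keyring /etc/ceph/cephdr.client.admin.keyring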

[ceph-users] RBD Map and CEPH Replication question

2018-09-16 Thread Vikas Rana
Hi there, We are using an rbd-mapped image as an NFS backend (XFS) and sharing it out to NFS clients. This setup has been working fine. Now we need to replicate this image to a second cluster on the campus. For replication to work, we need the exclusive-lock and journaling features to be enabled. If we enable …
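Both features can be turned on with rbd feature enable, but the kernel rbd client does not implement the journaling feature, so an image with journaling enabled is usually served via rbd-nbd rather than krbd; whether the currently mapped kernel client tolerates the change depends on the kernel version. A sketch with assumed names:

  rbd feature enable nfs/dir_research exclusive-lock
  rbd feature enable nfs/dir_research journaling
  # if the kernel client refuses the image afterwards, rbd-nbd can map it
  rbd-nbd map nfs/dir_research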