Re: [ceph-users] Error in ceph rbd mirroring(rbd::mirror::InstanceWatcher: C_NotifyInstanceRequestfinish: resending after timeout)

2019-07-27 Thread Ajitha Robert
Thanks for the clarification


1) Will there be any folder related to rbd mirroring in /var/lib/ceph?


2) Is ceph rbd-mirror authentication mandatory?


3) Whenever I create any Cinder volume loaded with a Glance image, I get the
following error:

2019-07-27 17:26:46.762571 7f93eb0a5780 20 librbd::api::Mirror: peer_list:
2019-07-27 17:27:07.541701 7f939d7fa700  0 rbd::mirror::ImageReplayer:
0x7f93c800e9e0 [19/b6656be7-6006-4246-ba93-a49a220e33ce] handle_shut_down:
remote image no longer exists: scheduling deletion
2019-07-27 17:27:16.766199 7f93eb0a5780 20 librbd::api::Mirror: peer_list:
2019-07-27 17:27:22.568970 7f939d7fa700  0 rbd::mirror::ImageReplayer:
0x7f93c800e9e0 [19/b6656be7-6006-4246-ba93-a49a220e33ce] handle_shut_down:
mirror image no longer exists
2019-07-27 17:27:46.769158 7f93eb0a5780 20 librbd::api::Mirror: peer_list:
2019


4) Finally,


At times I am able to create a bootable Cinder volume despite the above
errors,

but at other times I face the following:


For example, for a 50 GB volume the local image gets created, but a mirror
image could not be created.

The image status showed as "replaying", but please see the rbd-mirror log:

http://paste.openstack.org/show/754917/


rbd-mirror.log

http://paste.openstack.org/show/754916/




On Fri, Jul 26, 2019 at 6:55 PM Mykola Golub wrote:

> On Fri, Jul 26, 2019 at 04:40:35PM +0530, Ajitha Robert wrote:
> > Thank you for the clarification.
> >
> > But i was trying with openstack-cinder.. when i load some data into the
> > volume around 50gb, the image sync will stop by 5 % or something within
> > 15%...  What could be the reason?
>
> I suppose you see image sync stop in mirror status output? Could you
> please provide an example? And I suppose you don't see any other
> messages in rbd-mirror log apart from what you have already posted?
> Depending on configuration rbd-mirror might log in several logs. Could
> you please try to find all its logs? `lsof |grep 'rbd-mirror.*log'`
> may be useful for this.
>
> BTW, what rbd-mirror version are you running?
>
> --
> Mykola Golub
>
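For reference, the `lsof` filter suggested above can be illustrated with simulated `lsof` output (the two sample lines below are assumptions, not real daemon output; on the rbd-mirror host you would pipe `lsof` itself):

```shell
# Simulated `lsof` lines (assumed format); lsof truncates command names to 9 chars.
# Only the line for the rbd-mirror daemon's log file should survive the filter.
printf '%s\n' \
  'rbd-mirro 1234 ceph 3w REG 8,1 1024 /var/log/ceph/ceph-client.rbd-mirror.a.log' \
  'ceph-osd  2222 ceph 4w REG 8,1 2048 /var/log/ceph/ceph-osd.0.log' |
grep 'rbd-mirror.*log'
```

On a real host, `lsof | grep 'rbd-mirror.*log'` prints every log file the daemon currently holds open, which catches logs in non-default locations.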


-- 


Regards,
Ajitha R
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Error in ceph rbd mirroring(rbd::mirror::InstanceWatcher: C_NotifyInstanceRequestfinish: resending after timeout)

2019-07-26 Thread Ajitha Robert
Thank you for the clarification.

But I was trying with OpenStack Cinder. When I load some data (around 50 GB)
into the volume, either it will say

ImageReplayer: 0x7f7264016c50 [17/244d1ab5-8147-45ed-8cd1-9b3613f1f104]
handle_shut_down: mirror image no longer exists

or

the image sync will stop at 5% or somewhere within 15%. What could be the
reason?


On Fri, Jul 26, 2019 at 4:40 PM Ajitha Robert wrote:

>
>
>
>
>
>
> On Fri, Jul 26, 2019 at 3:01 PM Mykola Golub 
> wrote:
>
>> On Fri, Jul 26, 2019 at 12:31:59PM +0530, Ajitha Robert wrote:
>> >  I have a rbd mirroring setup with primary and secondary clusters as
>> peers
>> > and I have a pool enabled image mode.., In this i created a rbd image ,
>> > enabled with journaling.
>> > But whenever i enable mirroring on the image,  I m getting error in
>> > rbdmirror.log and  osd.log.
>> > I have increased the timeouts.. nothing worked and couldnt traceout the
>> > error
>> > please guide me to solve this error.
>> >
>> > *Logs*
>> > http://paste.openstack.org/show/754766/
>>
>> What do you mean by "nothing worked"? According to mirroring status
>> the image is mirroring: it is in "up+stopped" state on the primary as
>> expected, and in "up+replaying" state on the secondary with 0 entries
>> behind master.
>>
>> The "failed to get omap key" error in the osd log is harmless, and
>> just a week ago the fix was merged upstream not to display it.
>>
>> The cause of "InstanceWatcher: ... resending after timeout" error in
>> the rbd-mirror log is not clear but if it is not repeating it is
>> harmless too.
>>
>> I see you were trying to map the image with krbd. It is expected to
>> fail as the krbd does not support "journaling" feature, which is
>> necessary for mirroring. You can access those images only with librbd
>> (e.g. mapping with rbd-nbd driver or via qemu).
>>
>> --
>> Mykola Golub
>>
>
>
> --
>
>
> *Regards,Ajitha R*
>


-- 


Regards,
Ajitha R


Re: [ceph-users] Error in ceph rbd mirroring(rbd::mirror::InstanceWatcher: C_NotifyInstanceRequestfinish: resending after timeout)

2019-07-26 Thread Ajitha Robert
Thank you for the clarification.

But I was trying with OpenStack Cinder. When I load some data (around 50 GB)
into the volume, the image sync will stop at 5% or somewhere within 15%. What
could be the reason?






On Fri, Jul 26, 2019 at 3:01 PM Mykola Golub wrote:

> On Fri, Jul 26, 2019 at 12:31:59PM +0530, Ajitha Robert wrote:
> >  I have a rbd mirroring setup with primary and secondary clusters as
> peers
> > and I have a pool enabled image mode.., In this i created a rbd image ,
> > enabled with journaling.
> > But whenever i enable mirroring on the image,  I m getting error in
> > rbdmirror.log and  osd.log.
> > I have increased the timeouts.. nothing worked and couldnt traceout the
> > error
> > please guide me to solve this error.
> >
> > *Logs*
> > http://paste.openstack.org/show/754766/
>
> What do you mean by "nothing worked"? According to mirroring status
> the image is mirroring: it is in "up+stopped" state on the primary as
> expected, and in "up+replaying" state on the secondary with 0 entries
> behind master.
>
> The "failed to get omap key" error in the osd log is harmless, and
> just a week ago the fix was merged upstream not to display it.
>
> The cause of "InstanceWatcher: ... resending after timeout" error in
> the rbd-mirror log is not clear but if it is not repeating it is
> harmless too.
>
> I see you were trying to map the image with krbd. It is expected to
> fail as the krbd does not support "journaling" feature, which is
> necessary for mirroring. You can access those images only with librbd
> (e.g. mapping with rbd-nbd driver or via qemu).
>
> --
> Mykola Golub
>
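A small self-contained illustration of the krbd point above: the `journaling` feature (as it appears in an image's `rbd info` feature line) is what prevents a kernel map, so librbd-based access is needed instead. The feature string below is simulated, not taken from a real cluster:

```shell
# Simulated `rbd info` feature line (assumption: image was created for mirroring).
features='layering, exclusive-lock, journaling'

# krbd cannot map images carrying the journaling feature; fall back to librbd access.
case "$features" in
  *journaling*) echo 'map with rbd-nbd (or attach via qemu/librbd), not krbd' ;;
  *)            echo 'krbd map is fine' ;;
esac
```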


-- 


Regards,
Ajitha R


[ceph-users] Error in ceph rbd mirroring(rbd::mirror::InstanceWatcher: C_NotifyInstanceRequestfinish: resending after timeout)

2019-07-26 Thread Ajitha Robert
I have an RBD mirroring setup with primary and secondary clusters as peers,
and a pool with image mirroring mode enabled. In this pool I created an RBD
image with journaling enabled.
But whenever I enable mirroring on the image, I get errors in rbd-mirror.log
and osd.log.
I have increased the timeouts; nothing worked, and I could not trace the
error.
Please guide me in solving this error.

*Logs*
http://paste.openstack.org/show/754766/
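For anyone reproducing this, the setup described above boils down to roughly the following command sequence (pool name `volumes` and image name `test` are assumptions; the sketch prints the commands rather than running them, since they need a live cluster):

```shell
# Sketch of the per-image mirroring setup described above.
# 'volumes'/'test' are placeholder names; commands are printed, not executed.
cat <<'EOF'
rbd mirror pool enable volumes image
rbd create volumes/test --size 1G --image-feature exclusive-lock,journaling
rbd mirror image enable volumes/test
rbd mirror image status volumes/test
EOF
```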

-- 


Regards,
Ajitha R


[ceph-users] Failed to get omap key when mirroring of image is enabled

2019-07-21 Thread Ajitha Robert
I have an RBD mirroring setup with primary and secondary clusters as peers,
and a pool with image mirroring mode enabled. In this pool I created an RBD
image with journaling enabled.

But whenever I enable mirroring on the image, I get errors in osd.log. I could
not trace them; please guide me in solving this error.
I think it initially worked fine, but after a ceph process restart these
errors started appearing.


Secondary.osd.0.log

2019-07-22 05:36:17.371771 7ffbaa0e9700  0 
/build/ceph-12.2.12/src/cls/journal/cls_journal.cc:61: failed to get omap
key: client_a5c76849-ba16-480a-a96b-ebfdb7f6ac65
2019-07-22 05:36:17.388552 7ffbaa0e9700  0 
/build/ceph-12.2.12/src/cls/journal/cls_journal.cc:472: active object set
earlier than minimum: 0 < 1
2019-07-22 05:36:17.413102 7ffbaa0e9700  0 
/build/ceph-12.2.12/src/cls/journal/cls_journal.cc:61: failed to get omap
key: order
2019-07-22 05:36:23.341490 7ffbab8ec700  0 
/build/ceph-12.2.12/src/cls/rbd/cls_rbd.cc:4125: error retrieving image id
for global id '9e36b9f8-238e-4a54-a055-19b19447855e': (2) No such file or
directory


primary-osd.0.log

2019-07-22 05:16:49.287769 7fae12db1700  0 log_channel(cluster) log [DBG] :
1.b deep-scrub ok
2019-07-22 05:16:54.078698 7fae125b0700  0 log_channel(cluster) log [DBG] :
1.1b scrub starts
2019-07-22 05:16:54.293839 7fae125b0700  0 log_channel(cluster) log [DBG] :
1.1b scrub ok
2019-07-22 05:17:04.055277 7fae12db1700  0 
/build/ceph-12.2.12/src/cls/journal/cls_journal.cc:472: active object set
earlier than minimum: 0 < 1

2019-07-22 05:33:21.540986 7fae135b2700  0 
/build/ceph-12.2.12/src/cls/journal/cls_journal.cc:472: active object set
earlier than minimum: 0 < 1
2019-07-22 05:35:27.447820 7fae12db1700  0 
/build/ceph-12.2.12/src/cls/rbd/cls_rbd.cc:4125: error retrieving image id
for global id '8a61f694-f650-4ba1-b768-c5e7629ad2e0': (2) No such file or
directory


-- 


Regards,
Ajitha R


[ceph-users] Failed to get omap key when mirroring of image is enabled

2019-07-20 Thread Ajitha Robert
I have two queries:

1) I have an RBD mirroring setup with primary and secondary clusters as
peers, and I have enabled image mirroring mode. In this pool I created an RBD
image with journaling enabled.

But whenever I enable mirroring on the image, I get errors in osd.log:

Primary osd log: failed to get omap key, error retrieving image id for
global id

Secondary osd log: error retrieving image id for global id

2)
I have deployed Ceph using ceph-ansible. Is it possible to give a SAN
multipath device in the OSD devices list? I am using stable-3.0 and Luminous.

I have given:

devices:
  - /dev/mapper/

I am getting a symlink issue. How do I rectify it?
Environment:

OS (e.g. from /etc/os-release): debian stretch
Ansible version (e.g. ansible-playbook --version):2.4.0
ceph-ansible version (e.g. git head or tag or stable branch):3.0
Ceph version (e.g. ceph -v): luminous(12.2)
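For question 2, a group_vars fragment along these lines is what ceph-ansible stable-3.0 expects (sketch only: the multipath alias `mpatha` is a placeholder, since the original path was truncated to /dev/mapper/, and whether stable-3.0 handles /dev/mapper symlinks for OSD devices is exactly the open question here):

```yaml
# group_vars/osds.yml -- sketch, not a verified working config.
osd_scenario: collocated
devices:
  - /dev/mapper/mpatha   # placeholder multipath alias
```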


[ceph-users] Ceph status

2015-01-06 Thread Ajitha Robert
Hi all,

I have installed Ceph using the ceph-deploy utility. I have created three
VMs: one for monitor+MDS, and the other two for OSDs. The ceph admin node is
another separate machine.


Status and health of Ceph are shown below. Can you please suggest what I can
infer from the status? I am a beginner to this.

*ceph status*

  cluster 3a946c74-b16d-41bd-a5fe-41efa96f0ee9
 health HEALTH_WARN 46 pgs degraded; 18 pgs incomplete; 64 pgs stale;
46 pgs stuck degraded; 18 pgs stuck inactive; 64 pgs stuck stale; 64 pgs
stuck unclean; 46 pgs stuck undersized; 46 pgs undersized; mon.MON low disk
space
 monmap e1: 1 mons at {MON=10.184.39.66:6789/0}, election epoch 1,
quorum 0 MON
 osdmap e19: 5 osds: 2 up, 2 in
  pgmap v33: 64 pgs, 1 pools, 0 bytes data, 0 objects
10304 MB used, 65947 MB / 76252 MB avail
  18 stale+incomplete
  46 stale+active+undersized+degraded


*ceph health*

HEALTH_WARN 46 pgs degraded; 18 pgs incomplete; 64 pgs stale; 46 pgs stuck
degraded; 18 pgs stuck inactive; 64 pgs stuck stale; 64 pgs stuck unclean;
46 pgs stuck undersized; 46 pgs undersized; mon.MON low disk space

*ceph -w*
cluster 3a946c74-b16d-41bd-a5fe-41efa96f0ee9
 health HEALTH_WARN 46 pgs degraded; 18 pgs incomplete; 64 pgs stale;
46 pgs stuck degraded; 18 pgs stuck inactive; 64 pgs stuck stale; 64 pgs
stuck unclean; 46 pgs stuck undersized; 46 pgs undersized; mon.MON low disk
space
 monmap e1: 1 mons at {MON=10.184.39.66:6789/0}, election epoch 1,
quorum 0 MON
 osdmap e19: 5 osds: 2 up, 2 in
  pgmap v31: 64 pgs, 1 pools, 0 bytes data, 0 objects
10305 MB used, 65947 MB / 76252 MB avail
  18 stale+incomplete
  46 stale+active+undersized+degraded

2015-01-05 20:38:53.159998 mon.0 [INF] from='client.? 10.184.39.66:0/1011909'
entity='client.bootstrap-mds' cmd='[{prefix: auth get-or-create,
entity: mds.MON, caps: [osd, allow rwx, mds, allow, mon,
allow profile mds]}]': finished


2015-01-05 20:41:42.003690 mon.0 [INF] pgmap v32: 64 pgs: 18
stale+incomplete, 46 stale+active+undersized+degraded; 0 bytes data, 10304
MB used, 65947 MB / 76252 MB avail
2015-01-05 20:41:50.100784 mon.0 [INF] pgmap v33: 64 pgs: 18
stale+incomplete, 46 stale+active+undersized+degraded; 0 bytes data, 10304
MB used, 65947 MB / 76252 MB avail





Regards,
Ajitha R