[ceph-users] How to hide internal ip on ceph mount

2017-02-27 Thread gjprabu
Hi Team,



     How can we hide the internal IP addresses shown for a CephFS mount? For
security reasons we need to hide the IP addresses. We are also running Docker
containers on the base machine, and the partition details are shown there as
well. Kindly let us know if there is any solution for this.



192.168.xxx.xxx:6789,192.168.xxx.xxx:6789,192.168.xxx.xxx:6789:/ ceph  6.4T 
 2.0T  4.5T  31% /home/




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] librbd logging

2017-02-27 Thread Jason Dillaman
On Mon, Feb 27, 2017 at 12:36 PM, Laszlo Budai  wrote:
> Currently my system does not have the /var/log/quemu directory. Is it enough
> to create that directory in order to have some logs from librbd? Or I need
> to restart the vm?


If you have the admin socket file, you can run "ceph --admin-daemon
/var/run/ceph/guests/ log reopen" after creating the directory.
If you don't have the asok file, I believe the next best option would
be a live migration to pick up the change.
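
For illustration, roughly what that looks like on the hypervisor (the .asok
filename below is just an example; use whatever socket actually exists under
/var/run/ceph/guests/ for that guest, and a directory owner matching the user
qemu runs as):

# mkdir -p /var/log/qemu
# ls /var/run/ceph/guests/
# ceph --admin-daemon /var/run/ceph/guests/ceph-client.cinder.12345.94072344567890.asok log reopen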

Jason
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] krbd and kernel feature mismatches

2017-02-27 Thread Massimiliano Cuttini

Not really tested,

but searching around, many people say that at the moment RBD-NBD has more or
less the same performance.

While RBD-FUSE is really slow.

At the moment I cannot test the kernel version any more, because
downgrading/re-upgrading the CRUSH tunables would be a nightmare.


But you can try.
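
If someone does want to measure it, here is a rough sketch (run it against a
scratch image, since randwrite is destructive; the pool/image names are made
up):

$ rbd map testpool/bench-img              # kernel client -> e.g. /dev/rbd0
$ fio --name=krbd --filename=/dev/rbd0 --rw=randwrite --bs=4k --iodepth=32 \
      --ioengine=libaio --direct=1 --runtime=60 --time_based
$ rbd unmap /dev/rbd0
$ rbd-nbd map testpool/bench-img          # userspace client -> e.g. /dev/nbd0
$ fio --name=nbd --filename=/dev/nbd0 --rw=randwrite --bs=4k --iodepth=32 \
      --ioengine=libaio --direct=1 --runtime=60 --time_based
$ rbd-nbd unmap /dev/nbd0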




On 27/02/2017 19:41, Simon Weald wrote:

Is there a performance hit when using rbd-nbd?

On 27/02/17 18:34, Massimiliano Cuttini wrote:

But if everybody gets a kernel feature mismatch (me too)

... why not use rbd-nbd directly and forget about kernel-rbd?

All features, almost the same performance.

No?




On 27/02/2017 18:54, Ilya Dryomov wrote:

On Mon, Feb 27, 2017 at 6:47 PM, Shinobu Kinjo 
wrote:

We already discussed this:

https://www.spinics.net/lists/ceph-devel/msg34559.html

What do you think of comment posted in that ML?
Would that make sense to you as well?

Sorry, I dropped the ball on this.  I'll try to polish and push my man
page branch this week.

Thanks,

  Ilya
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Where can I read documentation of Ceph version 0.94.5?

2017-02-27 Thread Stéphane Klein
2017-02-27 20:53 GMT+01:00 Roger Brown :

> replace "master" with the release codename, eg. http://docs.ceph.com/docs/
> kraken/
>
>
Thanks

I suggest adding a list of the documentation versions on the
http://docs.ceph.com page.

Best regards,
Stéphane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Where can I read documentation of Ceph version 0.94.5?

2017-02-27 Thread Roger Brown
replace "master" with the release codename, eg.
http://docs.ceph.com/docs/kraken/
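
For what it's worth, 0.94.x corresponds to the Hammer release, so at the time
of writing the versioned trees look like:

http://docs.ceph.com/docs/hammer/   # 0.94.x
http://docs.ceph.com/docs/jewel/    # 10.2.x
http://docs.ceph.com/docs/kraken/   # 11.2.x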


On Mon, Feb 27, 2017 at 12:45 PM Stéphane Klein 
wrote:

> Hi,
>
> how can I read old Ceph version documentation?
>
> http://docs.ceph.com I see only "master" documentation.
>
> I look for 0.94.5 documentation.
>
> Best regards,
> Stéphane
> --
> Stéphane Klein 
> blog: http://stephane-klein.info
> cv : http://cv.stephane-klein.info
> Twitter: http://twitter.com/klein_stephane
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Where can I read documentation of Ceph version 0.94.5?

2017-02-27 Thread Stéphane Klein
Hi,

How can I read the documentation for older Ceph versions?

On http://docs.ceph.com I see only the "master" documentation.

I am looking for the 0.94.5 documentation.

Best regards,
Stéphane
-- 
Stéphane Klein 
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] v0.94.10 Hammer release rpm signature issue

2017-02-27 Thread Andrew Schoen
Sorry about this - we did have signed rpm repos up, but a mistake on my part
overwrote those with the unsigned ones. This should be fixed now. Let me know
if you have any more issues with the repos.
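
For anyone wanting to double-check from the client side, something along these
lines should do (the key URL below is assumed to be the standard Ceph release
key location):

$ rpm --import 'https://download.ceph.com/keys/release.asc'
$ rpm -qp --queryformat '%{SIGPGP:pgpsig}\n' \
  http://download.ceph.com/rpm-hammer/el7/x86_64/ceph-0.94.10-0.el7.x86_64.rpm
$ yum clean metadata && yum update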

Thanks,
 Andrew

>
> On Mon, Feb 27, 2017 at 8:30 AM, Pietari Hyvärinen
>  wrote:
>> Hi!
>>
>> I have still some servers running hammer release and now new packages are 
>> not signed at all. yum update will fail on these new packages. I have seen 
>> attack vectors against systems with nonsigned packages.
>>
>>  rpm -qp --queryformat %{SIGPGP} 
>> http://download.ceph.com/rpm-hammer/el7/x86_64/ceph-0.94.10-0.el7.x86_64.rpm
>> (none)
>>
>> Previous hammer packages are signed.
>>
>> rpm -qp --queryformat %{SIGPGP} 
>> http://download.ceph.com/rpm-hammer/el7/x86_64/ceph-0.94.9-0.el7.x86_64.rpm
>> warning: 
>> http://download.ceph.com/rpm-hammer/el7/x86_64/ceph-0.94.9-0.el7.x86_64.rpm: 
>> Header V4 RSA/SHA1 Signature, key ID 460f3994: NOKEY
>> 89021c040001020006050257c5b9ab000a0910e84ac2c0460f39943f5b0ffe37aada0b3ec344dac9a2fc0d859d1d151e243f51592212327a29463d329d8156830cfca2bd53d69bdc8241d46697728b4b0a496e707aa895b8930da06b849ede1a9cd12d60bc3d70e77a88ba4edb429cd0f5e567864cb1e05c7b99b77cc0b60d25642ca522f37fecbfae562f7adbed0fa5e1780515790c0a3fa3a6ad4ea057813ac1ce28aa1611578665244be0cb8b4f6e8f4c34f29003d4a1bc2a2f161a357dd6bba7ac0783baae897c74fddd8c65b5dd9ac763c6534f2aa4485dbbbd46545b1f4d6cf890e7185460aab63bc9f318dda6b66b632254386b95fe338583c62b7fbe6966dd3722f416ef3ca8c4e10555fa50ac88da4976acea6d4317020974fbd550a4e9361214490d5df13ad7e3066d146a733de6c684fc596c58c8f50d9fe72fbe9f584d5eb49b5702a96bc8fb71d92144c54a9d6c16008b4c7e2326f2336a02dc4aac8e788716f196f955562ec99d4c91072cee8f02779d09b60e5992420419875ff655efe02e01458d4637055be344e377f594329a64324e0cb73b0eee0b7135433a1d313b643f71eff988995993f300ee10bc2076f44b663d28a536a5b6ba3c1b0a940a924af9c7b7508e5f2d0debe7445280d39737be18806d5fc710e241cc2e93a4d6a08481b8d054e5ac4!
>>  9a0876dce
>>  
>> eb233c8c02c2dd012ea5d79e8b0b5bcae8b7e11d08da999dcc69e9ab19b34f80e7a0da81825439d08fa10e8d
>>
>> ps. SHA1...
>>
>> --
>> Pietari Hyvärinen
>> Storage Platforms
>> CSC - IT Center for Science Ltd.
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>> the body of a message to majord...@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] krbd and kernel feature mismatches

2017-02-27 Thread Massimiliano Cuttini

But if everybody gets a kernel feature mismatch (me too)

... why not use rbd-nbd directly and forget about kernel-rbd?

All features, almost the same performance.

No?




On 27/02/2017 18:54, Ilya Dryomov wrote:

On Mon, Feb 27, 2017 at 6:47 PM, Shinobu Kinjo  wrote:

We already discussed this:

https://www.spinics.net/lists/ceph-devel/msg34559.html

What do you think of comment posted in that ML?
Would that make sense to you as well?

Sorry, I dropped the ball on this.  I'll try to polish and push my man
page branch this week.

Thanks,

 Ilya
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] krbd and kernel feature mismatches

2017-02-27 Thread Ilya Dryomov
On Mon, Feb 27, 2017 at 6:47 PM, Shinobu Kinjo  wrote:
> We already discussed this:
>
> https://www.spinics.net/lists/ceph-devel/msg34559.html
>
> What do you think of comment posted in that ML?
> Would that make sense to you as well?

Sorry, I dropped the ball on this.  I'll try to polish and push my man
page branch this week.

Thanks,

Ilya
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] krbd and kernel feature mismatches

2017-02-27 Thread Vasu Kulkarni
Thanks for that link. It will be nice to have that interface supported by the
.ko; regardless, I raised this: http://tracker.ceph.com/issues/19095

On Mon, Feb 27, 2017 at 9:47 AM, Shinobu Kinjo  wrote:

> We already discussed this:
>
> https://www.spinics.net/lists/ceph-devel/msg34559.html
>
> What do you think of comment posted in that ML?
> Would that make sense to you as well?
>
>
> On Tue, Feb 28, 2017 at 2:41 AM, Vasu Kulkarni 
> wrote:
> > Ilya,
> >
> > Many folks hit this and its quite difficult since the error is not
> properly
> > printed out(unless one scans syslogs), Is it possible to default the
> feature
> > to
> > the one that kernel supports or its not possible to handle that case?
> >
> > Thanks
> >
> > On Mon, Feb 27, 2017 at 5:59 AM, Ilya Dryomov 
> wrote:
> >>
> >> On Mon, Feb 27, 2017 at 2:37 PM, Simon Weald 
> wrote:
> >> > I've currently having some issues making some Jessie-based Xen hosts
> >> > talk to a Trusty-based cluster due to feature mismatch errors. Our
> >> > Trusty hosts are using 3.19.0-80 (the Vivid LTS kernel), and our
> Jessie
> >> > hosts were using the standard Jessie kernel (3.16). Volumes wouldn't
> >> > map, so I tried the kernel from jessie-backports (4.9.2-2~bpo8+1);
> still
> >> > no joy. I then tried compiling the latest kernel in the 4.9 branch
> >> > (4.9.12) from source with the Debian kernel config - still no joy. As
> I
> >> > understand it there have been a lot of changes in krbd which I should
> >> > have pulled in when building from source - am I missing something?
> Some
> >> > info about the Xen hosts:
> >> >
> >> > root@xen-host:~# uname -r
> >> > 4.9.12-internal
> >> >
> >> > root@xen-host:~# ceph -v
> >> > ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
> >> >
> >> > root@xen-host:~# rbd map -p cinder
> >> > volume-88188973-0f40-48a3-8a88-302d1cb5e093
> >> > rbd: sysfs write failed
> >> > RBD image feature set mismatch. You can disable features unsupported
> by
> >> > the kernel with "rbd feature disable".
> >> > In some cases useful info is found in syslog - try "dmesg | tail" or
> so.
> >> > rbd: map failed: (6) No such device or address
> >> >
> >> > root@xen-host:~# dmesg | grep 'unsupported'
> >> > [252723.885948] rbd: image volume-88188973-0f40-48a3-
> 8a88-302d1cb5e093:
> >> > image uses unsupported features: 0x38
> >> >
> >> > root@xen-host:~# rbd info -p cinder
> >> > volume-88188973-0f40-48a3-8a88-302d1cb5e093
> >> > rbd image 'volume-88188973-0f40-48a3-8a88-302d1cb5e093':
> >> > size 1024 MB in 256 objects
> >> > order 22 (4096 kB objects)
> >> > block_name_prefix: rbd_data.c6bd3c5f705426
> >> > format: 2
> >> > features: layering, exclusive-lock, object-map, fast-diff,
> >> > deep-flatten
> >> > flags:
> >>
> >> object-map, fast-diff, deep-flatten are still unsupported.
> >>
> >> > Do
> >> >
> >> > $ rbd feature disable 
> >> > deep-flatten,fast-diff,object-map,exclusive-lock
> >> >
> >> > to disable features unsupported by the kernel client.  If you are
> using
> >> > the
> >> > kernel client, you should create your images with
> >> >
> >> > $ rbd create --size  --image-feature layering 
> >> >
> >> > or add
> >> >
> >> > rbd default features = 3
> >> >
> >> > to ceph.conf on the client side.  (Setting rbd default features on the
> >> > OSDs will have no effect.)
> >>
> >> exclusive-lock is supported starting with 4.9.  The above becomes
> >>
> >> > $ rbd feature disable  deep-flatten,fast-diff,object-
> map
> >> > $ rbd create --size  --image-feature layering,exclusive-lock
> >> > 
> >> > rbd default features = 5
> >>
> >> if you want it.
> >>
> >> Thanks,
> >>
> >> Ilya
> >> ___
> >> ceph-users mailing list
> >> ceph-users@lists.ceph.com
> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> >
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] krbd and kernel feature mismatches

2017-02-27 Thread Shinobu Kinjo
We already discussed this:

https://www.spinics.net/lists/ceph-devel/msg34559.html

What do you think of the comment posted in that ML?
Would that make sense to you as well?


On Tue, Feb 28, 2017 at 2:41 AM, Vasu Kulkarni  wrote:
> Ilya,
>
> Many folks hit this and its quite difficult since the error is not properly
> printed out(unless one scans syslogs), Is it possible to default the feature
> to
> the one that kernel supports or its not possible to handle that case?
>
> Thanks
>
> On Mon, Feb 27, 2017 at 5:59 AM, Ilya Dryomov  wrote:
>>
>> On Mon, Feb 27, 2017 at 2:37 PM, Simon Weald  wrote:
>> > I've currently having some issues making some Jessie-based Xen hosts
>> > talk to a Trusty-based cluster due to feature mismatch errors. Our
>> > Trusty hosts are using 3.19.0-80 (the Vivid LTS kernel), and our Jessie
>> > hosts were using the standard Jessie kernel (3.16). Volumes wouldn't
>> > map, so I tried the kernel from jessie-backports (4.9.2-2~bpo8+1); still
>> > no joy. I then tried compiling the latest kernel in the 4.9 branch
>> > (4.9.12) from source with the Debian kernel config - still no joy. As I
>> > understand it there have been a lot of changes in krbd which I should
>> > have pulled in when building from source - am I missing something? Some
>> > info about the Xen hosts:
>> >
>> > root@xen-host:~# uname -r
>> > 4.9.12-internal
>> >
>> > root@xen-host:~# ceph -v
>> > ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
>> >
>> > root@xen-host:~# rbd map -p cinder
>> > volume-88188973-0f40-48a3-8a88-302d1cb5e093
>> > rbd: sysfs write failed
>> > RBD image feature set mismatch. You can disable features unsupported by
>> > the kernel with "rbd feature disable".
>> > In some cases useful info is found in syslog - try "dmesg | tail" or so.
>> > rbd: map failed: (6) No such device or address
>> >
>> > root@xen-host:~# dmesg | grep 'unsupported'
>> > [252723.885948] rbd: image volume-88188973-0f40-48a3-8a88-302d1cb5e093:
>> > image uses unsupported features: 0x38
>> >
>> > root@xen-host:~# rbd info -p cinder
>> > volume-88188973-0f40-48a3-8a88-302d1cb5e093
>> > rbd image 'volume-88188973-0f40-48a3-8a88-302d1cb5e093':
>> > size 1024 MB in 256 objects
>> > order 22 (4096 kB objects)
>> > block_name_prefix: rbd_data.c6bd3c5f705426
>> > format: 2
>> > features: layering, exclusive-lock, object-map, fast-diff,
>> > deep-flatten
>> > flags:
>>
>> object-map, fast-diff, deep-flatten are still unsupported.
>>
>> > Do
>> >
>> > $ rbd feature disable 
>> > deep-flatten,fast-diff,object-map,exclusive-lock
>> >
>> > to disable features unsupported by the kernel client.  If you are using
>> > the
>> > kernel client, you should create your images with
>> >
>> > $ rbd create --size  --image-feature layering 
>> >
>> > or add
>> >
>> > rbd default features = 3
>> >
>> > to ceph.conf on the client side.  (Setting rbd default features on the
>> > OSDs will have no effect.)
>>
>> exclusive-lock is supported starting with 4.9.  The above becomes
>>
>> > $ rbd feature disable  deep-flatten,fast-diff,object-map
>> > $ rbd create --size  --image-feature layering,exclusive-lock
>> > 
>> > rbd default features = 5
>>
>> if you want it.
>>
>> Thanks,
>>
>> Ilya
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] librbd logging

2017-02-27 Thread Laszlo Budai

Hello,

I have these settings in my /etc/ceph/ceph.conf:
[client]
  rbd cache = true
  rbd cache writethrough until flush = true
  admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
  log file = /var/log/qemu/qemu-guest-$pid.log
  rbd concurrent management ops = 20

Currently my system does not have the /var/log/qemu directory. Is it enough to
create that directory in order to get some logs from librbd? Or do I need to
restart the VM?

Kind regards,
Laszlo
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] krbd and kernel feature mismatches

2017-02-27 Thread Vasu Kulkarni
Ilya,

Many folks hit this, and it's quite difficult since the error is not properly
printed out (unless one scans syslogs). Is it possible to default the features
to the ones the kernel supports, or is it not possible to handle that case?

Thanks

On Mon, Feb 27, 2017 at 5:59 AM, Ilya Dryomov  wrote:

> On Mon, Feb 27, 2017 at 2:37 PM, Simon Weald  wrote:
> > I've currently having some issues making some Jessie-based Xen hosts
> > talk to a Trusty-based cluster due to feature mismatch errors. Our
> > Trusty hosts are using 3.19.0-80 (the Vivid LTS kernel), and our Jessie
> > hosts were using the standard Jessie kernel (3.16). Volumes wouldn't
> > map, so I tried the kernel from jessie-backports (4.9.2-2~bpo8+1); still
> > no joy. I then tried compiling the latest kernel in the 4.9 branch
> > (4.9.12) from source with the Debian kernel config - still no joy. As I
> > understand it there have been a lot of changes in krbd which I should
> > have pulled in when building from source - am I missing something? Some
> > info about the Xen hosts:
> >
> > root@xen-host:~# uname -r
> > 4.9.12-internal
> >
> > root@xen-host:~# ceph -v
> > ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
> >
> > root@xen-host:~# rbd map -p cinder
> > volume-88188973-0f40-48a3-8a88-302d1cb5e093
> > rbd: sysfs write failed
> > RBD image feature set mismatch. You can disable features unsupported by
> > the kernel with "rbd feature disable".
> > In some cases useful info is found in syslog - try "dmesg | tail" or so.
> > rbd: map failed: (6) No such device or address
> >
> > root@xen-host:~# dmesg | grep 'unsupported'
> > [252723.885948] rbd: image volume-88188973-0f40-48a3-8a88-302d1cb5e093:
> > image uses unsupported features: 0x38
> >
> > root@xen-host:~# rbd info -p cinder
> > volume-88188973-0f40-48a3-8a88-302d1cb5e093
> > rbd image 'volume-88188973-0f40-48a3-8a88-302d1cb5e093':
> > size 1024 MB in 256 objects
> > order 22 (4096 kB objects)
> > block_name_prefix: rbd_data.c6bd3c5f705426
> > format: 2
> > features: layering, exclusive-lock, object-map, fast-diff,
> deep-flatten
> > flags:
>
> object-map, fast-diff, deep-flatten are still unsupported.
>
> > Do
> >
> > $ rbd feature disable  deep-flatten,fast-diff,object-
> map,exclusive-lock
> >
> > to disable features unsupported by the kernel client.  If you are using
> the
> > kernel client, you should create your images with
> >
> > $ rbd create --size  --image-feature layering 
> >
> > or add
> >
> > rbd default features = 3
> >
> > to ceph.conf on the client side.  (Setting rbd default features on the
> > OSDs will have no effect.)
>
> exclusive-lock is supported starting with 4.9.  The above becomes
>
> > $ rbd feature disable  deep-flatten,fast-diff,object-map
> > $ rbd create --size  --image-feature layering,exclusive-lock
> 
> > rbd default features = 5
>
> if you want it.
>
> Thanks,
>
> Ilya
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph SElinux denials on OSD startup

2017-02-27 Thread Benjeman Meekhof
Hi,

I'm seeing some SELinux denials for ops to NVMe devices. They only occur at
OSD start; they are not ongoing. I'm not sure it's causing an issue, though I
did try a few tests with SELinux in permissive mode to see if it made any
difference to the startup/recovery CPU loading we have seen since the update
to Kraken (another thread). There doesn't seem to be a noticeable difference
in behaviour when we turn enforcing off - our default state is enforcing on,
and has been since the start of our cluster.

Familiar to anyone?  I can open a tracker issue if it isn't obviously
an issue on my end.

thanks,
Ben

---
type=AVC msg=audit(1487971555.994:39654): avc:  denied  { read } for
pid=470733 comm="ceph-osd" name="nvme0n1p13" dev="devtmpfs" ino=28742
scontext=system_u:system_r:ceph_t:s0
tcontext=system_u:object_r:nvme_device_t:s0 tclass=blk_file
type=AVC msg=audit(1487971555.994:39654): avc:  denied  { open } for
pid=470733 comm="ceph-osd" path="/dev/nvme0n1p13" dev="devtmpfs"
ino=28742 scontext=system_u:system_r:ceph_t:s0
tcontext=system_u:object_r:nvme_device_t:s0 tclass=blk_file
type=AVC msg=audit(1487971555.995:39655): avc:  denied  { getattr }
for  pid=470733 comm="ceph-osd" path="/dev/nvme0n1p13" dev="devtmpfs"
ino=28742 scontext=system_u:system_r:ceph_t:s0
tcontext=system_u:object_r:nvme_device_t:s0 tclass=blk_file
type=AVC msg=audit(1487971555.995:39656): avc:  denied  { ioctl } for
pid=470733 comm="ceph-osd" path="/dev/nvme0n1p13" dev="devtmpfs"
ino=28742 scontext=system_u:system_r:ceph_t:s0
tcontext=system_u:object_r:nvme_device_t:s0 tclass=blk_file

type=AVC msg=audit(1487978131.752:40937): avc:  denied  { getattr }
for  pid=528235 comm="fn_odsk_fstore" path="/dev/nvme0n1"
dev="devtmpfs" ino=16546 scontext=system_u:system_r:ceph_t:s0
tcontext=system_u:object_r:nvme_device_t:s0 tclass=blk_file
type=AVC msg=audit(1487978131.752:40938): avc:  denied  { read } for
pid=528235 comm="fn_odsk_fstore" name="nvme0n1p1" dev="devtmpfs"
ino=16549 scontext=system_u:system_r:ceph_t:s0
tcontext=system_u:object_r:nvme_device_t:s0 tclass=blk_file
type=AVC msg=audit(1487978131.752:40938): avc:  denied  { open } for
pid=528235 comm="fn_odsk_fstore" path="/dev/nvme0n1p1" dev="devtmpfs"
ino=16549 scontext=system_u:system_r:ceph_t:s0
tcontext=system_u:object_r:nvme_device_t:s0 tclass=blk_file
type=AVC msg=audit(1487978131.752:40939): avc:  denied  { ioctl } for
pid=528235 comm="fn_odsk_fstore" path="/devnvme0n1p1" dev="devtmpfs"
ino=16549 scontext=system_u:system_r:ceph_t:s0
tcontext=system_u:object_r:nvme_device_t:s0 tclass=blk_file
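
In case it helps while this gets sorted out in the ceph-selinux policy, a
stopgap local module can be generated from the recorded denials (the module
name is arbitrary; review the generated .te before installing it):

# ausearch -m avc -ts recent | audit2allow -M ceph-nvme-local
# semodule -i ceph-nvme-local.pp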
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Safely Upgrading OS on a live Ceph Cluster

2017-02-27 Thread Heller, Chris
I am attempting an operating system upgrade of a live Ceph cluster. Before I go
and screw up my production system, I have been testing on a smaller
installation, and I keep running into issues when bringing the CephFS metadata
server online.

My approach here has been to store all Ceph critical files on non-root 
partitions, so the OS install can safely proceed without overwriting any of the 
Ceph configuration or data.

Here is how I proceed:

First I bring down the Ceph FS via `ceph mds cluster_down`.
Second, to prevent OSDs from trying to repair data, I run `ceph osd set noout`
Finally I stop the ceph processes in the following order: ceph-mds, ceph-mon, 
ceph-osd

Note my cluster has 1 mds and 1 mon, and 7 osd.

I then install the new OS and then bring the cluster back up by walking the 
steps in reverse:

First I start the ceph processes in the following order: ceph-osd, ceph-mon, 
ceph-mds
Second I restore OSD functionality with `ceph osd unset noout`
Finally I bring up the Ceph FS via `ceph mds cluster_up`
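
As a rough sketch of that sequence (assuming the classic sysvinit/upstart
"ceph" wrapper on this 0.94 cluster; substitute your init system's commands):

ceph mds cluster_down
ceph osd set noout
service ceph stop mds
service ceph stop mon
service ceph stop osd
# ... reinstall the OS, keeping /etc/ceph and /var/lib/ceph intact ...
service ceph start osd
service ceph start mon
service ceph start mds
ceph osd unset noout
ceph mds cluster_up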

Everything works smoothly except the Ceph FS bring up. The MDS starts in the 
active:replay state and eventually crashes with the following backtrace:

starting mds.cuba at :/0
2017-02-27 16:56:08.233680 7f31daa3b7c0 -1 mds.-1.0 log_to_monitors 
{default=true}
2017-02-27 16:56:08.537714 7f31d30df700 -1 mds.0.sessionmap _load_finish got 
(2) No such file or directory
mds/SessionMap.cc : In function 'void 
SessionMap::_load_finish(int, ceph::bufferlist&)' thread 7f31d30df700 time 
2017-02-27 16:56:08.537739
mds/SessionMap.cc : 98: FAILED assert(0 == "failed to 
load sessionmap")
ceph version 0.94.7 (d56bdf93ced6b80b07397d57e3fa68fe68304432)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x8b) 
[0x98bb4b]
2: (SessionMap::_load_finish(int, ceph::buffer::list&)+0x2b4) [0x7df2a4]
3: (MDSIOContextBase::complete(int)+0x95) [0x7e34b5]
4: (Finisher::finisher_thread_entry()+0x190) [0x8bd6d0]
5: (()+0x8192) [0x7f31d9c8f192]
6: (clone()+0x6d) [0x7f31d919c51d]
NOTE: a copy of the executable, or `objdump -rdS ` is needed to 
interpret this.
2017-02-27 16:56:08.538493 7f31d30df700 -1 mds/SessionMap.cc 
: In function 'void SessionMap::_load_finish(int, 
ceph::bufferlist&)' thread 7f31d30df700 time 2017-02-27 16:56:08.537739
mds/SessionMap.cc : 98: FAILED assert(0 == "failed to 
load sessionmap")

ceph version 0.94.7 (d56bdf93ced6b80b07397d57e3fa68fe68304432)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x8b) 
[0x98bb4b]
2: (SessionMap::_load_finish(int, ceph::buffer::list&)+0x2b4) [0x7df2a4]
3: (MDSIOContextBase::complete(int)+0x95) [0x7e34b5]
4: (Finisher::finisher_thread_entry()+0x190) [0x8bd6d0]
5: (()+0x8192) [0x7f31d9c8f192]
6: (clone()+0x6d) [0x7f31d919c51d]
NOTE: a copy of the executable, or `objdump -rdS ` is needed to 
interpret this.

 -106> 2017-02-27 16:56:08.233680 7f31daa3b7c0 -1 mds.-1.0 log_to_monitors 
{default=true}
   -1> 2017-02-27 16:56:08.537714 7f31d30df700 -1 mds.0.sessionmap _load_finish 
got (2) No such file or directory
0> 2017-02-27 16:56:08.538493 7f31d30df700 -1 mds/SessionMap.cc 
: In function 'void SessionMap::_load_finish(int, 
ceph::bufferlist&)' thread 7f31d30df700 time 2017-02-27 16:56:08.537739
mds/SessionMap.cc : 98: FAILED assert(0 == "failed to 
load sessionmap")

ceph version 0.94.7 (d56bdf93ced6b80b07397d57e3fa68fe68304432)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x8b) 
[0x98bb4b]
2: (SessionMap::_load_finish(int, ceph::buffer::list&)+0x2b4) [0x7df2a4]
3: (MDSIOContextBase::complete(int)+0x95) [0x7e34b5]
4: (Finisher::finisher_thread_entry()+0x190) [0x8bd6d0]
5: (()+0x8192) [0x7f31d9c8f192]
6: (clone()+0x6d) [0x7f31d919c51d]
NOTE: a copy of the executable, or `objdump -rdS ` is needed to 
interpret this.

terminate called after throwing an instance of 'ceph::FailedAssertion'
*** Caught signal (Aborted) **
in thread 7f31d30df700
ceph version 0.94.7 (d56bdf93ced6b80b07397d57e3fa68fe68304432)
1: ceph_mds() [0x89984a]
2: (()+0x10350) [0x7f31d9c97350]
3: (gsignal()+0x39) [0x7f31d90d8c49]
4: (abort()+0x148) [0x7f31d90dc058]
5: (__gnu_cxx::__verbose_terminate_handler()+0x155) [0x7f31d99e3555]
6: (()+0x5e6f6) [0x7f31d99e16f6]
7: (()+0x5e723) [0x7f31d99e1723]
8: (()+0x5e942) [0x7f31d99e1942]
9: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x278) 
[0x98bd38]
10: (SessionMap::_load_finish(int, ceph::buffer::list&)+0x2b4) [0x7df2a4]
11: (MDSIOContextBase::complete(int)+0x95) [0x7e34b5]
12: (Finisher::finisher_thread_entry()+0x190) [0x8bd6d0]
13: (()+0x8192) [0x7f31d9c8f192]
14: (clone()+0x6d) [0x7f31d919c51d]
2017-02-27 16:56:08.540155 7f31d30df700 -1 *** Caught signal (Aborted) **
in thread 7f31d30df700

ceph version 0.94.7 (d56bdf93ced6b80b07397d57e3fa68fe68304432)
1: ceph_mds() [0x89984a]
2: (()+0x10350) [0x7f31d9c97350]
3: (gsi

Re: [ceph-users] Ceph on XenServer - RBD Image Size

2017-02-27 Thread Mike Jacobacci
Hi Michal,

Yes, I have considered that, but I felt it was easier to administer the VMs
without having to interact with Ceph every time. I have another smaller
image that I back up VM configs/data to for cold storage... The VMs are for
internal resources, so they are expendable.

I am totally open to change if I am doing something wrong.

Cheers,
Mike
>Hi Mike,
>
>Have you considered creating SR which doesn't make one huge RBD volume
>and on top of it creates LVM but instead creates separate RBD volumes
>for each VDI?
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] "STATE_CONNECTING_WAIT_BANNER_AND_IDENTIFY" showing in ceph -s

2017-02-27 Thread Gregory Farnum
On Sun, Feb 26, 2017 at 10:41 PM, nokia ceph  wrote:
> Hello,
>
> On a fresh installation ceph kraken 11.2.0 , we are facing below error in
> the "ceph -s" output.
>
> 
> 0 -- 10.50.62.152:0/675868622 >> 10.50.62.152:6866/13884 conn(0x7f576c002750
> :-1 s=STATE_CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0
> l=1)._process_connection connect claims to be 10.50.62.152:6866/1244305 not
> 10.50.62.152:6866/13884 - wrong node!
> 

As you see when comparing addresses, they differ only at the end, in
what we call the nonce. This most commonly just means that one end or
the other has a newer osd map epoch indicating the OSD went down and
it restarted itself. If it persists once they've all finished their
startup work, you may have an issue with your network config or
something.
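
A quick way to sanity-check that, for what it's worth (the address is taken
from the log line above):

$ ceph osd dump | grep '10.50.62.152:6866'   # which OSD owns that addr, and with what nonce
$ ceph -s                                    # confirm all OSDs are up and the map has settled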
-Greg

>
> May I know under what scenerio the above message will prompt in the screen.
> Also let me know what is the impact of this message.
>
> I suspect this message raised because of something wrong with the OSD
> creation.
>
> Env:-
> Kraken - 11.2.0 , 4 node , 3 mon
> RHEL 7.2
> EC 3+1 , 68 disks , bluestore
>
> Please suggest how to remove or skip these errors.
> FYI -
> https://github.com/ceph/ceph/blob/master/src/msg/async/AsyncConnection.h#L237
>
> Thanks
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RADOS as a simple object storage

2017-02-27 Thread Jan Kasprzak
Hello,

Gregory Farnum wrote:
: On Mon, Feb 20, 2017 at 11:57 AM, Jan Kasprzak  wrote:
: > Gregory Farnum wrote:
: > : On Mon, Feb 20, 2017 at 6:46 AM, Jan Kasprzak  wrote:
: > : >
: > : > I have been using CEPH RBD for a year or so as a virtual machine storage
: > : > backend, and I am thinking about moving our another subsystem to CEPH:
[...]
: > : > Here is some statistics from our biggest instance of the object storage:
: > : >
: > : > objects stored: 100_000_000
: > : > < 1024 bytes:10_000_000
: > : > 1k-64k bytes:80_000_000
: > : > 64k-4M bytes:10_000_000
: > : > 4M-256M bytes:1_000_000
: > : >> 256M bytes:10_000
: > : > biggest object:   15 GBytes
: > : >
: > : > Would it be feasible to put 100M to 1G objects as a native RADOS objects
: > : > into a single pool?
: > :
: > : This is well outside the object size RADOS is targeted or tested with;
: > : I'd expect issues. You might want to look at libradosstriper from the
: > : requirements you've mentioned.
: >
: > OK, thanks! Is there any documentation for libradosstriper?
: > I am looking for something similar to librados documentation:
: > http://docs.ceph.com/docs/master/rados/api/librados/
: 
: Not that I see, and I haven't used it myself, but the header file (see
: ceph/src/libradosstriper) seems to have reasonable function docs. It's
: a fairly thin wrapper around librados AFAIK.

OK, I have read the docs in the header file and the comment
near the top of RadosStriperImpl.cc:

https://github.com/ceph/ceph/blob/master/src/libradosstriper/RadosStriperImpl.cc#L33

If I understand it correctly, it looks like libradosstriper only splits
large stored objects into smaller pieces (RADOS objects), but does not
consolidate multiple small stored objects into larger RADOS objects.

So do you think I am ok with >10M tiny objects (smaller than 1KB)
and ~100,000,000 to 1,000,000,000 total objects, provided that I split
huge objects using libradosstriper?
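
For a very rough feel of the small-object side, the rados CLI can be used
directly (the pool name and PG count here are just placeholders):

$ ceph osd pool create objstore 2048
$ dd if=/dev/urandom of=/tmp/small.bin bs=1K count=1
$ rados -p objstore put small-obj /tmp/small.bin
$ rados -p objstore stat small-obj
# if your rados binary was built with libradosstriper support, the same tool
# also accepts a --striper flag for exercising striped (large) objects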

Thanks,

-Yenya

-- 
| Jan "Yenya" Kasprzak  |
| http://www.fi.muni.cz/~kas/ GPG: 4096R/A45477D5 |
Assuming that OpenSSL is written as carefully as Wietse's own code,
every 1000 lines introduce one additional bug into Postfix."   --TLS_README
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] krbd and kernel feature mismatches

2017-02-27 Thread Ilya Dryomov
On Mon, Feb 27, 2017 at 3:15 PM, Simon Weald  wrote:
> Hi Ilya
>
> On 27/02/17 13:59, Ilya Dryomov wrote:
>> On Mon, Feb 27, 2017 at 2:37 PM, Simon Weald  wrote:
>>> I've currently having some issues making some Jessie-based Xen hosts
>>> talk to a Trusty-based cluster due to feature mismatch errors. Our
>>> Trusty hosts are using 3.19.0-80 (the Vivid LTS kernel), and our Jessie
>>> hosts were using the standard Jessie kernel (3.16). Volumes wouldn't
>>> map, so I tried the kernel from jessie-backports (4.9.2-2~bpo8+1); still
>>> no joy. I then tried compiling the latest kernel in the 4.9 branch
>>> (4.9.12) from source with the Debian kernel config - still no joy. As I
>>> understand it there have been a lot of changes in krbd which I should
>>> have pulled in when building from source - am I missing something? Some
>>> info about the Xen hosts:
>>>
>>> root@xen-host:~# uname -r
>>> 4.9.12-internal
>>>
>>> root@xen-host:~# ceph -v
>>> ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
>>>
>>> root@xen-host:~# rbd map -p cinder
>>> volume-88188973-0f40-48a3-8a88-302d1cb5e093
>>> rbd: sysfs write failed
>>> RBD image feature set mismatch. You can disable features unsupported by
>>> the kernel with "rbd feature disable".
>>> In some cases useful info is found in syslog - try "dmesg | tail" or so.
>>> rbd: map failed: (6) No such device or address
>>>
>>> root@xen-host:~# dmesg | grep 'unsupported'
>>> [252723.885948] rbd: image volume-88188973-0f40-48a3-8a88-302d1cb5e093:
>>> image uses unsupported features: 0x38
>>>
>>> root@xen-host:~# rbd info -p cinder
>>> volume-88188973-0f40-48a3-8a88-302d1cb5e093
>>> rbd image 'volume-88188973-0f40-48a3-8a88-302d1cb5e093':
>>> size 1024 MB in 256 objects
>>> order 22 (4096 kB objects)
>>> block_name_prefix: rbd_data.c6bd3c5f705426
>>> format: 2
>>> features: layering, exclusive-lock, object-map, fast-diff, 
>>> deep-flatten
>>> flags:
>> object-map, fast-diff, deep-flatten are still unsupported.
>>
>>> Do
>>>
>>> $ rbd feature disable  
>>> deep-flatten,fast-diff,object-map,exclusive-lock
>>>
>>> to disable features unsupported by the kernel client.  If you are using the
>>> kernel client, you should create your images with
>>>
>>> $ rbd create --size  --image-feature layering 
>>>
>>> or add
>>>
>>> rbd default features = 3
>>>
>>> to ceph.conf on the client side.  (Setting rbd default features on the
>>> OSDs will have no effect.)
>> exclusive-lock is supported starting with 4.9.  The above becomes
>>
>>> $ rbd feature disable  deep-flatten,fast-diff,object-map
>>> $ rbd create --size  --image-feature layering,exclusive-lock 
>>> 
>>> rbd default features = 5
>> if you want it.
>>
>> Thanks,
>>
>> Ilya
>
>
> Ok, thanks, understood - I suspected it was still the kernel client
> which was causing it. As you may have guessed from the pool name, we are
> mapping volumes created by Openstack through to a separate platform - I
> would much rather not go altering features after Cinder has created them
> if at all possible, so this obviously rules krbd out. Which other rbd
> clients would you suggest we use? I've played with rbd-nbd, but it
> causes extra complexity as you can't query it for which volumes are
> mapped to which local nbd device. Additionally, I'm not sure if it'll
> have a significant performance overhead. I'd appreciate you thoughts!

If you add "rbd default features = 3" to the ceph.conf file that is fed
to cinder/glance on the openstack side, it will create images with just
layering enabled.

Thanks,

Ilya
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] VM hang on ceph

2017-02-27 Thread Jason Dillaman
How do you know you have a deadlock in ImageWatcher? I don't see that
in the provided log. Can you provide a backtrace for all threads?
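
One way to capture that, assuming gdb and the qemu debug symbols are
available (<pid> being the hung qemu process id):

$ gdb -p <pid> -batch -ex 'thread apply all bt' > /tmp/qemu-threads.txt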

On Sun, Feb 26, 2017 at 7:44 PM, Rajesh Kumar  wrote:
> Hi,
>
> We are using Ceph Jewel 10.2.5 stable release. We see deadlock with image
> watcher and VM becomes dead, at this point I can't ping the VM. Here are
> last few lines from qemu-rbd log. Has anyone seen this and is there a fix
> for it?
>
>
> 2017-02-26 11:05:54.647071 7faa927fc700 11 objectcacher flusher 34174976 /
> 33554432:  0 tx, 0 rx, 34174976 clean, 0 dirty (16777216 target, 25165824
> max)
> 2017-02-26 11:05:54.678375 7faa77fff700 11 objectcacher flusher 31911424 /
> 33554432:  0 tx, 0 rx, 31911424 clean, 0 dirty (16777216 target, 0 max)
> 2017-02-26 11:46:19.697590 7faaa898c700 20 librbd: flush 0x560f8cbbc450
> 2017-02-26 11:46:19.697604 7faaa898c700 10 librbd::ImageState:
> 0x560f8cbbb980 send_refresh_unlock
> 2017-02-26 11:46:19.697608 7faaa898c700 10 librbd::image::RefreshRequest:
> 0x7faaa4001030 send_v2_get_mutable_metadata
> 2017-02-26 11:46:19.700752 7faa93fff700 10 librbd::image::RefreshRequest:
> 0x7faaa4001030 handle_v2_get_mutable_metadata: r=0
> 2017-02-26 11:46:19.700775 7faa93fff700 10 librbd::image::RefreshRequest:
> 0x7faaa4001030 send_v2_get_flags
> 2017-02-26 11:46:19.702128 7faa93fff700 10 librbd::image::RefreshRequest:
> 0x7faaa4001030 handle_v2_get_flags: r=0
> 2017-02-26 11:46:19.702146 7faa93fff700 10 librbd::image::RefreshRequest:
> 0x7faaa4001030 send_v2_get_snapshots
> 2017-02-26 11:46:19.704675 7faa93fff700 10 librbd::image::RefreshRequest:
> 0x7faaa4001030 handle_v2_get_snapshots: r=0
> 2017-02-26 11:46:19.704704 7faa93fff700 20 librbd::ExclusiveLock:
> 0x7faa78014e10 is_lock_owner=0
> 2017-02-26 11:46:19.704709 7faa93fff700 10 librbd::image::RefreshRequest:
> 0x7faaa4001030 send_v2_apply
> 2017-02-26 11:46:19.704747 7faa937fe700 10 librbd::image::RefreshRequest:
> 0x7faaa4001030 handle_v2_apply
> 2017-02-26 11:46:19.704763 7faa937fe700 20 librbd::image::RefreshRequest:
> 0x7faaa4001030 apply
> 2017-02-26 11:46:19.704771 7faa937fe700 20 librbd::image::RefreshRequest:
> new snapshot id=2536 name=61 size=50029658112
> 2017-02-26 11:46:19.704801 7faa937fe700 10 librbd::image::RefreshRequest:
> 0x7faaa4001030 send_flush_aio
> 2017-02-26 11:46:19.704817 7faa937fe700 10 librbd::image::RefreshRequest:
> 0x7faaa4001030 handle_flush_aio: r=0
> 2017-02-26 11:46:19.704830 7faa937fe700 10 librbd::ImageState:
> 0x560f8cbbb980 handle_refresh: r=0
> 2017-02-26 11:48:01.389504 7faaa898c700 20 librbd: flush 0x560f8cbbc450
>
> Thanks,
>
> Rajesh
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 
Jason
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] krbd and kernel feature mismatches

2017-02-27 Thread Simon Weald
Hi Ilya

On 27/02/17 13:59, Ilya Dryomov wrote:
> On Mon, Feb 27, 2017 at 2:37 PM, Simon Weald  wrote:
>> I've currently having some issues making some Jessie-based Xen hosts
>> talk to a Trusty-based cluster due to feature mismatch errors. Our
>> Trusty hosts are using 3.19.0-80 (the Vivid LTS kernel), and our Jessie
>> hosts were using the standard Jessie kernel (3.16). Volumes wouldn't
>> map, so I tried the kernel from jessie-backports (4.9.2-2~bpo8+1); still
>> no joy. I then tried compiling the latest kernel in the 4.9 branch
>> (4.9.12) from source with the Debian kernel config - still no joy. As I
>> understand it there have been a lot of changes in krbd which I should
>> have pulled in when building from source - am I missing something? Some
>> info about the Xen hosts:
>>
>> root@xen-host:~# uname -r
>> 4.9.12-internal
>>
>> root@xen-host:~# ceph -v
>> ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
>>
>> root@xen-host:~# rbd map -p cinder
>> volume-88188973-0f40-48a3-8a88-302d1cb5e093
>> rbd: sysfs write failed
>> RBD image feature set mismatch. You can disable features unsupported by
>> the kernel with "rbd feature disable".
>> In some cases useful info is found in syslog - try "dmesg | tail" or so.
>> rbd: map failed: (6) No such device or address
>>
>> root@xen-host:~# dmesg | grep 'unsupported'
>> [252723.885948] rbd: image volume-88188973-0f40-48a3-8a88-302d1cb5e093:
>> image uses unsupported features: 0x38
>>
>> root@xen-host:~# rbd info -p cinder
>> volume-88188973-0f40-48a3-8a88-302d1cb5e093
>> rbd image 'volume-88188973-0f40-48a3-8a88-302d1cb5e093':
>> size 1024 MB in 256 objects
>> order 22 (4096 kB objects)
>> block_name_prefix: rbd_data.c6bd3c5f705426
>> format: 2
>> features: layering, exclusive-lock, object-map, fast-diff, 
>> deep-flatten
>> flags:
> object-map, fast-diff, deep-flatten are still unsupported.
>
>> Do
>>
>> $ rbd feature disable  
>> deep-flatten,fast-diff,object-map,exclusive-lock
>>
>> to disable features unsupported by the kernel client.  If you are using the
>> kernel client, you should create your images with
>>
>> $ rbd create --size  --image-feature layering 
>>
>> or add
>>
>> rbd default features = 3
>>
>> to ceph.conf on the client side.  (Setting rbd default features on the
>> OSDs will have no effect.)
> exclusive-lock is supported starting with 4.9.  The above becomes
>
>> $ rbd feature disable  deep-flatten,fast-diff,object-map
>> $ rbd create --size  --image-feature layering,exclusive-lock 
>> 
>> rbd default features = 5
> if you want it.
>
> Thanks,
>
> Ilya


Ok, thanks, understood - I suspected it was still the kernel client
which was causing it. As you may have guessed from the pool name, we are
mapping volumes created by Openstack through to a separate platform - I
would much rather not go altering features after Cinder has created them
if at all possible, so this obviously rules krbd out. Which other rbd
clients would you suggest we use? I've played with rbd-nbd, but it
causes extra complexity as you can't query it for which volumes are
mapped to which local nbd device. Additionally, I'm not sure if it'll
have a significant performance overhead. I'd appreciate your thoughts!

Thanks

Simon


-- 

PGP: http://www.simonweald.com/SimonWeald.asc
 https://pgp.mit.edu/pks/lookup?op=get&search=0x988E9858747ABE88

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] help with crush rule

2017-02-27 Thread Maged Mokhtar

Thank you for the clarification.
Apologies for my late reply. /maged



From: Brian Andrus 
Sent: Wednesday, February 22, 2017 2:23 AM
To: Maged Mokhtar 
Cc: ceph-users 
Subject: Re: [ceph-users] help with crush rule


I don't think a CRUSH rule exception is currently possible, but it makes sense 
to me for a feature request.


On Sat, Feb 18, 2017 at 6:16 AM, Maged Mokhtar  wrote:


  Hi,

  I have a need to support a small cluster with 3 hosts and 3 replicas given
  that in normal operation each replica will be placed on a separate host
  but in case one host dies then its replicas could be stored on separate
  osds on the 2 live hosts.

  I was hoping to write a rule that, in case it could only find 2 replicas on
  separate nodes, would emit those and then do another select/emit to place the
  remaining replica. Is this possible? I could not find a way to define an if
  condition, or to determine the size of the working vector actually returned.

  Cheers /maged

  ___
  ceph-users mailing list
  ceph-users@lists.ceph.com
  http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com






-- 

Brian Andrus | Cloud Systems Engineer | DreamHost
brian.and...@dreamhost.com | www.dreamhost.com___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Increase number of replicas per node

2017-02-27 Thread Maxime Guyot
Hi Massimiliano,

You’ll need to update the rule with something like this:

rule rep6 {
ruleset 1
type replicated
min_size 6
max_size 6
step take root
step choose firstn 3 type host
step choose firstn 2 type osd
step emit
}

Testing it with crushtool, and assuming a crush map where osd0-3 are in host1,
osd4-7 in host2 and osd8-11 in host3, I get the following results:
crushtool -i map.bin --test --rule 1 --show-mappings --x 1 --num-rep 6
CRUSH rule 1 x 1 [3,0,9,11,5,7]
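
In case it's useful, the usual round trip for getting such a rule into a live
cluster looks roughly like this (the pool name is a placeholder):

$ ceph osd getcrushmap -o map.bin
$ crushtool -d map.bin -o map.txt          # decompile, add the rep6 rule by hand
$ crushtool -c map.txt -o map-new.bin
$ crushtool -i map-new.bin --test --rule 1 --show-mappings --x 1 --num-rep 6
$ ceph osd setcrushmap -i map-new.bin
$ ceph osd pool set <pool> crush_ruleset 1
$ ceph osd pool set <pool> size 6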

Cheers,
Maxime

On 27/02/17 13:22, "ceph-users on behalf of Massimiliano Cuttini" 
 wrote:

Dear all,

i have 3 nodes with 4 OSD each.
And I would like to have 6 replicas.
So 2 replicas for nodes.

Does anybody know how to allow CRUSH to use twice the same node but 
different OSD?

Thanks,
Max


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] krbd and kernel feature mismatches

2017-02-27 Thread Ilya Dryomov
On Mon, Feb 27, 2017 at 2:37 PM, Simon Weald  wrote:
> I've currently having some issues making some Jessie-based Xen hosts
> talk to a Trusty-based cluster due to feature mismatch errors. Our
> Trusty hosts are using 3.19.0-80 (the Vivid LTS kernel), and our Jessie
> hosts were using the standard Jessie kernel (3.16). Volumes wouldn't
> map, so I tried the kernel from jessie-backports (4.9.2-2~bpo8+1); still
> no joy. I then tried compiling the latest kernel in the 4.9 branch
> (4.9.12) from source with the Debian kernel config - still no joy. As I
> understand it there have been a lot of changes in krbd which I should
> have pulled in when building from source - am I missing something? Some
> info about the Xen hosts:
>
> root@xen-host:~# uname -r
> 4.9.12-internal
>
> root@xen-host:~# ceph -v
> ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
>
> root@xen-host:~# rbd map -p cinder
> volume-88188973-0f40-48a3-8a88-302d1cb5e093
> rbd: sysfs write failed
> RBD image feature set mismatch. You can disable features unsupported by
> the kernel with "rbd feature disable".
> In some cases useful info is found in syslog - try "dmesg | tail" or so.
> rbd: map failed: (6) No such device or address
>
> root@xen-host:~# dmesg | grep 'unsupported'
> [252723.885948] rbd: image volume-88188973-0f40-48a3-8a88-302d1cb5e093:
> image uses unsupported features: 0x38
>
> root@xen-host:~# rbd info -p cinder
> volume-88188973-0f40-48a3-8a88-302d1cb5e093
> rbd image 'volume-88188973-0f40-48a3-8a88-302d1cb5e093':
> size 1024 MB in 256 objects
> order 22 (4096 kB objects)
> block_name_prefix: rbd_data.c6bd3c5f705426
> format: 2
> features: layering, exclusive-lock, object-map, fast-diff, 
> deep-flatten
> flags:

object-map, fast-diff, deep-flatten are still unsupported.
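
For reference, the unsupported-features mask in the dmesg output decodes
against the rbd feature bits as follows:

layering       = 0x01
striping       = 0x02
exclusive-lock = 0x04
object-map     = 0x08
fast-diff      = 0x10
deep-flatten   = 0x20

0x38 = 0x08 + 0x10 + 0x20 -> object-map + fast-diff + deep-flatten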

> Do
>
> $ rbd feature disable  
> deep-flatten,fast-diff,object-map,exclusive-lock
>
> to disable features unsupported by the kernel client.  If you are using the
> kernel client, you should create your images with
>
> $ rbd create --size  --image-feature layering 
>
> or add
>
> rbd default features = 3
>
> to ceph.conf on the client side.  (Setting rbd default features on the
> OSDs will have no effect.)

exclusive-lock is supported starting with 4.9.  The above becomes

> $ rbd feature disable  deep-flatten,fast-diff,object-map
> $ rbd create --size  --image-feature layering,exclusive-lock 
> 
> rbd default features = 5

if you want it.

Thanks,

Ilya
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] krbd and kernel feature mismatches

2017-02-27 Thread Simon Weald
I'm currently having some issues making some Jessie-based Xen hosts
talk to a Trusty-based cluster due to feature mismatch errors. Our
Trusty hosts are using 3.19.0-80 (the Vivid LTS kernel), and our Jessie
hosts were using the standard Jessie kernel (3.16). Volumes wouldn't
map, so I tried the kernel from jessie-backports (4.9.2-2~bpo8+1); still
no joy. I then tried compiling the latest kernel in the 4.9 branch
(4.9.12) from source with the Debian kernel config - still no joy. As I
understand it there have been a lot of changes in krbd which I should
have pulled in when building from source - am I missing something? Some
info about the Xen hosts:

root@xen-host:~# uname -r
4.9.12-internal

root@xen-host:~# ceph -v
ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)

root@xen-host:~# rbd map -p cinder
volume-88188973-0f40-48a3-8a88-302d1cb5e093
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by
the kernel with "rbd feature disable".
In some cases useful info is found in syslog - try "dmesg | tail" or so.
rbd: map failed: (6) No such device or address

root@xen-host:~# dmesg | grep 'unsupported'
[252723.885948] rbd: image volume-88188973-0f40-48a3-8a88-302d1cb5e093:
image uses unsupported features: 0x38

root@xen-host:~# rbd info -p cinder
volume-88188973-0f40-48a3-8a88-302d1cb5e093
rbd image 'volume-88188973-0f40-48a3-8a88-302d1cb5e093':
size 1024 MB in 256 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.c6bd3c5f705426
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
flags:

And some about the storage cluster:

root@mon01:~# ceph -v
ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)

root@mon01:~# uname -r
3.19.0-80-generic

For security reasons, we won't provide direct access to the Ceph cluster
for VMs, so they have to be mapped on the host and then attached
(they're customer machines). Can anyone point me in the right direction
as to how we can get this working?


-- 

PGP: http://www.simonweald.com/SimonWeald.asc
 https://pgp.mit.edu/pks/lookup?op=get&search=0x988E9858747ABE88
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Increase number of replicas per node

2017-02-27 Thread Massimiliano Cuttini

Dear all,

I have 3 nodes with 4 OSDs each,
and I would like to have 6 replicas,
so 2 replicas per node.

Does anybody know how to allow CRUSH to use the same node twice, but
different OSDs?


Thanks,
Max


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] rgw data migration

2017-02-27 Thread Малков Петр Викторович
Hi all!

2 clusters: jewel vs kraken

What is the best (or at least a working) way to migrate jewel rgw.pool.data ->
kraken rgw.pool.data, without touching the jewel cluster that is to be
upgraded?


--
Petr Malkov
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] deep-scrubbing

2017-02-27 Thread M Ranga Swami Reddy
Hello,
I use a Ceph cluster, and it shows the following deep-scrub PG distribution
from the "ceph pg dump" command:

  
   2000 Friday
   1000 Saturday
   4000  Sunday
==

On Friday, I disabled deep-scrub for some reason. In this case, will all of
Friday's PG deep-scrubs be performed on Saturday, or will they be done next
Friday?
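
Whichever way the scheduler handles it, it can be verified on the cluster
itself, e.g. (<pgid> is a placeholder, and the "config get" has to run on the
OSD host):

$ ceph pg <pgid> query | grep deep_scrub_stamp            # when this PG was last deep-scrubbed
$ ceph daemon osd.0 config get osd_deep_scrub_interval    # default is one week (604800s)
$ ceph tell osd.\* injectargs '--osd_deep_scrub_interval 1209600'   # e.g. stretch to two weeks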

Thanks
Swami
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Recovery ceph cluster down OS corruption

2017-02-27 Thread Massimiliano Cuttini

It happened to me that the OS got corrupted.
I just reinstalled the OS and deployed the monitor.
While I was about to zap and reinstall the OSDs, I found that they were
already running again.


Magically.
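
That "magic" is most likely ceph-disk's udev rules re-activating the prepared
data partitions at boot. Done by hand, it would look roughly like this (the
device name is just an example; ceph.conf and the bootstrap-osd keyring must
be in place first):

$ ceph-disk list                      # identify the old data/journal partitions
$ ceph-disk activate /dev/sdX1        # per partition, or: ceph-disk activate-all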



On 27/02/2017 10:07, Iban Cabrillo wrote:

Hi,

  Could I reinstall the server and try only to activate de OSD again 
(without zap and prepare)?

Regards, I

2017-02-24 18:25 GMT+01:00 Iban Cabrillo:


HI Eneko,
  yes the three mons are up and running.
  I do not have any other servers to plug-in these disk, but could
i reinstall the server and in some way mount the again the
osd-disk, ? I do not know the steps to do this

Regards, I

2017-02-24 14:52 GMT+01:00 Eneko Lacunza <elacu...@binovo.es>:

Hi Iban,

Is the monitor data safe? If it is, just install jewel in
other servers and plug in the OSD disks, it should work.

On 24/02/17 at 14:41, Iban Cabrillo wrote:

Hi,
  We have a serious issue. We have a mini cluster (jewel
version) with two server (Dell RX730), with 16Bays and the OS
intalled on dual 8 GB sd card, But this configuration is
working really really bad.


  The replication is 2, but yesterday one server crash and
this morning the other One, this is not the first time, but
others we had one server up and the data could be replicated
without any troubles, reinstalling the osdserver completely.

  Until I understand, Ceph data and metadata is still on bays
(data on SATA and metadata on 2 fast SSDs), I think only the
OS installed on SD cards is corrupted.

  Is there any way to solve this situation?
  Any Idea will be great!!

Regards, I


-- 


Iban Cabrillo Bartolome
Instituto de Fisica de Cantabria (IFCA)
Santander, Spain
Tel: +34942200969 
PGP PUBLIC KEY:
http://pgp.mit.edu/pks/lookup?op=get&search=0xD9DF0B3D6C8C08AC


Bertrand Russell: "The problem with the world is that the stupid are sure
of everything and the intelligent are full of doubts."



___
ceph-users mailing list
ceph-users@lists.ceph.com 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
Technical Director

Binovo IT Human Project, S.L.
Telf. 943493611
   943324914
Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
www.binovo.es 

___ ceph-users
mailing list ceph-users@lists.ceph.com

http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 

-- 


Iban Cabrillo Bartolome Instituto de Fisica de Cantabria (IFCA)
Santander, Spain Tel: +34942200969 
PGP PUBLIC KEY:
http://pgp.mit.edu/pks/lookup?op=get&search=0xD9DF0B3D6C8C08AC


Bertrand Russell: "The problem with the world is that the stupid are sure
of everything and the intelligent are full of doubts."

--
 
Iban Cabrillo Bartolome Instituto de Fisica de Cantabria (IFCA) 
Santander, Spain Tel: +34942200969
PGP PUBLIC KEY: 
http://pgp.mit.edu/pks/lookup?op=get&search=0xD9DF0B3D6C8C08AC

Bertrand Russell: "The problem with the world is that the stupid are sure
of everything and the intelligent are full of doubts."


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] rgw multisite resync only one bucket

2017-02-27 Thread Marius Vaitiekunas
On Mon, Feb 27, 2017 at 9:59 AM, Marius Vaitiekunas <
mariusvaitieku...@gmail.com> wrote:

>
>
> On Fri, Feb 24, 2017 at 6:35 PM, Yehuda Sadeh-Weinraub 
> wrote:
>
>> On Fri, Feb 24, 2017 at 3:59 AM, Marius Vaitiekunas
>>  wrote:
>> >
>> >
>> > On Wed, Feb 22, 2017 at 8:33 PM, Yehuda Sadeh-Weinraub <
>> yeh...@redhat.com>
>> > wrote:
>> >>
>> >> On Wed, Feb 22, 2017 at 6:19 AM, Marius Vaitiekunas
>> >>  wrote:
>> >> > Hi Cephers,
>> >> >
>> >> > We are testing rgw multisite solution between to DC. We have one
>> >> > zonegroup
>> >> > and to zones. At the moment all writes/deletes are done only to
>> primary
>> >> > zone.
>> >> >
>> >> > Sometimes not all the objects are replicated.. We've written
>> prometheus
>> >> > exporter to check replication status. It gives us each bucket object
>> >> > count
>> >> > from user perspective, because we have millions of objects and
>> hundreds
>> >> > of
>> >> > buckets. We just want to be sure, that everything is replicated
>> without
>> >> > using ceph internals like rgw admin api for now.
>> >> >
>> >> > Is it possible to initiate full resync of only one rgw bucket from
>> >> > master
>> >> > zone? What are the options about resync when things go wrong and
>> >> > replication
>> >> > misses some objects?
>> >> >
>> >> > We run latest jewel 10.2.5.
>> >>
>> >>
>> >> There's the 'radosgw-admin bucket sync init' command that you can run
>> >> on the specific bucket on the target zone. This will reinitialize the
>> >> sync state, so that when it starts syncing it will go through the
>> >> whole full sync process. Note that it shouldn't actually copy data
>> >> that already exists on the target. Also, in order to actually start
>> >> the sync, you'll need to have some change that would trigger the sync
>> >> on that bucket, e.g., create a new object there.
>> >>
>> >> Yehuda
>> >>
>> >
>> > Hi,
>> >
>> > I've tried to resync a bucket, but it didn't manage to resync a missing
>> > object. If I try to copy the missing object by hand into the secondary
>> > zone, I get asked to overwrite an existing object. It looks like the
>> > object is replicated, but is not in the bucket index. I've tried to
>> > check the bucket index with the --fix and --check-objects flags, but
>> > nothing changes. What else should I try?
>> >
>>
>> That's weird. Do you see anything when you run 'radosgw-admin bi list
>> --bucket='?
>>
>> Yehuda
>>
>
> 'radosgw-admin bi list --bucket=' gives me an error:
> 2017-02-27 08:55:30.861659 7f20c15779c0  0 error in read_id for id  : (2)
> No such file or directory
> 2017-02-27 08:55:30.861991 7f20c15779c0  0 error in read_id for id  : (2)
> No such file or directory
> ERROR: bi_list(): (5) Input/output error
>
> 'radosgw-admin bucket list --bucket=' successfully lists all the
> files except the missing ones.
>
> --
> Marius Vaitiekūnas
>


I've done some more investigation. These missing objects can be found in
the "rgw.buckets.data" pool, but the bucket index is not aware of them.
How does 'radosgw-admin bucket check -b  --fix --check-objects' work?
I guess it does not scan the "rgw.buckets.data" pool for "leaked" objects?
These unreplicated objects look the same to me as leaked ones :)
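
In case it makes the question clearer, this is roughly how I spot them (a
rough sketch only; the pool, bucket and marker names are placeholders, the
marker comes from 'radosgw-admin bucket stats', and multipart/shadow pieces
carry extra suffixes, so the raw listing needs some cleaning before
comparing):

  # what the bucket index knows about
  radosgw-admin bucket list --bucket=<bucket>
  # note the "marker" field for the next step
  radosgw-admin bucket stats --bucket=<bucket>
  # what actually sits in the data pool for this bucket
  rados -p <zone>.rgw.buckets.data ls | grep "^<marker>_"
  # objects present in the rados listing but absent from the index listing
  # are the ones that look "leaked"/missing from the user's point of view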

-- 
Marius Vaitiekūnas
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Recovery ceph cluster down OS corruption

2017-02-27 Thread Iban Cabrillo
Hi,

  Could I reinstall the server and try to only activate the OSDs again
(without zapping and preparing them)?
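
Is something along these lines the right direction? This is only a rough
sketch; the hostname and device names are placeholders, and it assumes the
data/journal partitions are intact and that ceph.conf plus the bootstrap-osd
keyring get copied back from a monitor first:

  # copy the cluster config and bootstrap-osd key from a mon node
  scp mon1:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
  scp mon1:/var/lib/ceph/bootstrap-osd/ceph.keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
  # the previously prepared partitions should show up as "ceph data"
  ceph-disk list
  # mount and start a single OSD from its data partition...
  ceph-disk activate /dev/sdb1
  # ...or let ceph-disk scan and activate everything it finds
  ceph-disk activate-all
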
Regards, I

2017-02-24 18:25 GMT+01:00 Iban Cabrillo :

> Hi Eneko,
>   Yes, the three mons are up and running.
>   I do not have any other servers to plug these disks into, but could I
> reinstall the server and somehow mount the OSD disks again? I do not know
> the steps to do this.
>
> Regards, I
>
> 2017-02-24 14:52 GMT+01:00 Eneko Lacunza :
>
>> Hi Iban,
>>
>> Is the monitor data safe? If it is, just install jewel on other servers
>> and plug in the OSD disks; it should work.
>>
>> On 24/02/17 at 14:41, Iban Cabrillo wrote:
>>
>> Hi,
>>   We have a serious issue. We have a mini cluster (jewel version) with
>> two servers (Dell RX730), with 16 bays and the OS installed on dual 8 GB
>> SD cards, but this configuration is working really badly.
>>
>>
>>   The replication is 2, but yesterday one server crashed and this morning
>> the other one did too. This is not the first time, but on previous
>> occasions we had one server up and the data could be replicated without
>> any trouble by reinstalling the OSD server completely.
>>
>>   As far as I understand, the Ceph data and metadata are still on the
>> bays (data on SATA and metadata on 2 fast SSDs); I think only the OS
>> installed on the SD cards is corrupted.
>>
>>   Is there any way to solve this situation?
>>   Any idea would be great!!
>>
>> Regards, I
>>
>>
>> --
>> 
>> 
>> Iban Cabrillo Bartolome
>> Instituto de Fisica de Cantabria (IFCA)
>> Santander, Spain
>> Tel: +34942200969
>> PGP PUBLIC KEY: http://pgp.mit.edu/pks/lookup?op=get&search=0xD9DF0B3D6C8C08AC
>> 
>> 
>> Bertrand Russell: "El problema con el mundo es que los estúpidos están
>> seguros de todo y los inteligentes están llenos de dudas"
>>
>>
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>>
>> --
>> Zuzendari Teknikoa / Director Técnico
>> Binovo IT Human Project, S.L.
>> Telf. 943493611
>>   943324914
>> Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
>> www.binovo.es
>>
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>
>
> --
> 
> 
> Iban Cabrillo Bartolome
> Instituto de Fisica de Cantabria (IFCA)
> Santander, Spain
> Tel: +34942200969
> PGP PUBLIC KEY: http://pgp.mit.edu/pks/lookup?op=get&search=0xD9DF0B3D6C8C08AC
> 
> 
> Bertrand Russell: "El problema con el mundo es que los estúpidos están
> seguros de todo y los inteligentes están llenos de dudas"
>
>


-- 

Iban Cabrillo Bartolome
Instituto de Fisica de Cantabria (IFCA)
Santander, Spain
Tel: +34942200969
PGP PUBLIC KEY:
http://pgp.mit.edu/pks/lookup?op=get&search=0xD9DF0B3D6C8C08AC

Bertrand Russell: "El problema con el mundo es que los estúpidos están
seguros de todo y los inteligentes están llenos de dudas"
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] rgw multisite resync only one bucket

2017-02-27 Thread Marius Vaitiekunas
On Fri, Feb 24, 2017 at 6:35 PM, Yehuda Sadeh-Weinraub 
wrote:

> On Fri, Feb 24, 2017 at 3:59 AM, Marius Vaitiekunas
>  wrote:
> >
> >
> > On Wed, Feb 22, 2017 at 8:33 PM, Yehuda Sadeh-Weinraub <
> yeh...@redhat.com>
> > wrote:
> >>
> >> On Wed, Feb 22, 2017 at 6:19 AM, Marius Vaitiekunas
> >>  wrote:
> >> > Hi Cephers,
> >> >
> >> > We are testing an rgw multisite solution between two DCs. We have one
> >> > zonegroup and two zones. At the moment all writes/deletes are done
> >> > only to the primary zone.
> >> >
> >> > Sometimes not all the objects are replicated. We've written a
> >> > Prometheus exporter to check replication status. It gives us each
> >> > bucket's object count from the user's perspective, because we have
> >> > millions of objects and hundreds of buckets. We just want to be sure
> >> > that everything is replicated, without using Ceph internals like the
> >> > rgw admin API for now.
> >> >
> >> > Is it possible to initiate a full resync of only one rgw bucket from
> >> > the master zone? What are the options for resyncing when things go
> >> > wrong and replication misses some objects?
> >> >
> >> > We run the latest jewel, 10.2.5.
> >>
> >>
> >> There's the 'radosgw-admin bucket sync init' command that you can run
> >> on the specific bucket on the target zone. This will reinitialize the
> >> sync state, so that when it starts syncing it will go through the
> >> whole full sync process. Note that it shouldn't actually copy data
> >> that already exists on the target. Also, in order to actually start
> >> the sync, you'll need to have some change that would trigger the sync
> >> on that bucket, e.g., create a new object there.
> >>
> >> Yehuda
> >>
> >
> > Hi,
> >
> > I've tried to resync a bucket, but it didn't manage to resync a missing
> > object. If I try to copy the missing object by hand into the secondary
> > zone, I get asked to overwrite an existing object. It looks like the
> > object is replicated, but is not in the bucket index. I've tried to
> > check the bucket index with the --fix and --check-objects flags, but
> > nothing changes. What else should I try?
> >
>
> That's weird. Do you see anything when you run 'radosgw-admin bi list
> --bucket='?
>
> Yehuda
>

'radosgw-admin bi list --bucket=' gives me an error:
2017-02-27 08:55:30.861659 7f20c15779c0  0 error in read_id for id  : (2)
No such file or directory
2017-02-27 08:55:30.861991 7f20c15779c0  0 error in read_id for id  : (2)
No such file or directory
ERROR: bi_list(): (5) Input/output error

'radosgw-admin bucket list --bucket=' successfully lists all the
files except the missing ones.
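
If it helps with debugging, a rough way to peek at the bucket index below
the radosgw-admin layer (the pool name is a placeholder and depends on the
zone configuration; the shard suffix only exists when index sharding is
enabled) would be:

  # the "id" field here is the bucket instance id used in the index object name
  radosgw-admin bucket stats --bucket=<bucket>
  # the per-bucket index objects are named .dir.<bucket_id> (or .dir.<bucket_id>.<shard>)
  rados -p <zone>.rgw.buckets.index ls | grep <bucket_id>
  # the omap keys of an index object are the object names the index knows about
  rados -p <zone>.rgw.buckets.index listomapkeys .dir.<bucket_id>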

-- 
Marius Vaitiekūnas
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com