[ceph-users] How to hide internal ip on ceph mount

2017-02-27 Thread gjprabu
Hi Team, how can we hide the internal IP address when mounting CephFS? For security reasons we need to hide the IP address. Also, we are running a Docker container on the base machine, and it shows the partition details there. Kindly let us know if there is any solution for this.
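A CephFS mount source has to name the monitors, so their address tends to show up in `mount`/`df` output. One commonly suggested mitigation (a sketch only; the hostnames below are made up, and the kernel client may still resolve and display the IP, so verify on your setup) is to use DNS names for the monitors in /etc/fstab:

```
mon1.ceph.internal:6789,mon2.ceph.internal:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime  0  2
```

For the Docker side, not bind-mounting the CephFS path into containers that do not need it keeps the mount details out of their view.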

Re: [ceph-users] librbd logging

2017-02-27 Thread Jason Dillaman
On Mon, Feb 27, 2017 at 12:36 PM, Laszlo Budai wrote: > Currently my system does not have the /var/log/qemu directory. Is it enough > to create that directory in order to have some logs from librbd? Or do I need > to restart the VM? If you have the admin socket file, you

Re: [ceph-users] krbd and kernel feature mismatches

2017-02-27 Thread Massimiliano Cuttini
Not really tested, but searching around, many people say that at the moment RBD-NBD has more or less the same performance, while RBD-FUSE is really slow. At the moment I can no longer test the kernel version, because downgrading/re-upgrading the CRUSH tunables would be a nightmare. But you can try. On

Re: [ceph-users] Where can I read documentation of Ceph version 0.94.5?

2017-02-27 Thread Stéphane Klein
2017-02-27 20:53 GMT+01:00 Roger Brown : > replace "master" with the release codename, e.g. http://docs.ceph.com/docs/ > kraken/ > > Thanks. I suggest adding a list of doc versions on the http://docs.ceph.com page. Best regards, Stéphane

Re: [ceph-users] Where can I read documentation of Ceph version 0.94.5?

2017-02-27 Thread Roger Brown
replace "master" with the release codename, e.g. http://docs.ceph.com/docs/kraken/ On Mon, Feb 27, 2017 at 12:45 PM Stéphane Klein wrote: > Hi, > > how can I read documentation for old Ceph versions? > > On http://docs.ceph.com I see only the "master" documentation. > > I look
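For reference, 0.94.x is the "hammer" release, so following the tip above the versioned URL is built by substituting the release codename for "master". A trivial sketch (the URL layout is taken from the reply above):

```python
def docs_url(codename: str) -> str:
    """Versioned Ceph docs URL, per the 'replace master with the codename' tip."""
    return "http://docs.ceph.com/docs/{}/".format(codename)

# 0.94.x corresponds to the "hammer" release codename.
print(docs_url("hammer"))  # -> http://docs.ceph.com/docs/hammer/
```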

[ceph-users] Where can I read documentation of Ceph version 0.94.5?

2017-02-27 Thread Stéphane Klein
Hi, how can I read documentation for old Ceph versions? On http://docs.ceph.com I see only the "master" documentation. I am looking for the 0.94.5 documentation. Best regards, Stéphane -- Stéphane Klein blog: http://stephane-klein.info cv : http://cv.stephane-klein.info Twitter:

Re: [ceph-users] v0.94.10 Hammer release rpm signature issue

2017-02-27 Thread Andrew Schoen
Sorry about this, we did have signed rpm repos up, but a mistake on my part overwrote them with the unsigned ones. This should be fixed now. Let me know if you have any more issues with the repos. Thanks, Andrew > > On Mon, Feb 27, 2017 at 8:30 AM, Pietari Hyvärinen >

Re: [ceph-users] krbd and kernel feature mismatches

2017-02-27 Thread Massimiliano Cuttini
But if everybody gets a kernel feature mismatch (me too)... why not use rbd-nbd directly and forget about kernel rbd? All features, almost the same performance. No? On 27/02/2017 18:54, Ilya Dryomov wrote: On Mon, Feb 27, 2017 at 6:47 PM, Shinobu Kinjo wrote: We already

Re: [ceph-users] krbd and kernel feature mismatches

2017-02-27 Thread Ilya Dryomov
On Mon, Feb 27, 2017 at 6:47 PM, Shinobu Kinjo wrote: > We already discussed this: > > https://www.spinics.net/lists/ceph-devel/msg34559.html > > What do you think of comment posted in that ML? > Would that make sense to you as well? Sorry, I dropped the ball on this. I'll

Re: [ceph-users] krbd and kernel feature mismatches

2017-02-27 Thread Vasu Kulkarni
Thanks for that link. It would be nice to have that interface supported by the .ko module; regardless, I raised http://tracker.ceph.com/issues/19095 On Mon, Feb 27, 2017 at 9:47 AM, Shinobu Kinjo wrote: > We already discussed this: > >

Re: [ceph-users] krbd and kernel feature mismatches

2017-02-27 Thread Shinobu Kinjo
We already discussed this: https://www.spinics.net/lists/ceph-devel/msg34559.html What do you think of comment posted in that ML? Would that make sense to you as well? On Tue, Feb 28, 2017 at 2:41 AM, Vasu Kulkarni wrote: > Ilya, > > Many folks hit this and its quite

[ceph-users] librbd logging

2017-02-27 Thread Laszlo Budai
Hello, I have these settings in my /etc/ceph/ceph.conf:

[client]
rbd cache = true
rbd cache writethrough until flush = true
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/qemu/qemu-guest-$pid.log
rbd concurrent management ops = 20

Currently

Re: [ceph-users] krbd and kernel feature mismatches

2017-02-27 Thread Vasu Kulkarni
Ilya, Many folks hit this, and it's quite difficult since the error is not properly printed out (unless one scans the syslogs). Is it possible to default the features to the ones the kernel supports, or is it not possible to handle that case? Thanks On Mon, Feb 27, 2017 at 5:59 AM, Ilya Dryomov

[ceph-users] Ceph SElinux denials on OSD startup

2017-02-27 Thread Benjeman Meekhof
Hi, I'm seeing some SELinux denials for ops to nvme devices. They only occur at OSD start; they are not ongoing. I'm not sure it's causing an issue, though I did try a few tests with SELinux in permissive mode to see if it made any difference to the startup/recovery CPU loading we have seen since

[ceph-users] Safely Upgrading OS on a live Ceph Cluster

2017-02-27 Thread Heller, Chris
I am attempting an operating system upgrade of a live Ceph cluster. Before I go and screw up my production system, I have been testing on a smaller installation, and I keep running into issues when bringing the Ceph FS metadata server online. My approach here has been to store all Ceph-critical

Re: [ceph-users] Ceph on XenServer - RBD Image Size

2017-02-27 Thread Mike Jacobacci
Hi Michal, Yes I have considered that, but I felt it was easier to administer the VMs without having to interact with Ceph every time. I have another smaller image that I back up VM configs/data to for cold storage... The VMs are for internal resources, so they are expendable. I am totally

Re: [ceph-users] "STATE_CONNECTING_WAIT_BANNER_AND_IDENTIFY" showing in ceph -s

2017-02-27 Thread Gregory Farnum
On Sun, Feb 26, 2017 at 10:41 PM, nokia ceph wrote: > Hello, > > On a fresh installation of ceph kraken 11.2.0, we are facing the below error in > the "ceph -s" output. > > > 0 -- 10.50.62.152:0/675868622 >> 10.50.62.152:6866/13884 conn(0x7f576c002750 > :-1

Re: [ceph-users] RADOS as a simple object storage

2017-02-27 Thread Jan Kasprzak
Hello, Gregory Farnum wrote: : On Mon, Feb 20, 2017 at 11:57 AM, Jan Kasprzak wrote: : > Gregory Farnum wrote: : > : On Mon, Feb 20, 2017 at 6:46 AM, Jan Kasprzak wrote: : > : > : > : > I have been using CEPH RBD for a year or so as a virtual machine

Re: [ceph-users] krbd and kernel feature mismatches

2017-02-27 Thread Ilya Dryomov
On Mon, Feb 27, 2017 at 3:15 PM, Simon Weald wrote: > Hi Ilya > > On 27/02/17 13:59, Ilya Dryomov wrote: >> On Mon, Feb 27, 2017 at 2:37 PM, Simon Weald wrote: >>> I'm currently having some issues making some Jessie-based Xen hosts >>> talk to a

Re: [ceph-users] VM hang on ceph

2017-02-27 Thread Jason Dillaman
How do you know you have a deadlock in ImageWatcher? I don't see that in the provided log. Can you provide a backtrace for all threads? On Sun, Feb 26, 2017 at 7:44 PM, Rajesh Kumar wrote: > Hi, > > We are using Ceph Jewel 10.2.5 stable release. We see deadlock with image >

Re: [ceph-users] krbd and kernel feature mismatches

2017-02-27 Thread Simon Weald
Hi Ilya On 27/02/17 13:59, Ilya Dryomov wrote: > On Mon, Feb 27, 2017 at 2:37 PM, Simon Weald wrote: >> I'm currently having some issues making some Jessie-based Xen hosts >> talk to a Trusty-based cluster due to feature mismatch errors. Our >> Trusty hosts are using

Re: [ceph-users] help with crush rule

2017-02-27 Thread Maged Mokhtar
Thank you for the clarification. Apologies for my late reply. /maged From: Brian Andrus Sent: Wednesday, February 22, 2017 2:23 AM To: Maged Mokhtar Cc: ceph-users Subject: Re: [ceph-users] help with crush rule I don't think a CRUSH rule exception is currently possible, but it makes sense

Re: [ceph-users] Increase number of replicas per node

2017-02-27 Thread Maxime Guyot
Hi Massimiliano, You’ll need to update the rule with something like this:

rule rep6 {
    ruleset 1
    type replicated
    min_size 6
    max_size 6
    step take root
    step choose firstn 3 type host
    step choose firstn 2 type osd
    step emit
}

Testing
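To illustrate what that rule does (not how CRUSH actually places data; real CRUSH selects buckets pseudo-randomly by hashing the PG id, whereas this toy sketch just takes the first candidates), choosing 3 hosts and then 2 OSDs within each yields 6 replicas with exactly 2 per host:

```python
def place_replicas(host_map, n_hosts=3, n_osds=2):
    """Toy illustration of 'step choose firstn 3 type host /
    step choose firstn 2 type osd'. Real CRUSH hashes the PG id to
    pick buckets; here we take the first candidates deterministically
    just to show the shape of the result."""
    chosen_hosts = sorted(host_map)[:n_hosts]
    return [osd for h in chosen_hosts for osd in host_map[h][:n_osds]]

# Hypothetical 3-host cluster with 3 OSDs per host.
cluster = {
    "host1": ["osd.0", "osd.1", "osd.2"],
    "host2": ["osd.3", "osd.4", "osd.5"],
    "host3": ["osd.6", "osd.7", "osd.8"],
}
print(place_replicas(cluster))
# -> ['osd.0', 'osd.1', 'osd.3', 'osd.4', 'osd.6', 'osd.7']
```

The point of the two-step choose is the constraint it encodes: 6 replicas total, never more than 2 on any single host.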

Re: [ceph-users] krbd and kernel feature mismatches

2017-02-27 Thread Ilya Dryomov
On Mon, Feb 27, 2017 at 2:37 PM, Simon Weald wrote: > I'm currently having some issues making some Jessie-based Xen hosts > talk to a Trusty-based cluster due to feature mismatch errors. Our > Trusty hosts are using 3.19.0-80 (the Vivid LTS kernel), and our Jessie > hosts

[ceph-users] krbd and kernel feature mismatches

2017-02-27 Thread Simon Weald
I'm currently having some issues making some Jessie-based Xen hosts talk to a Trusty-based cluster due to feature mismatch errors. Our Trusty hosts are using 3.19.0-80 (the Vivid LTS kernel), and our Jessie hosts were using the standard Jessie kernel (3.16). Volumes wouldn't map, so I tried the
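When the kernel refuses to map an image, dmesg typically reports an "image uses unsupported features" line with a hex mask. As a sketch of how to read that mask (the bit values follow the standard RBD feature flags; the exact dmesg wording varies by kernel):

```python
# Decode an RBD "unsupported features" mask from dmesg into feature names.
RBD_FEATURES = {
    1: "layering",
    2: "striping",
    4: "exclusive-lock",
    8: "object-map",
    16: "fast-diff",
    32: "deep-flatten",
    64: "journaling",
}

def decode_features(mask: int) -> list:
    """Return the names of the feature bits set in the given mask."""
    return [name for bit, name in sorted(RBD_FEATURES.items()) if mask & bit]

# Example: an old kernel that only supports layering fails to map a
# Jewel-default image with mask 0x38 (object-map + fast-diff + deep-flatten).
print(decode_features(0x38))
# -> ['object-map', 'fast-diff', 'deep-flatten']
```

The decoded features can then be turned off with `rbd feature disable <pool>/<image> <feature>...` before mapping (check the syntax of your rbd version).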

[ceph-users] rgw data migration

2017-02-27 Thread Малков Петр Викторович
Hi all! 2 clusters: jewel vs kraken. What is the best (not best, but working) way to migrate jewel rgw.pool.data -> kraken rgw.pool.data, without touching the jewel cluster to be upgraded? -- Petr Malkov

[ceph-users] deep-scrubbing

2017-02-27 Thread M Ranga Swami Reddy
Hello, I use a Ceph cluster, and it shows the following deep-scrub PG distribution from the "ceph pg dump" command: 2000 Friday, 1000 Saturday, 4000 Sunday. On Friday, I had disabled deep-scrub for some reason. In this case, will all of Friday's PG deep-scrubs be performed on
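A distribution like the one above can be derived by grouping the deep-scrub timestamps from `ceph pg dump` by weekday. A minimal sketch, assuming the timestamps look like the deep-scrub stamp column of `ceph pg dump` (column position and format vary by release):

```python
from collections import Counter
from datetime import datetime

def scrubs_per_weekday(stamps):
    """Count PGs by the weekday of their last deep scrub.

    `stamps` are strings like '2017-02-24 03:12:45.000000', as seen in
    the deep-scrub stamp column of `ceph pg dump` (format may vary).
    """
    days = Counter()
    for s in stamps:
        # Drop the fractional seconds before parsing.
        d = datetime.strptime(s.split(".")[0], "%Y-%m-%d %H:%M:%S")
        days[d.strftime("%A")] += 1
    return dict(days)

print(scrubs_per_weekday([
    "2017-02-24 03:12:45.000000",  # Friday
    "2017-02-25 04:01:02.000000",  # Saturday
    "2017-02-26 02:30:00.000000",  # Sunday
    "2017-02-24 23:59:59.000000",  # Friday
]))
# -> {'Friday': 2, 'Saturday': 1, 'Sunday': 1}
```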

Re: [ceph-users] Recovery ceph cluster down OS corruption

2017-02-27 Thread Massimiliano Cuttini
It happened to me that the OS was corrupted. I just reinstalled the OS and deployed the monitor. While I was about to zap and reinstall the OSDs, I found that my OSDs were already running again. Magically. On 27/02/2017 10:07, Iban Cabrillo wrote: Hi, Could I reinstall the server and try only

Re: [ceph-users] rgw multisite resync only one bucket

2017-02-27 Thread Marius Vaitiekunas
On Mon, Feb 27, 2017 at 9:59 AM, Marius Vaitiekunas < mariusvaitieku...@gmail.com> wrote: > > > On Fri, Feb 24, 2017 at 6:35 PM, Yehuda Sadeh-Weinraub > wrote: > >> On Fri, Feb 24, 2017 at 3:59 AM, Marius Vaitiekunas >> wrote: >> > >> > >> > On

Re: [ceph-users] Recovery ceph cluster down OS corruption

2017-02-27 Thread Iban Cabrillo
Hi, Could I reinstall the server and try only to activate the OSDs again (without zap and prepare)? Regards, I 2017-02-24 18:25 GMT+01:00 Iban Cabrillo : > HI Eneko, > yes the three mons are up and running. > I do not have any other servers to plug in these disks,

Re: [ceph-users] rgw multisite resync only one bucket

2017-02-27 Thread Marius Vaitiekunas
On Fri, Feb 24, 2017 at 6:35 PM, Yehuda Sadeh-Weinraub wrote: > On Fri, Feb 24, 2017 at 3:59 AM, Marius Vaitiekunas > wrote: > > > > > > On Wed, Feb 22, 2017 at 8:33 PM, Yehuda Sadeh-Weinraub < > yeh...@redhat.com> > > wrote: > >> > >> On Wed, Feb