Re: [ceph-users] librbd + rbd-nbd

2017-04-07 Thread Дмитрий Глушенок
Hi! No experience, but be ready to limit your RBD devices to 2 TB in size: http://tracker.ceph.com/issues/17219 > On 5 Apr 2017, at 22:15, Prashant Murthy wrote: > > Hi all, > > > I wanted to ask if anybody is using librbd (user mode lib) with rbd-nbd > (kernel module) on their Ceph c

[ceph-users] Steps to stop/restart entire ceph cluster

2017-04-07 Thread TYLin
Hi all, We’re trying to stop and then restart our ceph cluster. Our steps are as follows: stop cluster: stop mds -> stop osd -> stop mon; restart cluster: start mon -> start osd -> start mds. Our cluster gets stuck with cephfs degraded and the mds replaying its journal. After restart

[ceph-users] Warning or error messages

2017-04-07 Thread Cem Demirsoy
Hello ceph users, I get the following output when I execute the ceph health detail command. Could anyone assist me with this problem/warning? Thanks. HEALTH_WARN 41 pgs backfill_wait; 2 pgs backfilling; 23 pgs degraded; 21 pgs recovery_wait; 23 pgs stuck degraded; 64 pgs stuck unclean; 2 pgs stuck

Re: [ceph-users] rbd exclusive-lock feature not exclusive?

2017-04-07 Thread Matthias Ferdinand
Thanks for clarifying, would have been nice to protect against admin mistakes. Argument "--exclusive" is not available in jewel version of rbd-nbd. So I'd better be careful with those rbd volumes :-) Matthias On Thu, Apr 06, 2017 at 11:20:10PM -0400, Jason Dillaman wrote: > It's exclusive in that

Re: [ceph-users] Steps to stop/restart entire ceph cluster

2017-04-07 Thread K K
Hello, my cluster doesn't have an MDS. I recommend adding "ceph osd set noout" before shutting down the OSD daemons. I performed those operations and my cluster is working again. Friday, 7 April 2017, 13:47 +05:00 from TYLin: > >Hi all, > >We’re trying to stop and then restart our ceph cluster. Our s

[ceph-users] "RGW Metadata Search" and related

2017-04-07 Thread ceph . novice
Hi Cephers. We are trying to get "metadata search" working on our test cluster. This is one of two things we promised an internal customer for a PoC starting very soon... the second feature is, as I wrote already in another post, "object expiration" (lifecycle?!) [objects should be auto-remo

Re: [ceph-users] Steps to stop/restart entire ceph cluster

2017-04-07 Thread Serkan Çoban
The steps below are taken from the Red Hat documentation. Follow this procedure to shut down the Ceph cluster: 1. Stop the clients from using the RBD images/RADOS Gateway on this cluster or any other clients. 2. The cluster must be in a healthy state before proceeding. 3. Set the noout, no
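The preview above cuts off at the flag-setting step. A sketch of what that step commonly looks like (the exact flag list here is an assumption based on common practice, not the quoted Red Hat text; check the documentation for your release):

```shell
# Before powering down OSD nodes, set flags that suspend data movement
# and recovery so the cluster does not react to OSDs going offline.
ceph osd set noout
ceph osd set norecover
ceph osd set norebalance
ceph osd set nobackfill
ceph osd set nodown
ceph osd set pause
# ...shut down nodes; after restart, unset the flags in reverse order,
# e.g. ceph osd unset pause, ceph osd unset nodown, and so on.
```

These commands operate on a live cluster, so they are shown as a procedure sketch rather than something to paste blindly.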

Re: [ceph-users] best way to resolve 'stale+active+clean' after disk failure

2017-04-07 Thread David Welch
Thanks for the suggestions. There turned out to be an old testing pool with replication of 1 that was causing the issue. Removing the pool fixed the issue. On 04/06/2017 07:34 PM, Brad Hubbard wrote: What are size and min_size for pool '7'... and why? On Fri, Apr 7, 2017 at 4:20 AM, David We

Re: [ceph-users] librbd + rbd-nbd

2017-04-07 Thread Jason Dillaman
That limit has been removed for kernel versions that don't have a bug [1]. [1] http://tracker.ceph.com/issues/18335 On Fri, Apr 7, 2017 at 4:27 AM, Дмитрий Глушенок wrote: > Hi! > > No experience, but be ready to limit your RBD devices by 2 TB in size: > http://tracker.ceph.com/issues/17219 > >

Re: [ceph-users] Librbd logging

2017-04-07 Thread Laszlo Budai
Hi Jason, I've tried to enable LTTng tracing but with no success. I followed the steps from [1] and [2] but nothing happened... It's difficult to catch the event in order to dump the objecter requests, because by the time we observe the event it has already passed. Kind regards, Laszlo On 04.04.2017

Re: [ceph-users] Why is librados for Python so Neglected?

2017-04-07 Thread Kent Borg
Finally getting back to this. On 03/08/2017 05:08 PM, John Spray wrote: On Wed, Mar 8, 2017 at 9:28 PM, Kent Borg wrote: Python is such a great way to learn things. Such a shame the librados Python library is missing so much. It makes RADOS look so much more limited than it is. Specifically?

[ceph-users] null characters at the end of the file on hard reboot of VM

2017-04-07 Thread Laszlo Budai
Hello, we have observed that null characters are written into open files when hard-rebooting a VM. Is this a known issue? Our VM is using Ceph (0.94.10) storage. We have a script like this: while sleep 1; do date >> somefile ; done. If we hard reset the VM while the above line is runnin
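For anyone wanting to check a file for this symptom, a small shell sketch (the file path is illustrative) that counts NUL bytes the way you might after a hard reset:

```shell
# Create a sample file that ends in NUL bytes, mimicking the tail of a
# file written by "date >> somefile" when the VM was hard reset.
printf 'date output line\n\0\0\0' > /tmp/nulltest

# Count NUL bytes: tr -dc '\0' keeps only NULs, wc -c counts them.
nulls=$(tr -dc '\0' < /tmp/nulltest | wc -c)
echo "NUL bytes found: $((nulls))"
# → NUL bytes found: 3
```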

[ceph-users] Ceph drives not detected

2017-04-07 Thread Melzer Pinto
Hello, I am setting up a 9 node ceph cluster. For legacy reasons I'm using Ceph giant (0.87) on Fedora 21. Each OSD node has 4x4TB SATA drives with journals on a separate SSD. The server is an HP XL190 Gen 9 with latest firmware. The issue I'm seeing is that only 2 drives get detected and mount

Re: [ceph-users] Flapping OSDs

2017-04-07 Thread Vlad Blando
The issue is now fixed: it turned out I had unnecessary iptables rules. I flushed and deleted them all, restarted the OSDs, and now they are running normally. Regards, Vladimir FS Blando Cloud Operations Manager www.morphlabs.com On Fri, Apr 7, 2017 at 1:17 PM, Vlad Blando wrote: > Hi Brian,

Re: [ceph-users] CephFS: ceph-fuse segfaults

2017-04-07 Thread Patrick Donnelly
Hello Andras, On Wed, Mar 29, 2017 at 11:07 AM, Andras Pataki wrote: > Below is a crash we had on a few machines with the ceph-fuse client on the > latest Jewel release 10.2.6. A total of 5 ceph-fuse processes crashed more > or less the same way at different times. The full logs are at > http:/

Re: [ceph-users] Ceph drives not detected

2017-04-07 Thread Federico Lucifredi
Hi Melzer, Somewhat pointing out the obvious, but just in case: Ceph is in rapid development, and Giant is way behind where the state of the art is. If this is your first Ceph experience, it is definitely recommended you look at Jewel or even Kraken -- In Linux terms, it is almost as if you were r

Re: [ceph-users] null characters at the end of the file on hard reboot of VM

2017-04-07 Thread Peter Maloney
You should describe your configuration: krbd? librbd? cephfs? Is rbd_cache = true? Is rbd cache writethrough until flush = true? Is it KVM? Maybe the filesystem in the VM is relevant (I saw something similar testing cephfs: if I blacklisted a client and then force unmounted, I would get whole fi
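The two cache settings asked about above normally live in the client section of ceph.conf; a minimal fragment showing them (values are the defaults being asked about, not a recommendation):

```ini
[client]
# librbd writeback cache
rbd cache = true
# Behave as writethrough until the guest issues its first flush, so a
# guest that never flushes is not exposed to writeback data loss.
rbd cache writethrough until flush = true
```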

Re: [ceph-users] Working Ceph guide for Centos 7 ???

2017-04-07 Thread Mehmet
Perhaps ceph-deploy can work if you disable the "epel" repo? Purge everything and try again. On 7 April 2017 04:27:59 CEST, Travis Eddy wrote: >Here is what I tried: (several times) >Nothing works >The best I got was following the Ceph guide and adding >sudo yum install centos-release-ceph-jew

Re: [ceph-users] Ceph drives not detected

2017-04-07 Thread Melzer Pinto
Hi Federico, Yep, I understand that. This is for legacy reasons. We already have 3 older clusters running with a similar setup with minor differences (hardware, etc.) and this one is being set up to test something :( Thanks From: Federico Lucifredi Sent: Fri

Re: [ceph-users] how-to undo a "multisite" config

2017-04-07 Thread Trey Palmer
Hi Anton, I'm not sure exactly what you're trying to do. If you want to delete everything and start over, then just remove the zones, zonegroups and realms on both sides, and remove their pools. If you have a master zone you want to keep, but you want to remove the non-master zone that is mirror

[ceph-users] Running the Ceph Erasure Code Benchmark

2017-04-07 Thread Henry Ngo
Hello, I have a 6 node cluster and I have installed Ceph on the admin node from source. I want to run the benchmark test on my cluster. How do I do this? If I type ceph_erasure_code_benchmark on the command line it gives "parameter k is 0. But k needs to be > 0". What else do I need to set up

Re: [ceph-users] CephFS: ceph-fuse segfaults

2017-04-07 Thread Shinobu Kinjo
Please open a ticket so that we can track it. http://tracker.ceph.com/ Regards, On Sat, Apr 8, 2017 at 1:40 AM, Patrick Donnelly wrote: > Hello Andras, > > On Wed, Mar 29, 2017 at 11:07 AM, Andras Pataki > wrote: > > Below is a crash we had on a few machines with the ceph-fuse client on > the > > la

[ceph-users] python3-rados

2017-04-07 Thread Gerald Spencer
Do the rados bindings exist for Python 3? I see this sprinkled in various areas: https://github.com/ceph/ceph/pull/7621 https://github.com/ceph/ceph/blob/master/debian/python3-rados.install That being said, I cannot find said package.
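A quick way to check whether the Python 3 bindings are already present on a host (package names vary by distro; "python3-rados" is the Debian/Ubuntu name per the install file linked above, other names are assumptions):

```shell
# Try importing the rados module under Python 3; succeed either way so
# the check itself never fails.
python3 -c 'import rados; print("python3-rados available")' 2>/dev/null \
  || echo "python3-rados not installed"
```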

Re: [ceph-users] null characters at the end of the file on hard reboot of VM

2017-04-07 Thread Laszlo Budai
Hello Peter, Thank you for your answer. In our setup we have the virtual machines running in KVM and accessing the Ceph storage using librbd. The rbd cache is set with "writethrough until flush = true". Here is the result of ceph config show | grep cache: # ceph --admin-daemon /run/ceph/gue

Re: [ceph-users] Running the Ceph Erasure Code Benchmark

2017-04-07 Thread Shinobu Kinjo
You don't need to recompile that tool. Please see ``ceph_erasure_code_benchmark -h``. Some examples are: https://github.com/ceph/ceph/blob/master/src/erasure-code/isa/README#L31-L48 On Sat, Apr 8, 2017 at 8:21 AM, Henry Ngo wrote: > Hello, > > I have a 6 node cluster and I have installed Ceph on
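An invocation in the style of the isa README linked above (flag names follow that README; verify them against `ceph_erasure_code_benchmark -h` on your build, since options differ between releases):

```shell
# Benchmark encoding 1 MiB buffers with a 2+1 jerasure profile.
# k and m are passed as plugin parameters, which addresses the
# "parameter k is 0" error from the earlier message.
ceph_erasure_code_benchmark \
  --plugin jerasure \
  --workload encode \
  --iterations 100 \
  --size 1048576 \
  --parameter k=2 \
  --parameter m=1
```

This runs against a local Ceph build, not a live cluster, so it only needs the compiled binary and its plugins.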