Re: [ceph-users] Howto reduce the impact from cephx with small IO

2016-04-20 Thread Udo Lembke
Hi Mark, thanks for the links. If I search for wip-auth I find nothing on docs.ceph.com... does this mean that wip-auth never found its way into the Ceph code base?! But I'm wondering about the RHEL7 position at the link http://www.spinics.net/lists/ceph-devel/msg22416.html Unfortunately there are no

[ceph-users] Remove incomplete PG

2016-04-20 Thread Tyler Wilson
Hello All, Are there any documented steps to remove a placement group that is stuck inactive? I had a situation where we had two nodes go offline and tried rescuing with https://ceph.com/community/incomplete-pgs-oh-my/ however the PG remained inactive after importing and starting, now I am just tr
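For reference, the procedure in that blog post is built around ceph-objectstore-tool: export the PG from an OSD that still holds a copy and import it into an OSD in the acting set. A minimal sketch, assuming hammer-era tooling, placeholder OSD data paths and a placeholder PG id 1.2f, with the affected OSDs stopped first:

    # export the PG from a surviving (stopped) OSD
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
        --journal-path /var/lib/ceph/osd/ceph-0/journal \
        --pgid 1.2f --op export --file /tmp/pg.1.2f.export

    # import it into the (stopped) OSD that should hold the PG, then start it
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-5 \
        --journal-path /var/lib/ceph/osd/ceph-5/journal \
        --op import --file /tmp/pg.1.2f.export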

[ceph-users] RBD image mounted by command "rbd-nbd" the status is read-only.

2016-04-20 Thread Mika c
Hi cephers, I read the post "CEPH Jewel Preview" before. Following its steps I can map and mount an rbd image to /dev/nbd successfully. But I cannot write any files. The error message is "Read-only file system". I
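For context, the mapping steps from that preview post come down to roughly the following sketch (pool and image names are placeholders); checking whether the nbd device itself came up read-only is a useful first diagnostic:

    rbd create rbd/test --size 1024      # 1 GiB test image
    rbd-nbd map rbd/test                 # prints the device, e.g. /dev/nbd0
    blockdev --getro /dev/nbd0           # 1 means the block device is read-only
    mkfs.ext4 /dev/nbd0
    mount /dev/nbd0 /mnt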

Re: [ceph-users] mds segfault on cephfs snapshot creation

2016-04-20 Thread Yan, Zheng
On Wed, Apr 20, 2016 at 11:52 PM, Brady Deetz wrote: > > > On Wed, Apr 20, 2016 at 4:09 AM, Yan, Zheng wrote: >> >> On Wed, Apr 20, 2016 at 12:12 PM, Brady Deetz wrote: >> > As soon as I create a snapshot on the root of my test cephfs deployment >> > with >> > a single file within the root, my m

Re: [ceph-users] cephfs does not seem to properly free up space

2016-04-20 Thread Yan, Zheng
To delete these orphan objects, list all objects in the cephfs data pool. The object name has the form [inode number in hex].[offset in hex]. If an object has 'offset > 0' but there is no object with 'offset == 0' and the same inode number, it is an orphan object. It's not difficult to write a script to find
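A minimal sketch of such a script, assuming the data pool is named cephfs_data and that data objects follow the <inode hex>.<offset hex> naming described above:

    # list every object once, then report inodes that have objects at
    # offset > 0 but no head object at offset 00000000 (i.e. orphans)
    rados -p cephfs_data ls > /tmp/objects.txt

    awk -F. '{ ino[$1] = 1; if ($2 == "00000000") head[$1] = 1 }
             END { for (i in ino) if (!(i in head)) print i }' /tmp/objects.txt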

Re: [ceph-users] Howto reduce the impact from cephx with small IO

2016-04-20 Thread Mark Nelson
Hi Udo, There was quite a bit of discussion and some partial improvements to cephx performance about a year ago. You can see some of the discussion here: http://www.spinics.net/lists/ceph-devel/msg3.html and in particular these tests: http://www.spinics.net/lists/ceph-devel/msg22416.ht

[ceph-users] Howto reduce the impact from cephx with small IO

2016-04-20 Thread Udo Lembke
Hi, on a small test system (3 nodes (mon + osd), 6 OSDs, ceph 0.94.6) I compare with and without cephx. I use fio for that inside a VM on a host outside the 3 ceph nodes, with this command: fio --max-jobs=1 --numjobs=1 --readwrite=read --blocksize=4k --size=4G --direct=1 --name=fiojob_4k
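The cephx-off case is produced by disabling authentication in ceph.conf; a sketch of the relevant settings (all daemons and the client have to be restarted for the change to take effect):

    [global]
        auth_cluster_required = none
        auth_service_required = none
        auth_client_required = none
        # compare against the default of "cephx" for all three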

Re: [ceph-users] mds segfault on cephfs snapshot creation

2016-04-20 Thread Brady Deetz
On Wed, Apr 20, 2016 at 4:09 AM, Yan, Zheng wrote: > On Wed, Apr 20, 2016 at 12:12 PM, Brady Deetz wrote: > > As soon as I create a snapshot on the root of my test cephfs deployment > with > > a single file within the root, my mds server kernel panics. I understand > > that snapshots are not rec

Re: [ceph-users] Slow read on RBD mount, Hammer 0.94.5

2016-04-20 Thread Nick Fisk
> -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Udo Lembke > Sent: 20 April 2016 07:21 > To: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] Slow read on RBD mount, Hammer 0.94.5 > > Hi Mike, > I don't have experience with RBD moun

[ceph-users] Monitor not starting: Corruption: 12 missing files

2016-04-20 Thread Daniel.Balsiger
Dear Ceph Users, I have the following situation in my small 3-node cluster: --snip root@ceph2:~# ceph status cluster d1af2097-8535-42f2-ba8c-0667f90cab61 health HEALTH_WARN 1 mons down, quorum 0,1 ceph0,ceph1 monmap e1: 3 mons at {ceph0=10.0.0.30:6789/0,ceph1=10.0.0.31:

[ceph-users] EC Jerasure plugin and StreamScale Inc

2016-04-20 Thread Chandan Kumar Singh
Hi What does the ceph community think of StreamScale's claims on Jerasure? Is it possible to use the EC plugin for commercial purposes? What is your advice? Regards Chandan ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/list

Re: [ceph-users] ceph cache tier clean rate too low

2016-04-20 Thread Nick Fisk
I would advise you to take a look at osd_agent_max_ops (and osd_agent_max_low_ops); these should in theory dictate how many parallel threads will be used for flushing. Do a conf dump from the admin socket to see what you are currently running with and then bump them up to see if it helps. > ---
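A sketch of checking and raising those values at runtime, taking osd.0 as an example and with the numbers purely illustrative:

    # dump the current values via the admin socket
    ceph daemon osd.0 config show | grep osd_agent_max

    # raise them on all OSDs without a restart
    ceph tell osd.* injectargs '--osd_agent_max_ops 8 --osd_agent_max_low_ops 4'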

Re: [ceph-users] mds segfault on cephfs snapshot creation

2016-04-20 Thread Yan, Zheng
On Wed, Apr 20, 2016 at 12:12 PM, Brady Deetz wrote: > As soon as I create a snapshot on the root of my test cephfs deployment with > a single file within the root, my mds server kernel panics. I understand > that snapshots are not recommended. Is it beneficial to developers for me to > leave my c
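For context, a cephfs snapshot is taken by creating a directory under the hidden .snap directory of the mounted filesystem, e.g. (the mount point is a placeholder, and on releases where snapshots are gated the allow_new_snaps flag has to be enabled first):

    # ceph mds set allow_new_snaps true --yes-i-really-mean-it   (if required)
    mkdir /mnt/cephfs/.snap/mysnap     # create the snapshot
    rmdir /mnt/cephfs/.snap/mysnap     # remove it again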

Re: [ceph-users] cephfs does not seem to properly free up space

2016-04-20 Thread Simion Rad
Yes, we do use customized layout settings for most of our folders. We have some long running backup jobs which require high-throughput writes in order to finish in a reasonable amount of time. From: Florent B Sent: Wednesday, April 20, 2016 11:07 To: Yan,
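Those per-directory layouts are managed through virtual extended attributes; a sketch, with the directory, pool name and stripe count purely illustrative (a changed layout only affects files created afterwards):

    getfattr -n ceph.dir.layout /mnt/cephfs/backups
    setfattr -n ceph.dir.layout.stripe_count -v 4 /mnt/cephfs/backups
    setfattr -n ceph.dir.layout.pool -v cephfs_data /mnt/cephfs/backups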

[ceph-users] Multiple OSD crashing a lot

2016-04-20 Thread Blade Doyle
I get a lot of OSD crashes with the following stack trace - suggestions please: 0> 1969-12-31 16:04:55.455688 83ccf410 -1 osd/ReplicatedPG.cc: In function 'void ReplicatedPG::hit_set_trim(ReplicatedPG::RepGather*, unsigned int)' thread 83ccf410 time 295.324905 osd/ReplicatedPG.cc: 11011: FAILED assert

Re: [ceph-users] Build Raw Volume from Recovered RBD Objects

2016-04-20 Thread Wido den Hollander
> On 19 April 2016 at 19:15, Mike Dawson wrote: > > > All, > > I was called in to assist in a failed Ceph environment with the cluster > in an inoperable state. No rbd volumes are mountable/exportable due to > missing PGs. > > The previous operator was using a replica count of 2. The clust