Re: [ceph-users] Register ceph daemons on initctl

2016-11-17 Thread 钟佳佳
if you built from the git repo tag v10.2.3, refer to the links below from ceph.com: http://docs.ceph.com/docs/emperor/install/build-packages/ http://docs.ceph.com/docs/jewel/rados/operations/operating/#running-ceph-with-upstart if you built from ceph-10.2.3.tar.gz, it seems there's no debian stuff for d
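
For reference, the jewel Upstart page linked above boils down to job-control commands like these (a minimal sketch, assuming the stock ceph Upstart job files are installed under /etc/init/):

  # start/stop every ceph daemon configured on this node
  sudo start ceph-all
  sudo stop ceph-all

  # or control one daemon by type and id
  sudo start ceph-osd id=0
  sudo status ceph-osd id=0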

Re: [ceph-users] Ceph Volume Issue

2016-11-17 Thread Mehul1.Jani
Thanks everyone for your inputs. Below is a small write-up which I wanted to share with everyone in the Ceph user community. Summary of the Ceph issue with volumes. Our setup: as mentioned earlier, in our setup we have OpenStack MOS 6.0 integrated with a Ceph storage cluster. The version details are as

Re: [ceph-users] Crush Adjustment

2016-11-17 Thread David Turner
Adding the pool of SSDs and changing their weights to balance them will not affect your pool of spinning disks. The PGs and OSD weights are isolated from each other by being in different pools under different roots.
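
A minimal sketch of what the separate-root layout usually looks like on a jewel-era cluster (bucket, host, and pool names and the rule id are placeholders, not from the thread):

  # give the SSDs their own CRUSH root and move the SSD host buckets under it
  ceph osd crush add-bucket ssd-root root
  ceph osd crush move node1-ssd root=ssd-root

  # a rule that only places data under ssd-root, then point the SSD pool at it
  ceph osd crush rule create-simple ssd-rule ssd-root host
  ceph osd pool set ssd-pool crush_ruleset 1   # id from 'ceph osd crush rule dump'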

[ceph-users] index-sharding on existing bucket ?

2016-11-17 Thread Yoann Moulin
Hello, is it possible to shard the index of existing buckets? I have more than 100TB of data in a couple of buckets, and I'd like to avoid re-uploading everything. Thanks for your help, -- Yoann Moulin EPFL IC-IT
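
For reference, later radosgw-admin versions grew an offline reshard subcommand; a hedged sketch (bucket name and shard count are placeholders, and the subcommand may not be present in 10.2.3 yet):

  # offline reshard of an existing bucket's index
  # (added after 10.2.3; check 'radosgw-admin --help' on your version)
  radosgw-admin bucket reshard --bucket=mybucket --num-shards=64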

[ceph-users] Crush Adjustment

2016-11-17 Thread Pasha
Hi guys, Fairly simple question for you I'm sure, but I've never had to do it myself so I thought I'd get your input. I am running a 5-node cluster with regular spinners and SSD journals at the moment. Recently I threw in a 1TB SSD per node and wanted to create a pool that is purely the new SSDs. I

Re: [ceph-users] After OSD Flap - FAILED assert(oi.version == i->first)

2016-11-17 Thread Samuel Just
Puzzling, added a question to the ticket. -Sam On Thu, Nov 17, 2016 at 4:32 AM, Nick Fisk wrote: > Hi Sam, > > I've updated the ticket with logs from the wip run. > > Nick > >> -Original Message- >> From: Samuel Just [mailto:sj...@redhat.com] >> Sent: 15 November 2016 18:30 >> To: Nick Fi

[ceph-users] Register ceph daemons on initctl

2016-11-17 Thread Jaemyoun Lee
Dear all, I am having trouble using Ceph. When I built Ceph from source from the official repo at v10.2.3, I couldn't create a Ceph cluster. The state of the OSDs never becomes UP after they are activated. I think the problem is Upstart, because "make install" didn't copy the conf files to /etc/init/ #
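
A hedged workaround sketch, assuming the Upstart job files are still shipped under src/upstart/ in the source tree (verify the path in your checkout):

  # 'make install' does not install the Upstart jobs; copy them by hand
  sudo cp src/upstart/*.conf /etc/init/
  sudo initctl reload-configuration
  sudo start ceph-osd id=0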

Re: [ceph-users] After OSD Flap - FAILED assert(oi.version == i->first)

2016-11-17 Thread Nick Fisk
Hi Sam, I've updated the ticket with logs from the wip run. Nick > -Original Message- > From: Samuel Just [mailto:sj...@redhat.com] > Sent: 15 November 2016 18:30 > To: Nick Fisk > Cc: Ceph Users > Subject: Re: [ceph-users] After OSD Flap - FAILED assert(oi.version == > i->first) > >

Re: [ceph-users] Ceph Volume Issue

2016-11-17 Thread Alexey Sheplyakov
Hi, please share some details about your cluster (especially the hardware):
- how many OSDs are there? How many disks per OSD machine?
- Do you use dedicated (SSD) OSD journals?
- RAM size, CPU model, network card bandwidth/model
- Do you have a dedicated cluster network?
- How many VMs (in th
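
A quick way to collect several of these numbers (a sketch; run on any node with an admin keyring):

  ceph -s         # cluster health and OSD up/in counts
  ceph osd tree   # OSD-to-host layout
  ceph df         # per-pool usage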

Re: [ceph-users] how to list deleted objects in snapshot

2016-11-17 Thread Jan Krcmar
hi, it seems it could be the SnapContext problem. I've tried the stat command; it works fine. Shall I post a bug report? thanks fous 2016-11-16 21:55 GMT+01:00 Gregory Farnum : > On Wed, Nov 16, 2016 at 5:13 AM, Jan Krcmar wrote: >> hi, >> >> i've found a problem/feature in pool snapshots >
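
For context, a sketch of reading an object back through a pool snapshot with the rados CLI (pool, snap, and object names are placeholders):

  # snapshot the pool, delete an object, then stat it via the snap
  rados -p mypool mksnap mysnap
  rados -p mypool rm myobject
  rados -p mypool -s mysnap stat myobject   # -s selects the snapshot for reads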

Re: [ceph-users] ceph cluster having blocke requests very frequently

2016-11-17 Thread Nick Fisk
Hi Thomas, Do you have the OSD logs from around the time of that slow request (the 13:12 to 13:29 period)? Do you also see anything about OSDs going down in the mon's ceph.log file around that time? 480 seconds is probably far too long for a disk to be busy, I'm wondering if the OSD i
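
A hedged sketch of the kind of check being asked for (log paths assume a default packaged install; grep patterns are approximate):

  # slow/blocked request warnings on the OSD nodes
  grep 'slow request' /var/log/ceph/ceph-osd.*.log

  # OSDs reported down/failed in the monitor's cluster log
  grep -E 'osd\.[0-9]+.*(down|failed)' /var/log/ceph/ceph.log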

Re: [ceph-users] ceph cluster having blocke requests very frequently

2016-11-17 Thread Thomas Danan
Actually, I forgot to say that the following issue describes very similar symptoms: http://tracker.ceph.com/issues/9844 Thomas From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Thomas Danan Sent: Thursday 17 November 2016 09:59 To: n...@fisk.me.uk; 'Peter Maloney' Cc: ceph-

Re: [ceph-users] ceph cluster having blocke requests very frequently

2016-11-17 Thread Thomas Danan
Hi, I have rechecked the pattern when slow requests are detected. I have an example with the following (primary: 411, secondaries: 176, 594). On the primary, slow requests detected: waiting for subops from (176, 594) during 16 minutes 2016-11-17 13:29:27.209754 7f001d414700 0 log_channel(cluster) log [WRN] :
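
A hedged way to see where such an op is stuck, assuming admin-socket access on the primary OSD's host (the osd id comes from the example above):

  # recent completed slow ops with per-stage timestamps
  ceph daemon osd.411 dump_historic_ops

  # ops currently in flight, including any 'waiting for subops' state
  ceph daemon osd.411 dump_ops_in_flight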

Re: [ceph-users] degraded objects after osd add

2016-11-17 Thread Burkhard Linke
Hi, On 11/17/2016 08:07 AM, Steffen Weißgerber wrote: Hello, just for understanding: when starting to fill OSDs with data by setting the weight from 0 to the normal value, ceph status displays degraded objects (>0.05%). I don't understand the reason for this because there's no stora
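
A minimal sketch of the operation being discussed (osd id and target weight are placeholders):

  # raise the CRUSH weight of a freshly added OSD and watch recovery progress
  ceph osd crush reweight osd.12 1.0
  ceph -s   # shows a degraded percentage while PGs remap and backfill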