[ceph-users] CloudRuntimeException: Failed to create storage pool

2017-02-20 Thread Vince
Hello, I have created a Ceph cluster with one admin server, one monitor and two OSDs. The setup is completed, but when trying to add Ceph as the primary storage of CloudStack, I am getting the below error in the error logs. Am I missing something? Please help.

Re: [ceph-users] Migrate cephfs metadata to SSD in running cluster

2017-02-20 Thread jiajia zhong
yes, https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/ is enough. Don't test on your production env. Before you start, back up your crush map: ceph osd getcrushmap -o crushmap.bin Below's some hint: ceph osd getcrushmap -o crushmap.bin crushtool -d
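For context, a minimal sketch of the crush map round trip the hint is pointing at; the actual SSD/SATA root and rule edits depend on your topology and are covered in the linked post:

    ceph osd getcrushmap -o crushmap.bin        # dump the compiled crush map
    crushtool -d crushmap.bin -o crushmap.txt   # decompile it into editable text
    # edit crushmap.txt: add the ssd/sata roots and rules described in the blog post
    crushtool -c crushmap.txt -o crushmap.new   # recompile
    ceph osd setcrushmap -i crushmap.new        # inject the new map into the cluster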

Re: [ceph-users] Rbd export-diff bug? rbd export-diff generates different incremental files

2017-02-20 Thread Zhongyan Gu
Hi Jason, Thanks for the reply. We are not sure this issue only occurs on cloned images; we think it could be a generic synchronization issue. Our production/test setups are all based on Hammer, so we don't have a chance to touch Jewel, but we will try Jewel later. We don't use cache
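As an aside, one hedged way to reproduce the comparison under discussion is to run the same export-diff twice between the same pair of snapshots and compare the outputs; the pool, image and snapshot names below are placeholders:

    rbd export-diff --from-snap snap1 rbd/myimage@snap2 /tmp/diff_run1
    rbd export-diff --from-snap snap1 rbd/myimage@snap2 /tmp/diff_run2
    md5sum /tmp/diff_run1 /tmp/diff_run2   # between fixed snapshots the two files should be identical; the thread reports cases where they differ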

Re: [ceph-users] How safe is ceph pg repair these days?

2017-02-20 Thread Christian Balzer
Hello, On Mon, 20 Feb 2017 17:15:59 -0800 Gregory Farnum wrote: > On Mon, Feb 20, 2017 at 4:24 PM, Christian Balzer wrote: > > > > Hello, > > > > On Mon, 20 Feb 2017 14:12:52 -0800 Gregory Farnum wrote: > > > >> On Sat, Feb 18, 2017 at 12:39 AM, Nick Fisk

Re: [ceph-users] How safe is ceph pg repair these days?

2017-02-20 Thread Christian Balzer
Hello, On Mon, 20 Feb 2017 14:12:52 -0800 Gregory Farnum wrote: > On Sat, Feb 18, 2017 at 12:39 AM, Nick Fisk wrote: > > From what I understand, in Jewel+ Ceph has the concept of an authoritative > > shard, so in the case of a 3x replica pool, it will notice that 2 replicas > >
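For reference, the Jewel-era workflow this discussion revolves around, with a placeholder PG id; the point of the newer tooling is that you can inspect the inconsistency before asking the primary to repair it:

    ceph health detail                                    # lists the PGs flagged inconsistent
    rados list-inconsistent-obj 2.5 --format=json-pretty  # Jewel+: show which object/shard disagrees
    ceph pg repair 2.5                                    # trigger scrub + repair of the PG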

Re: [ceph-users] Jewel + kernel 4.4 Massive performance regression (-50%)

2017-02-20 Thread Christian Balzer
Hello, Just a quick update since I didn't have time for this yesterday. I did a similar test as below with only the XFS node active and, as expected, the results are the opposite: 3937 IOPS on kernel 3.16, 3595 IOPS on kernel 4.9. As opposed to what I found out yesterday: --- Thus I turned off the XFS node and ran the test
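The post does not say which benchmark produced these numbers; purely as an illustration, a 4k random-write fio run against a mapped RBD image (hypothetical pool, image and device names) is the kind of test that yields IOPS figures in this range:

    rbd map rbd/bench-img                      # map a test image, e.g. to /dev/rbd0
    fio --name=randwrite --filename=/dev/rbd0 --ioengine=libaio --direct=1 \
        --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based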

Re: [ceph-users] RADOS as a simple object storage

2017-02-20 Thread Gregory Farnum
On Mon, Feb 20, 2017 at 11:57 AM, Jan Kasprzak wrote: > Gregory Farnum wrote: > : On Mon, Feb 20, 2017 at 6:46 AM, Jan Kasprzak wrote: > : > Hello, world!\n > : > > : > I have been using CEPH RBD for a year or so as a virtual machine storage > : >

Re: [ceph-users] RADOS as a simple object storage

2017-02-20 Thread Jan Kasprzak
Gregory Farnum wrote: : On Mon, Feb 20, 2017 at 6:46 AM, Jan Kasprzak wrote: : > Hello, world!\n : > : > I have been using CEPH RBD for a year or so as a virtual machine storage : > backend, and I am thinking about moving another of our subsystems to CEPH: : > : > The

Re: [ceph-users] RADOS as a simple object storage

2017-02-20 Thread Gregory Farnum
On Mon, Feb 20, 2017 at 6:46 AM, Jan Kasprzak wrote: > Hello, world!\n > > I have been using CEPH RBD for a year or so as a virtual machine storage > backend, and I am thinking about moving another of our subsystems to CEPH: > > The subsystem in question is a simple

Re: [ceph-users] Experience with 5k RPM/archive HDDs

2017-02-20 Thread Mike Miller
Hi, again, as I said, in normal operation everything is fine with SMR. They perform well, in particular for large sequential writes, because of the on-platter cache (20 GB, I think). All tests we have done were with good SSDs for the OSD cache. Things blow up during backfill / recovery because the

Re: [ceph-users] PG stuck peering after host reboot

2017-02-20 Thread george.vasilakakos
Hi Wido, Just to make sure I have everything straight, > If the PG still doesn't recover do the same on osd.307 as I think that > 'ceph pg X query' still hangs? > The info from ceph-objectstore-tool might shed some more light on this PG. You mean run the objectstore command on 307, not remove
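For reference, a sketch of the inspection being proposed, with a placeholder PG id and assuming the default data-path layout; ceph-objectstore-tool needs the OSD stopped while it runs:

    ceph pg 1.23 query                               # may hang for a stuck-peering PG, as noted above
    systemctl stop ceph-osd@307                      # the tool needs exclusive access to the store
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-307 \
        --pgid 1.23 --op info                        # dump the PG's info from osd.307's store
    systemctl start ceph-osd@307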

Re: [ceph-users] extending ceph cluster with osds close to near full ratio (85%)

2017-02-20 Thread Tyanko Aleksiev
Hi Brian, On 14 February 2017 at 19:33, Brian Andrus wrote: > > > On Tue, Feb 14, 2017 at 5:27 AM, Tyanko Aleksiev > wrote: > >> Hi Cephers, >> >> At University of Zurich we are using Ceph as a storage back-end for our >> OpenStack
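As general context for the near-full discussion (not a substitute for the advice in the thread), the usual first checks before adding or rebalancing capacity; 120 is the default threshold, meaning only OSDs more than 20% above the average utilization are reweighted:

    ceph osd df tree                        # per-OSD utilization and variance
    ceph osd reweight-by-utilization 120    # gently shift data off the most overloaded OSDs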

Re: [ceph-users] removing ceph.quota.max_bytes

2017-02-20 Thread Chad William Seys
Thanks! Seems non-standard, but it works. :) C. Anyone know what's wrong? You can clear these by setting them to zero. John Everything is Jewel 10.2.5. Thanks! Chad.
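For reference, the xattr round trip John is describing, on a hypothetical CephFS mount point; setting the attribute to zero removes the quota:

    getfattr -n ceph.quota.max_bytes /mnt/cephfs/somedir        # show the current quota, if any
    setfattr -n ceph.quota.max_bytes -v 0 /mnt/cephfs/somedir   # 0 clears the quota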

[ceph-users] Fwd: osd create dmcrypt cant find key

2017-02-20 Thread nigel davies
Hey All, I created a small dev ceph cluster and dmcrypted the OSDs, but I can't seem to see where the keys are stored afterwards. From looking at the debug notes the dir should be "/etc/ceph/dmcrypt-keys", but that folder does not get created and no keys are stored?? Any help on this would be great
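Hedged note: where the keys end up depends on the ceph-disk/ceph-deploy version; Jewel-era ceph-disk stores the dmcrypt LUKS keys in the monitor config-key store rather than under /etc/ceph/dmcrypt-keys. Two places to look, assuming admin credentials:

    ls -l /etc/ceph/dmcrypt-keys              # older ceph-deploy/ceph-disk location from the docs
    ceph config-key list | grep -i dm-crypt   # Jewel-era ceph-disk keeps keys under dm-crypt/osd/<uuid>/...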

[ceph-users] RADOS as a simple object storage

2017-02-20 Thread Jan Kasprzak
Hello, world!\n I have been using CEPH RBD for a year or so as a virtual machine storage backend, and I am thinking about moving another of our subsystems to CEPH: The subsystem in question is a simple replicated object storage, currently implemented in custom C code by yours truly. My
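A quick way to prototype this use case from the shell before writing librados code; the pool name, object key and file names below are placeholders:

    ceph osd pool create objstore 128            # 128 PGs; size the pool for your own cluster
    rados -p objstore put my-key ./payload.bin   # store a blob under a key
    rados -p objstore get my-key /tmp/payload.out
    rados -p objstore ls                         # enumerate objects in the pool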

Re: [ceph-users] Rbd export-diff bug? rbd export-diff generates different incremental files

2017-02-20 Thread Jason Dillaman
AFAIK, that fix is scheduled to be included in Hammer 0.94.10 (which hasn't been released yet). Is this issue only occurring on cloned images? Since Hammer is nearly end-of-life, can you repeat this issue on Jewel? Are the affected images using cache tiering? Can you determine an

[ceph-users] Re: Rbd export-diff bug? rbd export-diff generates different incremental files

2017-02-20 Thread 许雪寒
Hi, everyone. I read the source code. Could this be the case: a "WRITE" op directed at OBJECT X is followed by a series of ops, at the end of which is a "READ" op directed at the same OBJECT that comes from the "rbd EXPORT" command; although the "WRITE" op modified the ObjectContext of OBJECT

[ceph-users] osd create dmcrypt cant find key

2017-02-20 Thread nigel davies
Hey All, I created a small dev ceph cluster and dmcrypted the OSDs, but I can't seem to see where the keys are stored afterwards. From looking at the debug notes the dir should be "/etc/ceph/dmcrypt-keys", but that folder does not get created and no keys are stored?? Any help on this would be great

Re: [ceph-users] High CPU usage by ceph-mgr on idle Ceph cluster

2017-02-20 Thread Brad Hubbard
Refer to my previous post for data you can gather that will help narrow this down. On Mon, Feb 20, 2017 at 6:36 PM, Jay Linux wrote: > Hello John, > > Created tracker for this issue Refer-- > > http://tracker.ceph.com/issues/18994 > > Thanks > > On Fri, Feb 17, 2017 at
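The specific data Brad asked for is in his earlier post; as a generic starting point for seeing where an idle ceph-mgr spends CPU (the mgr id below is a placeholder, and the admin-socket command assumes the mgr exposes a socket at the default location):

    top -b -H -n 1 -p "$(pidof ceph-mgr)"   # which ceph-mgr threads are busy
    sudo perf top -p "$(pidof ceph-mgr)"    # sample where the CPU time goes
    ceph daemon mgr.myhost perf dump        # admin-socket counters; substitute your mgr id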

Re: [ceph-users] Rbd export-diff bug? rbd export-diff generates different incremental files

2017-02-20 Thread Zhongyan Gu
Could this be a synchronization issue, in which multiple clients visit the same object: one client (the VM/QEMU) is updating the object while another client (the ceph rbd export/export-diff execution) is reading the content of the same object? How does Ceph ensure consistency in this case?

Re: [ceph-users] `ceph health` == HEALTH_GOOD_ENOUGH?

2017-02-20 Thread John Spray
On Mon, Feb 20, 2017 at 6:37 AM, Tim Serong wrote: > Hi All, > > Pretend I'm about to upgrade from one Ceph release to another. I want > to know that the cluster is healthy enough to sanely upgrade (MONs > quorate, no OSDs actually on fire), but don't care about HEALTH_WARN >
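One rough way to script Tim's "good enough" check is to accept HEALTH_WARN but not HEALTH_ERR; whether a particular WARN is actually acceptable is exactly the open question in this thread. A minimal sketch:

    status=$(ceph health)    # first token is HEALTH_OK, HEALTH_WARN or HEALTH_ERR
    case "$status" in
      HEALTH_OK*|HEALTH_WARN*) echo "healthy enough to upgrade: $status" ;;
      *)                       echo "not safe to upgrade: $status" >&2; exit 1 ;;
    esac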

Re: [ceph-users] Passing LUA script via python rados execute

2017-02-20 Thread Josh Durgin
On 02/19/2017 12:15 PM, Patrick Donnelly wrote: On Sat, Feb 18, 2017 at 2:55 PM, Noah Watkins wrote: The least intrusive solution is to simply change the sandbox to allow the standard file system module loading function as expected. Then any user would need to make sure

Re: [ceph-users] High CPU usage by ceph-mgr on idle Ceph cluster

2017-02-20 Thread Jay Linux
Hello John, Created a tracker for this issue, refer to: > http://tracker.ceph.com/issues/18994 Thanks On Fri, Feb 17, 2017 at 6:15 PM, John Spray wrote: > On Fri, Feb 17, 2017 at 6:27 AM, Muthusamy Muthiah > wrote: > > On one our platform mgr uses 3