Re: [ceph-users] Fwd: Small fix for ceph.spec

2013-07-30 Thread Danny Al-Gaaf
Hi, I think this is a bug in packaging of the leveldb package in this case, since the spec-file already sets dependencies on leveldb-devel. leveldb depends on snappy, therefore the leveldb package should set a dependency on snappy-devel for leveldb-devel (check the SUSE spec file for leveldb: …
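
For illustration, a minimal sketch of the fix being described, as it might appear in the leveldb spec-file (the exact section layout is an assumption, not Danny's actual patch):

    %package devel
    Summary:        Development files for leveldb
    Requires:       %{name} = %{version}-%{release}
    # leveldb's headers pull in snappy, so anything building
    # against leveldb-devel also needs snappy-devel
    Requires:       snappy-devel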

Re: [ceph-users] Fwd: Small fix for ceph.spec

2013-07-30 Thread Erik Logtenberg
Hi, Fedora, in this case Fedora 19, x86_64. Kind regards, Erik. On 07/30/2013 09:29 AM, Danny Al-Gaaf wrote: > Hi, > > I think this is a bug in packaging of the leveldb package in this case > since the spec-file already sets dependencies on leveldb-devel. > > leveldb depends on snappy, th…

[ceph-users] [PATCH] Add missing buildrequires for Fedora

2013-07-30 Thread Erik Logtenberg
Hi, This patch adds two buildrequires to the ceph.spec file that are needed to build the RPMs under Fedora. Danny Al-Gaaf commented that the snappy-devel dependency should actually be added to the leveldb-devel package. I will try to get that fixed too; in the meantime, this patch does make sure…
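
The patch itself is cut off in the preview; assuming snappy-devel is one of the two build dependencies (per the surrounding discussion), the relevant hunk of ceph.spec would look roughly like:

    # needed to build the RPMs under Fedora, where leveldb-devel
    # does not (yet) pull in snappy-devel itself
    BuildRequires:  snappy-devel
    # (the second BuildRequires from the patch is not visible in the preview)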

Re: [ceph-users] Fwd: Small fix for ceph.spec

2013-07-30 Thread Danny Al-Gaaf
Hi, then the Fedora package is broken. If you check the spec file of: http://dl.fedoraproject.org/pub/fedora/linux/updates/19/SRPMS/leveldb-1.12.0-3.fc19.src.rpm You can see the spec-file sets a: BuildRequires: snappy-devel But not the corresponding "Requires: snappy-devel" for the devel package…
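
A quick way to confirm this on a built binary package (the package filename below is inferred from the src.rpm name above):

    # list the run-time dependencies recorded in the devel package;
    # on Fedora 19, snappy-devel is missing from the output
    rpm -qp --requires leveldb-devel-1.12.0-3.fc19.x86_64.rpm | grep snappy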

[ceph-users] "rbd ls -l" hangs

2013-07-30 Thread Jeff Moskow
This is the same issue as yesterday, but I'm still searching for a solution. We have a lot of data on the cluster that we need and can't get to it reasonably (it took over 12 hours to export a 2GB image). The only thing that status reports as wrong is: health HEALTH_WARN 1 pgs incomplete; …
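
A sketch of the usual first diagnostic steps for a stuck PG (the PG id 2.33 below is hypothetical; use whatever the cluster actually reports):

    # identify the incomplete PG and the OSDs it maps to
    ceph health detail
    ceph pg dump_stuck inactive
    # detailed state and recovery history of the stuck PG
    ceph pg 2.33 query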

Re: [ceph-users] "rbd ls -l" hangs

2013-07-30 Thread Jens Kristian Søgaard
Hi, > This is the same issue as yesterday, but I'm still searching for a solution. We have a lot of data on the cluster that we need and can't … > health HEALTH_WARN 1 pgs incomplete; 1 pgs stuck inactive; 1 pgs … I'm not claiming to have an answer, but I have a suggestion you can try. Try runn…
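
The suggestion itself is truncated in the preview; judging from the follow-up below, it involves restarting the primary OSD of the incomplete PG. One way to find that OSD (PG id hypothetical):

    # the first OSD listed in the acting set is the primary
    ceph pg map 2.33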

Re: [ceph-users] "rbd ls -l" hangs

2013-07-30 Thread Jeff Moskow
Thanks! I tried restarting osd.11 (the primary OSD for the incomplete pg) and that helped a LOT. We went from 0/1 op/s to 10-800+ op/s! We still have "HEALTH_WARN 1 pgs incomplete; 1 pgs stuck inactive; 1 pgs stuck unclean", but at least we can use our cluster :-) ceph pg dump_stuck inactive …
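
For reference, restarting a single OSD on a Cuttlefish-era sysvinit install looks roughly like this (the exact service wrapper depends on the distribution):

    # restart only osd.11 on the node that hosts it
    sudo service ceph restart osd.11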

Re: [ceph-users] "rbd ls -l" hangs

2013-07-30 Thread Jeff Moskow
OK - so while things are definitely better, we still are not where we were, and "rbd ls -l" still hangs. Any suggestions?

Re: [ceph-users] FW: Issues with ceph-deploy

2013-07-30 Thread John Wilkins
Matthew, I think one of the central differences is that mkcephfs read the ceph.conf file and generated the OSDs from it. It also generated the fsid and placed it into the cluster map, but didn't modify the ceph.conf file itself. By contrast, "ceph-deploy new" generates the fsid, …
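
A minimal sketch of the ceph-deploy side of that workflow (hostname hypothetical):

    # generates a new fsid and writes an initial ceph.conf
    # containing fsid, mon initial members and mon host
    ceph-deploy new mon1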

[ceph-users] inconsistent pg: no 'snapset' attr

2013-07-30 Thread John Nielsen
I am running a Ceph cluster with 24 OSDs across 3 nodes, Cuttlefish 0.61.3. Recently an inconsistent PG cropped up:

    # ceph health detail
    HEALTH_ERR 1 pgs inconsistent; 1 scrub errors
    pg 11.2d5 is active+clean+inconsistent, acting [5,22,9]
    1 scrub errors

Pool 11 is .rgw.buckets, used by a RADOS Gateway…
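
The standard first response to a scrub-detected inconsistency is shown below; whether it succeeds here depends on the missing 'snapset' attribute, so treat it as a sketch rather than the thread's resolution:

    # ask the primary to re-scrub and repair the inconsistent PG
    ceph pg repair 11.2d5
    # then watch the cluster log for the result
    ceph -w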

Re: [ceph-users] Fwd: Small fix for ceph.spec

2013-07-30 Thread Erik Logtenberg
Hi, I will report the issue there as well. Please note that Ceph seems to support Fedora 17, even though that release is considered end-of-life by Fedora. This issue with the leveldb package cannot be fixed for Fedora 17, only for 18 and 19. So if Ceph wants to continue supporting Fedora 17, adding…

[ceph-users] rbd read write very slow for heavy I/O operations

2013-07-30 Thread johnu
Hi, I have an OpenStack cluster which runs on Ceph. I tried running Hadoop inside VMs and noticed that map tasks take a long time to complete and eventually fail. RBD reads/writes are getting slower over time. Is it because of too many objects in Ceph per volume? I have an 8-node clu…
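
One client-side setting worth checking for VM workloads of this era is the librbd writeback cache; a sketch of enabling it in ceph.conf (the values are illustrative assumptions, not a recommendation from the thread):

    [client]
    # enable librbd writeback caching for QEMU/librbd clients
    rbd cache = true
    # 32 MB cache per volume
    rbd cache size = 33554432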

Re: [ceph-users] "rbd ls -l" hangs

2013-07-30 Thread Gregory Farnum
You'll want to figure out why the cluster isn't healthy to begin with. Is the incomplete/inactive PG staying constant? Track down which OSDs it's on and make sure the acting set is the right size, or if you've somehow lost data on it. I believe the docs have some content on doing this but I don't have…
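
A sketch of the checks Greg describes (PG id hypothetical):

    # which OSDs host the PG, and its current acting set
    ceph pg map 2.33
    # detailed peering state, including why it is incomplete
    ceph pg 2.33 query
    # confirm enough OSDs are up and in to satisfy the pool's size
    ceph osd tree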