Re: [ceph-users] CephFS feature set mismatch with v0.79 and recent kernel

2014-04-09 Thread Michael Nelson
the EC rules and the kernel should be happy. -Greg Software Engineer #42 @ http://inktank.com | http://ceph.com On Tue, Apr 8, 2014 at 6:08 PM, Aaron Ten Clay wrote: On Tue, Apr 8, 2014 at 4:50 PM, Michael Nelson wrote: I am trying to mount CephFS from a freshly installed v0.79 cluster using
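
Greg's point, as far as the truncated snippet shows, is that removing the erasure-coded rules left over from testing clears the feature bits the kernel client cannot handle. A minimal sketch of that cleanup, assuming placeholder pool and rule names:

    # list CRUSH rules to spot any erasure rules left over from EC experiments
    ceph osd crush rule ls
    # delete the EC pool first; a rule still referenced by a pool cannot be removed
    ceph osd pool delete ecpool ecpool --yes-i-really-really-mean-it
    # then drop the erasure rule itself; the kernel client should mount afterwards
    ceph osd crush rule rm ecruleset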

[ceph-users] CephFS feature set mismatch with v0.79 and recent kernel

2014-04-08 Thread Michael Nelson
I am trying to mount CephFS from a freshly installed v0.79 cluster using a kernel built from git.kernel.org:kernel/git/sage/ceph-client.git (for-linus a30be7cb) and running into the following dmesg errors on mount: libceph: mon0 198.18.32.12:6789 feature set mismatch, my 2b84a042aca < server's
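
For context, the mount being attempted is the in-kernel CephFS client pointed at the monitor from the dmesg line; a sketch, where the mount point and auth options are assumptions and only the monitor address is taken from the error above:

    # kernel CephFS mount against the monitor shown in the dmesg output
    mount -t ceph 198.18.32.12:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret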

Re: [ceph-users] EC pool errors with some k/m combinations

2014-03-31 Thread Michael Nelson
On Mon, 31 Mar 2014, Michael Nelson wrote: Hi Loic, On Sun, 30 Mar 2014, Loic Dachary wrote: Hi Michael, I'm trying to reproduce the problem from sources (today's instead of yesterday's but there is no difference that could explain the behaviour you have): cd src rm -f

Re: [ceph-users] EC pool errors with some k/m combinations

2014-03-30 Thread Michael Nelson
Hi Loic, On Sun, 30 Mar 2014, Loic Dachary wrote: Hi Michael, I'm trying to reproduce the problem from sources (today's instead of yesterday's but there is no difference that could explain the behaviour you have): cd src rm -fr /tmp/dev /tmp/out ; mkdir -p /tmp/dev ; CEPH_DIR=/tmp LC_ALL=C
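
The quoted reproduction command is cut off by the archive; everything after LC_ALL=C below is an assumption about how a vstart.sh-based reproduction from a source tree would typically continue (flags, OSD count, k/m values and the test file are guesses):

    cd src
    rm -fr /tmp/dev /tmp/out ; mkdir -p /tmp/dev
    # assumed continuation: bring up a throwaway local cluster with vstart.sh,
    # then create an EC profile/pool and write an object to it
    CEPH_DIR=/tmp LC_ALL=C MON=1 OSD=6 ./vstart.sh -n -l mon osd
    ./ceph -c /tmp/ceph.conf osd erasure-code-profile set myprofile \
        ruleset-failure-domain=osd k=3 m=3
    ./ceph -c /tmp/ceph.conf osd pool create ecpool 12 12 erasure myprofile
    ./rados -c /tmp/ceph.conf -p ecpool put testobj /path/to/file/over/4MB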

[ceph-users] EC pool errors with some k/m combinations

2014-03-29 Thread Michael Nelson
I have a small cluster (4 nodes, 15 OSDs, 3-5 OSDs per node) running bits from the firefly branch (0.78-430-gb8ea656). I am trying out various k/m combinations for EC pools. Certain k/m combinations are causing rados put to fail on the second 4MB chunk. I realize some of these combinations mig
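
A sketch of the kind of test being described: create a profile with a given k/m combination and write an object larger than 4MB, so rados put issues a second write, which is where the reported failures show up. Profile name, pool name, pg counts and the k/m values are placeholders:

    # create a profile with the k/m combination under test
    ceph osd erasure-code-profile set testprofile ruleset-failure-domain=osd k=9 m=3
    ceph osd pool create ectest 128 128 erasure testprofile
    # an object larger than 4MB forces rados put past the first 4MB chunk
    dd if=/dev/urandom of=/tmp/obj.bin bs=4M count=3
    rados -p ectest put obj.bin /tmp/obj.bin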

Re: [ceph-users] ec pools and radosgw

2014-03-27 Thread Michael Nelson
On Thu, 27 Mar 2014, Yehuda Sadeh wrote: On Wed, Mar 26, 2014 at 4:48 PM, Michael Nelson wrote: I am playing around with erasure coded pools on 0.78-348 (firefly) and am attempting to enable EC on the .rgw.buckets pool for radosgw (fresh install). If I use a plain EC profile (no settings changed)

Re: [ceph-users] ec pools and radosgw

2014-03-27 Thread Michael Nelson
On Thu, 27 Mar 2014, Loic Dachary wrote: Hi Michael, Could you please show the exact commands you've used to modify the k & m values ? ceph osd crush rule create-erasure ecruleset ceph osd erasure-code-profile set myprofile ruleset-failure-domain=osd k=3 m=3 ceph osd pool create .rgw.buckets
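
Laid out one per line, the commands quoted above are the following; the archive cuts off the pool-create line, so the pg counts, the erasure keyword and the profile argument at the end are assumptions about how it continued:

    ceph osd crush rule create-erasure ecruleset
    ceph osd erasure-code-profile set myprofile ruleset-failure-domain=osd k=3 m=3
    # the tail of this line is not in the archive; the remaining arguments are assumed
    ceph osd pool create .rgw.buckets 128 128 erasure myprofile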

[ceph-users] ec pools and radosgw

2014-03-26 Thread Michael Nelson
I am playing around with erasure coded pools on 0.78-348 (firefly) and am attempting to enable EC on the .rgw.buckets pool for radosgw (fresh install). If I use a plain EC profile (no settings changed), uploads of various sizes work fine and EC seems to be working based on how much space is being used.
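
One way to sanity-check the space-usage observation above is to compare raw bytes used against the logical data uploaded; with k=3, m=3 an EC pool should show roughly 2x overhead rather than the 3x of a replicated pool:

    # per-pool object counts and usage; compare against the size of the uploads
    rados df
    # cluster-wide raw usage versus per-pool usage
    ceph df detail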

Re: [ceph-users] missing rgw user bucket metadata on 0.78

2014-03-26 Thread Michael Nelson
On Wed, 26 Mar 2014, Alfredo Deza wrote: That sounds unexpected. 0.78 was built against the Firefly branch and that branch does have that change. The only explanation I have at the moment is that the changeset may have been added *after* Friday when the release got in. I deployed the current
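
To check whether the deployed build actually predates the changeset Alfredo mentions, the exact version/sha1 of the installed binaries can be compared against the firefly branch:

    # exact version and git sha1 of the locally installed binaries
    ceph --version
    radosgw --version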

Re: [ceph-users] missing rgw user bucket metadata on 0.78

2014-03-25 Thread Michael Nelson
On Tue, 25 Mar 2014, Michael Nelson wrote: On Tue, 25 Mar 2014, Michael Nelson wrote: I am setting up a new test cluster on 0.78 using the same configuration that was successful on 0.72. After creating a new S3 account, a simple operation of listing buckets (which will be empty obviously) is resulting in an HTTP 500 error.

Re: [ceph-users] missing rgw user bucket metadata on 0.78

2014-03-25 Thread Michael Nelson
On Tue, 25 Mar 2014, Michael Nelson wrote: I am setting up a new test cluster on 0.78 using the same configuration that was successful on 0.72. After creating a new S3 account, a simple operation of listing buckets (which will be empty obviously) is resulting in an HTTP 500 error. Turned up
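
"Turned up" here presumably refers to raising the gateway's debug level; a sketch of doing that, where the section name is the conventional one for a radosgw instance and both it and the levels are assumptions:

    # append debug settings for the gateway, then restart it to pick them up
    cat >> /etc/ceph/ceph.conf <<'EOF'
    [client.radosgw.gateway]
        debug rgw = 20
        debug ms = 1
    EOF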

[ceph-users] missing rgw user bucket metadata on 0.78

2014-03-24 Thread Michael Nelson
I am setting up a new test cluster on 0.78 using the same configuration that was successful on 0.72. After creating a new S3 account, a simple operation of listing buckets (which will be empty obviously) is resulting in an HTTP 500 error. Looking at the OSD log for the user's bucket metadata, I
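
To see whether the user's bucket metadata exists at all, the default rgw pool layout of that era can be inspected directly; the pool name is the 0.78 default and the uid is a placeholder:

    # the per-user bucket list is stored as an object named <uid>.buckets in .users.uid
    rados -p .users.uid ls
    # the gateway's own view of the user and its buckets
    radosgw-admin user info --uid=testuser
    radosgw-admin bucket list --uid=testuser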

Re: [ceph-users] adding placement pool to radosgw

2014-02-20 Thread Michael Nelson
On Thu, 20 Feb 2014, Yehuda Sadeh wrote: On Thu, Feb 20, 2014 at 2:18 PM, Michael Nelson wrote: I am trying to add a placement pool to radosgw (based on http://comments.gmane.org/gmane.comp.file-systems.ceph.user/4992), but radosgw keeps complaining in the log that it can't find the placement rule when I create a bucket using s3cmd.

[ceph-users] adding placement pool to radosgw

2014-02-20 Thread Michael Nelson
I am trying to add a placement pool to radosgw (based on http://comments.gmane.org/gmane.comp.file-systems.ceph.user/4992), but radosgw keeps complaining in the log that it can't find the placement rule when I create a bucket using s3cmd: s3cmd --bucket-location=:two-placement mb s3://foo Is
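
A sketch of the steps the linked post describes for wiring up a placement target named two-placement; the region/zone edits are summarised in comments and the file names are placeholders:

    # dump the region, add "two-placement" under placement_targets, and load it back
    radosgw-admin region get > region.json
    radosgw-admin region set < region.json
    # dump the zone, add a matching entry (with its data/index pools) under
    # placement_pools, and load it back
    radosgw-admin zone get > zone.json
    radosgw-admin zone set < zone.json
    radosgw-admin regionmap update
    # restart radosgw, then create a bucket against the new target
    s3cmd --bucket-location=:two-placement mb s3://foo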