Re: [ceph-users] Ceph not replicating

2014-04-19 Thread Michael J. Kidd
You may also want to check your 'min_size'... if it's 2, then PGs will be incomplete even with 1 complete copy. You can check it with: ceph osd dump | grep pool You can reduce the min_size with the following syntax: ceph osd pool set <poolname> min_size 1 Thanks, Michael J. Kidd Sent from my mobile device
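
For illustration, a minimal sketch of those commands, assuming a pool named "rbd" (substitute your own pool name):

    # show size and min_size for every pool
    ceph osd dump | grep pool
    # allow I/O with only a single surviving replica
    ceph osd pool set rbd min_size 1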

Re: [ceph-users] Ceph not replicating

2014-04-19 Thread Michael J. Kidd
...deploy your OSDs with XFS instead of BTRFS. The latest details I've seen show that BTRFS slows drastically after only a few hours with a high file count in the filesystem. Better to re-deploy now than when you are serving data in production. Thanks, Michael J. Kidd Sr. Storage Consultant Inktank Professional Services
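
As a rough sketch of such a re-deploy with ceph-deploy, assuming placeholder host and device names (node1, /dev/sdb); flag availability may vary by ceph-deploy version:

    # wipe the disk and re-create the OSD with an XFS data filesystem
    ceph-deploy disk zap node1:/dev/sdb
    ceph-deploy osd create --fs-type xfs node1:/dev/sdb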

Re: [ceph-users] RBD for ephemeral

2014-05-19 Thread Michael J. Kidd
Since the status is 'Abandoned', it would appear that the fix has not been merged into any release of OpenStack. Thanks, Michael J. Kidd Sr. Storage Consultant Inktank Professional Services On Sun, May 18, 2014 at 5:13 PM, Yuming Ma (yumima) wrote: > Wondering what is the statu

Re: [ceph-users] RBD for ephemeral

2014-05-19 Thread Michael J. Kidd
After sending my earlier email, I found another commit that was merged in March: https://review.openstack.org/#/c/59149/ It seems to follow a newer image-handling technique that was being sought, which is what prevented the first patch from being merged. Michael J. Kidd Sr. Storage Consultant Inktank Professional Services

[ceph-users] PG num calculator live on Ceph.com

2015-01-07 Thread Michael J. Kidd
... As an aside, we're also working to update the documentation to reflect the best practices. See the Ceph.com tracker for this at: http://tracker.ceph.com/issues/9867 Thanks! Michael J. Kidd Sr. Storage Consultant Inktank Professional Services - by Red Hat

Re: [ceph-users] PG num calculator live on Ceph.com

2015-01-07 Thread Michael J. Kidd
...size=2, 32 PGs total still gives very close to 1 PG per OSD. Since it's such a low-utilization pool, this is still sufficient. Thanks, Michael J. Kidd Sr. Storage Consultant Inktank Professional Services - by Red Hat On Wed, Jan 7, 2015 at 3:17 PM, Christopher O'Connell wrote:
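
As a rough worked example of the per-OSD math behind that statement (the OSD count here is an assumption for illustration): with 64 OSDs, a pool with size=2 and pg_num=32 places 32 * 2 = 64 PG copies across the cluster, i.e. roughly 64 / 64 = 1 PG per OSD.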

Re: [ceph-users] PG num calculator live on Ceph.com

2015-01-07 Thread Michael J. Kidd
> Where is the source ? On the page.. :) It does link out to jquery and jquery-ui, but all the custom bits are embedded in the HTML. Glad it's helpful :) Michael J. Kidd Sr. Storage Consultant Inktank Professional Services - by Red Hat On Wed, Jan 7, 2015 at 3:46 PM, Loic Dachar

Re: [ceph-users] PG num calculator live on Ceph.com

2015-01-07 Thread Michael J. Kidd
... Hope this helps put things into perspective. Michael J. Kidd Sr. Storage Consultant Inktank Professional Services - by Red Hat On Wed, Jan 7, 2015 at 4:34 PM, Sanders, Bill wrote: > This is interesting. Kudos to you guys for getting the calculator up, I > think this'll help som

Re: [ceph-users] PG num calculator live on Ceph.com

2015-01-08 Thread Michael J. Kidd
...total data usage across those two pools. I welcome anyone with more CephFS experience to weigh in on this! :) Thanks, Michael J. Kidd Sr. Storage Consultant Inktank Professional Services - by Red Hat On Wed, Jan 7, 2015 at 3:59 PM, Lindsay Mathieson < lindsay.mathie...@gmail.com> wrote:

Re: [ceph-users] OSD process exhausting server memory

2014-10-29 Thread Michael J. Kidd
...unset the noscrub and nodeep-scrub flags: - ceph osd unset noscrub - ceph osd unset nodeep-scrub ## For help identifying why memory usage was so high, please provide: * ceph osd dump | grep pool * ceph osd crush rule dump Let us know if this helps... I know it looks extreme, but it's worked for me
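
A minimal sketch of the flag handling described above (standard ceph CLI; output details vary by release):

    # confirm whether the flags are currently set
    ceph osd dump | grep flags
    # re-enable scrubbing once memory usage has returned to normal
    ceph osd unset noscrub
    ceph osd unset nodeep-scrub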

Re: [ceph-users] OSD process exhausting server memory

2014-10-29 Thread Michael J. Kidd
Ah, sorry... since they were set 'out' manually, they'll need to be set 'in' manually... for i in $(ceph osd tree | grep osd | awk '{print $3}'); do ceph osd in $i; done Michael J. Kidd Sr. Storage Consultant Inktank Professional Services - by Red Hat On Wed, Oct 29, 2014 at 12:33
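
The same one-liner, spelled out with comments (note: the column that holds the OSD name in 'ceph osd tree' output can differ between releases, so verify the awk field before running it):

    # walk every OSD listed in the CRUSH tree and mark it 'in'
    for i in $(ceph osd tree | grep osd | awk '{print $3}'); do
        ceph osd in $i
    done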

Re: [ceph-users] OSD process exhausting server memory

2014-10-30 Thread Michael J. Kidd
...without needing to take down OSDs in multiple hosts. I'm also unsure about the cache tiering and how it could relate to the load being seen. Hope this helps... Michael J. Kidd Sr. Storage Consultant Inktank Professional Services - by Red Hat On Thu, Oct 30, 2014 at 4:00 AM, Lukáš Kubín wrote:

Re: [ceph-users] OSD process exhausting server memory

2014-10-30 Thread Michael J. Kidd
...have ideas) and may chip in... Wish I could be of more help... Michael J. Kidd Sr. Storage Consultant Inktank Professional Services - by Red Hat On Thu, Oct 30, 2014 at 11:00 AM, Lukáš Kubín wrote: > Thanks Michael, still no luck. > > Letting the problematic OSD.10 down has no effect. Wit

Re: [ceph-users] servers advise (dell r515 or supermicro ....)

2014-01-15 Thread Michael J. Kidd
It's also good to note that the M500 has built-in RAIN protection (basically, diagonal parity at the NAND level). It should be very good for journal consistency. Sent from my mobile device. Please excuse brevity and typographical errors. On Jan 15, 2014 9:07 AM, "Stefan Priebe" wrote: > Am 15.01

Re: [ceph-users] servers advise (dell r515 or supermicro ....)

2014-01-15 Thread Michael J. Kidd
Actually, they're very inexpensive as far as SSDs go. The 960GB M500 can be had on Amazon for $499 US with Prime (as of yesterday, anyway). Sent from my mobile device. Please excuse brevity and typographical errors. On Jan 15, 2014 9:50 AM, "Sebastien Han" wrote: > However you have to get > 480G

Re: [ceph-users] RedHat ceph boot question

2014-01-25 Thread Michael J. Kidd
While clearly not optimal for long-term flexibility, I've found that adding my OSDs to fstab allows their filesystems to be mounted during boot, and the OSD daemons then start automatically because their data directories are already mounted. Hope this helps until a permanent fix is available. Michael J. Kidd Sr. Storage Consultant
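
A hedged example of such an fstab entry (device, OSD id, and mount options are placeholders; referencing the partition by UUID is safer in practice):

    # /etc/fstab -- mount the OSD data partition at boot
    /dev/sdb1  /var/lib/ceph/osd/ceph-0  xfs  noatime,inode64  0 0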

Re: [ceph-users] CephFS: files never stored on OSDs

2014-02-28 Thread Michael J. Kidd
Seems that you may also need to tell CephFS to use the new pool instead of the default.. After CephFS is mounted, run: # cephfs /mnt/ceph set_layout -p 4 Michael J. Kidd Sr. Storage Consultant Inktank Professional Services On Fri, Feb 28, 2014 at 9:12 AM, Sage Weil wrote: > Hi Flor
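
A minimal sketch of the steps, assuming pool id 4 and a CephFS mount at /mnt/ceph; if the pool is not yet known to the MDS, it may also need to be registered as a data pool first:

    # make the pool usable by CephFS (pool id 4 is just the example above)
    ceph mds add_data_pool 4
    # point the mounted filesystem's default layout at that pool
    cephfs /mnt/ceph set_layout -p 4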

Re: [ceph-users] Very high latency values

2014-03-07 Thread Michael J. Kidd
...haven't seen any documentation on each counter, aside from occasional mailing list posts about specific counters. Hope this helps! Michael J. Kidd Sr. Storage Consultant Inktank Professional Services On Fri, Mar 7, 2014 at 11:39 AM, Dan Ryder (daryder) wrote: > Hello, > I
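
For reference, a hedged example of pulling the raw counters from an OSD's admin socket (the OSD id and socket path are placeholders):

    # dump all perf counters for osd.0 on the local host
    ceph daemon osd.0 perf dump
    # equivalent form using the admin socket path directly
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump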

Re: [ceph-users] pausing "recovery" when adding new machine

2014-03-07 Thread Michael J. Kidd
...they're set 'up' automatically... if not, use 'ceph osd up <id>' to bring them up manually. Hope this helps! Michael J. Kidd Sr. Storage Consultant Inktank Professional Services On Fri, Mar 7, 2014 at 3:06 PM, Sidharta Mukerjee wrote: > When I use ceph-deploy to add a bunc

Re: [ceph-users] pausing "recovery" when adding new machine

2014-03-07 Thread Michael J. Kidd
...these settings, along with their default values (so you can restore or adjust them as you like after the addition), please see: http://ceph.com/docs/master/rados/configuration/osd-config-ref/#operations Thanks, Michael J. Kidd Sr. Storage Consultant Inktank Professional Services On Fri, Mar 7, 2
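
A hedged sketch of throttling recovery around the addition and then restoring it (the option names are from the linked page; the values, and the defaults to restore, should be checked against your release):

    # slow down backfill/recovery before adding the new host
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
    # restore the stock values afterwards (10 and 15 were the defaults at the time)
    ceph tell osd.* injectargs '--osd-max-backfills 10 --osd-recovery-max-active 15'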

Re: [ceph-users] rbd format 2 && stripe-count != 1 cannot be mapped with rbd.ko kernel 3.13.5

2014-03-12 Thread Michael J. Kidd
Try with --pool instead of -p Sent from my mobile device. Please excuse brevity and typographical errors. On Mar 12, 2014 5:51 AM, "Kasper Dieter" wrote: > OK, > it seems during the rbd creation with --stripe-count != 1 > you have to follow the rule: stripe-unit * stripe-count = object-size > >
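
For illustration, a hedged rbd create that satisfies that rule (pool/image names and size are placeholders; --order 22 gives a 4 MB object size, and 524288 * 8 = 4194304 bytes matches it):

    rbd create rbd/stripetest --size 10240 --image-format 2 \
        --order 22 --stripe-unit 524288 --stripe-count 8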

Re: [ceph-users] rbd format 2 && stripe-count != 1 cannot be mapped with rbd.ko kernel 3.13.5

2014-03-12 Thread Michael J. Kidd
Disregard... I just saw your dmesg output.. Sent from my mobile device. Please excuse brevity and typographical errors. On Mar 12, 2014 7:51 AM, "Michael J. Kidd" wrote: > Try with --pool instead of -p > > Sent from my mobile device. Please excuse brevity and typographical

Re: [ceph-users] No more Journals ?

2014-03-14 Thread Michael J. Kidd
Journals will default to being on-disk with the OSD if there is nothing specified on the ceph-deploy line. If you have a separate journal device, then you should specify it per the original example syntax. Michael J. Kidd Sr. Storage Consultant Inktank Professional Services On Fri, Mar 14
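
To make the distinction concrete, a hedged pair of ceph-deploy invocations (hostnames and devices are placeholders):

    # journal co-located on the OSD disk (the default when nothing extra is specified)
    ceph-deploy osd create node1:/dev/sdb
    # journal on a separate device, e.g. an SSD partition
    ceph-deploy osd create node1:/dev/sdb:/dev/sdc1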

Re: [ceph-users] Get list of all RADOSGW users

2014-03-20 Thread Michael J. Kidd
How about this: rados ls -p .users.uid Your pool name may vary, but should contain the .users.uid extension. Michael J. Kidd Sr. Storage Consultant Inktank Professional Services On Thu, Mar 20, 2014 at 2:00 PM, Dane Elwell wrote: > Hi list, > > Is there a way to get a list of al
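
For illustration (the pool name varies per deployment, and newer radosgw releases also expose a metadata listing):

    # list user objects directly from the uid pool
    rados ls -p .users.uid
    # alternative via radosgw-admin metadata, where available
    radosgw-admin metadata list user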

Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

2014-03-24 Thread Michael J. Kidd
...with the cache pool itself, but I'm not terribly familiar with the feature support in the 3.14 kernel... Thanks, Michael J. Kidd Sr. Storage Consultant Inktank Professional Services On Mon, Mar 24, 2014 at 2:58 PM, Ирек Фасихов wrote: > Hi, Gregory! > I think that there is no int