Re: [ceph-users] Put Ceph Cluster Behind a Pair of LB

2014-03-14 Thread Larry Liu
Thanks for the replies. Found out it was a configuration issue on the F5. All smooth now. On Mar 12, 2014, at 12:33 PM, Kyle Bader wrote: >> This is in my lab. Plain passthrough setup with automap enabled on the F5. >> s3 & curl work fine as far as queries go. But file transfer rate degrades >> badly once I

Re: [ceph-users] Access Denied errors

2014-03-14 Thread Steve Carter
Yehuda, Thank you. We'll try that next. Would you happen to have any code samples (pref. perl) you wouldn't mind sharing for a couple of the admin API methods? -Steve - Original Message - > From: "Yehuda Sadeh" > To: "Steve Carter" > Cc: ceph-users@lists.ceph.com > Sent: Wednesday,
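For reference, a minimal shell sketch of one admin API call (user info) using curl and openssl rather than Perl; the host, uid, and keys are placeholders, and it assumes the gateway accepts standard S3-style HMAC-SHA1 request signing from a user with admin caps:

  # placeholders: substitute your gateway host and an admin user's keys
  ACCESS_KEY=ADMIN_ACCESS_KEY
  SECRET_KEY=ADMIN_SECRET_KEY
  HOST=rgw.example.com
  RESOURCE="/admin/user"
  DATE=$(date -Ru)
  # S3-style signature: HMAC-SHA1 over verb/md5/type/date/resource
  SIG=$(printf 'GET\n\n\n%s\n%s' "$DATE" "$RESOURCE" \
    | openssl sha1 -hmac "$SECRET_KEY" -binary | base64)
  curl -s -H "Date: $DATE" \
    -H "Authorization: AWS $ACCESS_KEY:$SIG" \
    "http://$HOST$RESOURCE?uid=johndoe&format=json"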

Re: [ceph-users] PG Calculations

2014-03-14 Thread Peter Matulis
On 03/14/2014 12:37 PM, Brian Andrus wrote: > To add on to Mark's thoughtful reply - The formula was intended to be > used on a *per-pool* basis for clusters that have a small number of > pools. However, in small or large clusters, you may consider scaling up > or down per Mark's suggestion, or usin
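A worked instance of that formula, as a sketch (the OSD count, replica count, and per-OSD target here are illustrative assumptions):

  # per-pool sizing: 100 OSDs, 3x replication, target ~100 PGs per OSD
  OSDS=100; REPLICAS=3; TARGET_PER_OSD=100
  echo $(( OSDS * TARGET_PER_OSD / REPLICAS ))   # 3333; round up to 4096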

Re: [ceph-users] Replication lag in block storage

2014-03-14 Thread Greg Poirier
We are stressing these boxes pretty spectacularly at the moment. On every box I have one OSD that is pegged for IO almost constantly. ceph-1:
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sdv 0.00 0.00
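The figures above look like extended iostat output; for anyone reproducing the measurement, a sketch (assumes the sysstat package is installed):

  # extended per-device statistics, refreshed every second
  iostat -x 1
  # one device near 100 %util while its peers sit idle points at a single
  # saturated OSD disk rather than a cluster-wide bottleneck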

Re: [ceph-users] help with ceph radosgw configure

2014-03-14 Thread Yehuda Sadeh
You might have a default web server set up in Apache. Remove it and restart Apache. On Fri, Mar 14, 2014 at 8:03 AM, wsnote wrote: > OS: CentOS 6.5 > version: ceph 0.67.7 > > I have configured radosgw and start it. > When I surfed https://hostname:65443/, I thought it should be >
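On CentOS 6 the stock Apache placeholder page is a common culprit; a sketch, assuming the distribution's default package paths:

  # disable the distribution's default welcome site
  mv /etc/httpd/conf.d/welcome.conf /etc/httpd/conf.d/welcome.conf.disabled
  service httpd restart
  /etc/init.d/ceph-radosgw restart   # init script name may differ per install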

Re: [ceph-users] Replication lag in block storage

2014-03-14 Thread Gregory Farnum
On Fri, Mar 14, 2014 at 9:37 AM, Greg Poirier wrote: > So, on the cluster that I _expect_ to be slow, it appears that we are > waiting on journal commits. I want to make sure that I am reading this > correctly: > > "received_at": "2014-03-14 12:14:22.659170", > > { "t
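The dump being discussed comes from the OSD admin socket; a sketch of collecting it, assuming the default socket path and osd.0 as the suspect OSD:

  # recent slowest ops with per-event timestamps; journal latency shows up
  # as the gap between received_at and the journal commit events
  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_historic_ops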

Re: [ceph-users] PG Calculations

2014-03-14 Thread Brian Andrus
To add on to Mark's thoughtful reply - The formula was intended to be used on a *per-pool* basis for clusters that have a small number of pools. However, in small or large clusters, you may consider scaling up or down per Mark's suggestion, or using a fixed amount per pool to keep the numbers (and r
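Once a per-pool value is chosen it is applied at creation time, or raised later; a sketch with a hypothetical pool name:

  # create a pool with the computed pg_num and pgp_num
  ceph osd pool create volumes 4096 4096
  # pg_num on an existing pool can be raised (not lowered)
  ceph osd pool set volumes pg_num 4096
  ceph osd pool set volumes pgp_num 4096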

Re: [ceph-users] Replication lag in block storage

2014-03-14 Thread Greg Poirier
So, on the cluster that I _expect_ to be slow, it appears that we are waiting on journal commits. I want to make sure that I am reading this correctly: "received_at": "2014-03-14 12:14:22.659170", { "time": "2014-03-14 12:14:22.660191", "event":

Re: [ceph-users] PG Calculations

2014-03-14 Thread Mark Nelson
My personal opinion on this (not necessarily the official Inktank position) is that I'd rather err on the side of too many PGs for small clusters, while I would probably prefer to err on the side of fewer (though not insanely so) PGs for larger clusters. I.e., I suspect that the difference bet

Re: [ceph-users] PG Calculations

2014-03-14 Thread Karol Kozubal
Dan, I think your interpretation is indeed correct. The documentation on this page looks to be saying this. http://ceph.com/docs/master/rados/operations/placement-groups/ Increasing the number of placement groups reduces the variance in per-OSD load across your cluster. We recommend approximate

Re: [ceph-users] PG Calculations

2014-03-14 Thread Dan Van Der Ster
Hi, Since you didn't get an immediate reply from a developer, I'm going to be bold and repeat my interpretation that the documentation implies, perhaps not clearly enough, that the 50-100 PGs per OSD rule should be applied for the total of all pools, not per pool. I hope a dev will correct me if
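Under the totals-based reading, a quick audit is to sum pg_num over all pools and compare against 50-100 times the OSD count; a sketch, assuming admin credentials on the node:

  total=0
  for pool in $(rados lspools); do
    pgs=$(ceph osd pool get "$pool" pg_num | awk '{print $2}')
    total=$(( total + pgs ))
  done
  echo "total PGs across all pools: $total"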

[ceph-users] help with ceph radosgw configure

2014-03-14 Thread wsnote
OS: CentOS 6.5 version: ceph 0.67.7 I have configured radosgw and started it. When I browsed to https://hostname:65443/, I thought it should be anonymous - But, what I saw is -
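A command-line check of the anonymous response is often clearer than a browser; a sketch, assuming a self-signed certificate on the gateway:

  # -k skips certificate verification for a self-signed cert
  curl -k https://hostname:65443/
  # a working gateway typically answers an unauthenticated GET / with an
  # empty ListAllMyBucketsResult XML document, not an Apache default page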

Re: [ceph-users] PG Scaling

2014-03-14 Thread Karol Kozubal
http://ceph.com/docs/master/rados/operations/placement-groups/ It's provided in the example calculation on that page. Karol On 2014-03-14, 10:37 AM, "Christian Kauhaus" wrote: >On 12.03.2014 18:54, McNamara, Bradley wrote: >> Round up your pg_num and pgp_num to the next power of 2, 2048. >

Re: [ceph-users] PG Scaling

2014-03-14 Thread Christian Kauhaus
On 12.03.2014 18:54, McNamara, Bradley wrote: > Round up your pg_num and pgp_num to the next power of 2, 2048. I'm wondering where the "power of two" rule comes from. I can't find it in the documentation. Moreover, the example at http://ceph.com/docs/master/rados/configuration/pool-pg-config-ref
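Whatever its origin, the rounding itself is mechanical; a small sketch:

  # round a PG count up to the next power of two, e.g. 2000 -> 2048
  next_pow2() {
    local n=$1 p=1
    while (( p < n )); do (( p <<= 1 )); done
    echo "$p"
  }
  next_pow2 2000   # -> 2048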

Re: [ceph-users] No more Journals ?

2014-03-14 Thread Michael J. Kidd
Journals will default to being on-disk with the OSD if there is nothing specified on the ceph-deploy line. If you have a separate journal device, then you should specify it per the original example syntax. Michael J. Kidd Sr. Storage Consultant Inktank Professional Services On Fri, Mar 14, 2014
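The two forms side by side; a sketch with hypothetical host and device names:

  # journal co-located on the OSD disk (the default when omitted)
  ceph-deploy osd prepare node1:sdb
  # journal on a separate device, per the original example syntax
  ceph-deploy osd prepare node1:sdb:sdc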

Re: [ceph-users] No more Journals ?

2014-03-14 Thread Jake Young
You should take a look at this blog post: http://ceph.com/community/ceph-performance-part-2-write-throughput-without-ssd-journals/ The test results show that using a RAID card with a write-back cache without journal disks can perform better than, or equivalent to, using journal disks with XFS. As to

Re: [ceph-users] No more Journals ?

2014-03-14 Thread Markus Goldberg
Sorry, I should have asked a bit more clearly: can Ceph (or the OSDs) be used without journals now? The journal parameter seems to be optional (because of the '[...]'). Markus On 14.03.2014 12:19, John Spray wrote: Journals have not gone anywhere, and ceph-deploy still supports specifying them wit

Re: [ceph-users] Remove volume

2014-03-14 Thread YIP Wai Peng
I had the same issue. I restarted Glance and tried removing with rbd snap rm <image>@snap. Some of them are marked protected, in which case you'd need to unprotect them first. - WP On Thursday, 13 March 2014, yalla.gnan.ku...@accenture.com < yalla.gnan.ku...@accenture.com> wrote: > Hi All, > > > > Any id
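The full sequence for a protected snapshot, with hypothetical pool, image, and snapshot names:

  # a protected snapshot refuses plain removal, so unprotect it first
  rbd snap unprotect volumes/volume-1234@snap
  rbd snap rm volumes/volume-1234@snap
  # once its snapshots are gone, the image itself can be removed
  rbd rm volumes/volume-1234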

Re: [ceph-users] if partition name changes, will ceph get corrupted?

2014-03-14 Thread YIP Wai Peng
Not sure if this answers your question, but when you start the OSD that's remapped, Ceph will not be able to find the correct key and will refuse to use that OSD. - WP On Thursday, 13 March 2014, Sidharta Mukerjee wrote: > If a partition name such as "/dev/sdd" changes to "/dev/sde" and ceph was > a

Re: [ceph-users] Re: Wrong PG nums

2014-03-14 Thread John Spray
It appears that pool_default_pg_num is respected during explicit pool creation (in OSDMonitor), but not in the default OSD map construction (OSDMap::build_simple respects osd_pg_bits instead). So it seems that it is normal, but not necessarily desirable. Arguably a bug. John On Fri, Mar 14, 201
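For anyone hitting this, the relevant settings live in ceph.conf; a sketch which, per the above, only affects pools created explicitly after startup, not the initial default pools:

  [global]
  # honored by "ceph osd pool create" when no PG count is given,
  # but not by the initial default pools (sized from osd_pg_bits)
  osd pool default pg num = 128
  osd pool default pgp num = 128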

Re: [ceph-users] No more Journals ?

2014-03-14 Thread John Spray
Journals have not gone anywhere, and ceph-deploy still supports specifying them with exactly the same syntax as before. The page you're looking at is the simplified "quick start"; the detail on OSD creation, including journals, is here: http://eu.ceph.com/docs/v0.77/rados/deployment/ceph-deploy-osd/

Re: [ceph-users] modprobe rbd fails in Emperor 0.72.2

2014-03-14 Thread Arne Wiebalck
On Mar 14, 2014, at 11:17 AM, Dan Koren wrote: > I have a fresh (clean?) Emperor 0.72.2 installation on RHEL 6.3. > When I try modprobe rbd I get "FATAL: Module rbd not found." > Suggestions much appreciated. Did you update your kernel to something that is new enough to actually include the rb
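A quick check of whether the running kernel ships the module at all; RHEL 6's stock 2.6.32 kernels predate the mainline rbd driver (merged in 2.6.37), so a newer kernel is the usual fix:

  uname -r
  # succeeds with module metadata only if rbd exists for this kernel;
  # otherwise it fails, matching the modprobe error above
  modinfo rbd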

[ceph-users] modprobe rbd fails in Emperor 0.72.2

2014-03-14 Thread Dan Koren
I have a fresh (clean?) Emperor 0.72.2 installation on RHEL 6.3. When I try modprobe rbd I get "FATAL: Module rbd not found." Suggestions much appreciated. Thx, dk Dan Koren, DATERA | 650.210.7910 | @dateranews | d...@datera.io

[ceph-users] No more Journals ?

2014-03-14 Thread Markus Goldberg
Hi, I'm a little bit surprised. I read through the new manuals for 0.77 (http://eu.ceph.com/docs/v0.77/start/quick-ceph-deploy/). In the section on creating the OSDs, the manual says: "Then, from your admin node, use ceph-deploy to prepare the OSDs." ceph-deploy osd prepare {ceph-node}:/pa

Re: [ceph-users] another assertion failure in monitor

2014-03-14 Thread Pawel Veselov
> This whole thing started with migrating from 0.56.7 to 0.72.2. First, we > started seeing failed assertions of (version == pg_map.version) in > PGMonitor.cc:273, but on one monitor (d) only. I attempted to resync the > failing monitor (d) with --force-sync from (c). (d) started to work, but >