On 08/04/15 15:16, J David wrote:
> Getting placement groups to be placed evenly continues to be a major
> challenge for us, bordering on impossible.
>
> When we first reported trouble with this, the Ceph cluster had 12
> OSDs (each an Intel DC S3700 400GB) spread across three nodes. Since
> then,
Getting placement groups to be placed evenly continues to be a major
challenge for us, bordering on impossible.
When we first reported trouble with this, the Ceph cluster had 12
OSDs (each an Intel DC S3700 400GB) spread across three nodes. Since
then, it has grown to 8 nodes with 38 OSDs.
The av
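For anyone else chasing the same imbalance, the per-OSD PG counts can be
pulled straight out of the cluster; a rough sketch (the JSON key names are as
of Firefly/Giant, and on Hammer "ceph osd df" reports per-OSD utilization
without any scripting):

# Count how many PGs include each OSD in their up set
ceph pg dump --format json 2>/dev/null | python -c '
import json, sys, collections
pgs = json.load(sys.stdin)["pg_stats"]
counts = collections.Counter(osd for pg in pgs for osd in pg["up"])
for osd, n in sorted(counts.items()):
    print("osd.%d: %d PGs" % (osd, n))
'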
Oh, you also need to turn off "mon_osd_adjust_down_out_interval"
On Tue, Apr 7, 2015 at 8:57 PM, lijian wrote:
>
> Haomai Wang,
>
> the mon_osd_down_out_interval is 300; please refer to my settings. I use
> the CLI 'service ceph stop osd.X' to stop an OSD.
> The PG status changes to remap, backfi
Hi Vickey,
Sorry about the issues you've been seeing. This looks very similar to
http://tracker.ceph.com/issues/11104 .
Here are two options you can try in order to work around this:
- If you must run Firefly (0.80.x) or Giant (0.87.x), please try
enabling the "epel-testing" repository on your s
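For the archives, enabling that repository looks roughly like this
(python-rados and python-rbd here are only examples; adjust to whatever
packages the issue affects on your system):

# One-off: pull updated packages from epel-testing without enabling it permanently
yum --enablerepo=epel-testing update python-rados python-rbd
# Or enable the repository persistently (requires yum-utils)
yum install -y yum-utils
yum-config-manager --enable epel-testing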
Hi,
Chris Kitzmiller wrote:
> I graph aggregate stats for `ceph --admin-daemon
> /var/run/ceph/ceph-osd.$osdid.asok perf dump`. If the max latency strays too
> far
> outside of my mean latency I know to go look for the troublemaker. My graphs
> look something like this:
>
> [...]
Thanks Chri
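For anyone who wants to spot-check those latency counters by hand before
wiring up graphs, they can be read straight off the admin socket; a minimal
sketch (osd.3 is a placeholder id, the counter names are as of Firefly/Giant,
and python -m json.tool is only used for pretty-printing):

# Average op latency for one OSD is sum/avgcount of the osd.op_latency counter
ceph --admin-daemon /var/run/ceph/ceph-osd.3.asok perf dump \
  | python -m json.tool | grep -A 3 '"op_latency"'
# Loop over every local OSD socket to find the outlier
for sock in /var/run/ceph/ceph-osd.*.asok; do
  echo "== $sock =="
  ceph --admin-daemon "$sock" perf dump | python -m json.tool | grep -A 3 '"op_latency"'
done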
Hello there,
I am trying to install Giant on CentOS 7 using ceph-deploy and encountered
the problem below.
[rgw-node1][DEBUG ] Package python-ceph is obsoleted by python-rados, but
obsoleting package does not provide for requirements
[rgw-node1][DEBUG ] ---> Package cups-libs.x86_64 1:1.6.3-17.el7 will
http://ceph.com/rpm-hammer/
Or,
ceph-deploy install --stable=hammer HOST
sage
On Tue, 7 Apr 2015, O'Reilly, Dan wrote:
> Where are the RPM repos for HAMMER?
>
> Dan O'Reilly
> UNIX Systems Administration
>
> 9601 S. Meridian Blvd.
> Englewood, CO 80112
> 720-514-6293
>
>
>
> -Origina
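For anyone who would rather drop in a plain yum repo file than use
ceph-deploy, a stanza along these lines points at the rpm-hammer packages
Sage mentioned (the el7 path and the gpg key URL follow the Get Packages
docs and may need adjusting for your distro):

cat > /etc/yum.repos.d/ceph.repo <<'EOF'
[ceph]
name=Ceph packages for $basearch
baseurl=http://ceph.com/rpm-hammer/el7/$basearch
enabled=1
gpgcheck=1
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
EOF
yum install ceph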
Where are the RPM repos for HAMMER?
Dan O'Reilly
UNIX Systems Administration
9601 S. Meridian Blvd.
Englewood, CO 80112
720-514-6293
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Sage
Weil
Sent: Tuesday, April 07, 2015 2:55 PM
To: ceph-ann
This major release is expected to form the basis of the next long-term
stable series. It is intended to supersede v0.80.x Firefly.
Highlights since Giant include:
* RADOS Performance: a range of improvements have been made in the
OSD and client-side librados code that improve the throughput on
I'm not sure about CentOS 7.0, but Ceph is not part of the 6.5 distro.
Sent from my iPhone
> On Apr 7, 2015, at 12:26 PM, Loic Dachary wrote:
>
>
>
>> On 07/04/2015 18:51, Bruce McFarland wrote:
>> Loic,
>> You're not mistaken; the pages are listed under the Installation (Manual)
>> link:
>>
On 07/04/2015 18:51, Bruce McFarland wrote:
> Loic,
> You're not mistaken; the pages are listed under the Installation (Manual) link:
>
> http://ceph.com/docs/master/install/
>
> You'll see the first link is the "Get Packages" link which takes you to:
>
> http://ceph.com/docs/master/install/get
Loic,
You're not mistaken; the pages are listed under the Installation (Manual) link:
http://ceph.com/docs/master/install/
You'll see the first link is the "Get Packages" link which takes you to:
http://ceph.com/docs/master/install/get-packages/
This page contains the details on setting up your
I'm not having much luck here. Is there a possibility that the imported PGs
aren't being picked up because the MONs think that they're older than the empty
PGs I find on the up OSDs?
I feel that I'm so close to *not* losing my RBD volume because I only have two
bad PGs and I've successfully exp
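In case it helps anyone searching the archives later, the export/import cycle
being described uses ceph-objectstore-tool roughly like this (the PG id 1.2f,
the OSD ids, and the paths are all placeholders, and both OSDs must be
stopped while the tool runs):

# On an OSD that still holds a good copy of the PG
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-7 \
    --journal-path /var/lib/ceph/osd/ceph-7/journal \
    --pgid 1.2f --op export --file /tmp/1.2f.export
# On the target OSD, import the saved copy
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-3 \
    --journal-path /var/lib/ceph/osd/ceph-3/journal \
    --op import --file /tmp/1.2f.export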
Hi guys,
I'm investigating rados object latency (with
/usr/bin/time -f"%e" rados -p chunks get $chk /dev/shm/test/test.file ).
Objects are around 7MB +/-1MB in size. Results show that 0.50% of objects
are fetched from the cluster in 1-4 seconds, and the rest of the objects are
good, below 1sec (test is
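For reference, the slow tail can be eyeballed without any graphing by timing
a sample of gets and sorting the results; a minimal sketch, assuming the
object names are collected into objects.txt first:

rados -p chunks ls | head -1000 > objects.txt   # sample of object names
# /usr/bin/time writes the elapsed seconds to stderr, so collect that
while read -r obj; do
  /usr/bin/time -f "%e $obj" rados -p chunks get "$obj" /dev/null 2>> latency.log
done < objects.txt
sort -rn latency.log | head -20                 # the 20 slowest fetches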
Hi folks,
I would really appreciate it if someone could try "rados cppool
"
command on their Hammer ceph cluster. It throws an error for me, not
sure if this is
an upstream issue or something related to our distro only.
error trace- http://pastebin.com/gVkbiPLa
This works fine for me in my fire
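For anyone else trying to reproduce this, the invocation is simply the
following (srcpool/dstpool are hypothetical names, and the destination pool
has to exist before the copy):

ceph osd pool create dstpool 64    # 64 PGs is just an example
rados cppool srcpool dstpool
rados df                           # compare object counts afterwards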
I spend a bunch of time figuring out ways to graph dense data sets for my
monitoring, and I have to say that graph is a thing of beauty. I'll
definitely be adding something similar to my ceph cluster monitoring
deployment.
QH
On Mon, Apr 6, 2015 at 10:36 PM, Chris Kitzmiller wrote:
> On Apr 6,
Haomai Wang,
The mon_osd_down_out_interval is 300; please refer to my settings. I use
the CLI 'service ceph stop osd.X' to stop an OSD.
The PG status changes to remap, backfill, and recovering ... immediately,
so is there something wrong with my settings or operation?
Thanks,
Jian Ji
At 2
Whatever version you tested, Ceph won't start recovering data immediately
when you manually stop an OSD. It will only mark the down OSD out
after "mon_osd_down_out_interval" seconds have passed.
On Tue, Apr 7, 2015 at 8:33 PM, lijian wrote:
> Hi,
> The recovering start delay 300s after I stop a osd and th
Hi,
Recovery starts with a 300s delay after I stop an OSD and the OSD status changes
from in to out; the test environment is Ceph 0.80.7.
But when I test on Ceph 0.87.1, recovery starts immediately after I stop an
OSD. All the settings are the default values. The following are the mon_osd* settings in
my test environment:
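For reference, the effective values can be read off a running monitor's
admin socket, assuming the default socket path; "mon.a" below is only a
placeholder for changing the interval at runtime:

ceph --admin-daemon /var/run/ceph/ceph-mon.$(hostname -s).asok config show | grep mon_osd
ceph tell mon.a injectargs '--mon_osd_down_out_interval 300'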
study ceph
liqinc...@unissoft-nj.com
On Wed, Mar 25, 2015 at 6:37 PM, Robert LeBlanc wrote:
> As far as the foreign journal, I would run dd over the journal
> partition and try it again. It sounds like something didn't get
> cleaned up from a previous run.
I wrote zeros on the journal device and re-created the journal with
"ceph-osd --m
Hi Bruce,
On 07/04/2015 02:40, Bruce McFarland wrote:
> I'm not sure exactly what your steps were, but I reinstalled a monitor
> yesterday on Centos 6.5 using ceph-deploy with the /etc/yum.repos.d/ceph.repo
> from ceph.com which I've included below.
> Bruce
That's what I also ended up doing.