Re: [ceph-users] ceph recovery killing vms

2013-11-05 Thread Kevin Weiler
Thanks Kurt,

I wasn't aware of the second page; it has been very helpful. However, osd
recovery max chunk doesn't list a unit:


osd recovery max chunk

Description: The maximum size of a recovered chunk of data to push.
Type:        64-bit Integer Unsigned
Default:     1 << 20



I assume this is in bytes.
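
For reference, my own arithmetic under that assumption:

    1 << 20 = 1,048,576 bytes = 1 MiB      (documented default)
    8388608 = 8 x 1,048,576 bytes = 8 MiB  (the value Kyle suggested)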


--
Kevin Weiler
IT

IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL 60606 | 
http://imc-chicago.com/
Phone: +1 312-204-7439 | Fax: +1 312-244-3301 | E-Mail: 
kevin.wei...@imc-chicago.com

From: Kurt Bauer <kurt.ba...@univie.ac.at>
Date: Tuesday, November 5, 2013 2:52 PM
To: Kevin Weiler <kevin.wei...@imc-chicago.com>
Cc: "ceph-users@lists.ceph.com" <ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] ceph recovery killing vms



Kevin Weiler wrote:
> Thanks Kyle,
>
> What's the unit for osd recovery max chunk?

Have a look at http://ceph.com/docs/master/rados/configuration/osd-config-ref/,
where all the possible OSD config options are described; have a look especially
at the backfilling and recovery sections.

> Also, how do I find out what my current values are for these osd options?

Have a look at
http://ceph.com/docs/master/rados/configuration/ceph-conf/#viewing-a-configuration-at-runtime

Best regards,
Kurt


Re: [ceph-users] ceph recovery killing vms

2013-11-05 Thread Kurt Bauer


Kevin Weiler wrote:
> Thanks Kyle,
>
> What's the unit for osd recovery max chunk?
Have a look at
http://ceph.com/docs/master/rados/configuration/osd-config-ref/, where
all the possible OSD config options are described; have a look
especially at the backfilling and recovery sections.
>
> Also, how do I find out what my current values are for these osd options?
Have a look at
http://ceph.com/docs/master/rados/configuration/ceph-conf/#viewing-a-configuration-at-runtime
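
A concrete way to check a running OSD (a minimal sketch, assuming the default
admin socket path on the OSD host) is the admin socket command that page
describes:

    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep osd_recovery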

Best regards,
Kurt

-- 
Kurt Bauer 
Vienna University Computer Center - 

Re: [ceph-users] ceph recovery killing vms

2013-11-05 Thread Kevin Weiler
Thanks Kyle,

What's the unit for osd recovery max chunk?

Also, how do I find out what my current values are for these osd options?

--

Kevin Weiler

IT


IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL
60606 | http://imc-chicago.com/

Phone: +1 312-204-7439 | Fax: +1 312-244-3301 | E-Mail:
kevin.wei...@imc-chicago.com









Re: [ceph-users] ceph recovery killing vms

2013-10-30 Thread Karan Singh
Hello Rzk,

Would you like to share your experience with this problem and how you solved
it? This sounds interesting.

Regards 
Karan Singh 



Re: [ceph-users] ceph recovery killing vms

2013-10-29 Thread Rzk
Thanks guys,
after testing it on a dev server, I have implemented the new config in the prod
system. Next I will upgrade the hard drives. :)

Thanks again, all.




Re: [ceph-users] ceph recovery killing vms

2013-10-29 Thread Kyle Bader
Recovering from a degraded state by copying existing replicas to other OSDs
is going to cause reads on existing replicas and writes to the new
locations. If you have slow media then this is going to be felt more
acutely. Tuning the backfill options I posted is one way to lessen the
impact; another option is to slowly lower the CRUSH weight of the OSD(s) you
want to remove, as sketched below. Hopefully that helps!
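
For illustration, a minimal sketch of that gradual reweighting (osd.12 and the
weight steps are made-up values; let recovery settle between steps):

    ceph osd crush reweight osd.12 0.8
    # wait for the cluster to return to HEALTH_OK, then lower further
    ceph osd crush reweight osd.12 0.4
    ceph osd crush reweight osd.12 0.0
    # with the weight at 0 and the data drained, the OSD can be marked out and removed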

-- 

Kyle


Re: [ceph-users] ceph recovery killing vms

2013-10-29 Thread Kurt Bauer
Hi,

maybe you want to have a look at the following thread:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-October/005368.html
Could be that you suffer from the same problems.

best regards,
Kurt

Rzk wrote:
> Hi all,
>
> I have the same problem, just curious.
> could it be caused by poor hdd performance  ?
> read/write speed doesn't match the network speed ?
>
> Currently i'm using desktop hdd in my cluster. 
>
> Rgrds,
> Rzk


Re: [ceph-users] ceph recovery killing vms

2013-10-28 Thread Rzk
Hi all,

I have the same problem, just curious:
could it be caused by poor hdd performance?
Read/write speed that doesn't match the network speed?

Currently I'm using desktop hdds in my cluster.

Rgrds,
Rzk






Re: [ceph-users] ceph recovery killing vms

2013-10-28 Thread Kyle Bader
You can change some OSD tunables to lower the priority of backfills:

osd recovery max chunk:   8388608
osd recovery op priority: 2

In general a lower op priority means it will take longer for your
placement groups to go from degraded to active+clean; the idea is to
balance recovery time against not starving client requests. I've found 2
to work well on our clusters, YMMV.
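
For anyone following along, a minimal sketch of how these could be set
(assuming a standard ceph.conf and cluster admin access; the injectargs call
simply avoids restarting the OSDs):

    # ceph.conf, [osd] section on each OSD host
    [osd]
        osd recovery max chunk   = 8388608
        osd recovery op priority = 2

    # or inject into all running OSDs at runtime
    ceph tell osd.* injectargs '--osd-recovery-max-chunk 8388608 --osd-recovery-op-priority 2'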




-- 

Kyle


[ceph-users] ceph recovery killing vms

2013-10-28 Thread Kevin Weiler
Hi all,

We have a ceph cluster that is being used as a backing store for several VMs
(Windows and Linux). We notice that when we reboot a node, the cluster enters a
degraded state (which is expected), but when it begins to recover, it starts
backfilling and kills the performance of our VMs. The VMs run slowly, or not
at all, and also seem to switch their ceph mounts to read-only. I was wondering
two things:


  1.  Shouldn't we be recovering instead of backfilling? It seems like
backfilling is a much more intensive operation.
  2.  Can we improve the recovery/backfill performance so that our VMs don't go
down when there is a problem with the cluster?

--
Kevin Weiler
IT

IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL 60606 | 
http://imc-chicago.com/
Phone: +1 312-204-7439 | Fax: +1 312-244-3301 | E-Mail: 
kevin.wei...@imc-chicago.com


