[ceph-users] ceph kernel settings

2015-07-06 Thread Daniel Hoffman
Hey all. Wondering if anyone has a set of kernel settings they run on larger-density setups, 24-36 disks per node. We have run into and resolved the PID issue; just wondering if there is anything else we may be hitting that we don't know about yet. Thanks Daniel
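For reference, a minimal sketch of the sysctl settings that tend to come up for dense OSD nodes; the values here are illustrative assumptions, not tuned recommendations:

    # /etc/sysctl.d/90-ceph.conf -- illustrative values only
    # the "PID issue": each OSD daemon spawns hundreds of threads
    kernel.pid_max = 4194303
    kernel.threads-max = 4194303
    # plenty of file handles and aio contexts for 24-36 OSDs per node
    fs.file-max = 26234859
    fs.aio-max-nr = 1048576
    # keep OSD memory out of swap and leave allocation headroom
    vm.swappiness = 0
    vm.min_free_kbytes = 524288

Load without a reboot via sysctl -p /etc/sysctl.d/90-ceph.conf (or sysctl --system).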

Re: [ceph-users] "too many PGs per OSD" in Hammer

2015-05-08 Thread Daniel Hoffman
Is there a way to shrink/merge PGs on a pool without removing it? I have a pool with some data in it, but the PGs were miscalculated, and I am just wondering the best way to resolve it. On Fri, May 8, 2015 at 4:49 PM, Somnath Roy wrote: > Sorry, I didn’t read through all..It seems you have 6 OSDs,
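For anyone landing here later: in the Hammer era pg_num can only be increased, so the usual workaround is to copy the data into a fresh pool created with the intended PG count. A rough sketch with hypothetical pool names (note that rados cppool does not preserve snapshots and the pool should be quiesced first):

    ceph osd pool create mypool-new 256 256
    rados cppool mypool mypool-new
    ceph osd pool delete mypool mypool --yes-i-really-really-mean-it
    ceph osd pool rename mypool-new mypool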

Re: [ceph-users] Shadow Files

2015-05-10 Thread Daniel Hoffman
Any updates on when this is going to be released? Daniel On Wed, May 6, 2015 at 3:51 AM, Yehuda Sadeh-Weinraub wrote: > Yes, so it seems. The librados::nobjects_begin() call expects at least a > Hammer (0.94) backend. Probably need to add a try/catch there to catch this > issue, and maybe see i
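For context, the cleanup being discussed here is, as far as I can tell, the orphan-scanning work that later surfaced in radosgw-admin; a sketch of that interface, with hypothetical pool and job names:

    radosgw-admin orphans find --pool=.rgw.buckets --job-id=shadow-cleanup
    radosgw-admin orphans finish --job-id=shadow-cleanup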

[ceph-users] civetweb lockups

2015-05-10 Thread Daniel Hoffman
Hi All. We have a weird issue where civetweb just locks up: it fails to respond to HTTP, and a restart resolves the problem. This happens anywhere from every 60 seconds to every 4 hours with no obvious trigger. We have run the gateway in full debug mode and there is nothing there that seems
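One knob worth checking when civetweb stops answering is its worker thread count, configured on the frontends line in ceph.conf; a sketch with an arbitrary example value, where <name> is a placeholder for the gateway instance:

    [client.radosgw.<name>]
    rgw frontends = civetweb port=7480 num_threads=512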

Re: [ceph-users] Shadow Files

2015-05-11 Thread Daniel Hoffman
Thanks. Can you please let me know the suitable/best git version/tree to be pulling to compile and use this feature/patch? Thanks On Tue, May 12, 2015 at 4:38 AM, Yehuda Sadeh-Weinraub wrote: > > > -- > > *From: *"Daniel Hoffman" >

[ceph-users] radosgw load/performance/crashing

2015-05-25 Thread Daniel Hoffman
Hi All. We are trying to cope with radosGW crashing every 5-15 minutes. This seems to be getting worse and worse, but we are unable to determine the cause; there is nothing in the logs, as it appears to be a radosgw hang. The port is open and accepts a connection, but there is no response to a HEAD/GET etc. We ar
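A sketch of how one might capture more detail around the hang, using the standard ceph.conf debug levels (20 is maximum verbosity and very noisy, so drop it back afterwards); <name> is a placeholder:

    [client.radosgw.<name>]
    debug rgw = 20
    debug ms = 1

The same can be toggled at runtime through the admin socket:

    ceph daemon /var/run/ceph/ceph-client.radosgw.<name>.asok config set debug_rgw 20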

[ceph-users] Multi-Object delete and RadosGW

2015-05-25 Thread Daniel Hoffman
Has anyone come across a problem with multi-object deletes? We have a number of systems that we think are sending big piles of POST/XML multi-object deletes. Has anyone had any experience with this locking up civetweb or apache/fast_cgi threads? Are there any tunable settings we could u
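For background: a multi-object delete arrives as a single POST ?delete request with an XML body listing keys, so one request can fan out into thousands of RADOS operations and hold a frontend thread for the whole time. The tunables usually mentioned for this are the gateway's thread counts; the values below are illustrative assumptions:

    [client.radosgw.<name>]
    rgw thread pool size = 512
    rgw frontends = civetweb port=7480 num_threads=512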

[ceph-users] bucket cleanup speed

2014-11-14 Thread Daniel Hoffman
Hi All. Running a Ceph cluster (Firefly, ceph version 0.80.5). We use Ceph mainly for backups via the radosGW at the moment. An account had to be deleted and a bucket removed which had a very large number of objects and was about 60TB in size. We have been monitoring it for days now, and the

Re: [ceph-users] bucket cleanup speed

2014-11-14 Thread Daniel Hoffman
No one has had this problem? I found a forum/mailing-list post from 2013 with the same issue, but no responses there either. Any pointers appreciated. Daniel On 2014-11-14 20:20, Daniel Hoffman wrote: Hi All. Running a Ceph Cluster (firefly) ceph version 0.80.5 We use ceph mainly for backups via the

Re: [ceph-users] bucket cleanup speed

2014-11-15 Thread Daniel Hoffman
removal of the newly deleted objects way quicker. Keep us posted on whether it has improved anything. JC On Nov 14, 2014, at 01:20, Daniel Hoffman wrote: Hi All. Running a Ceph Cluster (firefly) ceph version 0.80.5 We use ceph mainly for backups via the radosGW at the moment. There

Re: [ceph-users] bucket cleanup speed

2014-11-15 Thread Daniel Hoffman
essor cycle time You may want to reduce the gc cycle time (and match the total run time also). Yehuda On Sat, Nov 15, 2014 at 3:23 AM, Daniel Hoffman wrote: Thanks. We have that set: rgw gc max objs = 997. The problem we have is that we have Commvault connected to the cluster. What commvault do
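For context, the gc knobs under discussion all live in ceph.conf; a sketch with illustrative values (a shorter period and run time make gc visit its shards more often):

    [client.radosgw.<name>]
    rgw gc max objs = 997
    rgw gc obj min wait = 3600
    rgw gc processor period = 600
    rgw gc processor max time = 600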

Re: [ceph-users] bucket cleanup speed

2014-11-16 Thread Daniel Hoffman
`; echo $diff ; done 595 607 606 On 2014-11-16 11:15, Daniel Hoffman wrote: We have managed to get it running with the settings below. rgw gc max objs = 7877 rgw gc processor period = 600 We now have a higher IOPS number on the GC pool in the dashboard. We are not sure if it's making a huge differenc
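A rough way to watch whether gc is keeping up is to look at the pending list directly and force extra passes; a sketch (the grep is only a crude object count):

    radosgw-admin gc list --include-all | grep -c '"oid"'
    radosgw-admin gc process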

[ceph-users] Using LVM on top of a RBD.

2015-11-03 Thread Daniel Hoffman
Hi All. I have a legacy server farm made up of 7 nodes running KVM and using LVM (LVs) for the disks of the virtual machines. The nodes are currently CentOS 6. We would love to remove this small farm from our network and use Ceph RBD instead of the traditional iSCSI block devices we currently use
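As a sketch of the LVM-on-RBD direction (pool, image, and VG names are hypothetical): LVM ignores /dev/rbd* until its accepted device types are extended in lvm.conf, and the older el6 kernel RBD client is worth verifying before committing to mapping images on the KVM nodes.

    # /etc/lvm/lvm.conf, devices section: teach LVM about rbd devices
    #   types = [ "rbd", 1024 ]
    rbd create vmpool/vmdisks --size 102400    # size in MB on older clients
    rbd map vmpool/vmdisks                     # appears as /dev/rbd0
    pvcreate /dev/rbd0
    vgcreate vg_vms /dev/rbd0
    lvcreate -L 40G -n vm01-disk vg_vms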