Re: [ceph-users] Continuing placement group problems

2014-06-26 Thread kevin horan
On 06/26/2014 01:08 PM, Gregory Farnum wrote: On Thu, Jun 26, 2014 at 12:52 PM, Kevin Horan kho...@globalrecordings.net wrote: I am also getting inconsistent object errors on a regular basis, about one or two per week for roughly 300GB of data. All OSDs are using XFS filesystems. Some OSDs
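For reference, the usual repair workflow for scrub inconsistencies looks roughly like the sketch below; the PG ID 0.6 is a placeholder, and note that ceph pg repair trusts the primary's copy, so check which replica is actually bad before running it.

    # find the placement groups that failed scrub
    ceph health detail | grep inconsistent
    # re-verify, then repair from the primary's copy
    ceph pg deep-scrub 0.6
    ceph pg repair 0.6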

Re: [ceph-users] cannot revert lost objects

2014-05-15 Thread Kevin Horan
but the operation just hangs. Kevin On 5/1/14 10:11, kevin horan wrote: Here is how I got into this state. I have only 6 OSDs total, 3 on one host (vashti) and 3 on another (zadok). I set the noout flag so I could reboot zadok. Zadok was down for 2 minutes. When it came up ceph
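The noout procedure referenced here, sketched with the thread's hostnames:

    # keep CRUSH from re-replicating while zadok reboots
    ceph osd set noout
    # ...reboot zadok, wait for its OSDs to rejoin...
    ceph osd unset noout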

Re: [ceph-users] cannot revert lost objects

2014-05-07 Thread Kevin Horan
was moving from degraded to active+clean, it finally finished probing. If it's still happening tomorrow, I'd try to find a Geek on Duty on IRC (http://ceph.com/help/community/). On 5/3/14 09:43, Kevin Horan wrote: Craig, Thanks for your response. I have already marked osd.6 as lost
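For the archives, the commands under discussion go roughly as below; the PG ID 2.5 is a placeholder, and revert rolls unfound objects back to their last-known version rather than recovering them.

    # declare osd.6's data permanently gone
    ceph osd lost 6 --yes-i-really-mean-it
    # find the PGs still reporting unfound objects
    ceph health detail | grep unfound
    # give up on the unfound objects, reverting to older copies
    ceph pg 2.5 mark_unfound_lost revert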

[ceph-users] OSD on an external, shared device

2013-11-27 Thread kevin horan
I am working with a small test cluster, but the problems described here will remain in production. I have an external Fibre Channel storage array and have exported two 3TB disks (just as JBODs). I can use ceph-deploy to create an OSD for each of these disks on a node named Vashti. So far
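Presumably the ceph-deploy invocations were along these lines (the old host:disk syntax of that era; the device names are guesses):

    # one OSD per exported 3TB LUN, journal colocated on the same disk
    ceph-deploy osd create vashti:sdb
    ceph-deploy osd create vashti:sdc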

Re: [ceph-users] OSD on an external, shared device

2013-11-27 Thread Kevin Horan
Thanks. I may have to go this route, but it seems awfully fragile. One stray command could destroy the entire cluster, replicas and all. Since all disks are visible to all nodes, any one of them could mount everything, corrupting all OSDs at once. Surely other people are using external FC
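A sanity check worth running before touching any shared LUN, assuming the ceph-disk tool of that era:

    # on each host that can see the device: does it already hold an OSD?
    ceph-disk list
    # and is anything mounted from it?
    mount | grep /var/lib/ceph/osd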

Re: [ceph-users] OSD on an external, shared device

2013-11-27 Thread Kevin Horan
Ah, that sounds like what I want. I'll look into that, thanks. Kevin On 11/27/2013 11:37 AM, LaSalle, Jurvis wrote: Is LUN masking an option in your SAN? On 11/27/13, 2:34 PM, Kevin Horan kho...@cs.ucr.edu wrote: Thanks. I may have to go this route, but it seems awfully fragile. One stray
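LUN masking itself is configured on the array, so the syntax is vendor-specific. If the array cannot do it, one host-side stopgap is a dm-multipath blacklist so a node never assembles the other host's LUN; the WWID below is a placeholder:

    # /etc/multipath.conf on vashti -- hide zadok's LUN
    blacklist {
        wwid 36001405abcdef0123456789abcdef012
    }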
