On 2015-01-10 03:21, Gregory Farnum wrote:
On Fri, Jan 9, 2015 at 2:00 AM, Nico Schottelius wrote:
Lionel, Christian,
we have exactly the same trouble as Christian,
namely
Christian Eichelmann [Fri, Jan 09, 2015 at 10:43:20AM +0100]:
We still don't know what caused this specific error...
On Fri, Jan 9, 2015 at 3:00 AM, Nico Schottelius wrote:
> Even though I do not like the fact that we lost a pg for
> an unknown reason, I would prefer ceph to handle that case and recover to
> the best possible state.
>
> Namely, I wonder if we can integrate a tool that shows
> which (parts of)
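
Not from the thread itself, but as a rough sketch of the kind of reporting
tool Nico is asking for, assuming the stock ceph CLI with JSON output (field
names vary a bit between releases, so this is illustrative only):

import json
import subprocess

def ceph_json(*args):
    # Run a ceph command and parse its JSON output.
    out = subprocess.check_output(("ceph",) + args + ("--format", "json"))
    return json.loads(out.decode())

def main():
    # Map pool id -> pool name so a PG id like "4.3fa" can be labelled.
    pools = {p["poolnum"]: p["poolname"] for p in ceph_json("osd", "lspools")}

    stuck = ceph_json("pg", "dump_stuck", "inactive")
    if isinstance(stuck, dict):
        # Some releases wrap the list in a dict; others return it directly.
        stuck = stuck.get("stuck_pg_stats", stuck.get("pg_stats", []))

    for pg in stuck:
        pool = pools.get(int(pg["pgid"].split(".")[0]), "unknown pool")
        print("%s (pool %s): state=%s acting=%s"
              % (pg["pgid"], pool, pg["state"], pg.get("acting", [])))

if __name__ == "__main__":
    main()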
Lionel, Christian,
we have exactly the same trouble as Christian,
namely
Christian Eichelmann [Fri, Jan 09, 2015 at 10:43:20AM +0100]:
> We still don't know what caused this specific error...
and
> ...there is currently no way to make ceph forget about the data of this pg
> and create it as an empty one.
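
For context (and not a claim that it would have helped here): the commands
usually suggested for abandoning an incomplete PG are roughly the following.
All of them discard data and their behaviour differs between releases, which
is why this sketch only prints them; the PG id and OSD id are made up.

# Hypothetical ids; replace with values from "ceph health detail".
PGID = "4.3fa"
LOST_OSD = 23

commands = [
    # Ask the PG itself why it is incomplete / which OSDs it is waiting for.
    "ceph pg %s query" % PGID,
    # Declare the missing OSD permanently gone (discards its data).
    "ceph osd lost %d --yes-i-really-mean-it" % LOST_OSD,
    # Recreate the PG as empty (discards whatever was stored in it).
    "ceph pg force_create_pg %s" % PGID,
]

for cmd in commands:
    print(cmd)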
Hi Lionel,
we have a ceph cluster with about 1 PB in total, 12 OSDs with 60 disks,
divided into 4 racks in 2 rooms, all connected with a dedicated 10G
cluster network. Of course with a replication level of 3.
We did about 9 months of intensive testing. Just like you, we never
experienced that kind of problem.
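
Back-of-the-envelope, assuming the "about 1 PB" figure is raw capacity:

raw_pb = 1.0      # raw capacity from the mail, in PB
replicas = 3      # replication level from the mail
usable_pb = raw_pb / replicas
print("net capacity roughly %.2f PB (about %d TB), before full-ratio headroom"
      % (usable_pb, usable_pb * 1000))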
Hi Nico.
If you are experiencing such issues it would be good if you provide more info
about your deployment: ceph version, kernel versions, OS, filesystem btrfs/xfs.
Thx Jiri
- Reply message -
From: "Nico Schottelius"
To:
Subject: [ceph-users] Is ceph production ready? [was: Ceph PG Incomplete
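
A small sketch for collecting the details Jiri asks for; the paths and
commands below are the usual defaults and may differ per distro:

import subprocess

def run(cmd):
    # Shell out and return the output, or a note if the command is unavailable.
    try:
        return subprocess.check_output(cmd, shell=True).decode().strip()
    except Exception as exc:
        return "unavailable (%s)" % exc

print("ceph version : %s" % run("ceph --version"))
print("kernel       : %s" % run("uname -r"))
print("OS           : %s" % run("head -n 1 /etc/os-release"))
# Filesystem (btrfs/xfs) backing the OSD data dirs, assuming the default path.
print("OSD fs       : %s" % run("mount | grep /var/lib/ceph/osd"))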
Hello Dan,
it is good to know that there are actually people using ceph + qemu in
production!
Regarding replicas: I thought about using size = 2, but I see that
this resembles RAID 5, and size = 3 is more or less equivalent to RAID 6
in terms of loss tolerance.
Regarding the kernel panics: I am still researching
Hi Nico,
Yes, Ceph is production ready. Yes, people are using it in production for qemu.
Last time I heard, Ceph was surveyed as the most popular backend for OpenStack
Cinder in production.
When using RBD in production, it really is critically important to (a) use 3
replicas and (b) pay attention
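
For reference, point (a) maps to the pool's "size" setting. A sketch, with
"rbd" as an assumed pool name; min_size = 2 is a common companion setting,
not something stated in the mail:

import subprocess

POOL = "rbd"  # hypothetical pool name, adjust to your own pools

# Keep three copies of every object, and refuse I/O with fewer than two.
for key, value in (("size", "3"), ("min_size", "2")):
    subprocess.check_call(["ceph", "osd", "pool", "set", POOL, key, value])

# Verify the setting took effect.
subprocess.check_call(["ceph", "osd", "pool", "get", POOL, "size"])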
On 12/30/14 16:36, Nico Schottelius wrote:
Good evening,
we also tried to rescue data *from* our old / broken pool by mapping the
rbd devices, mounting them on a host and rsync'ing away as much as
possible.
However, after some time rsync got completely stuck and eventually the
host which mounted the rbd-mapped devices decided to kernel panic.
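
A sketch of that rescue workflow with an rsync I/O timeout, so a hung
transfer at least aborts instead of sitting forever; the pool, image,
mountpoint and target names are made up, and of course none of this can
prevent the kernel panic itself:

import subprocess

POOL, IMAGE = "old-pool", "vm-disk-1"          # hypothetical names
MOUNTPOINT = "/mnt/rescue"
TARGET = "rescue-host:/srv/rescue/vm-disk-1/"  # hypothetical rsync target

# rbd map prints the block device it created, e.g. /dev/rbd0.
dev = subprocess.check_output(["rbd", "map", "%s/%s" % (POOL, IMAGE)]).decode().strip()
try:
    subprocess.check_call(["mount", "-o", "ro", dev, MOUNTPOINT])
    # --timeout makes rsync give up after 300s without I/O instead of hanging.
    subprocess.check_call(["rsync", "-a", "--timeout=300",
                           MOUNTPOINT + "/", TARGET])
finally:
    subprocess.call(["umount", MOUNTPOINT])
    subprocess.call(["rbd", "unmap", dev])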