> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Patrick Donnelly
> Sent: Wednesday, 11 January 2017 5:24 PM
> To: Kevin Olbrich
> Cc: Ceph Users
> Subject: Re: [ceph-users] Review of Ceph on ZFS - or how not to deploy Ceph
> for RBD + OpenStack
>
Hello Kevin,
On Tue, Jan 10, 2017 at 4:21 PM, Kevin Olbrich wrote:
> 5x Ceph node equipped with 32GB RAM, Intel i5, Intel DC P3700 NVMe journal,
Is the "journal" used as a ZIL?
> We experienced a lot of blocked I/O (X requests blocked > 32 sec) when a lot
> of data is changed in cloned RBDs (disk
On 11/01/2017 7:21 AM, Kevin Olbrich wrote:
> Read-Cache using normal Samsung PRO SSDs works very well
How did you implement the cache and measure the results?
A ZFS SSD cache will perform very badly with VM hosting and/or
distributed filesystems; the random nature of the I/O and the ARC cache
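If you want to check where reads are actually being served from, one way
(pool and device names here are made up) is to add the L2ARC device and then
watch the hit rates under a realistic load:

    # add an SSD partition as an L2ARC read cache
    zpool add tank cache /dev/sdb1

    # per-interval ARC/L2ARC hit statistics (arcstat ships with ZFS on Linux)
    arcstat 5

    # one-shot summary, including L2ARC size and hit ratio
    arc_summary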
Dear Ceph-users,
Just to make sure nobody else makes the same mistake, I would like to share my
experience with Ceph on ZFS in our test lab.
ZFS is a copy-on-write filesystem and, IMHO, is a good fit where data
resilience has high priority.
I work for a mid-sized datacenter in Germany and we set up a cluster us