Hi
On 27 January 2019 18:20:24 CET, Will Dennis wrote:
>Been reading "Learning Ceph - Second Edition"
>(https://learning.oreilly.com/library/view/learning-ceph-/9781787127913/8f98bac7-44d4-45dc-b672-447d162ea604.xhtml)
>and in Ch. 4 I read this:
>
>"We've noted that Ceph OSDs built with the
The hope is to be able to provide scale-out storage that will be performant
enough to use as a primary fs-based data store for research data (right now we
mount via NFS on our cluster nodes; we may do that with Ceph, or perhaps do
native CephFS access from the cluster nodes.) Right now I’m still
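For the native CephFS access option, one minimal sketch is a kernel-client mount via /etc/fstab — all names here (monitor hosts, mount point, client name, secret file path) are placeholders, not from this thread:

```
# /etc/fstab — native CephFS kernel mount (all names are placeholders)
mon1,mon2,mon3:/  /mnt/cephfs  ceph  name=clusteruser,secretfile=/etc/ceph/clusteruser.secret,noatime,_netdev  0 0
```

The _netdev option keeps the mount from being attempted before networking is up.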
On 28-1-2019 02:56, Will Dennis wrote:
I mean to use CephFS for this PoC; the initial use would be to back up an
existing ZFS server with ~43TB of data (I may have to limit the backed-up data
depending on how much capacity I can get out of the OSD servers) and then share
it out via NFS as a read-only
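For the read-only NFS share, one sketch is a kernel-NFS re-export of a CephFS mount via /etc/exports (the path, client network, and fsid below are placeholder assumptions; nfs-ganesha with its Ceph FSAL is the other common route):

```
# /etc/exports — re-export a CephFS kernel mount read-only (placeholders)
/mnt/cephfs/backups  10.0.0.0/24(ro,no_subtree_check,fsid=101)
```

An explicit fsid is needed here because CephFS has no stable device number for knfsd to derive one from.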
Thanks again!
Will
-----Original Message-----
From: Anthony D'Atri [mailto:a...@dreamsnake.net]
Sent: Sunday, January 27, 2019 6:32 PM
To: Will Dennis
Cc: ceph-users
Subject: Re: [ceph-users] Questions about using existing HW for PoC cluster
> Been reading "Learning Ceph - Second Edition"
An outstanding book, I must say ;)
> So can I get by with using a single SATA SSD (size?) per server for RocksDB /
> WAL if I'm using Bluestore?
Depends on the rest of your setup and use-case, but I think this would be a
bottleneck. Some
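As a back-of-the-envelope check on whether a single SSD can serve several OSDs' block.db, a common rule of thumb (an assumption here, not from this thread) is to reserve a low single-digit percentage of each data device; a quick shell sketch:

```shell
# Rough block.db sizing sketch: reserve ~4% of each data drive.
# The 4% figure and the device counts are rule-of-thumb assumptions.
data_gb=4000          # one 4 TB HDD, as an example
osds_per_ssd=4        # hypothetical: four OSDs sharing one SSD
db_gb=$(( data_gb * 4 / 100 ))
ssd_gb=$(( db_gb * osds_per_ssd ))
echo "per-OSD block.db: ${db_gb} GB"          # 160 GB
echo "SSD needed for ${osds_per_ssd} OSDs: ${ssd_gb} GB"   # 640 GB
```

Besides capacity, note that all OSDs sharing the SSD also share its write endurance and IOPS, which is where the bottleneck concern comes from.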
Been reading "Learning Ceph - Second Edition"
(https://learning.oreilly.com/library/view/learning-ceph-/9781787127913/8f98bac7-44d4-45dc-b672-447d162ea604.xhtml)
and in Ch. 4 I read this:
"We've noted that Ceph OSDs built with the new BlueStore back end do not
require journals. One might
Hi all,
Kind of new to Ceph (have been using 10.2.11 on a 3-node Proxmox 4.x cluster
[hyperconverged], works great!) and now I'm thinking of perhaps using it for a
bigger data storage project at work, a PoC at first, but built as correctly as
possible for performance and availability. I have