Re: [ceph-users] Hardware selection for ceph backup on ceph

2020-01-14 Thread Wido den Hollander


On 1/10/20 5:32 PM, Stefan Priebe - Profihost AG wrote:
> Hi,
> 
> we're currently in the process of building a new ceph cluster to back up rbd
> images from multiple ceph clusters.
> 
> We would like to start with just a single ceph cluster to back up, which is
> about 50TB. The compression ratio of the data is around 30% when using zlib.
> We need to scale the backup cluster up to 1PB.
> 
> The workload on the original rbd images is mostly 4K writes so I expect rbd 
> export-diff to do a lot of small writes.
> 
> The current idea is to use the following hw as a start:
> 6 servers, each with:
> 1x AMD EPYC 7302P 3GHz, 16C/32T
> 128GB memory
> 14x 12TB Toshiba Enterprise MG07ACA HDD drives, 4K native
> Dual 25Gb network
> 

That should be sufficient. The AMD Epyc is a great CPU and you have
enough memory.
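On memory: with BlueStore's default osd_memory_target of 4GB, 14 OSDs want roughly 56GB, so 128GB leaves comfortable headroom. For capacity, a back-of-the-envelope sketch for the proposed hardware (the EC 4+2 profile and the 3x replication factor are assumptions for comparison, not specified in the thread):

```shell
# Raw and usable capacity for 6 servers x 14 drives x 12TB.
# EC profile (k=4, m=2) and replication factor (3) are assumed, not from the thread.
servers=6; drives_per_server=14; tb_per_drive=12
raw=$((servers * drives_per_server * tb_per_drive))   # 1008 TB raw
ec_usable=$((raw * 4 / 6))                            # 672 TB usable with EC 4+2
rep3_usable=$((raw / 3))                              # 336 TB usable with 3x replication
echo "raw=${raw}TB ec42=${ec_usable}TB rep3=${rep3_usable}TB"
```

Either way, reaching 1PB usable takes several expansion steps: with EC 4+2 roughly 9 servers' worth of these disks, with 3x replication about 18, and these figures ignore the usual nearfull/full headroom.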

> Does it fit? Does anybody have experience with the drives? Can we use EC, or
> do we need normal replication?
> 

EC will just work. It won't be the fastest, but since it's only a
backup system it should work out.
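With RBD on EC, the image data can live in the EC pool while the image metadata stays in a small replicated pool. A minimal sketch, assuming a 4+2 profile, hypothetical pool names, and BlueStore OSDs (RBD on EC requires EC overwrites):

```shell
# Assumed names: profile "backup-ec", pools "backup-data" (EC) and "backup-meta" (replicated).
ceph osd erasure-code-profile set backup-ec k=4 m=2 crush-failure-domain=host
ceph osd pool create backup-data 1024 1024 erasure backup-ec
ceph osd pool set backup-data allow_ec_overwrites true   # required for RBD on EC
ceph osd pool create backup-meta 64 64 replicated
rbd pool init backup-meta
# Image metadata in the replicated pool, data objects in the EC pool:
rbd create --size 50T --data-pool backup-data backup-meta/cluster1-backup
```

Note that with exactly 6 hosts, a 4+2 profile with a host failure domain leaves no spare host to recover onto after a host failure, which is one argument for adding servers.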

Oh, and more servers are always better.

Wido

> Greets,
> Stefan
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Hardware selection for ceph backup on ceph

2020-01-12 Thread Martin Verges
Hello Stefan,

> AMD EPYC

Great choice!

> Does anybody have experience with the drives?

Some of our customers have various Toshiba MG06SCA drives, and according to
them they work great. I can't speak for the MG07ACA but, to be honest, I don't
think there should be a huge difference.

--
Martin Verges
Managing director

Hint: Secure one of the last slots in the upcoming 4-day Ceph Intensive
Training at https://croit.io/training/4-days-ceph-in-depth-training.

Mobile: +49 174 9335695
E-Mail: martin.ver...@croit.io
Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263

Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx


On Fri, Jan 10, 2020 at 17:32, Stefan Priebe - Profihost AG <
s.pri...@profihost.ag> wrote:

> Hi,
>
> we're currently in the process of building a new ceph cluster to back up
> rbd images from multiple ceph clusters.
>
> We would like to start with just a single ceph cluster to back up, which is
> about 50TB. The compression ratio of the data is around 30% when using zlib.
> We need to scale the backup cluster up to 1PB.
>
> The workload on the original rbd images is mostly 4K writes so I expect
> rbd export-diff to do a lot of small writes.
>
> The current idea is to use the following hw as a start:
> 6 servers, each with:
> 1x AMD EPYC 7302P 3GHz, 16C/32T
> 128GB memory
> 14x 12TB Toshiba Enterprise MG07ACA HDD drives, 4K native
> Dual 25Gb network
>
> Does it fit? Does anybody have experience with the drives? Can we use EC, or
> do we need normal replication?
>
> Greets,
> Stefan


[ceph-users] Hardware selection for ceph backup on ceph

2020-01-10 Thread Stefan Priebe - Profihost AG
Hi,

we're currently in the process of building a new ceph cluster to back up rbd
images from multiple ceph clusters.

We would like to start with just a single ceph cluster to back up, which is
about 50TB. The compression ratio of the data is around 30% when using zlib.
We need to scale the backup cluster up to 1PB.
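Given that zlib ratio, compression could also be enabled on the backup pool itself, so the data is compressed at rest by BlueStore. A sketch, with the pool name being a hypothetical placeholder:

```shell
# Per-pool BlueStore compression (pool name "backup-data" is an assumption):
ceph osd pool set backup-data compression_algorithm zlib
ceph osd pool set backup-data compression_mode aggressive
# Or cluster-wide for all BlueStore OSDs:
ceph config set osd bluestore_compression_algorithm zlib
ceph config set osd bluestore_compression_mode aggressive
```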

The workload on the original rbd images is mostly 4K writes so I expect rbd 
export-diff to do a lot of small writes.
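The export-diff workflow described here can be sketched as a snapshot-based pipeline between the two clusters (image, snapshot, size, and cluster names below are hypothetical):

```shell
# Initial round: the destination image must exist before import-diff;
# import-diff then replays the changes and creates the matching snapshot.
rbd snap create rbd/vm-disk@backup-1
rbd create --size 100G backup/vm-disk --cluster backup
rbd export-diff rbd/vm-disk@backup-1 - | rbd import-diff - backup/vm-disk --cluster backup

# Subsequent rounds: only blocks changed since the previous snapshot travel.
rbd snap create rbd/vm-disk@backup-2
rbd export-diff --from-snap backup-1 rbd/vm-disk@backup-2 - \
  | rbd import-diff - backup/vm-disk --cluster backup
```

Since the source workload is mostly 4K writes, the diffs consist of many small extents, so the small-write pattern carries over to the backup cluster.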

The current idea is to use the following hw as a start:
6 servers, each with:
1x AMD EPYC 7302P 3GHz, 16C/32T
128GB memory
14x 12TB Toshiba Enterprise MG07ACA HDD drives, 4K native
Dual 25Gb network

Does it fit? Does anybody have experience with the drives? Can we use EC, or
do we need normal replication?

Greets,
Stefan