[ceph-users] Re: Performance optimization

2021-09-07 Thread Robert Sander
On 07.09.21 11:49, Simon Sutter wrote: I never looked into RocksDB, because I thought writing data 24/7 does not benefit from caching. But this is metadata storage, so I might benefit from it. Due to a lack of SATA ports, is it possible to put all RocksDBs on one SSD? It should still be
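For reference, multiple OSDs can indeed share one SSD for their RocksDB/WAL. A minimal sketch with ceph-volume, assuming /dev/sda through /dev/sdd are the data HDDs and /dev/sde is the shared SSD (device names are placeholders):

    # Create four BlueStore OSDs; ceph-volume carves the SSD into
    # equally sized LVs and puts each OSD's RocksDB/WAL on one of them.
    ceph-volume lvm batch --bluestore \
        /dev/sda /dev/sdb /dev/sdc /dev/sdd \
        --db-devices /dev/sde

The usual caveat applies: if that one SSD fails, every OSD whose DB lives on it fails with it.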

[ceph-users] Re: Performance optimization

2021-09-07 Thread Simon Sutter
of writing it to the disk directly. Thanks, Simon From: Robert Sander Sent: Monday, 6 September 2021 16:48:52 To: Simon Sutter; Marc; ceph-users@ceph.io Subject: Re: [ceph-users] Re: Performance optimization On 06.09.21 16:44, Simon Sutter wrote: > |no

[ceph-users] Re: Performance optimization

2021-09-06 Thread Robert Sander
On 06.09.21 16:44, Simon Sutter wrote:
|node1|node2|node3|node4|node5|node6|node7|node8|
|1x1TB|1x1TB|1x1TB|1x1TB|1x1TB|1x1TB|1x1TB|1x1TB|
|4x2TB|4x2TB|4x2TB|4x2TB|4x2TB|4x2TB|4x2TB|4x2TB|
|1x6TB|1x6TB|1x6TB|1x6TB|1x6TB|1x6TB|1x6TB|1x6TB|
"ceph osd df tree" should show the data
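For anyone checking their own cluster, the command referenced here is:

    ceph osd df tree

Its %USE column shows per-OSD utilization, and VAR shows each OSD's utilization relative to the cluster average, so values far from 1.00 point to the kind of uneven data distribution being discussed in this thread.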

[ceph-users] Re: Performance optimization

2021-09-06 Thread Simon Sutter
use replicated uses so much more storage, it wasn't really an option until now. We didn't have any problems with CPU utilization and I can go to 32GB for every node, and 64GB for the MDS nodes. Thanks From: Marc Sent: Monday, 6 September 2021 13:53:06 To: Rob
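For context on the storage overhead being weighed here: with a replicated pool at size=3, 1 TB of payload consumes 3 TB of raw capacity, while an erasure-coded pool with k=4, m=2 consumes only 1.5 TB for the same payload and still tolerates two failures. A sketch of setting that up (profile and pool names are made up for the example):

    ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
    ceph osd pool create ecpool erasure ec42

With 8 nodes, k=4/m=2 and a host failure domain fits, since each object needs k+m = 6 distinct hosts.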

[ceph-users] Re: Performance optimization

2021-09-06 Thread Marc
>>> - The one 6TB disk, per node?
>>
>> You get bad distribution of data, why not move drives around between
>> these two clusters, so you have more of the same in each.
>
> I would assume that this behaves exactly the other way around. As long
> as you have the same number of block devices

[ceph-users] Re: Performance optimization

2021-09-06 Thread Robert Sander
On 06.09.21 11:54, Marc wrote:
>> - The one 6TB disk, per node?
>
> You get bad distribution of data, why not move drives around between
> these two clusters, so you have more of the same in each.

I would assume that this behaves exactly the other way around. As long as you have the same number
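A worked example of why the mixed sizes matter: CRUSH weights default to raw capacity, so in a node with 1x1TB + 4x2TB + 1x6TB the shares come out as

    node raw capacity:      1 + 4*2 + 6 = 15 TB
    share on the 6TB disk:  6 / 15      = 40%
    share on each 2TB disk: 2 / 15      = ~13%

so roughly 40% of that node's data, and a similar share of its I/O, lands on a single spindle. The largest (and here also OS-hosting) disk tends to become the bottleneck long before the small ones fill up.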

[ceph-users] Re: Performance optimization

2021-09-06 Thread Simon Sutter
do you think about having the OS on one of the disks used by Ceph? Thanks in advance, Simon From: Kai Börnert Sent: Monday, 6 September 2021 10:54:24 To: ceph-users@ceph.io Subject: [ceph-users] Re: Performance optimization Hi, are any of those old disks
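One common way this is done in practice is to partition the shared disk: a small partition for the OS, the rest handed to LVM for Ceph. A minimal sketch, with hypothetical partition names and sizes:

    # /dev/sde1 -> OS (e.g. 50 GB), /dev/sde2 -> LVM for BlueStore DB volumes
    pvcreate /dev/sde2
    vgcreate ceph-db /dev/sde2
    lvcreate -L 60G -n db-osd0 ceph-db

The trade-off is that OS I/O (logging, updates) now competes with Ceph on that disk, which is why an HDD is a poor candidate for the split.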

[ceph-users] Re: Performance optimization

2021-09-06 Thread Marc
> At the moment, the nodes look like this:
> 8 nodes
> Worst CPU: i7-3930K (up to i7-6850K)
> Worst amount of RAM: 24GB (up to 64GB)
> HDD layout:
> 1x 1TB
> 4x 2TB
> 1x 6TB
> all SATA, some just 5400rpm
>
> I had to put the OS on the 6TB HDDs, because there are no more SATA
> connections

[ceph-users] Re: Performance optimization

2021-09-06 Thread Kai Börnert
Hi, are any of those old disks SMR ones? Because they will absolutely destroy any kind of performance (Ceph does not use write caches due to power-loss concerns, so SMR drives have to do their whole magic for each write request). Greetings On 9/6/21 10:47 AM, Simon Sutter wrote: Hello everyone! I
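A quick way to check for SMR, assuming a reasonably recent kernel and util-linux (the device name is a placeholder):

    # Host-aware/host-managed zoned (SMR) drives report a zone model:
    lsblk -o NAME,SIZE,ZONED
    cat /sys/block/sda/queue/zoned    # prints "none" for conventional drives

Caveat: drive-managed SMR disks, common in cheap consumer models, also report "none" here; those can only be identified from the vendor's model number or datasheet.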