Hi Martin,

Even before adding cold storage on HDD, the cluster was SSD only, and it
could not keep up with deleting the files either.
I am nowhere near I/O exhaustion on the SSDs or even the HDDs.

Cheers,
Christian

On Oct 2 2019, at 1:23 pm, Martin Verges <martin.ver...@croit.io> wrote:
> Hello Christian,
>
> the problem is that HDDs are not capable of providing the large number of
> IOs required for "~4 million small files".
>
> --
> Martin Verges
> Managing director
>
> Mobile: +49 174 9335695
> E-Mail: martin.ver...@croit.io (mailto:martin.ver...@croit.io)
> Chat: https://t.me/MartinVerges
>
> croit GmbH, Freseniusstr. 31h, 81247 Munich
> CEO: Martin Verges - VAT-ID: DE310638492
> Com. register: Amtsgericht Munich HRB 231263
>
> Web: https://croit.io
> YouTube: https://goo.gl/PGE1Bx
>
>
>
>
> On Wed, Oct 2, 2019 at 11:56 AM Christian Pedersen
> <chrip...@gmail.com (mailto:chrip...@gmail.com)> wrote:
> > Hi,
> >
> > Using the S3 gateway, I store ~4 million small files in my cluster every
> > day. I have a lifecycle rule set up to move these files to cold storage
> > after a day and delete them after two days.
> > The default storage is SSD-based and the cold storage is HDD.
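> > For reference, the rule is a standard S3 lifecycle configuration; a
> > minimal sketch of it via boto3 is below, where the endpoint, credentials,
> > bucket name and the "COLD" storage-class name are placeholders rather
> > than my real values:
> >
> >     import boto3
> >
> >     # Point boto3 at the RGW S3 endpoint (placeholder URL and credentials).
> >     s3 = boto3.client(
> >         "s3",
> >         endpoint_url="http://rgw.example.com:7480",
> >         aws_access_key_id="ACCESS_KEY",
> >         aws_secret_access_key="SECRET_KEY",
> >     )
> >
> >     s3.put_bucket_lifecycle_configuration(
> >         Bucket="my-bucket",
> >         LifecycleConfiguration={
> >             "Rules": [
> >                 {
> >                     "ID": "to-cold-then-delete",
> >                     "Filter": {"Prefix": ""},
> >                     "Status": "Enabled",
> >                     # move to the cold (HDD-backed) storage class after 1 day
> >                     "Transitions": [{"Days": 1, "StorageClass": "COLD"}],
> >                     # delete after 2 days
> >                     "Expiration": {"Days": 2},
> >                 }
> >             ]
> >         },
> >     )
> >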
> > However, the rgw lifecycle process cannot keep up with this: in a 24-hour
> > period, a little less than a million files are moved (
> > https://imgur.com/a/H52hD2h ). I have tried enabling only the delete part
> > of the lifecycle, but even though it then deletes directly from SSD
> > storage, the result is the same. The screenshots were taken while there
> > were no incoming files to the cluster.
> > I'm running 5 rgw servers, but that doesn't really change anything
> > compared to when I was running fewer. I've tried adjusting rgw lc max
> > objs, but again no change in performance.
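> > For reference, that tuning is a ceph.conf change along these lines (the
> > section name and the value shown are placeholders, not my exact settings;
> > as far as I understand it, this option sets the number of lifecycle index
> > shards):
> >
> >     [client.rgw.gateway1]
> >     # number of lifecycle index shards (default 32)
> >     rgw lc max objs = 128
> >     # the daily window in which lifecycle processing runs is a separate
> >     # setting, left at its default here:
> >     # rgw lifecycle work time = 00:00-06:00
> >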
> > Any suggestions on how I can tune the lifecycle process?
> > Cheers,
> > Christian
> >
>
>
>

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
