Hi Christian and Wido,

I was on the daily digest and could not find a way to reply without
heavily editing the quoted replies. I will switch my subscription to
individual messages later.

> How big is the disk? RocksDB will need to compact at some point and it
> seems that the HDD can't keep up.
> I've seen this with many customers and in those cases we offloaded the
> WAL+DB to an SSD.
> How big is the data drive and the DB?
> Wido


The disks are 6TB each. Each data drive is around 50% used, and the
DB varies from 40 to 67GB.
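
For reference, those DB numbers come from the bluefs perf counters; a
quick way to check a running OSD is something like the following
(osd.2 is just an example id):

    ceph daemon osd.2 perf dump | grep -E '"db_(total|used)_bytes"'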

Splitting the WAL+DB out to an SSD is not an option at this time because
rebuilding the OSDs one by one would take forever.
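
(For the archives: the rebuild route means re-creating each OSD with
ceph-volume and waiting for backfill, roughly as sketched below; the
device paths are examples, and the OSD has to be drained or destroyed
first.)

    ceph-volume lvm zap /dev/sdb --destroy
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1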


It's ceph-bluestore-tool.

Is there any official documentation on how to migrate the WAL+DB to an
SSD online? I guess this feature has not been backported to Luminous,
right?
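
From what I can tell, the Nautilus docs describe this as an offline step
(the OSD must be stopped, but no rebuild or backfill is needed), roughly:

    # with osd.2 stopped (example id and devices):
    ceph-bluestore-tool bluefs-bdev-new-db \
        --path /var/lib/ceph/osd/ceph-2 --dev-target /dev/nvme0n1p1
    ceph-bluestore-tool bluefs-bdev-migrate \
        --path /var/lib/ceph/osd/ceph-2 \
        --devs-source /var/lib/ceph/osd/ceph-2/block \
        --dev-target /var/lib/ceph/osd/ceph-2/block.db

I don't know whether these subcommands made it into any Luminous point
release, so please treat the above as a sketch to verify against your
release first.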


Kind regards,

Charles Alva
Sent from Gmail Mobile


On Fri, Apr 12, 2019 at 10:24 AM Christian Balzer <ch...@gol.com> wrote:

>
> Hello Charles,
>
> On Wed, 10 Apr 2019 14:07:58 +0700 Charles Alva wrote:
>
> > Hi Ceph Users,
> >
> > Is there a workaround to minimize RocksDB compaction events so that
> > they won't saturate the spinning disks' IO, and to avoid the OSD being
> > marked down because it failed to send heartbeats to its peers?
> >
> > Right now we see high disk IO utilization every 20-25 minutes, when
> > RocksDB reaches level 4 with 67GB of data to compact.
> >
> >
> Could you please follow up on the questions Wido asked?
>
> As in: sizes of the disks and DB, number and size of objects (I think
> you're using the object store), how busy those disks and CPUs are, etc.
>
> That kind of information will be invaluable for others here and likely the
> developers as well.
>
> Regards,
>
> Christian
>
> > Kind regards,
> >
> > Charles Alva
> > Sent from Gmail Mobile
>
>
> --
> Christian Balzer        Network/Systems Engineer
> ch...@gol.com           Rakuten Communications
>
