The workload is a relatively heavy stream of object reads and writes through
radosgw, Gbps+ in both directions.  The OSDs are spinning disks; the journals
(filestore, until now) are on SSDs, with four OSDs per journal disk.
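
To make the arithmetic concrete for this setup: four 1TB OSDs per SSD means
roughly 4 x 10GB of DB plus 4 x 1GB of WAL, so about 44GB of the SSD spoken
for.  In ceph.conf terms I'm assuming something like the following (values
are in bytes and are only an illustration of that rule of thumb, not a
recommendation):

    [osd]
    # Illustration only: 1 GiB of WAL and 10 GiB of DB per OSD, i.e. the
    # 10GB-of-DB-per-1TB guideline applied to 1TB OSDs.
    bluestore_block_wal_size = 1073741824
    bluestore_block_db_size = 10737418240

If SSD space is left over, the plan is to grow the DB size rather than the
WAL, per the "if you can spare more DB space, do so" advice below.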

On Wed, Aug 15, 2018 at 10:58 AM, Wido den Hollander <w...@42on.com> wrote:

>
>
> On 08/15/2018 05:57 PM, Robert Stanford wrote:
> >
> >  Thank you Wido.  I don't want to make any assumptions, so let me
> > verify: that's 10GB of DB per 1TB of storage on that OSD alone, right?
> > So if I have 4 OSDs, each 1TB, sharing the same SSD journal, that SSD
> > holds four 10GB DB partitions, one for each OSD?
> >
>
> Yes, that is correct.
>
> Each OSD needs 10GB of DB per 1TB of its storage, so size your SSD
> according to your storage needs.
>
> However, whether you need to offload the WAL+DB to an SSD depends on the
> workload. What is the workload?
>
> Wido
>
> > On Wed, Aug 15, 2018 at 1:59 AM, Wido den Hollander <w...@42on.com> wrote:
> >
> >
> >
> >     On 08/15/2018 04:17 AM, Robert Stanford wrote:
> >     > I am keeping the wal and db for a ceph cluster on an SSD.  I am
> >     > using the bluestore_block_db_size / bluestore_block_wal_size
> >     > parameters in ceph.conf to specify how big they should be.  Should
> >     > these values be the same, or should one be much larger than the
> >     > other?
> >     >
> >
> >     This has been answered multiple times on this mailing list in recent
> >     months; a bit of searching would have helped.
> >
> >     Nevertheless, 1GB for the WAL is sufficient and then allocate about
> >     10GB of DB per TB of storage. That should be enough in most use cases.
> >
> >     Now, if you can spare more DB space, do so!
> >
> >     Wido
> >
> >     >  R
> >     >
> >     >
> >
> >
>
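
For the migration itself, the rough plan on this end is one small SSD
partition per OSD for the DB (as I understand it, the WAL is co-located on
the DB device when no separate WAL device is given), created with something
like the sketch below.  Device names are hypothetical:

    # Hypothetical devices: /dev/sdb..sde are the spinners, /dev/nvme0n1p1-4
    # are the DB partitions carved out of the shared SSD.
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
    ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme0n1p2
    ceph-volume lvm create --bluestore --data /dev/sdd --block.db /dev/nvme0n1p3
    ceph-volume lvm create --bluestore --data /dev/sde --block.db /dev/nvme0n1p4

Happy to be corrected if that is the wrong way to carve it up.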
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
