On 08/15/2018 06:15 PM, Robert Stanford wrote:
>
> The workload is relatively high read/write of objects through radosgw.
> Gbps+ in both directions. The OSDs are spinning disks, the journals (up
> until now filestore) are on SSDs. Four OSDs / journal disk.
>
RGW isn't always a heavy
On Wed, Aug 15, 2018 at 10:58 AM, Wido den Hollander wrote:
On 08/15/2018 05:57 PM, Robert Stanford wrote:
>
> Thank you Wido. I don't want to make any assumptions so let me verify,
> that's 10GB of DB per 1TB storage on that OSD alone, right? So if I
> have 4 OSDs sharing the same SSD journal, each 1TB, there are 4 10 GB DB
> partitions for each?
>
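The sizing being verified above works out as quick arithmetic. A minimal sketch (the 10 GB of DB per 1 TB rule of thumb and the four-OSDs-per-SSD layout come from the thread; the exact numbers are illustrative, not a recommendation):

```python
# Rough sanity check for BlueStore DB partitions sharing one SSD.
# Assumes ~10 GB of DB per 1 TB of OSD data, as discussed in this
# thread; device sizes and counts are taken from the example setup.

OSD_SIZE_TB = 1        # each spinning OSD holds 1 TB of data
DB_PER_TB_GB = 10      # rule of thumb: 10 GB of DB per 1 TB stored
NUM_OSDS_PER_SSD = 4   # four OSDs share the same SSD

db_partition_gb = OSD_SIZE_TB * DB_PER_TB_GB
total_db_gb = db_partition_gb * NUM_OSDS_PER_SSD

print(f"each DB partition: {db_partition_gb} GB")
print(f"SSD space used for DBs: {total_db_gb} GB")
```

So yes: four separate 10 GB DB partitions, 40 GB of the shared SSD in total, one partition per OSD.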
On Wed, Aug 15, 2018 at 1:59 AM, Wido den Hollander wrote:
On 08/15/2018 04:17 AM, Robert Stanford wrote:
> I am keeping the wal and db for a ceph cluster on an SSD. I am using
> the bluestore_block_db_size / bluestore_block_wal_size parameters
> in ceph.conf to specify how big they should be. Should these values
> be the same, or should one be much larger than the other?
R
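For reference, those options take a size in bytes in ceph.conf. A hedged sketch of what such a section might look like (the sizes are illustrative only, not a sizing recommendation):

```ini
[osd]
# DB partition size: rule-of-thumb ~10 GB per 1 TB of OSD data.
bluestore_block_db_size = 10737418240   ; 10 GiB, illustrative
# A separate WAL partition mainly helps when it sits on a faster
# device than the DB; if DB and WAL share the same SSD, the WAL can
# simply live inside the DB volume and this option can be omitted.
bluestore_block_wal_size = 1073741824   ; 1 GiB, illustrative
```

In short: no, they should not be the same; the WAL is typically far smaller than the DB, and the DB scales with the amount of data on the OSD.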