On 11/03/2017 04:08 AM, Jorge Pinilla López wrote:
> well I haven't found any recommendation either, but I think that
> sometimes the SSD space is being wasted.
If someone wanted to write it, you could have bluefs share some of the
space on the drive for hot object data and release space as needed.
well I haven't found any recommendation either, but I think that
sometimes the SSD space is being wasted.
I was thinking about making an OSD from the rest of my SSD space, but it
wouldn't scale in case more speed is needed.
Another option I asked about was to use bcache, or a mix between bcache
and a small DB
On 3 November 2017 at 07:45, Martin Overgaard Hansen wrote:
> I want to bring this subject back in the light and hope someone can provide
> insight regarding the issue, thanks.
Thanks Martin, I was going to do the same.
Is it possible to make the DB partition (on the fastest device) too
big? in
Hi, it seems like I’m in the same boat as everyone else in this particular
thread.
I’m also unable to find any guidelines or recommendations regarding sizing of
the WAL and/or DB.
I want to bring this subject back in the light and hope someone can provide
insight regarding the issue, thanks.
Hi,
I'm about to change some SATA SSD disks to NVMe disks, and for Ceph I too
would like to know how to assign space. I have 3 1 TB SATA OSDs, so I'll
split the NVMe disks into 3 partitions of equal size. I'm not going to
assign a separate WAL partition because, if the docs are right, the WAL
is a
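The even three-way split described above works out as follows (a sketch; the 400 GB device size is an invented example, and the actual partitioning would be done with a tool such as sgdisk or parted on the real device):

```shell
# Sketch: dividing one NVMe device evenly across 3 OSDs' block.db partitions.
# The device size below is an invented example, not a recommendation.
nvme_bytes=$((400 * 1000 * 1000 * 1000))   # e.g. a 400 GB NVMe
osds=3
per_osd=$((nvme_bytes / osds))
echo "block.db per OSD: $((per_osd / 1000 / 1000 / 1000)) GB"
```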
Hello,
Here are my results.
In this node, I have 3 OSDs (1 TB HDD each); osd.1 and osd.2 have block.db on
SSD partitions of 90 GB each, osd.8 has no separate block.db.
pve-hs-main[0]:~$ for i in {1,2,8} ; do echo -n "osd.$i db per object: " ; expr
`ceph daemon osd.$i perf dump | jq '.bluefs.db_used_byte
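The truncated one-liner above can be approximated as follows (a sketch; the counter names `bluefs.db_used_bytes` and `bluestore.bluestore_onodes` are assumptions based on Luminous-era perf counters, and the JSON below is an invented sample standing in for real `ceph daemon osd.<id> perf dump` output):

```shell
# Invented sample perf dump excerpt; in practice this JSON would come from:
#   ceph daemon osd.<id> perf dump
cat > perf.json <<'EOF'
{"bluefs": {"db_used_bytes": 1073741824}, "bluestore": {"bluestore_onodes": 262144}}
EOF
# Extract DB bytes used and object (onode) count, then divide.
db_used=$(python3 -c "import json; print(json.load(open('perf.json'))['bluefs']['db_used_bytes'])")
onodes=$(python3 -c "import json; print(json.load(open('perf.json'))['bluestore']['bluestore_onodes'])")
echo "db bytes per object: $((db_used / onodes))"
```

On a live OSD the same arithmetic can be piped through jq, as in the original one-liner.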
On 09/26/2017 01:10 AM, Dietmar Rieder wrote:
> thanks David,
> that's confirming what I was assuming. Too bad that there is no
> estimate/method to calculate the db partition size.
It's possible that we might be able to get ranges for certain kinds of
scenarios. Maybe if you do lots of small ran
thanks David,
that's confirming what I was assuming. Too bad that there is no
estimate/method to calculate the db partition size.
Dietmar
On 26 September 2017 at 08:11, Mark Nelson wrote:
> The WAL should never grow larger than the size of the buffers you've
> specified. It's the DB that can grow and is difficult to estimate both
> because different workloads will cause different numbers of extents and
> objects, but also because r
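Mark's point that the WAL is bounded by the buffer settings can be turned into a rough upper bound (a sketch; the 256 MiB buffer size and 4-buffer count are assumed RocksDB defaults from `bluestore_rocksdb_options` and may differ on your release):

```shell
# Rough WAL upper bound from the RocksDB memtable settings.
# Values are illustrative assumptions, not guaranteed defaults.
write_buffer_size=$((256 * 1024 * 1024))   # assumed write_buffer_size
max_write_buffer_number=4                  # assumed max_write_buffer_number
wal_bytes=$((write_buffer_size * max_write_buffer_number))
echo "WAL upper bound: $((wal_bytes / 1024 / 1024)) MiB"
```

This is why a WAL partition of a few GB is usually considered plenty, while the DB has no such natural cap.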
db/wal partitions are per OSD. DB partitions need to be made as big as you
need them. If they run out of space, they will fall back to the block
device. If the DB and block are on the same device, then there's no reason
to partition them and figure out the best size. If they are on separate
dev
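When ceph-disk carves out the block.db / block.wal partitions itself, their sizes can be set in ceph.conf (a sketch; the option names are the Luminous-era `bluestore_block_db_size` and `bluestore_block_wal_size`, and the values below are only illustrative, not recommendations):

```ini
[global]
# Partition sizes ceph-disk uses when creating block.db / block.wal itself.
# Values are illustrative examples only.
bluestore_block_db_size = 32212254720    ; 30 GiB
bluestore_block_wal_size = 2147483648    ; 2 GiB
```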
Hi,
To my understanding, the BlueStore write workflow is:
For a normal big write:
1. Write data to block
2. Update metadata in RocksDB
3. RocksDB writes to memory and block.wal
4. Once a threshold is reached, flush entries in block.wal to block.db
For overwrite and small write:
1. Write data and metadata to ro
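The big-vs-small decision in the workflow above can be sketched as a size threshold (a deliberately simplified model; the real decision in BlueStore also involves allocation alignment, and the 32 KiB figure is an assumed `bluestore_prefer_deferred_size_hdd` default):

```shell
# Simplified model of BlueStore's write-path choice. The threshold value
# is an assumption; real BlueStore logic is more involved.
prefer_deferred_size=32768
write_path() {
  if [ "$1" -ge "$prefer_deferred_size" ]; then
    echo "direct"     # big write: data straight to block, metadata to RocksDB
  else
    echo "deferred"   # small write: data journaled via the RocksDB WAL first
  fi
}
write_path $((4 * 1024 * 1024))   # a large write
write_path 4096                   # a small overwrite
```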
I asked the same question a couple of weeks ago. No response I got
contradicted the documentation but nobody actively confirmed the
documentation was correct on this subject, either; my end state was that
I was relatively confident I wasn't making some horrible mistake by
simply specifying a bi
Some of this thread seems to contradict the documentation and confuses
me. Is the statement below correct?
"The BlueStore journal will always be placed on the fastest device
available, so using a DB device will provide the same benefit that the
WAL device would while also allowing additional meta
Hi,
I'm still looking for the answer of these questions. Maybe someone can
share their thought on these. Any comment will be helpful too.
Best regards,
On Sat, Sep 16, 2017 at 1:39 AM, Lazuardi Nasution wrote:
Hi,
1. Is it possible to configure osd_data not as a small partition on the OSD
but as a folder (e.g. on the root disk)? If yes, how to do that with
ceph-disk, and any pros/cons of doing that?
2. Is the WAL & DB size calculated based on OSD size or on expected
throughput, like the journal device of FileStore? If no, wha