Hello,
What IO size are you testing? BlueStore will only defer writes under
32KB in size by default. Unless you are writing sequentially,
only a limited amount of buffering via SSD is going to help; you will
eventually hit the limits of the disk. Could you share some more
details, as I'm interested in your setup.
Yes and no... BlueStore does not seem to behave optimally here. For example,
it has no filestore-like journal watermarking and flushes the deferred
write queue only every 32 writes (deferred_batch_ops). And when it does
that, it basically waits for the HDD to commit, slowing down all
further writes.
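For reference, the two knobs mentioned above are tunable. A minimal ceph.conf sketch, assuming Luminous-era option names (verify with `ceph daemon osd.N config show` on your release before relying on them):

```ini
[osd]
# flush the deferred write queue after this many queued ops
bluestore_deferred_batch_ops = 32
# writes smaller than this (bytes) are deferred through the WAL on HDD OSDs
bluestore_prefer_deferred_size_hdd = 32768
```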
On 2/14/19 4:40 AM, John Petrini wrote:
Okay that makes more sense, I didn't realize the WAL functioned in a
similar manner to filestore journals (though now that I've had another read
of Sage's blog post, New in Luminous: BlueStore, I notice he does cover
this). Is this to say that writes are acknowledged as soon as they hit the
WAL?
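Roughly yes, for small writes. Here is a toy sketch of the idea (this is not Ceph code, just an illustration; the constants mirror the config option names discussed above):

```python
# Toy model of BlueStore's deferred-write path: small writes are
# acknowledged once they commit to the WAL on the fast device, and the
# data is replayed to the slow device later, in batches.

DEFERRED_BATCH_OPS = 32           # mirrors bluestore_deferred_batch_ops
PREFER_DEFERRED_SIZE = 32 * 1024  # writes under this go through the WAL

class ToyBlueStore:
    def __init__(self):
        self.wal = []          # pending deferred writes (on fast device)
        self.hdd_commits = 0   # how many times we waited on the HDD

    def write(self, size):
        if size < PREFER_DEFERRED_SIZE:
            self.wal.append(size)          # ack here: client sees commit
            if len(self.wal) >= DEFERRED_BATCH_OPS:
                self.flush()               # synchronous HDD commit; stalls
            return "acked-from-wal"
        self.hdd_commits += 1              # large writes go straight to disk
        return "acked-from-hdd"

    def flush(self):
        self.hdd_commits += 1
        self.wal.clear()

store = ToyBlueStore()
results = [store.write(4096) for _ in range(64)]
# 64 small writes -> every ack comes from the WAL, with 2 batched HDD commits
```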
Hello,
We'll soon be building out four new luminous clusters with Bluestore.
Our current clusters are running filestore so we're not very familiar
with Bluestore yet and I'd like to have an idea of what to expect.
Here are the OSD hardware specs (5x per cluster):
2x 3.0GHz 18c/36t
22x 1.8TB 10K
Anyone have any insight to offer here? Also I'm now curious to hear
about experiences with 512e vs 4kn drives.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Hi Martin,
Hardware has already been acquired and was spec'd to mostly match our
current clusters, which perform very well for us. I'm really just hoping to
hear from anyone who may have experience moving from filestore => bluestore
with an HDD cluster. Obviously we'll be doing testing, but it's always good
to hear real-world experiences first.
Sent: Saturday, February 2, 2019 2:19 AM
To: John Petrini
Cc: ceph-users
Subject: Re: [ceph-users] Bluestore HDD Cluster Advice
Hello John,
you don't need such a big CPU; save yourself some money with a 12c/24t part
and invest it in better / more disks. The same goes for memory: 128G would
be enough. Why do you install 4x 25G NICs? The hard disks won't be able to
use that capacity.
In addition, you can use the 2 disks for OSDs and not
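To put rough numbers on the NIC question, a back-of-the-envelope sketch (the ~150 MB/s per-disk figure is an assumption for sequential IO on a 10K HDD; real mixed OSD traffic will be far lower):

```python
# Aggregate HDD throughput vs. NIC capacity for the proposed OSD node.
hdd_count = 22
mb_per_hdd = 150                          # assumed MB/s per 10K HDD, sequential

total_mb_s = hdd_count * mb_per_hdd       # aggregate disk throughput, MB/s
total_gbit_s = total_mb_s * 8 / 1000      # convert to Gbit/s
nic_gbit_s = 4 * 25                       # proposed NIC capacity, Gbit/s

print(total_gbit_s, nic_gbit_s)           # ~26.4 Gbit/s of disk vs 100 Gbit/s of NIC
```

Even under this optimistic assumption the disks saturate barely a quarter of the proposed NIC capacity, which is the point being made above.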