The problem with caching is that if the performance delta between the
two storage types isn't large enough, the cost of the caching
algorithms and the complexity of managing everything outweigh the
performance gains.
With Optanes vs. SSDs, the main thing to consider is how busy the
devices are; lock contention plays a part here.
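As a back-of-envelope illustration (the latencies here are assumptions
for the sake of argument, not measurements): if a flash read costs
~80us and an Optane read ~10us, a cache hit saves at most ~70us. If
the caching layer's lookup and bookkeeping cost ~20us per I/O, a 50%
hit rate nets 0.5 * 70 - 20 = 15us saved on average. That is a thin
margin; against a spinning disk at ~10ms per read, the same cache
would look heroic.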
>
> Mark
> On 8/6/19 11:41 AM, Mark Lehrer wrote:
> > I have a few more cycles this week to dedicate to the problem of
> > making OSDs do more than maybe 5 simultaneous operations (as measured
> > by the iostat effective queue depth).
RBDs in the same pool.
Thanks,
Mark
On Sat, May 11, 2019 at 5:50 AM Maged Mokhtar wrote:
> On 10/05/2019 19:54, Mark Lehrer wrote:
> > I'm setting up a new Ceph cluster with fast SSD drives, and there is
> > one problem I want to make sure to address straight away:
> > comically-low OSD queue depths.
I have had good luck with YCSB as an initial assessment of different
storage systems. Typically I'll use this first when I am playing with
a new system, but I like to switch to the more native tools (rados
bench, cassandra-stress, etc.) as soon as I am more comfortable.
And I can definitely
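For reference, a first pass with those tools tends to look something
like this (the binding, pool, and workload names below are just
placeholders):

  bin/ycsb load redis -s -P workloads/workloada
  bin/ycsb run redis -s -P workloads/workloada

  rados bench -p testpool 60 write -b 4096 -t 16 --no-cleanup
  rados bench -p testpool 60 rand -t 16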
My main question is this - is there a way to stop any replay or
journaling during OSD startup and bring up the pool/fs in read-only
mode?
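A minimal sketch of the closest knobs I'm aware of (the monitor host
and keyring path below are placeholders, and none of these steps skips
BlueStore's internal replay):

  ceph osd set noout   # keep CRUSH from rebalancing while OSDs bounce
  ceph osd set pause   # blocks client reads AND writes cluster-wide
  mount -t ceph mon1:6789:/ /mnt/cephfs -o ro,name=admin,secretfile=/etc/ceph/admin.secret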
Here is a description of what I'm seeing. I have a Luminous cluster
with CephFS and 16 8TB SSDs, using size=3.
I had a problem with one of my SAS
> but only 20MB/s write and 95MB/s read with 4KB objects.
There is copy-on-write overhead for each block, so 4K performance is
going to be limited no matter what.
However, if your system is like mine, the main problem you will run
into is that Ceph was designed for spinning disks. Therefore, its
defaults keep OSD queue depths comically low.
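For anyone trying to reproduce numbers like that, here is a sketch
using fio's rbd engine (pool and image names are placeholders, and it
assumes fio was built with rbd support):

  fio --name=4ktest --ioengine=rbd --clientname=admin --pool=rbd \
      --rbdname=testimg --rw=randwrite --bs=4k --iodepth=32 \
      --runtime=60 --time_based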
> Steps 3-6 are to get the drive lvm volume back
How much longer will we have to deal with LVM? If we can migrate non-LVM
drives from earlier versions, how about we give ceph-volume the ability to
create non-LVM OSDs directly?
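For context (device paths below are placeholders): the migration path
that exists today is "ceph-volume simple", which can scan and adopt
pre-LVM ceph-disk OSDs; creating new OSDs still goes through the lvm
subcommand:

  ceph-volume simple scan /dev/sdb1
  ceph-volume simple activate --all
  ceph-volume lvm create --bluestore --data /dev/sdc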
On Thu, May 16, 2019 at 1:20 PM Tarek Zegar wrote:
> FYI for
I'm setting up a new Ceph cluster with fast SSD drives, and there is
one problem I want to make sure to address straight away:
comically-low OSD queue depths.
On the past several clusters I built, there was one major performance
problem that I never had time to really solve, which is this: no
matter how much client parallelism I generate, the OSDs never seem to
do more than a handful of simultaneous operations.
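For the record, the symptom is easy to see in iostat (the relevant
column is avgqu-sz in older sysstat releases, aqu-sz in newer ones):

  iostat -x 1

Even under heavy client load, the effective queue depth on each OSD
device sits in the single digits instead of anywhere near what the
SSDs can sustain.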