Paul, Bastiaan,

Thank you for your responses and for alleviating my concerns about Nautilus.  The good news is that I can still easily move up to Debian 10.  BTW, I assume that this is still with the 4.19 kernel?

Also, I'd like to inject some additional customizations into my Debian configs via ceph-ansible - certain sysctls, NTP servers, and some additional packages.  Is anybody doing that, and could you share any hints on where to configure it?  A rough sketch of what I have in mind is below.
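
For the sysctls and NTP side, I'm imagining something like the following in group_vars/all.yml - I'm guessing at the variable names from memory of the group_vars/all.yml.sample, so please correct me if these aren't the right knobs:

    # group_vars/all.yml (variable names assumed from the sample file)
    ntp_service_enabled: true
    ntp_daemon_type: chronyd          # or ntpd / timesyncd, I believe
    os_tuning_params:                 # applied as sysctls, as I understand the role
      - { name: fs.aio-max-nr, value: 1048576 }
      - { name: vm.min_free_kbytes, value: 4194303 }

For the extra packages I could presumably just drop a small custom play next to site.yml, e.g.:

    # extra-packages.yml - hypothetical helper play, not part of ceph-ansible itself
    - hosts: osds
      become: true
      tasks:
        - name: install extra packages
          apt:
            name: [smartmontools, sysstat]
            state: present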

Thanks.

-Dave

Dave Hall
Binghamton University
kdh...@binghamton.edu
607-760-2328 (Cell)
607-777-4641 (Office)


On 1/16/2020 2:30 PM, Paul Emmerich wrote:
Don't use Mimic; support for it is far worse than for Nautilus or Luminous. I think we were the only company that built a product around Mimic - both the Red Hat and SUSE enterprise storage products were based on Luminous and then Nautilus, skipping Mimic entirely.

We only offered Mimic as the default for a limited time and moved to Nautilus as soon as it became available, and Nautilus + Debian 10 has been great for us. Mimic on Debian 9 was... well, hacked together, due to the gcc backport issues. That's not to say that it doesn't work - in fact, Mimic (> 13.2.2) on Debian 9 worked perfectly fine for us.

Our Debian 10 + Nautilus packages are just so much better and more stable than Debian 9 + Mimic because we don't need to do weird things with Debian. Check the mailing list for my old posts from around the Mimic release to see how we did that build. It's not pretty, but it was the only way to run Ceph >= Mimic on Debian 9.
All that mess has been eliminated with Debian 10.

Paul

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


On Thu, Jan 16, 2020 at 6:55 PM Bastiaan Visser <basti...@msbv.nl> wrote:

    I would definitely go for Nautilus; quite a few optimizations went in
    after Mimic.

    BlueStore DB size usually ends up at either 30 or 60 GB.
    30 GB is one of the sweet spots during normal operation, but
    during compaction Ceph writes the new data before removing the
    old - hence the 60 GB.
    The next sweet spot is 300/600 GB; any capacity between 60 and
    300 GB will never actually be used.
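
    Roughly where those numbers come from - assuming RocksDB's default
    level sizing (a 256 MB first level, growing 10x per level), which is
    my understanding of the stock BlueStore settings:

        256 MB + 2.56 GB + 25.6 GB ~ 28 GB  -> the ~30 GB sweet spot
        plus the next 256 GB level ~ 284 GB -> the ~300 GB sweet spot

    Doubling each for compaction headroom gives the 60 and 600 GB figures.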

    DB usage also depends on how Ceph is used; object storage is known
    to use a lot more DB space than RBD images, for example.

    On Thu, Jan 16, 2020 at 17:46, Dave Hall <kdh...@binghamton.edu> wrote:

        Hello all.

        Sorry for the beginner questions...

        I am in the process of setting up a small (3 nodes, 288 TB)
        Ceph cluster to store some research data.  It is expected that
        this cluster will grow significantly in the next year,
        possibly to multiple petabytes and tens of nodes.  At this time
        I'm expecting a relatively small number of clients, with only
        one or two actively writing collected data - albeit at a high
        volume per day.

        Currently I'm deploying on Debian 9 via ceph-ansible.

        Before I put this cluster into production I have a couple of
        questions based on my experience to date:

        Luminous, Mimic, or Nautilus?  I need stability for this
        deployment, so I am sticking with Debian 9 since Debian 10 is
        fairly new, and I have been hesitant to go with Nautilus.  Yet
        Mimic seems to have had a hard road on Debian, except for the
        efforts at Croit.

          * Statements on the Releases page are now making more sense
            to me, but I would like to confirm: is Nautilus the right
            choice at this time?

        BlueStore DB size:  My nodes currently have 8 x 12 TB drives
        (plus 4 empty bays) and a PCIe NVMe drive.  If I understand
        the suggested calculation correctly, the DB size for a 12 TB
        BlueStore OSD would be 480 GB.  If my NVMe isn't big enough to
        provide this size, should I skip provisioning the DBs on the
        NVMe, or should I give each OSD 1/12th of what I have
        available?  Also, should I try to shift budget a bit to get
        more NVMe as soon as I can, and redo the OSDs when sufficient
        NVMe is available?
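
        (If I'm reading the docs right, that 480 GB comes from the
        suggested block.db size of roughly 4% of the OSD's capacity:
        0.04 x 12 TB = 480 GB.)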

        Thanks.

        -Dave

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
