[ceph-users] Re: Consequences of setting bluestore_fsck_quick_fix_on_mount to false?

2021-02-16 Thread Dan van der Ster
Hi Matthew, Which version are you upgrading from? If recent nautilus, you may have already completed this conversion. When we did this fsck (not with octopus, but to a nautilus point release that had this conversion backported), we first upgraded one single osd just to see the typical downtime fo
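
A minimal sketch of how this conversion can be controlled, assuming a release that supports the centralized config store and the bluestore_fsck_quick_fix_on_mount option (osd.123 is a placeholder ID):

    # keep OSDs from running the omap conversion automatically on restart
    ceph config set osd bluestore_fsck_quick_fix_on_mount false
    # later, opt a single OSD back in to measure the typical conversion downtime
    ceph config set osd.123 bluestore_fsck_quick_fix_on_mount true
    systemctl restart ceph-osd@123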

[ceph-users] Re: Small RGW objects and RADOS 64KB minimum size

2021-02-16 Thread Loïc Dachary
Hi Josh :-) Thanks for the update: this is great news and I look forward to using this once Pacific is released. Cheers On 16/02/2021 00:43, Josh Durgin wrote: > Hello Loic! > > We have developed a strategy in pacific - reducing the min_alloc_size for HDD > to 4KB by default. > > Igor Fedotov
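
A hedged sketch of the corresponding setting; note that bluestore_min_alloc_size_hdd only takes effect for OSDs created after the change, so existing OSDs keep their 64KB allocation unit until redeployed:

    ceph config get osd bluestore_min_alloc_size_hdd   # typically 65536 before Pacific
    ceph config set osd bluestore_min_alloc_size_hdd 4096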

[ceph-users] Re: can't remove osd service by "ceph orch rm "

2021-02-16 Thread Juan Miguel Olmo Martinez
Hi Tony. Take a look at: https://docs.ceph.com/en/latest/mgr/orchestrator/#remove-an-osd -- Juan Miguel Olmo Martínez, Senior Software Engineer, Red Hat, jolmo...@redhat.com
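
For reference, a hedged sketch of the commands involved on a cephadm-managed cluster (the OSD id and spec name are placeholders):

    ceph orch osd rm 3            # drain and remove a single OSD
    ceph orch osd rm status       # follow the draining progress
    ceph orch ls osd              # list OSD service specs
    ceph orch rm osd.my_spec      # remove an OSD service spec by name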

[ceph-users] best use of NVMe drives

2021-02-16 Thread Magnus HAGDORN
Hi there, we are in the process of growing our Nautilus ceph cluster. Currently we have 6 nodes: 3 nodes with 2×5.5TB and 6×11TB disks plus 8×186GB SSDs, and 3 nodes with 6×5.5TB and 6×7.5TB disks. All with dual link 10GE NICs. The SSDs are used for the CephFS metadata pool, the hard drives are used for

[ceph-users] rbd move between pools

2021-02-16 Thread Marc
What is the best way to move an rbd image to a different pool? I want to move some 'old' images (some have snapshots) to a backup pool. For some there is also a difference in device class.
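
One option, sketched below on the assumption of a Nautilus or newer cluster, is rbd live migration, which should carry snapshots across; pool and image names are placeholders, and the image should not be in active use while preparing:

    rbd migration prepare rbd/old-image backup/old-image
    rbd migration execute backup/old-image
    rbd migration commit backup/old-image

Moving the image into a pool whose CRUSH rule targets a different device class also covers the device-class difference.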

[ceph-users] POC Hardware questions

2021-02-16 Thread Oliver Weinmann
Dear All, A question that probably has been asked by many other users before: I want to do a POC. For the POC I can use old decommissioned hardware. Currently I have 3 x IBM X3550 M5 with: 1 dual-port 10G NIC, Intel(R) Xeon(R) CPU E5-2637 v3 @ 3.50GHz, 64GB RAM; the other two have a slower CPU

[ceph-users] Re: struggling to achieve high bandwidth on Ceph dev cluster - HELP

2021-02-16 Thread Bobby
@Marc: thanks a lot, your results have been helpful to understand. @Mark: mainly HDDs, not even one SSD, so yes, pretty slow. On Wed, Feb 10, 2021 at 9:22 PM Marc wrote: > > Some more questions please: > > How many OSDs have you been using in your second email tests for 1gbit > > [1] > >

[ceph-users] Re: POC Hardware questions

2021-02-16 Thread Stefan Kooman
On 2/16/21 9:01 AM, Oliver Weinmann wrote: Dear All, A question that probably has been asked by many other users before: I want to do a POC. For the POC I can use old decommissioned hardware. Currently I have 3 x IBM X3550 M5 with: 1 dual-port 10G NIC, Intel(R) Xeon(R) CPU E5-2637 v3 @ 3.50

[ceph-users] Upgrading Ceph luminous to mimic on debian-buster

2021-02-16 Thread Jean-Marc FONTANA
Hello everyone, We just installed a Ceph cluster version luminous (12.2.11) on servers running Debian buster (10.8) using ceph-deploy and we are trying to upgrade it to mimic but can't find a way to do it. We tried ceph-deploy install --release mimic mon1 mon2 mon3 (after having modifie

[ceph-users] osds processes shutdown during outage

2021-02-16 Thread Marcel Kuiper
Hi, (sorry if this gets posted twice. I forgot a subject in the first mail) We experienced an outage this morning on a jewel cluster with 1559 osds. It appeared that a switch uplink in a rack misbehaved, and once that interface was shut down ceph health recovered quickly. I have some questions though

[ceph-users] Re: Upgrading Ceph luminous to mimic on debian-buster

2021-02-16 Thread Martin Verges
Hello, you can migrate to nautilus and skip the outdated mimic. Save yourself the trouble of mimic; it's not worth it. You can find packages on debian-backports (https://packages.debian.org/buster-backports/ceph) or on the croit debian mirror. -- Martin Verges Managing director Mobile: +49 174 9335695 E-Ma
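
A hedged sketch of pulling Nautilus from buster-backports (package versions and availability in backports may vary):

    echo "deb http://deb.debian.org/debian buster-backports main" \
        > /etc/apt/sources.list.d/backports.list
    apt update
    apt -t buster-backports install ceph ceph-mds radosgw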

[ceph-users] U of Minn

2021-02-16 Thread Chip Cox
Is this your Graham? > On Feb 14, 2021, at 4:31 PM, Graham Allan wrote: > > On Tue, Feb 9, 2021 at 11:00 AM Matthew Vernon wrote: > >> On 07/02/2021 22:19, Marc wrote: >>> >>> I was wondering if someone could post a config for haproxy. Is there >> something specific to configure? Like bindi

[ceph-users] SUSE POC - Dead in the water

2021-02-16 Thread Schweiss, Chip
For the past several months I had been building a sizable Ceph cluster that will be up to 10PB with between 20 and 40 OSD servers this year. A few weeks ago I was informed that SUSE is shutting down SES and will no longer be selling it. We haven't licensed our proof of concept cluster that is cur

[ceph-users] Re: SUSE POC - Dead in the water

2021-02-16 Thread Adam Boyhan
These guys are great: https://croit.io/ From: "Schweiss, Chip" To: "ceph-users" Sent: Tuesday, February 16, 2021 9:42:24 AM Subject: [ceph-users] SUSE POC - Dead in the water For the past several months I had been building a sizable Ceph cluster that will be up t

[ceph-users] Re: SUSE POC - Dead in the water

2021-02-16 Thread Marc
Not nice to hear, similar to centos I guess. For now I am sticking to my centos7 till it is EOL, so I have a few years left to decide. You can of course get an el7/el8 license; I think that will give you the best match. Maybe in a few years the distribution does not matter any more, because

[ceph-users] Re: Consequences of setting bluestore_fsck_quick_fix_on_mount to false?

2021-02-16 Thread Matthew Vernon
Hi, On 16/02/2021 08:06, Dan van der Ster wrote: Which version are you upgrading from? If recent nautilus, you may have already completed this conversion. Mimic (well, really Luminous with a pit-stop at Mimic). When we did this fsck (not with octopus, but to a nautilus point release that ha

[ceph-users] Re: 10G stackabe lacp switches

2021-02-16 Thread DHilsbos
Sorry; Netgear M4300 switches, not M4100. Dominic L. Hilsbos, MBA Director - Information Technology Perform Air International Inc. dhils...@performair.com www.PerformAir.com -Original Message- From: dhils...@performair.com [mailto:dhils...@performair.com] Sent: Monday, February 15, 2

[ceph-users] Re: SUSE POC - Dead in the water

2021-02-16 Thread Mark Nelson
Hi Chip, Regarding CephFS performance, it really depends on the I/O patterns and what you are trying to accomplish. Can you talk a little bit more about what you are seeing? Thanks, Mark On 2/16/21 8:42 AM, Schweiss, Chip wrote: For the past several months I had been building a sizable

[ceph-users] Re: 10G stackabe lacp switches

2021-02-16 Thread Mario Giammarco
On Mon, 15 Feb 2021 at 15:16, mj wrote: > > > On 2/15/21 1:38 PM, Eneko Lacunza wrote: > > Do you really need MLAG? (the 2x10G bandwidth?). If not, just use 2 > > simple switches (Mikrotik for example) and in Proxmox use an > > active-passive bond, with default interface in all node
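
A minimal sketch of such an active-backup bond in /etc/network/interfaces on a Proxmox/Debian node; interface names and the address are placeholders:

    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode active-backup
        bond-primary eno1
        bond-miimon 100

    auto vmbr0
    iface vmbr0 inet static
        address 192.0.2.21/24
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0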

[ceph-users] Re: 10G stackabe lacp switches

2021-02-16 Thread Andreas John
Hello, this is not a direct answer to the question, but you could consider the following to double bandwidth: * Run each ceph node with two NICs, each with its own IP, e.g. one node has 192.0.2.10/24 and 192.0.2.11/24 * In ceph.conf you bind 50% of the OSDs to each of those IPs: [osd.XY] ...
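
A rough sketch of what that split could look like in ceph.conf (OSD ids and addresses are placeholders):

    [osd.0]
        public addr = 192.0.2.10
    [osd.1]
        public addr = 192.0.2.11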

[ceph-users] Re: U of Minn

2021-02-16 Thread Nathan Fish
You are unlikely to manage to bottleneck HAProxy on anything except the NIC, at least using normal configurations. On Tue, Feb 16, 2021 at 9:12 AM Chip Cox wrote: > > Is this your Graham? > > > On Feb 14, 2021, at 4:31 PM, Graham Allan wrote: > > > > On Tue, Feb 9, 2021 at 11:00 AM Matthew Verno
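
For reference, a minimal haproxy.cfg sketch for balancing two RGW instances; addresses, ports and timeouts are placeholder assumptions:

    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend rgw_frontend
        bind *:80
        default_backend rgw_backend

    backend rgw_backend
        balance roundrobin
        server rgw1 192.0.2.31:8080 check
        server rgw2 192.0.2.32:8080 check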

[ceph-users] Re: SUSE POC - Dead in the water

2021-02-16 Thread Schweiss, Chip
Mark, We'll see if the problems follow me as I install Croit. They gave a very impressive impromptu presentation shortly after I sent this call for help. I'll make sure I post some details about our CephFS endeavor as things progress; it will likely help others as they start their Ceph projects

[ceph-users] Re: SUSE POC - Dead in the water

2021-02-16 Thread Marc
Ehhh I think they are using the standard ceph versions. Your problem with cephfs is more a matter of configuration/setup, and you should be able to solve that (and have similar results) with any distribution. > -Original Message- > From: Schweiss, Chip > Sent: 16 February 2021 17:43 > To:

[ceph-users] Re: SUSE POC - Dead in the water

2021-02-16 Thread Mark Nelson
Hi Chip, Glad to hear it!  From an upstream perspective we've got a pretty good idea of some of the bottlenecks in the MDS and others in the OSD/Bluestore, but it's always nice to hear what folks are struggling with out in the field to challenge our assumptions. Best of luck! Mark On 2/

[ceph-users] Data Missing with RBD-Mirror

2021-02-16 Thread Vikas Rana
Hi Friends, We have a very weird issue with rbd-mirror replication. As per the command output, we are in sync, but the OSD usage on the DR side doesn't match the Prod side. On Prod, we are using close to 52TB but on the DR side we are only using 22TB. We took a snap on Prod and mounted the snap on DR side a
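
A few commands that can help compare the two sites at the image level rather than by raw OSD usage (pool and image names are placeholders); thin provisioning alone can make the OSD-level numbers differ:

    rbd mirror pool status --verbose data_pool   # per-image replication state
    rbd du data_pool/image_name                  # provisioned vs. actual usage, run on both sites
    rbd snap ls data_pool/image_name             # compare the snapshot lists on both sides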

[ceph-users] Re: Small RGW objects and RADOS 64KB minimum size

2021-02-16 Thread Steven Pine
Will there be a well-documented strategy/method for changing block sizes on existing clusters? Is there anything that could be done to optimize or assist clusters in the cutover? On Tue, Feb 16, 2021 at 3:41 AM Loïc Dachary wrote: > Hi Josh :-) > > Thanks for the update: this is great news an

[ceph-users] Re: Small RGW objects and RADOS 64KB minimum size

2021-02-16 Thread Josh Durgin
Changing min_alloc_size in bluestore requires redeploying the OSD. There's no other way to regain the space that's already allocated. In terms of making this easier, we're looking to automate rolling format changes across a cluster with cephadm in the future. Josh On 2/16/21 9:58 AM, Steven Pin
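
Until then, a hedged sketch of a per-OSD redeploy on a cephadm-managed cluster (host, device and OSD id are placeholders, and flags vary a little between releases):

    ceph config set osd bluestore_min_alloc_size_hdd 4096
    ceph orch osd rm 7 --replace                  # drain the OSD and mark it for replacement
    ceph orch osd rm status                       # wait for draining to complete
    ceph orch device zap host01 /dev/sdc --force  # wipe the device so it gets redeployed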

[ceph-users] Re: Small RGW objects and RADOS 64KB minimum size

2021-02-16 Thread Steven Pine
Yes please, assisting clusters in moving over to a 4k block size would be greatly appreciated. On Tue, Feb 16, 2021 at 1:14 PM Josh Durgin wrote: > Changing min_alloc_size in bluestore requires redeploying the OSD. > There's no other way to regain the space that's already allocated. > > In terms

[ceph-users] Re: best use of NVMe drives

2021-02-16 Thread Richard Bade
Hi Magnus, I agree with your last suggestion, putting the OSD DB on NVMe would be a good idea. I'm assuming you are referring to the Bluestore DB rather than filestore journal since you mentioned your cluster is Nautilus. We have a cephfs cluster set up in this way and it performs well. We don't ha
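
On a ceph-deploy/ceph-volume managed Nautilus cluster, creating such an OSD might look like the sketch below, with one NVMe partition (or LV) per HDD-backed OSD; device paths are placeholders:

    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1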

[ceph-users] Ceph User Survey Now Available

2021-02-16 Thread Mike Perez
Hi everyone! Be sure to make your voice heard by taking the Ceph User Survey before April 2, 2021. This information will help guide the Ceph community’s investment in Ceph and the Ceph community's future development. https://ceph.io/user-survey/ Thank you to the Ceph User Survey Working Group fo

[ceph-users] February 2021 Tech Talk and Code Walk-through

2021-02-16 Thread Mike Perez
Hi everyone! I'm excited to announce two talks we have on the schedule for February 2021: Jason Dillaman will be giving part 2 of the librbd code walk-through. The stream starts on February 23rd at 18:00 UTC / 19:00 CET / 1:00 PM EST / 10:00 AM PST https://tracker.ceph.com/projects/ceph/wiki/Co

[ceph-users] Ceph @ DevConf.CZ

2021-02-16 Thread Mike Perez
Hi everyone, Ceph will be present at DevConf.CZ, February 18-20 in a joint booth with the Rook Community! https://www.devconf.cz If you're interested in more information about being present at the booth to provide expertise/content/presentations to our audience, please let me know privately. --