Ceph on a single host makes little to no sense. You're better off running
something like ZFS.
On Tue, 6 Jul 2021 at 23:52, Wladimir Mutel wrote:
> I started my experimental 1-host/8-HDDs setup in 2018 with
> Luminous,
> and I read
>
I'm not aware of any directly, but I know rook-ceph is used on Kubernetes, and
Kubernetes is sometimes deployed with BGP-based SDN layers, so there may be a
few deployments that do it that way.
From: Martin Verges
Sent: Monday, July 5, 2021 11:23 PM
To:
Hi,
I created a Ceph environment with 3 filesystems on the Pacific release,
but I'm facing a problem: I set a size for each filesystem, but it doesn't
take effect. When I mount the different filesystems on a client, they all
report the same size, even though I set different values.
I've already tried
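In case it helps: the usual way to give each CephFS filesystem its own
apparent size (assuming a quota-aware client, i.e. kernel 4.17+ or
ceph-fuse) is a quota xattr on the mounted root; the mount point and value
below are placeholders:

    # Hypothetical mount point; 107374182400 bytes = 100 GiB
    setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/fs1
    # A quota-aware client then reports the quota as the size in df
    df -h /mnt/fs1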
Oh, I just read your message again, and I see that I didn't answer your
question. :D I admit I don't know how MAX AVAIL is calculated, and whether
it takes things like imbalance into account (it might).
Josh
On Tue, Jul 6, 2021 at 7:41 AM Josh Baergen
wrote:
> Hey Wladimir,
>
> That output
Hey Wladimir,
That output looks like it's from Nautilus or later. My understanding is
that the USED column is in raw bytes, whereas STORED is "user" bytes. If
you're using EC 2:1 for all of those pools, I would expect USED to be at
least 1.5x STORED, which looks to be the case for jerasure21.
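As a quick sanity check of that overhead factor (assuming a k=2, m=1
erasure-code profile):

    raw overhead = (k + m) / k = (2 + 1) / 2 = 1.5
    1.0 TiB STORED -> roughly 1.5 TiB USED (plus a bit of metadata)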
Hi Jay,
looks like there is a bug in the code: this config parameter is not
handled properly. Ticket created: https://tracker.ceph.com/issues/51540
Thanks,
Igor
On 7/6/2021 6:08 PM, Jay Sullivan wrote:
Thanks, Igor. That's super helpful!
I'm trying to temporarily disable the
Hi Jay,
I'm having the same problem; the setting doesn't affect the warning at all.
I'm currently muting the warning every week or so (it doesn't even seem
to be present consistently, and every time it disappears for a moment,
the mute is cancelled) with
ceph health mute
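For reference, the full form takes an optional TTL, and --sticky keeps the
mute in place even when the alert briefly clears and re-raises (the
one-week duration here is just an example):

    ceph health mute BLUESTORE_SPURIOUS_READ_ERRORS 1w --sticky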
Thanks, Igor. That's super helpful!
I'm trying to temporarily disable the BLUESTORE_SPURIOUS_READ_ERRORS warning
while we further investigate our underlying cause. In short: I'm trying to keep
this cluster from going into HEALTH_WARN every day or so. I'm afraid that my
attention to monitoring
> Oh, I just read your message again, and I see that I didn't answer your
> question. :D I admit I don't know how MAX AVAIL is calculated, and whether
> it takes things like imbalance into account (it might).
It does. It’s calculated relative to the most-full OSD in the pool, and the
full_ratio
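A quick way to see which OSD is driving that number and what the ratios
are set to (standard commands, nothing cluster-specific assumed):

    # Per-OSD utilization; the most-full OSD caps MAX AVAIL for its pools
    ceph osd df
    # Show full_ratio / backfillfull_ratio / nearfull_ratio
    ceph osd dump | grep -i ratio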
I have a pretty firm grasp on how OSD encryption with LUKS works, but I'm wondering if anyone has found a way to avoid storing the lockbox keyring on disk?
Sorry, I have not used it before; we evaluated it and went with L2 below ToR.
You need to design the BGP network with your network team because
every node will have an ASN too.
You need to set up a test network before going to production.
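For what it's worth, a minimal frr.conf sketch for a single node (the
ASNs, addresses, and prefix here are made-up placeholders that your
network team would replace):

    ! Hypothetical per-node ASN peering with the ToR switch
    router bgp 65101
     neighbor 10.0.0.1 remote-as 65000
     address-family ipv4 unicast
      ! Advertise the node's loopback /32 used as its Ceph public address
      network 192.0.2.11/32
     exit-address-family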
On Tue, Jul 6, 2021 at 4:33 PM German Anders wrote:
>
>
Great. Now, regarding FRR: no, the truth is that I have never used it
before. Is there any specific guide to FRR-Ceph configuration, or any
advice? Currently we have a bond of two 10GbE links for the public
network, so we can break the bond and get two 10GbE fibers per node for
the public
Without using FRR you are still L2 below ToR. Do you have any
experience with FRR?
You should think about multiple uplinks and how you handle them with FRR.
Other than that, it should work; we have been in production with BGP
above ToR for years without any problems.
On Tue, Jul 6, 2021 at 4:10 PM
Hello Stefan, thanks a lot for the reply.
We have everything in the same datacenter, and the fault domains are per
rack. Regarding BGP, the networking team's idea is to stop using layer
2+3 and move everything to full layer 3; to do this they want to implement
BGP, and the idea is that each
I started my experimental 1-host/8-HDDs setup in 2018 with Luminous,
and I read
https://ceph.io/community/new-luminous-erasure-coding-rbd-cephfs/ ,
which had interested me in using Bluestore and rewriteable EC pools for
RBD data.
I have about 22 TiB of raw
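For reference, the Luminous-era recipe from that post boils down to
something like the following; the pool names, PG counts, and sizes are
placeholders:

    # EC data pool for RBD; overwrites must be enabled for RBD on EC
    ceph osd pool create rbd-data 64 erasure
    ceph osd pool set rbd-data allow_ec_overwrites true
    # Replicated pool holding the RBD metadata
    ceph osd pool create rbd-meta 64 replicated
    rbd pool init rbd-meta
    # Image metadata goes to rbd-meta, data to the EC pool
    rbd create --size 100G --data-pool rbd-data rbd-meta/test-image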
Hello,
After a rolling reboot of my 8-node Octopus 15.2.13 cluster, cephadm
does not find python3 on the nodes, and hence I get quite a few of the
following warnings:
[WRN] CEPHADM_HOST_CHECK_FAILED: 7 hosts fail cephadm check
host ceph1f failed check: Can't communicate with
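In case it's useful, a first-pass check after installing python3 on the
node (host name taken from the warning above):

    # Re-run cephadm's host validation
    ceph cephadm check-host ceph1f
    # e.g. on Debian/Ubuntu nodes: apt install python3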
On 7/5/21 6:26 PM, German Anders wrote:
Hi All,
I have an existing, functional Ceph cluster (latest Luminous release)
with two networks: one public (layer 2+3) and the other for the cluster.
The public one uses VLAN and is 10GbE, and the other one uses
Infiniband with
Hi,
On Sun, Jul 4, 2021 at 0:17 changcheng.liu wrote:
>
> Hi all,
> I'm reading the ceph survey results:
> https://ceph.io/community/2021-ceph-user-survey-results.
> Do we have the data about which type of AsyncMessenger is used?
> TCP/RDMA/DPDK.
> What's the reason that RDMA & DPDK aren't often
yes.
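(If it helps, the transport is selected per daemon via the ms_type
option; async+posix is the TCP default, with async+rdma and async+dpdk
as the alternatives:)

    # Show the messenger type currently in effect
    ceph config get osd ms_type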
> On Jul 5, 2021, at 11:23 PM, Martin Verges wrote:
>
> Hello,
>
>> This is not easy to answer without all the details. But for sure there
> are clusters running with BGP in the field just fine.
>
> Out of curiosity, is there someone here who has their Ceph cluster running
> with BGP in
Hello,
> This is not easy to answer without all the details. But for sure there
are clusters running with BGP in the field just fine.
Out of curiosity, is there someone here who has their Ceph cluster running
with BGP in production?
As far as I remember, here at croit with multiple hundred