[ceph-users] Re: Continuous spurious repairs without cause?

2023-09-06 Thread Christian Theune
Hi,

interesting, that’s something we can definitely try!
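
For anyone wanting to try the same thing, a minimal sketch of what that test could look like (standard option and commands; adjust to your deployment):

    # check whether auto-repair is currently enabled on the OSDs
    ceph config get osd osd_scrub_auto_repair

    # disable it cluster-wide and watch whether the spurious "repair" states stop
    ceph config set osd osd_scrub_auto_repair false

    # with auto-repair off, real scrub errors surface as inconsistent PGs instead
    ceph health detail
    ceph pg ls inconsistent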

Thanks!

Christian

> On 5. Sep 2023, at 16:37, Manuel Lausch  wrote:
> 
> Hi,
> 
> in older versions of Ceph with the auto-repair feature enabled, the PG
> state of scrubbing PGs always included the repair state as well.
> With later versions (I don't know exactly at which version) Ceph
> differentiates scrubbing and repair in the PG state again.
> 
> I think as long as there are no errors logged, all should be fine. If
> you disable auto-repair, the issue should disappear as well. In case of
> scrub errors you will then see the appropriate states.
> 
> Regards
> Manuel
> 
> On Tue, 05 Sep 2023 14:14:56 +
> Eugen Block  wrote:
> 
>> Hi,
>> 
>> it sounds like you have auto-repair enabled (osd_scrub_auto_repair). I
>> guess you could disable that to see what's going on with the PGs and
>> their replicas. And/or you could enable debug logs. Are all daemons
>> running the same Ceph (minor) version? I remember a customer case
>> where different Ceph minor versions (but all Octopus) caused damaged
>> PGs; a repair fixed them every time. After they updated all daemons to
>> the same minor version, those errors were gone.
>> 
>> Regards,
>> Eugen
>> 
>> Quoting Christian Theune:
>> 
>>> Hi,
>>> 
>>> this is a bit older cluster (Nautilus, bluestore only).
>>> 
>>> We’ve noticed that the cluster is almost continuously repairing PGs.
>>> However, they all finish successfully with “0 fixed”. We do not see
>>> what triggers Ceph to repair the PGs, and it’s happening for a lot of
>>> PGs, not any specific individual one.
>>> 
>>> Deep-scrubs are generally running, but currently a bit late as we
>>> had some recoveries in the last week.
>>> 
>>> Logs look regular aside from the number of repairs. Here are the last
>>> few weeks from the perspective of a single PG. There’s one repair, but
>>> the same thing seems to happen for all PGs.
>>> 
>>> 2023-08-06 16:08:17.870 7fc49f1e6640  0 log_channel(cluster) log  
>>> [DBG] : 278.2f3 scrub starts
>>> 2023-08-06 16:08:18.270 7fc49b1de640  0 log_channel(cluster) log  
>>> [DBG] : 278.2f3 scrub ok
>>> 2023-08-07 21:52:22.299 7fc49f1e6640  0 log_channel(cluster) log  
>>> [DBG] : 278.2f3 scrub starts
>>> 2023-08-07 21:52:22.711 7fc49b1de640  0 log_channel(cluster) log  
>>> [DBG] : 278.2f3 scrub ok
>>> 2023-08-09 00:33:42.587 7fc49b1de640  0 log_channel(cluster) log  
>>> [DBG] : 278.2f3 scrub starts
>>> 2023-08-09 00:33:43.049 7fc49f1e6640  0 log_channel(cluster) log  
>>> [DBG] : 278.2f3 scrub ok
>>> 2023-08-10 09:36:00.590 7fc49b1de640  0 log_channel(cluster) log  
>>> [DBG] : 278.2f3 deep-scrub starts
>>> 2023-08-10 09:36:28.811 7fc49b1de640  0 log_channel(cluster) log  
>>> [DBG] : 278.2f3 deep-scrub ok
>>> 2023-08-11 12:59:14.219 7fc49f1e6640  0 log_channel(cluster) log  
>>> [DBG] : 278.2f3 scrub starts
>>> 2023-08-11 12:59:14.567 7fc49b1de640  0 log_channel(cluster) log  
>>> [DBG] : 278.2f3 scrub ok
>>> 2023-08-12 13:52:44.073 7fc49b1de640  0 log_channel(cluster) log  
>>> [DBG] : 278.2f3 scrub starts
>>> 2023-08-12 13:52:44.483 7fc49f1e6640  0 log_channel(cluster) log  
>>> [DBG] : 278.2f3 scrub ok
>>> 2023-08-14 01:51:04.774 7fc49f1e6640  0 log_channel(cluster) log  
>>> [DBG] : 278.2f3 deep-scrub starts
>>> 2023-08-14 01:51:33.113 7fc49b1de640  0 log_channel(cluster) log  
>>> [DBG] : 278.2f3 deep-scrub ok
>>> 2023-08-15 05:18:16.093 7fc49b1de640  0 log_channel(cluster) log  
>>> [DBG] : 278.2f3 scrub starts
>>> 2023-08-15 05:18:16.520 7fc49f1e6640  0 log_channel(cluster) log  
>>> [DBG] : 278.2f3 scrub ok
>>> 2023-08-16 09:47:38.520 7fc49b1de640  0 log_channel(cluster) log  
>>> [DBG] : 278.2f3 scrub starts
>>> 2023-08-16 09:47:38.930 7fc49b1de640  0 log_channel(cluster) log  
>>> [DBG] : 278.2f3 scrub ok
>>> 2023-08-17 19:25:45.352 7fc49b1de640  0 log_channel(cluster) log  
>>> [DBG] : 278.2f3 scrub starts
>>> 2023-08-17 19:25:45.775 7fc49b1de640  0 log_channel(cluster) log  
>>> [DBG] : 278.2f3 scrub ok
>>> 2023-08-19 05:40:43.663 7fc49b1de640  0 log_channel(cluster) log  
>>> [DBG] : 278.2f3 scrub starts
>>> 2023-08-19 05:40:44.073 7fc49f1e6640  0 log_channel(cluster) log  
>>> [DBG] : 278.2f3 scrub ok
>>> 2023-08-20 12:06:54.343 7fc49f1e6640  0 log_channel(cluster) log  
>>> [DBG] : 278.2f3 scrub starts
>>> 2023-08-20 12:06:54.809 7fc49b1de640  0 log_channel(cluster) log  
>>> [DBG] : 278.2f3 scrub ok
>>> 2023-08-21 19:23:10.801 7fc49f1e6640  0 log_channel(cluster) log  
>>> [DBG] : 278.2f3 deep-scrub starts
>>> 2023-08-21 19:23:39.936 7fc49b1de640  0 log_channel(cluster) log  
>>> [DBG] : 278.2f3 deep-scrub ok
>>> 2023-08-23 03:43:21.391 7fc49f1e6640  0 log_channel(cluster) log  
>>> [DBG] : 278.2f3 scrub starts
>>> 2023-08-23 03:43:21.844 7fc49b1de640  0 log_channel(cluster) log  
>>> [DBG] : 278.2f3 scrub ok
>>> 2023-08-24 04:21:17.004 7fc49b1de640  0 log_channel(cluster) log  
>>> [DBG] : 278.2f3 deep-scrub starts
>>> 2023-08-24 04:21:47.972 7fc49f1e6640  0 log_channel(cluster) log  

[ceph-users] Re: Continuous spurious repairs without cause?

2023-09-06 Thread Christian Theune
Hi,

thanks for the hint. We’re definitely running the exact same binaries for all of them. :)
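
For reference, a quick way to double-check that across a cluster (a sketch; the OSD id is just an example):

    # one-line summary of the exact version every running daemon reports
    ceph versions

    # or ask an individual daemon directly
    ceph tell osd.0 version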

> On 5. Sep 2023, at 16:14, Eugen Block  wrote:
> 
> Hi,
> 
> it sounds like you have auto-repair enabled (osd_scrub_auto_repair). I guess
> you could disable that to see what's going on with the PGs and their
> replicas. And/or you could enable debug logs. Are all daemons running the
> same Ceph (minor) version? I remember a customer case where different Ceph
> minor versions (but all Octopus) caused damaged PGs; a repair fixed them
> every time. After they updated all daemons to the same minor version, those
> errors were gone.
> 
> Regards,
> Eugen
> 

[ceph-users] Re: Is it possible (or meaningful) to revive old OSDs?

2023-09-06 Thread Richard Bade
Yes, I agree with Anthony. If your cluster is healthy and you don't
*need* to bring them back in, it's going to be less work and time to
just deploy them as new.

I usually set norebalance, purge the OSDs in Ceph, remove the VG from
the disks and re-deploy, then unset norebalance at the end once
everything is peered and happy. That way it doesn't start moving stuff
around when you purge.
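
A minimal sketch of that workflow (the OSD id, hostname and device path are placeholders; the redeploy step assumes cephadm, adjust to your tooling):

    # stop data movement while the old OSDs are removed
    ceph osd set norebalance

    # remove a dead OSD from the cluster (repeat per OSD id)
    ceph osd purge 12 --yes-i-really-mean-it

    # wipe the old LVM state so the disk shows up as available again
    ceph-volume lvm zap --destroy /dev/sdX

    # redeploy the disk as a new OSD, e.g. via the orchestrator
    ceph orch daemon add osd myhost:/dev/sdX

    # once everything has peered and is happy, allow rebalancing again
    ceph osd unset norebalance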

Rich

On Thu, 7 Sept 2023 at 02:21, Anthony D'Atri  wrote:
>
> Resurrection usually only makes sense if fate or a certain someone resulted
> in enough overlapping removed OSDs that you can't meet min_size. I've had to
> do that a couple of times :-/
>
> If an OSD is down for more than a short while, backfilling a redeployed OSD 
> will likely be faster than waiting for it to peer and do deltas -- if it can 
> at all.
>
> > On Sep 6, 2023, at 10:16, Malte Stroem  wrote:
> >
> > Hi ceph-m...@rikdvk.mailer.me,
> >
> > you could squeeze the OSDs back in but it does not make sense.
> >
> > Just clean the disks with dd for example and add them as new disks to your 
> > cluster.
> >
> > Best,
> > Malte
> >
> > On 04.09.23 at 09:39, ceph-m...@rikdvk.mailer.me wrote:
> >> Hello,
> >> I have a ten node cluster with about 150 OSDs. One node went down a while 
> >> back, several months. The OSDs on the node have been marked as down and 
> >> out since.
> >> I am now in the position to return the node to the cluster, with all the 
> >> OS and OSD disks. When I boot up the now working node, the OSDs do not 
> >> start.
> >> Essentially, it seems to complain with "fail[ing] to load OSD map for 
> >> [various epoch]s, got 0 bytes".
> >> I'm guessing the OSDs' on-disk maps are so old that they can't get back 
> >> into the cluster?
> >> My questions are whether it's possible or worth it to try to squeeze these 
> >> OSDs back in or to just replace them. And if I should just replace them, 
> >> what's the best way? Manually remove [1] and recreate? Replace [2]? Purge 
> >> in dashboard?
> >> [1] 
> >> https://docs.ceph.com/en/quincy/rados/operations/add-or-rm-osds/#removing-osds-manual
> >> [2] 
> >> https://docs.ceph.com/en/quincy/rados/operations/add-or-rm-osds/#replacing-an-osd
> >> Many thanks!
> >> ___
> >> ceph-users mailing list -- ceph-users@ceph.io
> >> To unsubscribe send an email to ceph-users-le...@ceph.io
> > ___
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-le...@ceph.io
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] ceph_leadership_team_meeting_s18e06.mkv

2023-09-06 Thread Ernesto Puerta
Dear Cephers,

Today brought us an eventful CLT meeting: it looks like Jitsi recently started
requiring user authentication (anonymous users will get a "Waiting for a
moderator" modal), but authentication didn't work against Google or GitHub
accounts, so we had to move to the good old Google Meet.

As a result of this, Neha has kindly set up a new private Slack channel
(#clt) to allow for quicker communication among CLT members (if you usually
attend the CLT meeting and have not been added, please ping any CLT member
to request that).

Now, let's move on to the important stuff:

*The latest Pacific Release (v16.2.14)*

*The Bad*
The 14th drop of the Pacific release has landed with a few hiccups:

   - Some .deb packages were made available on downloads.ceph.com before
   the release process was complete. Although this is not the first time it
   has happened, we want to ensure it is the last, so we'd like to gather
   ideas to improve the release publishing process. Neha encouraged everyone
   to share ideas here:
  - https://tracker.ceph.com/issues/62671
  - https://tracker.ceph.com/issues/62672
   - v16.2.14 also hit issues during the ceph-container stage. Laura
   wanted to raise awareness of its current setbacks and collect ideas to
   tackle them:
      - Enforce reviews and mandatory CI checks
      - Rework the current approach to use simple Dockerfiles
      - Call the Ceph community for help: ceph-container is currently
      maintained part-time by a single contributor (Guillaume Abrioux). This
      sub-project would benefit from the sound expertise on containers among
      Ceph users. If you have ever considered contributing to Ceph, but felt
      a bit intimidated by C++, Paxos and race conditions, ceph-container is
      a good place to shed your fear.


*The Good*
Not everything about v16.2.14 was bleak: David Orman brought us really good
news. They tested v16.2.14 on a large production cluster (10 Gbit/s+ RGW and
~13 PiB raw) and found that it solved a major issue affecting RGW in Pacific.

*The Ugly*
During that testing, they noticed that ceph-mgr was occasionally OOM killed
(nothing new to 16.2.14, as it was previously reported). They already tried:

   - Disabling modules (like the restful one, which was a suspect)
   - Enabling debug 20
   - Turning the pg autoscaler off

Debugging will continue to characterize this issue:

   - Enable profiling (Mark Nelson)
   - Try Bloomberg's Python memory profiler (Matthew Leonard)


*Infrastructure*

*Reminder: Infrastructure Meeting Tomorrow. **11:30-12:30 Central Time*

Patrick brought up the following topics:

   - Need to reduce the OVH spending ($72k/year, which is a sizeable chunk of
   the Ceph Foundation budget; that's a lot fewer avocado sandwiches for the
   next Cephalocon):
      - Move services (e.g. Chacra) to the Sepia lab
      - Re-use CentOS (and any spare/unused) machines for devel purposes
   - Current Ceph sysadmins are overloaded, so devel/community involvement
   would be much appreciated.
   - More to be discussed in tomorrow's meeting. Please join if you think
   you can help solve/improve the Ceph infrastructure!


*BTW*: today's CDM will be canceled, since no topics were proposed.

Kind Regards,

Ernesto
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Join Us for the Relaunch of the Ceph User + Developer Monthly Meeting!

2023-09-06 Thread Laura Flores
Hi Ceph users and developers,

We’re happy to announce a relaunch of the Ceph User + Developer Monthly
Meeting – a virtual platform that has long been in our community for
encouraging discussion and collaboration between users and developers. With
this relaunch, we aim to recenter the platform around user-facing topics
that go beyond immediate bug reports, such as long-term improvements and
knowledge sharing.

Users and developers are encouraged to submit focus topics to this Google
form [1] in preparation for each meeting. Read more about what we're looking
for in focus topics here:
https://ceph.io/en/news/blog/2023/user-dev-meeting-relaunch/

Join us on *September 21st, 10:00 am EST* at this link [2] to witness the
relaunch!

- Laura Flores

1. User + Dev Google form:
https://docs.google.com/forms/d/e/1FAIpQLSdboBhxVoBZoaHm8xSmeBoemuXoV_rmh4vJDGBrp6d-D3-BlQ/viewform?usp=sf_link
2. Meeting link: https://meet.jit.si/ceph-user-dev-monthly

-- 

Laura Flores

She/Her/Hers

Software Engineer, Ceph Storage 

Chicago, IL

lflo...@ibm.com | lflo...@redhat.com 
M: +17087388804
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Is it possible (or meaningful) to revive old OSDs?

2023-09-06 Thread Anthony D'Atri
Resurrection usually only makes sense if fate or a certain someone resulted in 
enough overlapping removed OSDs that you can't meet min_size. I've had to do 
that a couple of times :-/

If an OSD is down for more than a short while, backfilling a redeployed OSD 
will likely be faster than waiting for it to peer and do deltas -- if it can at 
all.
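
For the rare case where resurrection is the only way to meet min_size again, a quick sanity check of whether any PGs actually depend on the removed OSDs (a sketch):

    # PGs that cannot serve I/O or are missing copies show up here
    ceph health detail
    ceph pg ls incomplete
    ceph pg ls down

    # compare against the pools' size/min_size settings
    ceph osd pool ls detail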

> On Sep 6, 2023, at 10:16, Malte Stroem  wrote:
> 
> Hi ceph-m...@rikdvk.mailer.me,
> 
> you could squeeze the OSDs back in but it does not make sense.
> 
> Just clean the disks with dd for example and add them as new disks to your 
> cluster.
> 
> Best,
> Malte
> 
> On 04.09.23 at 09:39, ceph-m...@rikdvk.mailer.me wrote:
>> Hello,
>> I have a ten node cluster with about 150 OSDs. One node went down a while 
>> back, several months. The OSDs on the node have been marked as down and out 
>> since.
>> I am now in the position to return the node to the cluster, with all the OS 
>> and OSD disks. When I boot up the now working node, the OSDs do not start.
>> Essentially, it seems to complain with "fail[ing] to load OSD map for 
>> [various epoch]s, got 0 bytes".
>> I'm guessing the OSDs' on-disk maps are so old that they can't get back into 
>> the cluster?
>> My questions are whether it's possible or worth it to try to squeeze these 
>> OSDs back in or to just replace them. And if I should just replace them, 
>> what's the best way? Manually remove [1] and recreate? Replace [2]? Purge in 
>> dashboard?
>> [1] 
>> https://docs.ceph.com/en/quincy/rados/operations/add-or-rm-osds/#removing-osds-manual
>> [2] 
>> https://docs.ceph.com/en/quincy/rados/operations/add-or-rm-osds/#replacing-an-osd
>> Many thanks!
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Is it possible (or meaningful) to revive old OSDs?

2023-09-06 Thread Malte Stroem

Hi ceph-m...@rikdvk.mailer.me,

you could squeeze the OSDs back in but it does not make sense.

Just clean the disks with dd for example and add them as new disks to 
your cluster.
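
A sketch of that cleanup (the device path is a placeholder; make very sure it 
is the right disk before wiping):

    # remove the old Ceph LVM volumes and wipe their signatures
    ceph-volume lvm zap --destroy /dev/sdX

    # or the plain dd approach, overwriting the start of the disk where the
    # LVM and bluestore labels live
    dd if=/dev/zero of=/dev/sdX bs=1M count=200 oflag=direct

Afterwards the disk can be added back as a brand-new OSD with your usual 
deployment tooling.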


Best,
Malte

On 04.09.23 at 09:39, ceph-m...@rikdvk.mailer.me wrote:

Hello,

I have a ten node cluster with about 150 OSDs. One node went down a while back, 
several months. The OSDs on the node have been marked as down and out since.

I am now in the position to return the node to the cluster, with all the OS and 
OSD disks. When I boot up the now working node, the OSDs do not start.

Essentially, it seems to complain with "fail[ing] to load OSD map for [various 
epoch]s, got 0 bytes".

I'm guessing the OSDs' on-disk maps are so old that they can't get back into 
the cluster?

My questions are whether it's possible or worth it to try to squeeze these OSDs 
back in or to just replace them. And if I should just replace them, what's the 
best way? Manually remove [1] and recreate? Replace [2]? Purge in dashboard?

[1] 
https://docs.ceph.com/en/quincy/rados/operations/add-or-rm-osds/#removing-osds-manual
[2] 
https://docs.ceph.com/en/quincy/rados/operations/add-or-rm-osds/#replacing-an-osd

Many thanks!

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Questions about 'public network' and 'cluster network'?

2023-09-06 Thread Louis Koo
If the public network and cluster network use the same IP, why is it still 
necessary to send heartbeats to hb_front_server and hb_back_server at the 
same time?
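
For context, a minimal sketch of the configuration in question (the subnet is 
a placeholder). Even when both options point at the same subnet, the OSD still 
binds separate front and back heartbeat addresses; they simply end up on the 
same network:

    [global]
    public_network  = 192.168.1.0/24
    cluster_network = 192.168.1.0/24   # same subnet as the public network here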
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: insufficient space ( 10 extents) on vgs lvm detected locked

2023-09-06 Thread Eugen Block
This is just reflecting your current device status; the devices
reported there probably already host running (or at least deployed) OSDs.
In case they aren't and you expected them to be available, you'll need
to investigate.
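
If you do need to dig in, a sketch of the usual checks (hostname and device 
path are placeholders):

    # list devices and the reasons cephadm rejects them
    ceph orch device ls

    # see what is actually on the disk
    ceph-volume lvm list
    lsblk
    vgs
    lvs

    # only if the device is genuinely unused and should be reclaimed
    ceph orch device zap myhost /dev/sdX --force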


Quoting absanka...@gmail.com:

ceph orch device ls output ("insufficient space ( 10 extents) on
vgs, LVM detected, locked"), Quincy version. Is this just a warning or
should any action be taken?

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io