[ceph-users] Re: Undo "radosgw-admin bi purge"

2023-03-13 Thread J. Eric Ivancich
A PR that adds experimental support for restoring a bucket index was merged
into main. It will need backports to reef, quincy, and pacific.

https://tracker.ceph.com/issues/59053 


Currently it does not work for versioned buckets. And it is experimental.

If anyone is able to try it I’d be curious about your experiences.
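
For context, step 2 of the outline quoted below amounts to roughly the following on the command line (only a sketch; the pool name and bucket marker are placeholders, and the second grep filters out the tail/shadow objects):

rados -p default.rgw.buckets.data ls | grep '^<bucket-marker>_' | grep -Ev '__(shadow|multipart)_'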

Eric
(he/him)

> On Feb 23, 2023, at 11:20 AM, J. Eric Ivancich  wrote:
> 
> Off the top of my head:
> 
> 1. The command would take a bucket marker and a bucket name as arguments. It
> might also need some additional metadata to fill in gaps.
> 2. Scan the data pool for head objects that refer to that bucket marker.
> 3. Based on the number of such objects found, create a bucket index with an 
> appropriate number of shards (approximately that number divided by 50,000).
> 4. For each such object:
>   a. Read the manifest in the head object and see if all tail objects
> exist (see the sketch below).
>   b. If any tail objects are missing, maybe report that object on the 
> console as non-recoverable?
>   c. If all tail objects are present, resolve the name of the object from the
> head object and add the bucket index entry to the appropriate shard.
> 
> Note 1: Slight variations may be needed depending on whether or not the 
> bucket entry object exists and the bucket instance object exists.
> Note 2: Versioned buckets will likely require some additional steps, but I’d 
> need to refresh my memory on some of the details.
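> 
> (For steps 4a/4b, a rough manual spot-check could look like the following. This is
> just a sketch that assumes the default data pool name and placeholder object names;
> the manifest lives in an xattr on the head object, and individual tail objects can
> be checked with rados stat.)
> 
> rados -p default.rgw.buckets.data listxattr '<marker>_<object-name>'    # user.rgw.manifest should be listed for objects with tail parts
> rados -p default.rgw.buckets.data stat '<marker>__shadow_<tail-object>' # a non-zero exit status means the tail is missing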
> 
> Eric
> (he/him)
> 
>> On Feb 23, 2023, at 4:51 AM, Robert Sander  
>> wrote:
>> 
>> Hi,
>> 
>> On 22.02.23 17:45, J. Eric Ivancich wrote:
>> 
>>> You also asked why there’s not a command to scan the data pool and recreate 
>>> the bucket index. I think the concept would work as all head objects 
>>> include the bucket marker in their names. There might be some corner cases 
>>> where it’d partially fail, such as (possibly) transactional changes that 
>>> were underway when the bucket index was purged. And there is metadata in 
>>> the bucket index that’s not stored in the objects, so it would have to be 
>>> recreated somehow. But no one has written it yet.
>> 
>> I am not in urgent need of such a feature.
>> 
>> How would the process look to get development started in this direction?
>> 
>> Regards
>> -- 
>> Robert Sander
>> Heinlein Consulting GmbH
>> Schwedter Str. 8/9b, 10119 Berlin
>> 
>> https://www.heinlein-support.de
>> 
>> Tel: 030 / 405051-43
>> Fax: 030 / 405051-19
>> 
>> Amtsgericht Berlin-Charlottenburg - HRB 220009 B
>> Geschäftsführer: Peer Heinlein - Sitz: Berlin
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
> 

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: upgrading from 15.2.17 to 16.2.11 - Health ERROR

2023-03-13 Thread Clyso GmbH - Ceph Foundation Member

Which version of cephadm are you using?
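
For example, the output of the following would already help (a quick sketch):

cephadm version
ceph versions
ceph orch ps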

___
Clyso GmbH - Ceph Foundation Member

Am 10.03.23 um 11:17 schrieb xadhoo...@gmail.com:

Looking at the output of ceph orch upgrade check, I found this:
 },
 
"cephadm.8d0364fef6c92fc3580b0d022e32241348e6f11a7694d2b957cdafcb9d059ff2": {
 "current_id": null,
 "current_name": null,
 "current_version": null
 },


Could this lead to the issue?
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] User + Dev Meeting happening this week Thursday!

2023-03-13 Thread Laura Flores
Hi Ceph Users,

The User + Dev Meeting is happening this Thursday, March 16th at 10am
EDT (see
extra meeting details below). If you have any topics you'd like to discuss,
please add them to the etherpad:
https://pad.ceph.com/p/ceph-user-dev-monthly-minutes

One of the topics we wish to discuss is whether any users would be willing
to help with early Reef testing after the RC comes out.

Thanks,
Laura Flores

Meeting link:
https://meet.jit.si/ceph-user-dev-monthly

Time conversions:
UTC:   Thursday, March 16, 14:00 UTC
Mountain View, CA, US: Thursday, March 16,  7:00 PDT
Phoenix, AZ, US:   Thursday, March 16,  7:00 MST
Denver, CO, US:Thursday, March 16,  8:00 MDT
Huntsville, AL, US:Thursday, March 16,  9:00 CDT
Raleigh, NC, US:   Thursday, March 16, 10:00 EDT
London, England:   Thursday, March 16, 14:00 GMT
Paris, France: Thursday, March 16, 15:00 CET
Helsinki, Finland: Thursday, March 16, 16:00 EET
Tel Aviv, Israel:  Thursday, March 16, 16:00 IST
Pune, India:   Thursday, March 16, 19:30 IST
Brisbane, Australia:   Friday, March 17,  0:00 AEST
Singapore, Asia:   Thursday, March 16, 22:00 +08
Auckland, New Zealand: Friday, March 17,  3:00 NZDT

-- 

Laura Flores

She/Her/Hers

Software Engineer, Ceph Storage 

Chicago, IL

lflo...@ibm.com | lflo...@redhat.com 
M: +17087388804
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Mixed mode ssd and hdd issue

2023-03-13 Thread xadhoom76
Hi, we have a cluster with 3 nodes. Each node has 4 HDDs and 1 SSD.
We would like to have one pool only on SSD and one pool only on HDD, using the
device class feature. Here is the setup (a CLI sketch of the class-bound rules
follows after the dump):
# buckets
host ceph01s3 {
id -3   # do not change unnecessarily
id -4 class hdd # do not change unnecessarily
id -21 class ssd# do not change unnecessarily
# weight 34.561
alg straw2
hash 0  # rjenkins1
item osd.0 weight 10.914
item osd.5 weight 10.914
item osd.8 weight 10.914
item osd.9 weight 1.819
}
host ceph02s3 {
id -5   # do not change unnecessarily
id -6 class hdd # do not change unnecessarily
id -22 class ssd# do not change unnecessarily
# weight 34.561
alg straw2
hash 0  # rjenkins1
item osd.1 weight 10.914
item osd.3 weight 10.914
item osd.7 weight 10.914
item osd.10 weight 1.819
}
host ceph03s3 {
id -7   # do not change unnecessarily
id -8 class hdd # do not change unnecessarily
id -23 class ssd# do not change unnecessarily
# weight 34.561
alg straw2
hash 0  # rjenkins1
item osd.2 weight 10.914
item osd.4 weight 10.914
item osd.6 weight 10.914
item osd.11 weight 1.819
}
root default {
id -1   # do not change unnecessarily
id -2 class hdd # do not change unnecessarily
id -24 class ssd# do not change unnecessarily
# weight 103.683
alg straw2
hash 0  # rjenkins1
item ceph01s3 weight 34.561
item ceph02s3 weight 34.561
item ceph03s3 weight 34.561
}

# rules
rule replicated_rule {
id 0
type replicated
min_size 1
max_size 10
step take default class hdd
step chooseleaf firstn 0 type host
step emit
}
rule erasure-code {
id 1
type erasure
min_size 3
max_size 4
step take default class hdd
step set_chooseleaf_tries 5
step set_choose_tries 100
step chooseleaf indep 0 type host
step emit
}
rule erasure2_1 {
id 2
type erasure
min_size 3
max_size 3
step take default class hdd
step set_chooseleaf_tries 5
step set_choose_tries 100
step chooseleaf indep 0 type host
step emit
}
rule erasure-pool.meta {
id 3
type erasure
min_size 3
max_size 3
step take default class hdd
step set_chooseleaf_tries 5
step set_choose_tries 100
step chooseleaf indep 0 type host
step emit
}
rule erasure-pool.data {
id 4
type erasure
min_size 3
max_size 3
step take default class hdd
step set_chooseleaf_tries 5
step set_choose_tries 100
step chooseleaf indep 0 type host
step emit
}
rule replicated_rule_ssd {
id 5
type replicated
min_size 1
max_size 10
step take default class ssd
step chooseleaf firstn 0 type host
step emit
}

# end crush map

pool 1 'device_health_metrics' replicated size 3 min_size 2 crush_rule 0 
object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 1669 
flags hashpspool stripe_width 0 pg_num_max 32 pg_num_min 1 application 
mgr_devicehealth
pool 5 'Datapool' replicated size 3 min_size 2 crush_rule 0 object_hash 
rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 2749 lfor 0/0/321 
flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
pool 7 'erasure-pool.data' erasure profile k2m1 size 3 min_size 2 crush_rule 4 
object_hash rjenkins pg_num 128 pgp_num 126 pgp_num_target 128 autoscale_mode 
on last_change 2780 lfor 0/0/1676 flags hashpspool,ec_overwrites stripe_width 
8192 application cephfs
pool 8 'erasure-pool.meta' replicated size 3 min_size 2 crush_rule 0 
object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 344 
flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 16 
recovery_priority 5 application cephfs
pool 9 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash 
rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 592 flags 
hashpspool stripe_width 0 application rgw
pool 10 'brescia-ovest.rgw.log' replicated size 3 min_size 2 crush_rule 0 
object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 595 
flags hashpspool stripe_width 0 application rgw
pool 11 'brescia-ovest.rgw.control' replicated size 3 min_size 2 crush_rule 0 
object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 597 
flags hashpspool stripe_width 0 application rgw
pool 12 'brescia-ovest.rgw.meta' replicated size 3 min_size 2 crush_rule 0 
object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode on last_change 719 lfor 
0/719/717 flags 
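
For reference, the CLI equivalent of the class-bound rules shown above would look roughly like this (only a sketch; rule names are placeholders and the erasure-code profile is just an example):

ceph osd crush rule create-replicated replicated_ssd default host ssd
ceph osd crush rule create-replicated replicated_hdd default host hdd
ceph osd pool set Datapool crush_rule replicated_hdd
# for erasure-coded pools the device class goes into the profile:
ceph osd erasure-code-profile set ec21hdd k=2 m=1 crush-failure-domain=host crush-device-class=hdd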

[ceph-users] Re: pg wait too long when osd restart

2023-03-13 Thread Josh Baergen
(trimming out the dev list and Radoslaw's email)

Hello,

I think the two critical PRs were:
* https://github.com/ceph/ceph/pull/44585 - included in 15.2.16
* https://github.com/ceph/ceph/pull/45655 - included in 15.2.17

I don't have any comments on tweaking those configuration values or on
what safe values would be.

Josh

On Sun, Mar 12, 2023 at 9:43 PM yite gu  wrote:
>
> Hello, Baergen
> Thanks for your reply. The OSD restarts are planned, but my version is 15.2.7,
> so I may have hit the problem you described. Could you point me to the PRs that
> optimize this mechanism? Besides that, if I don't want to upgrade in the near
> term, is lowering osd_pool_default_read_lease_ratio a good way to reach the
> users' tolerance time? For example, 0.4 or 0.2.
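> 
> Something like this is what I have in mind (just a sketch; I am not sure whether
> the OSDs need a restart for it to take effect, and with osd_heartbeat_grace
> staying at 20 the worst-case wait would drop from 16 s to 8 s at 0.4):
> 
> ceph config set osd osd_pool_default_read_lease_ratio 0.4
> ceph config get osd osd_pool_default_read_lease_ratio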
>
> Yite Gu
>
> Josh Baergen  于2023年3月10日周五 22:09写道:
>>
>> Hello,
>>
>> When you say "osd restart", what sort of restart are you referring to
>> - planned (e.g. for upgrades or maintenance) or unplanned (OSD
>> hang/crash, host issue, etc.)? If it's the former, then these
>> parameters shouldn't matter provided that you're running a recent
>> enough Ceph with default settings - it's supposed to handle planned
>> restarts with little I/O wait time. There were some issues with this
>> mechanism before Octopus 15.2.17 / Pacific 16.2.8 that could cause
>> planned restarts to wait for the read lease timeout in some
>> circumstances.
>>
>> Josh
>>
>> On Fri, Mar 10, 2023 at 1:31 AM yite gu  wrote:
>> >
>> > Hi all,
>> > osd_heartbeat_grace = 20 and osd_pool_default_read_lease_ratio = 0.8 by
>> > default, so a PG will wait 16 s (20 * 0.8) in the worst case when an OSD
>> > restarts. This wait time is too long; clients cannot tolerate such an I/O
>> > stall. I think lowering osd_pool_default_read_lease_ratio is a good way to
>> > address it. Are there any good suggestions for reducing the PG wait time?
>> >
>> > Best Regard
>> > Yite Gu
>> > ___
>> > ceph-users mailing list -- ceph-users@ceph.io
>> > To unsubscribe send an email to ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: libceph: mds1 IP+PORT wrong peer at address

2023-03-13 Thread Xiubo Li

Hi Frank,

I am afraid there is a bug in the code: it is racy when updating the
new mdsmap against the old one. We have several fixes for this, as I
remember.


You can try a newer kernel to see whether you can still reproduce it.
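
If rebooting is not practical, a forced remount of the affected mountpoint is
usually enough to clear the stale session. Roughly like this (a sketch; the path
and mount options are placeholders for whatever is in your fstab):

umount -f /mnt/cephfs    # or umount -l if processes still hold it open
mount -t ceph <mon-host>:6789:/ /mnt/cephfs -o name=<client>,secretfile=/etc/ceph/<client>.secret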

Thanks

- Xiubo

On 13/03/2023 17:10, Frank Schilder wrote:

Hi Xiubo,

It's a really old kernel version: 3.10.0-957.10.1.el7.x86_64. We plan to upgrade
soonish, but it's a major operation. For now we just need a workaround to get
the client clean again. Do you have information about what triggers this bug?
Maybe we can avoid triggering it.

Thanks and best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14


From: Xiubo Li 
Sent: 13 March 2023 01:44:49
To: Frank Schilder; ceph-users@ceph.io
Subject: Re: [ceph-users] libceph: mds1 IP+PORT wrong peer at address

Hi Frank,

BTW, what kernel version are you using? It's a bug, and I have
never seen it with newer kernels.

You can try to remount the mountpoints and it should work.

Thanks

- Xiubo

On 09/03/2023 17:49, Frank Schilder wrote:

Hi all,

we seem to have hit a bug in the ceph fs kernel client and I just want to confirm what 
action to take. We get the error "wrong peer at address" in dmesg and some jobs 
on that server seem to get stuck in fs access; log extract below. I found these 2 tracker 
items https://tracker.ceph.com/issues/23883 and https://tracker.ceph.com/issues/41519, 
which don't seem to have fixes.

My questions:

- Is this harmless or does it indicate invalid/corrupted client cache entries?
- How to resolve, ignore, umount+mount or reboot?

Here an extract from the dmesg log, the error has survived a couple of MDS 
restarts already:

[Mon Mar  6 12:56:46 2023] libceph: mds1 192.168.32.87:6801 wrong peer at 
address
[Mon Mar  6 13:05:18 2023] libceph: wrong peer, want 
192.168.32.87:6801/-223958753, got 192.168.32.87:6801/-1572619386
[Mon Mar  6 13:05:18 2023] libceph: mds1 192.168.32.87:6801 wrong peer at 
address
[Mon Mar  6 13:13:50 2023] libceph: wrong peer, want 
192.168.32.87:6801/-223958753, got 192.168.32.87:6801/-1572619386
[Mon Mar  6 13:13:50 2023] libceph: mds1 192.168.32.87:6801 wrong peer at 
address
[Mon Mar  6 13:16:41 2023] libceph: mds1 192.168.32.87:6801 socket closed (con 
state OPEN)
[Mon Mar  6 13:16:41 2023] libceph: mds1 192.168.32.87:6801 socket closed (con 
state OPEN)
[Mon Mar  6 13:16:45 2023] ceph: mds1 reconnect start
[Mon Mar  6 13:16:45 2023] ceph: mds1 reconnect start
[Mon Mar  6 13:16:48 2023] ceph: mds1 reconnect success
[Mon Mar  6 13:16:48 2023] ceph: mds1 reconnect success
[Mon Mar  6 13:18:13 2023] ceph: update_snap_trace error -22
[Mon Mar  6 13:18:17 2023] libceph: mds7 192.168.32.88:6801 socket closed (con 
state OPEN)
[Mon Mar  6 13:18:17 2023] libceph: mds7 192.168.32.88:6801 socket closed (con 
state OPEN)
[Mon Mar  6 13:18:23 2023] ceph: mds1 recovery completed
[Mon Mar  6 13:18:23 2023] ceph: mds1 recovery completed
[Mon Mar  6 13:18:28 2023] ceph: mds7 reconnect start
[Mon Mar  6 13:18:28 2023] ceph: mds7 reconnect start
[Mon Mar  6 13:18:28 2023] ceph: mds7 reconnect success
[Mon Mar  6 13:18:29 2023] ceph: mds7 reconnect success
[Mon Mar  6 13:18:35 2023] ceph: update_snap_trace error -22
[Mon Mar  6 13:18:35 2023] ceph: mds7 recovery completed
[Mon Mar  6 13:18:35 2023] ceph: mds7 recovery completed
[Mon Mar  6 13:22:22 2023] libceph: wrong peer, want 
192.168.32.87:6801/-223958753, got 192.168.32.87:6801/-453143347
[Mon Mar  6 13:22:22 2023] libceph: mds1 192.168.32.87:6801 wrong peer at 
address
[Mon Mar  6 13:30:54 2023] libceph: wrong peer, want 
192.168.32.87:6801/-223958753, got 192.168.32.87:6801/-453143347
[...]
[Thu Mar  9 09:37:24 2023] slurm.epilog.cl (31457): drop_caches: 3
[Thu Mar  9 09:38:26 2023] libceph: wrong peer, want 
192.168.32.87:6801/-223958753, got 192.168.32.87:6801/-453143347
[Thu Mar  9 09:38:26 2023] libceph: mds1 192.168.32.87:6801 wrong peer at 
address
[Thu Mar  9 09:46:58 2023] libceph: wrong peer, want 
192.168.32.87:6801/-223958753, got 192.168.32.87:6801/-453143347
[Thu Mar  9 09:46:58 2023] libceph: mds1 192.168.32.87:6801 wrong peer at 
address
[Thu Mar  9 09:55:30 2023] libceph: wrong peer, want 
192.168.32.87:6801/-223958753, got 192.168.32.87:6801/-453143347
[Thu Mar  9 09:55:30 2023] libceph: mds1 192.168.32.87:6801 wrong peer at 
address
[Thu Mar  9 10:04:02 2023] libceph: wrong peer, want 
192.168.32.87:6801/-223958753, got 192.168.32.87:6801/-453143347
[Thu Mar  9 10:04:02 2023] libceph: mds1 192.168.32.87:6801 wrong peer at 
address

Thanks and best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


--
Best Regards,

Xiubo Li (李秀波)

Email: xiu...@redhat.com/xiu...@ibm.com
Slack: @Xiubo Li


--
Best Regards,

Xiubo Li (李秀波)

Email: 

[ceph-users] Unexpected ceph pool creation error with Ceph Quincy

2023-03-13 Thread Geert Kloosterman
Hi all,

I'm trying out Ceph Quincy (17.2.5) for the first time and I'm running into 
unexpected behavior of "ceph osd pool create".

When not passing any pg_num and pgp_num values, I get the following error with 
Quincy:

[root@gjk-ceph ~]# ceph osd pool create asdf
Error ERANGE: 'pgp_num' must be greater than 0 and lower or equal than 
'pg_num', which in this case is 1

I checked with Ceph Pacific (16.2.11) and there the extra arguments are not 
needed.

I expected it would use osd_pool_default_pg_num and osd_pool_default_pgp_num as 
defined in my ceph.conf:

[root@gjk-ceph ~]# ceph-conf -D | grep 'osd_pool_default_pg'
osd_pool_default_pg_autoscale_mode = on
osd_pool_default_pg_num = 8
osd_pool_default_pgp_num = 8

At least, this is what appears to be used with Pacific.

Is this an intended change of behavior?  I could not find anything related in 
the release notes.
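
A sketch of the workarounds I assume would apply here (not verified on my side): pass the counts explicitly, or set the defaults in the monitor configuration database instead of ceph.conf:

ceph osd pool create asdf 8 8
# or:
ceph config set global osd_pool_default_pg_num 8
ceph config set global osd_pool_default_pgp_num 8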

Best regards,
Geert Kloosterman
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: libceph: mds1 IP+PORT wrong peer at address

2023-03-13 Thread Frank Schilder
Hi Xiubo,

It's a really old kernel version: 3.10.0-957.10.1.el7.x86_64. We plan to upgrade
soonish, but it's a major operation. For now we just need a workaround to get
the client clean again. Do you have information about what triggers this bug?
Maybe we can avoid triggering it.

Thanks and best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14


From: Xiubo Li 
Sent: 13 March 2023 01:44:49
To: Frank Schilder; ceph-users@ceph.io
Subject: Re: [ceph-users] libceph: mds1 IP+PORT wrong peer at address

Hi Frank,

BTW, what kernel version are you using? It's a bug, and I have
never seen it with newer kernels.

You can try to remount the mountpoints and it should work.

Thanks

- Xiubo

On 09/03/2023 17:49, Frank Schilder wrote:
> Hi all,
>
> we seem to have hit a bug in the ceph fs kernel client and I just want to 
> confirm what action to take. We get the error "wrong peer at address" in 
> dmesg and some jobs on that server seem to get stuck in fs access; log 
> extract below. I found these 2 tracker items 
> https://tracker.ceph.com/issues/23883 and 
> https://tracker.ceph.com/issues/41519, which don't seem to have fixes.
>
> My questions:
>
> - Is this harmless or does it indicate invalid/corrupted client cache entries?
> - How to resolve, ignore, umount+mount or reboot?
>
> Here an extract from the dmesg log, the error has survived a couple of MDS 
> restarts already:
>
> [Mon Mar  6 12:56:46 2023] libceph: mds1 192.168.32.87:6801 wrong peer at 
> address
> [Mon Mar  6 13:05:18 2023] libceph: wrong peer, want 
> 192.168.32.87:6801/-223958753, got 192.168.32.87:6801/-1572619386
> [Mon Mar  6 13:05:18 2023] libceph: mds1 192.168.32.87:6801 wrong peer at 
> address
> [Mon Mar  6 13:13:50 2023] libceph: wrong peer, want 
> 192.168.32.87:6801/-223958753, got 192.168.32.87:6801/-1572619386
> [Mon Mar  6 13:13:50 2023] libceph: mds1 192.168.32.87:6801 wrong peer at 
> address
> [Mon Mar  6 13:16:41 2023] libceph: mds1 192.168.32.87:6801 socket closed 
> (con state OPEN)
> [Mon Mar  6 13:16:41 2023] libceph: mds1 192.168.32.87:6801 socket closed 
> (con state OPEN)
> [Mon Mar  6 13:16:45 2023] ceph: mds1 reconnect start
> [Mon Mar  6 13:16:45 2023] ceph: mds1 reconnect start
> [Mon Mar  6 13:16:48 2023] ceph: mds1 reconnect success
> [Mon Mar  6 13:16:48 2023] ceph: mds1 reconnect success
> [Mon Mar  6 13:18:13 2023] ceph: update_snap_trace error -22
> [Mon Mar  6 13:18:17 2023] libceph: mds7 192.168.32.88:6801 socket closed 
> (con state OPEN)
> [Mon Mar  6 13:18:17 2023] libceph: mds7 192.168.32.88:6801 socket closed 
> (con state OPEN)
> [Mon Mar  6 13:18:23 2023] ceph: mds1 recovery completed
> [Mon Mar  6 13:18:23 2023] ceph: mds1 recovery completed
> [Mon Mar  6 13:18:28 2023] ceph: mds7 reconnect start
> [Mon Mar  6 13:18:28 2023] ceph: mds7 reconnect start
> [Mon Mar  6 13:18:28 2023] ceph: mds7 reconnect success
> [Mon Mar  6 13:18:29 2023] ceph: mds7 reconnect success
> [Mon Mar  6 13:18:35 2023] ceph: update_snap_trace error -22
> [Mon Mar  6 13:18:35 2023] ceph: mds7 recovery completed
> [Mon Mar  6 13:18:35 2023] ceph: mds7 recovery completed
> [Mon Mar  6 13:22:22 2023] libceph: wrong peer, want 
> 192.168.32.87:6801/-223958753, got 192.168.32.87:6801/-453143347
> [Mon Mar  6 13:22:22 2023] libceph: mds1 192.168.32.87:6801 wrong peer at 
> address
> [Mon Mar  6 13:30:54 2023] libceph: wrong peer, want 
> 192.168.32.87:6801/-223958753, got 192.168.32.87:6801/-453143347
> [...]
> [Thu Mar  9 09:37:24 2023] slurm.epilog.cl (31457): drop_caches: 3
> [Thu Mar  9 09:38:26 2023] libceph: wrong peer, want 
> 192.168.32.87:6801/-223958753, got 192.168.32.87:6801/-453143347
> [Thu Mar  9 09:38:26 2023] libceph: mds1 192.168.32.87:6801 wrong peer at 
> address
> [Thu Mar  9 09:46:58 2023] libceph: wrong peer, want 
> 192.168.32.87:6801/-223958753, got 192.168.32.87:6801/-453143347
> [Thu Mar  9 09:46:58 2023] libceph: mds1 192.168.32.87:6801 wrong peer at 
> address
> [Thu Mar  9 09:55:30 2023] libceph: wrong peer, want 
> 192.168.32.87:6801/-223958753, got 192.168.32.87:6801/-453143347
> [Thu Mar  9 09:55:30 2023] libceph: mds1 192.168.32.87:6801 wrong peer at 
> address
> [Thu Mar  9 10:04:02 2023] libceph: wrong peer, want 
> 192.168.32.87:6801/-223958753, got 192.168.32.87:6801/-453143347
> [Thu Mar  9 10:04:02 2023] libceph: mds1 192.168.32.87:6801 wrong peer at 
> address
>
> Thanks and best regards,
> =
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
--
Best Regards,

Xiubo Li (李秀波)

Email: xiu...@redhat.com/xiu...@ibm.com
Slack: @Xiubo Li

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Can't install cephadm on HPC

2023-03-13 Thread Robert Sander

On 13.03.23 03:29, zyz wrote:

Hi:
   I encountered a problem when installing cephadm on Huawei Cloud EulerOS. When I
enter the following command, it raises an error. What should I do?



./cephadm add-repo --release quincy



<< ERROR: Distro hce version 2.0 not supported


There are simply no upstream package repositories available for your
distribution and version. Nobody has compiled Ceph packages for Huawei
Cloud EulerOS.


But this does not matter as long as Podman/Docker, LVM, systemd and time 
synchronization via NTP are available.


You can still bootstrap a cephadm orchestrator managed Ceph cluster as 
everything runs in containers.


You will just be missing the Ceph command-line clients from the upstream repos
(the ceph, rados, rbd etc. commands). Your distribution may package these, or
you can use "cephadm shell" to get a shell inside a Ceph container where
all of these CLI tools are available.
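
For example, a minimal bootstrap would look roughly like this (a sketch; the
monitor IP is a placeholder, and it assumes Podman and time synchronization are
already set up):

./cephadm bootstrap --mon-ip <IP-of-this-host>
./cephadm shell -- ceph -s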


Regards
--
Robert Sander
Heinlein Support GmbH
Linux: Akademie - Support - Hosting
http://www.heinlein-support.de

Tel: 030-405051-43
Fax: 030-405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein  -- Sitz: Berlin
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io