[ceph-users] Re: cephadm custom crush location hooks

2024-05-03 Thread Eugen Block
And in the tracker you never mentioned adding a symlink, only adding
the prefix "/rootfs" to the ceph config. I could have tried that
approach first. ;-)



Quoting Eugen Block:

Alright, I updated the configs in our production cluster and
restarted the OSDs (after removing the manual mapping from their
unit.run files), and everything looks good.


@Zac: Would you agree that it makes sense to add this to the docs  
[1] for cephadm clusters? They only cover the legacy world.


Thanks!
Eugen

Quoting Wyll Ingersoll:

Yeah, now that you mention it, I recall figuring that out also at  
some point. I think I did it originally when I was debugging the  
problem without the container.



From: Eugen Block 
Sent: Friday, May 3, 2024 8:37 AM
To: Wyll Ingersoll 
Cc: ceph-users@ceph.io 
Subject: Re: [ceph-users] cephadm custom crush location hooks

Hm, I wonder why the symlink is required; the OSDs map / to /rootfs
anyway (excerpt of the unit.run file):

-v /:/rootfs

So I removed the symlink and just prefixed the crush location hook path
with /rootfs:

ceph config set osd.0 crush_location_hook /rootfs/usr/local/bin/custom_crush_location

After OSD restart the OSD finds its correct location. So I actually
only need to update the location path, nothing else, it seems.
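
As a side note, if setting this per OSD id gets tedious, a config mask
should also work to scope the option to just the affected hosts instead
of the whole osd section (untested sketch; host names and the OSD id are
placeholders):

# set the hook only for OSDs on the two relevant hosts
ceph config set osd/host:node01 crush_location_hook /rootfs/usr/local/bin/custom_crush_location
ceph config set osd/host:node02 crush_location_hook /rootfs/usr/local/bin/custom_crush_location

# restart the affected OSDs and verify their position in the tree
ceph orch daemon restart osd.0
ceph osd tree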

Quoting Eugen Block:


I found your (open) tracker issue:

https://tracker.ceph.com/issues/53562

Your workaround works great; I tried it successfully in a test cluster.
I will apply it to our production cluster as well.

Thanks!
Eugen

Quoting Eugen Block:


Thank you very much for the quick response! I will take a look
first thing tomorrow and try that in a test cluster. But I agree,
it would be helpful to have a way with cephadm to apply these hooks
without these workarounds. I'll check if there's a tracker issue
for that, and create one if necessary.

Thanks!
Eugen

Quoting Wyll Ingersoll:


I've found the crush location hook script code to be problematic
in the containerized/cephadm world.

Our workaround is to place the script in a common place on each
OSD node, such as /etc/crush/crushhook.sh, and then make a link
from /rootfs -> /, and set the configuration value so that the
path to the hook script starts with /rootfs.  The container that
the OSDs run in has access to /rootfs and this hack allows them to
all view the crush script without having to manually modify unit
files.

For example:

1. Put the crushhook script on the host OS in /etc/crush/crushhook.sh
2. Make a link on the host OS: $ cd /; sudo ln -s / /rootfs
3. ceph config set osd crush_location_hook /rootfs/etc/crush/crushhook.sh


The containers see "/rootfs" and will then be able to access your
script.  Be aware though that if your script requires any sort of
elevated access, it may fail because the hook runs as ceph:ceph in
a minimal container so not all functions are available.  I had to
add lots of debug output and logging in mine (it's rather
complicated) to figure out what was going on when it was running.
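
If it helps, a minimal sketch of such a hook could look like the
following (location values and log path are made up and need to be
adjusted; the OSD calls the hook with --cluster/--id/--type and only
expects a single line with the CRUSH location on stdout):

#!/bin/bash
# Minimal crush location hook sketch -- hypothetical values, adjust to your tree.
# The OSD invokes it roughly as: crushhook.sh --cluster ceph --id <osd-id> --type osd
LOG=/var/log/ceph/crush_hook.log   # must be writable by ceph:ceph inside the container
echo "$(date '+%F %T') called with: $*" >> "$LOG" 2>/dev/null || true
# Print exactly one line with the desired location, using the same
# syntax as the crush_location option:
echo "host=$(hostname -s) rack=rack1 root=default"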

I would love to see the "crush_location_hook" script be something
that can be stored in the config entirely instead of as a link,
similar to how the ssl certificates for RGW or the dashboard are
stored (ceph config-key set ...).   The current situation is not
ideal.
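
For comparison, that existing pattern looks roughly like this today
(dashboard certificate as the example); the commented command is purely
hypothetical and only illustrates what a config-key based hook could
look like, it is not a supported key:

# existing pattern: store a blob in the cluster once instead of on every host
ceph dashboard set-ssl-certificate -i dashboard.crt
# hypothetical equivalent for the hook (NOT implemented, illustration only):
# ceph config-key set osd/crush_location_hook -i /etc/crush/crushhook.sh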





From: Eugen Block 
Sent: Thursday, May 2, 2024 10:23 AM
To: ceph-users@ceph.io 
Subject: [ceph-users] cephadm custom crush location hooks

Hi,

we've been using custom crush location hooks for some OSDs [1] for
years. Since we moved to cephadm, we always have to manually edit the
unit.run file of those OSDs because the path to the script is not
mapped into the containers. I don't want to define custom location
hooks for all OSDs globally in the OSD spec, even if those are limited
to two hosts only in our case. But I'm not aware of a method to target
only specific OSDs to have some files mapped into the container [2].
Is my assumption correct that we'll have to live with the manual
intervention until we reorganize our osd tree? Or did I miss something?

Thanks!
Eugen

[1]
https://docs.ceph.com/en/latest/rados/operations/crush-map/#custom-location-hooks
[2]
https://docs.ceph.com/en/latest/cephadm/services/#mounting-files-with-extra-container-arguments
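
For reference, the mechanism from [2] would look roughly like the sketch
below (service id, hosts and device filter are placeholders); it attaches
the bind mount to every OSD created from that spec rather than to
individual OSDs, and depending on the cephadm release it may not be
honored for OSD daemons at all:

cat > /tmp/osd-crush-hook.yaml <<'EOF'
service_type: osd
service_id: crush-hook-hosts
placement:
  hosts:
    - node01
    - node02
spec:
  data_devices:
    all: true
extra_container_args:
  - "-v"
  - "/etc/crush:/etc/crush:ro"
EOF
ceph orch apply -i /tmp/osd-crush-hook.yaml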



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: cephadm custom crush location hooks

2024-05-03 Thread Eugen Block
Alright, I updated the configs in our production cluster and
restarted the OSDs (after removing the manual mapping from their
unit.run files), and everything looks good.


@Zac: Would you agree that it makes sense to add this to the docs [1]  
for cephadm clusters? They only cover the legacy world.


Thanks!
Eugen

Quoting Wyll Ingersoll:

Yeah, now that you mention it, I recall figuring that out also at  
some point. I think I did it originally when I was debugging the  
problem without the container.



From: Eugen Block 
Sent: Friday, May 3, 2024 8:37 AM
To: Wyll Ingersoll 
Cc: ceph-users@ceph.io 
Subject: Re: [ceph-users] cephadm custom crush location hooks

Hm, I wonder why the symlink is required; the OSDs map / to /rootfs
anyway (excerpt of the unit.run file):

-v /:/rootfs

So I removed the symlink and just prefixed the crush location hook path
with /rootfs:

ceph config set osd.0 crush_location_hook /rootfs/usr/local/bin/custom_crush_location

After OSD restart the OSD finds its correct location. So I actually
only need to update the location path, nothing else, it seems.

Quoting Eugen Block:


I found your (open) tracker issue:

https://tracker.ceph.com/issues/53562

Your workaround works great; I tried it successfully in a test cluster.
I will apply it to our production cluster as well.

Thanks!
Eugen

Quoting Eugen Block:


Thank you very much for the quick response! I will take a look
first thing tomorrow and try that in a test cluster. But I agree,
it would be helpful to have a way with cephadm to apply these hooks
without these workarounds. I'll check if there's a tracker issue
for that, and create one if necessary.

Thanks!
Eugen

Quoting Wyll Ingersoll:


I've found the crush location hook script code to be problematic
in the containerized/cephadm world.

Our workaround is to place the script in a common place on each
OSD node, such as /etc/crush/crushhook.sh, and then make a link
from /rootfs -> /, and set the configuration value so that the
path to the hook script starts with /rootfs.  The container that
the OSDs run in has access to /rootfs and this hack allows them to
all view the crush script without having to manually modify unit
files.

For example:

1. Put the crushhook script on the host OS in /etc/crush/crushhook.sh
2. Make a link on the host OS: $ cd /; sudo ln -s / /rootfs
3. ceph config set osd crush_location_hook /rootfs/etc/crush/crushhook.sh


The containers see "/rootfs" and will then be able to access your
script.  Be aware though that if your script requires any sort of
elevated access, it may fail because the hook runs as ceph:ceph in
a minimal container so not all functions are available.  I had to
add lots of debug output and logging in mine (it's rather
complicated) to figure out what was going on when it was running.

I would love to see the "crush_location_hook" script be something
that can be stored in the config entirely instead of as a link,
similar to how the ssl certificates for RGW or the dashboard are
stored (ceph config-key set ...).   The current situation is not
ideal.





From: Eugen Block 
Sent: Thursday, May 2, 2024 10:23 AM
To: ceph-users@ceph.io 
Subject: [ceph-users] cephadm custom crush location hooks

Hi,

we've been using custom crush location hooks for some OSDs [1] for
years. Since we moved to cephadm, we always have to manually edit the
unit.run file of those OSDs because the path to the script is not
mapped into the containers. I don't want to define custom location
hooks for all OSDs globally in the OSD spec, even if those are limited
to two hosts only in our case. But I'm not aware of a method to target
only specific OSDs to have some files mapped into the container [2].
Is my assumption correct that we'll have to live with the manual
intervention until we reorganize our osd tree? Or did I miss something?

Thanks!
Eugen

[1]
https://docs.ceph.com/en/latest/rados/operations/crush-map/#custom-location-hooks
[2]
https://docs.ceph.com/en/latest/cephadm/services/#mounting-files-with-extra-container-arguments



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: cephadm custom crush location hooks

2024-05-03 Thread Wyll Ingersoll


Yeah, now that you mention it, I recall figuring that out also at some point. I 
think I did it originally when I was debugging the problem without the 
container.


From: Eugen Block 
Sent: Friday, May 3, 2024 8:37 AM
To: Wyll Ingersoll 
Cc: ceph-users@ceph.io 
Subject: Re: [ceph-users] cephadm custom crush location hooks

Hm, I wonder why the symlink is required; the OSDs map / to /rootfs
anyway (excerpt of the unit.run file):

-v /:/rootfs

So I removed the symlink and just prefixed the crush location hook path
with /rootfs:

ceph config set osd.0 crush_location_hook /rootfs/usr/local/bin/custom_crush_location

After OSD restart the OSD finds its correct location. So I actually
only need to update the location path, nothing else, it seems.

Quoting Eugen Block:

> I found your (open) tracker issue:
>
> https://tracker.ceph.com/issues/53562
>
> Your workaround works great; I tried it successfully in a test cluster.
> I will apply it to our production cluster as well.
>
> Thanks!
> Eugen
>
> Quoting Eugen Block:
>
>> Thank you very much for the quick response! I will take a look
>> first thing tomorrow and try that in a test cluster. But I agree,
>> it would be helpful to have a way with cephadm to apply these hooks
>> without these workarounds. I'll check if there's a tracker issue
>> for that, and create one if necessary.
>>
>> Thanks!
>> Eugen
>>
>> Quoting Wyll Ingersoll:
>>
>>> I've found the crush location hook script code to be problematic
>>> in the containerized/cephadm world.
>>>
>>> Our workaround is to place the script in a common place on each
>>> OSD node, such as /etc/crush/crushhook.sh, and then make a link
>>> from /rootfs -> /, and set the configuration value so that the
>>> path to the hook script starts with /rootfs.  The container that
>>> the OSDs run in has access to /rootfs and this hack allows them to
>>> all view the crush script without having to manually modify unit
>>> files.
>>>
>>> For example:
>>>
>>> 1. Put the crushhook script on the host OS in /etc/crush/crushhook.sh
>>> 2. Make a link on the host OS: $ cd /; sudo ln -s / /rootfs
>>> 3. ceph config set osd crush_location_hook /rootfs/etc/crush/crushhook.sh
>>>
>>>
>>> The containers see "/rootfs" and will then be able to access your
>>> script.  Be aware though that if your script requires any sort of
>>> elevated access, it may fail because the hook runs as ceph:ceph in
>>> a minimal container so not all functions are available.  I had to
>>> add lots of debug output and logging in mine (it's rather
>>> complicated) to figure out what was going on when it was running.
>>>
>>> I would love to see the "crush_location_hook" script be something
>>> that can be stored in the config entirely instead of as a link,
>>> similar to how the ssl certificates for RGW or the dashboard are
>>> stored (ceph config-key set ...).   The current situation is not
>>> ideal.
>>>
>>>
>>>
>>>
>>> 
>>> From: Eugen Block 
>>> Sent: Thursday, May 2, 2024 10:23 AM
>>> To: ceph-users@ceph.io 
>>> Subject: [ceph-users] cephadm custom crush location hooks
>>>
>>> Hi,
>>>
>>> we've been using custom crush location hooks for some OSDs [1] for
>>> years. Since we moved to cephadm, we always have to manually edit the
>>> unit.run file of those OSDs because the path to the script is not
>>> mapped into the containers. I don't want to define custom location
>>> hooks for all OSDs globally in the OSD spec, even if those are limited
>>> to two hosts only in our case. But I'm not aware of a method to target
>>> only specific OSDs to have some files mapped into the container [2].
>>> Is my assumption correct that we'll have to live with the manual
>>> intervention until we reorganize our osd tree? Or did I miss something?
>>>
>>> Thanks!
>>> Eugen
>>>
>>> [1]
>>> https://docs.ceph.com/en/latest/rados/operations/crush-map/#custom-location-hooks
>>> [2]
>>> https://docs.ceph.com/en/latest/cephadm/services/#mounting-files-with-extra-container-arguments



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: cephadm custom crush location hooks

2024-05-03 Thread Eugen Block
Hm, I wonder why the symlink is required; the OSDs map / to /rootfs
anyway (excerpt of the unit.run file):


-v /:/rootfs

So I removed the symlink and just prefixed the crush location hook path
with /rootfs:

ceph config set osd.0 crush_location_hook /rootfs/usr/local/bin/custom_crush_location


After OSD restart the OSD finds its correct location. So I actually  
only need to update the location path, nothing else, it seems.


Quoting Eugen Block:


I found your (open) tracker issue:

https://tracker.ceph.com/issues/53562

Your workaround works great; I tried it successfully in a test cluster.
I will apply it to our production cluster as well.


Thanks!
Eugen

Quoting Eugen Block:

Thank you very much for the quick response! I will take a look  
first thing tomorrow and try that in a test cluster. But I agree,  
it would be helpful to have a way with cephadm to apply these hooks  
without these workarounds. I'll check if there's a tracker issue  
for that, and create one if necessary.


Thanks!
Eugen

Quoting Wyll Ingersoll:

I've found the crush location hook script code to be problematic  
in the containerized/cephadm world.


Our workaround is to place the script in a common place on each  
OSD node, such as /etc/crush/crushhook.sh, and then make a link  
from /rootfs -> /, and set the configuration value so that the  
path to the hook script starts with /rootfs.  The container that  
the OSDs run in has access to /rootfs and this hack allows them to  
all view the crush script without having to manually modify unit  
files.


For example:

1. Put the crushhook script on the host OS in /etc/crush/crushhook.sh
2. Make a link on the host OS: $ cd /; sudo ln -s / /rootfs
3. ceph config set osd crush_location_hook /rootfs/etc/crush/crushhook.sh


The containers see "/rootfs" and will then be able to access your  
script.  Be aware though that if your script requires any sort of  
elevated access, it may fail because the hook runs as ceph:ceph in  
a minimal container so not all functions are available.  I had to  
add lots of debug output and logging in mine (it's rather  
complicated) to figure out what was going on when it was running.


I would love to see the "crush_location_hook" script be something  
that can be stored in the config entirely instead of as a link,  
similar to how the ssl certificates for RGW or the dashboard are  
stored (ceph config-key set ...).   The current situation is not  
ideal.






From: Eugen Block 
Sent: Thursday, May 2, 2024 10:23 AM
To: ceph-users@ceph.io 
Subject: [ceph-users] cephadm custom crush location hooks

Hi,

we've been using custom crush location hooks for some OSDs [1] for
years. Since we moved to cephadm, we always have to manually edit the
unit.run file of those OSDs because the path to the script is not
mapped into the containers. I don't want to define custom location
hooks for all OSDs globally in the OSD spec, even if those are limited
to two hosts only in our case. But I'm not aware of a method to target
only specific OSDs to have some files mapped into the container [2].
Is my assumption correct that we'll have to live with the manual
intervention until we reorganize our osd tree? Or did I miss something?

Thanks!
Eugen

[1]
https://docs.ceph.com/en/latest/rados/operations/crush-map/#custom-location-hooks
[2]
https://docs.ceph.com/en/latest/cephadm/services/#mounting-files-with-extra-container-arguments



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: cephadm custom crush location hooks

2024-05-03 Thread Wyll Ingersoll
Thank you!

From: Eugen Block 
Sent: Friday, May 3, 2024 6:46 AM
To: Wyll Ingersoll 
Cc: ceph-users@ceph.io 
Subject: Re: [ceph-users] cephadm custom crush location hooks

I found your (open) tracker issue:

https://tracker.ceph.com/issues/53562

Your workaround works great; I tried it successfully in a test cluster.
I will apply it to our production cluster as well.

Thanks!
Eugen

Quoting Eugen Block:

> Thank you very much for the quick response! I will take a look first
> thing tomorrow and try that in a test cluster. But I agree, it would
> be helpful to have a way with cephadm to apply these hooks without
> these workarounds. I'll check if there's a tracker issue for that,
> and create one if necessary.
>
> Thanks!
> Eugen
>
> Quoting Wyll Ingersoll:
>
>> I've found the crush location hook script code to be problematic in
>> the containerized/cephadm world.
>>
>> Our workaround is to place the script in a common place on each OSD
>> node, such as /etc/crush/crushhook.sh, and then make a link from
>> /rootfs -> /, and set the configuration value so that the path to
>> the hook script starts with /rootfs.  The container that the OSDs
>> run in has access to /rootfs and this hack allows them to all view
>> the crush script without having to manually modify unit files.
>>
>> For example:
>>
>> 1. Put the crushhook script on the host OS in /etc/crush/crushhook.sh
>> 2. Make a link on the host OS: $ cd /; sudo ln -s / /rootfs
>> 3. ceph config set osd crush_location_hook /rootfs/etc/crush/crushhook.sh
>>
>>
>> The containers see "/rootfs" and will then be able to access your
>> script.  Be aware though that if your script requires any sort of
>> elevated access, it may fail because the hook runs as ceph:ceph in
>> a minimal container so not all functions are available.  I had to
>> add lots of debug output and logging in mine (it's rather
>> complicated) to figure out what was going on when it was running.
>>
>> I would love to see the "crush_location_hook" script be something
>> that can be stored in the config entirely instead of as a link,
>> similar to how the ssl certificates for RGW or the dashboard are
>> stored (ceph config-key set ...).   The current situation is not
>> ideal.
>>
>>
>>
>>
>> 
>> From: Eugen Block 
>> Sent: Thursday, May 2, 2024 10:23 AM
>> To: ceph-users@ceph.io 
>> Subject: [ceph-users] cephadm custom crush location hooks
>>
>> Hi,
>>
>> we've been using custom crush location hooks for some OSDs [1] for
>> years. Since we moved to cephadm, we always have to manually edit the
>> unit.run file of those OSDs because the path to the script is not
>> mapped into the containers. I don't want to define custom location
>> hooks for all OSDs globally in the OSD spec, even if those are limited
>> to two hosts only in our case. But I'm not aware of a method to target
>> only specific OSDs to have some files mapped into the container [2].
>> Is my assumption correct that we'll have to live with the manual
>> intervention until we reorganize our osd tree? Or did I miss something?
>>
>> Thanks!
>> Eugen
>>
>> [1]
>> https://docs.ceph.com/en/latest/rados/operations/crush-map/#custom-location-hooks
>> [2]
>> https://docs.ceph.com/en/latest/cephadm/services/#mounting-files-with-extra-container-arguments



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: cephadm custom crush location hooks

2024-05-03 Thread Eugen Block

I found your (open) tracker issue:

https://tracker.ceph.com/issues/53562

Your workaround works great; I tried it successfully in a test cluster.
I will apply it to our production cluster as well.


Thanks!
Eugen

Quoting Eugen Block:

Thank you very much for the quick response! I will take a look first  
thing tomorrow and try that in a test cluster. But I agree, it would  
be helpful to have a way with cephadm to apply these hooks without  
these workarounds. I'll check if there's a tracker issue for that,  
and create one if necessary.


Thanks!
Eugen

Quoting Wyll Ingersoll:

I've found the crush location hook script code to be problematic in  
the containerized/cephadm world.


Our workaround is to place the script in a common place on each OSD  
node, such as /etc/crush/crushhook.sh, and then make a link from  
/rootfs -> /, and set the configuration value so that the path to  
the hook script starts with /rootfs.  The container that the OSDs  
run in has access to /rootfs and this hack allows them to all view  
the crush script without having to manually modify unit files.


For example:

1. Put the crushhook script on the host OS in /etc/crush/crushhook.sh
2. Make a link on the host OS: $ cd /; sudo ln -s / /rootfs
3. ceph config set osd crush_location_hook /rootfs/etc/crush/crushhook.sh


The containers see "/rootfs" and will then be able to access your  
script.  Be aware though that if your script requires any sort of  
elevated access, it may fail because the hook runs as ceph:ceph in  
a minimal container so not all functions are available.  I had to  
add lots of debug output and logging in mine (it's rather  
complicated) to figure out what was going on when it was running.


I would love to see the "crush_location_hook" script be something  
that can be stored in the config entirely instead of as a link,  
similar to how the ssl certificates for RGW or the dashboard are  
stored (ceph config-key set ...).   The current situation is not  
ideal.






From: Eugen Block 
Sent: Thursday, May 2, 2024 10:23 AM
To: ceph-users@ceph.io 
Subject: [ceph-users] cephadm custom crush location hooks

Hi,

we've been using custom crush location hooks for some OSDs [1] for
years. Since we moved to cephadm, we always have to manually edit the
unit.run file of those OSDs because the path to the script is not
mapped into the containers. I don't want to define custom location
hooks for all OSDs globally in the OSD spec, even if those are limited
to two hosts only in our case. But I'm not aware of a method to target
only specific OSDs to have some files mapped into the container [2].
Is my assumption correct that we'll have to live with the manual
intervention until we reorganize our osd tree? Or did I miss something?

Thanks!
Eugen

[1]
https://docs.ceph.com/en/latest/rados/operations/crush-map/#custom-location-hooks
[2]
https://docs.ceph.com/en/latest/cephadm/services/#mounting-files-with-extra-container-arguments



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: cephadm custom crush location hooks

2024-05-02 Thread Eugen Block
Thank you very much for the quick response! I will take a look first  
thing tomorrow and try that in a test cluster. But I agree, it would  
be helpful to have a way with cephadm to apply these hooks without  
these workarounds. I'll check if there's a tracker issue for that, and  
create one if necessary.


Thanks!
Eugen

Quoting Wyll Ingersoll:

I've found the crush location hook script code to be problematic in  
the containerized/cephadm world.


Our workaround is to place the script in a common place on each OSD  
node, such as /etc/crush/crushhook.sh, and then make a link from  
/rootfs -> /, and set the configuration value so that the path to  
the hook script starts with /rootfs.  The container that the OSDs  
run in has access to /rootfs and this hack allows them to all view  
the crush script without having to manually modify unit files.


For example:

1. Put the crushhook script on the host OS in /etc/crush/crushhook.sh
2. Make a link on the host OS: $ cd /; sudo ln -s / /rootfs
3. ceph config set osd crush_location_hook /rootfs/etc/crush/crushhook.sh


The containers see "/rootfs" and will then be able to access your  
script.  Be aware though that if your script requires any sort of  
elevated access, it may fail because the hook runs as ceph:ceph in a  
minimal container so not all functions are available.  I had to add  
lots of debug output and logging in mine (it's rather complicated)  
to figure out what was going on when it was running.


I would love to see the "crush_location_hook" script be something  
that can be stored in the config entirely instead of as a link,  
similar to how the ssl certificates for RGW or the dashboard are  
stored (ceph config-key set ...).   The current situation is not  
ideal.






From: Eugen Block 
Sent: Thursday, May 2, 2024 10:23 AM
To: ceph-users@ceph.io 
Subject: [ceph-users] cephadm custom crush location hooks

Hi,

we've been using custom crush location hooks for some OSDs [1] for
years. Since we moved to cephadm, we always have to manually edit the
unit.run file of those OSDs because the path to the script is not
mapped into the containers. I don't want to define custom location
hooks for all OSDs globally in the OSD spec, even if those are limited
to two hosts only in our case. But I'm not aware of a method to target
only specific OSDs to have some files mapped into the container [2].
Is my assumption correct that we'll have to live with the manual
intervention until we reorganize our osd tree? Or did I miss something?

Thanks!
Eugen

[1]
https://docs.ceph.com/en/latest/rados/operations/crush-map/#custom-location-hooks
[2]
https://docs.ceph.com/en/latest/cephadm/services/#mounting-files-with-extra-container-arguments



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: cephadm custom crush location hooks

2024-05-02 Thread Wyll Ingersoll



I've found the crush location hook script code to be problematic in the 
containerized/cephadm world.

Our workaround is to place the script in a common place on each OSD node, such 
as /etc/crush/crushhook.sh, and then make a link from /rootfs -> /, and set the 
configuration value so that the path to the hook script starts with /rootfs.  
The container that the OSDs run in has access to /rootfs and this hack allows 
them to all view the crush script without having to manually modify unit files.

For example:

1. Put the crushhook script on the host OS in /etc/crush/crushhook.sh
2. Make a link on the host OS: $ cd /; sudo ln -s / /rootfs
3. ceph config set osd crush_location_hook /rootfs/etc/crush/crushhook.sh


The containers see "/rootfs" and will then be able to access your script.  Be 
aware though that if your script requires any sort of elevated access, it may 
fail because the hook runs as ceph:ceph in a minimal container so not all 
functions are available.  I had to add lots of debug output and logging in mine 
(it's rather complicated) to figure out what was going on when it was running.

I would love to see the "crush_location_hook" script be something that can be 
stored in the config entirely instead of as a link, similar to how the ssl 
certificates for RGW or the dashboard are stored (ceph config-key set ...).   
The current situation is not ideal.





From: Eugen Block 
Sent: Thursday, May 2, 2024 10:23 AM
To: ceph-users@ceph.io 
Subject: [ceph-users] cephadm custom crush location hooks

Hi,

we've been using custom crush location hooks for some OSDs [1] for
years. Since we moved to cephadm, we always have to manually edit the
unit.run file of those OSDs because the path to the script is not
mapped into the containers. I don't want to define custom location
hooks for all OSDs globally in the OSD spec, even if those are limited
to two hosts only in our case. But I'm not aware of a method to target
only specific OSDs to have some files mapped into the container [2].
Is my assumption correct that we'll have to live with the manual
intervention until we reorganize our osd tree? Or did I miss something?

Thanks!
Eugen

[1]
https://docs.ceph.com/en/latest/rados/operations/crush-map/#custom-location-hooks
[2]
https://docs.ceph.com/en/latest/cephadm/services/#mounting-files-with-extra-container-arguments
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io