Seems like a BUG in cephadm: when deployed, the ceph-exporter doesn't
specify its port, which is why it's not being opened automatically. You can
see that in the cephadm logs (the ports list is empty):
2024-09-09 04:39:48,986 7fc2993d7740 DEBUG Loaded deploy configuration:
{'fsid': '250b9d7c-6e65-11ef-8e0
3, Matthew Vernon wrote:
> > > Hi,
> > >
> > > On 05/09/2024 12:49, Redouane Kachach wrote:
> > >
> > > > The port 8765 is the "service discovery" (an internal server that
> > > > runs in
> > > > the mgr... you can
Hi,
The port 8765 is the "service discovery" (an internal server that runs in
the mgr... you can change the port by changing the
variable service_discovery_port of cephadm). Normally it is opened in the
active mgr and the service is used by prometheus (server) to get the
targets by using the http
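As a sketch of changing that port (the `service_discovery_port` option name is taken from the message above; the exact entity syntax should be verified against your cephadm version):

```shell
# Change the port used by cephadm's service discovery endpoint
ceph config set mgr mgr/cephadm/service_discovery_port 8765

# Fail over the active mgr so the new port takes effect
ceph mgr fail
```

Prometheus then scrapes its targets from this endpoint via http_sd_config.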
Looks good to me. Testing went OK without any issues.
Thanks,
Redo.
On Tue, Mar 5, 2024 at 5:22 PM Travis Nielsen wrote:
> Looks great to me, Redo has tested this thoroughly.
>
> Thanks!
> Travis
>
> On Tue, Mar 5, 2024 at 8:48 AM Yuri Weinstein wrote:
>
>> Details of this release are summariz
>> > On Mon, Nov 13, 2023 at 12:14 PM Yuri Weinstein
>> wrote:
>> >>
>> >> Redouane
>> >>
>> >> What would be a sufficient level of testing (teuthology suite(s))
>> >> assuming this PR is approved to be added?
>> >
Hi Yuri,
I've just backported to reef several fixes that I introduced over the last
few months for the rook orchestrator. Most of them are fixes for dashboard
issues/crashes that only happen on Rook environments. The PR [1] has all
the changes and it was merged into reef this morning. We really
need the
part of the learning experience. So my
> answer to "how do I start over" would be "go figure it out, its an
> important lesson".
>
> Best regards,
> =
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
>
>
Dear ceph community,
As you are aware, cephadm has become the default tool for installing Ceph
on bare-metal systems. Currently, during the bootstrap process of a new
cluster, if the user interrupts the process manually or if there are any
issues causing the bootstrap process to fail, cephadm leav
Sometimes some ceph-volume commands hang when trying to access a device.
Please take a look at the solution/steps provided by Adam in the thread
titled "Issue adding host with cephadm - nothing is deployed" to check
whether cephadm is waiting for some ceph-volume command to complete.
Regards,
Normally it should work; another way to do it is basically to enter
the container using podman commands (or docker).
For this, just run:
> podman ps | grep mds | awk '{print $1}' (to get the container ID)
> podman exec -it <container-id> /bin/sh
That should work if the container is running.
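The two steps above can be combined into a one-liner (a sketch; it assumes exactly one running container matches "mds"):

```shell
# Grab the ID of the running mds container and open a shell inside it
podman exec -it "$(podman ps | grep mds | awk '{print $1}')" /bin/sh
```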
Regards,
https://prometheus.io/docs/prometheus/2.28/configuration/configuration/#http_sd_config
On Tue, Nov 8, 2022 at 4:47 PM Eugen Block wrote:
> I somehow missed the HA part in [1], thanks for pointing that out.
>
>
> Zitat vo
If you are running quincy and using cephadm then you can have more
instances of prometheus (and other monitoring daemons) running in HA mode
by increasing the number of daemons as in [1]:
from a cephadm shell (to run 2 instances of prometheus and alertmanager):
> ceph orch apply prometheus --plac
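A minimal sketch of what such placement commands look like (syntax per the `ceph orch apply` placement spec; the count of 2 is illustrative):

```shell
# Run two prometheus instances (HA) and two alertmanager instances
ceph orch apply prometheus --placement 'count:2'
ceph orch apply alertmanager --placement 'count:2'
```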
Currently the generated template is the same for all the hosts and there's
no way to have a dedicated template for a specific host AFAIK.
On Tue, Oct 25, 2022 at 12:45 PM Lasse Aagren wrote:
> The context provided, when parsing the template:
>
>
> https://github.com/ceph/ceph/blob/v16.2.10/src/p
Glad it helped you to fix the issue. I'll open a tracker to fix the docs.
On Wed, Oct 5, 2022 at 3:52 PM E Taka <0eta...@gmail.com> wrote:
> Thanks, Redouane, that helped! The documentation should of course also be
> updated in this context.
>
> Am Mi., 5. Okt. 2022 um 15:
Hello,
As of this PR https://github.com/ceph/ceph/pull/47098 grafana cert/key are
now stored per-node. So instead of *mgr/cephadm/grafana_crt* they are
stored per node as:
*mgr/cephadm/{hostname}/grafana_crt*
*mgr/cephadm/{hostname}/grafana_key*
In order to see the config entries that have been
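A sketch of inspecting those per-host entries (assuming they live in the mon config-key store; the hostname `host1` is a placeholder):

```shell
# List all grafana cert/key entries, one pair per host
ceph config-key ls | grep grafana

# Dump a specific host's certificate
ceph config-key get mgr/cephadm/host1/grafana_crt
```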
Great, thank you.
Best,
Redo.
On Thu, Jul 21, 2022 at 2:01 PM Robert Reihs wrote:
> Bug Reported:
> https://tracker.ceph.com/issues/56660
> Best
> Robert Reihs
>
> On Tue, Jul 19, 2022 at 11:44 AM Redouane Kachach Elhichou <
> rkach...@redhat.com> wrote:
>
>
--- Original Message ---
> On Tuesday, July 19th, 2022 at 13:47, Redouane Kachach Elhichou <
> rkach...@redhat.com> wrote:
>
>
> > Did you try the *rm* option? Both ceph config and ceph config-key support
> > removing config keys:
> >
> > From:
> >
?
>
> Best,
>
> Luis Domingues
> Proton AG
>
>
> --- Original Message ---
> On Friday, July 15th, 2022 at 17:06, Redouane Kachach Elhichou <
> rkach...@redhat.com> wrote:
>
>
> > This section could be added to any service spec. cephadm will pa
Great, thanks for sharing your solution.
It would be great if you could open a tracker describing the issue so it
can be fixed later in the cephadm code.
Best,
Redo.
On Tue, Jul 19, 2022 at 9:28 AM Robert Reihs wrote:
> Hi,
> I think I found the problem. We are using ipv6 only, and the config ceph
s added to ceph.conf.
>
> Best Regards,
> Ali
> On 15.07.22 15:21, Redouane Kachach Elhichou wrote:
>
> Hello Ali,
>
> You can set configuration by including a config section in our yaml as
> following:
>
> config:
> param_1: val_1
> ...
Hello Ali,
You can set configuration by including a config section in your yaml as
follows:
config:
  param_1: val_1
  ...
  param_N: val_N
This is equivalent to calling the following ceph cmd:
> ceph config set
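As a sketch, the CLI equivalent of such a config section (the section entity and parameter names are placeholders; cephadm applies one `ceph config set` per key):

```shell
# Equivalent of a service spec containing:
#   config:
#     param_1: val_1
#     param_N: val_N
ceph config set <who> param_1 val_1
ceph config set <who> param_N val_N
```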
Best Regards,
Redo.
On Fri, Jul 15, 2022 at 2:45 PM Ali Akil w
From the error message:
2022-06-25 21:51:59,798 7f4748727b80 INFO /usr/bin/ceph-mon: stderr too many
arguments:
[--default-log-to-journald=true,--default-mon-cluster-log-to-journald=true]
it seems that you are not using the cephadm that corresponds to your ceph
version. Please, try to get cephad
To see what cephadm is doing you can check the logs in:
*/var/log/ceph/cephadm.log* (here you can see what the cephadm running on
each host is doing), and you can also check what the cephadm mgr module is
doing by looking at the logs of the mgr container with:
> podman logs -f `podman ps | grep
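Spelled out in full, that might look like the following (a sketch; it assumes the mgr container's listing matches the pattern "mgr" and that only one container matches):

```shell
# Follow the logs of the running mgr container
podman logs -f "$(podman ps | grep mgr | awk '{print $1}')"
```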
Hello Dmitriy,
You have to provide a valid IP during the bootstrap: --mon-ip <ip>
The IP must be a valid IP from some interface on the current node.
Regards,
Redouane.
On Thu, May 26, 2022 at 2:14 AM Dmitriy Trubov
wrote:
> Hi,
>
> I'm trying to install ansible octopus with cephadm.
>
> Here is