Normally it should work. Another way is to enter the container directly
using podman (or docker) commands.
For this, just run:
> podman ps | grep mds | awk '{print $1}' (to get the container ID)
> podman exec -it <container-id> /bin/sh
That should work if the container is running.
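The two steps above can be sketched together; the `podman ps` output below is simulated (a made-up container ID and image) so the ID-extraction part can be shown standalone:

```shell
# Simulated `podman ps` output (made up for illustration) to show how
# the pipeline extracts the container ID from the first column.
ps_output='3f2a1b2c3d4e  quay.io/ceph/ceph:v17  "/usr/bin/ceph-mds"  ceph-mds-a'
cid=$(printf '%s\n' "$ps_output" | grep mds | awk '{print $1}')
echo "$cid"
# With a live container you would then run: podman exec -it "$cid" /bin/sh
```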
<https://prometheus.io/docs/prometheus/2.28/configuration/configuration/#http_sd_config>
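For reference, a minimal Prometheus scrape job using `http_sd_configs` might look like the following sketch (the URL is a hypothetical service-discovery endpoint, not one from this thread):

```yaml
scrape_configs:
  - job_name: ceph
    http_sd_configs:
      - url: http://sd.example.com/targets   # hypothetical SD endpoint
        refresh_interval: 1m
```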
On Tue, Nov 8, 2022 at 4:47 PM Eugen Block wrote:
> I somehow missed the HA part in [1], thanks for pointing that out.
If you are running quincy and using cephadm then you can have more
instances of prometheus (and other monitoring daemons) running in HA mode
by increasing the number of daemons as in [1]:
from a cephadm shell (to run 2 instances of prometheus and alertmanager):
> ceph orch apply prometheus --placement="count:2"
> ceph orch apply alertmanager --placement="count:2"
Currently the generated template is the same for all the hosts and there's
no way to have a dedicated template for a specific host AFAIK.
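The same placement can be expressed as a service spec and applied with `ceph orch apply -i <file>`; a minimal sketch (the count is just an example):

```yaml
service_type: prometheus
placement:
  count: 2   # run 2 prometheus instances for HA
```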
On Tue, Oct 25, 2022 at 12:45 PM Lasse Aagren wrote:
> The context provided, when parsing the template:
>
>
>
Hello,
As of this PR https://github.com/ceph/ceph/pull/47098 grafana cert/key are
now stored per-node. So instead of *mgr/cephadm/grafana_crt* they are
stored per-node as:
*mgr/cephadm/{hostname}/grafana_crt*
*mgr/cephadm/{hostname}/grafana_key*
In order to see the config entries that have
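A small sketch of how the per-host key path is built ("node1" is a hypothetical hostname); on a live cluster you could then fetch the entry with `ceph config-key get <key>`:

```shell
# Hypothetical hostname; cephadm stores the grafana cert under a per-host key.
host="node1"
key="mgr/cephadm/${host}/grafana_crt"
echo "${key}"
```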
Great, thank you.
Best,
Redo.
On Thu, Jul 21, 2022 at 2:01 PM Robert Reihs wrote:
> Bug Reported:
> https://tracker.ceph.com/issues/56660
> Best
> Robert Reihs
>
> On Tue, Jul 19, 2022 at 11:44 AM Redouane Kachach Elhichou <
> rkach...@redhat.com> wrote:
>
>
> On Tuesday, July 19th, 2022 at 13:47, Redouane Kachach Elhichou <
> rkach...@redhat.com> wrote:
>
>
> > Did you try the *rm* option? Both ceph config and ceph config-key support
> > removing config keys:
> >
> > From:
> >
> https://docs.ceph.com/en/qu
?
>
> Best,
>
> Luis Domingues
> Proton AG
>
>
> --- Original Message ---
> On Friday, July 15th, 2022 at 17:06, Redouane Kachach Elhichou <
> rkach...@redhat.com> wrote:
>
>
> > This section could be added to any service spec. cephadm will
Great, thanks for sharing your solution.
It would be great if you can open a tracker describing the issue so it
could be fixed later in cephadm code.
Best,
Redo.
On Tue, Jul 19, 2022 at 9:28 AM Robert Reihs wrote:
> Hi,
> I think I found the problem. We are using ipv6 only, and the config
>
> Best Regards,
> Ali
> On 15.07.22 15:21, Redouane Kachach Elhichou wrote:
Hello Ali,
You can set configuration by including a config section in your yaml as
follows:
config:
  param_1: val_1
  ...
  param_N: val_N
This is equivalent to calling the following ceph command for each parameter:
> ceph config set <service> <param> <value>
Best Regards,
Redo.
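As a concrete illustration of the spec section described above (the service and parameter below are hypothetical examples, not taken from this thread):

```yaml
service_type: mon
service_name: mon
placement:
  count: 3
config:
  mon_cluster_log_level: info   # example parameter, not from the thread
```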
On Fri, Jul 15, 2022 at 2:45 PM Ali Akil
From the error message:
2022-06-25 21:51:59,798 7f4748727b80 INFO /usr/bin/ceph-mon: stderr too many
arguments:
[--default-log-to-journald=true,--default-mon-cluster-log-to-journald=true]
it seems that you are not using the cephadm that corresponds to your ceph
version. Please try to get
To see what cephadm is doing you can check both the logs on:
*/var/log/ceph/cephadm.log* (here you can see what the cephadm running on
each host is doing) and you can also check what the cephadm (mgr module) is
doing by checking the logs of the mgr container by:
> podman logs -f `podman ps | grep
Hello Dmitriy,
You have to provide a valid IP during the bootstrap: *--mon-ip <ip>*
The <ip> must be a valid IP from some interface on the current node.
Regards,
Redouane.
On Thu, May 26, 2022 at 2:14 AM Dmitriy Trubov
wrote:
> Hi,
>
> I'm trying to install ansible octopus with cephadm.
>
> Here is