‐‐‐ Original Message ‐‐‐
On Monday, October 11th, 2021 at 10:14 AM, Stefan Kooman wrote:
> If you want to go the virtualization route ... you might as well go for
>
> the Ampere Altra Max with 128 cores :-).
I was trying to get an offer for this CPU in Europe but they say it is not
must have missed the response to your thread, I suppose:
>
> https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/YA5KLI5MFJRKVQBKUBG7PJG4RFYLBZFA/
>
> Quoting mabi m...@protonmail.ch:
>
> > Hello,
> >
> > A few days later the ceph status progress bar is s
September 7th, 2021 at 2:30 PM, mabi wrote:
> Hello
>
> I have a test Ceph Pacific 16.2.5 cluster deployed with cephadm on 7 nodes running
> Ubuntu 20.04 LTS on bare metal. I just upgraded each node's kernel, performed
> a rolling reboot, and now the ceph -s output is stuck somehow and the m
It looks like the orchestrator is stuck and does not continue its job. Any
idea how I can get it unstuck?
Best regards,
Mabi
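A first-aid sketch for a stuck orchestrator (the mgr instance name is hypothetical):

ceph orch status              # check whether the orchestrator module is paused
ceph mgr stat                 # find the active mgr
ceph mgr fail ceph1a.xyzabc   # fail it over; a standby takes over and often unsticks progress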
rations/crush-map/#device-classes
> ceph osd crush set-device-class [...]
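> A minimal sketch of that flow (osd.1 is a hypothetical ID):
>
> ceph osd crush rm-device-class osd.1       # clear the auto-detected class first
> ceph osd crush set-device-class ssd osd.1  # then pin the correct class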
>
> Étienne
>
>> On 16 Aug 2021, at 12:05, mabi wrote:
>>
>> Hello,
>>
>> I noticed that cephadm detects my newly added SSD disk as type HDD as you
>> can see below:
>>
the disk type to SSD instead of HDD?
Regards,
Mabi
Indeed, if you upgrade Docker, for example via APT unattended-upgrades, the
Docker daemon gets restarted, which means all your containers are restarted too :(
That's just how Docker works.
You might want to switch to podman instead of Docker in order to avoid that. I
use podman precisely for this reason.
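If you have to stay on Docker, one option is to blacklist its packages in
/etc/apt/apt.conf.d/50unattended-upgrades; a sketch (package names depend on how
Docker was installed):

Unattended-Upgrade::Package-Blacklist {
    "docker-ce";
    "docker-ce-cli";
    "containerd.io";
};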
I also had the same issue, but just with Prometheus, during the bootstrapping
of the first node of my Pacific 16.2.5 cluster. What I did was simply reboot, and that was it.
‐‐‐ Original Message ‐‐‐
On Sunday, July 11th, 2021 at 9:58 PM, Robert W. Eckert
wrote:
> I had the same issue for
re not experienced with fiddling
> with the monmap.
>
> Am 23.07.21 um 08:55 schrieb mabi:
>
> > Thank you Eugen for your answer. I missed the part about the monmap thingy
> > and my previous thread somehow drifted off-topic.
> >
> > Regarding the monmap is there any do
ur other thread, this won't be enough to
>
> migrate MONs to a different IP address, you need to create a new monmap.
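>
> A rough sketch of that procedure (mon name and IP are hypothetical; inject only
> into stopped mons):
>
> ceph mon getmap -o /tmp/monmap                          # export the current monmap
> monmaptool --rm ceph1a /tmp/monmap                      # drop the mon at the old address
> monmaptool --add ceph1a 198.51.100.10:6789 /tmp/monmap  # re-add it at the new IP
> ceph-mon -i ceph1a --inject-monmap /tmp/monmap          # inject into the stopped mon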
>
> Regards,
>
> Eugen
>
> Quoting mabi m...@protonmail.ch:
>
> > Hello,
> >
> > I need to change the IP addresses and domain name of m
that I have installed my cluster with cephadm so I guess that's the
problem.
Best,
Mabi
‐‐‐ Original Message ‐‐‐
On Wednesday, July 21st, 2021 at 9:53 AM, Burkhard Linke
wrote:
> You need to ensure that TCP traffic is routable between the networks
>
> for the migration. OSD-only hosts are trivial: an OSD updates its IP
>
> information in the OSD map on boot. This should
side? I
did not find any relevant documentation for such a procedure.
This cluster is installed with cephadm and runs on Ubuntu 20.04.
Best regards,
Mabi
I have now opened an issue in the tracker, as this must be a bug in cephadm:
https://tracker.ceph.com/issues/51629
Hopefully someone has time to look into that.
Thank you in advance.
‐‐‐ Original Message ‐‐‐
On Friday, July 9th, 2021 at 8:11 AM, mabi wrote:
> Hello,
>
> I rebooted
Offline
ceph1g ceph1g mds Offline
ceph1h ceph1h mds Offline
Does anyone have a clue how I can fix that? cephadm seems to be broken...
Thank you for your help.
Regards,
Mabi
?
‐‐‐ Original Message ‐‐‐
On Tuesday, July 6th, 2021 at 8:09 AM, mabi wrote:
> Hello,
>
> After having done a rolling reboot of my Octopus 15.2.13 cluster of 8 nodes
> cephadm does not find python3 on the node and hence I get quite a few of the
> following war
tratorError(msg) from e
orchestrator._interface.OrchestratorError: Can't communicate with remote host
`ceph1d`, possibly because python3 is not installed there: [Errno 32] Broken
pipe
I checked directly on the nodes: I can execute the "python3" command and I can
also SSH into all nodes wi
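A quick verification sketch on the affected node (assumes the cephadm binary is
in the PATH):

cephadm check-host   # verifies python3, systemd and the container runtime on this host
which python3        # confirm the interpreter cephadm expects is actually found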
‐‐‐ Original Message ‐‐‐
On Wednesday, June 16, 2021 10:18 AM, Andrew Walker-Brown
wrote:
> With active mode, you then have a transmit hashing policy, usually set
> globally.
>
> On Linux the bond would be set as ‘bond-mode 802.3ad’ and then
> ‘bond-xmit-hash-policy layer3+4’ - or
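>
> A minimal ifupdown sketch of such a bond (interface names and address are hypothetical):
>
> auto bond0
> iface bond0 inet static
>     address 192.0.2.10/24
>     bond-slaves eno1 eno2
>     bond-mode 802.3ad
>     bond-xmit-hash-policy layer3+4
>     bond-miimon 100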
, June 10, 2021 8:44 AM, Eugen Block wrote:
> Can you share your 'ceph osd tree'?
> You can remove the stray osd "old school" with 'ceph osd purge 1
> [--force]' if you're really sure.
>
> Quoting mabi m...@protonmail.ch:
>
> > Small correction in my mail below, I
Small correction in my mail below, I meant to say Octopus and not Nautilus, so
I am running ceph 15.2.13.
‐‐‐ Original Message ‐‐‐
On Wednesday, June 9, 2021 2:25 PM, mabi wrote:
> Hello,
>
> I replaced an OSD disk on one of my Nautilus OSD nodes, which created a new osd
>
by cephadm
[WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
stray daemon osd.1 on host ceph1e not managed by cephadm
# ceph orch daemon rm osd.1 --force
Error EINVAL: Unable to find daemon(s) ['osd.1']
Is there another command I am missing?
Best regards,
Mabi
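For the record, a fallback sketch using the classic removal path instead of the
orchestrator (assumes the OSD's data is already gone):

ceph osd purge 1 --yes-i-really-mean-it   # combines crush remove, auth del and osd rm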
By experimenting, I managed to get rid of that wrongly created MDS service.
For those who are looking for that information too, I used the following
command:
ceph orch rm mds.label:mds
‐‐‐ Original Message ‐‐‐
On Thursday, May 27, 2021 9:16 PM, mabi wrote:
> Hello,
>
ms totally wrong and I would like to remove
it, but I haven't found how to remove it completely. Any ideas?
Ideally I just want to place two MDS daemons on node ceph1a and ceph1g.
Regards,
Mabi
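A placement sketch for that goal (the filesystem name "cephfs" is an assumption):

ceph orch apply mds cephfs --placement="2 ceph1a ceph1g"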
:
> That file is in the regular filesystem, you can copy it from a
> different osd directory; it's just a minimal ceph.conf. The directory
> for the failing osd should now be present after the failed attempts.
>
> Quoting mabi m...@protonmail.ch:
>
> > Nicely spotted about the missing fi
sid
> bc241cd4-e284-4c5a-aad2-5744632fc7fc
>
> I tried to reproduce a similar scenario and found a missing config
> file in the osd directory:
>
> Error: statfs
> /var/lib/ceph/acbb46d6-bde3-11eb-9cf2-fa163ebb2a74/osd.2/config: no
> such file or directory
>
> Check your s
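>
> A copy sketch under that layout (osd.0 as the donor is an assumption; any
> healthy OSD directory works):
>
> cp /var/lib/ceph/acbb46d6-bde3-11eb-9cf2-fa163ebb2a74/osd.0/config \
>    /var/lib/ceph/acbb46d6-bde3-11eb-9cf2-fa163ebb2a74/osd.2/config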
11eb-9bb6-a5302e00e1fa@osd.2.service: Failed with result
'exit-code'.
May 27 14:48:24 ceph1f systemd[1]: Failed to start Ceph osd.2 for
8d47792c-987d-11eb-9bb6-a5302e00e1fa.
‐‐‐ Original Message ‐‐‐
On Thursday, May 27, 2021 2:22 PM, mabi wrote:
> I am trying to run "cephadm shell"
86f20-8083-40b1-8bf1-fe35fac3d677']
Maybe this is causing trouble... So is there any way to remove the
wrongly created new cluster ID 91a86f20-8083-40b1-8bf1-fe35fac3d677?
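In case it helps others, a teardown sketch for such a stray cluster
(destructive; double-check the fsid first):

cephadm rm-cluster --fsid 91a86f20-8083-40b1-8bf1-fe35fac3d677 --force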
‐‐‐ Original Message ‐‐‐
On Thursday, May 27, 2021 12:58 PM, mabi wrote:
> You are right, I
it, the cluster's or the OSD's? According to the
> 'cephadm deploy' help page it should be the cluster fsid.
>
> Quoting mabi m...@protonmail.ch:
>
> > Hi Eugen,
> > What a good coincidence ;-)
> > So I ran "cephadm ceph-volume lvm list" on the OSD node which I
>
5, in deploy_daemon_units
assert osd_fsid
AssertionError
Any ideas what is wrong here?
Regards,
Mabi
‐‐‐ Original Message ‐‐‐
On Thursday, May 27, 2021 12:13 PM, Eugen Block wrote:
> Hi,
>
> I posted a link to the docs [1], [2] just yesterday ;-)
>
> You sho
t
need my CephFS to still be available. Right now there are 2 mon daemons running,
1 active mgr, and 2 standby mgrs.
Thank you,
Mabi
‐‐‐ Original Message ‐‐‐
On Monday, May 17, 2021 4:31 AM, Anthony D'Atri wrote:
> You’re running on so small a node that 3.6GB is a problem??
Yes, I have hardware constraints: each node is a Raspberry Pi 4 with a maximum
of 8 GB of RAM. I am doing a proof
only for MDS so they can
benefit from more RAM.
So my question is: is this a good idea? Are there any downsides to having the MDS
active and standby on two dedicated nodes?
Regards,
Mabi
.
Thanks,
Mabi
er to "out" nicely the mgr and mon services from that specific node?
I have a standby manager and 3 mons in total, so redundancy-wise it should
be no problem to take that one node out for reinstalling it.
Best regards,
Mabi
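One possible sketch via placement changes (host names are hypothetical; mind the
mon quorum):

ceph orch apply mon --placement="ceph1a ceph1b ceph1c"   # a placement list that excludes the node
ceph orch apply mgr --placement="ceph1a ceph1b"          # same for the mgr
ceph orch host rm ceph1d                                 # then remove the host before reinstalling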
Hi David,
I had a similar issue yesterday where I wanted to remove an OSD on an OSD node
which had 2 OSDs. For that I used the "ceph orch osd rm" command, which completed
successfully, but after rebooting that OSD node I saw it was still trying to
start the systemd service for that OSD and one CPU
and sdb.
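A cleanup sketch for such a leftover unit (the fsid and OSD id are placeholders):

systemctl disable --now ceph-<fsid>@osd.1.service     # stop the orphaned systemd unit
cephadm rm-daemon --name osd.1 --fsid <fsid> --force  # remove the daemon remnants on that host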
‐‐‐ Original Message ‐‐‐
On Thursday, May 6, 2021 4:17 PM, David Caro wrote:
> On 05/06 14:03, mabi wrote:
>
> > Hello,
> > I have a small 6 nodes Octopus 15.2.11 cluster installed on bare metal with
> > cephadm and I added a second OSD to one of m
PG_DEGRADED: Degraded data redundancy: 132518/397554 objects degraded
(33.333%), 65 pgs degraded, 65 pgs undersized
Thank you for your hints.
Best regards,
Mabi
‐‐‐ Original Message ‐‐‐
On Tuesday, May 4, 2021 4:42 PM, 胡 玮文 wrote:
> The MDS also writes its journal to the meta pool, which eventually ends up on the
> OSDs.
Thank you for your answer. That's good news then, as my meta pool is all SSD.
Another question popped up: for a small cluster like
to cephadm that my MDS container should be writing
its journal at another location?
Finally, I read in the hardware recommendations
(https://docs.ceph.com/en/latest/start/hardware-recommendations/) that the Ceph
MDS service only needs 1 MB of storage for its journal. Is that all?
Best regards,
Mabi
29, 2021 7:23 PM, Eugen Block ebl...@nde.ag wrote:
> >
> > > I would restart the active MGR, that should resolve it.
> > > Quoting mabi m...@protonmail.ch:
> > >
> > > > Hello,
> > > > I upgraded my Octopus test cluster
ugen Block wrote:
> I would restart the active MGR, that should resolve it.
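>
> A sketch of that (the mgr instance name is hypothetical):
>
> ceph mgr stat                 # shows which mgr is currently active
> ceph mgr fail ceph1a.abcdef   # fail it so a standby takes over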
>
> Quoting mabi m...@protonmail.ch:
>
> > Hello,
> > I upgraded my Octopus test cluster which has 5 hosts because one of
> > the nodes (a mon/mgr node) was still on version 1
.11 (e3523634d9c2227df9af89a4eac33d16738c49cb) octopus (stable)": 2
},
"mds": {},
"overall": {
    "ceph version 15.2.11 (e3523634d9c2227df9af89a4eac33d16738c49cb) octopus (stable)": 7
}
}
So why is it still stuck on 15.2.10 in the dashboard?
B
Thanks to both of you for your answers. So I understand that the best practice
would be to keep all nodes on the same Ceph version number.
You mention the "recommended (and most tested) order"; which order is that?
Using cephadm with containers, wouldn't the orchestrator command below take
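A sketch of the orchestrator-driven upgrade (target version illustrative):

ceph orch upgrade start --ceph-version 15.2.11   # cephadm then walks the daemons through the recommended order itself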
node got installed with 15.2.11...
What is the best practice here?
Best regards,
Mabi
Hello,
I want to deploy a new ceph Octopus cluster using cephadm on the arm64 architecture,
but unfortunately the ceph/ceph-grafana docker image for arm64 is missing.
Is this mailing list the right place to report this? Or where should I report
it?
Best regards,
Mabi
, April 14, 2021 3:41 PM, Sebastian Wagner
wrote:
> cephadm bootstrap --skip-monitoring-stack
>
> should do the trick. See man cephadm
>
> On Tue, Apr 13, 2021 at 6:05 PM mabi wrote:
>
>> Hello,
>>
>> When bootstrapping a new ceph Octopus cluster with "cephadm bootstrap"
Hello,
When bootstrapping a new ceph Octopus cluster with "cephadm bootstrap", how can
I tell the cephadm bootstrap NOT to install the ceph-grafana container?
Thank you very much in advance for your answer.
Best regards,
Mabi
hat (I think it's part of the
> dashboard module - I don't know - we run our own Prometheus/Grafana
> infrastructure outside of Ceph).
>
> On Fri, Apr 9, 2021 at 1:32 AM mabi m...@protonmail.ch wrote:
>
> > Thank you for confirming that podman 3.0.1 is fine.
> > I have now bootstrapped m
47792c-987d-11eb-9bb6-a5302e00e1fa-alertmanager.ceph1a
‐‐‐ Original Message ‐‐‐
On Friday, April 9, 2021 3:37 AM, David Orman wrote:
> The latest podman 3.0.1 release is fine (we have many production clusters
> running this). We have not tested 3.1 yet, however, but will soon.
version 2.1?
And what happens if I use the latest podman version 3.0?
Best regards,
Mabi
‐‐‐ Original Message ‐‐‐
On Saturday, April 3, 2021 11:22 PM, David Orman wrote:
> We use cephadm + podman for our production clusters, and have had a
> great experience. You just need to know how to operate with
> containers, so make sure to do some reading about how containers work.
>
‐‐‐ Original Message ‐‐‐
On Wednesday, March 31, 2021 3:16 PM, Stefan Kooman wrote:
> "Daemon containers deployed with cephadm, however, do not need /etc/ceph
> at all. Use the --output-dir option to put them in a
> different directory (for example, .). This may help avoid conflicts
‐‐‐ Original Message ‐‐‐
On Wednesday, March 31, 2021 9:01 AM, Stefan Kooman wrote:
> For best performance you want to give the MONs their own disk,
> preferably flash. Ceph MONs start to use disk space when the cluster is
> in an unhealthy state (to keep track of all PG changes). So
of disks on the MGR+MON+MDS
nodes? Or can I just use my OS disks on these nodes? As far as I understand, the
MDS will create a metadata pool on the OSDs.
Thanks for the hints.
Best,
Mabi