> you're using the alert module [1] so that you at least get informed about the
> scrub errors.
Thanks. I will look into it, because we already have icinga2 on site, so I use
icinga2 to check the cluster.
Is there a list of what the alert module is going to check?
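For reference, my understanding (please correct me) is that the module simply
mails out whatever «ceph health» reports. A minimal setup sketch, with
placeholder SMTP values that are not from this thread:

  # enable the manager alerts module
  ceph mgr module enable alerts
  # where and how to send the mail (values below are placeholders)
  ceph config set mgr mgr/alerts/smtp_host smtp.example.com
  ceph config set mgr mgr/alerts/smtp_sender ceph-alerts@example.com
  ceph config set mgr mgr/alerts/smtp_destination admin@example.com
  # force a send once to test the configuration
  ceph alerts send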
Regards
JAS
--
Albert SHIH 嶺
France
waiting)
But I'm trying to find out «why», so I checked all the OSDs related to this PG and
didn't find anything: no error from the OSD daemons, no errors from smartctl, no
error in the kernel messages.
So I would just like to know whether that's «normal» or whether I should dig deeper.
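For the record, a sketch of the usual way to dig into an inconsistent PG (the
PG id 2.1f below is only a placeholder):

  ceph health detail                                       # names the inconsistent PG(s)
  rados list-inconsistent-obj 2.1f --format=json-pretty    # which object/shard failed, and how
  ceph pg 2.1f query | less                                # which OSDs hold the PG (acting set)
  # once the faulty side is understood, ask the primary to repair it
  ceph pg repair 2.1f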
JAS
--
Albert SHIH 嶺
France
Heure locale/Local time:
was a Micron SSD.
So my question: what's the best thing to do?
Which «plugin» should I use, and how do I tell cephadm what to do?
Regards
--
Albert SHIH 嶺
France
Heure locale/Local time:
Wed 27 Mar 2024 15:43:54 CET
--
Albert SHIH 嶺
France
Heure locale/Local time:
Wed 27 Mar 2024 09:18:04 CET
own if that's related.
Any clue?
Regards
--
Albert SHIH 嶺
France
Heure locale/Local time:
Tue 26 Mar 2024 10:52:53 CET
On 25/03/2024 at 08:28:54-0400, Patrick Donnelly wrote:
Hi,
>
> The fix is in one of the next releases. Check the tracker ticket:
> https://tracker.ceph.com/issues/63166
Oh, thanks. I didn't find it with Google.
Is there any risk/impact for the cluster?
Regards.
--
Albert SHIH 嶺
]: mgr.server handle_open ignoring open
from mds.cephfs.cthulhu3.xvboir v2:145.238.187.186:6800/1297104944; not ready
for session (expect reconnect)
Mar 25 13:18:39 cthulhu2 ceph-mgr[2843]: mgr.server handle_open ignoring open
from mds.cephfs.cthulhu2.dqahyt v2:145.238.187.
Hi,
With our small cluster (11 nodes) I notice that Ceph logs a lot.
Besides keeping that somewhere «just in case», is there anything to check
regularly in the logs (to catch more serious problems early)? Or can we
trust «ceph health» and use the logs only for debugging?
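In case it helps, the short routine I'm assuming is enough (not an official
checklist, just the commands I know of):

  ceph health detail      # current health checks, with per-check details
  ceph crash ls           # daemon crashes collected by the crash module
  ceph log last 50        # most recent cluster log entries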
Regards
--
Albert SHIH 嶺
Thanks.
--
Albert SHIH 嶺
France
Heure locale/Local time:
Fri 22 Mar 2024 22:24:35 CET
On 05/03/2024 at 11:54:34+0100, Robert Sander wrote:
Hi,
> On 3/5/24 11:05, Albert Shih wrote:
>
> > But I would like to clean up and «erase» everything about RGW, not only to
> > try to understand, but also because I think I mixed up realm and
> > zonegroup...
/en/quincy/mgr/rgw/#mgr-rgw-module
so now I have some RGW daemons running.
But I would like to clean up and «erase» everything about RGW, not only to try
to understand, but also because I think I mixed up realm and
zonegroup...
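A sketch of how I would inspect the mess before removing anything (the
service/realm names are placeholders, and I'm not sure this is the complete
cleanup procedure):

  # what multisite objects currently exist
  radosgw-admin realm list
  radosgw-admin zonegroup list
  radosgw-admin zone list
  # stop the RGW daemons that cephadm / the rgw module deployed
  ceph orch ls rgw
  ceph orch rm rgw.myrgw
  # the multisite configuration itself lives in RADOS (the .rgw.root pool and
  # the per-zone pools); removing those should follow the official docs.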
Regards
--
Albert SHIH 嶺
France
Heure locale/Local time:
mar
use 3 replicas, that means when I write 100G of data:
available space = quota limit - 100G x 3
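Just to make the arithmetic I'm assuming explicit (a sketch with made-up
numbers; «ceph df» should already report MAX AVAIL with replication taken into
account, if I understand it correctly):

  # with a replicated pool of size 3, every GB written consumes 3 GB of raw space
  written=100   # GB of logical data written
  size=3        # replica count of the pool
  echo "raw space consumed: $(( written * size )) GB"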
Regards
JAS
--
Albert SHIH 嶺
Observatoire de Paris
France
Heure locale/Local time:
Sat 24 Feb 2024 10:09:12 CET
to keep the first answer?
Regards
--
Albert SHIH 嶺
France
Heure locale/Local time:
Thu 22 Feb 2024 08:44:17 CET
On 12/02/2024 at 18:38:08+0100, Kai Stian Olstad wrote:
> On 12.02.2024 18:15, Albert Shih wrote:
> > I couldn't find documentation about how to install an S3/Swift API (as I
> > understand it, that's RadosGW) on Quincy.
>
> It depends on how you have installed Ceph.
/en/quincy/radosgw/
I can see a lot of very detailed documentation about each component, but
cannot find more global documentation.
Is there newer documentation somewhere? I think it's not a good idea to use the
one for Octopus...
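From what I can tell, with a cephadm-managed cluster the basic deployment
boils down to something like this (service name and placement are
placeholders):

  ceph orch apply rgw myrgw --placement="2 host1 host2"
  ceph orch ls rgw        # check that the service was created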
Regards
--
Albert SHIH 嶺
France
Heure locale/Local time:
Mon 12
On 02/02/2024 at 16:34:17+0100, Albert Shih wrote:
> Hi,
>
>
> A little basic question.
>
> I created a volume with
>
> ceph fs volume
>
> then a subvolume called «erasure». I can see that with
>
> root@cthulhu1:/etc/ceph# ceph fs subvolume info cephfs
ed-69f03a7303e9
/mnt
but on my test client I'm unable to mount:
root@ceph-vo-m:/etc/ceph# mount -t ceph vo@fxxx-c0f2-11ee-9307-f7e3b9f03075.cephfs=/volumes/_nogroup/erasure/998e3bdf-f92b-4508-99ed-69f03a7303e9/ /vo --verbose
parsing options: rw
source mount path was not specified
unable to
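A sketch of what I plan to try (the fsid and the secret file are placeholders;
I'm assuming the new name@fsid.fs=/path source syntax needs a recent mount.ceph
from ceph-common, otherwise the old syntax should still work):

  # ask Ceph for the exact subvolume path
  ceph fs subvolume getpath cephfs erasure
  # new-style syntax
  mount -t ceph vo@<fsid>.cephfs=/volumes/_nogroup/erasure/998e3bdf-f92b-4508-99ed-69f03a7303e9 /vo
  # old-style syntax
  mount -t ceph <mon-ip>:6789:/volumes/_nogroup/erasure/998e3bdf-f92b-4508-99ed-69f03a7303e9 /vo -o name=vo,secretfile=/etc/ceph/vo.secret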
On 29/01/2024 at 22:43:46+0100, Albert Shih wrote:
> Hi
>
> When I deployed my cluster I didn't notice that on two of my servers the private
> network was not working (wrong VLAN); now it's working, but how can I check
> that it's indeed working (currently I don't have data)?
I mean...ce
Hi
When I deployed my cluster I didn't notice that on two of my servers the private
network was not working (wrong VLAN); now it's working, but how can I check
that it's indeed working (currently I don't have data)?
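A sketch of how I would check it (OSD id 0 is just an example):

  # what the cluster thinks the private network is
  ceph config get osd cluster_network
  # each OSD reports the addresses it actually bound to; back_addr should be
  # on the private network, front_addr on the public one
  ceph osd metadata 0 | grep -E 'back_addr|front_addr'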
Regards
--
Albert SHIH 嶺
France
Heure locale/Local time:
Mon 29 Jan 2024 22:36
re», so it would be super nice to add (at least) the option to
choose the pair of pools...
>
> I haven't looked too deep into changing the default pool yet, so there might
> be a way to switch that as well.
Ok. I will also try but...well...newbie ;-)
Anyway thanks.
regards
--
Albert SHIH 嶺
to deploy
the MDS, and the «new» way to do it is to use ceph fs volume.
But with ceph fs volume I didn't find any documentation on how to set the
metadata/data pools.
I also didn't find any way to change the pools after the creation of the
volume.
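For reference, a sketch of the explicit way with ceph fs new instead of
ceph fs volume (pool and filesystem names are placeholders):

  # create the pools yourself, then the filesystem on top of them
  ceph osd pool create mycephfs_meta
  ceph osd pool create mycephfs_data
  ceph fs new mycephfs mycephfs_meta mycephfs_data
  # with cephadm the MDS daemons still need to be deployed
  ceph orch apply mds mycephfs --placement=2
  # an extra (e.g. erasure-coded) data pool can be attached afterwards
  ceph osd pool create mycephfs_data_ec 32 32 erasure
  ceph osd pool set mycephfs_data_ec allow_ec_overwrites true
  ceph fs add_data_pool mycephfs mycephfs_data_ec
  # and a subvolume can be placed on that pool
  ceph fs subvolume create mycephfs erasure --pool_layout mycephfs_data_ec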
Thanks
--
Albert SHIH 嶺
France
Heure locale/Local
On 24/01/2024 at 10:33:45+0100, Robert Sander wrote:
Hi,
>
> On 1/24/24 10:08, Albert Shih wrote:
>
> > 99.99% because I'm a newbie with Ceph and don't clearly understand how
> > authorization works with CephFS ;-)
>
> I strongly recommend you to ask for a
. but that's OK ;-)
>
> What Robert emphasizes is that creating pools dynamically is not without effect
> on the number of PGs and (therefore) on the architecture (PGs per OSD, balancer,
> pg autoscaling, etc.)
OK, no worries, I didn't know it was possible ;-)
Regards
On 24/01/2024 at 09:45:56+0100, Robert Sander wrote:
Hi
>
> On 1/24/24 09:40, Albert Shih wrote:
>
> > Knowing that I have two classes of OSD (HDD and SSD), and that I need ~20/30
> > CephFS filesystems (currently, and that number will increase with time).
>
> Why do you n
the same
cephfs_data_replicated/erasure pool?
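My (possibly naive) understanding is that a single filesystem can serve many
tenants if each client is only authorized on its own subvolume; a sketch with
placeholder names:

  ceph fs subvolume create cephfs project1
  path=$(ceph fs subvolume getpath cephfs project1)
  ceph fs authorize cephfs client.project1 "$path" rw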
Regards
--
Albert SHIH 嶺
France
Heure locale/Local time:
Wed 24 Jan 2024 09:33:09 CET
to the other Ceph cluster servers.
In fact I added something like

  for host in $(cat /usr/local/etc/ceph_list_noeuds.txt)
  do
      # push the cluster config and keyrings to every node in the list
      /usr/bin/rsync -av /etc/ceph/ceph* "$host":/etc/ceph/
  done

in a cronjob.
Regards.
--
Albert SHIH 嶺
France
Heure locale/Local t
network.
Is there any way to configure both the public_network and the cluster_network
with cephadm bootstrap?
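What I found so far, as a sketch (I haven't verified the bootstrap flag on
every release, and the addresses are placeholders):

  # at bootstrap time
  cephadm bootstrap --mon-ip 192.0.2.10 --cluster-network 198.51.100.0/24
  # or afterwards, via the config database
  ceph config set mon public_network 192.0.2.0/24
  ceph config set global cluster_network 198.51.100.0/24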
Regards.
--
Albert SHIH 嶺
France
Heure locale/Local time:
Thu 30 Nov 2023 18:27:08 CET
to» our puppet config.
As soon as I removed the old version of cephadm and installed the 17.2.7 version,
everything worked fine again.
Regards.
--
Albert SHIH 嶺
France
Heure locale/Local time:
Wed 29 Nov 2023 22:06:35 CET
n
to put in the /etc/ceph/ceph.conf
> the label _admin to your host in "ceph orch host" so that cephadm takes care
> of maintaining your /etc/ceph.conf (outside the container).
OK, I'm indeed using ceph orch & co.
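If I understood correctly, that maps to something like this (the hostname is
just an example):

  ceph orch host label add cthulhu1 _admin
  # let cephadm keep /etc/ceph/ceph.conf up to date on labelled hosts
  ceph config set mgr mgr/cephadm/manage_etc_ceph_ceph_conf true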
Thanks.
Regards.
JAS
--
Albert SHIH 嶺
Observatoire de
e future, for that is where you and I are going to
> spend the rest of our lives.
Correct ;-)
Sorry, my question was not very clear. My question was in fact about which
way we are headed. But I'm guessing the answer is “ceph config” or something
like that.
Thanks.
Regards.
manually touch ceph.conf?
And what about the future?
Regards.
--
Albert SHIH 嶺
France
Heure locale/Local time:
Thu 23 Nov 2023 15:21:47 CET
disks over 9-12
disks).
So my question is: does anyone use erasure coding at large scale for critical data
(same level as raidz1/raid5 or raidz2/raid6)?
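For context, a sketch of a raidz2-like profile (the k/m values and pool name
are placeholders; with crush-failure-domain=host you need at least k+m hosts):

  ceph osd erasure-code-profile set ec-9-2 k=9 m=2 crush-failure-domain=host
  ceph osd pool create ecpool 128 128 erasure ec-9-2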
Regards
--
Albert SHIH 嶺
Observatoire de Paris
France
Heure locale/Local time:
Thu 23 Nov 2023 14:51:28 CET
use the default and make sure you have
> sufficient unused capacity to increase the chances for large bluestore writes
> (keep utilization below 60-70% and just buy extra disks). A workload with
> large min_alloc_sizes has to be S3-like, only upload, download and delete are
> allowed.
Thanks.
er, you can always think about relocating certain services.
Ok, thanks for the answer.
Regards.
--
Albert SHIH 嶺
Observatoire de Paris
France
Heure locale/Local time:
Sat 18 Nov 2023 09:26:56 CET
n is to use the capability of Ceph to migrate the data by itself from
old to new hardware.
So the short answer: not enough money ;-) ;-)
Regards.
--
Albert SHIH 嶺
France
Heure locale/Local time:
Sat 18 Nov 2023 09:19:03 CET
ster are not to get the maximum
I/O speed; I would not say speed is not a factor, but it's not the main
point.
Regards.
--
Albert SHIH 嶺
Observatoire de Paris
France
Heure locale/Local time:
Fri 17 Nov 2023 10:49:27 CET
everything
from the “row primary” to “row secondary”.
Regards
>
>
> On Wed 8 Nov 2023 at 18:45, Albert Shih wrote:
>
> Hi everyone,
>
> I'm a total newbie with Ceph, so sorry if I'm asking a stupid question.
>
> I'm trying to understand ho
lica (with only 1 copy of course) of a pool from the “row” primary to the
secondary.
How can I achieve that?
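A sketch of what I think is the usual approach (rule and pool names are
placeholders, assuming the intent is «no two copies in the same row»):

  # replicated rule whose failure domain is the CRUSH «row» bucket type
  ceph osd crush rule create-replicated one-copy-per-row default row
  ceph osd pool set mypool crush_rule one-copy-per-row
  # with size=2 and two rows, each row then holds exactly one copy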
Regards
--
Albert SHIH 嶺
Wed 08 Nov 2023 18:37:54 CET