Dear Mr Eugen, Mr Matthew, Mr David, Mr Anthony,
My system is UP.
Thank you so much. We received a lot of support from all of you; amazing, kind
support from top Ceph professionals.
I hope we have a chance to cooperate in the future. And if you travel to Vietnam
in the future, let me know. I'll be your...
Thank you so much, Matthew. Please keep an eye on my thread.
You and Mr Anthony made my day.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
Thank you so much, Sir. You made my day. T.T
___
Thank you, Matthew.
I'm following Mr Anthony's guidance and now my recovery speed is
much faster.
I will update my case day by day.
Thank you so much.
___
Hi Mr Anthony,
Never mind, the OSD is UP and the recovery speed is 10x faster.
Amazing.
And now we just wait, right?
___
Yes, Sir.
We added a 10 TiB disk to the cephosd02 node. Now the disk is IN, but in the DOWN state.
What should we do now? :(
Additionally, the recovery speed is 10x faster. :)
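For anyone triaging the same symptom, a minimal sketch of the usual first steps for an OSD that is IN but DOWN (the id "osd.27" below is hypothetical; read the real id from the tree output):

```
# identify which OSD is down
ceph osd tree | grep -i down
# try restarting the daemon via the orchestrator (id is a placeholder)
ceph orch daemon restart osd.27
# if it stays down, inspect the daemon's journal on its host
cephadm logs --name osd.27
```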
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to
HolySh***
First, we changed mon_max_pg_per_osd to 1000.
About adding a disk to cephosd02: for more detail, what is TO, sir? I'll discuss
it with my boss. To be honest, I'm afraid the volume recovery
progress will run into problems...
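For reference, the change described above can be made at runtime with the central config store (a sketch; applying it at the `global` scope is one reasonable choice):

```
# raise the per-OSD PG limit that the monitors enforce (default is 250)
ceph config set global mon_max_pg_per_osd 1000
# confirm the value the monitors now see
ceph config get mon mon_max_pg_per_osd
```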
___
Thank you, Sir. But I think I'll wait for the PG BACKFILLFULL situation to finish; my boss is
very angry now and will not allow me to add one more disk (this action makes him
think that Ceph would take more time for recovering and rebalancing). We want
to wait for the volume recovery to finish.
And sure, we have one more 10 TiB disk which cephosd02 will get.
___
I will correct some small things:
we have 6 nodes, 3 OSD nodes and 3 gateway nodes (which run the RGW, MDS and NFS
services).
You are correct: 2 of the 3 OSD nodes each have ONE NEW 10 TiB disk.
About your suggestion to add another OSD host: we will. But first we need to end this
nightmare; my NFS folder holds 10 TiB of data.
Hi Mr Anthony, could you tell me more details about raising the full and
backfillfull thresholds?
Is it
ceph tell 'osd.*' injectargs --osd-max-backfills=2 --osd-recovery-max-active=6
??
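For what it's worth, the injectargs line quoted above tunes recovery concurrency, not the capacity thresholds themselves. The ratios are raised with separate commands (a sketch; the values below are illustrative, and the defaults are nearfull 0.85, backfillfull 0.90, full 0.95):

```
# temporarily raise the capacity thresholds so backfill can proceed
ceph osd set-nearfull-ratio 0.88
ceph osd set-backfillfull-ratio 0.92
ceph osd set-full-ratio 0.96
# the line in the message instead raises recovery concurrency:
ceph tell 'osd.*' injectargs --osd-max-backfills=2 --osd-recovery-max-active=6
```

Raised ratios are an emergency measure to let recovery finish; they should be set back to the defaults once the cluster is healthy.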
___
Hi Mr Anthony,
please check the output:
https://anotepad.com/notes/s7nykdmc
___
Hi Matthew,
1) We had 2 MDS services running before this nightmare. Now we are trying to run
MDS on all 3 nodes, but each of them stops within 2 minutes.
2) You are correct. We just added two 10 TiB disks to the cluster (which currently
has 27 x 4 TiB disks); all of them have weight 1.0.
About volume
Hi David,
I'll follow your suggestion. Do you have Telegram? If yes, could you please add
my Telegram, +84989177619. Thank you so much.
___
Hi Matthew,
please check my ceph -s:
ceph -s
  cluster:
    id:     258af72a-cff3-11eb-a261-d4f5ef25154c
    health: HEALTH_WARN
            3 failed cephadm daemon(s)
            1 filesystem is degraded
            insufficient standby MDS daemons available
            1 nearfull osd(s)
Could you please guide me in more detail :( I'm a newbie in Ceph :(
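Each warning in that output can be expanded with standard commands (a sketch; nothing cluster-specific is assumed):

```
ceph health detail   # one explanatory line per warning
ceph orch ps         # shows which cephadm daemons are in an error state
ceph fs status       # shows the degraded filesystem and its MDS ranks
ceph osd df          # shows which OSD is nearfull and by how much
```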
___
And we don't have a "parameters" folder:
cd /sys/module/ceph/
[root@cephgw01 ceph]# ls
coresize holders initsize initstate notes refcnt rhelversion sections
srcversion taint uevent
My Ceph is 16.2.4.
___
Hi David,
could you please help me understand:
does it affect the RGW service? And if something goes bad, how can I roll back?
___
Thank you for your time :) Have a good day, sir
___
https://drive.google.com/file/d/1OIN5O2Vj0iWfEMJ2fyHN_xV6fpknBmym/view?usp=sharing
Please check my MDS log, which was generated by the command:
cephadm logs --name mds.cephfs.cephgw02.qqsavr --fsid
258af72a-cff3-11eb-a261-d4f5ef25154c
___
Hi Mr Patrick,
We are in the same situation as Sake: my MDS has crashed, and the NFS service is down
because CephFS is not responding. My "ceph -s" result:
  health: HEALTH_WARN
          3 failed cephadm daemon(s)
          1 filesystem is degraded
          insufficient standby MDS daemons
Could you please help me understand the volume status "recovering"? What is it?
And do we need to wait for the volume recovery to finish?
___
Feb 22 13:39:43 cephgw02 conmon[1340927]: log_file
/var/lib/ceph/crash/2024-02-22T06:39:43.618845Z_78ee38bc-9115-4bc6-8c3a-4bf42284c970/log
Feb 22 13:39:43 cephgw02 conmon[1340927]: --- end dump of recent events ---
Feb 22 13:39:45 cephgw02 systemd[1]:
The log is far too long; could you please guide me on how to grep/filter the important
things in the logs?
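A minimal filtering sketch: the sample lines below are made up to stand in for a real MDS journal, so point the final grep at your own log file instead.

```shell
# write a tiny stand-in for the real (very long) MDS log
cat > /tmp/mds_sample.log <<'EOF'
debug 2024-02-22T06:39:40 mds.0 heartbeat ok
-1> 2024-02-22T06:39:43 FAILED ceph_assert(...)
log_channel(cluster) log [ERR] : MDS health message
EOF

# keep only the usual crash markers: asserts, ERR/WRN lines, respawns, segfaults
grep -E 'FAILED ceph_assert|\[ERR\]|\[WRN\]|respawn|Segmentation' /tmp/mds_sample.log
```

The pattern drops the routine debug chatter and keeps the handful of lines that usually explain a crash.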
___
How can we get the logs of the MDS? Please guide me. T_T
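With a cephadm-managed Pacific cluster, MDS logs go to journald on the host running the daemon; a sketch, using the daemon name and fsid that appear elsewhere in this thread (substitute your own):

```
# list the daemons on this host to find the exact MDS daemon name
cephadm ls | grep mds
# then stream that daemon's journal
cephadm logs --name mds.cephfs.cephgw02.qqsavr --fsid 258af72a-cff3-11eb-a261-d4f5ef25154c
```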
___
We have 6 nodes (3 OSD nodes and 3 service nodes); 2 of the 3 OSD nodes were powered off
and we got a big problem.
Please check the ceph -s result below.
Now we cannot start the MDS service (we tried to start it, but it stopped after 2
minutes).
Now my application cannot access the NFS-exported folder.
What should we do?
We are using Ceph 16.2.4.
While initializing the RADOS Gateway service, we ran:
"
ceph orch apply rgw S3GW --placement="1 cephgw03uat"
radosgw-admin user create --uid=rgwadmin --display-name=rgwadmin --system
"
but the command got stuck and returned nothing; it just HANGS.
We added the "--verbose" option and saw the log:
overlays:
Can anyone help me?
I need to know: does Ceph 16.2.4 support Signature V4 for the S3 API? If yes, please
guide us.
Thank you all.
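For what it's worth, RGW has accepted AWS Signature V4 since well before Pacific (16.2.4), and recent S3 clients sign with V4 by default. A sketch with the AWS CLI (the endpoint hostname and port are placeholders for your RGW host, and credentials are assumed to be configured):

```
# force Signature V4 explicitly for S3 operations (it is usually the default)
aws configure set default.s3.signature_version s3v4
# point the CLI at the RGW endpoint instead of AWS
aws --endpoint-url http://cephgw03uat:7480 s3 ls
```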
___