Thanks for your reply! "xx" is the actual IP of my nodes; I mapped it to the
internet, so I use "xx" in place of the public IP.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
To add more information: when I telnet to the port it connects, but the
connection is immediately closed by the foreign host.
[root@vm-04 ~]# telnet xx 23860
Trying xx...
Connected to xx.
Escape character is '^]'.
Connection closed by foreign host.
[root@vm-04 ~]#
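The telnet check above can be scripted; a minimal sketch, assuming bash (for its /dev/tcp redirection), where the host and port arguments stand in for the "xx" and 23860 from this thread:

```shell
#!/usr/bin/env bash
# probe_port.sh -- a minimal sketch of the TCP check behind the telnet test
# above. A successful connect only proves the port mapping works at the TCP
# layer; RGW can still close the connection immediately at the application
# layer, as seen in the transcript.
probe_port() {
  local host="$1" port="$2"
  if (echo > "/dev/tcp/${host}/${port}") 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}
```

Usage would be e.g. `probe_port xx 23860`, with "xx" being the mapped node IP from the thread.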
I have mapped port 32505 to 23860; however, when I connect via s3cmd it fails with
"ERROR: S3 Temporary Error: Request failed for: /. Please try again later." .
Has anyone encountered the same issue?
[root@vm-04 ~]# s3cmd ls
WARNING: Retrying failed request: / ('')
WARNING: Waiting 3 sec...
WARNING:
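One thing worth checking with this kind of retry loop is whether s3cmd is actually pointed at the mapped endpoint. A hedged ~/.s3cfg fragment, assuming the "xx" host and mapped port 23860 from this thread, with placeholder credentials:

```ini
# Hypothetical ~/.s3cfg fragment -- "xx" and 23860 are the mapped RGW
# endpoint from this thread; the keys are placeholders for your RGW user.
host_base = xx:23860
host_bucket = xx:23860
use_https = False
access_key = YOUR_ACCESS_KEY
secret_key = YOUR_SECRET_KEY
```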
Thanks for your information. I tried several solutions but none of them worked,
so I reinstalled, and the issue did not appear again. Something must have gone
wrong during the installation.
The issue was fixed by the command below; the pool in the caps was incorrect before.
ceph auth caps client.king mon 'allow r' mds 'allow rw' osd 'allow rwx pool=cephfs-king-data'
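To confirm the caps took effect, the entity can be printed back; a quick check (run against the cluster), assuming the client.king user from above:

```shell
# Print the current caps for client.king; the osd line should now include
# 'allow rwx pool=cephfs-king-data'.
ceph auth get client.king
```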
Dear all,
The CephFS mount is successful; however, I ran into an issue writing files, with
the error message "Operation not permitted".
It happens even when the file is chmod 777. Please help me solve this, thanks a lot!
[root@vm-04 mycephfs]# df -Th
Filesystem
Dear all,
I have a question about CephFS: port 6789 is exposed by a Service of type
ClusterIP, so how can I access it from an external network?
[root@vm-01 examples]# kubectl get svc -nrook-ceph
NAME    TYPE        CLUSTER-IP    EXTERNAL-IP    PORT(S)    AGE
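One common way to reach a ClusterIP-only port from outside the cluster is an extra Service of type NodePort. A hedged sketch, assuming the rook-ceph namespace and the `app: rook-ceph-mon` label Rook puts on mon pods; verify the selector labels against your own `kubectl get pod --show-labels` output before applying:

```yaml
# Hypothetical NodePort Service for mon port 6789; the selector label is an
# assumption -- check it against your rook-ceph mon pods.
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mon-external
  namespace: rook-ceph
spec:
  type: NodePort
  selector:
    app: rook-ceph-mon
  ports:
    - port: 6789
      targetPort: 6789
      nodePort: 31789   # any free port in the node-port range
```

For a quick one-off test, `kubectl -n rook-ceph port-forward` against the mon service also works without creating a new Service.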
I tried removing the default fs, and that works, but port 6789 is still not
reachable via telnet.
ceph fs fail myfs
ceph fs rm myfs --yes-i-really-mean-it
bash-4.4$
bash-4.4$ ceph fs ls
name: kingcephfs, metadata pool: cephfs-king-metadata, data pools:
[cephfs-king-data ]
bash-4.4$
Thanks for your information. I tried spinning up more MDS pods, but it seems to
be the same issue.
[root@vm-01 examples]# cat filesystem.yaml | grep activeCount
activeCount: 3
[root@vm-01 examples]#
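For context, `activeCount` lives under `metadataServer` in the Rook CephFilesystem CR; a fragment sketch, assuming the example filesystem.yaml shipped with Rook (pool sections omitted):

```yaml
# Fragment of a Rook CephFilesystem spec -- activeCount sets the filesystem's
# max_mds; activeStandby adds a standby-replay daemon per active MDS.
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataServer:
    activeCount: 3
    activeStandby: true
```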
[root@vm-01 examples]# kubectl get pod -nrook-ceph | grep mds
rook-ceph-mds-myfs-a-6d46fcfd4c-lxc8m
Everything goes fine except executing "ceph fs new kingcephfs
cephfs-king-metadata cephfs-king-data"; it then shows "1 filesystem is offline"
and "1 filesystem is online with fewer MDS than max_mds".
But I can see one MDS service running. Please help me fix the issue,
thanks a lot.
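The "fewer MDS than max_mds" warning means the filesystem has fewer active MDS daemons than its max_mds setting. A sketch of commands to inspect and adjust it (run inside the toolbox pod), assuming the kingcephfs filesystem from above:

```shell
# Compare how many active MDS daemons the fs expects vs. actually has.
ceph fs get kingcephfs | grep max_mds
ceph mds stat
# If only one MDS daemon exists, lowering max_mds clears the warning.
ceph fs set kingcephfs max_mds 1
```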
bash-4.4$
I have the same issue; can someone help me? Thanks in advance!
bash-4.4$ ceph fs new kingcephfs cephfs-king-metadata cephfs-king-data
new fs with metadata pool 7 and data pool 8
bash-4.4$
bash-4.4$ ceph -s
cluster:
id: de9af3fe-d3b1-4a4b-bf61-929a990295f6
health: HEALTH_ERR