Hello everyone,
I currently have a CephFS running with about 60 TB of data. I created it
with a replicated pool as the default pool and an erasure-coded one as an
additional data pool, as described in the docs. Now I want to migrate the
data from the replicated pool to the new erasure-coded one.
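For reference, the setup and the planned switch look roughly like this
(pool, filesystem, and path names below are placeholders). As far as I
understand, a directory layout only applies to newly created files, so
the existing data still has to be copied or rewritten afterwards:

ceph osd pool create cephfs_data_ec erasure
ceph osd pool set cephfs_data_ec allow_ec_overwrites true
ceph fs add_data_pool cephfs cephfs_data_ec
# point a directory at the EC pool; only newly written files land there
setfattr -n ceph.dir.layout.pool -v cephfs_data_ec /mnt/cephfs/dir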
Hi,
I hope this mailing list is OK for this kind of question; if not, please
ignore.
I'm currently in the process of planning a smaller Ceph cluster, mostly
for CephFS use.
The budget still allows for some SSDs in addition to the required hard
disks.
I see two options on how to use those,
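One candidate, assuming it is among the options meant here, would be
putting the BlueStore DB/WAL on the SSDs when creating the OSDs, e.g.:

# /dev/sdb (HDD) and /dev/nvme0n1 (SSD) are placeholder devices
ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1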
Following is the error:
Apr 4 19:09:39 osd03 systemd[1]: ceph-osd@2.service: Scheduled restart
job, restart counter is at 3.
Apr 4 19:09:39 osd03 systemd[1]: Stopped Ceph object storage daemon osd.2.
Apr 4 19:09:39 osd03 systemd[1]: Starting Ceph object storage daemon
osd.2...
Apr 4 19:09:39
And after a reboot what errors are you getting?
Sent from my iPhone
On 4 Apr 2021, at 15:33, Behzad Khoshbakhti wrote:
I have changed the UID and GID to 167, but still no progress.
cat /etc/group | grep -i ceph
ceph:x:167:
root@osd03:~# cat /etc/passwd | grep -i ceph
ceph:x:167:167:Ceph storage service:/var/lib/ceph:/usr/sbin/nologin
On Sun, Apr 4, 2021 at 6:47 PM Andrew Walker-Brown <
andrew_jbr...@hotmail.com>
UID and GID should both be 167, I believe.
Make a note of the current values and change them to 167 using usermod and
groupmod.
I had just this issue. It's partly to do with how permissions are used
within the containers, I think.
I changed the values to 167 in passwd and everything worked again.
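For reference, the change amounts to roughly this (a sketch; files still
owned by the old IDs need a chown afterwards):

groupmod -g 167 ceph
usermod -u 167 -g 167 ceph
# re-own everything the daemons touch; adjust paths to your layout
chown -R ceph:ceph /var/lib/ceph /var/log/ceph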
Hi all,
I upgraded to the Pacific release with cephadm. However, none of our RGW
daemons can start anymore. Any help is appreciated.
Here are the logs when starting RGW. I set debug_rados and debug_rgw to 20/20
systemd[1]: Started Ceph rgw.smil.b7-1.gpu006.twfefs for
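For reference, the debug levels can be raised along these lines; the
client.rgw section name is an assumption, adjust it to the daemon's
actual name:

ceph config set client.rgw debug_rgw 20/20
ceph config set client.rgw debug_rados 20/20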
Hello,
Permissions are correct. GID/UID is 64045/64045
ls -alh
total 32K
drwxrwxrwt 2 ceph ceph 200 Apr 4 14:11 .
drwxr-xr-x 8 ceph ceph 4.0K Sep 18 2018 ..
lrwxrwxrwx 1 ceph ceph 93 Apr 4 14:11 block -> /dev/...
-rw------- 1 ceph ceph 37 Apr 4 14:11 ceph_fsid
-rw------- 1 ceph ceph
Are the file permissions correct and UID/GID in passwd both 167?
Sent from my iPhone
On 4 Apr 2021, at 12:29, Lomayani S. Laizer wrote:
Hello,
+1 I am facing the same problem in Ubuntu after upgrading to Pacific:
2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1 bluestore(/var/lib/ceph/osd/
ceph-29/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-29/block:
(1) Operation not permitted
2021-04-03T10:36:07.698+0300 7f9b8d075f00
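If this is the same ownership problem as above, presumably re-owning the
OSD directory and the device behind its block symlink would help, e.g.:

chown -R ceph:ceph /var/lib/ceph/osd/ceph-29
# the block symlink points at an LV device node; re-own its target too
chown ceph:ceph $(readlink -f /var/lib/ceph/osd/ceph-29/block)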
It is worth mentioning that when I issue the following command, the Ceph
OSD starts and joins the cluster:
/usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph
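This is essentially the same command line the systemd unit itself runs,
which can be confirmed with:

systemctl cat ceph-osd@2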
On Sun, Apr 4, 2021 at 3:00 PM Behzad Khoshbakhti
wrote:
> Hi all,
>
> As I have upgraded my Ceph cluster from 15.2.10 to
Hi all,
As I have upgraded my Ceph cluster from 15.2.10 to 16.2.0 (a manual
upgrade using the precompiled packages), the OSDs went down with the
following messages:
root@osd03:/var/lib/ceph/osd/ceph-2# ceph-volume lvm activate --all
--> Activating OSD ID 2 FSID
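A quick check for the UID mismatch discussed in this thread: the numeric
owner of the OSD files should match the ceph user (167 on current
packages):

id ceph
ls -ln /var/lib/ceph/osd/ceph-2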
Majid,
Check out the install guide for Ubuntu using Cephadm here:
https://docs.ceph.com/en/latest/cephadm/install/
Basically, install cephadm using apt, then use the cephadm bootstrap
command to get the first mon up and running.
For additional hosts, make sure you have them in the hosts file
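Roughly, with placeholder IPs and hostnames:

apt install -y cephadm
cephadm bootstrap --mon-ip 10.0.0.1
# give the new host the cluster's SSH key, then add it
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-node2
ceph orch host add ceph-node2 10.0.0.2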
Thanks, dear Madjid,
Have you successfully installed Ceph? If yes, please help me, I beg you.
Following that link, I tried to open /etc/hosts and insert the IP, but I
did not succeed in saving it.
On Sun, Apr 4, 2021 at 10:53 AM Majid Varzideh wrote:
> Hi,
> the best is to look at the Ceph documentation.
>
Hi,
the best is to look at the Ceph documentation.
In the meantime you can use this link:
https://computingforgeeks.com/how-to-deploy-ceph-storage-cluster-on-ubuntu-18-04-lts/
On Sun, Apr 4, 2021 at 12:54 PM Michel Niyoyita wrote:
> Dear Ceph users,
>
> Kindly help on how I can deploy Ceph on
Dear Ceph users,
Kindly help on how I can deploy Ceph on Ubuntu 18.04 LTS. I am learning
Ceph from scratch; your inputs are highly appreciated.
Regards