My Chrome browser cannot connect to the Ceph Dashboard (invalid certificate
warning, security risk, and so on...).
I uploaded a valid certificate to the server, but I cannot "see" it when
I am in the container running the Ceph CLI, so I cannot update the
certificate :-(
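(Sketching one possible approach, assuming the certificate and key sit on the
host as /root/dashboard.crt and /root/dashboard.key; those paths are
placeholders. Once the files are visible inside the cephadm shell, the
dashboard module can load them directly:)
ceph dashboard set-ssl-certificate -i /root/dashboard.crt
ceph dashboard set-ssl-certificate-key -i /root/dashboard.key
ceph mgr module disable dashboard
ceph mgr module enable dashboard
(Disabling and re-enabling the dashboard module is just one way to make it
pick up the new certificate.)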
On Fri, Jan 7, 2022 at 22:1
>
>
> Where else can I look to find out why the managed block storage isn't
> accessible anymore?
>
ceph -s ? I guess it is not showing any errors, and there is probably nothing
wrong with Ceph itself. You can do an rbdmap and see if you can just map an
image. Then try mapping an image with the user credentials.
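(A hedged sketch of what that could look like; the pool, image, and user names
below are placeholders:)
rbd map vms/test-image --id myuser --keyring /etc/ceph/ceph.client.myuser.keyring
rbd showmapped
rbd unmap vms/test-image
(If the map fails, the error usually points at auth caps or missing kernel
features rather than at the cluster itself.)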
What are you trying to do that won't work? If you need resources from
outside the container, it doesn't sound like something you should need to
be entering a shell inside the container to accomplish.
On Fri, Jan 7, 2022 at 1:49 PM François RONVAUX
wrote:
> Thanks for the answer.
>
> I would want
Thanks for the answer.
I want to use the ceph CLI to do some admin tasks.
I can access it with the command suggested at the end of the
bootstrap process:
You can access the Ceph CLI with:
sudo /usr/sbin/cephadm shell --fsid dbd1f122-6fd1-11ec-b7dc-560003c792b4 -c
/etc/ceph/ceph.conf -k
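(If the problem is that files on the host are not visible inside that shell,
cephadm has a --mount option that makes a host path available in the
container; a hedged example, noting that where the path shows up inside the
container, typically under /mnt, can vary by cephadm version:)
sudo /usr/sbin/cephadm shell --fsid dbd1f122-6fd1-11ec-b7dc-560003c792b4 --mount /root/certs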
...sorta. I have an ovirt-4.4.2 system installed a couple of years ago
and set up managed block storage using Ceph Octopus[1]. This has been
working well since it was originally set up.
In late November we had some network issues on one of our ovirt hosts,
as well as a separate network issue tha
On Friday, January 7, 2022 11:56:53 AM EST François RONVAUX wrote:
> Hello,
>
>
> On CentOS Stream 9, when I try to install the ceph packages from the
> Pacific release, I get this error message:
>
> [root@ceph00 ~]# cephadm install ceph-common
> Installing packages ['ceph-common']...
> Non-z
Hello Cephers,
Has anyone who had this problem found a workaround?
I think this bug represents well what this is all about:
https://tracker.ceph.com/issues/51429[1]
We have a cluster that hit this and cannot auto-reshard. The LARGE OMAP
objects stay there and cannot be removed even by a PG deep scrub.
As
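(While auto-resharding is broken, a manual reshard can sometimes be attempted;
the bucket name and shard count below are placeholders:)
radosgw-admin reshard status --bucket=mybucket
radosgw-admin bucket reshard --bucket=mybucket --num-shards=101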
Wonderful!
> On Jan 7, 2022, at 8:50 AM, Patrick Donnelly wrote:
>
> We expect a release candidate in about a month. You can use that if
> you're very eager. Otherwise, Quincy should be out sometime in March.
>
> On Thu, Jan 6, 2022 at 10:22 AM Aaron Oneal wrote:
>>
>> Thank you, looking forw
Hello,
On CentOS Stream 9, when I try to install the ceph packages from the
Pacific release, I get this error message:
[root@ceph00 ~]# cephadm install ceph-common
Installing packages ['ceph-common']...
Non-zero exit code 1 from yum install -y ceph-common
yum: stdout Ceph x86_64
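(One hedged thing to verify before retrying is that the Pacific repository is
actually set up for this distro, e.g. via cephadm itself; it is also possible
that Pacific packages were simply not yet published for el9 at the time:)
cephadm add-repo --release pacific
cephadm install ceph-common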
We expect a release candidate in about a month. You can use that if
you're very eager. Otherwise, Quincy should be out sometime in March.
On Thu, Jan 6, 2022 at 10:22 AM Aaron Oneal wrote:
>
> Thank you, looking forward to that. Any interim workarounds? Roadmap makes me
> think Quincy might not
In my test cluster the ceph-mgr is continuously logging to file like this:
2022-01-07T16:24:09.890+ 7fc49f1cc700 0 log_channel(cluster) log
[DBG] : pgmap v832: 1 pgs: 1 active+undersized; 0 B data, 5.9 MiB used,
16 GiB / 18 GiB avail
2022-01-07T16:24:11.890+ 7fc49f1cc700 0 log_channel
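(If the goal is to quiet those pgmap lines, one knob worth trying, hedged
since behaviour differs between releases, is the cluster log file level:)
ceph config get mon mon_cluster_log_file_level
ceph config set mon mon_cluster_log_file_level info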
Hi,
I'm currently trying to enable io_uring for the OSDs in our cephadm
deployment with the commands below.
ceph config set osd bdev_ioring true
ceph config set osd bdev_ioring_hipri true
ceph config set osd bdev_ioring_sqthread_poll true
However, I've run into an issue similar to this bug.
Bug #47661
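(If the OSDs fail to start with io_uring enabled, a hedged way to back the
change out is to remove the options again and then restart the affected OSD
daemons:)
ceph config rm osd bdev_ioring
ceph config rm osd bdev_ioring_hipri
ceph config rm osd bdev_ioring_sqthread_poll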
Thanks, that worked
On Fri, Jan 7, 2022 at 1:27 PM Eugen Block wrote:
>
> Have you also tried this?
>
> # ceph orch daemon restart osd.12
>
> Without the "daemon" you would try to restart an entire service called
> "osd.12" which obviously doesn't exist. With "daemon" you can restart
> specific d
Have you also tried this?
# ceph orch daemon restart osd.12
Without the "daemon" you would try to restart an entire service called
"osd.12" which obviously doesn't exist. With "daemon" you can restart
specific daemons.
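(For reference, service names and daemon names can be listed separately,
which makes the distinction visible:)
ceph orch ls
ceph orch ps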
Quoting Manuel Holtgrewe:
Dear all,
I'm running Pacific 16.2.7 w
Dear all,
I'm running Pacific 16.2.7 with cephadm.
I had a network-related issue with a host, and the 7 OSDs on that host
were marked as down/out. I marked the OSDs as in again, but ceph orch
keeps the daemons in the "stopped" state. I cannot use "ceph orch
(re)start" on the OSD daemons:
# ceph orch
Hi,
On 06/01/2022 17:42, Dave Holland wrote:
The right solution appears to be to configure ceph-ansible to use
/dev/disk/by-path device names, allowing for the expander IDs being
embedded in the device name -- so those would have to be set per-host
with host vars. Has anyone done that change fr
Dear all:
I have a ceph cluster with a pool named vms: size 1, 150 OSDs, 1024 PGs,
Bluestore.
Some PGs are located on osd.74, e.g. pg 10.71.
I did the following steps:
1. stop osd.74.
2. make a backup with ceph-objectstore-tool --data-path
/var/lib/ceph/osd/ceph-74 --type bluestore --op export --file 10.71.p
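(For completeness, and hedged since the original command is truncated, the
matching restore would presumably be an import of the same file while the OSD
is stopped; the file name below is a placeholder:)
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-74 --type bluestore --op import --file 10.71.export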