File "/lib64/python3.6/concurrent/futures/_base.py", line 432, in result
return self.__get_result()
File "/lib64/python3.6/concurrent/futures/_base.py", line 384, in
__get_result
raise self._exception
File "/usr/share/ceph/mgr/cephadm/
h I think it likely is)? We had this issue in
> 17.2.0 when using a non-root ssh user. If it's the same thing, it should be
> fixed for the 17.2.1 release which is planned to be soon.
>
> Thanks,
> - Adam King
>
> On Thu, Jun 23, 2022 at 11:20 AM Robert Reihs
> wrote:
Thanks for the help, it was the same issue.
Best
Robert
On Thu, Jun 23, 2022 at 8:37 PM Robert Reihs wrote:
> Hi Adam,
>
> Yes looks like the same error, I will test it with the root user.
>
> Thanks for the quick help.
> Best
> Robert
>
> On Thu, Jun 23, 2022
Hi, I tested with the 17.2.1 release with a non root ssh user and it worked
fine.
Best
Robert
On Thu, Jun 23, 2022 at 9:12 PM Robert Reihs wrote:
> Thanks for the help, it was the same issue.
>
> Best
> Robert
>
> On Thu, Jun 23, 2022 at 8:37 PM Robert Reihs
> wrote:
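As background for the non-root SSH user issue discussed above: the user cephadm connects over SSH with can be inspected and changed from the CLI. A minimal sketch, where "deploy" is a placeholder user that has to exist on every host and have passwordless sudo:

  # show the SSH config cephadm currently uses to reach the hosts
  ceph cephadm get-ssh-config
  # switch the connection user (placeholder name "deploy")
  ceph cephadm set-user deploy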
Hi,
We are setting up a test cluster with cephadm. We would like to set
different device classes for the OSDs. Is there a possibility to set this
via the service specification YAML file? This is the configuration for the
OSD service:
---
service_type: osd
service_id: osd_mon_disk_layout_fast
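The spec above is cut off in the archive. Purely as an illustration, an OSD spec that pins a CRUSH device class could look roughly like this; it assumes the crush_device_class field is available in your release, and the host pattern, device filter and class name are placeholders:

  cat > osd_fast.yaml <<'EOF'
  service_type: osd
  service_id: osd_mon_disk_layout_fast
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      rotational: 0              # only pick non-rotational (SSD/NVMe) devices
    crush_device_class: fast     # hypothetical class name
  EOF
  ceph orch apply -i osd_fast.yaml

If the field is not available in your release, the class can still be changed after the OSDs exist, e.g. ceph osd crush set-device-class fast osd.3.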
Hi,
I am very new to the Ceph world and working on setting up a cluster. We
have two CephFS filesystems (slow and fast); everything is running and
showing up in the dashboard. I can mount one of the filesystems (it mounts
the default one). How can I specify the filesystem in the mount command?
Ceph V
Dear Burkhard,
Thanks for the help, it also works when specified in the mount options.
Best
Robert
On Fri, Jul 8, 2022 at 11:39 AM Burkhard Linke <
burkhard.li...@computational.bio.uni-giessen.de> wrote:
> Hi,
>
> On 08.07.22 11:34, Robert Reihs wrote:
> > Hi,
> > I am very
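For readers hitting the same question in the archive: the filesystem can be selected directly in the mount options, as mentioned above. A minimal sketch, assuming a placeholder user "myuser" and a filesystem named "cephfs_fast"; recent kernel clients understand fs=, older ones use mds_namespace= instead:

  # recent kernels: select the filesystem with the fs= option
  mount -t ceph mon1:6789:/ /mnt/fast \
    -o name=myuser,secretfile=/etc/ceph/myuser.secret,fs=cephfs_fast
  # older kernels: the same selection via mds_namespace=
  mount -t ceph mon1:6789:/ /mnt/fast \
    -o name=myuser,secretfile=/etc/ceph/myuser.secret,mds_namespace=cephfs_fast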
Hi,
We have a problem with deploying radosgw via cephadm. We have a Ceph
cluster with 3 nodes deployed via cephadm. Pool creation, CephFS and block
storage are working.
ceph version 17.2.1 (ec95624474b1871a821a912b8c3af68f8f8e7aa1) quincy
(stable)
The service spec is like this for the rgw:
---
s
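The spec is truncated above. For illustration only, a minimal RGW spec applied through cephadm might look like the following sketch; the service id, placement label, count and port are placeholders:

  cat > rgw.yaml <<'EOF'
  service_type: rgw
  service_id: default
  placement:
    count: 3
    label: rgw                  # assumes hosts carry an "rgw" label
  spec:
    rgw_frontend_port: 8080
  EOF
  ceph orch apply -i rgw.yaml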
Hi,
we have discovered this solution for CSI plugin permissions:
https://github.com/ceph/ceph-csi/issues/2687#issuecomment-1014360244
We are not sure of the implications of adding the mgr permissions to the
(non-admin) user.
The documentation seems to be sparse on this topic. Is it ok to give a
lim
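The message is cut off here, but the linked issue is about granting mgr capabilities to the (non-admin) CSI user. Purely as a sketch of what such a change looks like with ceph auth caps, where the client name and pool are placeholders; note that the command replaces the full capability set, so the existing caps have to be restated:

  ceph auth caps client.csi-rbd-user \
    mon 'profile rbd' \
    mgr 'allow rw' \
    osd 'profile rbd pool=csi-pool'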
t
On Tue, Jul 12, 2022 at 5:22 PM Robert Reihs wrote:
> Hi,
>
> We have a problem with deploying radosgw via cephadm. We have a Ceph
> cluster with 3 nodes deployed via cephadm. Pool creation, CephFS and block
> storage are working.
>
> ceph version 17.2.1 (ec95624474b1871a82
message be changed? Or is
this a bug?
Best
Robert Reihs
On Fri, Jul 15, 2022 at 3:47 PM Robert Reihs wrote:
> Hi,
> Well, I have no luck yet solving the issue, but I can add some
> more information. The system pools ".rgw.root" and "default.rgw.log" are
> not creat
response.
ceph version 17.2.1 (ec95624474b1871a821a912b8c3af68f8f8e7aa1) quincy
(stable) installed with cephadm.
Any idea where the problem could be?
Thanks
Robert Reihs
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
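For anyone debugging the same symptom, a quick way to see which RGW pools were actually created, plus a hedged manual workaround (not necessarily the proper fix), is sketched below; the pool name matches the one mentioned in the thread:

  # list all pools and check for .rgw.root / default.rgw.* entries
  ceph osd lspools
  # hypothetical manual workaround: create the missing system pool and tag it for rgw
  ceph osd pool create .rgw.root
  ceph osd pool application enable .rgw.root rgw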
>> default.rgw.buckets.data
>> default.rgw.buckets.non-ec
>>
>> Is this normal behavior? Should the error message then be changed? Or is
>> this a bug?
>> Best
>> Robert Reihs
>>
>>
>> On Fri, Jul 15, 2022 at 3:47 PM Robert Reihs
>> w
, so the added config does not persist.
Best
Robert Reihs
On Mon, Jul 18, 2022 at 3:33 PM Robert Reihs wrote:
> Hi everyone,
> I have a problem with the haproxy settings for the rgw service. I
> specified the service in the service specification:
> ---
> service_type: rgw
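Since cephadm regenerates the haproxy configuration from the service spec, hand edits to haproxy.cfg are expected to be overwritten; the settings have to live in the spec itself. For illustration, a minimal ingress spec in front of an rgw service might look roughly like this sketch, where the backend service name, VIP and ports are placeholders:

  cat > ingress.yaml <<'EOF'
  service_type: ingress
  service_id: rgw.default
  placement:
    count: 2
  spec:
    backend_service: rgw.default   # must match the rgw service name
    virtual_ip: 192.0.2.10/24      # placeholder VIP
    frontend_port: 443
    monitor_port: 1967
  EOF
  ceph orch apply -i ingress.yaml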
Bug Reported:
https://tracker.ceph.com/issues/56660
Best
Robert Reihs
On Tue, Jul 19, 2022 at 11:44 AM Redouane Kachach Elhichou <
rkach...@redhat.com> wrote:
> Great, thanks for sharing your solution.
>
> It would be great if you can open a tracker describing the issue so it