Hello,
I have installed and bootstrapped a Ceph manager node via cephadm with the
options:
--initial-dashboard-user admin --initial-dashboard-password
[PASSWORD] --dashboard-password-noupdate
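For reference, a minimal sketch of the full bootstrap invocation (the mon IP
here is a placeholder I added; the dashboard flags are the ones from above):
---snip---
cephadm bootstrap \
  --mon-ip 192.0.2.10 \
  --initial-dashboard-user admin \
  --initial-dashboard-password [PASSWORD] \
  --dashboard-password-noupdate
---snip---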
Everything works fine. I also have the Grafana Board to monitor my
cluster. But the access to Gra
Hi,
you can edit the config file
/var/lib/ceph//grafana.host1/etc/grafana/grafana.ini (created by
cephadm) and then restart the container. This works in my octopus lab
environment.
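A minimal sketch of that workflow (the fsid in the path is a placeholder, and
whether the orchestrator restart command is available depends on your release):
---snip---
# edit the file generated by cephadm
vi /var/lib/ceph/<fsid>/grafana.host1/etc/grafana/grafana.ini
# restart the grafana daemon; restarting the container's systemd
# unit directly should work as well
ceph orch restart grafana
---snip---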
Regards,
Eugen
Quoting Ralph Soika:
Hello,
I have installed and bootstrapped a Ceph manager node via c
Dear Ceph Users,
Can anyone help with guidance on how I can integrate Ceph with OpenStack,
especially RGW?
Regards
Michel
Have you checked
https://docs.ceph.com/en/latest/radosgw/keystone/
?
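That page covers the Keystone auth settings for RGW. A sketch of the kind of
ceph.conf section it walks through (the section name, endpoint, and
credentials are placeholders to adapt):
---snip---
[client.rgw.host1]
rgw_keystone_url = http://keystone.example.com:5000
rgw_keystone_api_version = 3
rgw_keystone_admin_user = swift
rgw_keystone_admin_password = SECRET
rgw_keystone_admin_domain = Default
rgw_keystone_admin_project = service
rgw_keystone_accepted_roles = member,admin
---snip---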
On Thu, 10 Jun 2021 at 10:06, Michel Niyoyita wrote:
>
> Dear Ceph Users,
>
> Can anyone help with guidance on how I can integrate Ceph with OpenStack,
> especially RGW?
>
> Regards
>
> Michel
Hi, we are working on doing something similar, and there are mainly two ways we
integrate it:
* cinder (openstack project) and rbd (ceph) for volumes; this has been working
well for a while.
* swift (openstack project) and rgw (ceph) for object storage; this is under
evaluation.
You might be
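For the cinder/rbd side, the backend section in cinder.conf typically looks
something like this sketch (pool, user, and secret UUID are placeholders):
---snip---
[ceph-rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_secret_uuid = <libvirt-secret-uuid>
---snip---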
Hi,
does nobody have an idea what could cause this issue, or how I could debug it?
In a few days I have to go live with this cluster. If I don't have a
solution, I will have to go live with Nautilus.
Manuel
On Mon, 7 Jun 2021 15:46:18 +0200
Manuel Lausch wrote:
> Hello,
>
> I implemented a new cluster w
On 10.06.21 at 11:08, Manuel Lausch wrote:
> Hi,
>
> does nobody have an idea what could cause this issue, or how I could debug it?
>
> In a few days I have to go live with this cluster. If I don't have a
> solution, I will have to go live with Nautilus.
Hi Manuel,
I had similar issues with Octopus and I a
On 09.06.21 at 13:52, Ilya Dryomov wrote:
> On Wed, Jun 9, 2021 at 1:36 PM Peter Lieven wrote:
>> Am 09.06.21 um 13:28 schrieb Ilya Dryomov:
>>> On Wed, Jun 9, 2021 at 11:24 AM Peter Lieven wrote:
Hi,
we currently run into an issue where an rbd ls for a namespace returns
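For context, listing images in a namespace looks roughly like this (the pool
and namespace names are made up for illustration):
---snip---
# list the namespaces defined in a pool
rbd namespace ls --pool mypool
# list images inside one namespace
rbd ls mypool/mynamespace
---snip---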
We have a similar setup, way smaller though (~120 osds right now) :)
We have different capped VMs, but most have a 500 write / 1000 read IOPS cap; you
can see it in effect here:
https://cloud-ceph-performance-tests.toolforge.org/
We are currently running Octopus v15.2.11.
It's a very 'bare' UI (un
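If the caps are applied on the OpenStack side, flavor extra specs are one
common way to do it; a hedged sketch (the flavor name and exact values are
assumptions about our setup):
---snip---
openstack flavor set capped-flavor \
  --property quota:disk_read_iops_sec=1000 \
  --property quota:disk_write_iops_sec=500
---snip---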
Hi,
thanks a lot for this hint! Yes I now can edit the file and restart the
grafana host with
# ceph orch stop grafana
# ceph orch start grafana
And the new configuration is used.
What I expected was that I could define a different path on my host, like
/home/grafana.ini, which ceph would fetch
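I haven't verified this on Octopus, but newer cephadm docs describe pushing a
custom grafana.ini through the config-key store and reconfiguring the service;
a sketch under that assumption:
---snip---
# store a custom config for the grafana service (availability of this
# key is an assumption; check the cephadm docs for your release)
ceph config-key set mgr/cephadm/services/grafana/grafana.ini -i /home/grafana.ini
# have cephadm regenerate the container config with the new file
ceph orch reconfig grafana
---snip---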
Small correction in my mail below, I meant to say Octopus and not Nautilus, so
I am running ceph 15.2.13.
‐‐‐ Original Message ‐‐‐
On Wednesday, June 9, 2021 2:25 PM, mabi wrote:
> Hello,
>
> I replaced an OSD disk on one of my Nautilus OSD nodes, which created a new
> osd number. Now c
Thanks Eugen for your answer. I saw it a bit late, because from the ceph manager
web interface I had managed to get rid of that OSD by doing it the "Windows" way
and clicking on the purge option. That worked.
So I suppose that your command "ceph osd purge" must have worked as well, I just
did not find this c
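For the record, the CLI equivalent of that purge button should be along the
lines of (the osd id is a placeholder):
# ceph osd purge <osd-id> --yes-i-really-mean-it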
You could play with the grafana-cli and see if it brings you anywhere.
You seem to be able to override the default config file/directory, but
I haven't tried that myself yet:
---snip---
host1:~ # podman exec c8167ed2efde grafana-cli --help
NAME:
Grafana CLI - A new cli application
USAGE:
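# As a hedged example of overriding the config: grafana-cli accepts a
# global --config flag, but whether a given subcommand honors it is
# something to test yourself (the path below is a placeholder):
podman exec c8167ed2efde grafana-cli --config /custom/grafana.ini plugins ls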
Hi Pritha
My answers inline.
Forgot to add I'm on Ceph 1.2.1
> How did you check whether the role was created in tenant1 or tenant2?
> It shouldn't be created in tenant2, if it is, then it's a bug, please open
> a tracker issue for it.
>
I checked that with
radosgw-admin role list --tenant tenan
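(i.e., listing the roles under each tenant separately, along the lines of:)
---snip---
radosgw-admin role list --tenant tenant1
radosgw-admin role list --tenant tenant2
---snip---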
Hi Daniel,
Yes, it looks like a bug in the way the role name is being parsed in the
code. Please open a tracker issue for the same, and I'll fix it when I can.
Thanks,
Pritha
On Thu, Jun 10, 2021 at 5:09 PM Daniel Iwan wrote:
> Hi Pritha
>
> My answers inline.
> Forgot to add I'm on Ceph 1.2.1
Hi
We're running Ceph Nautilus 14.2.21 (going to the latest Octopus in a few
weeks) as the volume and instance backend for our OpenStack VMs. Our
clusters run somewhere between 500 and 1000 OSDs on SAS HDDs with NVMes
as journal and DB devices.
Currently we do not have our VMs capped on IOPS and thro
Hi everyone,
We're about to start Ceph Month 2021 with Casey Bodley giving an RGW update!
Afterward we'll have two BoF discussions on:
9:30 ET / 15:30 CEST [BoF] Ceph in Research & Scientific Computing
[Kevin Hrpcek]
10:10 ET / 16:10 CEST [BoF] The go-ceph get together [John Mulligan]
Join us n
Hi Peter,
your suggestion pointed me to the right spot.
I didn't know about the feature that Ceph will read from replica
PGs.
Digging in, I found two functions in osd/PrimaryLogPG.cc:
"check_laggy" and "check_laggy_requeue". In both, there is first a check whether
the peers have the octopus features. If
Hi,
At which point in the update procedure did you
ceph osd require-osd-release octopus
?
And are you sure it was set to nautilus before the update? (`ceph osd dump`
will show)
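For example, something like this should show it:
# ceph osd dump | grep require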
Cheers, Dan
On Thu, Jun 10, 2021, 5:45 PM Manuel Lausch wrote:
> Hi Peter,
>
> your suggestion pointed me t
Hi David,
That is very helpful, thank you. When looking at the graphs I notice that
the bandwidth used looks very low. Or am I misinterpreting
the bandwidth graphs?
Regards
Marcel
David Caro schreef op 2021-06-10 11:49:
We have a similar setup, way smaller though (~120 osds ri
Hi Dan,
The cluster was initially deployed with Nautilus (14.2.20). I am sure
require-osd-release was nautilus at this point.
I set this to octopus after all components were updated.
Manuel
On Thu, 10 Jun 2021 17:54:49 +0200
Dan van der Ster wrote:
> Hi,
>
> At which point in the upda
On 06/10 14:05, Marcel Kuiper wrote:
> Hi David,
>
> That is very helpful, thank you. When looking at the graphs I notice that the
> bandwidth used looks very low. Or am I misinterpreting the
> bandwidth graphs?
Hey, sorry for the delay, something broke :)
Which graphs specifically ar
OK, that sounds correct. This was the only thing that came to mind which might
explain your problem.
Cheers, Dan
On Thu, Jun 10, 2021, 6:06 PM Manuel Lausch wrote:
> Hi Dan,
>
> The cluster was initially deployed with Nautilus (14.2.20). I am sure
> require-osd-release was nautilus at this point.
>
Hi,
I'm currently reading the documentation about stretched clusters.
I would like to know whether it's needed or not with this kind of 3-DC
setup:
          3km (0.2ms)
    DC1 ------------- DC2
     |                 |
     | 30km (3ms)      | 30km (2-3ms)
     |                 |
     +------- DC3 -----+
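With three sites like this, a plain replicated CRUSH rule with the datacenter
as failure domain is usually the first thing to evaluate; stretch mode is
aimed at the two-sites-plus-tiebreaker layout. A sketch, assuming datacenter
buckets already exist in the CRUSH map and a pool named mypool:
---snip---
ceph osd crush rule create-replicated replicated_3dc default datacenter
ceph osd pool set mypool crush_rule replicated_3dc
---snip---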
Hi Robert,
I just launched a 16.2.4 cluster and I can't reproduce that error. Could
you please file a tracker issue at
https://tracker.ceph.com/projects/dashboard/issues/new and attach the mgr
logs and cluster details (e.g. number of mgrs)?
Thanks!
Kind Regards,
Ernesto
On Thu, Jun 10, 2021 at 4:05 AM
Hi Ernesto, I couldn't register for an account there, it was giving me a 503,
but I think the issue is the deployed container. I managed to clean it up, but
I am not 100% sure of the cause. I think it is the referenced container; all of
the unit.run files reference
docker.io/ceph/ceph@sha256:54e
I cannot enable cephadm because it cannot find the remoto lib,
even when I install it using "pip3 install remoto" and then install it
from the deb package built from the git sources at
https://github.com/alfredodeza/remoto/
If I type "import remoto" in a python3 prompt, it works.
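One way to narrow this down is to compare which interpreter finds the module
with the one ceph-mgr actually runs; a sketch (the mismatch idea is an
assumption about the cause):
---snip---
# where does the CLI interpreter resolve it from?
python3 -c 'import remoto; print(remoto.__file__)'
# then verify the interpreter version/path that the ceph-mgr process
# runs can see the same module; a mismatch between the two environments
# would explain the CLI import working while the mgr fails
---snip---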
--
Alfrenovsky
Hi all
We are getting a new Samba SMB fileserver which mounts our CephFS to export some
shares. What would be a good (or better) network setup for that server?
Should I configure two interfaces, one for the SMB share export towards our
workstations and desktops and one towards the Ceph cluster?
Or wo
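To make the setup concrete, a hedged sketch of such an export (monitor
address, credentials, and share path are placeholders):
---snip---
# kernel mount of CephFS on the fileserver
mount -t ceph 192.0.2.1:6789:/ /mnt/cephfs -o name=samba,secretfile=/etc/ceph/samba.secret

# smb.conf share pointing at the mounted filesystem
[shares]
    path = /mnt/cephfs/shares
    read only = no
---snip---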