> …sync traffic from client-facing radosgw.
>
> I have included a diagram below. I am unsure if pictures come through the
> mailing lists.
> [diagram omitted]
> From: Ulrich Klein <ulrich.kl...@ulrichklein.de>
> Date: Friday, May 19, 2023 at 5:57 AM
> To: T
>    "…nrm.net:8080",
>    "http://west02.noam.lnrm.net:8080",
>    "http://west03.noam.lnrm.net:8080",
>    "http://west04.noam.lnrm.net:8080"    <<-- sync node
> ],
> "log_meta": "…
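Such an endpoint list lives in the zonegroup configuration; it can be inspected
and edited roughly like this (a sketch; commit only after verifying the edits):

radosgw-admin zonegroup get > zg.json
# edit the "endpoints" arrays in zg.json, e.g. list only the sync nodes
radosgw-admin zonegroup set --infile zg.json
radosgw-admin period update --commit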
Hi,
Might be a dumb question …
I'm wondering how I can set those config variables in some but not all RGW
processes?
I'm on cephadm 17.2.6. On 3 nodes I have RGWs. The ones on 8080 are behind
haproxy for users; the ones on 8081 I'd like to use for sync only.
# ceph orch ps | grep rgw
rgw.max.maxvm
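One approach that should work (a sketch, not verified on 17.2.6): each cephadm
RGW daemon reads its own config section named after the daemon, so options can
be set per daemon rather than cluster-wide. The daemon names below are
placeholders; use the exact names shown by ceph orch ps.

# hypothetical daemon names, copied from `ceph orch ps`
ceph config set client.rgw.s3.node1.aaaaaa rgw_run_sync_thread false    # client-facing RGW on 8080
ceph config set client.rgw.sync.node1.bbbbbb rgw_run_sync_thread true  # sync-only RGW on 8081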
Hi,
I’ve tested that combination once last year. My experience was similar: it was
dead slow.
If I remember correctly, my conclusion was that Veeam was sending lots of
rather small objects very slowly, without any parallelism.
But apart from the cruel slowness I didn’t have problems of the “bu
Hi,
Yet another question about OSD memory usage ...
I have a test cluster running. When I do a ceph orch ps I see for my osd.11:
ceph orch ps --refresh
NAME     HOST    PORTS   STATUS   REFRESHED   AGE   MEM USE   MEM LIM   VERSION   IMAGE ID   CONTAINER ID
osd.11
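As far as I can tell, the MEM LIM column reflects osd_memory_target, which can
be inspected and adjusted per OSD (a sketch; the value is only an example):

ceph config get osd.11 osd_memory_target
ceph config set osd.11 osd_memory_target 4294967296   # 4 GiB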
I use something like "^ceph(0[1-9]|1[0-9])$", but in a script that checks a
parameter for a "correct" ceph node name, like in:
wantNum=$1
if [[ $wantNum =~ ^ceph(0[2-9]|1[0-9])$ ]] ; then
    wantNum=${BASH_REMATCH[1]}
fi
which gives me the number if it is in the range 02-19.
Dunno, if th
Hi,
I'm experimenting with notifications for S3 buckets.
I got it working with notifications to HTTP(S) endpoints.
What I did:
Create a topic:
# cat create_topic.data
Action=CreateTopic
&Name=topictest2
&Attributes.entry.1.key=verify-ssl&Attributes.entry.1.value=false
&Attributes.entry.2.key=use
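The same request can also be made with awscli against the RGW endpoint (a
sketch; the endpoint and push-endpoint URLs are placeholders):

aws --endpoint-url http://localhost:8080 sns create-topic \
    --name topictest2 \
    --attributes push-endpoint=http://receiver:8888,verify-ssl=false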
Hi,
I have a problem with a full cluster and getting it back to a healthy state.
Fortunately it's a small test cluster with no valuable data in it.
It is used exclusively for RGW/S3, running 17.2.3.
I intentionally filled it up via rclone/S3 until it got into HEALTH_ERR, to see
what would happen.
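What usually helps in that state (a sketch; use with care and revert
afterwards): temporarily raise the full threshold so deletes can proceed again:

ceph osd set-full-ratio 0.97
# delete data via rclone/S3, let RGW garbage collection catch up
ceph osd set-full-ratio 0.95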
Hi,
I’m wondering if this problem will ever get fixed.
This multipart-orphan problem has made it to the list multiple times now; the
tickets are up to six years old … and nothing changes.
It screws up per-user space accounting and uses up space for nothing.
I’d open another ticket with easy step
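As a partial mitigation (it prevents new leftovers; it does not reclaim
already-orphaned objects), incomplete multipart uploads can be aborted
automatically with a lifecycle rule (a sketch; endpoint and bucket name are
placeholders):

aws --endpoint-url http://rgw:8080 s3api put-bucket-lifecycle-configuration \
    --bucket mybucket \
    --lifecycle-configuration '{
      "Rules": [{
        "ID": "abort-incomplete-mpu",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},
        "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 1}
      }]
    }'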
orphan-list` to determine rados objects that aren’t
> referenced by any bucket indices. Those objects could be removed after
> verification since this is an experimental feature.
>
> Eric
> (he/him)
>
>> On Sep 5, 2022, at 10:44 AM, Ulrich Klein
>> wrote:
>>
>
years.
Ciao, Uli
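For reference, the experimental scan mentioned above takes the RGW data pool as
its argument (a sketch; the pool name is the default one and may differ):

rgw-orphan-list default.rgw.buckets.data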
> On 6. Sep 2022, at 11:06, Rok Jaklič wrote:
>
> Thanks for the info.
>
> Is there any bug report open?
>
> On Mon, Sep 5, 2022 at 4:44 PM Ulrich Klein <ulrich.kl...@ulrichklein.de> wrote:
> Looks like the old problem of lost mul
Looks like the old problem of lost multipart upload fragments. It has been
haunting me in all versions for more than a year. I haven't found any way of
getting rid of them.
Even deleting the bucket seems to leave the objects in the rados pool forever.
Ciao, Uli
> On 05.09.2022 at 15:19, Rok J
Yo, I’m having the same problem and can easily reproduce it.
See
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/XOQXZYOWYMMQBWFXMHYDQUJ7LZZPFLSU/
And similar ones.
The problem still exists in Quincy 17.2.0, but it looks like it's too low a
priority to be fixed.
Ciao, Uli
> On 09.
Hi,
I just tried again on a Quincy 17.2.0.
Same procedure, same problem.
I just wonder if nobody else sees that problem?
Ciao, Uli
> On 18. 03 2022, at 12:18, Ulrich Klein wrote:
>
> I tried it on a mini-cluster (4 Raspberries) with 16.2.7.
> Same procedure, same effect. I ju
it? Removing the
> label works for me.
>
> Regards,
> Gene Kuo
>
>> On Apr 27, 2022, at 18:21, Ulrich Klein wrote:
>>
>> Hi,
>>
>> Yesterday I upgraded my smallest test system, 4 Raspberries 4B, from Pacific
>> 16.2.7 (cephadm/containeriz
Hi,
Yesterday I upgraded my smallest test system, 4 Raspberries 4B, from Pacific
16.2.7 (cephadm/containerized) to 17.2.0 using
ceph orch upgrade start --ceph-version 17.2.0
It mostly worked ok, but wouldn't have finished without manual intervention.
Apparently each time a mgr is upgraded the pr
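When an upgrade stalls like that, progress and errors can at least be followed
with:

ceph orch upgrade status
ceph -W cephadm    # stream cephadm's event log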
After a bunch of attempts to get multiple zonegroups with RGW multi-site to
work, I have a question:
Has anyone successfully created a working setup with multiple zonegroups and
RGW multi-site using a cephadm/ceph orch installation of Pacific?
Ciao, Uli
> On 19. 04 2022, at 14:33, Ulr
Hi,
I'm trying to do the same as Mark. Basically the same problem. Can’t get it to
work.
The --master option doesn’t make much of a difference for me.
Any other idea, maybe?
Ciao, Uli
On Cluster #1 ("nceph"):
radosgw-admin realm create --rgw-realm=acme --default
radosgw-admi
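For comparison, the usual bootstrap sequence on the first cluster looks roughly
like this (a sketch following the standard multi-site docs; zonegroup/zone
names and endpoints are examples):

radosgw-admin realm create --rgw-realm=acme --default
radosgw-admin zonegroup create --rgw-zonegroup=zg1 \
    --endpoints=http://nceph:8080 --master --default
radosgw-admin zone create --rgw-zonegroup=zg1 --rgw-zone=zone1 \
    --endpoints=http://nceph:8080 --master --default
radosgw-admin period update --commit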
with
your request”.
Just to make sure: I am not at all involved in Ceph development, so don’t send
a feature request to me :)
Ciao, Uli
> On 22. 03 2022, at 09:23, Kai Stian Olstad wrote:
>
> On 21.03.2022 15:35, Ulrich Klein wrote:
>> RFC 7233
>> 4.4 <https://datat
…an invalid partial request until they have received a complete
representation. Thus, clients cannot depend on receiving a 416 (Range Not
Satisfiable) response even when it is most appropriate.
> On 21. 03 2022, at 15:11, Ulrich Klein wrote:
>
> With a bit of HTTP background I’d say
With a bit of HTTP background I’d say:
bytes=0-100 means: first byte through the 101st byte, where the first byte is
byte #0.
On an empty object there is no first byte, i.e. the range is not satisfiable
==> 416.
It should be the same as bytes=1-100 on a single-byte object.
200 OK should only be correct if the serv
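A quick way to check what a given server actually returns (the URL is a
placeholder):

curl -sS -o /dev/null -w '%{http_code}\n' \
     -H 'Range: bytes=0-100' http://rgw:8080/bucket/empty-object
# expect 416 on an empty object, 206 when the range is satisfiable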
Hi,
I'm not the expert, either :) So if someone with more experience wants to
correct me, that’s fine.
But I think I have a similar setup with a similar goal.
I have two clusters, purely for RGW/S3.
I have a realm R in which I created a zonegroup ZG (not the low tax Kanton:) )
On the primary clu
/17/22 17:16, Ulrich Klein wrote:
>> Hi,
>>
>> My second attempt to get help with a problem I've been trying to solve for
>> about six months now.
>>
>> I have a Ceph 16.2.6 test cluster, used almost exclusively for providing
>> RGW/S3 service, similar to
on, it's taking a while.
>
> Matt
>
> On Thu, Mar 17, 2022 at 10:31 AM Soumya Koduri wrote:
>>
>> On 3/17/22 17:16, Ulrich Klein wrote:
>>> Hi,
>>>
>>> My second attempt to get help with a problem I'm trying to solve for about
>>>
removed.
Ciao, Uli
> On 17. 03 2022, at 15:30, Soumya Koduri wrote:
>
> On 3/17/22 17:16, Ulrich Klein wrote:
>> Hi,
>>
>> My second attempt to get help with a problem I've been trying to solve for
>> about six months now.
>>
>> I have a Ceph 16.2.6 t
Hi,
My second attempt to get help with a problem I've been trying to solve for
about six months now.
I have a Ceph 16.2.6 test cluster, used almost exclusively for providing RGW/S3
service, similar to a production cluster.
The problem I have is this:
A client uploads (via S3) a bunch of large files int
t does.
I removed it
replaceBraces(e) {
    return e.replace(/\(/g, "{")
            .replace(/\)/g, "}")
            .replace(/\[/g, "{")
            .replace(/]/g, "}");
}
Hi,
I just upgraded a small test cluster on Raspberries from pacific 16.2.6 to
16.2.7.
The upgrade went without major problems.
But now the Ceph Dashboard doesn't work anymore in Safari.
It complains about main..js "Line 3 invalid regular expression: invalid
group specifier name".
It works with
Hi,
Disclaimer: I'm in no way a Ceph expert. Have just been tinkering with Ceph/RGW
for a larger installation for a while.
My understanding is that the data between zones in a zonegroup is synced by
default. And that works well, most of the time.
If you, as I had to, want to restrict what data
Hi Francois,
> For the mpu's it is less important as I can fix them with some scripts.
Would you mind sharing how you get rid of these left-over mpu objects?
I’ve been trying to get rid of them without much success.
The "radosgw-admin bucket check --bucket --fix --check-objects" I tried, but it
… and a "chattr +i" on the file will keep your changes from being
over-/re-written at arbitrary points in time :) Been there…
Ciao, Uli
> On 04.02.2022 at 11:05, Eugen Block wrote:
>
> Hi,
>
> you should be able to change in the config file:
>
> /var/lib/ceph//prometheus.ses7-host1/etc
Not sure if that's a bug or a feature.
Ciao, Uli
> On 01. 02 2022, at 18:09, Ulrich Klein wrote:
>
> Hi,
>
> Maybe someone who knows the commands can help me with my problem
>
> I have a small 6-node cluster running with 16.2.6 using cephadm and another
> one
Hi,
Maybe someone who knows the commands can help me with my problem
I have a small 6-node cluster running 16.2.6 using cephadm, and another one
with the same versions.
Both clusters are exclusively used for RGW/S3.
I have a realm myrealm, a zonegroup zg1, and zones on both clusters, one i
Hi,
My first question on this list … second attempt, because the first one didn’t
make it (I hope).
I'm trying out RGW multi-site sync policies.
I have a test/PoC setup with 16.2.6 using cephadm (which, by the way, I DO like)
I only use RGW/S3
There is one realm (myrealm), one zonegroup (myzg) an
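For reference, the kind of commands involved look roughly like this (a sketch
from the sync-policy docs; group/flow/pipe IDs and zone names are examples):

radosgw-admin sync group create --group-id=group1 --status=allowed
radosgw-admin sync group flow create --group-id=group1 \
    --flow-id=flow1 --flow-type=symmetrical --zones=zone1,zone2
radosgw-admin sync group pipe create --group-id=group1 \
    --pipe-id=pipe1 --source-zones='*' --source-bucket='*' \
    --dest-zones='*' --dest-bucket='*'
radosgw-admin period update --commit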