... you should be ready to go. :)
Rok
On Mon, Feb 12, 2024 at 6:43 PM Michael Worsham
wrote:
> So, just so I am clear – in addition to the steps below, will I also need
> to install NGINX or HAProxy on the server to act as the front end?
>
>
>
> -- M
>
>
>
Hi,
the recommended methods of deploying RGW are IMHO overly complicated. You can
also get the service up manually with something as simple as:
[root@mon1 bin]# cat /etc/ceph/ceph.conf
[global]
fsid = 12345678-XXXx ...
mon initial members = mon1,mon3
mon host = ip-mon1,ip-mon2
auth cluster required =
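For the gateway itself, a minimal client section could then be appended. This is only a sketch: the instance name, port, and paths are assumptions, not taken from the message above.

```ini
[client.rgw.gw1]
# hypothetical instance name; start it with: radosgw -n client.rgw.gw1
rgw frontends = beast port=7480
keyring = /etc/ceph/ceph.client.rgw.gw1.keyring
log file = /var/log/ceph/client.rgw.gw1.log
```

The beast frontend serves plain HTTP on the given port itself, so NGINX or HAProxy in front is optional rather than required.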
Hi,
shouldn't the ETag of a "parent" object change when "child" objects are added
on S3?
Example:
1. I add an object to test bucket: "example/" - size 0
"example/" has an etag XYZ1
2. I add an object to test bucket: "example/test1.txt" - size 12
"example/test1.txt" has an etag XYZ2
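For a non-multipart upload, the ETag is just the MD5 of that object's own bytes, so adding "example/test1.txt" cannot change the ETag of the zero-byte "example/" marker. A small local sketch of the computation (the 12-byte body below is a made-up stand-in for test1.txt's content):

```python
import hashlib

def simple_etag(body: bytes) -> str:
    # For non-multipart uploads, the ETag is the hex MD5 of the object body.
    return hashlib.md5(body).hexdigest()

parent = simple_etag(b"")             # the zero-byte "example/" marker
child = simple_etag(b"hello, world")  # hypothetical 12-byte "example/test1.txt"

# The parent's ETag depends only on its own (empty) content, so it stays
# the same no matter how many "child" keys share its prefix:
print(parent)  # d41d8cd98f00b204e9800998ecf8427e
```

In other words, "example/" is not a directory that aggregates children; it is an independent object whose ETag only changes if its own body changes.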
Hi,
I have set the following permissions for the admin user:
radosgw-admin caps add --uid=admin --tenant=admin --caps="users=*;buckets=*"
Now I would like to upload an object as the admin user into some other
user/tenant's (tester1$tester1) bucket test1.
The other user has uid tester1 and tenant tester1.
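One way to address another tenant's bucket from an S3 client is the "tenant:bucket" form described in the RGW multitenancy docs. A sketch assuming boto3 and a hypothetical endpoint and key pair:

```python
def tenant_bucket(tenant: str, bucket: str) -> str:
    # RGW addresses a bucket belonging to another tenant as "<tenant>:<bucket>".
    return f"{tenant}:{bucket}"

def upload_to_tenant(endpoint_url: str, access_key: str, secret_key: str) -> None:
    import boto3  # imported lazily so the helper above stays dependency-free
    s3 = boto3.client(
        "s3",
        endpoint_url=endpoint_url,  # the RGW endpoint, not AWS
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
    )
    # Some clients validate bucket names and reject the colon; path-style
    # addressing is usually the safer choice for tenanted buckets.
    s3.put_object(Bucket=tenant_bucket("tester1", "test1"),
                  Key="hello.txt", Body=b"hello")
```

Whether the admin caps alone are enough, or an explicit grant on the bucket is also needed, is something to verify against the cluster.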
ples. Let me know if you need more
> information.
>
> Yuval
>
> On Tue, Nov 28, 2023 at 10:21 PM Rok Jaklič wrote:
>
>> Hi,
>>
>> I would like to get info if the bucket or object got updated.
>>
>> I can get this info with a changed etag of an objec
Hi,
I would like to get info if the bucket or object got updated.
I can get this info from the changed ETag of an object, but I cannot get an
ETag for a bucket, so I am looking at
https://docs.ceph.com/en/latest/radosgw/notifications/
How do I create a topic and where do I send request with
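For reference, topics are created through RGW's SNS-compatible API, per the notifications docs. A sketch assuming boto3, with a hypothetical endpoint, credentials, and push-endpoint URL:

```python
def topic_attributes(push_endpoint: str) -> dict:
    # Attributes understood by RGW's CreateTopic; push-endpoint names the
    # destination notifications are delivered to (http, amqp, or kafka URI).
    return {"push-endpoint": push_endpoint}

def create_bucket_topic(endpoint_url: str, access_key: str, secret_key: str) -> str:
    import boto3  # lazy import keeps the helper above standalone
    sns = boto3.client(
        "sns",
        endpoint_url=endpoint_url,  # the RGW endpoint, not AWS
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        region_name="default",
    )
    resp = sns.create_topic(
        Name="bucket-updates",  # hypothetical topic name
        Attributes=topic_attributes("http://127.0.0.1:8080/notify"),
    )
    return resp["TopicArn"]
```

The returned topic ARN is then referenced from a bucket notification configuration, e.g. via the S3 client's put_bucket_notification_configuration.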
ng for now
...
after this line ... rgw stopped responding. We had to restart it.
We were just about to upgrade to ceph 17.x... but we had to postpone it
because of this.
Rok
On Fri, Oct 6, 2023 at 9:30 AM Rok Jaklič wrote:
> Hi,
>
> yesterday we changed RGW from civetweb to beast and a
Hi,
yesterday we changed RGW from civetweb to beast and at 04:02 RGW stopped
working; we had to restart it in the morning.
In one rgw log for previous day we can see:
2023-10-06T04:02:01.105+0200 7fb71d45d700 -1 received signal: Hangup from
killall -q -1 ceph-mon ceph-mgr ceph-mds ceph-osd
I can confirm this. ... as we did the upgrade from .10 also.
Rok
On Fri, Sep 8, 2023 at 5:26 PM David Orman wrote:
> I would suggest updating: https://tracker.ceph.com/issues/59580
>
> We did notice it with 16.2.13, as well, after upgrading from .10, so
> likely in-between those two releases.
n existing attempts at diagnosing the issue.
>
> Mark
>
> On 9/7/23 05:55, Rok Jaklič wrote:
> > Hi,
> >
> > we have also experienced several ceph-mgr oom kills on ceph v16.2.13 on
> > 120T/200T data.
> >
> > Is there any tracker about the problem?
Hi,
we have also experienced several ceph-mgr oom kills on ceph v16.2.13 on
120T/200T data.
Is there any tracker about the problem?
Does an upgrade to 17.x "solve" the problem?
Kind regards,
Rok
On Wed, Sep 6, 2023 at 9:36 PM Ernesto Puerta wrote:
> Dear Cephers,
>
> Today brought us an
Hi,
I want to move an existing pool with its data onto SSDs.
I've created crush rule:
ceph osd crush rule create-replicated replicated_ssd default host ssd
If I apply this rule to the existing pool default.rgw.buckets.index, with
180G of data, using the command:
ceph osd pool set default.rgw.buckets.index
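The general pattern is to create the rule and then point the pool at it; a sketch, where the rule and pool names simply mirror the ones above (expect data to migrate, with the usual backfill load, once the rule is applied):

```shell
# create a replicated rule that keeps data on ssd-class OSDs, one copy per host
ceph osd crush rule create-replicated replicated_ssd default host ssd
# point the existing pool at the new rule
ceph osd pool set default.rgw.buckets.index crush_rule replicated_ssd
```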
>
> On 2023. Jun 23., at 19:12, Rok Jaklič wrote:
>
>
>
> We
We are experiencing something similar (slow GET responses) when sending 1k
delete requests, for example, on ceph v16.2.13.
Rok
On Mon, Jun 12, 2023 at 7:16 PM grin wrote:
> Hello,
>
> ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy
> (stable)
>
> There is a single (test)
Hi,
are there any drawbacks to exposing a multi-tenant deployment of RGWs
directly to users, so they can use any S3 client to connect to the service,
or should we put something in front of the RGWs?
How many users can Ceph handle in a multi-tenant deployment?
Kind regards,
Rok
I've searched for rgw_enable_lc_threads and rgw_enable_gc_threads a bit,
but there is little information about those settings. Is there any
documentation in the wild about them?
Are they enabled by default?
On Thu, May 18, 2023 at 9:15 PM Tarrago, Eli (RIS-BCT) <
>
>
> WHO: client. or client.rgw
>
> KEY: rgw_delete_multi_obj_max_num
>
> VALUE: 1
>
> Regards, Joachim
>
> ___
> ceph ambassador DACH
> ceph consultant since 2012
>
> Clyso GmbH - Premier Ceph Foundation Member
>
ete_multi_obj_max_num
>
> rgw_delete_multi_obj_max_num - Max number of objects in a single multi-
> object delete request
> (int, advanced)
> Default: 1000
> Can update at runtime: true
> Services: [rgw]
>
> On Wed, 2023-05-17 at 10:51 +0200, Rok Jaklič wrote:
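Spelled out as a command, the WHO/KEY/VALUE form quoted above corresponds to something like the following (5000 is only an example value, not a recommendation):

```shell
ceph config set client.rgw rgw_delete_multi_obj_max_num 5000
# verify the running value:
ceph config get client.rgw rgw_delete_multi_obj_max_num
```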
Hi,
I would like to delete millions of objects in an RGW instance with:
mc rm --recursive --force ceph/archive/veeam
but it seems to allow only 1000 (or exactly 1002) removals per command.
How can I delete/remove all objects with some prefix?
Kind regards,
Rok
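Since DeleteObjects is capped at rgw_delete_multi_obj_max_num keys per request (1000 by default), deleting millions of objects means looping: list a page of keys under the prefix, delete it in batches of at most 1000, repeat. A sketch assuming boto3 (the client is passed in; bucket and prefix names are just examples):

```python
def batches(keys, size=1000):
    # S3 DeleteObjects accepts at most rgw_delete_multi_obj_max_num
    # (default 1000) keys per call, so split the key list into chunks.
    for i in range(0, len(keys), size):
        yield keys[i:i + size]

def delete_prefix(s3, bucket, prefix):
    # List everything under the prefix page by page and delete in batches.
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        keys = [{"Key": obj["Key"]} for obj in page.get("Contents", [])]
        for chunk in batches(keys):
            s3.delete_objects(Bucket=bucket, Delete={"Objects": chunk})

# usage sketch (endpoint and names are hypothetical):
#   import boto3
#   s3 = boto3.client("s3", endpoint_url="http://rgw.example.com:7480",
#                     aws_access_key_id="...", aws_secret_access_key="...")
#   delete_prefix(s3, "archive", "veeam/")
```

For truly massive cleanups, a bucket lifecycle expiration rule on the prefix offloads the work to RGW's lifecycle threads instead of the client.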
We deployed jitsi for the public sector during covid and it is still free
to use.
https://vid.arnes.si/
---
However, the landing page is in the Slovene language, and for future
reservations you need an AAI (SSO) account (which you get if you are part
of a public organization (school, faculty,
Once or twice a year we have a similar problem in a *non*-ceph disk cluster,
where working but slow disk writes give us slow reads. We somehow
"understand" it, since slow writes probably fill up queues and buffers.
On Thu, Mar 9, 2023 at 11:37 AM Andrej Filipcic
wrote:
>
> Thanks for the
Hi,
I am trying to configure ceph with RGW and a unix socket (based on
https://docs.ceph.com/en/pacific/man/8/radosgw/?highlight=radosgw). I have
something like this in ceph.conf:
[client.radosgw.ctplmon3]
host = ctplmon3
rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
log file =
The solution, found by a colleague, was:
ms_mon_client_mode = crc
... because of
https://github.com/ceph/ceph/pull/42587/commits/7e22d2a31d277ab3eecff47b0864b206a32e2332
Rok
On Thu, Sep 8, 2022 at 6:04 PM Rok Jaklič wrote:
> What credentials should RGWs have?
>
> I have inte
Hi,
we are trying to copy a big file (over 400 GB) to the ceph cluster using the
minio client. The copy, or rather transfer, takes a lot of time (2 days, for
example) because of a "slow connection".
Usually somewhere near the end (but looks random) we get an error like:
Failed to copy `/360GB.bigfile.img`. The
Every now and then someone comes up with a subject like this.
There is quite a long thread about the pros and cons of using docker and all
the tools around ceph at
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/TTTYKRVWJOR7LOQ3UCQAZQR32R7YADVY/#AT7YQV6RE5SMKDZHXL3ZI2G5BWFUUUXE
Long story
-13 error code represents permission denied
> b. You’ve commented out the keyring configuration in ceph.conf
>
> So do your RGWs have appropriate credentials?
>
> Eric
> (he/him)
>
> > On Sep 7, 2022, at 3:04 AM, Rok Jaklič wrote:
> >
> > Hi,
>
Hi,
after upgrading from ceph version 16.2.7 to 16.2.10, radosgw is not
working. We start radosgw with:
radosgw -c /etc/ceph/ceph.conf --setuser ceph --setgroup ceph -n
client.radosgw.ctplmon3
ceph.conf looks like:
[root@ctplmon3 ~]# cat /etc/ceph/ceph.conf
[global]
fsid =
ven deleting the bucket seems to leave the objects in the rados pool
> forever.
>
> Ciao, Uli
>
> > Am 05.09.2022 um 15:19 schrieb Rok Jaklič :
> >
> > Hi,
> >
> > when I do:
> > radosgw-admin user stats --uid=X --tenant=Y --sync-stats
> >
&
Hi,
when I do:
radosgw-admin user stats --uid=X --tenant=Y --sync-stats
I get:
{
"stats": {
"size": 2620347285776,
"size_actual": 2620348436480,
"size_utilized": 0,
"size_kb": 2558932897,
"size_kb_actual": 2558934020,
"size_kb_utilized": 0,
Hi,
is it possible to get tenant and user id with some python boto3 request?
Kind regards,
Rok
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
Can I reduce mon_initial_members to one host after already being set to two
hosts?
Actually, some of us tried to contribute to the documentation but were
stopped by failed build checks for some reason.
While most of it is OK, in some places the documentation is vague or missing
(maybe also the reason why this thread is so long).
One example:
Hi,
is it possible to limit a subuser's access so that he sees (reads, writes)
only "his" bucket? And also to be able to create a bucket inside that bucket?
Kind regards,
Rok
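RGW supports S3 bucket policies, so one option is a policy on the bucket granting a specific identity read/write on that bucket only. The sketch below uses hypothetical tenant/user names, and note the principal here is a user ARN; whether your subuser setup maps onto this cleanly needs checking:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": ["arn:aws:iam::tenant1:user/user1"]},
    "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
    "Resource": [
      "arn:aws:s3:::bucket1",
      "arn:aws:s3:::bucket1/*"
    ]
  }]
}
```

As for the second question: S3 has no buckets inside buckets. "Folders" are just key prefixes, so the closest equivalent is allowing the user to write objects under a prefix within the bucket.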
This thread would not be so long if docker/containers had solved the
problems, but they did not. They solved some but introduced new ones, so we
cannot really say it's better now.
Again, I think the focus should be more on a working ceph with clean
documentation, while leaving software management and packages to
Which mode is that and where can I set it?
The one described in https://docs.ceph.com/en/latest/radosgw/multitenancy/
?
On Tue, Jun 8, 2021 at 2:24 PM Janne Johansson wrote:
> Den tis 8 juni 2021 kl 12:38 skrev Rok Jaklič :
> > Hi,
> > I try to create buckets through rgw in
Hi,
I am trying to create buckets through rgw in the following order:
- *bucket1* with *user1* with *access_key1* and *secret_key1*
- *bucket1* with *user2* with *access_key2* and *secret_key2*
when I try to create the second bucket1 with user2, I get *Error response
code BucketAlreadyExists.*
Why? Should
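With explicit tenants (the multitenancy mode linked above), two identically named buckets can coexist because each lives under its own tenant namespace. A sketch of creating tenanted users; the tenant names are hypothetical:

```shell
# each user gets its own tenant, so each can own its own "bucket1"
radosgw-admin user create --tenant=tenant1 --uid=user1 --display-name="User One"
radosgw-admin user create --tenant=tenant2 --uid=user2 --display-name="User Two"
```

Without tenants, bucket names are global across the whole RGW zone, which is why the second create fails.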
In these gigabyte/terabyte times, all this dependency hell can be avoided
with some static linking. For example, we use statically linked mysql
binaries, and it has saved us numerous times. https://youtu.be/5PmHRSeA2c8?t=490
Rok
On Wed, Jun 2, 2021 at 9:57 PM Harry G. Coin wrote:
>
> On 6/2/21
I agree; simplifying "deployment" by adding another layer of complexity
brings many more problems and hard times when something goes wrong at
runtime. A few additional steps at the "install phase" and a better
understanding of the underlying architecture, commands, whatever ... have
many more pros
Hi,
is it normal that radosgw-admin user info --uid=user ... takes around 3s or
more?
Other radosgw-admin commands also take quite a lot of time.
Kind regards,
Rok
have 5 hosts (with
> failure domain host) your PGs become undersized when a host fails and
> won't recover until the OSDs come back. Which ceph version is this?
>
>
> Zitat von Rok Jaklič :
>
> > For this pool I have set EC 3+2 (so in total I have 5 nodes) which one
>
For this pool I have set EC 3+2 (so in total I have 5 nodes), of which one
was temporarily removed; maybe this was the problem?
On Thu, May 27, 2021 at 3:51 PM Rok Jaklič wrote:
> Hi, thanks for quick reply
>
> root@ctplmon1:~# ceph pg dump pgs_brief | grep undersized
> dumped pgs
detail
>
> and the crush rule(s) for the affected pool(s).
>
>
> Zitat von Rok Jaklič :
>
> > Hi,
> >
> > I have removed one node, but now ceph seems to stuck in:
> > Degraded data redundancy: 67/2393 objects degraded (2.800%), 12 pgs
> > degraded, 12
Janne Johansson
wrote:
> Den fre 21 maj 2021 kl 10:49 skrev Rok Jaklič :
> > It shows
> > sdb8:16 0 5.5T 0 disk /var/lib/ceph/osd/ceph-56
>
> That one says osd-56, you asked about why osd 85 was small in ceph osd df
>
>
> >> Den fre 2
$ID --mkfs --osd-uuid $UUID --data /dev/sdb
chown -R ceph:ceph /var/lib/ceph/osd/ceph-$ID/
---
and the 100G block file resides there.
On Fri, May 21, 2021 at 9:59 AM Janne Johansson wrote:
> Den fre 21 maj 2021 kl 09:41 skrev Rok Jaklič :
> > why would ceph osd df show in SIZE field small
Hi,
why would ceph osd df show a smaller number in the SIZE field than there
actually is:
85  hdd  0.8  1.0  100 GiB  96 GiB  95 GiB  289 KiB  952 MiB  4.3 GiB  95.68  3.37  10  up
instead of 100 GiB there should be 5.5 TiB.
Kind regards,
Rok
I agree. The documentation here is pretty vague. systemd services for OSDs
on ubuntu 20.04 and ceph pacific version 16.2.1 do not work either, so I
have to run them manually with
/usr/bin/ceph-osd -f --cluster ceph --id some-number --setuser ceph
--setgroup ceph
I think it would be much better if
Hi,
the installation of the cluster/OSDs went "by the book" (https://docs.ceph.com/),
but now I want to set up the Ceph Object Gateway, and the documentation at
https://docs.ceph.com/en/latest/radosgw/ seems to lack information about
what to restart, and where, for example when setting [client.rgw.gateway-node1]
in
Hi,
I installed the ceph object gateway and put one test object into storage. I
can see it with rados -p mytest ls.
How do I set up ceph so that users can access (download, upload) files in
this pool?
Kind regards,
Rok
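Users don't talk to the rados pool directly; they go through the gateway's S3 API with per-user keys. A sketch of the provisioning step (the user name is just an example):

```shell
# create an S3 user; the output JSON contains access_key and secret_key
radosgw-admin user create --uid=testuser --display-name="Test User"
```

Any S3 client (s3cmd, boto3, minio client, ...) can then upload and download using those keys against the RGW endpoint. Note that objects written directly with `rados put` into a plain pool are not visible through the S3 API; objects need to be uploaded through the gateway into a bucket.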