Hi Enrico,
On Fri, Jun 29, 2018 at 7:50 PM Enrico Kern wrote:
> hmm that also pops up right away when i restart all radosgw instances. But
> i will check further and see if i can find something. Maybe doing the
> upgrade to mimic too.
>
> That bucket is basically under load on the master zone
Hi,
On Tue, Feb 6, 2018 at 10:04 AM, Ingo Reimann wrote:
> Just to add -
>
> We wrote a little wrapper, that reads the output of "radosgw-admin usage
> show" and stops, when the loop starts. When we add all entries by
> ourselves, the result is correct. Moreover - the
On Mon, Feb 5, 2018 at 12:45 PM, Thomas Bennett wrote:
> Hi,
>
> In trying to understand RGW pool usage I've noticed the pool called
> default.rgw.meta pool has a large number of objects in it. Suspiciously
> about twice as many objects in my default.rgw.buckets.index pool.
>
>
log from the resharding process, after 10 minutes I canceled
> it. Got 500MB log (gzipped still 20MB), so I cannot upload it to the bug
> tracker.
>
>
>
I will try to reproduce it on my setup; it should be simpler now that I am
sure it is the versioning.
Orit
> Regards,
>
>
> application containing customer data, but I
> am also creating some random test data to create logs I can share.
>
> I will also test whether the versioning itself is the culprit, or if it is
> the lifecycle rule.
>
>
>
I am suspecting versioning (never tried it with resharding).
Can you open a tracker issue?
Hi Martin,
On Mon, Jan 15, 2018 at 6:04 PM, Martin Emrich
wrote:
> Hi!
>
> After having a completely broken radosgw setup due to damaged buckets, I
> completely deleted all rgw pools, and started from scratch.
>
> But my problem is reproducible. After pushing ca.
Hi,
On Mon, Dec 11, 2017 at 11:45 AM, Martin Emrich
<martin.emr...@empolis.com> wrote:
> Hi!
>
> Am 10.12.17, 11:54 schrieb "Orit Wasserman" <owass...@redhat.com>:
>
> Hi Martin,
>
> On Thu, Dec 7, 2017 at 5:05 PM, Martin Emrich <martin.
On Mon, Dec 11, 2017 at 5:44 PM, Sam Wouters <s...@ericom.be> wrote:
> On 11-12-17 16:23, Orit Wasserman wrote:
>> On Mon, Dec 11, 2017 at 4:58 PM, Sam Wouters <s...@ericom.be> wrote:
>>> Hi Orrit,
>>>
>>>
>>> On 04-12-17 18:57, Orit Wasser
On Mon, Dec 11, 2017 at 4:58 PM, Sam Wouters <s...@ericom.be> wrote:
> Hi Orrit,
>
>
> On 04-12-17 18:57, Orit Wasserman wrote:
>> Hi Andreas,
>>
>> On Mon, Dec 4, 2017 at 11:26 AM, Andreas Calminder
>> <andreas.calmin...@klarna.com> wrote:
Hi Andreas,
On Mon, Dec 4, 2017 at 11:26 AM, Andreas Calminder
wrote:
> Hello,
> With release 12.2.2 dynamic resharding bucket index has been disabled
> when running a multisite environment
> (http://tracker.ceph.com/issues/21725). Does this mean that resharding
>
On Wed, Nov 29, 2017 at 6:52 PM, Aristeu Gil Alves Jr
wrote:
>> > Does s3 or swifta (for hadoop or spark) have integrated data-layout APIs
>> > for
>> > local processing data as have cephfs hadoop plugin?
>> >
>> With s3 and swift you won't have data locality as it was
On Wed, Nov 29, 2017 at 6:54 PM, Gregory Farnum wrote:
> On Wed, Nov 29, 2017 at 8:52 AM Aristeu Gil Alves Jr
> wrote:
>>>
>>> > Does s3 or swifta (for hadoop or spark) have integrated data-layout
>>> > APIs for
>>> > local processing data as have cephfs
Cheers,
Orit
>
> Thanks and regards,
> Aristeu
>
> 2017-11-29 4:19 GMT-02:00 Orit Wasserman <owass...@redhat.com>:
>>
>> On Tue, Nov 28, 2017 at 7:26 PM, Aristeu Gil Alves Jr
>> <aristeu...@gmail.com> wrote:
>> > Greg and Donny,
>> >
On Tue, Nov 28, 2017 at 7:26 PM, Aristeu Gil Alves Jr
wrote:
> Greg and Donny,
>
> Thanks for the answers. It helped a lot!
>
> I just watched the swifta presentation and it looks quite good!
>
I would highly recommend using s3a and not swifta, as s3a is much more
mature and
>
>
> On 5 Nov 2017, at 07:33, Orit Wasserman <owass...@redhat.com> wrote:
>
> Hi Mark,
>
> On Fri, Oct 20, 2017 at 4:26 PM, Mark Schouten <m...@tuxis.nl> wrote:
>
> Hi,
>
> I see issues with resharding. rgw logging shows the following:
> 2017-1
Hi Mark,
On Fri, Oct 20, 2017 at 4:26 PM, Mark Schouten wrote:
> Hi,
>
> I see issues with resharding. rgw logging shows the following:
> 2017-10-20 15:17:30.018807 7fa1b219a700 -1 ERROR: failed to get entry from
> reshard log, oid=reshard.13 tenant= bucket=qnapnas
>
>
On Fri, Sep 29, 2017 at 5:56 PM, Yoann Moulin wrote:
> Hello,
>
> I'm doing some tests on the radosgw on luminous (12.2.1), I have a few
> questions.
>
> In the documentation[1], there is a reference to "radosgw-admin region get"
> but it seems not to be available anymore.
Hi David,
On Mon, Aug 28, 2017 at 8:33 PM, David Turner wrote:
> The vast majority of the sync error list is "failed to sync bucket
> instance: (16) Device or resource busy". I can't find anything on Google
> about this error message in relation to Ceph. Does anyone
Hi Hans,
On Wed, Jul 26, 2017 at 10:24 AM, Ben Hines wrote:
> Which version of Ceph?
>
> On Tue, Jul 25, 2017 at 4:19 AM, Hans van den Bogert
> wrote:
>>
>> Hi All,
>>
>> I don't seem to be able to fix a bucket, a bucket which has become
>> inconsistent
On Fri, Jul 7, 2017 at 4:20 PM, Murali Balcha
wrote:
> Hi,
> We tried to use Swift interface for Ceph object store and soon found out
> that it does not support SLO/DLO. We are planning to move to use S3
> interface. Are there any known limitations with the support of S3
Hi Maarten,
On Tue, Jul 4, 2017 at 9:46 PM, Maarten De Quick
wrote:
> Hi,
>
> Background: We're having issues with our index pool (slow requests / time
> outs causes crashing of an OSD and a recovery -> application issues). We
> know we have very big buckets (eg. bucket of
Hi Pavan,
On Tue, Jun 20, 2017 at 8:29 AM, Pavan Rallabhandi <
prallabha...@walmartlabs.com> wrote:
> Trying one more time with ceph-users
>
> On 19/06/17, 11:07 PM, "Pavan Rallabhandi"
> wrote:
>
> On many of our clusters running Jewel (10.2.5+), am running
On Wed, May 3, 2017 at 12:13 PM, Orit Wasserman <owass...@redhat.com> wrote:
>
>
> On Wed, May 3, 2017 at 12:05 PM, yiming xie <plato...@gmail.com> wrote:
>
>> Cluster c2 have not *zone:us-1*
>>
>> ./bin/radosgw-admin -c ./run/c2/ceph.conf period update
lt_info": "6cc7889a-3f00-4fcd-b4dd-0f5951fbd561",
> "zonegroups": [
> "us2",
> "us"
> ]
> }
>
>
> On 3 May 2017, at 16:57, Orit Wasserman <owass...@redhat.com> wrote:
>
>
>
> On Wed, May 3, 2017 at 1
zone
> id=0cae32e6-82d5-489f-adf5-99e92c70f86f (name=us-2), switching to local
> zonegroup configuration
> 2017-05-03 04:46:10.300145 7fdb2e4226c0 -1 Cannot find zone
> id=0cae32e6-82d5-489f-adf5-99e92c70f86f (name=us-2)
> couldn't init storage provider
>
>
> On 3 May 2017
not find connection for zone or zonegroup id:
> cb8fd49d-9789-4cb3-8010-2523bf46a650
> request failed: (2) No such file or directory
> failed to commit period: (2) No such file or directory
>
> ceph version 11.1.0-7421-gd25b355 (d25b3550dae243f6868a526632e974
> 05866e76d4)
>
>
Hi,
On Wed, May 3, 2017 at 11:00 AM, yiming xie wrote:
> Hi orit:
> I try to create multiple zonegroups in single realm, but failed. Pls tell
> me the correct way about creating multiple zonegroups
> Tks a lot!!
>
> 1. Create the first zonegroup on the c1 cluster
>
Hi Ben,
On Thu, Apr 20, 2017 at 6:08 PM, Ben Morrice wrote:
> Hi all,
>
> I have tried upgrading one of our RGW servers from 10.2.5 to 10.2.7 (RHEL7)
> and authentication is in a very bad state. This installation is part of a
> multigw configuration, and I have just updated
I see: acct_user=foo, acct_name=foo.
Are you using radosgw with tenants?
If not, that could be the problem.
Orit
On Sat, Apr 1, 2017 at 7:43 AM, Ben Hines wrote:
> I'm also trying to use lifecycles (via boto3) but i'm getting permission
> denied trying to create the lifecycle.
On Thu, Mar 9, 2017 at 1:28 PM, Matthew Vernon wrote:
> On 09/03/17 10:45, Abhishek Lekshmanan wrote:
>
> On 03/09/2017 11:26 AM, Matthew Vernon wrote:
>>
>>>
>>> I'm using Jewel / 10.2.3-0ubuntu0.16.04.2 . We want to keep track of our
>>> S3 users' quota and usage. Even with
On Wed, Jan 11, 2017 at 2:53 PM, Marko Stojanovic wrote:
>
> Hello all,
>
> I have issue with radosgw-admin regionmap update . It doesn't update map.
>
> With zone configured like this:
>
> radosgw-admin zone get
> {
> "id": "fc12ac44-e27e-44e3-9b13-347162d3c1d2",
>
>
I agree, it could be a permissions issue.
> On Wed, Jan 4, 2017 at 8:59 AM, Kamble, Nitin A <nitin.kam...@teradata.com>
> wrote:
>>
>>
>> > On Dec 26, 2016, at 2:48 AM, Orit Wasserman <owass...@redhat.com> wrote:
>> >
>> > On Fri, Dec 23,
On Fri, Dec 23, 2016 at 3:42 AM, Kamble, Nitin A
wrote:
> I am trying to setup radosgw on a ceph cluster, and I am seeing some issues
> where google is not helping. I hope some of the developers would be able to
> help here.
>
>
> I tried to create radosgw as
, but one more time. We have big
>> issues recently with rgw on jewel. because of leaked data - the rate is
>> about 50GB/hour.
>>
>> We've hit these bugs:
>> rgw: fix put_acls for objects starting and ending with underscore
>> (issue#17625, pr#11669, Orit Wasserm
On Tue, Dec 20, 2016 at 5:39 PM, Wido den Hollander <w...@42on.com> wrote:
>
>> Op 15 december 2016 om 17:10 schreef Orit Wasserman <owass...@redhat.com>:
>>
>>
>> Hi Wido,
>>
>> This looks like you are hitting http://tracker.ceph.com/issues/1
Hi Wido,
This looks like you are hitting http://tracker.ceph.com/issues/17364
The fix is being backported to jewel: https://github.com/ceph/ceph/pull/12315
A workaround:
save the realm, zonegroup and zones json file
make a copy of .rgw.root (the pool containing the multisite config)
remove
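The save step above can be sketched as a small helper that just builds the dump commands (hypothetical code, not from this thread; the output file names are assumptions):

```python
def config_dump_commands(entities=("realm", "zonegroup", "zone")):
    """Build radosgw-admin commands that dump each piece of the
    multisite configuration to its own JSON file."""
    return [f"radosgw-admin {entity} get > {entity}.json" for entity in entities]

# Print the commands instead of running them, so they can be reviewed first
for cmd in config_dump_commands():
    print(cmd)
```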
radosgw supports keystone v3 in Jewel.
Can you give more details about the error? What exact command are you trying?
A radosgw log with debug_rgw=20 and debug_ms=5 would be most helpful.
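For reference, a sketch of how those debug levels might look in ceph.conf (the section name here is an assumption; use whatever your gateway's client section is actually called):

```ini
[client.rgw.gateway]
# verbose gateway and messenger logging for troubleshooting
debug rgw = 20
debug ms = 5
```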
On Tue, Nov 22, 2016 at 10:24 AM, 한승진 wrote:
> I've figured out the main reason is.
>
Hi,
We have support for an offline bucket resharding admin command:
https://github.com/ceph/ceph/pull/11230.
It will be available in Jewel 10.2.5.
Orit
On Thu, Nov 17, 2016 at 9:11 PM, Yoann Moulin wrote:
> Hello,
>
> is that possible to shard the index of existing buckets
The pool (.rgw.root) contains only the configuration (realm,
zonegroups and zones).
If you want to back it up run:
# rados mkpool .rgw.root.backup
# rados cppool .rgw.root .rgw.root.backup
Orit
> Thanks
>
> ----- Original Message -----
>> From: "Orit Wasserman" <owass...@redhat.com>
On Fri, Nov 11, 2016 at 12:24 PM, Orit Wasserman <owass...@redhat.com> wrote:
> I have a workaround:
>
> 1. Use zonegroup and zone jsons you have from before (default-zg.json
> and default-zone.json)
> 2. Make sure the realm id in the jsons is ""
> 3. Stop the gateways
92f2-5d53-4701-a895-b780b16b5374.control
> zone_info.default
> zonegroup_info.default
> realms.5b41b1b2-0f92-463d-b582-07552f83e66c.control
>
>
> Thanks
>
> ----- Original Message -----
>> From: "Orit Wasserman" <owass...@redhat.com>
>> To:
eleases of ceph anywhere on the network.
>
can you run: rados -p .rgw.root ls?
> Is 10.2.4 out already? I didn't see an update package to that.
>
It should be out soon
> Thanks
>
> Andrei
>
> ----- Original Message -----
>> From: "Orit Wasserman" <owass...@redhat.
> Now I start the radosgw service:
>
>
> root@arh-ibstorage1-ib:~# service ceph-radosgw@radosgw.gateway start
> root@arh-ibstorage1-ib:~#
> root@arh-i
On Wed, Nov 9, 2016 at 10:20 PM, Yoann Moulin wrote:
> Hello,
>
>> many thanks for your help. I've tried setting the zone to master, followed
>> by the period update --commit command. This is what i've had:
>
> maybe it's related to this issue :
>
>
On Thu, Oct 27, 2016 at 12:30 PM, Richard Chan
wrote:
> Hi Cephers,
>
> In my period list I am seeing an orphan period
>
> {
> "periods": [
> "24dca961-5761-4bd1-972b-685a57e2fcf7:staging",
> "a5632c6c4001615e57e587c129c1ad93:staging",
>
Hi Ansgar,
We recommend at most 100,000 objects per shard; for 50M objects you will need 512 shards.
Orit
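A quick sketch of that sizing rule (the round-up-to-a-power-of-two step is an assumption inferred from the 512 figure above, since 50M / 100,000 is only 500):

```python
def recommended_shards(num_objects, objects_per_shard=100_000):
    """Return a bucket index shard count: ceiling of objects/shard,
    rounded up to the next power of two (assumed convention)."""
    raw = -(-num_objects // objects_per_shard)  # ceiling division
    shards = 1
    while shards < raw:
        shards *= 2
    return shards

print(recommended_shards(50_000_000))  # 500 rounded up -> 512
```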
On Fri, Oct 14, 2016 at 1:44 PM, Ansgar Jazdzewski
wrote:
> Hi,
>
> I like to know if someone of you have some kind of a formula to set
> the right number of shards for
On Fri, Oct 7, 2016 at 9:37 PM, Graham Allan <g...@umn.edu> wrote:
> Dear Orit,
>
> On 10/07/2016 04:21 AM, Orit Wasserman wrote:
>>
>> Hi,
>>
>> On Wed, Oct 5, 2016 at 11:23 PM, Andrei Mikhailovsky <and...@arhont.com>
>> wrote:
>>>
>
quot;: ".rgw",
> "control_pool": ".rgw.control",
> "gc_pool": ".rgw.gc",
> "log_pool": ".log",
> "intent_log_pool": ".intent-log",
> "usage_log_pool": "
Hi,
On Wed, Oct 5, 2016 at 11:23 PM, Andrei Mikhailovsky wrote:
> Hello everyone,
>
> I've just updated my ceph to version 10.2.3 from 10.2.2 and I am no longer
> able to start the radosgw service. When executing I get the following error:
>
> 2016-10-05 22:14:10.735883
On Wed, Sep 28, 2016 at 10:32 AM, Iban Cabrillo wrote:
> Dear Admins,
>During last day I have been trying to deploy a new radosgw, following
> jewel guide, ceph cluster is healthy (3 mon and 2 osd servers )
>root@cephrgw ceph]# ceph -v
> ceph version 10.2.3
On Wed, Sep 28, 2016 at 5:27 PM, Michael Parson <mpar...@bl.org> wrote:
> On Wed, 28 Sep 2016, Orit Wasserman wrote:
>>
>> see below
>>
>> On Tue, Sep 27, 2016 at 8:31 PM, Michael Parson <mpar...@bl.org> wrote:
>
>
>
>
>>> We googled a
7b57db0473d6d
>>
>> Realms:
>> MD5 (cephrgw1-1-dfw-realm.json) = 39a4e63bab64ed756961117d3629b109
>> MD5 (cephrgw1-1-phx-realm.json) = 39a4e63bab64ed756961117d3629b109
>> MD5 (cephrgw1-2-dfw-realm.json) = 39a4e63bab64ed756961117d3629b109
>> MD5 (cephrgw1-2-phx-realm
see below
On Tue, Sep 27, 2016 at 8:31 PM, Michael Parson wrote:
> (I tried to start this discussion on irc, but I wound up with the wrong
> paste buffer and wound up getting kicked off for a paste flood, sorry,
> that was on me :( )
>
> We were having some weirdness with our
"name": "default-placement",
> "tags": []
> }
> ],
> "default_placement": "default-placement",
> "realm_id": "3af93a86-
"name": "us",
> "api_name": "us",
> "is_master": "true",
> "endpoints": [
> "http:\/\/LB_FQDN:80"
> ],
> "hostnames": [],
> "hostnames_s3website": [],
>
Hi Ben,
It seems to be http://tracker.ceph.com/issues/16742.
It is being backported to jewel http://tracker.ceph.com/issues/16794,
you can try applying it and see if it helps.
Regards,
Orit
On Fri, Sep 23, 2016 at 9:21 AM, Ben Morrice wrote:
> Hello all,
>
> I have two
Hi John,
Can you provide your zonegroup and zones configurations on all 3 rgw?
(run the commands on each rgw)
Thanks,
Orit
On Wed, Sep 21, 2016 at 11:14 PM, John Rowe wrote:
> Hello,
>
> We have 2 Ceph clusters running in two separate data centers, each one with
> 3
Everything is fine.
This is a debugging message, and it was removed in newer Jewel versions.
Orit
On Sat, Sep 10, 2016 at 7:20 PM, Helmut Garrison
wrote:
> Hi
>
> i installed ceph and created an object storage from documents but when i
> want to create a user this
you can try:
radosgw-admin zonegroup modify --zonegroup-id --master=false
On Tue, Sep 6, 2016 at 11:08 AM, Yoann Moulin wrote:
> Hello Orit,
>
>> you have two (or more) zonegroups that are set as master.
>
> Yes I know, but I don't know how to fix this
>
>> First detect
Hi Yoann,
you have two (or more) zonegroups that are set as master.
First detect which zonegroups are the problematic ones:
get the zonegroup list by running: radosgw-admin zonegroup list
then on each zonegroup run:
radosgw-admin zonegroup get --rgw-zonegroup
and see which has is_master set to true.
Now you need to
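The detection loop above can be sketched in a few lines (a hypothetical helper, not from the thread; it assumes is_master is serialized as the string "true", as in the radosgw-admin JSON output quoted elsewhere in this archive):

```python
def find_master_zonegroups(zonegroups):
    """Given a mapping of zonegroup name -> parsed JSON (the output of
    `radosgw-admin zonegroup get --rgw-zonegroup=<name>`), return the
    names whose "is_master" flag is set."""
    return [name for name, zg in zonegroups.items()
            if str(zg.get("is_master")).lower() == "true"]

# Example with two conflicting masters, as in the problem described above:
sample = {
    "default": {"is_master": "true"},
    "us": {"is_master": "true"},
    "eu": {"is_master": "false"},
}
print(find_master_zonegroups(sample))  # ['default', 'us']
```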
uot;default-placement",
> "realm_id": "98089a5c-6c61-4cc2-a5d8-fce0cb0a9704"
> }
>
> radosgw-admin --cluster=pbs period update --commit
> 2016-07-26 10:34:56.160525 7f0e22ccf9c0 0 RGWZoneParams::create(): error
> creating default zone params: (17)
> Heppacher Str. 39
> 71404 Korb
>
> Telefon: +49 7151 1351565 0
> Telefax: +49 7151 1351565 9
> E-Mail: frank.ende...@anamica.de
> Internet: www.anamica.de
>
>
> Handelsregister: AG Stuttgart HRB 732357
> Geschäftsführer: Yvonne Holzwarth, Frank Enderle
>
>
gt; "tags": []
> }
> ],
> "default_placement": "default-placement",
> "realm_id": ""
> }
>
> and
>
> radosgw-admin --cluster=pbs zonegroup set --rgw-zonegroup=default
>
> gives me
>
>
> From: Orit Wasserman <owass...@redhat.com>
> Date: 26 July 2016 at 09:55:58
> To: Fr
You need to set the default zone as the master zone.
you can try:
radosgw-admin zonegroup set < zg.json
where zg.json is the JSON returned by radosgw-admin zonegroup get,
with the master_zone field set to "default".
Orit
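The edit above can be scripted; this is a minimal sketch (a hypothetical helper, not a command from this thread; the get/set round-trip through a file is an assumption):

```python
def set_master_zone(zonegroup_json, master_zone="default"):
    """Take the parsed output of `radosgw-admin zonegroup get` and
    return a copy with master_zone pointed at the given zone name."""
    zg = dict(zonegroup_json)      # don't mutate the caller's copy
    zg["master_zone"] = master_zone
    return zg

# Round-trip sketch: radosgw-admin zonegroup get > zg.json,
# apply this fix to zg.json, then radosgw-admin zonegroup set < zg.json
original = {"name": "default", "master_zone": "", "zones": [{"name": "default"}]}
fixed = set_master_zone(original)
print(fixed["master_zone"])  # default
```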
On Mon, Jul 25, 2016 at 11:17 PM, Frank Enderle
wrote:
>