On Sat, Aug 17, 2024 at 9:12 AM Anthony D'Atri wrote:
>
> > It's going to wreak havoc on search engines that can't tell when
> > someone's looking up Ceph versus the long-established Squid Proxy.
>
> Search engines are way smarter than that, and I daresay that people are far
> more likely to search
Can maybe leverage one of the other calls to check for upload completion:
list multipart uploads and/or list parts. The latter should work if you
have the upload id at hand.
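A minimal sketch of both calls, assuming boto3 and placeholder
endpoint/bucket/key names:

import boto3

s3 = boto3.client('s3', endpoint_url='http://rgw.example.com:8000')

# List the in-progress multipart uploads on the bucket.
for up in s3.list_multipart_uploads(Bucket='mybucket').get('Uploads', []):
    print(up['Key'], up['UploadId'])

# Or, with the upload id at hand, list its parts; once the upload has
# been completed (or aborted), this raises NoSuchUpload.
upload_id = '2~example-upload-id'  # placeholder
try:
    s3.list_parts(Bucket='mybucket', Key='mykey', UploadId=upload_id)
except s3.exceptions.NoSuchUpload:
    print('upload already completed or aborted')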
Yehuda
On Wed, Jul 20, 2022, 8:40 AM Casey Bodley wrote:
> On Wed, Jul 20, 2022 at 12:57 AM Yuval Lifshitz
> wrote:
> >
On Wed, Nov 28, 2018 at 10:07 AM Maxime Guyot wrote:
>
> Hi Florian,
>
> You assumed correctly: the "test" container (private) was created with
> "openstack container create test"; then I am using the S3 API to
> enable/disable object versioning on it.
> I use the following Python snippet to
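A minimal boto3 sketch of the idea (not the original snippet; the
endpoint is a placeholder):

import boto3

s3 = boto3.client('s3', endpoint_url='http://rgw.example.com:8000')

# Enable versioning on the Swift-created container; 'Suspended'
# disables it again.
s3.put_bucket_versioning(
    Bucket='test',
    VersioningConfiguration={'Status': 'Enabled'},
)
print(s3.get_bucket_versioning(Bucket='test').get('Status'))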
On Wed, Oct 17, 2018 at 1:14 AM Yang Yang wrote:
>
> Hi,
> A few weeks ago I found radosgw index has been inconsistent with reality.
> Some objects I cannot list, but I can get them by key. Please see the details
> below:
>
> BACKGROUND:
> Ceph version 12.2.4 (52085d5249a80c5f5121a76d628
There was an existing bug reported for this one, and it's fixed on master:
http://tracker.ceph.com/issues/23801
It will be backported to luminous and mimic.
On Mon, Aug 20, 2018 at 9:25 AM, Yehuda Sadeh-Weinraub
wrote:
> That message has been there since 2014. We should lower the l
That message has been there since 2014. We should lower the log level though.
Yehuda
On Mon, Aug 20, 2018 at 6:08 AM, David Turner wrote:
> In luminous they consolidated a lot of the rgw metadata pools by using
> namespace inside of the pools. I would say that the GC pool was consolidated
> into
Oh, also -- one thing that might work is running bucket check --fix on
the bucket. That should overwrite the reshard status field in the
bucket index.
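Something along these lines (the bucket name is a placeholder):

$ radosgw-admin bucket check --fix --bucket=mybucket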
Let me know if it happens to fix the issue for you.
Yehuda.
On Fri, Aug 3, 2018 at 9:46 AM, Yehuda Sadeh-Weinraub wrote:
> Is it actua
Is it actually resharding, or is it just stuck in that state?
On Fri, Aug 3, 2018 at 7:55 AM, David Turner wrote:
> I am currently unable to write any data to this bucket in this current
> state. Does anyone have any ideas for reverting to the original index
> shards and cancel the reshard proce
log.sync-status.shard.6a9448d2-bdba-4bec-aad6-aba72cd8eac6.105
>> [call] v14448'24265316 uv24265256 ondisk = -16 ((16) Device or resource
>> busy)) v8 210+0+0 (4086289880 0 0) 0x7f38a8005110 con 0x7f3868003380
>>
>>
>> There are no issues with the OSDs; all other
On Sun, Jun 24, 2018 at 12:59 AM, Enrico Kern
wrote:
> Hello,
>
> We have two ceph luminous clusters (12.2.5).
>
> Recently one of our big buckets stopped syncing properly. We have one
> specific bucket which is around 30TB in size, consisting of a lot of
> directories with each one having files
(resending)
Sounds like a bug. Can you open a ceph tracker issue?
Thanks,
Yehuda
On Mon, Jun 18, 2018 at 7:24 AM, Sander van Schie / True
wrote:
> While Ceph was resharding buckets over and over again, the maximum available
> storage as reported by 'ceph df' also decreased by about 20%, while
You can't. A user can only list the buckets that it owns; it cannot
list other users' buckets.
Yehuda
On Sat, Apr 28, 2018 at 11:10 AM, Безруков Илья Алексеевич
wrote:
> Hello,
>
> How do I configure an S3 bucket ACL so that one user's bucket is visible to
> another?
>
>
> I can create a bucket, obje
On Fri, Apr 13, 2018 at 5:09 PM, Katie Holly <8ld3j...@meo.ws> wrote:
> Hi everyone,
>
> I found myself in a situation where dynamic sharding and writing data to a
> bucket containing a little more than 5M objects at the same time caused
> corruption on the data rendering the entire bucket unusab
On Thu, Mar 8, 2018 at 2:22 PM, David Turner wrote:
> I remember some time ago Yehuda had commented on a thread like this saying
> that it would make sense to add a logging/auditing feature like this to RGW.
> I haven't heard much about it since then, though. Yehuda, do you remember
> that and/or
On Tue, Mar 6, 2018 at 11:40 AM, Ryan Leimenstoll
wrote:
> Hi all,
>
> We are trying to move a bucket in radosgw from one user to another in an
> effort to both change ownership and attribute the storage usage of the data to
> the receiving user’s quota.
>
> I have unlinked the bucket and linked it
ine. Perhaps
> 'us' working is what shouldn't work as opposed to allowing whatever else to
> be able to work.
>
> I tested setting bucket_location to 'local-atl' and it did successfully
> create the bucket. So the question becomes, why do my other realms no
tcpdump for this. Do you have any
> pointers to how to capture that for you?
>
> On Mon, Feb 26, 2018 at 4:09 PM David Turner wrote:
>>
>> That's what I set it to in the config file. I probably should have
>> mentioned that.
>>
>> On Mon, Feb 26, 2018 at 4:07 P
estraint/:create_bucket:op status=-2208
> 2018-02-26 19:43:37.341792 7f466bbca700 2 req 428078:0.005707:s3:PUT
> /testraint/:create_bucket:http status=400
>
> On Mon, Feb 26, 2018 at 2:36 PM Yehuda Sadeh-Weinraub
> wrote:
>>
>> I'm not sure if the rgw logs (debu
ph-container/blob/master/ceph-releases/luminous/ubuntu/16.04/daemon/variables_entrypoint.sh#L46
>>
>> Here's what I get when I query RGW:
>>
>> $ radosgw-admin zonegroup list
>> {
>> "default_info": "",
else will return an InvalidLocationConstraint error.
>
> Francis
>
>
> On 20/02/2018 8:40 AM, Yehuda Sadeh-Weinraub wrote:
>>
>> Sounds like the go sdk adds a location constraint to requests that
>> don't go to us-east-1. RGW itself definitely isn't tied t
Sounds like the go sdk adds a location constraint to requests that
don't go to us-east-1. RGW itself definitely isn't tied to
us-east-1, and does not know anything about it (unless you happen to
have a zonegroup named us-east-1). Maybe there's a way to configure
the sdk to avoid doing that?
Yehuda
0baebec32e388f4cb7bdf1fee9afe2144eeeb354
> Best regards,
>
> Ingo
>
>
> -----Original Message-----
> From: Yehuda Sadeh-Weinraub [mailto:yeh...@redhat.com]
> Sent: Thursday, February 15, 2018 00:21
> To: Ingo Reimann
> Cc: ceph-users
> Subject: Re: [ceph-
The CORS related operations are working on specific buckets, not on
the service root. You'll need to set CORS on a bucket, and specify it
in the path.
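For example, a boto3 sketch (origin and bucket names are placeholders):

import boto3

s3 = boto3.client('s3', endpoint_url='http://rgw.example.com:8000')

# Attach a CORS policy to the bucket that the pre-signed URLs target.
s3.put_bucket_cors(
    Bucket='uploads',
    CORSConfiguration={'CORSRules': [{
        'AllowedOrigins': ['https://app.example.com'],
        'AllowedMethods': ['GET', 'PUT', 'POST'],
        'AllowedHeaders': ['*'],
        'MaxAgeSeconds': 3000,
    }]},
)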
Yehuda
On Mon, Feb 12, 2018 at 5:17 PM, Piers Haken wrote:
> I’m trying to do direct-from-browser upload to rgw using pre-signed urls,
> and I’m
On Tue, Feb 13, 2018 at 11:27 PM, Ingo Reimann wrote:
> Hi List,
>
> we want to brush up our cluster and correct things, that have been changed
> over time. When we started with bobtail, we put all index objects together
> with data into the pool rgw.buckets:
>
> root@cephadmin:~# radosgw-admin me
On Wed, Feb 14, 2018 at 2:54 AM, Amardeep Singh wrote:
> Hi,
>
> I am trying to setup RGW Metadata Search with Elastic server tier type as
> per blog post here. https://ceph.com/rgw/new-luminous-rgw-metadata-search/
>
> The environment setup is done using ceph-ansible docker containers.
>
> Conta
ienced users ...
>
> http://ceph.com/rgw/new-luminous-rgw-metadata-search/
>
> Thanks a lot.
>
>
> On Tue, Jan 16, 2018 at 3:59 PM, Yehuda Sadeh-Weinraub
> wrote:
>>
>> On Tue, Jan 16, 2018 at 12:20 PM, Youzhong Yang
>> wrote:
>> > Hi Yehuda,
>
string, x-amz-meta-bar;
> integer'
> ERROR: {"status": 405, "resource": null, "message": "", "error_code":
> "MethodNotAllowed", "reason": "Method Not Allowed"}
>
> How to make the method 'Allowed
The errors you're seeing there don't look like they're related to
elasticsearch. It's a generic radosgw-related error that says that it
failed to reach the rados (ceph) backend. You can try bumping up the
messenger log (debug ms = 1) and see if there's any hint in there.
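For example, in ceph.conf (the section name is an assumption; use your
own rgw section), then restart the gateway:

[client.rgw.gateway1]
debug ms = 1
debug rgw = 20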
Yehuda
On Fri, Jan 12, 2018 at 12:
On Fri, Dec 22, 2017 at 11:49 PM, Youzhong Yang wrote:
> I followed the exact steps of the following page:
>
> http://ceph.com/rgw/new-luminous-rgw-metadata-search/
>
> "us-east-1" zone is serviced by host "ceph-rgw1" on port 8000, no issue, the
> service runs successfully.
>
> "us-east-es" zone i
(+matt)
On Fri, Dec 15, 2017 at 7:21 PM, David Turner wrote:
> We're trying to build an auditing system for when a user key pair performs
> an operation on a bucket (put, delete, creating a bucket, etc) and so far
> were only able to find this information in the level 10 debug logging in the
> rg
to 1k should be fine).
Yehuda
On Wed, Dec 6, 2017 at 2:09 PM, Wido den Hollander wrote:
>
>> Op 6 december 2017 om 10:25 schreef Yehuda Sadeh-Weinraub
>> :
>>
>>
>> Are you using rgw? There are certain compatibility issues that you
>> might hit if you run
Are you using rgw? There are certain compatibility issues that you
might hit if you run mixed versions.
Yehuda
On Tue, Dec 5, 2017 at 3:20 PM, Wido den Hollander wrote:
> Hi,
>
> I haven't tried this before but I expect it to work, but I wanted to check
> before proceeding.
>
> I have a Ceph cl
er, search can be done by sending requests through rgw's
RESTful api. We have a test that uses boto to generate such requests,
but might not be exactly what you're looking for:
https://github.com/ceph/ceph/blob/master/src/test/rgw/rgw_multi/zone_es.py
Yehuda
> Thanks & regards
>
rgw has a sync modules framework that allows you to write your own
sync plugins. The system identifies objects changes and triggers
callbacks that can then act on those changes. For example, the
metadata search feature that was added recently is using this to send
objects metadata into elasticsearc
On Thu, Nov 9, 2017 at 10:05 AM, Zheyuan Chen wrote:
> I installed rados-objclass-dev and objclass.h was installed successfully.
> However, I failed to run the objclass following the steps as below:
>
> 1. copy https://github.com/ceph/ceph/blob/master/src/cls/sdk/cls_sdk.cc into
> my machine. (cls
On Mon, Nov 6, 2017 at 7:29 AM, Wido den Hollander wrote:
> Hi,
>
> On a Ceph Luminous (12.2.1) environment I'm seeing RGWs stall and about the
> same time I see these errors in the RGW logs:
>
> 2017-11-06 15:50:24.859919 7f8f5fa1a700 0 ERROR: failed to distribute cache
> for
> gn1-pf.rgw.dat
(thus no osd availability problem). Then you can start
trimming the old gc objects (on the old renamed pool) by using the
rados command. It'll take a very very long time, but the process
should pick up speed slowly, as the objects shrink.
Yehuda
>
>
> Bryan
>
>
>
> From: Ye
Some of the options there won't do much for you as they'll only affect
newer object removals. I think the default number of gc objects is
just inadequate for your needs. You can try manually running
'radosgw-admin gc process' concurrently (2 or 3 processes to start)
and see if it makes any dent t
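For example (a sketch; run with admin credentials):

$ radosgw-admin gc process &
$ radosgw-admin gc process &
$ radosgw-admin gc process &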
ncing 1 shard,
> but no files end up in the pool (testing with an empty data pool). After a
> while it shows that data is back in sync but there is no file
>
> On Wed, Oct 11, 2017 at 11:26 PM, Yehuda Sadeh-Weinraub wrote:
>
>> Thanks for your report. We're looking into
Thanks for your report. We're looking into it. You can try to see if
touching the object (e.g., modifying its permissions) triggers the sync.
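For example, a boto3 sketch that re-applies an object's ACL in place
(endpoint and names are placeholders):

import boto3

s3 = boto3.client('s3', endpoint_url='http://rgw-master.example.com:8000')

# Rewriting the ACL updates the object's metadata without rewriting
# its data, which may be enough to mark it dirty for sync.
s3.put_object_acl(Bucket='bigbucket', Key='path/to/object', ACL='private')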
Yehuda
On Wed, Oct 11, 2017 at 1:36 PM, Enrico Kern
wrote:
> Hi David,
>
> yeah seems you are right, they are stored as different filenames in the
> data
On Mon, Oct 9, 2017 at 1:59 PM, Ryan Leimenstoll
wrote:
> Hi all,
>
> We recently upgraded to Ceph 12.2.1 (Luminous) from 12.2.0 however are now
> seeing issues running radosgw. Specifically, it appears an automatically
> triggered resharding operation won’t end, despite the jobs being cancelled
On Tue, Oct 3, 2017 at 8:59 AM, Sean Purdy wrote:
> Hi,
>
>
> Is there any way that radosgw can ping something when a file is removed or
> added to a bucket?
>
That depends on what exactly you're looking for. You can't get that
info as a user, but there is a mechanism for remote zones to detect
I'm not a huge fan of train releases, as they tend to never quite make
it on time, and the timeline always feels a bit artificial anyway. OTOH,
I do see and understand the need for a predictable schedule with a
roadmap attached to it. There are many who need to have at least a
vague idea on what we'r
For each zone, zonegroup in result:
- radosgw-admin zone get --rgw-zone=
- radosgw-admin zonegroup get --rgw-zonegroup=
- rados lspools
Also, create a user with --debug-rgw=20 --debug-ms=1; we need to look at the log.
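For example (uid and display name are placeholders):

$ radosgw-admin user create --uid=debuguser --display-name='Debug User' \
    --debug-rgw=20 --debug-ms=1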
Yehuda
>
> On Thu, Sep 7, 2017 at 4:27 PM Yehuda Sadeh-Weinraub
> w
ke an issue with the metadata log in the primary master zone.
>> Not sure what could go wrong there, but maybe the master zone doesn't
>> know that it is a master zone, or it's set to not log metadata. Or
>> maybe there's a problem when the secondary is try
it's set to not log metadata. Or
maybe there's a problem when the secondary is trying to fetch the
metadata log. Maybe some kind of # of shards mismatch (though not
likely).
Try to see if the master logs any changes: should use the
'radosgw-admin mdlog list' command.
Yehuda
>
On Thu, Sep 7, 2017 at 7:44 PM, David Turner wrote:
> Ok, I've been testing, investigating, researching, etc for the last week and
> I don't have any problems with data syncing. The clients on one side are
> creating multipart objects while the multisite sync is creating them as
> whole objects a
On Wed, Aug 30, 2017 at 5:44 PM, Bryan Banister
wrote:
> Not sure what’s happening but we started to put a decent load on the RGWs we
> have setup and we were seeing failures with the following kind of
> fingerprint:
>
>
>
> 2017-08-29 17:06:22.072361 7ffdc501a700 1 rgw realm reloader: Frontends
On Fri, Jun 30, 2017 at 4:49 AM, Henrik Korkuc wrote:
> Hello,
>
> I have RGW multisite setup on Jewel and I would like to turn off data
> replication there so that only metadata (users, created buckets, etc) would
> be synced but not the data.
>
>
FWIW, not in jewel, but in kraken the zone info
On Wed, Jun 28, 2017 at 8:13 AM, Martin Emrich
wrote:
> Correction: It’s about the Version expiration, not the versioning itself.
>
> We send this rule:
>
>
>
> Rules: [
>
> {
>
> Status: 'Enabled',
>
> Prefix: '',
>
> NoncurrentVersionExpiration: {
>
>
On Fri, Jun 9, 2017 at 2:21 AM, Dan van der Ster wrote:
> Hi Bryan,
>
> On Fri, Jun 9, 2017 at 1:55 AM, Bryan Stillwell
> wrote:
>> This has come up quite a few times before, but since I was only working with
>> RBD before I didn't pay too close attention to the conversation. I'm
>> looking
>>
Have you opened a ceph tracker issue, so that we don't lose track of
the problem?
Thanks,
Yehuda
On Fri, Jun 2, 2017 at 3:05 PM, wrote:
> Hi Graham.
>
> We are on Kraken and have the same problem with "lifecycle". Various (other)
> tools like s3cmd or CyberDuck do show the applied "expiration"
On Mon, May 15, 2017 at 8:35 AM, Ken Dreyer wrote:
> On Fri, May 5, 2017 at 1:51 PM, Yehuda Sadeh-Weinraub
> wrote:
>>
>> TL;DR: Does anyone care if we remove support for fastcgi in rgw?
>
> Please remove it as soon as possible. The old libfcgi project's code
>
RGW has supported fastcgi since forever. Originally it was the only supported
frontend, and nowadays it is the least preferred one.
RGW was first developed over fastcgi + lighttpd, but there were some
issues with this setup, so we switched to fastcgi + apache as our main
supported configuration. This was
On Mon, Apr 3, 2017 at 1:32 AM, Luis Periquito wrote:
>> Right. The tool isn't removing objects (yet), because we wanted to
>> have more confidence in the tool before having it automatically
>> deleting all the found objects. The process currently is to manually
>> move these objects to a differen
On Fri, Mar 31, 2017 at 2:08 AM, Marius Vaitiekunas
wrote:
>
>
> On Fri, Mar 31, 2017 at 11:15 AM, Luis Periquito
> wrote:
>>
>> But wasn't that what orphans finish was supposed to do?
>>
>
> orphans finish only removes search results from a log pool.
>
Right. The tool isn't removing objects (ye
This sounds like this bug:
http://tracker.ceph.com/issues/17076
Will be fixed in 10.2.6. It's triggered by aws4 auth, so a workaround
would be to use aws2 instead.
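For example, with boto3/botocore (the endpoint is a placeholder):

import boto3
from botocore.client import Config

s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.com:8000',
    # 's3' selects the old v2 signer; the default is 's3v4'
    config=Config(signature_version='s3'),
)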
Yehuda
On Wed, Mar 1, 2017 at 10:46 AM, John Nielsen wrote:
> Hi all-
>
> We use Amazon S3 quite a bit at $WORK but are evaluating
't work as
> expected, is there anything else I can do?
>
The orphans find command will point at tail objects that aren't
referenced by any bucket index entry. In this case you have rgw object
that doesn't appear on the bucket index, so it is natural that its
tail (composed of both mu
ed.20
> -rw-r--r-- 1 root root 0 Feb 24 12:45 orphan.scan.orphans.rados.18
> -rw-r--r-- 1 root root 0 Feb 24 12:45 orphan.scan.bck1.rados.11
> -rw-r--r-- 1 root root 0 Feb 24 12:45 orphan.scan.orphans.rados.50
> -rw-r--r-- 1 root root 0 Feb 24 12:45 orphan.scan.orphans.buckets.33
>
>
Hi,
we wanted to have more confidence in the orphans search tool before
providing a functionality that actually remove the objects. One thing
that you can do is create a new pool, copy these objects to the new
pool (as a backup, rados -p --target-pool=
cp ), and remove these objects (rados -p r
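Fleshed out with placeholder pool and object names, that might look like:

$ ceph osd pool create backup.orphans 8
$ rados -p default.rgw.buckets.data --target-pool=backup.orphans cp <obj>
$ rados -p default.rgw.buckets.data rm <obj>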
On Fri, Feb 24, 2017 at 3:59 AM, Marius Vaitiekunas
wrote:
>
>
> On Wed, Feb 22, 2017 at 8:33 PM, Yehuda Sadeh-Weinraub
> wrote:
>>
>> On Wed, Feb 22, 2017 at 6:19 AM, Marius Vaitiekunas
>> wrote:
>> > Hi Cephers,
>> >
>> > We are
On Wed, Feb 22, 2017 at 11:41 AM, Marius Vaitiekunas
wrote:
>
>
> On Wed, Feb 22, 2017 at 8:33 PM, Yehuda Sadeh-Weinraub
> wrote:
>>
>> On Wed, Feb 22, 2017 at 6:19 AM, Marius Vaitiekunas
>> wrote:
>> > Hi Cephers,
>> >
>> > We are
On Wed, Feb 22, 2017 at 6:19 AM, Marius Vaitiekunas
wrote:
> Hi Cephers,
>
> We are testing rgw multisite solution between to DC. We have one zonegroup
> and to zones. At the moment all writes/deletes are done only to primary
> zone.
>
> Sometimes not all the objects are replicated.. We've written
On Tue, Jan 10, 2017 at 1:35 AM, Marius Vaitiekunas
wrote:
> Hi,
>
> I would like to ask ceph developers if there any chance that swift api
> support for rgw is going to be dropped in the future (like in 5 years).
>
> Why am I asking? :)
>
> We were happy openstack glance users on ceph s3 api unti
On Fri, Dec 2, 2016 at 3:18 AM, Yang Joseph wrote:
> Hello,
>
> I would like to only allow the user to read objects in an already existing
> bucket, and not allow users
> to create new buckets. It's supposed to execute the following command:
>
> $ radosgw-admin metadata put user:test3 < ...
> ...
On Mon, Nov 21, 2016 at 3:33 PM, Graham Allan wrote:
>
>
> On 11/21/2016 05:23 PM, Yehuda Sadeh-Weinraub wrote:
>>
>>
>> Seems like bucket was sharded, but for some reason the bucket instance
>> info does not specify that. I don't know why that would happe
On Mon, Nov 21, 2016 at 3:14 PM, Graham Allan wrote:
>
>
> On 11/21/2016 04:44 PM, Yehuda Sadeh-Weinraub wrote:
>>
>> On Mon, Nov 21, 2016 at 2:42 PM, Graham Allan wrote:
>>>
>>> Following up to this (same problem, looking at it with Jeff)...
>>>
On Mon, Nov 21, 2016 at 2:42 PM, Graham Allan wrote:
> Following up to this (same problem, looking at it with Jeff)...
>
> There was definite confusion with the zone/zonegroup/realm/period changes
> during the hammer->jewel upgrade. It's possible that our placement settings
> were misplaced at thi
On Fri, Nov 18, 2016 at 1:14 PM, Jeffrey McDonald wrote:
> Hi,
>
> MSI has an erasure coded ceph pool accessible by the radosgw interface.
> We recently upgraded to Jewel from Hammer. Several days ago, we
> experienced issues with a couple of the rados gateway servers and
> inadvertently deploye
On Mon, Nov 14, 2016 at 9:20 AM, Brian Andrus
wrote:
> Hi William,
>
> "rgw print continue = true" is an apache specific setting, as mentioned
> here:
>
> http://docs.ceph.com/docs/master/install/install-ceph-gateway/#migrating-from-apache-to-civetweb
>
> I do not believe it is needed for civetweb
-8"?><Error><Code>InvalidArgument</Code><BucketName>my-new-bucket-31337</BucketName><RequestId>tx00010-005822ebbd-9951ad8-default</RequestId><HostId>9951ad8-default-default</HostId></Error>
>
After setting the master zone, try running:
$ radosgw-admin period update --commit
Yehuda
>
> Andrei
>
> ----- Original Message -----
>> From: "Yehuda
> "user_keys_pool": ".users",
> "user_email_pool": ".users.email",
> "user_swift_pool": ".users.swift",
> "user_uid_pool": ".users.uid",
> "system_key": {
> "acces
On Tue, Nov 8, 2016 at 3:36 PM, Andrei Mikhailovsky wrote:
> Hello
>
> I am having issues with creating buckets in radosgw. It started with an
> upgrade to version 10.2.x
>
> When I am creating a bucket I get the following error on the client side:
>
>
> boto.exception.S3ResponseError: S3ResponseE
Generally you need to create a new realm, and add the 'default'
zonegroup into it. I think you can achieve this via the 'radosgw-admin
zonegroup modify' command.
The zonegroup and zone can be renamed (their id will still be
'default', but you can change their names).
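Roughly, a sketch (the realm name is a placeholder, and exact flags
vary a bit between releases):

$ radosgw-admin realm create --rgw-realm=myrealm --default
$ radosgw-admin zonegroup modify --rgw-zonegroup=default \
    --realm-id=<realm-id> --master --default
$ radosgw-admin zonegroup rename --rgw-zonegroup=default \
    --zonegroup-new-name=main
$ radosgw-admin zone rename --rgw-zone=default --zone-new-name=main \
    --rgw-zonegroup=main
$ radosgw-admin period update --commit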
Yehuda
On Thu, Oct 6, 2016 at
On Thu, Sep 22, 2016 at 1:52 PM, Paul Nimbley wrote:
> Fairly new to ceph so please excuse any misused terminology. We’re
> currently exploring the use of ceph as a replacement storage backend for an
> existing application. The existing application has 2 requirements which
> seemingly can be met
On Thu, Sep 15, 2016 at 4:53 PM, lewis.geo...@innoscale.net
wrote:
> Hi,
> So, maybe someone has an idea of where to go on this.
>
> I have just setup 2 rgw instances in a multisite setup. They are working
> nicely. I have add a couple of test buckets and some files to make sure it
> works is all.
On Fri, Sep 2, 2016 at 12:54 AM, Yoann Moulin wrote:
> Hello,
>
>> I have an issue with the default zonegroup on my cluster (Jewel 10.2.2), I
>> don't
>> know when this occurred, but I think I did a wrong command during the
>> manipulation of zones and regions. Now the ID of my zonegroup is "defau
multipart and shadow files that are not valid, but none of that actually
>
The tool is not removing data, only reporting about possible leaked rados
objects.
> updates the buckets stats to the correct values. If I had some mechanism
> for forcing that, this would be much less of a bi
On Wed, Aug 3, 2016 at 10:10 AM, Brian Felton wrote:
> This may just be me having a conversation with myself, but maybe this will
> be helpful to someone else.
>
> Having dug and dug and dug through the code, I've come to the following
> realizations:
>
>1. When a multipart upload is complete
On Thu, Jul 28, 2016 at 5:53 PM, Leo Yu wrote:
> hi all,
> I want to get the usage of a user, so I use the command 'radosgw-admin usage
> show', but I cannot get the usage when I use --start-date unless I subtract 16
> hours.
>
> I have rgw on both ceph01 and ceph03 (civetweb, port 7480), and the ceph versi
ed, which makes me think this setting isn't being
> respected in the way I thought it would.
>
It only affects newly created buckets.
Yehuda
> On Thu, Jul 28, 2016 at 9:59 AM, Yehuda Sadeh-Weinraub
> wrote:
>>
>> In order to use indexless (blind) buckets, you need to create a
In order to use indexless (blind) buckets, you need to create a new
placement target, and then set the placement target's index_type param
to 1.
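A sketch of the steps (the placement id is a placeholder; the zone JSON
layout is the Jewel-era one):

$ radosgw-admin zonegroup placement add --rgw-zonegroup=default \
    --placement-id=indexless
$ radosgw-admin zone get > zone.json
# edit zone.json: add an "indexless" entry under placement_pools with
# your data pool and "index_type": 1
$ radosgw-admin zone set < zone.json
$ radosgw-admin period update --commit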
Yehuda
On Tue, Jul 26, 2016 at 10:30 AM, Tyler Bischel
wrote:
> Hi there,
> We are looking at using Ceph (Jewel) for a use case that is very write
>
On Tue, Jun 28, 2016 at 4:12 AM, John Mathew
wrote:
> I am using radosgw as object storage in openstack liberty. I am using ceph
> jewel. Currently I can create public and private containers, but cannot
> change the access of containers, i.e. cannot change a public container to
> private and vice v
On Fri, Jun 10, 2016 at 11:44 AM, Deneau, Tom wrote:
> When I start radosgw, I create the pool .rgw.buckets manually to control
> whether it is replicated or erasure coded and I let the other pools be
> created automatically.
>
> However, I have noticed that sometimes the pools get created with th
On Sun, May 29, 2016 at 11:13 AM, Khang Nguyễn Nhật
wrote:
> Hi,
> I'm having problems with AWS4 in Ceph Jewel when interacting with
> buckets and objects.
> First I will talk briefly about my cluster. My cluster uses Ceph Jewel
> v10.2.1, including: 3 OSDs, 2 monitors and 1 RGW.
> - Information
On Sun, May 29, 2016 at 4:47 AM, Gaurav Bafna wrote:
> Hi Cephers,
>
> I am unable to create a bucket hosting a website in my vstart cluster.
>
> When I do this in boto :
>
> website_bucket.configure_website('index.html','error.html')
>
> I get :
>
> boto.exception.S3ResponseError: S3ResponseError:
On Fri, May 20, 2016 at 9:03 AM, Jonathan D. Proulx wrote:
> Hi All,
>
> I saw the previous thread on this related to
> http://tracker.ceph.com/issues/15597
>
> and Yehuda's fix script
> https://raw.githubusercontent.com/yehudasa/ceph/wip-fix-default-zone/src/fix-zone
>
> Running this seems to hav
On Thu, May 12, 2016 at 12:29 AM, Saverio Proto wrote:
>> While I'm usually not fond of blaming the client application, this is
>> really the swift command line tool issue. It tries to be smart by
>> comparing the md5sum of the object's content with the object's etag,
>> and it breaks with multipa
While I'm usually not fond of blaming the client application, this is
really the swift command line tool issue. It tries to be smart by
comparing the md5sum of the object's content with the object's etag,
and it breaks with multipart objects. A multipart object's etag is
calculated differently (md5sum of t
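For reference, a small Python sketch of how the multipart etag is
computed (the part size used for the upload is an assumption):

import hashlib

def multipart_etag(path, part_size=8 * 1024 * 1024):
    # md5 of the concatenated per-part md5 digests, plus the part count
    digests = []
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(part_size), b''):
            digests.append(hashlib.md5(chunk).digest())
    return hashlib.md5(b''.join(digests)).hexdigest() + '-%d' % len(digests)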
On Fri, May 6, 2016 at 2:27 PM, Sage Weil wrote:
> On Fri, 6 May 2016, Yehuda Sadeh-Weinraub wrote:
>> On Fri, May 6, 2016 at 12:41 PM, Sage Weil wrote:
>> > This PR
>> >
>> > https://github.com/ceph/ceph/pull/8975
>> >
>> > removes
On Fri, May 6, 2016 at 12:41 PM, Sage Weil wrote:
> This PR
>
> https://github.com/ceph/ceph/pull/8975
>
> removes the 'rados cppool' command. The main problem is that the command
> does not make a faithful copy of all data because it doesn't preserve the
> snapshots (and snapshot related
On Tue, Apr 26, 2016 at 6:50 AM, Abhishek Lekshmanan wrote:
>
> Ansgar Jazdzewski writes:
>
>> Hi,
>>
>> After plaing with the setup i got some output that looks wrong
>>
>> # radosgw-admin zone get
>>
>> "placement_pools": [
>> {
>> "key": "default-placement",
>>
I managed to reproduce the issue, and there seem to be multiple
problems. Specifically we have an issue when upgrading a default
cluster that hasn't had a zone (and region) explicitly configured
before. There is another bug that I found
(http://tracker.ceph.com/issues/15597) that makes things even
(sorry for resubmission, adding ceph-users)
On Mon, Apr 25, 2016 at 9:47 AM, Richard Chan
wrote:
> Hi Yehuda
>
> I created a test 3xVM setup with Hammer and one radosgw on the (separate)
> admin node; creating one user and buckets.
>
> I upgraded the VMs to jewel and created a new radosgw on one
On Sat, Apr 23, 2016 at 6:22 AM, Richard Chan
wrote:
> Hi Cephers,
>
> I upgraded to Jewel and noted there is a massive radosgw multisite rework
> in the release notes.
>
> Can Jewel radosgw be configured to present existing Hammer buckets?
> On a test system, jewel didn't recognise my Hammer buckets
On Tue, Mar 15, 2016 at 11:36 PM, Pavan Rallabhandi
wrote:
> Hi,
>
> I find this discussed here before, but couldn't find any solution,
> hence the mail. In RGW, for a bucket holding objects in the range of ~
> millions, one can find it takes forever to delete the bucket (via
> radosgw-admi
On Fri, Mar 4, 2016 at 7:26 AM, Ritter Sławomir
wrote:
>> From: Robin H. Johnson [mailto:robb...@gentoo.org]
>> Sent: Friday, March 04, 2016 12:40 AM
>> To: Ritter Sławomir
>> Cc: ceph-us...@ceph.com; ceph-devel
>> Subject: Re: [ceph-users] Problem: silently corrupted RadosGW objects caused
>> by
On Thu, Feb 25, 2016 at 7:17 AM, Ritter Sławomir
wrote:
> Hi,
>
>
>
> We have two Ceph clusters running on Dumpling 0.67.11 and some of our
> "multipart objects" are incomplete. It seems that some slow requests could
> cause corruption of related S3 objects. Moreover, GETs for those objects are
> w
On Tue, Mar 1, 2016 at 7:23 AM, Daniel Gryniewicz wrote:
> On 02/28/2016 08:36 PM, David Wang wrote:
>>
>> Hi All,
>> How is the progress of NFS on RGW? Was it released in Infernalis? The
>> contents of NFS on RGW is
>> http://tracker.ceph.com/projects/ceph/wiki/RGW_-_NFS
>>
>>
>
> The FSAL has
On Wed, Feb 24, 2016 at 5:48 PM, Ben Hines wrote:
> Any idea what is going on here? I get these intermittently, especially with
> very large file.
>
> The client is doing RANGE requests on this >51 GB file, incrementally
> fetching later chunks.
>
> 2016-02-24 16:30:59.669561 7fd33b7fe700 1 =