Ben,

Works fine as far as I can see:

[root@273aa9f2ee9f /]# s3cmd mb s3://test
Bucket 's3://test/' created

[root@273aa9f2ee9f /]# s3cmd put /etc/hosts s3://test
upload: '/etc/hosts' -> 's3://test/hosts'  [1 of 1]
 196 of 196   100% in    0s   404.87 B/s  done

[root@273aa9f2ee9f /]# s3cmd ls s3://test
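(no output here is expected - with no index, the bucket cannot be listed)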

[root@273aa9f2ee9f /]# ls -al /tmp/hosts
ls: cannot access /tmp/hosts: No such file or directory

[root@273aa9f2ee9f /]# s3cmd get s3://test/hosts /tmp/hosts
download: 's3://test/hosts' -> '/tmp/hosts'  [1 of 1]
 196 of 196   100% in    0s  2007.56 B/s  done

[root@273aa9f2ee9f /]# cat /tmp/hosts
172.17.0.4      273aa9f2ee9f

[root@ceph-mon01 ~]# radosgw-admin bucket rm --bucket=test --purge-objects
[root@ceph-mon01 ~]# 

[root@273aa9f2ee9f /]# s3cmd ls 
[root@273aa9f2ee9f /]# 
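
To double-check at the RADOS level that the purge also removed the data
objects, something like the following should come back empty (the pool name
assumes a default Jewel layout):

rados -p default.rgw.buckets.data ls | grep hosts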

>> If not, I imagine rados could be used to delete them manually by prefix.
That would be a pain with more than a few million objects :)
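
If it ever came to that, a rough sketch: look up the bucket marker with
'radosgw-admin bucket stats --bucket=NAME', then filter the data pool by that
prefix (pool name and marker below are placeholders; multipart/tail objects
would need extra care):

rados -p default.rgw.buckets.data ls | grep "^<bucket-marker>_" | \
    while read obj; do rados -p default.rgw.buckets.data rm "$obj"; done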

Stas

> On Sep 21, 2016, at 9:10 PM, Ben Hines <bhi...@gmail.com> wrote:
> 
> Thanks. Will try it out once we get on Jewel.
> 
> Just curious, does bucket deletion with --purge-objects work via 
> radosgw-admin with the no-index option?
> If not, I imagine rados could be used to delete them manually by prefix.
> 
> 
> On Sep 21, 2016 6:02 PM, "Stas Starikevich" <stas.starikev...@gmail.com> wrote:
> Hi Ben,
> 
> Since 'Jewel', RadosGW supports blind buckets.
> To enable the blind-bucket configuration I used:
> 
> radosgw-admin zone get --rgw-zone=default > default-zone.json
> # change index_type from 0 to 1 (1 = indexless) for the placement target
> vi default-zone.json
> radosgw-admin zone set --rgw-zone=default --infile default-zone.json
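> 
> For reference, the relevant bit of default-zone.json looks roughly like this 
> after the edit (pool names are from a default install and may differ):
> 
>     "placement_pools": [
>         { "key": "default-placement",
>           "val": { "index_pool": "default.rgw.buckets.index",
>                    "data_pool": "default.rgw.buckets.data",
>                    "data_extra_pool": "default.rgw.buckets.non-ec",
>                    "index_type": 1 } }
>     ],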
> 
> To apply the changes you have to restart all the RGW daemons. Then all newly 
> created buckets will have no index (bucket listing will return empty output), 
> but GET/PUT works perfectly.
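> 
> On a systemd-based Jewel node the restart would be something like this (the 
> instance name after 'rgw.' depends on how the gateway was deployed):
> 
> systemctl restart ceph-radosgw@rgw.$(hostname -s)
> 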
> In my tests there is no performance difference between SSD-backed indexes 
> and the 'blind bucket' configuration.
> 
> Stas
> 
> > On Sep 21, 2016, at 2:26 PM, Ben Hines <bhi...@gmail.com> wrote:
> >
> > Nice, thanks! Must have missed that one. It might work well for our use 
> > case since we don't really need the index.
> >
> > -Ben
> >
> > On Wed, Sep 21, 2016 at 11:23 AM, Gregory Farnum <gfar...@redhat.com> wrote:
> > On Wednesday, September 21, 2016, Ben Hines <bhi...@gmail.com> wrote:
> > Yes, 200 million is way too big for a single Ceph RGW bucket. We 
> > encountered this problem early on and sharded our buckets into 20 buckets, 
> > each of which has a sharded bucket index with 20 shards.
> >
> > Unfortunately, enabling the sharded RGW index requires recreating the 
> > bucket and all objects.
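> > 
> > For reference, sharded indexes for newly created buckets can be enabled 
> > with a ceph.conf setting along these lines (the client section name here 
> > is just an example):
> > 
> > [client.rgw.gateway1]
> > rgw_override_bucket_index_max_shards = 20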
> >
> > The fact that Ceph uses Ceph itself for the bucket indexes makes RGW less 
> > reliable in our experience. Instead of depending on one object, you're 
> > depending on two: the index and the object itself. If the cluster has any 
> > issue with the index, the fact that it blocks access to the object itself 
> > is very frustrating. If we could retrieve/put objects from/into RGW 
> > without hitting the index at all, we would - we don't need to list our 
> > buckets.
> >
> > I don't know the details or which release it went into, but indexless 
> > buckets are now a thing -- check the release notes or search the lists! :)
> > -Greg
> >
> >
> >
> > -Ben
> >
> > On Tue, Sep 20, 2016 at 1:57 AM, Wido den Hollander <w...@42on.com> wrote:
> >
> > > On 20 September 2016 at 10:55, Василий Ангапов <anga...@gmail.com> wrote:
> > >
> > >
> > > Hello,
> > >
> > > Is there any way to copy the RGW bucket index to another Ceph node to
> > > lower the downtime of RGW? For now I have a huge bucket with 200
> > > million files, and its backfilling blocks RGW completely for an hour
> > > and a half, even with a 10G network.
> > >
> >
> > No, not really. What you really want is the bucket sharding feature.
> >
> > So what you can do is enable sharding, create a NEW bucket, and copy 
> > over the objects.
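> > 
> > Something like a recursive server-side copy with s3cmd should work for the 
> > migration (bucket names are placeholders):
> > 
> > s3cmd cp --recursive s3://oldbucket/ s3://newbucket/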
> >
> > Afterwards you can remove the old bucket.
> >
> > Wido
> >
> > > Thanks!
> 

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
