About deleting container/bucket from radosgw

2012-07-14 Thread 蔡權昱
Hello everyone,

I'm trying to use radosgw to provide S3/Swift storage.

Everything works fine, but I noticed something strange
after deleting a container/bucket from radosgw.

These are the commands I ran:

1. check whether the pools are empty
$ rados --pool=.rgw ls | grep -v ^\. ; echo ; rados
--pool=.rgw.buckets ls


2. create a container buck1
$ swift -A http://volume/auth -U account -K key post buck1

3. show created objs in ceph
$ rados --pool=.rgw ls | grep -v ^\. ; echo ; rados
--pool=.rgw.buckets ls
9352.10
buck1

.dir.9352.10

4. delete buck1
swift -A http://volume/auth -U account -K key delete buck1

5. show objs in ceph
$ rados --pool=.rgw ls | grep -v ^\. ; echo ; rados
--pool=.rgw.buckets ls
9352.10

The object '9352.10' seems to be left in the pool forever?
I have tried creating and deleting a bucket via s3lib, too,
and the result is the same.

However, everything still functions fine;
I just want to know whether this is normal.

Thanks.

--
Chuanyu Tsai


Re: About deleting container/bucket from radosgw

2012-07-14 Thread Yehuda Sadeh
On Sat, Jul 14, 2012 at 6:45 AM, 蔡權昱 chua...@cs.nctu.edu.tw wrote:
 Hello everyone,

 I'm trying to use radosgw to provide S3/Swift storage.

 Everything works fine, but I noticed something strange
 after deleting a container/bucket from radosgw.

 These are the commands I ran:

 1. check whether the pools are empty
 $ rados --pool=.rgw ls | grep -v ^\. ; echo ; rados
 --pool=.rgw.buckets ls
 

 2. create a container buck1
 $ swift -A http://volume/auth -U account -K key post buck1

 3. show created objs in ceph
 $ rados --pool=.rgw ls | grep -v ^\. ; echo ; rados
 --pool=.rgw.buckets ls
 9352.10
 buck1
 
 .dir.9352.10

 4. delete buck1
 swift -A http://volume/auth -U account -K key delete buck1

 5. show objs in ceph
 $ rados --pool=.rgw ls | grep -v ^\. ; echo ; rados
 --pool=.rgw.buckets ls
 9352.10

 The object '9352.10' seems to be left in the pool forever?
 I have tried creating and deleting a bucket via s3lib, too,
 and the result is the same.

 However, everything still functions fine;
 I just want to know whether this is normal.


This is normal, though we modified this behavior recently, so you
shouldn't see it in future versions. We used to keep the bucket
index object, keyed by the bucket instance id, for archival and also
for usage processing.
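
(For what it's worth - purely as an illustration, not a recommendation -
if you did not care about that archived index/usage data, the leftover
object from the listing above could be removed by hand with a plain
rados delete:

  $ rados --pool=.rgw rm 9352.10

though with the newer behavior this cleanup should not be necessary.)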

Yehuda


Re: ceph status reporting non-existing osd

2012-07-14 Thread Andrey Korolyov
On Fri, Jul 13, 2012 at 9:09 PM, Sage Weil s...@inktank.com wrote:
 On Fri, 13 Jul 2012, Gregory Farnum wrote:
 On Fri, Jul 13, 2012 at 1:17 AM, Andrey Korolyov and...@xdel.ru wrote:
  Hi,
 
  Recently I've reduced my test cluster from 6 to 4 osds at ~60% usage on
  a six-node setup,
  and I have removed a bunch of rbd objects during recovery to avoid
  overfilling.
  Right now I'm constantly receiving a warning about a near-full state on
  a non-existent osd:
 
 health HEALTH_WARN 1 near full osd(s)
 monmap e3: 3 mons at
  {0=192.168.10.129:6789/0,1=192.168.10.128:6789/0,2=192.168.10.127:6789/0},
  election epoch 240, quorum 0,1,2 0,1,2
 osdmap e2098: 4 osds: 4 up, 4 in
  pgmap v518696: 464 pgs: 464 active+clean; 61070 MB data, 181 GB
  used, 143 GB / 324 GB avail
 mdsmap e181: 1/1/1 up {0=a=up:active}
 
  HEALTH_WARN 1 near full osd(s)
  osd.4 is near full at 89%
 
  Needless to say, osd.4 remains only in ceph.conf and is no longer in the crushmap.
  The reduction was done online, i.e. without restarting the entire cluster.

 Whoops! It looks like Sage has written some patches to fix this, but
 for now you should be good if you just update your ratios to a larger
 number, and then bring them back down again. :)

 Restarting ceph-mon should also do the trick.

 Thanks for the bug report!
 sage

Should I restart the mons simultaneously? Restarting them one by one has no
effect, and neither does filling the data pool up to ~95 percent (btw, when I
deleted that 50 GB file on cephfs, the mds got stuck permanently and the usage
remained the same until I dropped and recreated the data pool - I hope that's
one of the known POSIX-layer bugs). I also deleted the entry from the config
and then restarted the mons, with no effect. Any suggestions?
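
(For reference, the ratio-bump workaround Greg describes might look roughly
like the following on an argonaut-era cluster; the exact ratio values are
illustrative, and mon.0 is just the first monitor from the monmap above:

  $ ceph pg set_nearfull_ratio 0.95
  ... wait for the warning to clear ...
  $ ceph pg set_nearfull_ratio 0.85
  $ service ceph restart mon.0

assuming the pg set_nearfull_ratio command is available in this release.)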


Re: ceph init script does not start

2012-07-14 Thread Sage Weil
On Sat, 14 Jul 2012, Xiaopong Tran wrote:
 Hi,
 
 I'm getting this funny issue. I had set up two test clusters, and
 mkcephfs and the ceph startup script worked just fine. We are
 now ready to go to production; we have 6 nodes with 10 disks
 each, one osd per disk, plus 3 mds and 3 mons.
 
 The mkcephfs script ran without problems, and everything was created
 properly (see the attached log file). However, when I run
 
 /etc/init.d/ceph start
 
 nothing happens, not even a line of output, neither on the console
 nor in the system log.
 
 But I can manually start up each individual osd, mds, and mon.

This is usually related to the 'host = ...' lines in ceph.conf.  They need 
to match the output of the `hostname` command in order for that daemon to 
be automatically started or stopped.

If that appears correct, you can try running the script with -x (sh -x
/etc/init.d/ceph start) to see exactly what it is
getting/comparing for the host.
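
For illustration (the hostname s11 is taken from your output below, and
osd.0 is just a placeholder for whichever daemons live on that node), the
match needs to look like this:

  $ hostname
  s11

  # ceph.conf on that node
  [osd.0]
          host = s11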

sage


 
 Here is more information:
 
 root@s11:~# ceph health
 HEALTH_OK
 root@s11:~# ceph -s
health HEALTH_OK
monmap e1: 3 mons at
 {a=10.1.0.11:6789/0,b=10.1.0.13:6789/0,c=10.1.0.15:6789/0}, election epoch 6,
 quorum 0,1,2 a,b,c
osdmap e24: 60 osds: 60 up, 60 in
 pgmap v143: 11520 pgs: 11520 active+clean; 8730 bytes data, 242 GB used,
 83552 GB / 83794 GB avail
mdsmap e6: 1/1/1 up {0=c=up:active}, 2 up:standby
 
 root@s11:~# ceph -v
 ceph version 0.48argonaut (commit:c2b20ca74249892c8e5e40c12aa14446a2bf2030)
 root@s11:~#
 
 The only difference this time, compared to when I set up
 the test clusters, is that for the test clusters
 I started with 0.47.3 (and one of the clusters did
 a rolling upgrade to 0.48), whereas this time all machines
 started fresh with 0.48.
 
 This is on Debian wheezy.
 
 Can someone give a hint about where else I can look?
 
 Thanks
 
 Xiaopong
 


Re: About deleting container/bucket from radosgw

2012-07-14 Thread 蔡權昱
2012/7/14 Yehuda Sadeh yeh...@inktank.com:
 On Sat, Jul 14, 2012 at 6:45 AM, 蔡權昱 chua...@cs.nctu.edu.tw wrote:
 Hello everyone,

 I'm trying to use radosgw to provide S3/Swift storage.

 Everything works fine, but I noticed something strange
 after deleting a container/bucket from radosgw.

 These are the commands I ran:

 1. check whether the pools are empty
 $ rados --pool=.rgw ls | grep -v ^\. ; echo ; rados
 --pool=.rgw.buckets ls
 

 2. create a container buck1
 $ swift -A http://volume/auth -U account -K key post buck1

 3. show created objs in ceph
 $ rados --pool=.rgw ls | grep -v ^\. ; echo ; rados
 --pool=.rgw.buckets ls
 9352.10
 buck1
 
 .dir.9352.10

 4. delete buck1
 swift -A http://volume/auth -U account -K key delete buck1

 5. show objs in ceph
 $ rados --pool=.rgw ls | grep -v ^\. ; echo ; rados
 --pool=.rgw.buckets ls
 9352.10

 The object '9352.10' seems to be left in the pool forever?
 I have tried creating and deleting a bucket via s3lib, too,
 and the result is the same.

 However, everything still functions fine;
 I just want to know whether this is normal.


 This is normal, though we modified this behavior recently, so you
 shouldn't see it in future versions. We used to keep the bucket
 index object, keyed by the bucket instance id, for archival and also
 for usage processing.
OK, I see.

On the other hand, why doesn't radosgw-admin provide a bucket rm call?
When I want to user rm a user, it seems I need to bucket unlink
all buckets belonging to the user, but the buckets are still there!
Then other users can not create a bucket with the same name
anymore (via s3/swift).

So, is the only way to delete the buckets belonging to that user to use an
s3/swift client before running radosgw-admin user rm?

Or can I remove all the buckets, and then the user, using only radosgw-admin?

Thanks for your help!

Chuanyu

 Yehuda


Re: About deleting container/bucket from radosgw

2012-07-14 Thread Yehuda Sadeh
On Sat, Jul 14, 2012 at 8:04 AM, 蔡權昱 chua...@cs.nctu.edu.tw wrote:
...

 On the other hand, why doesn't radosgw-admin provide a bucket rm call?
 When I want to user rm a user, it seems I need to bucket unlink
 all buckets belonging to the user, but the buckets are still there!
 Then other users can not create a bucket with the same name
 anymore (via s3/swift).

Yeah, currently radosgw-admin doesn't provide a way to remove the
entire contents of a bucket. I opened an issue for that (#2786), and
it also ties into another issue (#2499: the ability to remove a user
without removing its data first). I think we can provide tools for
manual data removal, but we may also want to explore doing that as
part of a bigger garbage collection scheme that we'll soon be working
on.

 So, is the only way to delete the buckets belonging to that user to use an
 s3/swift client before running radosgw-admin user rm?

 Or can I remove all the buckets, and then the user, using only radosgw-admin?

You can unlink the buckets and link them to a different user (so that
you don't just leak the data), and then remove the user.
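
For example (the user and bucket names here are placeholders, and the exact
flags may vary a bit between versions), something along these lines:

  $ radosgw-admin bucket unlink --bucket=buck1 --uid=olduser
  $ radosgw-admin bucket link --bucket=buck1 --uid=someotheruser
  $ radosgw-admin user rm --uid=olduser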

Yehuda


Re: About deleting container/bucket from radosgw

2012-07-14 Thread 蔡權昱
2012/7/15 Yehuda Sadeh yeh...@inktank.com:
 On Sat, Jul 14, 2012 at 8:04 AM, 蔡權昱 chua...@cs.nctu.edu.tw wrote:
 ...

 On the other hand, why doesn't radosgw-admin provide a bucket rm call?
 When I want to user rm a user, it seems I need to bucket unlink
 all buckets belonging to the user, but the buckets are still there!
 Then other users can not create a bucket with the same name
 anymore (via s3/swift).

 Yeah, currently radosgw-admin doesn't provide a way to remove the
 entire contents of a bucket. I opened an issue for that (#2786), and
 it also ties into another issue (#2499: the ability to remove a user
 without removing its data first). I think we can provide tools for
 manual data removal, but we may also want to explore doing that as
 part of a bigger garbage collection scheme that we'll soon be working
 on.
It's great to hear that rgw will have garbage collection!

 So, is the only way to delete the buckets belonging to that user to use an
 s3/swift client before running radosgw-admin user rm?

 Or can I remove all the buckets, and then the user, using only radosgw-admin?

 You can unlink the buckets and link them to a different user (so that
 you don't just leak the data), and then remove the user.
OK, that's a good approach for now, thanks for your advice :)

Chuanyu

 Yehuda



-- 
蔡權昱 Tsai Chuanyu