[ceph-users] radosGW namespace

2013-09-15 Thread Fuchs, Andreas (SwissTXT)
Hi Ceph Users

We set up a radosgw per the Ceph documentation. While everything works fine, we
found out that different access_keys share the same bucket namespace.
So when access_key A creates a bucket "test", access_key B cannot create a
bucket with the name "test".
Is it possible to separate the accounts so that they each have their own namespace?
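
(For reference, the two users were created along these lines; the uid and
display name below are just placeholders:)

radosgw-admin user create --uid=usera --display-name="User A"
radosgw-admin user create --uid=userb --display-name="User B"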

Many thanks
Andi


Re: [ceph-users] Rbd cp empty block

2013-09-15 Thread Guangliang Zhao
On Mon, Sep 16, 2013 at 09:20:29AM +0800, 王根意 wrote:
> Hi all:
> 
> I have a 30G RBD block device used as a virtual machine disk, with Ubuntu
> 12.04 already installed. About 1G of space is used.
> 
> When I want to deploy a VM, I run "rbd cp". The problem is that it copies the
> full 30G instead of 1G, and this takes a lot of time.
> 
> Any ideas? I just want to make VM deployment faster.

The "rbd clone" command may be what you want ;-)
http://ceph.com/docs/master/man/8/rbd/
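
A rough sketch of the layering workflow (pool and image names are placeholders,
and the base image needs to be a format 2 image so that it can be cloned):

# snapshot the installed base image once and protect the snapshot
rbd snap create rbd/ubuntu-base@gold
rbd snap protect rbd/ubuntu-base@gold
# clones are copy-on-write, so creating a new VM disk is nearly instant
rbd clone rbd/ubuntu-base@gold rbd/vm-01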

> 
> -- 
> OPS 王根意



-- 
Best regards,
Guangliang


[ceph-users] ceph-deploy and ceph-disk cannot prepare disks

2013-09-15 Thread Andy Schuette
First-time list poster here, and I'm pretty stumped on this one. My
problem hasn't really been discussed on the list before, so I'm hoping
that I can get this figured out since it's stopping me from learning
more about Ceph. I've tried this with the journal on the same disk and
on a separate SSD, and hit the same error both times.

I'm using ceph-deploy 1.2.3, and ceph is version 0.67.2 on the osd
node. OS is Ubuntu 13.04, kernel is 3.8.0-29, architecture is x86_64.

Here is my log from ceph-disk prepare:

ceph-disk prepare /dev/sdd
INFO:ceph-disk:Will colocate journal with data on /dev/sdd
Information: Moved requested sector from 34 to 2048 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
Information: Moved requested sector from 2097153 to 2099200 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
meta-data=/dev/sdd1  isize=2048   agcount=4, agsize=122029061 blks
 =   sectsz=512   attr=2, projid32bit=0
data =   bsize=4096   blocks=488116241, imaxpct=5
 =   sunit=0  swidth=0 blks
naming   =version 2  bsize=4096   ascii-ci=0
log  =internal log   bsize=4096   blocks=238338, version=2
 =   sectsz=512   sunit=0 blks, lazy-count=1
realtime =none   extsz=4096   blocks=0, rtextents=0
umount: /var/lib/ceph/tmp/mnt.X21v8V: device is busy.
(In some cases useful info about processes that use
 the device is found by lsof(8) or fuser(1))
ceph-disk: Unmounting filesystem failed: Command '['/bin/umount',
'--', '/var/lib/ceph/tmp/mnt.X21v8V']' returned non-zero exit status 1

And the log from ceph-deploy shows the same failure (truncated here, since it's
identical for all three disks):

2013-09-02 11:42:47,658 [ceph_deploy.osd][DEBUG ] Preparing cluster
ceph disks ACU1:/dev/sdd:/dev/sdc1 ACU1:/dev/sde:/dev/sdc2
ACU1:/dev/sdf:/dev/sdc3
2013-09-02 11:42:49,855 [ceph_deploy.osd][DEBUG ] Deploying osd to ACU1
2013-09-02 11:42:49,966 [ceph_deploy.osd][DEBUG ] Host ACU1 is now
ready for osd use.
2013-09-02 11:42:49,967 [ceph_deploy.osd][DEBUG ] Preparing host ACU1
disk /dev/sdd journal /dev/sdc1 activate False
2013-09-02 11:43:03,489 [ceph_deploy.osd][ERROR ] ceph-disk-prepare
--cluster ceph -- /dev/sdd /dev/sdc1 returned 1
Information: Moved requested sector from 34 to 2048 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
meta-data=/dev/sdd1  isize=2048   agcount=4, agsize=122094597 blks
 =   sectsz=512   attr=2, projid32bit=0
data =   bsize=4096   blocks=488378385, imaxpct=5
 =   sunit=0  swidth=0 blks
naming   =version 2  bsize=4096   ascii-ci=0
log  =internal log   bsize=4096   blocks=238466, version=2
 =   sectsz=512   sunit=0 blks, lazy-count=1
realtime =none   extsz=4096   blocks=0, rtextents=0

WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the
same device as the osd data
umount: /var/lib/ceph/tmp/mnt.68dFXq: device is busy.
(In some cases useful info about processes that use
 the device is found by lsof(8) or fuser(1))
ceph-disk: Unmounting filesystem failed: Command '['/bin/umount',
'--', '/var/lib/ceph/tmp/mnt.68dFXq']' returned non-zero exit status 1

When I go to the host machine I can umount all day with no indication
of anything holding up the process, and lsof isn't yielding anything
useful for me. Any pointers to what is going wrong would be
appreciated.
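
(The sort of checks in question, run on the OSD host against the temporary
mount path from the log above, would be along these lines:)

# list any processes with files open under the temporary mount point
lsof +D /var/lib/ceph/tmp/mnt.X21v8V
# show which processes are using the mounted filesystem
fuser -vm /var/lib/ceph/tmp/mnt.X21v8V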


[ceph-users] Rbd cp empty block

2013-09-15 Thread 王根意
Hi all:

I have a 30G RBD block device used as a virtual machine disk, with Ubuntu 12.04
already installed. About 1G of space is used.

When I want to deploy a VM, I run "rbd cp". The problem is that it copies the
full 30G instead of 1G, and this takes a lot of time.

Any ideas? I just want to make VM deployment faster.
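
(The copy itself is just a plain image copy, roughly like this; pool and image
names are placeholders:)

rbd cp rbd/ubuntu-base rbd/vm-01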

-- 
OPS 王根意


Re: [ceph-users] mds stuck in rejoin

2013-09-15 Thread Gregory Farnum
What's the output of "ceph -s", and have you tried running the MDS
with any logging enabled that we can check out?
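
(For what it's worth, a typical way to turn up MDS logging is something like
the following in ceph.conf on the MDS node before restarting it; the exact
debug levels are just a common choice:)

[mds]
    debug mds = 20
    debug ms = 1
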
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


On Sun, Sep 15, 2013 at 8:24 AM, Serge Slipchenko
 wrote:
> Hi,
>
> I'm testing ceph 0.67.3 (408cd61584c72c0d97b774b3d8f95c6b1b06341a) under
> load.
> My configuration has 2 MDS, 3 MON and 16 OSD; the MON and MDS daemons are on
> separate servers, and the OSDs are distributed across 8 servers.
>
> Three servers with several processes each read and write via libcephfs.
>
> Restarting the active MDS leads to an infinite rejoin and complete
> inaccessibility of the CephFS.
>
> It seems related to the bug http://tracker.ceph.com/issues/4637
>
> --
> Kind regards, Serge Slipchenko


Re: [ceph-users] Module rbd not found on Ubuntu 13.04

2013-09-15 Thread Alek Paunov

On 11.09.2013 20:05, Prasanna Gholap wrote:
> By the link about AWS, rbd.ko isn't included yet in the Linux AWS kernel.
> I'll try to build the kernel manually and proceed with rbd.
> Thanks for your help.

If your requirement is a modern Linux (not Ubuntu specifically), you can use
Fedora (the AMIs are built with the unmodified Fedora kernel, which of course
includes a recent rbd module):


http://fedoraproject.org/en/get-fedora-options#clouds
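
(A quick sanity check on whatever AMI you end up with, to confirm the running
kernel ships the rbd module:)

# confirm the module is available and load it
modinfo rbd
sudo modprobe rbd
lsmod | grep rbd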



Re: [ceph-users] problem with ceph-deploy hanging

2013-09-15 Thread Gruher, Joseph R

>From: Gruher, Joseph R
>>From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
>>On Fri, Sep 13, 2013 at 5:06 PM, Gruher, Joseph R
>> wrote:
>>
>>> root@cephtest01:~# ssh cephtest02 wget -q -O-
>>> 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' |
>>> apt-key add -
>>>
>>> gpg: no valid OpenPGP data found.
>>>
>>
>>This is clearly part of the problem. Can you try getting to this with
>>something other than wget (e.g. curl) ?
>
>OK, I am seeing the problem here after turning off quiet mode on wget.  You
>can see in the wget output that part of the URL is lost when executing the
>command over SSH.  However, I'm still unsure how to fix this; I've tried a
>number of ways of quoting the command and this keeps happening.
>
>SSH command leads to incomplete URL and returns web page (note URL
>truncated at ceph.git):
>
>root@cephtest01:~# ssh cephtest02 sudo wget -O-
>'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
>--2013-09-13 16:37:06--  https://ceph.com/git/?p=ceph.git
>
>When run locally complete URL returns PGP key:
>
>root@cephtest02:/# wget -O-
>'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
>--2013-09-13 16:37:30--
>https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

I was able to show that the wget command does succeed if properly quoted (it
has to be double-enclosed in quotes, since SSH strips the outer set and the
remote shell would otherwise treat the ";" in the URL as a command separator),
as does the "apt-key add" if prefaced with "sudo".
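
(To make the difference concrete, both forms are taken from the quoted exchange
above and the working command further down; only the quoting changes:)

# the remote shell sees an unquoted URL and cuts it at the first ';'
ssh cephtest02 wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
# double quotes are stripped locally, so the inner single quotes and the pipe
# reach the remote shell intact
ssh cephtest02 "wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -"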

So, I'm still stuck on the problem of ceph-deploy hanging at the point shown
below.  Any tips on how to debug further?  Has anyone else experienced a 
similar problem?  Is it possible to enable any additional output from 
ceph-deploy?  Is there any documentation on how to deploy without using 
"ceph-deploy install"?  Thanks!

Here's where it hangs:

root@cephtest01:~# ceph-deploy install cephtest02 cephtest03 cephtest04 
[ceph_deploy.install][DEBUG ] Installing stable version dumpling on cluster 
ceph hosts cephtest02 cephtest03 cephtest04
[ceph_deploy.install][DEBUG ] Detecting platform for host cephtest02 ...
[ceph_deploy.install][INFO  ] Distro info: Ubuntu 12.04 precise
[cephtest02][INFO  ] installing ceph on cephtest02
[cephtest02][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive 
apt-get -q install --assume-yes ca-certificates
[cephtest02][INFO  ] Reading package lists...
[cephtest02][INFO  ] Building dependency tree...
[cephtest02][INFO  ] Reading state information...
[cephtest02][INFO  ] ca-certificates is already the newest version.
[cephtest02][INFO  ] 0 upgraded, 0 newly installed, 0 to remove and 4 not 
upgraded.
[cephtest02][INFO  ] Running command: wget -q -O- 
'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | apt-key 
add -

Here's the command it seems to be hanging on, which succeeds when run manually
on the command line:

root@cephtest01:~# ssh cephtest02 wget -q -O- 
"'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo 
apt-key add -"
OK
root@cephtest01:~#

Thanks,
Joe


[ceph-users] mds stuck in rejoin

2013-09-15 Thread Serge Slipchenko
Hi,

I'm testing ceph 0.67.3 (408cd61584c72c0d97b774b3d8f95c6b1b06341a) under
load.
My configuration has 2 MDS, 3 MON and 16 OSD; the MON and MDS daemons are on
separate servers, and the OSDs are distributed across 8 servers.

Three servers with several processes each read and write via libcephfs.

Restarting the active MDS leads to an infinite rejoin and complete
inaccessibility of the CephFS.

It seems related to the bug http://tracker.ceph.com/issues/4637

-- 
Kind regards, Serge Slipchenko