Re: [ceph-users] Rbd cp empty block

2013-09-16 Thread Guangliang Zhao
On Mon, Sep 16, 2013 at 09:20:29AM +0800, 王根意 wrote:
 Hi all:
 
 I have a 30G rbd block device as a virtual machine disk, with Ubuntu 12.04
 already installed. About 1G of space is used.
 
 When I wanted to deploy a VM, I did an rbd cp. Then the problem came: it copied 30G
 of data instead of 1G, and this action takes a lot of time.
 
 Any ideas? I just want to make it faster to deploy VMs.

The rbd clone command may be what you want ;-)
http://ceph.com/docs/master/man/8/rbd/
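
A rough sketch of the copy-on-write clone workflow (pool and image names below are
placeholders): the parent image must be a format 2 image, and its snapshot has to be
protected before it can be cloned.

  $ rbd snap create mypool/golden-image@base           # snapshot the installed image
  $ rbd snap protect mypool/golden-image@base          # clones require a protected snapshot
  $ rbd clone mypool/golden-image@base mypool/vm-001   # near-instant, copy-on-write

Each clone only stores the blocks it changes; rbd flatten can later detach a clone
from its parent if that is ever needed.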

 
 -- 
 OPS 王根意

 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


-- 
Best regards,
Guangliang
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] radosGW namespace

2013-09-16 Thread Fuchs, Andreas (SwissTXT)
Hi Ceph Users

We set up a radosgw per the Ceph documentation. While everything works fine, we found out that 
different access keys share the same bucket namespace.
So when access key A creates a bucket named test, access key B cannot create a 
bucket with the name test.
Is it possible to separate the accounts so that they each have their own namespace?

Many thanks
Andi
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] how to use radosgw admin ops api

2013-09-16 Thread
 Hi,

I'm currently trying to test the radosgw admin ops API, which is documented at 
http://ceph.com/docs/master/radosgw/adminops/
My user's caps are:
caps: [{ "type": "usages", "perm": "*" },
       { "type": "users", "perm": "*" }]

Then I ran the command: curl -XGET http://kp/admin/usage?format=json 
-d'{uid=johdoe}'
and it returned 403.
   

Can you help me?
Thanks a lot!
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] how to use radosgw admin ops api

2013-09-16 Thread
 Hi,

I'm currently trying to test the radosgw admin ops API, which is documented at 
http://ceph.com/docs/master/radosgw/adminops/
My user's caps are:
caps: [{ "type": "usages", "perm": "*" },
       { "type": "users", "perm": "*" }]

Then I ran the command: curl -XGET http://kp/admin/usage?format=json 
-d'{uid=johdoe}'
and it returned 403.
The command curl -XGET http://kp/admin/user?format=json 
-d'{uid=johdoe}' returns 405.
Does the admin ops API have any examples?
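
A rough sketch of what usually has to change (the uid below is a placeholder): the cap
type is normally "usage" (singular) rather than "usages", the parameters go in the query
string rather than in a -d body, and the request has to carry a valid S3-style
Authorization signature computed from the user's access/secret key -- an unsigned curl
call is rejected with 403.

  # grant the caps with radosgw-admin
  $ radosgw-admin caps add --uid=admin-user --caps="usage=read,write; users=read,write"

  # conceptual shape of a signed admin ops request:
  #   GET /admin/usage?uid=<uid>&format=json
  #   Host: kp
  #   Authorization: AWS <access_key>:<signature>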

Can you help me?
Thanks a lot!
 ___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] radosgw admin api

2013-09-16 Thread
Hello,
  I do not understand what is meant by "An admin API request will be done on a 
URI that starts with the configurable 'admin' resource entry point" in 
http://ceph.com/docs/master/radosgw/adminops/ . Can anyone explain what I should 
do in my Ceph setup? Does it mean configuring ceph.conf?
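
As a rough illustration (the section name below is just the common convention): the
"admin" entry point is the leading path segment that radosgw serves the admin API under,
and it is configurable in the gateway's section of ceph.conf via "rgw admin entry". The
default is already "admin", so usually nothing needs to be changed for requests like
GET /admin/usage to work.

  [client.radosgw.gateway]
      rgw admin entry = admin     # admin API is then reachable under http://<gateway>/admin/...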
  
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Rugged data distribution on OSDs

2013-09-16 Thread Mihály Árva-Tóth
Hello,

I made some tests on a 3-node Ceph cluster: uploading 3 million 50 KiB objects to
a single container. Speed and performance were okay, but the data is not
distributed evenly. Every node has two 4 TB HDDs and one 2 TB HDD.

osd.0 41 GB (4 TB)
osd.1 47 GB (4 TB)
osd.3 16 GB (2 TB)
osd.4 40 GB (4 TB)
osd.5 49 GB (4 TB)
osd.6 17 GB (2 TB)
osd.7 48 GB (4 TB)
osd.8 42 GB (4 TB)
osd.9 18 GB (2 TB)

All the 4 TB and 2 TB HDDs are from the same vendor and of the same type (WD RE SATA).

I monitored IOPS with Zabbix during the test; you can see the graph here:
http://ctrlv.in/237368
(sda and sdb are system HDDs.) The graph is the same on all three nodes.

Any idea what's wrong, or what should I be looking at?

I'm using ceph-0.67.3 on Ubuntu 12.04.3 x86_64.
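
(A rough checklist that is usually worth going through for uneven distribution; the OSD
id and weight below are only examples. CRUSH places data according to the OSD weights,
so the 2 TB disks should carry roughly half the weight of the 4 TB ones, and with 50 KiB
objects a low pg_num on the bucket pool can also make the placement lumpy.)

  $ ceph osd tree                        # check that 2 TB OSDs have ~half the weight of 4 TB OSDs
  $ ceph osd dump | grep ^pool           # check pg_num of the pool the container lives in
  $ ceph osd crush reweight osd.3 1.82   # example only: fix a weight that doesn't match capacity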

Thank you,
Mihaly
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Sparse files copied to CephFS not sparse

2013-09-16 Thread Jens-Christian Fischer
Hi all

as part of moving our OpenStack VM instance store from dedicated disks on the 
physical hosts to a CephFS backed by an SSD pool, we noticed that the files 
created on CephFS aren't sparse, even though the original files were.

This is on 
root@s2:~# ls -lhs /var/lib/nova/instances/_base
total 63G
750M -rw-r--r-- 1 nova nova 2.0G Jul 10 21:40 
1a11de23fe75a210b4da631366513cb7c22ef311
750M -rw-r--r-- 1 libvirt-qemu kvm   10G Jul 10 21:40 
1a11de23fe75a210b4da631366513cb7c22ef311_10
…

vs

root@s2:~# ls -lhs 
/mnt/instances/instances/_base/1a11de23fe75a210b4da631366513cb7c22ef311*
1.2G -rw-r--r-- 1 nova nova 1.2G Sep  5 16:56 
/mnt/instances/instances/_base/1a11de23fe75a210b4da631366513cb7c22ef311
 10G -rw-r--r-- 1 libvirt-qemu kvm   10G Jul 10 21:40 
/mnt/instances/instances/_base/1a11de23fe75a210b4da631366513cb7c22ef311_10

We have used different ways of copying the files (tar and rsync) and specified 
the sparse options:

# rsync -rtvupogS -h  /var/lib/nova/instances/ /mnt/instances/instances
or
# (cd /var/lib/nova/instances ; tar -Svcf - .)|(cd /mnt/instances/instances ; 
tar Sxpf -)

The OSDs we use for this pool are backed by XFS (which has a problem with 
sparse files, unless one specifies allocation block size options in the mounts) 
http://serverfault.com/questions/406069/why-are-my-xfs-filesystems-suddenly-consuming-more-space-and-full-of-sparse-file,
 
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=055388a3188f56676c21e92962fc366ac8b5cb72.
 We have mounted the XFS partitions for the OSDs with this option, but I assume 
that this shouldn't impact the way CephFS handles sparse files.

I seem to remember that the copying of sparse files worked a couple of months 
ago (ceph-fs kernel 3.5 on btrfs OSDs), but now we use kernel 3.10 and, 
recently, ceph-fuse to mount the CephFS.

Are we doing something wrong, or is this not supported by CephFS?

cheers
jc





-- 
SWITCH
Jens-Christian Fischer, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
phone +41 44 268 15 15, direct +41 44 268 15 71
jens-christian.fisc...@switch.ch
http://www.switch.ch

http://www.switch.ch/socialmedia

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] rbd stuck creating a block device

2013-09-16 Thread Nico Massenberg
Hi there,

I have successfully setup a ceph cluster with a healthy status.
When trying to create an rbd block device image, I get stuck with an error and 
have to Ctrl+C:


ceph@vl0181:~/konkluster$ rbd create imagefoo --size 5120 --pool kontrastpool
2013-09-16 10:59:06.838235 7f3bcb9eb700  0 -- 192.168.111.109:0/1013698 >> 
192.168.111.10:6806/3750 pipe(0x1fdfb00 sd=4 :0 s=1 pgs=0 cs=0 l=1 
c=0x1fdfd60).fault


Any ideas anyone?
Thanks, Nico
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] rbd stuck creating a block device

2013-09-16 Thread Wido den Hollander

On 09/16/2013 11:18 AM, Nico Massenberg wrote:

Hi there,

I have successfully setup a ceph cluster with a healthy status.
When trying to create a rbd block device image I am stuck with an error which I 
have to ctrl+c:


ceph@vl0181:~/konkluster$ rbd create imagefoo --size 5120 --pool kontrastpool
2013-09-16 10:59:06.838235 7f3bcb9eb700  0 -- 192.168.111.109:0/1013698  
192.168.111.10:6806/3750 pipe(0x1fdfb00 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x1fdfd60).fault


Any ideas anyone?


Is the Ceph cluster healthy?

What does 'ceph -s' say?

If the cluster is healthy it seems like this client can't contact the 
Ceph cluster.



Thanks, Nico
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] rbd stuck creating a block device

2013-09-16 Thread Nico Massenberg
On 16.09.2013 at 11:25, Wido den Hollander w...@42on.com wrote:

 On 09/16/2013 11:18 AM, Nico Massenberg wrote:
 Hi there,
 
 I have successfully setup a ceph cluster with a healthy status.
 When trying to create a rbd block device image I am stuck with an error 
 which I have to ctrl+c:
 
 
 ceph@vl0181:~/konkluster$ rbd create imagefoo --size 5120 --pool kontrastpool
 2013-09-16 10:59:06.838235 7f3bcb9eb700  0 -- 192.168.111.109:0/1013698  
 192.168.111.10:6806/3750 pipe(0x1fdfb00 sd=4 :0 s=1 pgs=0 cs=0 l=1 
 c=0x1fdfd60).fault
 
 
 Any ideas anyone?
 
 Is the Ceph cluster healthy?

Yes it is.

 
 What does 'ceph -s' say?

ceph@vl0181:~/konkluster$ ceph -s
  cluster 3dad736b-a9fc-42bf-a2fb-399cb8cbb880
   health HEALTH_OK
   monmap e3: 3 mons at 
{ceph01=192.168.111.10:6789/0,ceph02=192.168.111.11:6789/0,ceph03=192.168.111.12:6789/0},
 election epoch 52, quorum 0,1,2 ceph01,ceph02,ceph03
   osdmap e230: 12 osds: 12 up, 12 in
pgmap v3963: 292 pgs: 292 active+clean; 0 bytes data, 450 MB used, 6847 GB 
/ 6847 GB avail
   mdsmap e1: 0/0/1 up

 
 If the cluster is healthy it seems like this client can't contact the Ceph 
 cluster.

I have no problems contacting any node/monitor from the admin machine via ping 
or telnet.
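
One hedged observation: the fault in the log is against 192.168.111.10:6806, which is an
OSD port rather than the monitor port 6789, so reaching the monitors alone is not enough;
the client also has to reach the OSD ports (6800 and up) on every OSD host.

  $ ceph osd dump | grep "^osd\."       # shows the address:port each OSD listens on
  $ telnet 192.168.111.10 6806          # or: nc -zv 192.168.111.10 6806
  # if these ports are filtered by a firewall, rbd commands hang exactly like this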

 
 Thanks, Nico
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 
 
 
 -- 
 Wido den Hollander
 42on B.V.
 
 Phone: +31 (0)20 700 9902
 Skype: contact42on
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] CORS not working

2013-09-16 Thread Pawel Stefanski
hello all!

Once again sorry for delay:
all dumps are in
http://pastebin.com/dBnEsWpW

BTW, I saw a second commit to dumpling (http://tracker.ceph.com/issues/6078);
has anything more changed?
best regards!
-- 
pawel


On Fri, Sep 6, 2013 at 5:11 PM, Yehuda Sadeh yeh...@inktank.com wrote:

 Can you provide a log that includes the bucket creation, CORS settings
 and the OPTIONS call? It'd be best if you could do it with also 'debug
 ms = 1'.

 Thanks,
 Yehuda

 On Fri, Sep 6, 2013 at 7:54 AM, Paweł Stefański pejo...@gmail.com wrote:
  Sorry for delay,
 
  static3 bucket was created on 0.56 afair, I've tested the same operation
  with fresh bucket created now on dumpling, and the problem still occurs.
 
  regards!
  --
  pawel
 
 
  On 04.09.2013 20:15, Yehuda Sadeh wrote:
 
  Is static3 a bucket that you created before the upgrade? Can you test
  it with newly created buckets? Might be that you're hitting some other
  issue.
 
  Thanks,
  Yehuda
 
  On Tue, Sep 3, 2013 at 11:19 PM, Pawel Stefanski pejo...@gmail.com
  wrote:
 
  hello!
 
  yes, dns name is configured and working perfectly, the bucket (in this
  example static3) is found actually, but RGW can't read CORS
 configuration
  due some reason.
 
  2013-09-04 08:07:46.082740 7ff4bf7ee700  2 req 10:0.000275:s3:OPTIONS
  /::getting op
  2013-09-04 08:07:46.082745 7ff4bf7ee700  2 req 10:0.000280:s3:OPTIONS
  /:options_cors:authorizing
  2013-09-04 08:07:46.082753 7ff4bf7ee700  2 req 10:0.000287:s3:OPTIONS
  /:options_cors:reading permissions
  2013-09-04 08:07:46.082790 7ff4bf7ee700 20 get_obj_state:
  rctx=0x7ff4f8003400 obj=.rgw:static3 state=0x7ff4f8005968
  s-prefetch_data=0
  2013-09-04 08:07:46.082810 7ff4bf7ee700 10 moving .rgw+static3 to cache
  LRU
  end
  2013-09-04 08:07:46.082819 7ff4bf7ee700 10 cache get:
 name=.rgw+static3 :
  hit
  2013-09-04 08:07:46.082840 7ff4bf7ee700 20 get_obj_state: s-obj_tag
 was
  set
  empty
  2013-09-04 08:07:46.082845 7ff4bf7ee700 20 Read xattr: user.rgw.acl
  2013-09-04 08:07:46.082847 7ff4bf7ee700 20 Read xattr: user.rgw.cors
  2013-09-04 08:07:46.082848 7ff4bf7ee700 20 Read xattr: user.rgw.idtag
  2013-09-04 08:07:46.082849 7ff4bf7ee700 20 Read xattr:
 user.rgw.manifest
  2013-09-04 08:07:46.082855 7ff4bf7ee700 10 moving .rgw+static3 to cache
  LRU
  end
  2013-09-04 08:07:46.082857 7ff4bf7ee700 10 cache get:
 name=.rgw+static3 :
  hit
  2013-09-04 08:07:46.082898 7ff4bf7ee700 20 rgw_get_bucket_info: old
  bucket
  info, bucket=static3(@.rgw.buckets2[99137.2]) owner pejotes
  2013-09-04 08:07:46.082921 7ff4bf7ee700 15 Read
  AccessControlPolicyAccessControlPolicy
 
  xmlns=http://s3.amazonaws.com/doc/2006-03-01/
 OwnerIDpejotes/IDDisplayNameofe/DisplayName/OwnerAccessControlListGrantGrantee
  xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance;
 
  xsi:type=GroupURIhttp://acs.amazonaws.com/groups/global/AllUsers
 /URI/GranteePermissionFULL_CONTROL/Permission/GrantGrantGrantee
  xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance;
 
 
 xsi:type=CanonicalUserIDpejotes/IDDisplayNameofe/DisplayName/GranteePermissionFULL_CONTROL/Permission/Grant/AccessControlList/AccessControlPolicy
  2013-09-04 08:07:46.082943 7ff4bf7ee700 15 Read
  AccessControlPolicyAccessControlPolicy
 
  xmlns=http://s3.amazonaws.com/doc/2006-03-01/
 OwnerIDpejotes/IDDisplayNameofe/DisplayName/OwnerAccessControlListGrantGrantee
  xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance;
 
  xsi:type=GroupURIhttp://acs.amazonaws.com/groups/global/AllUsers
 /URI/GranteePermissionFULL_CONTROL/Permission/GrantGrantGrantee
  xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance;
 
 
 xsi:type=CanonicalUserIDpejotes/IDDisplayNameofe/DisplayName/GranteePermissionFULL_CONTROL/Permission/Grant/AccessControlList/AccessControlPolicy
  2013-09-04 08:07:46.082951 7ff4bf7ee700  2 req 10:0.000486:s3:OPTIONS
  /:options_cors:verifying op mask
  2013-09-04 08:07:46.082955 7ff4bf7ee700 20 required_mask= 1
  user.op_mask=7
  2013-09-04 08:07:46.082957 7ff4bf7ee700  2 req 10:0.000492:s3:OPTIONS
  /:options_cors:verifying op permissions
  2013-09-04 08:07:46.082960 7ff4bf7ee700  2 req 10:0.000495:s3:OPTIONS
  /:options_cors:verifying op params
  2013-09-04 08:07:46.082963 7ff4bf7ee700  2 req 10:0.000498:s3:OPTIONS
  /:options_cors:executing
  2013-09-04 08:07:46.082966 7ff4bf7ee700  2 No CORS configuration set
 yet
  for
  this bucket
  2013-09-04 08:07:46.083105 7ff4bf7ee700  2 req 10:0.000640:s3:OPTIONS
  /:options_cors:http status=403
  2013-09-04 08:07:46.083548 7ff4bf7ee700  1 == req done req=0xbcd910
  http_status=403 ==
 
  best regards!
  --
  pawel
 
 
  On Tue, Sep 3, 2013 at 5:17 PM, Yehuda Sadeh yeh...@inktank.com
 wrote:
 
  On Tue, Sep 3, 2013 at 3:40 AM, Pawel Stefanski pejo...@gmail.com
  wrote:
 
  hello!
 
  I've tried with wip-6078 and git dumpling builds and got the same
 error
  during OPTIONS request.
 
  curl -v -X OPTIONS -H 'Access-Control-Request-Method: PUT' -H
 Origin:
  http://X.pl; http://static3.X.pl/
 
  OPTIONS / HTTP/1.1
  

Re: [ceph-users] problem with ceph-deploy hanging

2013-09-16 Thread Alfredo Deza
On Sun, Sep 15, 2013 at 3:07 PM, Gruher, Joseph R
joseph.r.gru...@intel.com wrote:

From: Gruher, Joseph R
From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
On Fri, Sep 13, 2013 at 5:06 PM, Gruher, Joseph R
joseph.r.gru...@intel.com wrote:

 root@cephtest01:~# ssh cephtest02 wget -q -O-
 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' |
 apt-key add -

 gpg: no valid OpenPGP data found.


This is clearly part of the problem. Can you try getting to this with
something other than wget (e.g. curl) ?

OK, I am seeing the problem here after turning off quiet mode on wget.  You
can see in the wget output that part of the URL is lost when executing the
command over SSH.  However, I'm still unsure how to fix this; I've tried a
number of ways of enclosing the command and it keeps happening.

SSH command leads to incomplete URL and returns web page (note URL
truncated at ceph.git):

root@cephtest01:~# ssh cephtest02 sudo wget -O-
'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
--2013-09-13 16:37:06--  https://ceph.com/git/?p=ceph.git

When run locally complete URL returns PGP key:

root@cephtest02:/# wget -O-
'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
--2013-09-13 16:37:30--
https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

 I was able to show that the wget command does succeed if properly formatted 
 (you have to double-enclose it in quotes, as SSH strips the outer set), as does 
 the apt-key add if prefaced with sudo.
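
For illustration, with the hosts from this thread (only the quoting changes): the outer
double quotes survive the local shell, and the inner single quotes then survive on
cephtest02, so the semicolons in the URL are no longer treated as command separators.

  $ ssh cephtest02 "sudo wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -"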

ceph-deploy inserts sudo for all commands on the remote host


 So, I'm still stuck on the problem of ceph deploy hanging at the point shown 
 below.  Any tips on how to debug further?  Has anyone else experienced a 
 similar problem?  Is it possible to enable any additional output from 
 ceph-deploy?

ceph-deploy is currently set to output logging at the DEBUG level, I
don't think there is anything more (regarding output) that you can
change for more verbosity here.

 Is there any documentation on how to deploy without using
ceph-deploy install?  Thanks!

We are about to make a release with a couple of flags to avoid
changing the source repos, this would allow you to have your own
repositories
set before running ceph-deploy and the tool would just install ceph
from that (without the need to grab keys)

But certainly, I am worried about why it is hanging for you here; this
is a problem and I really want to make sure it is either fixed or
confirmed to be some kind of misconfiguration.

I believe that the problem is coming from using `sudo` + `root`. This
is a problem that is certainly fixed in the upcoming version.


Can you try with a different user (for now) ?


 Here's where it hangs:

 root@cephtest01:~# ceph-deploy install cephtest02 cephtest03 cephtest04
 [ceph_deploy.install][DEBUG ] Installing stable version dumpling on cluster 
 ceph hosts cephtest02 cephtest03 cephtest04
 [ceph_deploy.install][DEBUG ] Detecting platform for host cephtest02 ...
 [ceph_deploy.install][INFO  ] Distro info: Ubuntu 12.04 precise
 [cephtest02][INFO  ] installing ceph on cephtest02
 [cephtest02][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive 
 apt-get -q install --assume-yes ca-certificates
 [cephtest02][INFO  ] Reading package lists...
 [cephtest02][INFO  ] Building dependency tree...
 [cephtest02][INFO  ] Reading state information...
 [cephtest02][INFO  ] ca-certificates is already the newest version.
 [cephtest02][INFO  ] 0 upgraded, 0 newly installed, 0 to remove and 4 not 
 upgraded.
 [cephtest02][INFO  ] Running command: wget -q -O- 
 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | apt-key 
 add -

 Here's the command it seems to be hanging on succeeding when manually run on 
 the command line:

 root@cephtest01:~# ssh cephtest02 wget -q -O- 
 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo 
 apt-key add -
 OK
 root@cephtest01:~#

 Thanks,
 Joe
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy and ceph-disk cannot prepare disks

2013-09-16 Thread Alfredo Deza
On Mon, Sep 16, 2013 at 12:50 AM, Andy Schuette apsbi...@gmail.com wrote:
 First-time list poster here, and I'm pretty stumped on this one. My
 problem hasn't really been discussed on the list before, so I'm hoping
 that I can get this figured out since it's stopping me from learning
 more about ceph. I've tried this with the journal on the same disk and
 on a separate SSD, both with the same error stopping me.

 I'm using ceph-deploy 1.2.3, and ceph is version 0.67.2 on the osd
 node. OS is Ubuntu 13.04, kernel is 3.8.0-29, architecture is x86_64.

 Here is my log from ceph-disk prepare:

 ceph-disk prepare /dev/sdd
 INFO:ceph-disk:Will colocate journal with data on /dev/sdd
 Information: Moved requested sector from 34 to 2048 in
 order to align on 2048-sector boundaries.
 The operation has completed successfully.
 Information: Moved requested sector from 2097153 to 2099200 in
 order to align on 2048-sector boundaries.
 The operation has completed successfully.
 meta-data=/dev/sdd1  isize=2048   agcount=4, agsize=122029061 blks
  =   sectsz=512   attr=2, projid32bit=0
 data =   bsize=4096   blocks=488116241, imaxpct=5
  =   sunit=0  swidth=0 blks
 naming   =version 2  bsize=4096   ascii-ci=0
 log  =internal log   bsize=4096   blocks=238338, version=2
  =   sectsz=512   sunit=0 blks, lazy-count=1
 realtime =none   extsz=4096   blocks=0, rtextents=0
 umount: /var/lib/ceph/tmp/mnt.X21v8V: device is busy.
 (In some cases useful info about processes that use
  the device is found by lsof(8) or fuser(1))
 ceph-disk: Unmounting filesystem failed: Command '['/bin/umount',
 '--', '/var/lib/ceph/tmp/mnt.X21v8V']' returned non-zero exit status 1

 And the log from ceph-deploy is the same (I truncated since it's the
 same for all 3 in the following):

 2013-09-02 11:42:47,658 [ceph_deploy.osd][DEBUG ] Preparing cluster
 ceph disks ACU1:/dev/sdd:/dev/sdc1 ACU1:/dev/sde:/dev/sdc2
 ACU1:/dev/sdf:/dev/sdc3
 2013-09-02 11:42:49,855 [ceph_deploy.osd][DEBUG ] Deploying osd to ACU1
 2013-09-02 11:42:49,966 [ceph_deploy.osd][DEBUG ] Host ACU1 is now
 ready for osd use.
 2013-09-02 11:42:49,967 [ceph_deploy.osd][DEBUG ] Preparing host ACU1
 disk /dev/sdd journal /dev/sdc1 activate False
 2013-09-02 11:43:03,489 [ceph_deploy.osd][ERROR ] ceph-disk-prepare
 --cluster ceph -- /dev/sdd /dev/sdc1 returned 1
 Information: Moved requested sector from 34 to 2048 in
 order to align on 2048-sector boundaries.
 The operation has completed successfully.
 meta-data=/dev/sdd1  isize=2048   agcount=4, agsize=122094597 blks
  =   sectsz=512   attr=2, projid32bit=0
 data =   bsize=4096   blocks=488378385, imaxpct=5
  =   sunit=0  swidth=0 blks
 naming   =version 2  bsize=4096   ascii-ci=0
 log  =internal log   bsize=4096   blocks=238466, version=2
  =   sectsz=512   sunit=0 blks, lazy-count=1
 realtime =none   extsz=4096   blocks=0, rtextents=0

 WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the
 same device as the osd data
 umount: /var/lib/ceph/tmp/mnt.68dFXq: device is busy.
 (In some cases useful info about processes that use
  the device is found by lsof(8) or fuser(1))
 ceph-disk: Unmounting filesystem failed: Command '['/bin/umount',
 '--', '/var/lib/ceph/tmp/mnt.68dFXq']' returned non-zero exit status 1

 When I go to the host machine I can umount all day with no indication
 of anything holding up the process, and lsof isn't yielding anything
 useful for me. Any pointers to what is going wrong would be
 appreciated.

This line from your log output seems like a problem:

2013-09-02 11:43:03,489 [ceph_deploy.osd][ERROR ] ceph-disk-prepare
--cluster ceph -- /dev/sdd /dev/sdc1 returned 1

Have you tried that on the remote host and checked the output then?
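
A hedged way to catch whatever is holding the temporary mount is to watch it while the
command runs (the mnt.XXXX path changes on every attempt, so it has to be checked at the
moment of failure rather than afterwards):

  # terminal 1, on the OSD host:
  $ ceph-disk-prepare --cluster ceph -- /dev/sdd /dev/sdc1
  # terminal 2, while the above is running:
  $ mount | grep /var/lib/ceph/tmp
  $ fuser -vm /var/lib/ceph/tmp/mnt.*      # or: lsof +D /var/lib/ceph/tmp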

 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Deploy a Ceph cluster to play around with

2013-09-16 Thread Guang
Hello ceph-users, ceph-devel,
Nice to meet you in the community!
Today I tried to deploy a Ceph cluster to play around with the API, and during 
the deployment I have a couple of questions which may need your help:
  1) How many hosts do I need if I want to deploy a cluster with RadosGW (so 
that I can try the S3 API)? Is it 3 OSD + 1 Mon + 1 GW = 5 hosts at 
minimum?

  2) I have a list of hardware; however, my host only has 1 disk with two 
partitions, one for boot and another for LVM members. Is it possible to deploy 
an OSD on such hardware (e.g. make a partition with ext4)? Or will I need 
another disk to do so?

-bash-4.1$ ceph-deploy disk list myserver.com
[ceph_deploy.osd][INFO  ] Distro info: RedHatEnterpriseServer 6.3 Santiago
[ceph_deploy.osd][DEBUG ] Listing disks on myserver.com...
[repl101.mobstor.gq1.yahoo.com][INFO  ] Running command: ceph-disk list
[repl101.mobstor.gq1.yahoo.com][INFO  ] /dev/sda :
[repl101.mobstor.gq1.yahoo.com][INFO  ]  /dev/sda1 other, ext4, mounted on /boot
[repl101.mobstor.gq1.yahoo.com][INFO  ]  /dev/sda2 other, LVM2_member

Thanks,
Guang
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] mds stuck in rejoin

2013-09-16 Thread Serge Slipchenko
Hi,

Digging around the web I found similar symptoms:
http://tracker.ceph.com/issues/6087
I found that my ceph-mds hadn't been updated and was still 0.67.2, which
doesn't have the MDS patch.
After updating to 0.67.3 the MDS stabilized.
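
For anyone hitting the same thing, a rough way to confirm every daemon is really on the
intended point release (package names as on Debian/Ubuntu; adjust for other distros):

  $ ceph-mds --version                     # version of the binary actually installed on the MDS host
  $ dpkg -l | grep ceph                    # installed ceph package versions
  $ sudo apt-get update && sudo apt-get install ceph ceph-mds
  $ sudo restart ceph-mds-all              # Ubuntu upstart job; restart the MDS after the upgrade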

I am terribly sorry, but I hope that my bad experience will help someone.

On Mon, Sep 16, 2013 at 11:25 AM, Serge Slipchenko 
serge.slipche...@gmail.com wrote:

 Hi Gregory,

 On Sun, Sep 15, 2013 at 10:59 PM, Gregory Farnum g...@inktank.com wrote:

 What's the output of ceph -s, and have you tried running the MDS
 with any logging enabled that we can check out?


 See  *sudo ceph mds tell 0 injectargs '--debug_ms 20 --debug_mds 20' *and
 *sudo ceph mds tell 0 injectargs '--debug_ms 1 --debug_mds 1'*

 *sudo ceph -s*
cluster 920ff156-998f-44a9-a0c6-5bc265d4ac2e
health HEALTH_WARN mds cluster is degraded
monmap e7: 3 mons at {s01=
 144.76.13.102:6789/0,s02=144.76.13.103:6789/0,s03=144.76.13.105:6789/0},
 election epoch 4680, quorum 0,1,2 s01,s02,s03
osdmap e7278: 16 osds: 16 up, 16 in
 pgmap v1955548: 704 pgs: 704 active+clean; 207 GB data, 426 GB used,
 38463 GB / 40971 GB avail; 338KB/s rd, 338op/s
mdsmap e1307: 1/1/1 up {0=m02=up:rejoin}, 1 up:standby

 *sudo ceph mds tell 0 injectargs '--debug_ms 20 --debug_mds 20'*

 2013-09-16 10:15:36.724250 7f455864d700 20 -- 5.9.122.115:6806/29741 
 5.9.143.75:6811/25411 pipe(0x19ac500 sd=60 :59383 s=2 pgs=345 cs=1 l=1
 c=0x1939b00).writer sleeping
 2013-09-16 10:15:36.724257 7f455d066700 10 mds.0.cache
 _open_ino_backtrace_fetched ino 1003e4d errno 0
 2013-09-16 10:15:36.724264 7f455d066700 10 mds.0.cache  old object in pool
 1, retrying pool -1
 2013-09-16 10:15:36.724289 7f455d066700  1 -- 5.9.122.115:6806/29741 --
 144.76.13.103:6789/0 -- mon_get_version(what=osdmap handle=20931738) v1
 -- ?+0 0x30e4540 con 0x1875c60
 2013-09-16 10:15:36.724296 7f455d066700 20 -- 
 5.9.122.115:6806/29741submit_message mon_get_version(what=osdmap 
 handle=20931738) v1 remote,
 144.76.13.103:6789/0, have pipe.
 2013-09-16 10:15:36.724313 7f455d066700 10 -- 
 5.9.122.115:6806/29741dispatch_throttle_release 156 to dispatch throttler 
 156/104857600
 2013-09-16 10:15:36.724322 7f455d066700 20 -- 5.9.122.115:6806/29741 done
 calling dispatch on 0x1898000
 2013-09-16 10:15:36.724318 7f4559e5e700 10 -- 5.9.122.115:6806/29741 
 144.76.13.103:6789/0 pipe(0x18d5780 sd=42 :41897 s=2 pgs=1261 cs=1 l=1
 c=0x1875c60).writer: state = open policy.server=0
 2013-09-16 10:15:36.724337 7f4559e5e700 20 -- 5.9.122.115:6806/29741 
 144.76.13.103:6789/0 pipe(0x18d5780 sd=42 :41897 s=2 pgs=1261 cs=1 l=1
 c=0x1875c60).writer encoding 20966720 features 34359738367 0x30e4540
 mon_get_version(what=osdmap handle=20931738) v1
 2013-09-16 10:15:36.724356 7f4559e5e700 20 -- 5.9.122.115:6806/29741 
 144.76.13.103:6789/0 pipe(0x18d5780 sd=42 :41897 s=2 pgs=1261 cs=1 l=1
 c=0x1875c60).writer no session security
 2013-09-16 10:15:36.724365 7f4559e5e700 20 -- 5.9.122.115:6806/29741 
 144.76.13.103:6789/0 pipe(0x18d5780 sd=42 :41897 s=2 pgs=1261 cs=1 l=1
 c=0x1875c60).writer sending 20966720 0x30e4540
 2013-09-16 10:15:36.724388 7f4559e5e700 10 -- 5.9.122.115:6806/29741 
 144.76.13.103:6789/0 pipe(0x18d5780 sd=42 :41897 s=2 pgs=1261 cs=1 l=1
 c=0x1875c60).writer: state = open policy.server=0
 2013-09-16 10:15:36.724396 7f4559e5e700 20 -- 5.9.122.115:6806/29741 
 144.76.13.103:6789/0 pipe(0x18d5780 sd=42 :41897 s=2 pgs=1261 cs=1 l=1
 c=0x1875c60).writer sleeping
 2013-09-16 10:15:36.725105 7f455af61700 20 -- 5.9.122.115:6806/29741 
 144.76.13.103:6789/0 pipe(0x18d5780 sd=42 :41897 s=2 pgs=1261 cs=1 l=1
 c=0x1875c60).reader got ACK
 2013-09-16 10:15:36.725124 7f455af61700 15 -- 5.9.122.115:6806/29741 
 144.76.13.103:6789/0 pipe(0x18d5780 sd=42 :41897 s=2 pgs=1261 cs=1 l=1
 c=0x1875c60).reader got ack seq 20966720
 2013-09-16 10:15:36.725133 7f455af61700 20 -- 5.9.122.115:6806/29741 
 144.76.13.103:6789/0 pipe(0x18d5780 sd=42 :41897 s=2 pgs=1261 cs=1 l=1
 c=0x1875c60).reader reading tag...
 2013-09-16 10:15:36.725143 7f455af61700 20 -- 5.9.122.115:6806/29741 
 144.76.13.103:6789/0 pipe(0x18d5780 sd=42 :41897 s=2 pgs=1261 cs=1 l=1
 c=0x1875c60).reader got MSG
 2013-09-16 10:15:36.725152 7f455af61700 20 -- 5.9.122.115:6806/29741 
 144.76.13.103:6789/0 pipe(0x18d5780 sd=42 :41897 s=2 pgs=1261 cs=1 l=1
 c=0x1875c60).reader got envelope type=20 src mon.1 front=24 data=0 off 0
 2013-09-16 10:15:36.725162 7f455af61700 10 -- 5.9.122.115:6806/29741 
 144.76.13.103:6789/0 pipe(0x18d5780 sd=42 :41897 s=2 pgs=1261 cs=1 l=1
 c=0x1875c60).reader wants 24 from dispatch throttler 0/104857600
 2013-09-16 10:15:36.725172 7f455af61700 20 -- 5.9.122.115:6806/29741 
 144.76.13.103:6789/0 pipe(0x18d5780 sd=42 :41897 s=2 pgs=1261 cs=1 l=1
 c=0x1875c60).reader got front 24
 2013-09-16 10:15:36.725180 7f455af61700 10 -- 5.9.122.115:6806/29741 
 144.76.13.103:6789/0 pipe(0x18d5780 sd=42 :41897 s=2 pgs=1261 cs=1 l=1
 c=0x1875c60).aborted = 0
 2013-09-16 10:15:36.725187 7f455af61700 20 -- 

Re: [ceph-users] Sparse files copied to CephFS not sparse

2013-09-16 Thread Yan, Zheng
For cephfs, the size reported by 'ls -s' is the same as file size. see
http://ceph.com/docs/next/dev/differences-from-posix/
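
In other words, the block count reported by ls -s (and du) tracks the apparent file size
on CephFS, so it can't show whether a file is stored sparsely. A rough way to check the
real space consumption is to look at the pool usage on the cluster side:

  $ ceph df       # per-pool USED, to compare against the sum of apparent file sizes
  $ rados df      # similar per-pool view, including object counts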

Regards
Yan, Zheng


On Mon, Sep 16, 2013 at 5:12 PM, Jens-Christian Fischer
jens-christian.fisc...@switch.ch wrote:

 Hi all

 as part of moving our OpenStack VM instance store from dedicated disks on the 
 physical hosts to a CephFS backed by an SSD pool, we noticed that the files 
 created on CephFS aren't sparse, even though the original files were.

 This is on
 root@s2:~# ls -lhs /var/lib/nova/instances/_base
 total 63G
 750M -rw-r--r-- 1 nova nova 2.0G Jul 10 21:40 
 1a11de23fe75a210b4da631366513cb7c22ef311
 750M -rw-r--r-- 1 libvirt-qemu kvm   10G Jul 10 21:40 
 1a11de23fe75a210b4da631366513cb7c22ef311_10
 …

 vs

 root@s2:~# ls -lhs 
 /mnt/instances/instances/_base/1a11de23fe75a210b4da631366513cb7c22ef311*
 1.2G -rw-r--r-- 1 nova nova 1.2G Sep  5 16:56 
 /mnt/instances/instances/_base/1a11de23fe75a210b4da631366513cb7c22ef311
  10G -rw-r--r-- 1 libvirt-qemu kvm   10G Jul 10 21:40 
 /mnt/instances/instances/_base/1a11de23fe75a210b4da631366513cb7c22ef311_10

 We have used different ways of copying the files (tar and rsync) and 
 specified the sparse options:

 # rsync -rtvupogS -h  /var/lib/nova/instances/ /mnt/instances/instances
 or
 # (cd /var/lib/nova/instances ; tar -Svcf - .)|(cd /mnt/instances/instances ; 
 tar Sxpf -)

 The OSDs we use for this pool are backed by XFS (which has a problem with 
 sparse files, unless one specifies allocation block size options in the 
 mounts) 
 http://serverfault.com/questions/406069/why-are-my-xfs-filesystems-suddenly-consuming-more-space-and-full-of-sparse-file,
  
 https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=055388a3188f56676c21e92962fc366ac8b5cb72.
  We have mounted the XFS partitions for the OSDs with this option, but I 
 assume that this shouldn't impact the way CephFS handles sparse files.

 I seem to remember that the copying of sparse files worked a couple of months 
 ago (ceph-fs kernel 3.5 on btrfs OSDs), but now we used Kernel 3.10 and 
 recently ceph-fuse to mount the CephFS.

 Are we doing something wrong, or is this not supported by CephFS?

 cheers
 jc





 --
 SWITCH
 Jens-Christian Fischer, Peta Solutions
 Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
 phone +41 44 268 15 15, direct +41 44 268 15 71
 jens-christian.fisc...@switch.ch
 http://www.switch.ch

 http://www.switch.ch/socialmedia


 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] errors after kernel-upgrade -- Help needed

2013-09-16 Thread Markus Goldberg

Hi,
I must ask once again: is there really no help for this problem?
The errors still remain. I can't mount the cluster anymore. All 
my data is gone.

The error-messages are still changing every few seconds.

What can i do ?

Please help,
  Markus

On 11.09.2013 08:39, Markus Goldberg wrote:

Does noone have an idea ?
I can't mount the cluster anymore.

Thank you,
  Markus

On 10.09.2013 09:43, Markus Goldberg wrote:

Hi,
i made a 'stop ceph-all' on my ceph-admin-host and then a 
kernel-upgrade from 3.9 to 3.11 on all of my 3 nodes.

Ubuntu 13.04, ceph 0,68
The kernel-upgrade required a reboot.
Now after rebooting i get the following errors:

root@bd-a:~# ceph -s
  cluster e0dbf70d-af59-42a5-b834-7ad739a7f89b
   health HEALTH_WARN 133 pgs peering; 272 pgs stale; 265 pgs stuck unclean; 2 requests are blocked > 32 sec; mds cluster is degraded
   monmap e1: 3 mons at {bd-0=xxx.xxx.xxx.20:6789/0,bd-1=xxx.xxx.xxx.21:6789/0,bd-2=xxx.xxx.xxx.22:6789/0}, election epoch 782, quorum 0,1,2 bd-0,bd-1,bd-2
   mdsmap e451467: 1/1/1 up {0=bd-0=up:replay}, 2 up:standby
   osdmap e464358: 3 osds: 3 up, 3 in
    pgmap v1343477: 792 pgs, 9 pools, 15145 MB data, 4986 objects
          30927 MB used, 61372 GB / 61408 GB avail
               387 active+clean
               122 stale+active
               140 stale+active+clean
               133 peering
                10 stale+active+replay

root@bd-a:~# ceph -s
  cluster e0dbf70d-af59-42a5-b834-7ad739a7f89b
   health HEALTH_WARN 6 pgs down; 377 pgs peering; 296 pgs stuck unclean; mds cluster is degraded
   monmap e1: 3 mons at {bd-0=xxx.xxx.xxx.20:6789/0,bd-1=xxx.xxx.xxx.21:6789/0,bd-2=xxx.xxx.xxx.22:6789/0}, election epoch 782, quorum 0,1,2 bd-0,bd-1,bd-2
   mdsmap e451467: 1/1/1 up {0=bd-0=up:replay}, 2 up:standby
   osdmap e464400: 3 osds: 3 up, 3 in
    pgmap v1343586: 792 pgs, 9 pools, 15145 MB data, 4986 objects
          31046 MB used, 61372 GB / 61408 GB avail
               142 active
               270 active+clean
                 3 active+replay
               371 peering
                 6 down+peering

root@bd-a:~# ceph -s
  cluster e0dbf70d-af59-42a5-b834-7ad739a7f89b
   health HEALTH_WARN 257 pgs peering; 359 pgs stuck unclean; 1 requests are blocked > 32 sec; mds cluster is degraded
   monmap e1: 3 mons at {bd-0=xxx.xxx.xxx.20:6789/0,bd-1=xxx.xxx.xxx.21:6789/0,bd-2=xxx.xxx.xxx.22:6789/0}, election epoch 782, quorum 0,1,2 bd-0,bd-1,bd-2
   mdsmap e451467: 1/1/1 up {0=bd-0=up:replay}, 2 up:standby
   osdmap e464403: 3 osds: 3 up, 3 in
    pgmap v1343594: 792 pgs, 9 pools, 15145 MB data, 4986 objects
          31103 MB used, 61372 GB / 61408 GB avail
               373 active
               157 active+clean
                 5 active+replay
               257 peering

root@bd-a:~#

As you can see above, the errors keep changing; perhaps some self-repair 
is running in the background. But it has been like this for 12 hours.

What should i do ?

Thank you,
  Markus
On 09.09.2013 13:52, Yan, Zheng wrote:
The bug has been fixed in the 3.11 kernel by commit ccca4e37b1 (libceph: 
fix truncate size calculation). We don't backport cephfs bug fixes 
to old kernels. Please update the kernel or use ceph-fuse. Regards 
Yan, Zheng

Best regards,
Tobi

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



--
MfG,
   Markus Goldberg


Markus Goldberg | Universität Hildesheim
 | Rechenzentrum
Tel +49 5121 883212 | Marienburger Platz 22, D-31141 Hildesheim, Germany
Fax +49 5121 883205 | emailgoldb...@uni-hildesheim.de



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



--
MfG,
   Markus Goldberg


Markus Goldberg | Universität Hildesheim
 | Rechenzentrum
Tel +49 5121 883212 | Marienburger Platz 22, D-31141 Hildesheim, Germany
Fax +49 5121 883205 | emailgoldb...@uni-hildesheim.de



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



--
MfG,
  Markus Goldberg


Re: [ceph-users] Sparse files copied to CephFS not sparse

2013-09-16 Thread Sage Weil
On Mon, 16 Sep 2013, Yan, Zheng wrote:
 For cephfs, the size reported by 'ls -s' is the same as file size. see
 http://ceph.com/docs/next/dev/differences-from-posix/

...but the files are still in fact stored sparsely.  It's just hard to 
tell.

sage



 
 Regards
 Yan, Zheng
 
 
 On Mon, Sep 16, 2013 at 5:12 PM, Jens-Christian Fischer
 jens-christian.fisc...@switch.ch wrote:
 
  Hi all
 
  as part of moving our OpenStack VM instance store from dedicated disks on 
  the physical hosts to a CephFS backed by an SSD pool, we noticed that the 
  files created on CephFS aren't sparse, even though the original files were.
 
  This is on
  root@s2:~# ls -lhs /var/lib/nova/instances/_base
  total 63G
  750M -rw-r--r-- 1 nova nova 2.0G Jul 10 21:40 
  1a11de23fe75a210b4da631366513cb7c22ef311
  750M -rw-r--r-- 1 libvirt-qemu kvm   10G Jul 10 21:40 
  1a11de23fe75a210b4da631366513cb7c22ef311_10
  ?
 
  vs
 
  root@s2:~# ls -lhs 
  /mnt/instances/instances/_base/1a11de23fe75a210b4da631366513cb7c22ef311*
  1.2G -rw-r--r-- 1 nova nova 1.2G Sep  5 16:56 
  /mnt/instances/instances/_base/1a11de23fe75a210b4da631366513cb7c22ef311
   10G -rw-r--r-- 1 libvirt-qemu kvm   10G Jul 10 21:40 
  /mnt/instances/instances/_base/1a11de23fe75a210b4da631366513cb7c22ef311_10
 
  We have used different ways of copying the files (tar and rsync) and 
  specified the sparse options:
 
  # rsync -rtvupogS -h  /var/lib/nova/instances/ /mnt/instances/instances
  or
  # (cd /var/lib/nova/instances ; tar -Svcf - .)|(cd /mnt/instances/instances 
  ; tar Sxpf -)
 
  The OSDs we use for this pool are backed by XFS (which has a problem with 
  sparse files, unless one specifies allocation block size options in the 
  mounts) 
  http://serverfault.com/questions/406069/why-are-my-xfs-filesystems-suddenly-consuming-more-space-and-full-of-sparse-file,
   
  https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=055388a3188f56676c21e92962fc366ac8b5cb72.
   We have mounted the XFS partitions for the OSDs with this option, but I 
  assume that this shouldn't impact the way CephFS handles sparse files.
 
  I seem to remember that the copying of sparse files worked a couple of 
  months ago (ceph-fs kernel 3.5 on btrfs OSDs), but now we used Kernel 3.10 
  and recently ceph-fuse to mount the CephFS.
 
  Are we doing something wrong, or is this not supported by CephFS?
 
  cheers
  jc
 
 
 
 
 
  --
  SWITCH
  Jens-Christian Fischer, Peta Solutions
  Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
  phone +41 44 268 15 15, direct +41 44 268 15 71
  jens-christian.fisc...@switch.ch
  http://www.switch.ch
 
  http://www.switch.ch/socialmedia
 
 
  ___
  ceph-users mailing list
  ceph-users@lists.ceph.com
  http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 
 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Sparse files copied to CephFS not sparse

2013-09-16 Thread Jens-Christian Fischer

 For cephfs, the size reported by 'ls -s' is the same as file size. see
 http://ceph.com/docs/next/dev/differences-from-posix/

ah! So if I understand correctly, the files are indeed sparse on CephFS?

thanks
/jc
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Sparse files copied to CephFS not sparse

2013-09-16 Thread Jens-Christian Fischer
 
 For cephfs, the size reported by 'ls -s' is the same as file size. see
 http://ceph.com/docs/next/dev/differences-from-posix/
 
 ...but the files are still in fact stored sparsely.  It's just hard to 
 tell.

perfect - thanks!

/jc
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] xfsprogs not found in RHEL

2013-09-16 Thread sriram
ping


On Thu, Sep 12, 2013 at 2:27 PM, sriram sriram@gmail.com wrote:

 Adding to the previous issue: I don't see any of the files specified in 1, 2 and 3
 below. I don't have fastcgi.conf, ceph.conf or s3gw.fcgi. I have followed
 everything up to that point in the wiki. Is there anything missing in the
 wiki, or should I create them?



1.

Turn off fastcgiwrapper in /etc/httpd/conf.d/fastcgi.conf by
commenting out the following line:

#FastCgiWrapper On

2.

Add a fastcgi script.

#!/bin/sh
exec /usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway

3.

Make s3gw.fcgi executable:

chmod +x /var/www/rgw/s3gw.fcgi




 On Thu, Sep 12, 2013 at 1:26 PM, sriram sriram@gmail.com wrote:

 That worked. Thank you. I have followed the steps on this wiki (which
 needs update. I will summarize the changes at the end) -
 http://ceph.com/docs/master/install/rpm/

 I cannot find any file called fastcgi.conf, which is specified as shown
 below in the link. Any ideas?

1.

Turn off fastcgiwrapper in /etc/httpd/conf.d/fastcgi.conf by
commenting out the following line:

#FastCgiWrapper On




 On Wed, Sep 11, 2013 at 4:28 PM, Gagandeep Arora 
 aroragaga...@gmail.comwrote:

 Hello,


 Setup EPEL repo by installing the following package:

 http://fedora.mirror.serversaustralia.com.au/epel/6/i386/epel-release-6-8.noarch.rpm

 do
 # yum install mod_fcgid.x86_64 fcgi.x86_64

 or download the packages from the following links in case you don't t
 want to setup EPEL repo:

 http://dl.fedoraproject.org/pub/epel/6/x86_64/fcgi-2.4.0-10.el6.x86_64.rpm

 http://dl.fedoraproject.org/pub/epel/6/x86_64/mod_fcgid-2.3.7-1.el6.x86_64.rpm


 Regards,
 Gagan


 On Thu, Sep 12, 2013 at 6:05 AM, sriram sriram@gmail.com wrote:

 Thank you. That worked.

 I am trying to install the object storage based on the steps here -

 http://ceph.com/docs/master/install/rpm/#installing-ceph-packages

 Where can I get these RPMs?

 rpm -ivh fcgi-2.4.0-10.el6.x86_64.rpm
 rpm -ivh mod_fastcgi-2.4.6-2.el6.rf.x86_64.rpm



 On Tue, Sep 10, 2013 at 5:05 PM, Gagandeep Arora 
 aroragaga...@gmail.com wrote:

 Hello,

 I think you downloaded source rpm. Download this package from the
 following link


 http://mirror.centos.org/centos/6/updates/x86_64/Packages/xfsprogs-3.1.1-10.el6_4.1.x86_64.rpm


 Regards,
 Gagan



 On Wed, Sep 11, 2013 at 8:54 AM, sriram sriram@gmail.com wrote:

 I installed xfsprogs from
 http://rpm.pbone.net/index.php3/stat/26/dist/74/size/1400502/name/xfsprogs-3.1.1-4.el6.src.rpm
 .
 I then ran sudo yum install ceph and I still get the same error.
 Any ideas?


 On Wed, Aug 28, 2013 at 3:47 PM, sriram sriram@gmail.com wrote:

 Can anyone point me to which xfsprogs RPM to use for RHEL 6


 On Wed, Aug 28, 2013 at 5:46 AM, Sriram sriram@gmail.comwrote:

 Yes I read that but I was not sure if installing from Centos 6
 repository can cause issues.

 On Aug 27, 2013, at 11:46 PM, Stroppa Daniele (strp) 
 s...@zhaw.ch wrote:

  Check this issue: http://tracker.ceph.com/issues/5193

  You might need the RHEL Scalable File System add-on.

  Cheers,
   --
 Daniele Stroppa
 Researcher
 Institute of Information Technology
 Zürich University of Applied Sciences
 http://www.cloudcomp.ch


   From: sriram sriram@gmail.com
 Date: Tue, 27 Aug 2013 22:50:41 -0700
 To: Lincoln Bryant linco...@uchicago.edu
 Cc: ceph-users@lists.ceph.com
 Subject: Re: [ceph-users] xfsprogs not found in RHEL

  Tried

  yum clean all followed by
 yum install ceph

  and the same result.


 On Tue, Aug 27, 2013 at 7:44 PM, Lincoln Bryant 
 linco...@uchicago.edu wrote:

 Hi,

  xfsprogs should be included in the EL6 base.

  Perhaps run yum clean all and try again?

  Cheers,
 Lincoln

On Aug 27, 2013, at 9:16 PM, sriram wrote:

I am trying to install CEPH and I get the following error -

  --- Package ceph.x86_64 0:0.67.2-0.el6 will be installed
 -- Processing Dependency: xfsprogs for package:
 ceph-0.67.2-0.el6.x86_64
 --- Package python-babel.noarch 0:0.9.4-5.1.el6 will be installed
 --- Package python-backports-ssl_match_hostname.noarch
 0:3.2-0.3.a3.el6 will be installed
 --- Package python-docutils.noarch 0:0.6-1.el6 will be installed
 -- Processing Dependency: python-imaging for package:
 python-docutils-0.6-1.el6.noarch
 --- Package python-jinja2.x86_64 0:2.2.1-1.el6 will be installed
 --- Package python-pygments.noarch 0:1.1.1-1.el6 will be installed
 --- Package python-six.noarch 0:1.1.0-2.el6 will be installed
 -- Running transaction check
 --- Package ceph.x86_64 0:0.67.2-0.el6 will be installed
 -- Processing Dependency: xfsprogs for package:
 ceph-0.67.2-0.el6.x86_64
 --- Package python-imaging.x86_64 0:1.1.6-19.el6 will be installed
 -- Finished Dependency Resolution
 Error: Package: ceph-0.67.2-0.el6.x86_64 (ceph)
Requires: xfsprogs


  Machine Info -

  Linux version 2.6.32-131.4.1.el6.x86_64 (
 mockbu...@x86-003.build.bos.redhat.com) (gcc version 4.4.5
 


Re: [ceph-users] CORS not working

2013-09-16 Thread Yehuda Sadeh
On Mon, Sep 16, 2013 at 3:46 AM, Pawel Stefanski pejo...@gmail.com wrote:
 hello all!

 Once again sorry for delay:
 all dumps are in
 http://pastebin.com/dBnEsWpW

 btw. I saw second commit to dumpling http://tracker.ceph.com/issues/6078,
 anything more changed ?
 best regards!

I pushed an extra fix to the CORS issues that you might have been
seeing, so it's worth checking it out.

As for the logs that you provided, you create a bucket named static33
on which you set CORS, but the OPTIONS request goes to a bucket named
static3, so obviously it doesn't work.
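
For reference, a rough way to re-test once the CORS configuration and the preflight hit
the same bucket (bucket DNS name and origin below are placeholders; the Origin and
Access-Control-Request-Method must match one of the configured rules):

  $ curl -v -X OPTIONS \
      -H 'Origin: http://example.pl' \
      -H 'Access-Control-Request-Method: PUT' \
      http://static33.<your-rgw-dns-name>/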


Yehuda
 --
 pawel


 On Fri, Sep 6, 2013 at 5:11 PM, Yehuda Sadeh yeh...@inktank.com wrote:

 Can you provide a log that includes the bucket creation, CORS settings
 and the OPTIONS call? It'd be best if you could do it with also 'debug
 ms = 1'.

 Thanks,
 Yehuda

 On Fri, Sep 6, 2013 at 7:54 AM, Paweł Stefański pejo...@gmail.com wrote:
  Sorry for delay,
 
  static3 bucket was created on 0.56 afair, I've tested the same operation
  with fresh bucket created now on dumpling, and the problem still occurs.
 
  regards!
  --
  pawel
 
 
  On 04.09.2013 20:15, Yehuda Sadeh wrote:
 
  Is static3 a bucket that you created before the upgrade? Can you test
  it with newly created buckets? Might be that you're hitting some other
  issue.
 
  Thanks,
  Yehuda
 
  On Tue, Sep 3, 2013 at 11:19 PM, Pawel Stefanski pejo...@gmail.com
  wrote:
 
  hello!
 
  yes, dns name is configured and working perfectly, the bucket (in this
  example static3) is found actually, but RGW can't read CORS
  configuration
  due some reason.
 
  2013-09-04 08:07:46.082740 7ff4bf7ee700  2 req 10:0.000275:s3:OPTIONS
  /::getting op
  2013-09-04 08:07:46.082745 7ff4bf7ee700  2 req 10:0.000280:s3:OPTIONS
  /:options_cors:authorizing
  2013-09-04 08:07:46.082753 7ff4bf7ee700  2 req 10:0.000287:s3:OPTIONS
  /:options_cors:reading permissions
  2013-09-04 08:07:46.082790 7ff4bf7ee700 20 get_obj_state:
  rctx=0x7ff4f8003400 obj=.rgw:static3 state=0x7ff4f8005968
  s-prefetch_data=0
  2013-09-04 08:07:46.082810 7ff4bf7ee700 10 moving .rgw+static3 to
  cache
  LRU
  end
  2013-09-04 08:07:46.082819 7ff4bf7ee700 10 cache get:
  name=.rgw+static3 :
  hit
  2013-09-04 08:07:46.082840 7ff4bf7ee700 20 get_obj_state: s-obj_tag
  was
  set
  empty
  2013-09-04 08:07:46.082845 7ff4bf7ee700 20 Read xattr: user.rgw.acl
  2013-09-04 08:07:46.082847 7ff4bf7ee700 20 Read xattr: user.rgw.cors
  2013-09-04 08:07:46.082848 7ff4bf7ee700 20 Read xattr: user.rgw.idtag
  2013-09-04 08:07:46.082849 7ff4bf7ee700 20 Read xattr:
  user.rgw.manifest
  2013-09-04 08:07:46.082855 7ff4bf7ee700 10 moving .rgw+static3 to
  cache
  LRU
  end
  2013-09-04 08:07:46.082857 7ff4bf7ee700 10 cache get:
  name=.rgw+static3 :
  hit
  2013-09-04 08:07:46.082898 7ff4bf7ee700 20 rgw_get_bucket_info: old
  bucket
  info, bucket=static3(@.rgw.buckets2[99137.2]) owner pejotes
  2013-09-04 08:07:46.082921 7ff4bf7ee700 15 Read
  AccessControlPolicyAccessControlPolicy
 
 
  xmlns=http://s3.amazonaws.com/doc/2006-03-01/;OwnerIDpejotes/IDDisplayNameofe/DisplayName/OwnerAccessControlListGrantGrantee
  xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance;
 
 
  xsi:type=GroupURIhttp://acs.amazonaws.com/groups/global/AllUsers/URI/GranteePermissionFULL_CONTROL/Permission/GrantGrantGrantee
  xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance;
 
 
  xsi:type=CanonicalUserIDpejotes/IDDisplayNameofe/DisplayName/GranteePermissionFULL_CONTROL/Permission/Grant/AccessControlList/AccessControlPolicy
  2013-09-04 08:07:46.082943 7ff4bf7ee700 15 Read
  AccessControlPolicyAccessControlPolicy
 
 
  xmlns=http://s3.amazonaws.com/doc/2006-03-01/;OwnerIDpejotes/IDDisplayNameofe/DisplayName/OwnerAccessControlListGrantGrantee
  xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance;
 
 
  xsi:type=GroupURIhttp://acs.amazonaws.com/groups/global/AllUsers/URI/GranteePermissionFULL_CONTROL/Permission/GrantGrantGrantee
  xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance;
 
 
  xsi:type=CanonicalUserIDpejotes/IDDisplayNameofe/DisplayName/GranteePermissionFULL_CONTROL/Permission/Grant/AccessControlList/AccessControlPolicy
  2013-09-04 08:07:46.082951 7ff4bf7ee700  2 req 10:0.000486:s3:OPTIONS
  /:options_cors:verifying op mask
  2013-09-04 08:07:46.082955 7ff4bf7ee700 20 required_mask= 1
  user.op_mask=7
  2013-09-04 08:07:46.082957 7ff4bf7ee700  2 req 10:0.000492:s3:OPTIONS
  /:options_cors:verifying op permissions
  2013-09-04 08:07:46.082960 7ff4bf7ee700  2 req 10:0.000495:s3:OPTIONS
  /:options_cors:verifying op params
  2013-09-04 08:07:46.082963 7ff4bf7ee700  2 req 10:0.000498:s3:OPTIONS
  /:options_cors:executing
  2013-09-04 08:07:46.082966 7ff4bf7ee700  2 No CORS configuration set
  yet
  for
  this bucket
  2013-09-04 08:07:46.083105 7ff4bf7ee700  2 req 10:0.000640:s3:OPTIONS
  /:options_cors:http status=403
  2013-09-04 08:07:46.083548 7ff4bf7ee700  1 == req done
  req=0xbcd910
  http_status=403 ==
 
  best regards!
  --
  pawel
 
 
  On Tue, Sep 3, 

Re: [ceph-users] Deploy a Ceph cluster to play around with

2013-09-16 Thread Don Talton (dotalton)
If you are just playing around, you could roll everything onto a single server. 
Or, if you wanted, put the MON and OSD on a single server and the radosgw on a 
different server. You can accomplish this in a virtual machine if you don't 
have all the hardware you would like to test with.
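
A rough sketch of the smallest possible setup with ceph-deploy (hostname "node1" is a
placeholder). Note that ceph-deploy also accepts a directory path instead of a whole
disk, which answers the single-disk question, although a directory-backed OSD is only
suitable for testing; and with everything on one host you typically also need
"osd crush chooseleaf type = 0" in ceph.conf so that replicas are allowed to land on
the same host.

  $ ceph-deploy new node1
  $ ceph-deploy install node1
  $ ceph-deploy mon create node1
  $ ceph-deploy gatherkeys node1
  $ ssh node1 sudo mkdir -p /var/local/osd0 /var/local/osd1
  $ ceph-deploy osd prepare node1:/var/local/osd0 node1:/var/local/osd1
  $ ceph-deploy osd activate node1:/var/local/osd0 node1:/var/local/osd1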

 -Original Message-
 From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
 boun...@lists.ceph.com] On Behalf Of Guang
 Sent: Monday, September 16, 2013 6:14 AM
 To: ceph-users@lists.ceph.com; Ceph Development
 Subject: [ceph-users] Deploy a Ceph cluster to play around with
 
 Hello ceph-users, ceph-devel,
 Nice to meet you in the community!
 Today I tried to deploy a Ceph cluster to play around with the API, and during
 the deployment, i have a couple of questions which may need you help:
   1) How many hosts do I need if I want to deploy a cluster with RadosGW (so
 that I can try with the S3 API)? Is it 3 OSD + 1 Mon + 1 GW =  5 hosts on
 minimum?
 
   2) I have a list of hardwares, however, my host only have 1 disk with two
 partitions, one for boot and another for LVM members, is it possible to
 deploy an OSD on such hardware (e.g. make a partition with ext4)? Or I will
 need another disk to do so?
 
 -bash-4.1$ ceph-deploy disk list myserver.com [ceph_deploy.osd][INFO  ]
 Distro info: RedHatEnterpriseServer 6.3 Santiago [ceph_deploy.osd][DEBUG ]
 Listing disks on myserver.com...
 [repl101.mobstor.gq1.yahoo.com][INFO  ] Running command: ceph-disk list
 [repl101.mobstor.gq1.yahoo.com][INFO  ] /dev/sda :
 [repl101.mobstor.gq1.yahoo.com][INFO  ]  /dev/sda1 other, ext4, mounted
 on /boot [repl101.mobstor.gq1.yahoo.com][INFO  ]  /dev/sda2 other,
 LVM2_member
 
 Thanks,
 Guang
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] puppet-cephdeploy module

2013-09-16 Thread Don Talton (dotalton)
As weird as it might seem, there is now a Puppet module to automate 
ceph-deploy. It came about because Cisco has its own OpenStack installer platform, 
which requires full orchestration. It might be of some use to others, so here 
is the link:

https://github.com/dontalton/puppet-cephdeploy


Donald Talton
Systems Development Unit
dotal...@cisco.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] CORS not working

2013-09-16 Thread Pawel Stefanski
ceph version 0.67.3 (408cd61584c72c0d97b774b3d8f95c6b1b06341a)
On 16 Sep 2013 18:40, Yehuda Sadeh yeh...@inktank.com wrote:

 On Mon, Sep 16, 2013 at 9:29 AM, Pawel Stefanski pejo...@gmail.com
 wrote:
  hello!
 
  Thanks for answer!
 
  My mistake ;-),
  Correct OPTIONS request in
  http://pastebin.com/MmRtTgiZ


 That might have been fixed by the latest commit to dumpling. What
 version are you running?

  also with Ceph -v.
  static33 bucket was made on 0.67.
 
  best regards!
 
 
  On Mon, Sep 16, 2013 at 5:46 PM, Yehuda Sadeh yeh...@inktank.com
 wrote:
 
  On Mon, Sep 16, 2013 at 3:46 AM, Pawel Stefanski pejo...@gmail.com
  wrote:
   hello all!
  
   Once again sorry for delay:
   all dumps are in
   http://pastebin.com/dBnEsWpW
  
   btw. I saw second commit to dumpling
   http://tracker.ceph.com/issues/6078,
   anything more changed ?
   best regards!
 
  I pushed an extra fix to the CORS issues that you might have been
  seeing, so it's worth checking it out.
 
  As for the logs that you provide, you create a bucket named static33
  on which you set CORS, but the OPTIONS request goes to a bucket named
  static3, so obviously it doesn't work.
 
 
  Yehuda
   --
   pawel
  
  
   On Fri, Sep 6, 2013 at 5:11 PM, Yehuda Sadeh yeh...@inktank.com
 wrote:
  
   Can you provide a log that includes the bucket creation, CORS
 settings
   and the OPTIONS call? It'd be best if you could do it with also
 'debug
   ms = 1'.
  
   Thanks,
   Yehuda
  
   On Fri, Sep 6, 2013 at 7:54 AM, Paweł Stefański pejo...@gmail.com
   wrote:
Sorry for delay,
   
static3 bucket was created on 0.56 afair, I've tested the same
operation
with fresh bucket created now on dumpling, and the problem still
occurs.
   
regards!
--
pawel
   
   
On 04.09.2013 20:15, Yehuda Sadeh wrote:
   
Is static3 a bucket that you created before the upgrade? Can you
test
it with newly created buckets? Might be that you're hitting some
other
issue.
   
Thanks,
Yehuda
   
On Tue, Sep 3, 2013 at 11:19 PM, Pawel Stefanski 
 pejo...@gmail.com
wrote:
   
hello!
   
yes, dns name is configured and working perfectly, the bucket (in
this
example static3) is found actually, but RGW can't read CORS
configuration
due some reason.
   
2013-09-04 08:07:46.082740 7ff4bf7ee700  2 req
10:0.000275:s3:OPTIONS
/::getting op
2013-09-04 08:07:46.082745 7ff4bf7ee700  2 req
10:0.000280:s3:OPTIONS
/:options_cors:authorizing
2013-09-04 08:07:46.082753 7ff4bf7ee700  2 req
10:0.000287:s3:OPTIONS
/:options_cors:reading permissions
2013-09-04 08:07:46.082790 7ff4bf7ee700 20 get_obj_state:
rctx=0x7ff4f8003400 obj=.rgw:static3 state=0x7ff4f8005968
s-prefetch_data=0
2013-09-04 08:07:46.082810 7ff4bf7ee700 10 moving .rgw+static3 to
cache
LRU
end
2013-09-04 08:07:46.082819 7ff4bf7ee700 10 cache get:
name=.rgw+static3 :
hit
2013-09-04 08:07:46.082840 7ff4bf7ee700 20 get_obj_state:
s-obj_tag
was
set
empty
2013-09-04 08:07:46.082845 7ff4bf7ee700 20 Read xattr:
 user.rgw.acl
2013-09-04 08:07:46.082847 7ff4bf7ee700 20 Read xattr:
user.rgw.cors
2013-09-04 08:07:46.082848 7ff4bf7ee700 20 Read xattr:
user.rgw.idtag
2013-09-04 08:07:46.082849 7ff4bf7ee700 20 Read xattr:
user.rgw.manifest
2013-09-04 08:07:46.082855 7ff4bf7ee700 10 moving .rgw+static3 to
cache
LRU
end
2013-09-04 08:07:46.082857 7ff4bf7ee700 10 cache get:
name=.rgw+static3 :
hit
2013-09-04 08:07:46.082898 7ff4bf7ee700 20 rgw_get_bucket_info:
 old
bucket
info, bucket=static3(@.rgw.buckets2[99137.2]) owner pejotes
2013-09-04 08:07:46.082921 7ff4bf7ee700 15 Read
AccessControlPolicyAccessControlPolicy
   
   
   
xmlns=http://s3.amazonaws.com/doc/2006-03-01/
 OwnerIDpejotes/IDDisplayNameofe/DisplayName/OwnerAccessControlListGrantGrantee
xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance;
   
   
   
xsi:type=GroupURI
 http://acs.amazonaws.com/groups/global/AllUsers
 /URI/GranteePermissionFULL_CONTROL/Permission/GrantGrantGrantee
xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance;
   
   
   
   
 xsi:type=CanonicalUserIDpejotes/IDDisplayNameofe/DisplayName/GranteePermissionFULL_CONTROL/Permission/Grant/AccessControlList/AccessControlPolicy
2013-09-04 08:07:46.082943 7ff4bf7ee700 15 Read
AccessControlPolicyAccessControlPolicy
   
   
   
xmlns=http://s3.amazonaws.com/doc/2006-03-01/
 OwnerIDpejotes/IDDisplayNameofe/DisplayName/OwnerAccessControlListGrantGrantee
xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance;
   
   
   
xsi:type=GroupURI
 http://acs.amazonaws.com/groups/global/AllUsers
 /URI/GranteePermissionFULL_CONTROL/Permission/GrantGrantGrantee
xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance;
   
   
   
   
 

[ceph-users] getting started

2013-09-16 Thread Justin Ryan
Hi,

I'm brand new to Ceph, attempting to follow the Getting Started guide
(http://ceph.com/docs/master/start/) with 2 VMs. I completed the Preflight
without issue.  I completed the Storage Cluster Quick Start
(http://ceph.com/docs/master/start/quick-ceph-deploy/),
but have some questions:

The *Single Node Quick Start* grey box -- does 'single node' mean if you're
running the whole thing on a single machine, if you have only one server
node like the diagram at the top of the page, or if you're only running one
OSD process? I'm not sure if I need to make the `osd crush chooseleaf type`
change.

Are the LIST, ZAP, and ADD OSDS ON STANDALONE DISKS sections an alternative
to the MULTIPLE OSDS ON THE OS DISK (DEMO ONLY) section? I thought I set up
my OSDs already on /tmp/osd{0,1}.

Moving on to the Block Device Quick Start
(http://ceph.com/docs/master/start/quick-rbd/) --
it says To use this guide, you must have executed the procedures in the
Object Store Quick Start guide first -- but the link to the Object Store
Quick Start actually points to the Storage Cluster Quick Start
(http://ceph.com/docs/master/start/quick-ceph-deploy/) --
which is it?

Most importantly, it says Ensure your Ceph Storage Cluster is in an active
+ clean state before working with the Ceph Block Device -- how can I tell
if my cluster is active+clean? The only ceph* command on the admin node is
ceph-deploy, and running `ceph` on the server node:

ceph@jr-ceph2:~$ ceph
2013-09-16 16:53:10.880267 7feb96c1b700 -1 monclient(hunting): ERROR:
missing keyring, cannot use cephx for authentication
2013-09-16 16:53:10.880271 7feb96c1b700  0 librados: client.admin
initialization error (2) No such file or directory
Error connecting to cluster: ObjectNotFound

Thanks in advance for any help, and apologies if I missed anything obvious.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] CORS not working

2013-09-16 Thread Yehuda Sadeh
On Mon, Sep 16, 2013 at 9:47 AM, Pawel Stefanski pejo...@gmail.com wrote:
 ceph version 0.67.3 (408cd61584c72c0d97b774b3d8f95c6b1b06341a)


That one doesn't have the required fix, which will be in 0.67.4.

Yehuda


 On 16 Sep 2013 at 18:40, Yehuda Sadeh yeh...@inktank.com wrote:

 On Mon, Sep 16, 2013 at 9:29 AM, Pawel Stefanski pejo...@gmail.com
 wrote:
  hello!
 
  Thanks for answer!
 
  My mistake ;-),
  Correct OPTIONS request in
  http://pastebin.com/MmRtTgiZ


 That might have been fixed by the latest commit to dumpling. What
 version are you running?

  also with Ceph -v.
  static33 bucket was made on 0.67.
 
  best regards!
 
 
  On Mon, Sep 16, 2013 at 5:46 PM, Yehuda Sadeh yeh...@inktank.com
  wrote:
 
  On Mon, Sep 16, 2013 at 3:46 AM, Pawel Stefanski pejo...@gmail.com
  wrote:
   hello all!
  
   Once again sorry for delay:
   all dumps are in
   http://pastebin.com/dBnEsWpW
  
   btw. I saw second commit to dumpling
   http://tracker.ceph.com/issues/6078,
   anything more changed ?
   best regards!
 
  I pushed an extra fix to the CORS issues that you might have been
  seeing, so it's worth checking it out.
 
  As for the logs that you provide, you create a bucket named static33
  on which you set CORS, but the OPTIONS request goes to a bucket named
  static3, so obviously it doesn't work.
 
 
  Yehuda
   --
   pawel
  
  
   On Fri, Sep 6, 2013 at 5:11 PM, Yehuda Sadeh yeh...@inktank.com
   wrote:
  
   Can you provide a log that includes the bucket creation, CORS
   settings
   and the OPTIONS call? It'd be best if you could do it with also
   'debug
   ms = 1'.
  
   Thanks,
   Yehuda
  
   On Fri, Sep 6, 2013 at 7:54 AM, Paweł Stefański pejo...@gmail.com
   wrote:
Sorry for delay,
   
static3 bucket was created on 0.56 afair, I've tested the same
operation
with fresh bucket created now on dumpling, and the problem still
occurs.
   
regards!
--
pawel
   
   
On 04.09.2013 20:15, Yehuda Sadeh wrote:
   
Is static3 a bucket that you created before the upgrade? Can you
test
it with newly created buckets? Might be that you're hitting some
other
issue.
   
Thanks,
Yehuda
   
On Tue, Sep 3, 2013 at 11:19 PM, Pawel Stefanski
pejo...@gmail.com
wrote:
   
hello!
   
yes, dns name is configured and working perfectly, the bucket
(in
this
example static3) is found actually, but RGW can't read CORS
configuration
due some reason.
   
2013-09-04 08:07:46.082740 7ff4bf7ee700  2 req
10:0.000275:s3:OPTIONS
/::getting op
2013-09-04 08:07:46.082745 7ff4bf7ee700  2 req
10:0.000280:s3:OPTIONS
/:options_cors:authorizing
2013-09-04 08:07:46.082753 7ff4bf7ee700  2 req
10:0.000287:s3:OPTIONS
/:options_cors:reading permissions
2013-09-04 08:07:46.082790 7ff4bf7ee700 20 get_obj_state:
rctx=0x7ff4f8003400 obj=.rgw:static3 state=0x7ff4f8005968
s-prefetch_data=0
2013-09-04 08:07:46.082810 7ff4bf7ee700 10 moving .rgw+static3
to
cache
LRU
end
2013-09-04 08:07:46.082819 7ff4bf7ee700 10 cache get:
name=.rgw+static3 :
hit
2013-09-04 08:07:46.082840 7ff4bf7ee700 20 get_obj_state:
s-obj_tag
was
set
empty
2013-09-04 08:07:46.082845 7ff4bf7ee700 20 Read xattr:
user.rgw.acl
2013-09-04 08:07:46.082847 7ff4bf7ee700 20 Read xattr:
user.rgw.cors
2013-09-04 08:07:46.082848 7ff4bf7ee700 20 Read xattr:
user.rgw.idtag
2013-09-04 08:07:46.082849 7ff4bf7ee700 20 Read xattr:
user.rgw.manifest
2013-09-04 08:07:46.082855 7ff4bf7ee700 10 moving .rgw+static3
to
cache
LRU
end
2013-09-04 08:07:46.082857 7ff4bf7ee700 10 cache get:
name=.rgw+static3 :
hit
2013-09-04 08:07:46.082898 7ff4bf7ee700 20 rgw_get_bucket_info:
old
bucket
info, bucket=static3(@.rgw.buckets2[99137.2]) owner pejotes
2013-09-04 08:07:46.082921 7ff4bf7ee700 15 Read
AccessControlPolicyAccessControlPolicy
   
   
   
   
xmlns=http://s3.amazonaws.com/doc/2006-03-01/;OwnerIDpejotes/IDDisplayNameofe/DisplayName/OwnerAccessControlListGrantGrantee
xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance;
   
   
   
   
xsi:type=GroupURIhttp://acs.amazonaws.com/groups/global/AllUsers/URI/GranteePermissionFULL_CONTROL/Permission/GrantGrantGrantee
xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance;
   
   
   
   
xsi:type=CanonicalUserIDpejotes/IDDisplayNameofe/DisplayName/GranteePermissionFULL_CONTROL/Permission/Grant/AccessControlList/AccessControlPolicy
2013-09-04 08:07:46.082943 7ff4bf7ee700 15 Read
AccessControlPolicyAccessControlPolicy
   
   
   
   
xmlns=http://s3.amazonaws.com/doc/2006-03-01/;OwnerIDpejotes/IDDisplayNameofe/DisplayName/OwnerAccessControlListGrantGrantee
xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance;
   
   
   
   

Re: [ceph-users] CORS not working

2013-09-16 Thread Pawel Stefanski
hello!

Thanks for answer!

My mistake ;-),
Correct OPTIONS request in
http://pastebin.com/MmRtTgiZ
also with Ceph -v.
static33 bucket was made on 0.67.

best regards!


On Mon, Sep 16, 2013 at 5:46 PM, Yehuda Sadeh yeh...@inktank.com wrote:

 On Mon, Sep 16, 2013 at 3:46 AM, Pawel Stefanski pejo...@gmail.com
 wrote:
  hello all!
 
  Once again sorry for delay:
  all dumps are in
  http://pastebin.com/dBnEsWpW
 
  btw. I saw second commit to dumpling http://tracker.ceph.com/issues/6078
 ,
  anything more changed ?
  best regards!

 I pushed an extra fix to the CORS issues that you might have been
 seeing, so it's worth checking it out.

 As for the logs that you provide, you create a bucket named static33
 on which you set CORS, but the OPTIONS request goes to a bucket named
 static3, so obviously it doesn't work.


 Yehuda
  --
  pawel
 
 
  On Fri, Sep 6, 2013 at 5:11 PM, Yehuda Sadeh yeh...@inktank.com wrote:
 
  Can you provide a log that includes the bucket creation, CORS settings
  and the OPTIONS call? It'd be best if you could do it with also 'debug
  ms = 1'.
 
  Thanks,
  Yehuda
 
  On Fri, Sep 6, 2013 at 7:54 AM, Paweł Stefański pejo...@gmail.com
 wrote:
   Sorry for delay,
  
   static3 bucket was created on 0.56 afair, I've tested the same
 operation
   with fresh bucket created now on dumpling, and the problem still
 occurs.
  
   regards!
   --
   pawel
  
  
   On 04.09.2013 20:15, Yehuda Sadeh wrote:
  
   Is static3 a bucket that you created before the upgrade? Can you test
   it with newly created buckets? Might be that you're hitting some
 other
   issue.
  
   Thanks,
   Yehuda
  
   On Tue, Sep 3, 2013 at 11:19 PM, Pawel Stefanski pejo...@gmail.com
   wrote:
  
   hello!
  
   yes, dns name is configured and working perfectly, the bucket (in
 this
   example static3) is found actually, but RGW can't read CORS
   configuration
   due some reason.
  
   2013-09-04 08:07:46.082740 7ff4bf7ee700  2 req
 10:0.000275:s3:OPTIONS
   /::getting op
   2013-09-04 08:07:46.082745 7ff4bf7ee700  2 req
 10:0.000280:s3:OPTIONS
   /:options_cors:authorizing
   2013-09-04 08:07:46.082753 7ff4bf7ee700  2 req
 10:0.000287:s3:OPTIONS
   /:options_cors:reading permissions
   2013-09-04 08:07:46.082790 7ff4bf7ee700 20 get_obj_state:
   rctx=0x7ff4f8003400 obj=.rgw:static3 state=0x7ff4f8005968
   s-prefetch_data=0
   2013-09-04 08:07:46.082810 7ff4bf7ee700 10 moving .rgw+static3 to
   cache
   LRU
   end
   2013-09-04 08:07:46.082819 7ff4bf7ee700 10 cache get:
   name=.rgw+static3 :
   hit
   2013-09-04 08:07:46.082840 7ff4bf7ee700 20 get_obj_state: s-obj_tag
   was
   set
   empty
   2013-09-04 08:07:46.082845 7ff4bf7ee700 20 Read xattr: user.rgw.acl
   2013-09-04 08:07:46.082847 7ff4bf7ee700 20 Read xattr: user.rgw.cors
   2013-09-04 08:07:46.082848 7ff4bf7ee700 20 Read xattr:
 user.rgw.idtag
   2013-09-04 08:07:46.082849 7ff4bf7ee700 20 Read xattr:
   user.rgw.manifest
   2013-09-04 08:07:46.082855 7ff4bf7ee700 10 moving .rgw+static3 to
   cache
   LRU
   end
   2013-09-04 08:07:46.082857 7ff4bf7ee700 10 cache get:
   name=.rgw+static3 :
   hit
   2013-09-04 08:07:46.082898 7ff4bf7ee700 20 rgw_get_bucket_info: old
   bucket
   info, bucket=static3(@.rgw.buckets2[99137.2]) owner pejotes
   2013-09-04 08:07:46.082921 7ff4bf7ee700 15 Read
   AccessControlPolicyAccessControlPolicy
  
  
   xmlns=http://s3.amazonaws.com/doc/2006-03-01/
 OwnerIDpejotes/IDDisplayNameofe/DisplayName/OwnerAccessControlListGrantGrantee
   xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance;
  
  
   xsi:type=GroupURI
 http://acs.amazonaws.com/groups/global/AllUsers
 /URI/GranteePermissionFULL_CONTROL/Permission/GrantGrantGrantee
   xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance;
  
  
  
 xsi:type=CanonicalUserIDpejotes/IDDisplayNameofe/DisplayName/GranteePermissionFULL_CONTROL/Permission/Grant/AccessControlList/AccessControlPolicy
   2013-09-04 08:07:46.082943 7ff4bf7ee700 15 Read
   AccessControlPolicyAccessControlPolicy
  
  
   xmlns=http://s3.amazonaws.com/doc/2006-03-01/
 OwnerIDpejotes/IDDisplayNameofe/DisplayName/OwnerAccessControlListGrantGrantee
   xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance;
  
  
   xsi:type=GroupURI
 http://acs.amazonaws.com/groups/global/AllUsers
 /URI/GranteePermissionFULL_CONTROL/Permission/GrantGrantGrantee
   xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance;
  
  
  
 xsi:type=CanonicalUserIDpejotes/IDDisplayNameofe/DisplayName/GranteePermissionFULL_CONTROL/Permission/Grant/AccessControlList/AccessControlPolicy
   2013-09-04 08:07:46.082951 7ff4bf7ee700  2 req
 10:0.000486:s3:OPTIONS
   /:options_cors:verifying op mask
   2013-09-04 08:07:46.082955 7ff4bf7ee700 20 required_mask= 1
   user.op_mask=7
   2013-09-04 08:07:46.082957 7ff4bf7ee700  2 req
 10:0.000492:s3:OPTIONS
   /:options_cors:verifying op permissions
   2013-09-04 08:07:46.082960 7ff4bf7ee700  2 req
 10:0.000495:s3:OPTIONS
   /:options_cors:verifying op params
   2013-09-04 08:07:46.082963 7ff4bf7ee700  2 req
 

Re: [ceph-users] errors after kernel-upgrade -- Help needed

2013-09-16 Thread Gregory Farnum
Obviously your OSDs aren't getting all the PGs up and running. Have
you followed the troubleshooting steps?
(http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/)
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
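
For reference, a minimal sketch of the commands that usually narrow this down
(the PG id below is a placeholder to be taken from the output of the first two):

$ ceph health detail          # lists the problem PGs explicitly
$ ceph pg dump_stuck unclean  # PGs stuck in an unclean state
$ ceph pg dump_stuck stale    # PGs whose status the monitors consider stale
$ ceph pg 2.1f query          # detailed peering state of one reported PG
$ ceph osd tree               # confirm where the OSDs sit and that they are up/in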


On Mon, Sep 16, 2013 at 6:35 AM, Markus Goldberg
goldb...@uni-hildesheim.de wrote:
 Hi,
 I must ask once again: is there really no help for this problem?
 The errors are still there. I can't mount the cluster anymore. All my
 data is gone.
 The error messages are still changing every few seconds.

 What can I do?

 Please help,
   Markus

 Am 11.09.2013 08:39, schrieb Markus Goldberg:

 Does no one have an idea?
 I can't mount the cluster anymore.

 Thank you,
   Markus

 Am 10.09.2013 09:43, schrieb Markus Goldberg:

 Hi,
 I did a 'stop ceph-all' on my ceph admin host and then a kernel upgrade
 from 3.9 to 3.11 on all of my 3 nodes.
 Ubuntu 13.04, ceph 0.68.
 The kernel upgrade required a reboot.
 Now, after rebooting, I get the following errors:

 root@bd-a:~# ceph -s
 cluster e0dbf70d-af59-42a5-b834-7ad739a7f89b
  health HEALTH_WARN 133 pgs peering; 272 pgs stale; 265 pgs stuck
 unclean; 2 requests are blocked  32 sec; mds cluster is degraded
  monmap e1: 3 mons at
 {bd-0=xxx.xxx.xxx.20:6789/0,bd-1=xxx.xxx.xxx.21:6789/0,bd-2=xxx.xxx.xxx.22:6789/0},
 election epoch 782, quorum 0,1,2 bd-0,bd-1,bd-2
  mdsmap e451467: 1/1/1 up {0=bd-0=up:replay}, 2 up:standby
  osdmap e464358: 3 osds: 3 up, 3 in
   pgmap v1343477: 792 pgs, 9 pools, 15145 MB data, 4986 objects
 30927 MB used, 61372 GB / 61408 GB avail
  387 active+clean
  122 stale+active
  140 stale+active+clean
  133 peering
   10 stale+active+replay

 root@bd-a:~# ceph -s
 cluster e0dbf70d-af59-42a5-b834-7ad739a7f89b
  health HEALTH_WARN 6 pgs down; 377 pgs peering; 296 pgs stuck unclean;
 mds cluster is degraded
  monmap e1: 3 mons at
 {bd-0=xxx.xxx.xxx.20:6789/0,bd-1=xxx.xxx.xxx.21:6789/0,bd-2=xxx.xxx.xxx.22:6789/0},
 election epoch 782, quorum 0,1,2 bd-0,bd-1,bd-2
  mdsmap e451467: 1/1/1 up {0=bd-0=up:replay}, 2 up:standby
  osdmap e464400: 3 osds: 3 up, 3 in
   pgmap v1343586: 792 pgs, 9 pools, 15145 MB data, 4986 objects
 31046 MB used, 61372 GB / 61408 GB avail
  142 active
  270 active+clean
3 active+replay
  371 peering
6 down+peering

 root@bd-a:~# ceph -s
 cluster e0dbf70d-af59-42a5-b834-7ad739a7f89b
  health HEALTH_WARN 257 pgs peering; 359 pgs stuck unclean; 1 requests
 are blocked  32 sec; mds cluster is degraded
  monmap e1: 3 mons at
 {bd-0=xxx.xxx.xxx.20:6789/0,bd-1=xxx.xxx.xxx.21:6789/0,bd-2=xxx.xxx.xxx.22:6789/0},
 election epoch 782, quorum 0,1,2 bd-0,bd-1,bd-2
  mdsmap e451467: 1/1/1 up {0=bd-0=up:replay}, 2 up:standby
  osdmap e464403: 3 osds: 3 up, 3 in
   pgmap v1343594: 792 pgs, 9 pools, 15145 MB data, 4986 objects
 31103 MB used, 61372 GB / 61408 GB avail
  373 active
  157 active+clean
5 active+replay
  257 peering

 root@bd-a:~#

 As you can see above, the errors keep changing, so perhaps some self-repair is
 running in the background. But it has been like this for 12 hours.
 What should I do?

 Thank you,
   Markus
 Am 09.09.2013 13:52, schrieb Yan, Zheng:

 The bug has been fixed in 3.11 kernel by commit ccca4e37b1 (libceph: fix
 truncate size calculation). We don't backport cephfs bug fixes to old
 kernel. please update the kernel or use ceph-fuse. Regards Yan, Zheng

 Best regards,
 Tobi

 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



 --
 MfG,
   Markus Goldberg

 
 Markus Goldberg | Universität Hildesheim
 | Rechenzentrum
 Tel +49 5121 883212 | Marienburger Platz 22, D-31141 Hildesheim, Germany
 Fax +49 5121 883205 | email goldb...@uni-hildesheim.de
 



 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



 --
 MfG,
   Markus Goldberg

 
 Markus Goldberg | Universität Hildesheim
 | Rechenzentrum
 Tel +49 5121 883212 | Marienburger Platz 22, D-31141 Hildesheim, Germany
 Fax +49 5121 883205 | email goldb...@uni-hildesheim.de
 



 

Re: [ceph-users] radosGW namespace

2013-09-16 Thread Yehuda Sadeh
Hi,

  currently different users share the same namespace. This follows S3
semantics and allows buckets to be accessed directly via a virtual host
name. We do have multiple namespaces in mind, though, and I also
prototyped it not too long ago:

http://wiki.ceph.com/01Planning/02Blueprints/Emperor/rgw%3A_multitenancy


Yehuda
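
To illustrate the flat namespace (a sketch only; the bucket name and the two
s3cmd config files with their credentials are made up for the example):

# user A creates the bucket -- this succeeds
$ s3cmd -c ~/.s3cfg-userA mb s3://test

# user B tries the same name -- the gateway answers 409 BucketAlreadyExists,
# because bucket names are global across users
$ s3cmd -c ~/.s3cfg-userB mb s3://test

# ownership can always be checked on the admin side
$ radosgw-admin bucket stats --bucket=test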


On Sun, Sep 15, 2013 at 11:47 PM, Fuchs, Andreas (SwissTXT)
andreas.fu...@swisstxt.ch wrote:
 Hi Ceph Users

 We setup a radosgw per ceph doku. While everything works fine we found out 
 that different access_keys share the same bucket namespace.
 So when access_key A creates a bucket test access_key B cannot create a 
 bucket with name test.
 Is it possible to separate the account's so that they have theyr own 
 namesbace?

 Many thanks
 Andi
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] new puppet-cephdeploy module

2013-09-16 Thread Don Talton (dotalton)
As weird as it might seem, there is a puppet module now to automate 
ceph-deploy. It came about as Cisco has its own OpenStack installer platform 
which requires full orchestration. It might be of some use to others, so here 
is the link:

https://github.com/dontalton/puppet-cephdeploy


Donald Talton
Systems Development Unit
dotal...@cisco.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] problem with ceph-deploy hanging

2013-09-16 Thread Alfredo Deza
On Mon, Sep 16, 2013 at 1:58 PM, Gruher, Joseph R
joseph.r.gru...@intel.com wrote:
But certainly, I am worried about why it is hanging for you here; this is a
problem and I really want to make sure this is either fixed or confirmed it
was some kind of misconfiguration.

I believe that the problem is coming from using `sudo` + `root`. This is a
problem that is certainly fixed in the upcoming version.


Can you try with a different user (for now) ?

 Sure, happy to try anything that might help.  Can you clarify what you mean 
 by different user?  Should I log in as something besides root on the admin 
 system or should I set up SSH to auto-login as a different user on the 
 target system?  If I use a different user on the target, should I configure as in 
 the pre-flight (which is what I already did) or are there any changes you 
 might suggest I try?

ceph-deploy will use the user you are currently executing as. That is
why, if you are calling ceph-deploy as root, it will log in remotely
as root.

So by a different user, I mean, something like, user `ceph` executing
ceph-deploy (yes, that same user needs to exist remotely too with
correct permissions)
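
Something along these lines, matching the preflight docs, is usually enough
(a sketch; the hostname is a placeholder):

# on every node: create the user and give it passwordless sudo
$ sudo useradd -d /home/ceph -m ceph
$ sudo passwd ceph
$ echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
$ sudo chmod 0440 /etc/sudoers.d/ceph

# on the admin node, logged in as that same user: key-based SSH, then deploy
$ ssh-keygen
$ ssh-copy-id ceph@node1
$ ceph-deploy install node1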



 I agree, this should be root caused and addressed, there is nothing 
 particularly special about my environment as far as I know, and I followed 
 the pre-flight carefully so I assume other users may be exposed to the same 
 problem.

 Thanks!
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] CORS not working

2013-09-16 Thread Pawel Stefanski
I've tried now with:
http://gitbuilder.ceph.com/ceph-deb-precise-x86_64-basic/ref/dumpling/pool/main/c/ceph/radosgw_0.67.3-10-g670db7e-1precise_amd64.deb
And it works :-)

Thanks!!
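
For anyone else verifying a CORS setup, the browser's preflight can be
simulated by hand (a sketch; the gateway hostname is a placeholder and the
Origin has to match one of the rules set on the bucket):

$ curl -i -X OPTIONS \
    -H "Origin: http://example.com" \
    -H "Access-Control-Request-Method: GET" \
    http://static33.gw.example.com/

# a matching rule answers with the Access-Control-Allow-* headers;
# no matching rule typically results in a 403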


On Mon, Sep 16, 2013 at 7:04 PM, Pawel Stefanski pejo...@gmail.com wrote:

 I will try with dumpling master git build today.
  On 16 Sep 2013 at 18:55, Yehuda Sadeh yeh...@inktank.com wrote:

 On Mon, Sep 16, 2013 at 9:47 AM, Pawel Stefanski pejo...@gmail.com
 wrote:
  ceph version 0.67.3 (408cd61584c72c0d97b774b3d8f95c6b1b06341a)


 That one doesn't have the required fix, which will be in 0.67.4.

 Yehuda

 
   On 16 Sep 2013 at 18:40, Yehuda Sadeh yeh...@inktank.com wrote:
 
  On Mon, Sep 16, 2013 at 9:29 AM, Pawel Stefanski pejo...@gmail.com
  wrote:
   hello!
  
   Thanks for answer!
  
   My mistake ;-),
   Correct OPTIONS request in
   http://pastebin.com/MmRtTgiZ
 
 
  That might have been fixed by the latest commit to dumpling. What
  version are you running?
 
   also with Ceph -v.
   static33 bucket was made on 0.67.
  
   best regards!
  
  
   On Mon, Sep 16, 2013 at 5:46 PM, Yehuda Sadeh yeh...@inktank.com
   wrote:
  
   On Mon, Sep 16, 2013 at 3:46 AM, Pawel Stefanski pejo...@gmail.com
 
   wrote:
hello all!
   
Once again sorry for delay:
all dumps are in
http://pastebin.com/dBnEsWpW
   
btw. I saw second commit to dumpling
http://tracker.ceph.com/issues/6078,
anything more changed ?
best regards!
  
   I pushed an extra fix to the CORS issues that you might have been
   seeing, so it's worth checking it out.
  
   As for the logs that you provide, you create a bucket named static33
   on which you set CORS, but the OPTIONS request goes to a bucket
 named
   static3, so obviously it doesn't work.
  
  
   Yehuda
--
pawel
   
   
On Fri, Sep 6, 2013 at 5:11 PM, Yehuda Sadeh yeh...@inktank.com
wrote:
   
Can you provide a log that includes the bucket creation, CORS
settings
and the OPTIONS call? It'd be best if you could do it with also
'debug
ms = 1'.
   
Thanks,
Yehuda
   
On Fri, Sep 6, 2013 at 7:54 AM, Paweł Stefański 
 pejo...@gmail.com
wrote:
 Sorry for delay,

 static3 bucket was created on 0.56 afair, I've tested the same
 operation
 with fresh bucket created now on dumpling, and the problem
 still
 occurs.

 regards!
 --
 pawel


 On 04.09.2013 20:15, Yehuda Sadeh wrote:

 Is static3 a bucket that you created before the upgrade? Can
 you
 test
 it with newly created buckets? Might be that you're hitting
 some
 other
 issue.

 Thanks,
 Yehuda

 On Tue, Sep 3, 2013 at 11:19 PM, Pawel Stefanski
 pejo...@gmail.com
 wrote:

 hello!

 yes, dns name is configured and working perfectly, the bucket
 (in
 this
 example static3) is found actually, but RGW can't read CORS
 configuration
 due some reason.

 2013-09-04 08:07:46.082740 7ff4bf7ee700  2 req
 10:0.000275:s3:OPTIONS
 /::getting op
 2013-09-04 08:07:46.082745 7ff4bf7ee700  2 req
 10:0.000280:s3:OPTIONS
 /:options_cors:authorizing
 2013-09-04 08:07:46.082753 7ff4bf7ee700  2 req
 10:0.000287:s3:OPTIONS
 /:options_cors:reading permissions
 2013-09-04 08:07:46.082790 7ff4bf7ee700 20 get_obj_state:
 rctx=0x7ff4f8003400 obj=.rgw:static3 state=0x7ff4f8005968
 s-prefetch_data=0
 2013-09-04 08:07:46.082810 7ff4bf7ee700 10 moving
 .rgw+static3
 to
 cache
 LRU
 end
 2013-09-04 08:07:46.082819 7ff4bf7ee700 10 cache get:
 name=.rgw+static3 :
 hit
 2013-09-04 08:07:46.082840 7ff4bf7ee700 20 get_obj_state:
 s-obj_tag
 was
 set
 empty
 2013-09-04 08:07:46.082845 7ff4bf7ee700 20 Read xattr:
 user.rgw.acl
 2013-09-04 08:07:46.082847 7ff4bf7ee700 20 Read xattr:
 user.rgw.cors
 2013-09-04 08:07:46.082848 7ff4bf7ee700 20 Read xattr:
 user.rgw.idtag
 2013-09-04 08:07:46.082849 7ff4bf7ee700 20 Read xattr:
 user.rgw.manifest
 2013-09-04 08:07:46.082855 7ff4bf7ee700 10 moving
 .rgw+static3
 to
 cache
 LRU
 end
 2013-09-04 08:07:46.082857 7ff4bf7ee700 10 cache get:
 name=.rgw+static3 :
 hit
 2013-09-04 08:07:46.082898 7ff4bf7ee700 20
 rgw_get_bucket_info:
 old
 bucket
 info, bucket=static3(@.rgw.buckets2[99137.2]) owner pejotes
 2013-09-04 08:07:46.082921 7ff4bf7ee700 15 Read
 AccessControlPolicyAccessControlPolicy




 xmlns=http://s3.amazonaws.com/doc/2006-03-01/
 OwnerIDpejotes/IDDisplayNameofe/DisplayName/OwnerAccessControlListGrantGrantee
 xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance;




 xsi:type=GroupURI
 http://acs.amazonaws.com/groups/global/AllUsers
 /URI/GranteePermissionFULL_CONTROL/Permission/GrantGrantGrantee
 xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance;





 

Re: [ceph-users] getting started

2013-09-16 Thread John Wilkins
We will have a new update to the quick start this week.

On Mon, Sep 16, 2013 at 12:18 PM, Alfredo Deza alfredo.d...@inktank.com wrote:
 On Mon, Sep 16, 2013 at 12:58 PM, Justin Ryan justin.r...@kixeye.com wrote:
 Hi,

 I'm brand new to Ceph, attempting to follow the Getting Started guide with 2
 VMs. I completed the Preflight without issue.  I completed Storage Cluster
 Quick Start, but have some questions:

 The Single Node Quick Start grey box -- does 'single node' mean if you're
 running the whole thing on a single machine, if you have only one server
 node like the diagram at the top of the page, or if you're only running one
 OSD process? I'm not sure if I need to make the `osd crush chooseleaf type`
 change.

 Are the LIST, ZAP, and ADD OSDS ON STANDALONE DISKS sections an alternative
 to the MULTIPLE OSDS ON THE OS DISK (DEMO ONLY) section? I thought I set up
 my OSDs already on /tmp/osd{0,1}.

 Moving on to the Block Device Quick Start -- it says To use this guide, you
 must have executed the procedures in the Object Store Quick Start guide
 first -- but the link to the Object Store Quick Start actually points to
 the Storage Cluster Quick Start -- which is it?

 Most importantly, it says Ensure your Ceph Storage Cluster is in an active
 + clean state before working with the Ceph Block Device --- how can tell if
 my cluster is active+clean?? The only ceph* command on the admin node is
 ceph-deploy, and running `ceph` on the server node:

 ceph@jr-ceph2:~$ ceph
 2013-09-16 16:53:10.880267 7feb96c1b700 -1 monclient(hunting): ERROR:
 missing keyring, cannot use cephx for authentication
 2013-09-16 16:53:10.880271 7feb96c1b700  0 librados: client.admin
 initialization error (2) No such file or directory
 Error connecting to cluster: ObjectNotFound

 There is a ticket open for this, but you basically need super-user
 permissions here to run (any?) ceph commands.
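
In the meantime, the usual way to get plain ceph commands working is to push
the config and admin keyring to the node with ceph-deploy and run the client
with enough privileges to read that keyring (a sketch, using the node name
from the report):

# from the admin node's working directory that holds ceph.conf and the keyrings
$ ceph-deploy admin jr-ceph2

# on the node: the keyring lands in /etc/ceph and is root-readable by default
$ sudo ceph health
$ sudo ceph -s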

 Thanks in advance for any help, and apologies if I missed anything obvious.





 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
John Wilkins
Senior Technical Writer
Inktank
john.wilk...@inktank.com
(415) 425-9599
http://inktank.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Stripping rados-classes libraries

2013-09-16 Thread Nick Bartos
Which symbols need to be kept when stripping the libraries in rados-classes?

I found this e-mail from 2010:

http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/1543

Is it still just symbols that start with __cls_ that need to be kept?

We compile ceph with debug symbols and strip them out for later debugging.
I was excluding the files in rados-classes from stripping for 0.67; however,
in 0.67.3 the files have gotten so big with the debug info that we're going to
have to strip what we can out of the files we ship.
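
If it is still just the __cls_ entry points that matter, the plan would be
something along these lines with GNU binutils (a sketch; the path is the
default class dir and worth double-checking, and the result should be tested
against an OSD that actually loads the class):

# keep the full debug info aside, then strip but preserve __cls_* symbols
# (-w enables shell-style wildcards for -K/--keep-symbol)
$ objcopy --only-keep-debug /usr/lib/rados-classes/libcls_rbd.so libcls_rbd.so.debug
$ strip -w -K '__cls_*' /usr/lib/rados-classes/libcls_rbd.so
$ objcopy --add-gnu-debuglink=libcls_rbd.so.debug /usr/lib/rados-classes/libcls_rbd.so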
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] getting started

2013-09-16 Thread Justin Ryan
Thanks, running as root does give me status, but it's not clean.

r...@jr-ceph2.vm:~# ceph status
  cluster 9059dfad-924a-425c-a20b-17dc1d53111e
   health HEALTH_WARN 91 pgs degraded; 192 pgs stuck unclean; recovery
21/42 degraded (50.000%)
   monmap e1: 1 mons at {jr-ceph2=10.88.26.55:6789/0}, election epoch 2,
quorum 0 jr-ceph2
   osdmap e10: 2 osds: 2 up, 2 in
pgmap v2715: 192 pgs: 101 active+remapped, 91 active+degraded; 9518
bytes data, 9148 MB used, 362 GB / 391 GB avail; 21/42 degraded (50.000%)
   mdsmap e4: 1/1/1 up {0=jr-ceph2.XXX=up:active}

I don't see anything telling in the ceph logs; should I wait for the new
quickstart?
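
(Could this be the single-node case from the quick start? With both OSDs on
the same host, the default CRUSH rule wants each replica on a separate host,
so the PGs can never become active+clean -- that is what the
`osd crush chooseleaf type = 0` note is about. If so, my understanding is
that on an already-created cluster the rule can be adjusted by editing the
CRUSH map; a sketch of what I would try, as root:

$ ceph osd getcrushmap -o crushmap.bin
$ crushtool -d crushmap.bin -o crushmap.txt
# in crushmap.txt change the default rule's step
#   step chooseleaf firstn 0 type host
# to
#   step chooseleaf firstn 0 type osd
$ crushtool -c crushmap.txt -o crushmap.new
$ ceph osd setcrushmap -i crushmap.new
$ ceph -s   # the PGs should then settle into active+clean
)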


On Mon, Sep 16, 2013 at 2:27 PM, John Wilkins john.wilk...@inktank.comwrote:

 We will have a new update to the quick start this week.

 On Mon, Sep 16, 2013 at 12:18 PM, Alfredo Deza alfredo.d...@inktank.com
 wrote:
  On Mon, Sep 16, 2013 at 12:58 PM, Justin Ryan justin.r...@kixeye.com
 wrote:
  Hi,
 
  I'm brand new to Ceph, attempting to follow the Getting Started guide
 with 2
  VMs. I completed the Preflight without issue.  I completed Storage
 Cluster
  Quick Start, but have some questions:
 
  The Single Node Quick Start grey box -- does 'single node' mean if
 you're
  running the whole thing on a single machine, if you have only one server
  node like the diagram at the top of the page, or if you're only running
 one
  OSD process? I'm not sure if I need to make the `osd crush chooseleaf
 type`
  change.
 
  Are the LIST, ZAP, and ADD OSDS ON STANDALONE DISKS sections an
 alternative
  to the MULTIPLE OSDS ON THE OS DISK (DEMO ONLY) section? I thought I
 set up
  my OSDs already on /tmp/osd{0,1}.
 
  Moving on to the Block Device Quick Start -- it says To use this
 guide, you
  must have executed the procedures in the Object Store Quick Start guide
  first -- but the link to the Object Store Quick Start actually points
 to
  the Storage Cluster Quick Start -- which is it?
 
  Most importantly, it says Ensure your Ceph Storage Cluster is in an
 active
  + clean state before working with the Ceph Block Device --- how can
 tell if
  my cluster is active+clean?? The only ceph* command on the admin node is
  ceph-deploy, and running `ceph` on the server node:
 
  ceph@jr-ceph2:~$ ceph
  2013-09-16 16:53:10.880267 7feb96c1b700 -1 monclient(hunting): ERROR:
  missing keyring, cannot use cephx for authentication
  2013-09-16 16:53:10.880271 7feb96c1b700  0 librados: client.admin
  initialization error (2) No such file or directory
  Error connecting to cluster: ObjectNotFound
 
  There is a ticket open for this, but you basically need super-user
  permissions here to run (any?) ceph commands.
 
  Thanks in advance for any help, and apologies if I missed anything
 obvious.
 
 
 
 
 
  ___
  ceph-users mailing list
  ceph-users@lists.ceph.com
  http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 
  ___
  ceph-users mailing list
  ceph-users@lists.ceph.com
  http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



 --
 John Wilkins
 Senior Technical Writer
  Inktank
 john.wilk...@inktank.com
 (415) 425-9599
 http://inktank.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] problem with ceph-deploy hanging

2013-09-16 Thread Gruher, Joseph R
-Original Message-
From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
Subject: Re: [ceph-users] problem with ceph-deploy hanging

ceph-deploy will use the user you are currently executing as. That is why, if
you are calling ceph-deploy as root, it will log in remotely as root.

So by a different user, I mean, something like, user `ceph` executing ceph-
deploy (yes, that same user needs to exist remotely too with correct
permissions)

This is interesting.  Since the preflight has us set up passwordless SSH with a 
default ceph user I assumed it didn't really matter what user I was logged in 
as on the admin system.  Good to know.

Unfortunately, logging in as my ceph user on the admin system (with a matching 
user on the target system) does not affect my result.  The ceph-deploy 
install still hangs here:

[cephtest02][INFO  ] Running command: wget -q -O- 
'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | apt-key 
add -

It has been suggested that this could be due to our firewall.  I have the 
proxies configured in /etc/environment and when I run a wget myself (as the 
ceph user, either directly on cephtest02 or via SSH command to cephtest02 from 
the admin system) it resolves the proxy and succeeds.  Is there any reason the 
wget might behave differently when run by ceph-deploy and fail to resolve the 
proxy?  Is there anywhere I might need to set proxy information besides 
/etc/environment?
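
For reference, these are the other places I can think of where the proxy may
need to be set so that commands spawned over SSH or through sudo pick it up
(a sketch; the proxy host is a placeholder):

# wget reads /etc/wgetrc regardless of the calling environment
$ sudo tee -a /etc/wgetrc <<'EOF'
http_proxy = http://proxy.example.com:8080/
https_proxy = http://proxy.example.com:8080/
use_proxy = on
EOF

# apt needs its own proxy setting for the package installs ceph-deploy runs
$ sudo tee /etc/apt/apt.conf.d/95proxy <<'EOF'
Acquire::http::Proxy "http://proxy.example.com:8080/";
Acquire::https::Proxy "http://proxy.example.com:8080/";
EOF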

Or, any other thoughts on how to debug this further?

Thanks!
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] radosgw bucket question

2013-09-16 Thread 鹏
 

hi
So what am I doing wrong? When I run the command:
radosgw-admin bucket list --uid=johdone   the return is []
and the command
radosgw-admin bucket list --uid=wkp4666   the return is []
and the command
radosgw-admin bucket list   the return is [mybucket1, mybucket2]
mybucket1's owner is johdone, and when I try to create mybucket1 for user
wkp4666 the result is 409.
both two users caps is

caps : [{type: usages, perm:*}
   {type:users, perm:*}
 ]
Does anyone face the same issue? How can I solve it?
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] radosgw bucket question

2013-09-16 Thread Yehuda Sadeh
On Mon, Sep 16, 2013 at 7:21 PM, 鹏 wkp4...@126.com wrote:


 hi
 So what am I doing wrong? when i run commond:
 radosgw-admin bucket list --uid=johdone   the return is []
 and the commond
 radosgw-admin bucket list --uid=wkp4666 the return is []
 and the commond
 radosgw-admin bucket listthe return  is [mybucket1, mybucket2]
 mybucket1's owner is johdone  , and when i try create a mybucket1 for user
 wkp4666 the result is 409,
 both two users caps is

 caps : [{type: usages, perm:*}
{type:users, perm:*}
  ]

 does anyone face the same question ? how can i save the question!!!




What version are you using? What do the following commands return:

$ radosgw-admin  bucket stats --bucket=mybucket1

$ radosgw-admin bucket stats --bucket=mybucket2

Yehuda
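
Also note that usages in that caps dump does not look like a cap type
radosgw knows about (the usual ones include users, buckets, metadata and
usage), and those caps only affect the admin REST API, not radosgw-admin
itself. A sketch of checking and correcting them:

$ radosgw-admin user info --uid=johdone        # shows the user's keys and caps
$ radosgw-admin caps rm  --uid=johdone --caps="usages=*"
$ radosgw-admin caps add --uid=johdone --caps="usage=*;users=*;buckets=*"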
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com