Re: [ceph-users] centos6.4 + libvirt + qemu + rbd/ceph

2013-12-06 Thread Wido den Hollander

On 12/05/2013 10:44 PM, Chris C wrote:

I've been working on getting this setup working.  I have virtual
machines working using rbd based images by editing the domain directly.

Is there any way to make the creation process better?  We are hoping to
be able to use a virsh pool using the rbd driver but it appears that
Redhat has not compiled libvirt with rbd support.

Thought?



Recompile libvirt? Since RedHat hasn't enabled the RBD support in 
libvirt that's your problem.


Might be that they'll do it in RHEL 7 where librbd is available natively?
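For reference, on a libvirt build that does have the RBD storage backend, a pool 
definition looks roughly like this (pool name, monitor host, cephx user and the 
secret UUID below are all placeholders, and a libvirt secret holding the cephx 
key is assumed to exist already):

cat > rbd-pool.xml <<'EOF'
<pool type='rbd'>
  <name>ceph-rbd</name>
  <source>
    <name>libvirt-pool</name>                  <!-- RADOS pool to expose -->
    <host name='mon1.example.com' port='6789'/>
    <auth username='libvirt' type='ceph'>
      <!-- UUID of the libvirt secret that stores the cephx key (placeholder) -->
      <secret uuid='00000000-0000-0000-0000-000000000000'/>
    </auth>
  </source>
</pool>
EOF
virsh pool-define rbd-pool.xml
virsh pool-start ceph-rbd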


Thanks,
/Chris C


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] REST API issue for getting bucket policy

2013-12-06 Thread Gao, Wei M
Hi all,

 

I am working on the ceph radosgw (v0.72.1) and when I call the REST API to
read the bucket policy, I got an internal server error (request URL is:
/admin/bucket?policy&format=json&bucket=test).

However, when I call this:
/admin/bucket?policy&format=json&bucket=test&object=obj, I got the policy of
the object returned. Besides, I do have the right permission (buckets=*). 

Any idea? Thanks!

 

Wei



smime.p7s
Description: S/MIME cryptographic signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] centos6.4 + libvirt + qemu + rbd/ceph

2013-12-06 Thread Dan van der Ster
See thread a couple days ago [ceph-users] qemu-kvm packages for centos

On Thu, Dec 5, 2013 at 10:44 PM, Chris C mazzy...@gmail.com wrote:
 I've been working on getting this setup working.  I have virtual machines
 working using rbd based images by editing the domain directly.

 Is there any way to make the creation process better?  We are hoping to be
 able to use a virsh pool using the rbd driver but it appears that Redhat has
 not compiled libvirt with rbd support.

 Thought?

 Thanks,
 /Chris C

 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Journal, SSD and OS

2013-12-06 Thread Sebastien Han
Arf forgot to mention that I’ll do a software mdadm RAID 1 with both sda1 and 
sdb1 and put the OS on this.
The rest (sda2 and sdb2) will go for the journals.
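(For reference, that mirror is a plain mdadm one, something along these lines; device 
names as above, filesystem choice illustrative:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext4 /dev/md0    # OS root lives on the mirror; sda2/sdb2 are left raw for the journals
)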

@James: I think that Gandalf’s main idea was to save some costs/space on the 
servers, so having dedicated disks is not an option. (That’s what I understand 
from your comment “have the OS somewhere else”, but I could be wrong.)

 
Sébastien Han 
Cloud Engineer 

Always give 100%. Unless you're giving blood.” 

Phone: +33 (0)1 49 70 99 72 
Mail: sebastien@enovance.com 
Address : 10, rue de la Victoire - 75009 Paris 
Web : www.enovance.com - Twitter : @enovance 

On 05 Dec 2013, at 16:02, James Pearce ja...@peacon.co.uk wrote:

 Another option is to run journals on individually presented SSDs, in a 5:1 
 ratio (spinning-disk:ssd) and have the OS somewhere else.  Then the failure 
 domain is smaller.
 
 Ideally implement some way to monitor SSD write life SMART data - at least it 
 gives a guide as to device condition compared to its rated life.  That can be 
 done with smartmontools, but it would be nice to have it on the InkTank 
 dashboard for example.
 
 
 On 2013-12-05 14:26, Sebastien Han wrote:
 Hi guys,
 
 I won’t do a RAID 1 with SSDs since they both write the same data.
 Thus, they are more likely to “almost” die at the same time.
 
 What I will try to do instead is to use both disks in JBOD mode or
 (degraded RAID 0).
 Then I will create a tiny root partition for the OS.
 
 Then I’ll still have something like /dev/sda2 and /dev/sdb2 and then
 I can take advantage of the 2 disks independently.
 The good thing with that is that you can balance your journals across both 
 SSDs.
 From a performance perspective this is really good.
 The bad thing as always is that if you lose an SSD you lose all the
 journals attached to it.
 
 Cheers.
 
 
 Sébastien Han
 Cloud Engineer
 
 Always give 100%. Unless you're giving blood.”
 
 Phone: +33 (0)1 49 70 99 72
 Mail: sebastien@enovance.com
 Address : 10, rue de la Victoire - 75009 Paris
 Web : www.enovance.com - Twitter : @enovance
 
 On 05 Dec 2013, at 10:53, Gandalf Corvotempesta
 gandalf.corvotempe...@gmail.com wrote:
 
 2013/12/4 Simon Leinen simon.lei...@switch.ch:
 I think this is a fine configuration - you won't be writing to the root
 partition too much, outside journals.  We also put journals on the same
 SSDs as root partitions (not that we're very ambitious about
 performance...).
 
 Do you suggest a RAID1 for the OS partitions on SSDs ? Is this safe or
 a RAID1 will decrease SSD life?
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 
 
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] how to set up disks in the same host

2013-12-06 Thread Cristian Falcas
Hi all,

What will be the fastest disks setup between those 2:
- 1 OSD build from 6 disks in raid 10 and one ssd for journal
- 3 OSDs, each with 2 disks in raid 1 and a common ssd for all
journals (or more ssds if ssd performance will be an issue)

Mainly, will 1 OSD raid 10 be faster or slower then independent OSDs?

Best regards,
Cristian Falcas
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Basic cephx configuration

2013-12-06 Thread nicolasc

Hi every one,

I did not get any answer to my basic cephx question last week, so let me 
ask it one more time here, before I completely give up on Ceph and move on.


So, my issue is:

When all authentication settings are none:
* The cluster works fine
* The file /etc/ceph/ceph.client.admin.keyring  exists

Then I set auth_cluster_required to cephx. When I try to connect to 
the cluster, it detects client.admin and denies access with operation 
not supported, even for commands like ceph health.


Finally, after I explicitly set the keyring parameter in the config 
(to the default value, because the keyring file was already in the 
default location), the cluster works fine again. So the behavior changes 
when I add those 2 default lines to the config:

[client.admin]
keyring = /etc/ceph/ceph.client.admin.keyring

From the ceph.com documentation [1], about this keyring parameter:
Description:The path to the keyring file.
Type:   String
Required:   No
Default:/etc/ceph/$cluster.$name.keyring


... so, I need help:
* maybe this is a real bug? (was it already reported ?)
* maybe I am deeply stupid, and I don't understand what required and 
default means? (can anyone send me a good dictionary ?)

* maybe obi-wan kenobi?

Thanks to anyone who will respond anything (at that point, even a 
three-letter e-mail reading ACK would make me feel better). Best 
wishes for the future of Ceph, and best regards.


Nicolas Canceill
Scalable Storage Systems
SURFsara (Amsterdam, NL)


[1] http://ceph.com/docs/master/rados/configuration/auth-config-ref/#keys



On 11/29/2013 03:09 PM, nicolasc wrote:

An update on this issue:

Explicitly setting the keyring parameter to its default value, in 
the client section, like this:


[client.admin]
keyring = /etc/ceph/ceph.client.admin.keyring

solves the problem in the particular case when ONLY 
auth_cluster_required is set to cephx, and the two remaining auth 
parameters are set to none.


The documentation clearly states that 
/etc/ceph/ceph.client.admin.keyring is the default value of the 
keyring setting [1], so this looks like a bug. Should I report it on 
the tracker? (BTW, all of this is on v0.72.1.)


Also, does anyone have any idea about why this is not enough to enable 
the auth_service_required setting? That one still gives me the error:


client.admin authentication error (95) Operation not supported

Best regards,

Nicolas Canceill
Scalable Storage Systems
SURFsara (Amsterdam, NL)

[1] http://ceph.com/docs/master/rados/configuration/auth-config-ref/#keys



On 11/29/2013 10:22 AM, nicolasc wrote:

Hello every one,

Just ran a fresh install of version Emperor on an empty cluster, and 
I am left clueless, trying to troubleshoot cephx. After ceph-deploy 
created the keys, I used ceph-authtool to generate the client.admin 
keyring and the monitor keyring, as indicated in the doc. The 
configuration is really out-of-the-box: 3 monitors, each with the 
keyring in /var/lib/ceph/mon/ceph-???/keyring, all keyrings have 
umask 644 and are owned by ceph.


However, no matter which combination of auth_cluster_, 
auth_service_, or auth_client_required is set to cephx, and no matter 
what keyring options like -k and --id I pass on the command 
line, authentication fails every time with:


client.admin authentication error (95) Operation not supported
Error connecting to cluster: Error

A big thanks to any one who gives me a hint about what it means. 
(This message carries so little information, I feel it could be 
simply replaced by the ! character.) I have looked in every ceph 
and system log file, nothing more.


Best regards,

Nicolas Canceill
Scalable Storage Systems
SURFsara (Amsterdam, NL)

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] how to set up disks in the same host

2013-12-06 Thread Wido den Hollander

On 12/06/2013 11:00 AM, Cristian Falcas wrote:

Hi all,

What will be the fastest disks setup between those 2:
- 1 OSD build from 6 disks in raid 10 and one ssd for journal
- 3 OSDs, each with 2 disks in raid 1 and a common ssd for all
journals (or more ssds if ssd performance will be an issue)

Mainly, will 1 OSD raid 10 be faster or slower then independent OSDs?



Simply run 6 OSDs without any RAID and one SSD for the journaling.

The danger though is that if you lose the journal, you lose all the OSDs 
behind it. So better to place two SSDs, put 3 OSD journals on each, and make 
sure via CRUSH that replicas go to OSDs on different journal SSDs.
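One way to express that in CRUSH, on a single box, is to group the OSDs per 
journal SSD into their own buckets; a rough sketch of a decompiled map 
(ceph osd getcrushmap -o map; crushtool -d map -o map.txt), with made-up 
bucket names, ids and weights, abusing the host bucket type since everything 
lives in one host:

host node1-ssd-a {
        id -2
        alg straw
        hash 0
        item osd.0 weight 1.000
        item osd.1 weight 1.000
        item osd.2 weight 1.000
}
host node1-ssd-b {
        id -3
        alg straw
        hash 0
        item osd.3 weight 1.000
        item osd.4 weight 1.000
        item osd.5 weight 1.000
}

With the stock replicated rule ("step chooseleaf firstn 0 type host") the 
replicas then always land on OSDs journaling to different SSDs. Recompile with 
crushtool -c map.txt -o map.new and load it with ceph osd setcrushmap -i map.new.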


You shouldn't use RAID underneath an OSD; let the replication handle all 
that.



Best regards,
Cristian Falcas
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Journal, SSD and OS

2013-12-06 Thread Gandalf Corvotempesta
2013/12/6 Sebastien Han sebastien@enovance.com:
 @James: I think that Gandalf’s main idea was to save some costs/space on the 
 servers so having dedicated disks is not an option. (that what I understand 
 from your comment “have the OS somewhere else” but I could be wrong)

You are right. I don't have space for one or two disks to be used as OS.
I was talking about RAID1 just for OS partition (stored on SSD) and
not for the journal.

Actually I'm testing OS stored on a USB pen drive with /tmp /var/run
/var/lock placed on a tmpfs (ram) and log stored externally via
rsyslog. This should avoid USB read/writes (that are slow) as much as
possible.
It's also very easy to restore to a new pen drive from a failure:

dd if=/backup.img of=/dev/sdX bs=4M
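For reference, the tmpfs part of such a setup is just a few fstab lines, roughly 
(mount options and sizes are illustrative):

tmpfs   /tmp        tmpfs   defaults,noatime   0 0
tmpfs   /var/run    tmpfs   defaults,noatime   0 0
tmpfs   /var/lock   tmpfs   defaults,noatime   0 0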
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Journal, SSD and OS

2013-12-06 Thread James Pearce
Most servers also have internal SD card slots.  There are SD cards 
advertising 90MB/s, though I haven't tried them as OS boot personally.


On 2013-12-06 11:14, Gandalf Corvotempesta wrote:

2013/12/6 Sebastien Han sebastien@enovance.com:
@James: I think that Gandalf’s main idea was to save some 
costs/space on the servers so having dedicated disks is not an option. 
(that what I understand from your comment “have the OS somewhere else” 
but I could be wrong)


You are right. I don't have space for one or two disks to be used as 
OS.

I was talking about RAID1 just for OS partition (stored on SSD) and
not for the journal.

Actually I'm testing OS stored on a USB pen drive with /tmp /var/run
/var/lock placed on a tmpfs (ram) and log stored externally via
rsyslog. This should avoid USB read/writes (that are slow) as much as
possible.
It's also very easy to restore to a new pen drive from a failure:

dd if=/backup.img of=/dev/sdX bs=4M


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Openstack--instance-boot-from-ceph-volume:: error could not open disk image rbd

2013-12-06 Thread Gilles Mocellin

Le 05/12/2013 14:01, Karan Singh a écrit :

Hello Everyone

Trying to boot from a ceph volume following the blog post 
http://www.sebastien-han.fr/blog/2012/06/10/introducing-ceph-to-openstack/ 
and http://docs.openstack.org/user-guide/content/boot_from_volume.html


Need help for this error.


=


Logs from /var/log/libvirt/qemu ::

=

qemu-kvm: -drive 
file=rbd:ceph-volumes/volume-dd315dda-b22a-4cf8-8b77-7c2b2f163155:id=volumes:key=AQC804xS8HzFJxAAD/zzQ8LMzq9wDLq/5a472g==:auth_supported=cephx\;none:mon_host=192.168.1.31\:6789\;192.168.1.33\:6789\;192.168.1.38\:6789,if=none,id=drive-virtio-disk0,format=raw,serial=dd315dda-b22a-4cf8-8b77-7c2b2f163155,cache=none: 
could not open disk 
image rbd:ceph-volumes/volume-dd315dda-b22a-4cf8-8b77-7c2b2f163155:id=volumes:key=AQC804xS8HzFJxAAD/zzQ8LMzq9wDLq/5a472g==:auth_supported=cephx\;none:mon_host=192.168.1.31\:6789\;192.168.1.33\:6789\;192.168.1.38\:6789: 
No such file or directory

2013-12-05 12:42:29.544+: shutting down



Hello,

Does your qemu supports RBD images ?

See the supported formats in the last line of:
qemu-img -h

I had the same thing recently, using Debian wheezy, where qemu does not 
support RBD.
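A quick check that should work on any build:

qemu-img -h | grep -i 'supported formats'
# "rbd" has to appear in that list; if it does not, the qemu binary was built without librbd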



[...]

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Journal, SSD and OS

2013-12-06 Thread Robert van Leeuwen
 Most servers also have internal SD card slots.  There are SD cards
 advertising 90MB/s, though I haven't tried them as OS boot personally.

We did this with some servers 2 1/2 years ago on some blade hardware: it did not 
work out so well. 
High level of failures on the SD cards, even with all stuff properly on tmpfs. 

For Ceph we have the OS on one of the two SSDs where the journals live.
We are graphing and alerting on the SSD wear-level SMART info.
Up to now it has worked out quite well. 
(We have a very similar setup for Openstack Swift for a few years now where we 
use the SSD for flashcache instead of journals)
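For anyone wanting to do the same check by hand, a minimal look with smartmontools 
is roughly (the attribute name/ID is vendor specific, e.g. 233 Media_Wearout_Indicator 
on Intel SSDs):

smartctl -A /dev/sda | egrep -i 'wear|media_wearout'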

Cheers,
Robert van Leeuwen
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Mounting Ceph on Linux/Windows

2013-12-06 Thread James Harper
Out of curiosity I tried the 'ceph' command from windows too. I had to rename 
librados.dll to librados.so.2, install a readline replacement 
(https://pypi.python.org/pypi/pyreadline/2.0), and even then it completely 
ignored anything I put on the command line, but from the ceph shell I could do 
thinks like 'health', 'status', 'osd tree', 'osd lspools', 'auth list', etc. I 
thought that was pretty neat.

James
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] recreate bucket error

2013-12-06 Thread Dominik Mostowiec
Hi,
On a dumpling cluster upgraded from bobtail, creating the same bucket twice works:

root@vm-1:/etc/apache2/sites-enabled# s3 -u create testcreate
Bucket successfully created.
root@vm-1:/etc/apache2/sites-enabled# s3 -u create testcreate
Bucket successfully created.

I installed new dumpling cluster and:
root@s1:/var/log/radosgw# s3 -u create test1
Bucket successfully created.
root@s1:/var/log/radosgw# s3 -u create test1

ERROR: ErrorUnknown

In radosgw logs:

2013-12-06 13:59:56.083109 7f162d7c2700  1 == starting new request
req=0xb7d480 =
2013-12-06 13:59:56.083227 7f162d7c2700  2 req 5:0.000119::PUT
/test1/::initializing
2013-12-06 13:59:56.083261 7f162d7c2700 10 meta HTTP_X_AMZ_DATE
2013-12-06 13:59:56.083274 7f162d7c2700 10 x x-amz-date:Fri, 06 Dec
2013 12:59:56 GMT
2013-12-06 13:59:56.083298 7f162d7c2700 10 s->object=NULL s->bucket=test1
2013-12-06 13:59:56.083307 7f162d7c2700  2 req 5:0.000199:s3:PUT
/test1/::getting op
2013-12-06 13:59:56.083315 7f162d7c2700  2 req 5:0.000207:s3:PUT
/test1/:create_bucket:authorizing
2013-12-06 13:59:56.091724 7f162d7c2700 10 get_canon_resource(): dest=
2013-12-06 13:59:56.091742 7f162d7c2700 10 auth_hdr:
PUT



x-amz-date:Fri, 06 Dec 2013 12:59:56 GMT
/test1/
2013-12-06 13:59:56.091836 7f162d7c2700  2 req 5:0.008728:s3:PUT
/test1/:create_bucket:reading permissions
2013-12-06 13:59:56.091848 7f162d7c2700  2 req 5:0.008740:s3:PUT
/test1/:create_bucket:verifying op mask
2013-12-06 13:59:56.091852 7f162d7c2700  2 req 5:0.008744:s3:PUT
/test1/:create_bucket:verifying op permissions
2013-12-06 13:59:56.093858 7f162d7c2700  2 req 5:0.010750:s3:PUT
/test1/:create_bucket:verifying op params
2013-12-06 13:59:56.093882 7f162d7c2700  2 req 5:0.010773:s3:PUT
/test1/:create_bucket:executing
2013-12-06 13:59:56.104819 7f162d7c2700  0 WARNING: couldn't find acl
header for object, generating default
2013-12-06 13:59:56.132625 7f162d7c2700  0 get_bucket_info returned -125
2013-12-06 13:59:56.132656 7f162d7c2700  0 WARNING: set_req_state_err
err_no=125 resorting to 500
2013-12-06 13:59:56.132693 7f162d7c2700  2 req 5:0.049584:s3:PUT
/test1/:create_bucket:http status=500
2013-12-06 13:59:56.132890 7f162d7c2700  1 == req done
req=0xb7d480 http_status=500 ==

-- 
Regards
Dominik
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] centos6.4 + libvirt + qemu + rbd/ceph

2013-12-06 Thread Chris C
Wido, we were thinking along that line as well.  We're trying to figure out
which path will cause the least amount of pain ;)

/C


On Fri, Dec 6, 2013 at 3:27 AM, Wido den Hollander w...@42on.com wrote:

 On 12/05/2013 10:44 PM, Chris C wrote:

 I've been working on getting this setup working.  I have virtual
 machines working using rbd based images by editing the domain directly.

 Is there any way to make the creation process better?  We are hoping to
 be able to use a virsh pool using the rbd driver but it appears that
 Redhat has not compiled libvirt with rbd support.

 Thought?


 Recompile libvirt? Since RedHat hasn't enabled the RBD support in libvirt
 that's your problem.

 Might be that they'll do it in RHEL 7 where librbd is available natively?

  Thanks,
 /Chris C


 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



 --
 Wido den Hollander
 42on B.V.

 Phone: +31 (0)20 700 9902
 Skype: contact42on
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] installation help

2013-12-06 Thread Alfredo Deza
On Fri, Dec 6, 2013 at 5:13 AM, Wojciech Giel
wojciech.g...@cimr.cam.ac.uk wrote:
 Hello,
 I'm trying to install ceph but can't get it working; the documentation is not clear
 and is confusing.
 I have cloned 3 machines with ubuntu 12.04 minimal system. I'm trying to
 follow docs

 http://ceph.com/docs/master/start/quick-start-preflight/

 but got some questions:

 step 4. Configure your ceph-deploy admin node with password-less SSH access
 This should be done on the ceph account, isn't it?

If you are following the quickstart to the letter, yes. If you are
using the latest version of ceph-deploy (1.3.3)
you will benefit from automatically getting these set up for you when
you do `ceph-deploy new {hosts}`

It will make sure it can SSH without a password prompt to the hosts
you are setting up.


 next:
 http://ceph.com/docs/master/start/quick-ceph-deploy/

 creating directories  for maintaining the configuration for ceph-deploy is
 on ceph account or root?

ceph-deploy will create the ceph.conf and other files in whatever directory it
runs from, owned by the user executing
ceph-deploy.

So it depends here on what user you are calling ceph-deploy.

 do all following step in docs are on ceph account or root?

All steps in the quickstart assume that you have created a ceph user
and you are connecting to remote hosts
with a ceph user.

In the end it doesn't matter. But if you want to get things right from
the get-go I would try and match what the
quickstart uses so you can troubleshoot easier.


 if on ceph account step 3. creating mon on 3 machines gives on two remote
 machines these errors:


 [ceph1][DEBUG ] locating the `service` executable...
 [ceph1][INFO  ] Running command: sudo initctl emit ceph-mon cluster=ceph
 id=ceph1
 [ceph1][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon
 /var/run/ceph/ceph-mon.ceph1.asok mon_status
 [ceph1][ERROR ] admin_socket: exception getting command descriptions: [Errno
 2] No such file or directory
 [ceph1][WARNIN] monitor: mon.ceph1, might not be running yet
 [ceph1][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon
 /var/run/ceph/ceph-mon.ceph1.asok mon_status
 [ceph1][ERROR ] admin_socket: exception getting command descriptions: [Errno
 2] No such file or directory
 [ceph1][WARNIN] ceph1 is not defined in `mon initial members`
 [ceph1][WARNIN] monitor ceph1 does not exist in monmap
 [ceph1][WARNIN] neither `public_addr` nor `public_network` keys are defined
 for monitors
 [ceph1][WARNIN] monitors may not be able to form quorum


It looks like you've tried a few things in that server and you've
ended in a broken state. If you
are deploying the `ceph1` mon, that should've been defined in your
ceph.conf and it should've been
done automatically for you when you called `ceph-deploy new ceph1`.

This is an example of creating a new config file for a server I have
called `node1`:


$ ceph-deploy new node1
[ceph_deploy.cli][INFO  ] Invoked (1.3.3):
/Users/alfredo/.virtualenvs/ceph-deploy/bin/ceph-deploy new node1
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][DEBUG ] Resolving host node1
[ceph_deploy.new][DEBUG ] Monitor node1 at 192.168.111.100
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[node1][DEBUG ] connected to host: papaya.local
[node1][INFO  ] Running command: ssh -CT -o BatchMode=yes node1
[ceph_deploy.new][DEBUG ] Monitor initial members are ['node1']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.111.100']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...

$ cat ceph.conf
[global]
fsid = 4e04aeaf-7025-4d33-bbcb-b27e75749b97
mon_initial_members = node1
mon_host = 192.168.111.100
auth_supported = cephx
osd_journal_size = 1024
filestore_xattr_use_omap = true

See how `mon_initial_members` has `node1` in it?


 step 4. gathering keys should it be from all mon servers?

 checking status on ceph account gives:
 $ ceph health
 2013-12-06 09:48:41.550270 7f16b6eea700 -1 monclient(hunting): ERROR:
 missing keyring, cannot use cephx for authentication
 2013-12-06 09:48:41.550278 7f16b6eea700  0 librados: client.admin
 initialization error (2) No such file or directory
 Error connecting to cluster: ObjectNotFound

That happens because you need to call `ceph` with sudo  (something
that ceph-deploy takes care for you)

 on root account:
 # ceph status
 cluster 5ee9b196-ef36-46dd-870e-6ef1824b1cd0
  health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no
 osds
  monmap e1: 1 mons at {ceph0=192.168.45.222:6789/0}, election epoch 2,
 quorum 0 ceph0
  osdmap e1: 0 osds: 0 up, 0 in
   pgmap v2: 192 pgs, 3 pools, 0 bytes data, 0 objects
 0 kB used, 0 kB / 0 kB avail
  192 creating

 Ceph management after installation should it be done on root or ceph
 account?


It doesn't matter, just as long as you have 

Re: [ceph-users] installation help

2013-12-06 Thread Alfredo Deza
On Fri, Dec 6, 2013 at 5:13 AM, Wojciech Giel
wojciech.g...@cimr.cam.ac.uk wrote:
 Hello,
 I'm trying to install ceph but can't get it working; the documentation is not clear
 and is confusing.
 I have cloned 3 machines with ubuntu 12.04 minimal system. I'm trying to
 follow docs

 http://ceph.com/docs/master/start/quick-start-preflight/

 but got some questions:

 step 4. Configure your ceph-deploy admin node with password-less SSH access
 This should be done on the ceph account, isn't it?

 next:
 http://ceph.com/docs/master/start/quick-ceph-deploy/

 creating directories  for maintaining the configuration for ceph-deploy is
 on ceph account or root?
 do all following step in docs are on ceph account or root?
 if on ceph account step 3. creating mon on 3 machines gives on two remote
 machines these errors:


 [ceph1][DEBUG ] locating the `service` executable...
 [ceph1][INFO  ] Running command: sudo initctl emit ceph-mon cluster=ceph
 id=ceph1
 [ceph1][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon
 /var/run/ceph/ceph-mon.ceph1.asok mon_status
 [ceph1][ERROR ] admin_socket: exception getting command descriptions: [Errno
 2] No such file or directory
 [ceph1][WARNIN] monitor: mon.ceph1, might not be running yet
 [ceph1][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon
 /var/run/ceph/ceph-mon.ceph1.asok mon_status
 [ceph1][ERROR ] admin_socket: exception getting command descriptions: [Errno
 2] No such file or directory
 [ceph1][WARNIN] ceph1 is not defined in `mon initial members`
 [ceph1][WARNIN] monitor ceph1 does not exist in monmap
 [ceph1][WARNIN] neither `public_addr` nor `public_network` keys are defined
 for monitors
 [ceph1][WARNIN] monitors may not be able to form quorum

 step 4. gathering keys should it be from all mon servers?

 checking status on ceph account gives:
 $ ceph health
 2013-12-06 09:48:41.550270 7f16b6eea700 -1 monclient(hunting): ERROR:
 missing keyring, cannot use cephx for authentication
 2013-12-06 09:48:41.550278 7f16b6eea700  0 librados: client.admin
 initialization error (2) No such file or directory
 Error connecting to cluster: ObjectNotFound

 on root account:
 # ceph status
 cluster 5ee9b196-ef36-46dd-870e-6ef1824b1cd0
  health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no
 osds
  monmap e1: 1 mons at {ceph0=192.168.45.222:6789/0}, election epoch 2,
 quorum 0 ceph0
  osdmap e1: 0 osds: 0 up, 0 in
   pgmap v2: 192 pgs, 3 pools, 0 bytes data, 0 objects
 0 kB used, 0 kB / 0 kB avail
  192 creating

 Ceph management after installation should it be done on root or ceph
 account?

 I've attached typescrpi from what I have done.

I just went through your typescript and you did call `new` but not for
all the hosts you are using:

ceph-deploy new ceph0

And then you did:

ceph-deploy install ceph0 ceph1 ceph2

And finally (reason why you have issues) you tried to deploy a mon to
those hosts that were not
defined in your ceph.conf:

ceph-deploy mon create ceph0 ceph1 ceph2

You see, you need to define those as well when calling `new` like:


ceph-deploy new ceph0 ceph1 ceph2


 thanks
 Wojciech



 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] optimal setup with 4 x ethernet ports

2013-12-06 Thread Kyle Bader
 looking at tcpdump all the traffic is going exactly where it is supposed to 
 go, in particular an osd on the 192.168.228.x network appears to talk to an 
 osd on the 192.168.229.x network without anything strange happening. I was 
 just wondering if there was anything about ceph that could make this 
 non-optimal, assuming traffic was reasonably balanced between all the osd's 
 (eg all the same weights). I think the only time it would suffer is if writes 
 to other osds result in a replica write to a single osd, and even then a 
 single OSD is still limited to 7200RPM disk speed anyway so the loss isn't 
 going to be that great.

Should be fine given you only have a 1:1 ratio of link to disk.

 I think I'll be moving over to bonded setup anyway, although I'm not sure if 
 rr or lacp is best... rr will give the best potential throughput, but lacp 
 should give similar aggregate throughput if there are plenty of connections 
 going on, and less cpu load as no need to reassemble fragments.

One of the DreamHost clusters is using a pair of bonded 1GbE links on
the public network and another pair for the cluster network, we
configured each to use mode 802.3ad.
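For reference, an 802.3ad bond on Debian/Ubuntu looks roughly like this (interface 
names and addresses are placeholders, and the switch ports have to be configured 
for LACP as well):

auto bond0
iface bond0 inet static
    address 192.168.228.10
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer3+4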

-- 

Kyle
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] centos6.4 + libvirt + qemu + rbd/ceph

2013-12-06 Thread Chris C
Dan,
I found the thread but it looks like another dead end :(

/Chris C


On Fri, Dec 6, 2013 at 4:46 AM, Dan van der Ster d...@vanderster.com wrote:

 See thread a couple days ago [ceph-users] qemu-kvm packages for centos

 On Thu, Dec 5, 2013 at 10:44 PM, Chris C mazzy...@gmail.com wrote:
  I've been working on getting this setup working.  I have virtual machines
  working using rbd based images by editing the domain directly.
 
  Is there any way to make the creation process better?  We are hoping to
 be
  able to use a virsh pool using the rbd driver but it appears that Redhat
 has
  not compiled libvirt with rbd support.
 
  Thought?
 
  Thanks,
  /Chris C
 
  ___
  ceph-users mailing list
  ceph-users@lists.ceph.com
  http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] centos6.4 + libvirt + qemu + rbd/ceph

2013-12-06 Thread Campbell, Bill
I think the version of libvirt included with RHEL/CentOS supports RBD storage 
(but not pools), so outside of compiling a newer version I'm not sure there is 
anything else to be done aside from waiting for repo additions or newer versions 
of the distro. 

Not sure what your scenario is, but this is the exact reason we switched our 
underlying virtualization infrastructure to Ubuntu. Their cloud archive PPA has 
updated packages for QEMU/KVM, Libvirt, Open vSwitch, etc. that are backported 
for LTS releases, and is something I personally think RHEL is WAY behind the 
curve on (getting better with their RDO initiative though). We didn't want to keep 
spending resources validating that updated builds of QEMU/libvirt weren't going to 
cause problems, and allocated those resources to learning the Ubuntu 
environment instead. 
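For reference, enabling the Havana cloud archive on 12.04 is roughly (the 
release/pocket names depend on which OpenStack release you want to track):

sudo apt-get install ubuntu-cloud-keyring
echo 'deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/havana main' | \
    sudo tee /etc/apt/sources.list.d/cloud-archive.list
sudo apt-get update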

As far as streamlining management on top of that, you have some options 
(outside of virt-manager, which has no native support for RBD IIRC) like 
Proxmox (which is an entire solution like ESXi/Hyper-V using KVM) or something 
like OpenStack or OpenNebula (we use OpenNebula). Beats having to edit domains 
by hand. ;-) 

- Original Message -

From: Chris C mazzy...@gmail.com 
To: Dan van der Ster d...@vanderster.com 
Cc: ceph-users@lists.ceph.com 
Sent: Friday, December 6, 2013 10:37:03 AM 
Subject: Re: [ceph-users] centos6.4 + libvirt + qemu + rbd/ceph 

Dan, 
I found the thread but it looks like another dead end :( 

/Chris C 


On Fri, Dec 6, 2013 at 4:46 AM, Dan van der Ster  d...@vanderster.com  wrote: 


See thread a couple days ago [ceph-users] qemu-kvm packages for centos 

On Thu, Dec 5, 2013 at 10:44 PM, Chris C  mazzy...@gmail.com  wrote: 
 I've been working on getting this setup working. I have virtual machines 
 working using rbd based images by editing the domain directly. 
 
 Is there any way to make the creation process better? We are hoping to be 
 able to use a virsh pool using the rbd driver but it appears that Redhat has 
 not compiled libvirt with rbd support. 
 
 Thought? 
 
 Thanks, 
 /Chris C 
 
 ___ 
 ceph-users mailing list 
 ceph-users@lists.ceph.com 
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 
 





___ 
ceph-users mailing list 
ceph-users@lists.ceph.com 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 


NOTICE: Protect the information in this message in accordance with the 
company's security policies. If you received this message in error, immediately 
notify the sender and destroy all copies.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Mounting Ceph on Linux/Windows

2013-12-06 Thread Sage Weil
[Moving this thread to ceph-devel]
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Impact of fancy striping

2013-12-06 Thread nicolasc

Hi James,

Thank you for this clarification. I am quite aware of that, which is why 
the journals are on SAS disks in RAID0 (SSDs out of scope).


I still have trouble believing that fast-but-not-super-fast journals are 
the main reason for the poor performance observed. Maybe I am mistaken?


Best regards,

Nicolas Canceill
Scalable Storage Systems
SURFsara (Amsterdam, NL)



On 12/03/2013 03:01 PM, James Pearce wrote:

I would really appreciate it if someone could:
 - explain why the journal setup is way more important than striping 
settings;


I'm not sure if it's what you're asking, but any write must be 
physically written to the journal before the operation is 
acknowledged.  So the overall cluster performance (or rather write 
latency) is always governed by the speed of those journals.  Data is 
then gathered up into (hopefully) larger blocks and committed to OSDs 
later.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Basic cephx configuration

2013-12-06 Thread nicolasc

Hi Dan,

Thank you for the advice and indications. We have the exact same 
configuration, except I am only enabling auth cluster, and I am using 
ceph.client.admin.keyring instead of simply keyring.


Both locations /etc/ceph/ceph.client.admin.keyring and 
/etc/ceph/keyring are presented as default values for the keyring 
configuration setting. I will try /etc/ceph/keyring, but I doubt this 
changes much.


I am curious of whether your setup still works if you remove the 
keyring = /etc/ceph/keyring setting (you are also using a default 
location, so you could remove that line safely, right?).


Thank you very much for answering. Best regards,

Nicolas Canceill
Scalable Storage Systems
SURFsara (Amsterdam, NL)



On 12/06/2013 11:42 AM, Dan Van Der Ster wrote:

Hi,
All of our clusters have this in ceph.conf:

[global]
   auth cluster required = cephx
   auth service required = cephx
   auth client required = cephx
   keyring = /etc/ceph/keyring

and the client.admin secret in /etc/ceph/keyring:

# cat /etc/ceph/keyring
[client.admin]
 key = ...

With that you should be able to do ceph health without passing --id or 
--keyring args. (this is with dumpling, not emperor, but I guess it didn’t change.)

If it still doesn’t work, check the capabilities that client.admin has (with 
ceph auth list). Should be

 caps: [mds] allow
 caps: [mon] allow *
 caps: [osd] allow *

Cheers, Dan


On 06 Dec 2013, at 11:06, nicolasc nicolas.cance...@surfsara.nl wrote:


Hi every one,

I did not get any answer to my basic cephx question last week, so let me ask it 
one more time here, before I completely give up on Ceph and move on.

So, my issue is:

When all authentication settings are none:
* The cluster works fine
* The file /etc/ceph/ceph.client.admin.keyring  exists

Then I set auth_cluster_required to cephx. When I try to connect to the cluster, it detects 
client.admin and denies access with operation not supported, even for commands like ceph 
health.

Finally, after I explicitly set the keyring parameter in the config (to the 
default value, because the keyring file was already in the default location), the cluster 
works fine again. So the behavior changes when I add those 2 default lines to the config:
[client.admin]
keyring = /etc/ceph/ceph.client.admin.keyring

 From the ceph.com documentation [1], about this keyring parameter:
Description:The path to the keyring file.
Type:   String
Required:   No
Default:/etc/ceph/$cluster.$name.keyring
... so, I need help:
* maybe this is a real bug? (was it already reported ?)
* maybe I am deeply stupid, and I don't understand what required and 
default means? (can anyone send me a good dictionary ?)
* maybe obi-wan kenobi?

Thanks to anyone who will respond anything (at that point, even a three-letter e-mail 
reading ACK would make me feel better). Best wishes for the future of Ceph, 
and best regards.

Nicolas Canceill
Scalable Storage Systems
SURFsara (Amsterdam, NL)


[1] http://ceph.com/docs/master/rados/configuration/auth-config-ref/#keys



On 11/29/2013 03:09 PM, nicolasc wrote:

An update on this issue:

Explicitly setting the keyring parameter to its default value, in the client 
section, like this:

[client.admin]
keyring = /etc/ceph/ceph.client.admin.keyring

solves the problem in the particular case when ONLY auth_cluster_required is set to 
cephx, and the two remaining auth parameters are set to none.

The documentation clearly states that /etc/ceph/ceph.client.admin.keyring is the 
default value of the keyring setting [1], so this looks like a bug. Should I report it 
on the tracker? (BTW, all of this is on v0.72.1.)

Also, does anyone have any idea about why this is not enough to enable the 
auth_service_required setting? That one still gives me the error:

client.admin authentication error (95) Operation not supported

Best regards,

Nicolas Canceill
Scalable Storage Systems
SURFsara (Amsterdam, NL)

[1] http://ceph.com/docs/master/rados/configuration/auth-config-ref/#keys



On 11/29/2013 10:22 AM, nicolasc wrote:

Hello every one,

Just ran a fresh install of version Emperor on an empty cluster, and I am left 
clueless, trying to troubleshoot cephx. After ceph-deploy created the keys, I 
used ceph-authtool to generate the client.admin keyring and the monitor 
keyring, as indicated in the doc. The configuration is really out-of-the-box: 3 
monitors, each with the keyring in /var/lib/ceph/mon/ceph-???/keyring, all 
keyrings have umask 644 and are owned by ceph.

However, no matter which combination of auth_cluster_, auth_service_, or 
auth_client_required, is set to cephx; no matter either the keyring options like -k and --id 
on the command line. Authentication fails every time with:

client.admin authentication error (95) Operation not supported
Error connecting to cluster: Error

A big thanks to any one who gives me a hint about what it means. (This message carries so 
little information, I feel it could be 

Re: [ceph-users] Impact of fancy striping

2013-12-06 Thread James Pearce
Hopefully a Ceph developer will be able to clarify how small writes are 
journaled?


The write-through 'bug' seems to explain the small-block performance I've 
measured in various configurations (I see similar results to yours).  
I still haven't tested the patch cited, but it would be *very* 
interesting to know its impact on clusters running with spinning-disk 
journals in particular.


On 2013-12-06 16:05, nicolasc wrote:

Hi James,

Thank you for this clarification. I am quite aware of that, which is
why the journals are on SAS disks in RAID0 (SSDs out of scope).

I still have trouble believing that fast-but-not-super-fast journals
is the main reason for the poor performances observed. Maybe I am
mistaken?

Best regards,

Nicolas Canceill
Scalable Storage Systems
SURFsara (Amsterdam, NL)



On 12/03/2013 03:01 PM, James Pearce wrote:

I would really appreciate it if someone could:
 - explain why the journal setup is way more important than 
striping settings;


I'm not sure if it's what you're asking, but any write must be 
physically written to the journal before the operation is 
acknowledged.  So the overall cluster performance (or rather write 
latency) is always governed by the speed of those journals.  Data is 
then gathered up into (hopefully) larger blocks and committed to OSDs 
later.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] centos6.4 + libvirt + qemu + rbd/ceph

2013-12-06 Thread John Kinsella
Will throw my US $0.02 in here...

We’re running CentOS 6.4[1] + modern Ceph, VMs are managed by CloudStack. We 
use distro packages whenever possible - most times folks suggest building 
something from source, I have to be dragged kicking and screaming to agreement.

Ceph support is one of the very few exceptions. We build/package/sign modern 
kernels (3.6.11), modern libvirt (0.9.13), modern qemu (1.2.0) and probably a 
few other bits. The improved functionality is easily worth the effort.

John
1: Due to RH Cloud agreements we’ll have to run RHEL hypervisors soon; I'll be 
happy to share thoughts on getting Ceph working with that once we do it

On Dec 6, 2013, at 5:25 AM, Chris C 
mazzy...@gmail.com wrote:

Wido, we were thinking along that line as well.  We're trying to figure out 
which path will cause the least amount of pain ;)

/C


On Fri, Dec 6, 2013 at 3:27 AM, Wido den Hollander 
w...@42on.com wrote:
On 12/05/2013 10:44 PM, Chris C wrote:
I've been working on getting this setup working.  I have virtual
machines working using rbd based images by editing the domain directly.

Is there any way to make the creation process better?  We are hoping to
be able to use a virsh pool using the rbd driver but it appears that
Redhat has not compiled libvirt with rbd support.

Thought?


Recompile libvirt? Since RedHat hasn't enabled the RBD support in libvirt 
that's your problem.

Might be that they'll do it in RHEL 7 where librbd is available natively?

Thanks,
/Chris C


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Stratosec (http://stratosec.co/) - Compliance as a Service
o: 415.315.9385
@johnlkinsella (http://twitter.com/johnlkinsella)

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] recreate bucket error

2013-12-06 Thread Yehuda Sadeh
I'm having trouble reproducing this one. Are you running on latest
dumpling? Does it happen with any newly created bucket, or just with
buckets that existed before?

Yehuda

On Fri, Dec 6, 2013 at 5:07 AM, Dominik Mostowiec
dominikmostow...@gmail.com wrote:
 Hi,
 On a dumpling cluster upgraded from bobtail, creating the same bucket twice works:

 root@vm-1:/etc/apache2/sites-enabled# s3 -u create testcreate
 Bucket successfully created.
 root@vm-1:/etc/apache2/sites-enabled# s3 -u create testcreate
 Bucket successfully created.

 I installed new dumpling cluster and:
 root@s1:/var/log/radosgw# s3 -u create test1
 Bucket successfully created.
 root@s1:/var/log/radosgw# s3 -u create test1

 ERROR: ErrorUnknown

 In radosgw logs:

 2013-12-06 13:59:56.083109 7f162d7c2700  1 == starting new request
 req=0xb7d480 =
 2013-12-06 13:59:56.083227 7f162d7c2700  2 req 5:0.000119::PUT
 /test1/::initializing
 2013-12-06 13:59:56.083261 7f162d7c2700 10 meta HTTP_X_AMZ_DATE
 2013-12-06 13:59:56.083274 7f162d7c2700 10 x x-amz-date:Fri, 06 Dec
 2013 12:59:56 GMT
 2013-12-06 13:59:56.083298 7f162d7c2700 10 s->object=NULL s->bucket=test1
 2013-12-06 13:59:56.083307 7f162d7c2700  2 req 5:0.000199:s3:PUT
 /test1/::getting op
 2013-12-06 13:59:56.083315 7f162d7c2700  2 req 5:0.000207:s3:PUT
 /test1/:create_bucket:authorizing
 2013-12-06 13:59:56.091724 7f162d7c2700 10 get_canon_resource(): dest=
 2013-12-06 13:59:56.091742 7f162d7c2700 10 auth_hdr:
 PUT



 x-amz-date:Fri, 06 Dec 2013 12:59:56 GMT
 /test1/
 2013-12-06 13:59:56.091836 7f162d7c2700  2 req 5:0.008728:s3:PUT
 /test1/:create_bucket:reading permissions
 2013-12-06 13:59:56.091848 7f162d7c2700  2 req 5:0.008740:s3:PUT
 /test1/:create_bucket:verifying op mask
 2013-12-06 13:59:56.091852 7f162d7c2700  2 req 5:0.008744:s3:PUT
 /test1/:create_bucket:verifying op permissions
 2013-12-06 13:59:56.093858 7f162d7c2700  2 req 5:0.010750:s3:PUT
 /test1/:create_bucket:verifying op params
 2013-12-06 13:59:56.093882 7f162d7c2700  2 req 5:0.010773:s3:PUT
 /test1/:create_bucket:executing
 2013-12-06 13:59:56.104819 7f162d7c2700  0 WARNING: couldn't find acl
 header for object, generating default
 2013-12-06 13:59:56.132625 7f162d7c2700  0 get_bucket_info returned -125
 2013-12-06 13:59:56.132656 7f162d7c2700  0 WARNING: set_req_state_err
 err_no=125 resorting to 500
 2013-12-06 13:59:56.132693 7f162d7c2700  2 req 5:0.049584:s3:PUT
 /test1/:create_bucket:http status=500
 2013-12-06 13:59:56.132890 7f162d7c2700  1 == req done
 req=0xb7d480 http_status=500 ==

 --
 Regards
 Dominik
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Impact of fancy striping

2013-12-06 Thread Robert van Leeuwen
If I understand correctly you have one SAS disk as a journal for multiple OSDs.
If you do small synchronous writes it will become an IO bottleneck pretty 
quickly:
due to multiple journals on the same disk it will no longer be sequential 
writes to one journal but 4k writes to x journals, making it fully 
random.
I would expect a performance of 100 to 200 IOPS max.
Doing an iostat -x or atop should show this bottleneck immediately.
This is also the reason to go with SSDs: they have reasonable random IO 
performance.
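Something along these lines on the OSD host should make it obvious (watch whichever 
device holds the journals):

iostat -x 5
# look at %util and await for the journal disk: a single spindle pinned near 100% util
# while doing a few hundred small writes per second is exactly this bottleneck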

Cheers,
Robert van Leeuwen

Sent from my iPad

 On 6 dec. 2013, at 17:05, nicolasc nicolas.cance...@surfsara.nl wrote:
 
 Hi James,
 
 Thank you for this clarification. I am quite aware of that, which is why the 
 journals are on SAS disks in RAID0 (SSDs out of scope).
 
 I still have trouble believing that fast-but-not-super-fast journals is the 
 main reason for the poor performances observed. Maybe I am mistaken?
 
 Best regards,
 
 Nicolas Canceill
 Scalable Storage Systems
 SURFsara (Amsterdam, NL)
 
 
 
 On 12/03/2013 03:01 PM, James Pearce wrote:
 I would really appreciate it if someone could:
 - explain why the journal setup is way more important than striping 
 settings;
 
 I'm not sure if it's what you're asking, but any write must be physically 
 written to the journal before the operation is acknowledged.  So the overall 
 cluster performance (or rather write latency) is always governed by the 
 speed of those journals.  Data is then gathered up into (hopefully) larger 
 blocks and committed to OSDs later.
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Basic cephx configuration

2013-12-06 Thread John Wilkins
Nicolasc,

You said: Just ran a fresh install of version Emperor on an empty cluster,
and I am left clueless, trying to troubleshoot cephx. *After ceph-deploy
created the keys, I used ceph-authtool to generate the client.admin keyring
and the monitor keyring, as indicated in the doc.* The configuration is
really out-of-the-box: 3 monitors, each with the keyring in
/var/lib/ceph/mon/ceph-???/keyring, all keyrings have umask 644 and are
owned by ceph.

The ceph-deploy utility already generates the monitor keyring and the
ceph.client.admin.keyring for you. I have a wip-doc branch for a manual
deployment procedure without ceph-deploy, which I don't recommend for a
first time user. However, it does detail what is going on.

http://ceph.com/docs/wip-doc-build-cluster/install/manual-deployment/

Referring to steps 8-11, you'll notice that the manual process involves
creating the monitor secret on step 8. Then, we generate a
ceph.client.admin.keyring on step 9. See what happens on step 10 and 11? I
add the ceph.client.admin.keyring contents to the monitor secret keyring;
then, I feed that to step 11 for creating the monmap.  Since ceph-deploy
already creates the ceph.client.admin.keyring for you and populates it in
the monmap, the fact that you are creating one after ceph-deploy has done
this for you probably implies that you have overwritten the
ceph.client.admin.keyring that was generated by ceph-deploy. The one you
generated probably isn't in your monmap, so you are passing it the wrong
key.

If you had the right key or if you turn off cephx, you could execute ceph
auth list to see the client.admin key contents. It's likely different from
what you have in your ceph.client.admin.keyring file.
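A quick way to compare the two, assuming you can reach the monitors at all
(i.e. cephx temporarily off, or a valid key at hand):

sudo ceph auth get client.admin          # the key the cluster actually knows
cat ceph.client.admin.keyring            # the key ceph-deploy gathered locally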

ceph-deploy new generates a mon key. Then you deploy one or more monitors.
Then, you use ceph-deploy gatherkeys. At that point, you should have a
ceph.client.admin.keyring in the local/current directory after you executed
ceph-deploy gatherkeys.

Let me know if this helps.






On Fri, Dec 6, 2013 at 7:59 AM, nicolasc nicolas.cance...@surfsara.nl wrote:

 Hi Dan,

 Thank you for the advice and indications. We have the exact same
 configuration, except I am only enabling auth cluster, and I am using
 ceph.client.admin.keyring instead of simply keyring.

 Both locations /etc/ceph/ceph.client.admin.keyring and
 /etc/ceph/keyring are presented as default values for the keyring
 configuration setting. I will try /etc/ceph/keyring, but I doubt this
 changes much.

 I am curious of whether your setup still works if you remove the keyring
 = /etc/ceph/keyring setting (you are also using a default location, so you
 could remove that line safely, right?).

 Thank you very much for answering. Best regards,


 Nicolas Canceill
 Scalable Storage Systems
 SURFsara (Amsterdam, NL)



 On 12/06/2013 11:42 AM, Dan Van Der Ster wrote:

 Hi,
 All of our clusters have this in ceph.conf:

 [global]
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
keyring = /etc/ceph/keyring

 and the client.admin secret in /etc/ceph/keyring:

 # cat /etc/ceph/keyring
 [client.admin]
  key = ...

 With that you should be able to do ceph health without passing --id or
 --keyring args. (this is with dumpling, not emperor, but I guess it didn’t
 change.)

 If it still doesn’t work, check the capabilities that client.admin has
 (with ceph auth list). Should be

  caps: [mds] allow
  caps: [mon] allow *
  caps: [osd] allow *

 Cheers, Dan


 On 06 Dec 2013, at 11:06, nicolasc nicolas.cance...@surfsara.nl wrote:

  Hi every one,

 I did not get any answer to my basic cephx question last week, so let me
 ask it one more time here, before I completely give up on Ceph and move on.

 So, my issue is:

 When all authentication settings are none:
 * The cluster works fine
 * The file /etc/ceph/ceph.client.admin.keyring  exists

 Then I set auth_cluster_required to cephx. When I try to connect to
 the cluster, it detects client.admin and denies access with operation
 not supported, even for commands like ceph health.

 Finally, after I explicitly set the keyring parameter in the config
 (to the default value, because the keyring file was already in the default
 location), the cluster works fine again. So the behavior changes when I add
 those 2 default lines to the config:
 [client.admin]
 keyring = /etc/ceph/ceph.client.admin.keyring

  From the ceph.com documentation [1], about this keyring parameter:
 Description:The path to the keyring file.
 Type:   String
 Required:   No
 Default:/etc/ceph/$cluster.$name.keyring
 ... so, I need help:
 * maybe this is a real bug? (was it already reported ?)
 * maybe I am deeply stupid, and I don't understand what required and
 default means? (can anyone send me a good dictionary ?)
 * maybe obi-wan kenobi?

 Thanks to anyone who will respond anything (at that point, even a
 three-letter e-mail reading ACK would make me feel better). 

Re: [ceph-users] REST API issue for getting bucket policy

2013-12-06 Thread Yehuda Sadeh
On Fri, Dec 6, 2013 at 1:45 AM, Gao, Wei M wei.m@intel.com wrote:
 Hi all,



 I am working on the ceph radosgw(v0.72.1) and when I call the rest api to
 read the bucket policy, I got an internal server error(request URL is:
 /admin/bucket?policy&format=json&bucket=test).

 However, when I call this:
 /admin/bucket?policy&format=json&bucket=test&object=obj, I got the policy of
 the object returned. Besides, I do have the right permission(buckets=*).

 Any idea? Thanks!


That's a bug. I opened issue #6940.
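Until that is fixed, reading the policy with radosgw-admin on the gateway host
might serve as a workaround (sketch; flags as in the dumpling-era tool):

radosgw-admin policy --bucket=test
radosgw-admin policy --bucket=test --object=obj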

Thanks,
Yehuda
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] My experience with ceph now documentted

2013-12-06 Thread Karan Singh
Hello Cephers 

I would like to say a BIG THANKS to the ceph community for helping me in setting up 
and learning ceph. 

I have created a small write-up, http://karan-mj.blogspot.fi/ , of my 
experience with ceph so far; I believe it would help beginners in installing 
ceph and integrating it with openstack. I will keep updating this blog. 


PS -- I recommend the original ceph documentation http://ceph.com/docs/master/ and 
other original content published by the Ceph community, Inktank and other 
partners. My attempt, http://karan-mj.blogspot.fi/ , is just a contribution towards 
regular online content about ceph. 



Karan Singh 
CSC - IT Center for Science Ltd. 
P.O. Box 405, FI-02101 Espoo, FINLAND 
http://www.csc.fi/ | +358 (0) 503 812758 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] centos6.4 + libvirt + qemu + rbd/ceph

2013-12-06 Thread Dimitri Maziuk
On 12/06/2013 04:03 PM, Alek Paunov wrote:

 We use only Fedora servers for everything, so I am curious, why you are
 excluded this option from your research? (CentOS is always problematic
 with the new bits of technology).

6 months lifecycle and having to os-upgrade your entire data center 3
times a year?

(OK maybe it's 18 months and once every 9 months)
-- 
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] centos6.4 + libvirt + qemu + rbd/ceph

2013-12-06 Thread Alek Paunov

On 07.12.2013 00:11, Dimitri Maziuk wrote:

On 12/06/2013 04:03 PM, Alek Paunov wrote:


We use only Fedora servers for everything, so I am curious, why you are
excluded this option from your research? (CentOS is always problematic
with the new bits of technology).


6 months lifecycle and having to os-upgrade your entire data center 3
times a year?

(OK maybe it's 18 months and once every 9 months)


Most servers nowadays are re-provisioned even more often, but every new 
Fedora release comes with more and more KVM/libvirt features and 
resolved issues, so the net effect is positive anyway.


Yes, we need some extra tests to follow the cadence, just like with ceph 
upgrades and all the other components.


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] centos6.4 + libvirt + qemu + rbd/ceph

2013-12-06 Thread Dimitri Maziuk
On 12/06/2013 04:28 PM, Alek Paunov wrote:
 On 07.12.2013 00:11, Dimitri Maziuk wrote:

 6 months lifecycle and having to os-upgrade your entire data center 3
 times a year?

 (OK maybe it's 18 months and once every 9 months)
 
 Most servers nowadays are re-provisioned even more often,

Not where I work they aren't.

 Fedora release comes with more and more KVM/Libvirt features and
 resolved issues, so the net effect is positive anyway.

Yes, that is the main argument for tracking ubuntu. ;)

-- 
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] centos6.4 + libvirt + qemu + rbd/ceph

2013-12-06 Thread Chris C
We rely on the stability of rhel/centos as well.  We have no patch/upgrade
policy or regulatory directive to do so.  Our servers are set and forget.
 We circle back for patch/upgrades only for break/fix.

I tried F19 just for the fun of it.  We ended up with conflicts trying to
run qemu-kvm with ceph.  I could get one or the other working but not both.
 Our architecture is calling for compute and storage to live on the same
host to save in hardware costs.

I also tried to recompile libvirt and qemu-kvm today.  I didn't even see
rbd libraries in the source code.
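For what it's worth, RBD support isn't bundled in the qemu/libvirt sources; it comes 
from linking against librbd/librados at build time, so the ceph development packages 
have to be installed and the feature enabled explicitly, roughly (package names as 
in the ceph.com el6 repos):

yum install librbd1-devel librados2-devel
./configure --enable-rbd          # qemu
./configure --with-storage-rbd    # libvirt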

/C




On Fri, Dec 6, 2013 at 5:56 PM, Dimitri Maziuk dmaz...@bmrb.wisc.edu wrote:

 On 12/06/2013 04:28 PM, Alek Paunov wrote:
  On 07.12.2013 00:11, Dimitri Maziuk wrote:

  6 months lifecycle and having to os-upgrade your entire data center 3
  times a year?
 
  (OK maybe it's 18 months and once every 9 months)
 
  Most servers novadays are re-provisioned even more often,

 Not where I work they aren't.

  Fedora release comes with more and more KVM/Libvirt features and
  resolved issues, so the net effect is positive anyway.

 Yes, that is the main argument for tracking ubuntu. ;)

 --
 Dimitri Maziuk
 Programmer/sysadmin
 BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu


 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] centos6.4 + libvirt + qemu + rbd/ceph

2013-12-06 Thread Alek Paunov

On 07.12.2013 01:03, Chris C wrote:

We rely on the stability of rhel/centos as well.  We have no patch/upgrade
policy or regulatory directive to do so.  Our servers are set and forget.
  We circle back for patch/upgrades only for break/fix.


Stability means keeping the ABIs (and in general all interfaces and 
conventions) stable. It is very important when, e.g., you intend to deploy 
some old Sybase on these boxes. But how does this type of stability help a 
Ceph/KVM node?




I tried F19 just for the fun of it.  We ended up with conflicts trying to
run qemu-kvm with ceph.  I could get one or the other working but not both.
  Our architecture is calling for compute and storage to live on the same
host to save in hardware costs.

I also tried to recompile libvirt and qemu-kvm today.  I didn't even see
rbd libraries in the source code.



An OSD/libvirt-kvm dual-role node should work just fine with F19/F20. If 
you are interested in Fedora deployments, we could try to resolve these 
issues.


Alek

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ephemeral RBD with Havana and Dumpling

2013-12-06 Thread Josh Durgin

On 12/05/2013 02:37 PM, Dmitry Borodaenko wrote:

Josh,

On Tue, Nov 19, 2013 at 4:24 PM, Josh Durgin josh.dur...@inktank.com wrote:

I hope I can release or push commits to this branch that contains live-migration,
an incorrect-filesystem-size fix and ceph-snapshot support in a few days.


Can't wait to see this patch! Are you getting rid of the shared
storage requirement for live-migration?


Yes, that's what Haomai's patch will fix for rbd-based ephemeral
volumes (bug https://bugs.launchpad.net/nova/+bug/1250751).


We've got a version of a Nova patch that makes live migrations work
for non volume-backed instances, and hopefully addresses the concerns
raised in code review in https://review.openstack.org/56527, along
with a bunch of small bugfixes, e.g. missing max_size parameter in
direct_fetch, and a fix for http://tracker.ceph.com/issues/6693. I
have submitted it as a pull request to your nova fork on GitHub:

https://github.com/jdurgin/nova/pull/1


Thanks!


Our changes depend on the rest of commits on your havana-ephemeral-rbd
branch, and the whole patchset is now at 7 commits, which is going to
be rather tedious to submit to the OpenStack Gerrit as a series of
dependent changes. Do you think we should keep the current commit
history in its current form, or would it be easier to squash it down
to a more manageable number of patches?


As discussed on irc yesterday, most of these are submitted to icehouse
already in slightly different form, since this branch is based on
stable/havana.

I'd prefer to keep the commits small and self contained in this branch
at least. If it takes too long to get them upstream, I'm fine with
having them squashed for faster upstream review.

Josh

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com