Re: [ceph-users] Location field empty in Glance when instance to image

2013-09-02 Thread Toni F. [ackstorm]
Thanks a lot Josh. It will be very useful. Regards On 31/08/13 02:58, Josh Durgin wrote: On 08/30/2013 03:40 AM, Toni F. [ackstorm] wrote: Sorry, wrong list. Anyway, I take this opportunity to ask two questions: Does somebody know how I can download an image or snapshot? Cinder has no way to export the

Re: [ceph-users] To put journals to SSD or not?

2013-09-02 Thread Mark Kirkwood
On 02/09/13 07:19, Fuchs, Andreas (SwissTXT) wrote: Reading through the documentation and talking to several people leads to the conclusion that it's a best practice to place the journal of an OSD instance on a separate SSD disk to speed up writes. But is this true? I have 3 new Dell servers

Re: [ceph-users] Is it possible to change the pg number after adding new osds?

2013-09-02 Thread Fuchs, Andreas (SwissTXT)
You can change the pg numbers on the fly with ceph osd pool set {pool_name} pg_num {value} ceph osd pool set {pool_name} pgp_num {value} reference: http://ceph.com/docs/master/rados/operations/pools/ From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
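
A minimal sketch of those two commands, with a hypothetical pool name and an example target value:

    # hypothetical pool "data", grown to 512 placement groups
    ceph osd pool set data pg_num 512
    # raise pgp_num to the same value afterwards, otherwise the data
    # is not actually rebalanced across the new placement groups
    ceph osd pool set data pgp_num 512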

Re: [ceph-users] To put journals to SSD or not?

2013-09-02 Thread Fuchs, Andreas (SwissTXT)
How do you test the random behavior of the disks, and what's a good setup? If I understand correctly, Ceph writes in 4M blocks; I also expect a 50%/50% r/w ratio for our workloads. What else do I have to take into consideration? Also, what I don't yet understand: in my performance test I get pretty nice rados bench

[ceph-users] Best way to reformat OSD drives?

2013-09-02 Thread Jens-Christian Fischer
Hi all we have a Ceph Cluster with 64 OSD drives in 10 servers. We originally formatted the OSDs with btrfs but have had numerous problems (server kernel panics) that we could point back to btrfs. We are therefore in the process of reformatting our OSDs to XFS. We have a process that works, but

Re: [ceph-users] Best way to reformat OSD drives?

2013-09-02 Thread Corin Langosch
On 02.09.2013 11:37, Jens-Christian Fischer wrote: we have a Ceph Cluster with 64 OSD drives in 10 servers. We originally formatted the OSDs with btrfs but have had numerous problems (server kernel panics) that we could point back to btrfs. We are therefore in the process of reformatting our

Re: [ceph-users] Best way to reformat OSD drives?

2013-09-02 Thread Martin Rudat
Hi Jens, On 2013-09-02 19:37, Jens-Christian Fischer wrote: we have a Ceph Cluster with 64 OSD drives in 10 servers. We originally formatted the OSDs with btrfs but have had numerous problems (server kernel panics) that we could point back to btrfs. We are therefore in the process of reformatt

Re: [ceph-users] Best way to reformat OSD drives?

2013-09-02 Thread Jens-Christian Fischer
> > Why wait for the data to migrate away? Normally you have replicas of the > whole osd data, so you can simply stop the osd, reformat the disk and restart > it again. It'll join the cluster and automatically get all data it's missing. > Of course the risk of data loss is a bit higher during th
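
A rough sketch of that quicker per-OSD cycle, assuming a hypothetical osd.12 whose data disk is /dev/sdb1 and XFS as the target filesystem (not the exact procedure from the thread):

    # stop the OSD; its replicas elsewhere keep serving the data
    service ceph stop osd.12
    umount /var/lib/ceph/osd/ceph-12
    # reformat the data disk with XFS and remount it (example device)
    mkfs.xfs -f /dev/sdb1
    mount /dev/sdb1 /var/lib/ceph/osd/ceph-12
    # recreate the OSD data directory and key
    ceph-osd -i 12 --mkfs --mkkey
    # --mkkey generates a new key, so replace the old one in the cluster
    ceph auth del osd.12
    ceph auth add osd.12 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-12/keyring
    # restart; the OSD rejoins the cluster and backfills whatever it is missing
    service ceph start osd.12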

Re: [ceph-users] Best way to reformat OSD drives?

2013-09-02 Thread Jens-Christian Fischer
Hi Martin > On 2013-09-02 19:37, Jens-Christian Fischer wrote: >> we have a Ceph Cluster with 64 OSD drives in 10 servers. We originally >> formatted the OSDs with btrfs but have had numerous problems (server kernel >> panics) that we could point back to btrfs. We are therefore in the process >

[ceph-users] OT: SSD versus SATA performance

2013-09-02 Thread Fuchs, Andreas (SwissTXT)
Sorry, this is not directly related to ceph anymore, but as I'm pretty sure there are some people here who have already done similar tests, I'll ask. Iozone tests on my SATA disks show: Auto Mode File size set to 4096 KB Command line used: iozone -a -s 4m Output is in Kbyte
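
As a point of comparison, a 4 MB file mostly exercises the page cache rather than the disks; a sketch of an invocation closer to raw device behaviour (sizes and flags are only an example):

    # -I uses O_DIRECT to bypass the page cache; -i 0 writes the test file, -i 2 does random read/write
    iozone -I -s 4g -r 4k -i 0 -i 2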

[ceph-users] adding SSD only pool to existing ceph cluster

2013-09-02 Thread Jens-Christian Fischer
We have a ceph cluster with 64 OSD (3 TB SATA) disks on 10 servers, and run an OpenStack cluster. We are planning to move the images of the running VM instances from the physical machines to CephFS. Our plan is to add 10 SSDs (one in each server) and create a pool that is backed only by these S
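
A rough sketch of one common way to get an SSD-only pool with CRUSH (names, ruleset number and PG counts are hypothetical; the real map edits depend on the cluster layout):

    # export and decompile the current CRUSH map
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit crushmap.txt: put the SSD OSDs under their own root (e.g. "ssd")
    # and add a rule that only picks from that root, then recompile and inject it
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new
    # create the pool and point it at the SSD rule
    ceph osd pool create ssdpool 512 512
    ceph osd pool set ssdpool crush_ruleset 3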

Re: [ceph-users] some newbie questions...

2013-09-02 Thread Dzianis Kahanovich
Dimitri Maziuk wrote: 1) I read somewhere that it is recommended to have one OSD per disk in a production environment. Is this also the maximum of disks per OSD, or could I use multiple disks per OSD? And why? >>> >>> you could use multiple disks for one OSD if you used some str

Re: [ceph-users] some newbie questions...

2013-09-02 Thread Dzianis Kahanovich
Oliver Daudey wrote: > 1) I read somewhere that it is recommended to have one OSD per disk in a > production environment. > Is this also the maximum of disks per OSD, or could I use multiple disks > per OSD? And why? You could use multiple disks for one OSD if you used s

[ceph-users] ceph freezes for 10+ seconds during benchmark

2013-09-02 Thread Mariusz Gronczewski
We've installed ceph on a test cluster: 3x mon, 7x OSD on 2x 10k RPM SAS, Centos 6.4 ( 2.6.32-358.14.1.el6.x86_64 ), ceph 0.67.2 (also tried 0.61.7 with the same results). And during rados bench I get very strange behaviour: # rados bench -p pbench 100 write sec Cur ops started finished avg MB
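
One way to see what the cluster thinks is happening during such a stall is to watch it from a second terminal while the benchmark runs (a minimal sketch):

    # live stream of cluster status and slow-request warnings
    ceph -w
    # after a freeze, summarize blocked requests and unhealthy PGs
    ceph health detail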

Re: [ceph-users] SSD only storage, where to place journal

2013-09-02 Thread Maciej Gałkiewicz
On 30 August 2013 22:13, Stefan Priebe wrote: > > Yes, that's correct. What I hate at this point is that you lower the SSD > speed by writing to the journal, reading from the journal, then writing to the SSD. Sadly there > is no option to disable the journal. I think for SSDs this would be best. This would be best fo

Re: [ceph-users] Assert and monitor-crash when attemting to create pool-snapshots while rbd-snapshots are in use or have been used on a pool

2013-09-02 Thread Joao Eduardo Luis
On 08/18/2013 07:11 PM, Oliver Daudey wrote: Hey all, Also created on the tracker, under http://tracker.ceph.com/issues/6047 Oliver, list, We fixed this last week. Fixes can be found on wip-6047. We shall merge this to the mainline and the patch will be backported to dumpling. Thanks onc

Re: [ceph-users] Is it possible to change the pg number after adding new osds?

2013-09-02 Thread Da Chun Ng
Only pgp_num is listed in the reference. Though pg_num can be changed in the same way, is there any risk in doing that? From: andreas.fu...@swisstxt.ch To: dachun...@outlook.com; ceph-users@lists.ceph.com Subject: RE: [ceph-users] Is it possible to change the pg number after adding new osds? Date

Re: [ceph-users] Is it possible to change the pg number after adding new osds?

2013-09-02 Thread Sage Weil
On Mon, 2 Sep 2013, Da Chun Ng wrote: > According to the doc, the pg numbers should be enlarged for better > read/write balance if the osd number is increased.But seems the pg number > cannot be changed on the fly. It's fixed when the pool is created. Am I > right? It can be increased with ceph

Re: [ceph-users] Is it possible to change the pg number after adding new osds?

2013-09-02 Thread Dzianis Kahanovich
2 days ago I increased it for one pool and tried to reduce it for others. Reducing doesn't work (for me? - "repair" froze, but rolling back - up - is good); increasing works fine. My understanding is: pgp_num is a temporary parameter for changing pg_num. Data is actually distributed over pgp_num, but allocated PGs

[ceph-users] How to force lost PGs

2013-09-02 Thread Gaylord Holder
I created a pool with no replication and an RBD within that pool. I mapped the RBD to a machine, formatted it with a file system and dumped data on it. Just to see what kind of trouble I can get into, I stopped the OSD the RBD was using, marked the OSD as out, and reformatted the OSD tree.
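
For reference, a sketch of the commands usually involved in telling the cluster that data on a dead OSD is gone for good (IDs are examples, and both commands are destructive):

    # list the PGs that are stuck after the OSD disappeared
    ceph pg dump_stuck stale
    # declare the OSD (example id 3) permanently lost
    ceph osd lost 3 --yes-i-really-mean-it
    # for PGs still reporting unfound objects, revert to older copies or give them up
    ceph pg 2.5 mark_unfound_lost revert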

[ceph-users] rgw geo-replication and disaster recovery problem

2013-09-02 Thread 李学慧
Hi! I'm interested in the rgw geo-replication and disaster recovery feature. Are those 'regions and zones' distributed among several different ceph clusters, or just one? Thank you! ashely_

[ceph-users] about the script 'init-radosgw'

2013-09-02 Thread
Hi, I installed ceph 0.56.3 on Fedora 15. There is no rpm release for fc15, so I built it from source. # ./autogen.sh When configuring I used # ./configure --with-radosgw and I installed ceph and rados successfully. I followed the ceph document to configure
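
For reference, the from-source build sequence described above, laid out as separate steps (any flags beyond --with-radosgw are omitted here):

    ./autogen.sh
    ./configure --with-radosgw
    make
    # run the install step as root
    make install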

[ceph-users] tons of "failed lossy con, dropping message" => root cause for bad performance ?

2013-09-02 Thread Matthieu Patou
Hello All, I have a simple test setup with 2 OSD servers, each with 3 NICs (1Gb each): * One for management (ssh and such) * One for the public network (connected to ceph clients) * One for the cluster (OSD interconnection) I keep seeing these messages: Aug 26 18:43:31 ceph01 ceph-osd: 2013-08-26 1

[ceph-users] Radosgw S3 - can't authenticate user

2013-09-02 Thread Mark Kirkwood
I have a test setup for Radosgw on a single box. The Swift side of things works fine, but trying S3 (via boto) I am encountering the error: error reading user info, uid=('X5E5BXJHCZGGII3HAWBB',) can't authenticate Now the access key above is correct (see below), and I have copied the secret ke
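
One quick cross-check is to ask radosgw-admin what it has on file for the user; the uid below is only a placeholder:

    # prints the user's access and secret keys as the gateway sees them
    radosgw-admin user info --uid=testuser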

[ceph-users] ceph-mon runs on 6800 not 6789.

2013-09-02 Thread 이주헌
Hi all. I have 1 MDS and 3 OSDs. I installed them via ceph-deploy (dumpling 0.67.2). At first it worked perfectly. But after I rebooted one of the OSDs, ceph-mon launched on port 6800 instead of 6789. This is the result of 'ceph -s' --- cluster c59d13fd-c4c9-4cd0-b2ed-b654428b3171 health HEALTH_WAR

Re: [ceph-users] tons of "failed lossy con, dropping message" => root cause for bad performance ?

2013-09-02 Thread Gregory Farnum
Looks like maybe your network is faulty. The crc error means the OSD received a message with a checksum that didn't match. The dropped message indicates that the connection (in this case to a client) has failed (probably because of the bad crc?) and so it's dropping the outgoing message. This is in
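
A quick way to look for low-level trouble on the cluster-facing NICs of the OSD hosts (the interface name is only an example):

    # per-interface error and drop counters
    ip -s link show dev eth2
    ethtool -S eth2 | grep -iE 'err|drop|crc'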

Re: [ceph-users] Radosgw S3 - can't authenticate user

2013-09-02 Thread Yehuda Sadeh
On Mon, Sep 2, 2013 at 5:47 PM, Mark Kirkwood wrote: > I have a test setup for Radosgw on a single box. The Swift side of things > works fine, but trying S3 (via boto) I am encountering the error: > > error reading user info, uid=('X5E5BXJHCZGGII3HAWBB',) can't authenticate > > Now the access key

Re: [ceph-users] Radosgw S3 - can't authenticate user

2013-09-02 Thread Mark Kirkwood
On 03/09/13 15:25, Yehuda Sadeh wrote: Boto prog: #!/usr/bin/python import boto import boto.s3.connection access_key = 'X5E5BXJHCZGGII3HAWBB', secret_key = '' # redacted conn = boto.connect_s3( aws_access_key_id = access_key, aws_secret_access_key = se