Re: [ceph-users] S3 RadosGW - Create bucket OP

2015-03-10 Thread Steffen Winther
Yehuda Sadeh-Weinraub writes: > According to the api specified here > http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUT.html, > there's no response expected. I can only assume that the application > tries to decode the xml if xml content type is returned. Also what I hinted the app vendor …

[ceph-users] OSD load simulator

2015-03-10 Thread Adrian Sevcenco
Hi! Is it possible somehow to have a kind of OSD benchmark for CPU? It would be very useful to measure the actual compatibility of a server with a number of OSDs, PGs and so on. The reason for the request is that the rule of 1 GHz per OSD might not really hold water (for reasons like AMD vs Intel …
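
For reference, ceph does ship a simple built-in per-OSD benchmark, though it exercises the OSD's write path rather than CPU alone; a minimal sketch, assuming osd.0 is a valid id in your cluster:

    # write 1 GiB in 4 MiB chunks through osd.0's object store and report throughput
    ceph tell osd.0 bench 1073741824 4194304

Watching top or perf on the OSD host while the bench runs gives at least a rough idea of the CPU cost per OSD.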

[ceph-users] Ceph free space

2015-03-10 Thread Mateusz Skała
Hi, something is wrong with the free space in my cluster. In a cluster with 10 OSDs (5*1TB + 5*2TB), 'ceph -s' shows: 11425 GB used, 2485 GB / 13910 GB avail. But I have only 2 rbd disks in one pool ('rbd'): >>rados df pool name category KB objects clones degraded …

Re: [ceph-users] Ceph BIG outage : 200+ OSD are down , OSD cannot create thread

2015-03-10 Thread Christian Eichelmann
Hi Sage, we hit this problem a few months ago as well and it took us quite a while to figure out what's wrong. As a system administrator I don't like the idea that daemons or even init scripts are changing system-wide configuration parameters, so I wouldn't like to see the OSDs do it themselves …
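
The system-wide parameter under discussion appears to be the kernel's thread/PID limit, which dense OSD boxes can exhaust (hence "cannot create thread"); a sketch of the manual fix, assuming a sysctl.d-based distribution:

    # persistently raise the kernel PID/thread limit by hand,
    # instead of letting the ceph init scripts or daemons touch it
    echo 'kernel.pid_max = 4194303' > /etc/sysctl.d/30-ceph-pid-max.conf
    sysctl -p /etc/sysctl.d/30-ceph-pid-max.conf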

Re: [ceph-users] Ceph free space

2015-03-10 Thread Henrik Korkuc
On 3/10/15 11:06, Mateusz Skała wrote: Hi, something is wrong with the free space in my cluster. In a cluster with 10 OSDs (5*1TB + 5*2TB), 'ceph -s' shows: 11425 GB used, 2485 GB / 13910 GB avail. But I have only 2 rbd disks in one pool ('rbd'): >>rados df pool name category KB objects …

Re: [ceph-users] Ceph free space

2015-03-10 Thread Mateusz Skała
Thanks for the reply.

    >> ceph df
    GLOBAL:
        SIZE     AVAIL   RAW USED   %RAW USED
        13910G   2472G   11437G     82.22
    POOLS:
        NAME   ID   USED    %USED   MAX AVAIL   OBJECTS
        rbd    0    3792G   27.26   615G        971526

How to free the raw used space …

Re: [ceph-users] Ceph free space

2015-03-10 Thread Mateusz Skała
Problem fixed. The default pool size is set to 2, but for the rbd pool, size was set to 3. From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mateusz Skała Sent: Tuesday, March 10, 2015 10:22 AM To: 'Henrik Korkuc'; ceph-users@lists.ceph.com Subject: Re: [ceph-users] Ceph free …
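
For anyone hitting the same mismatch, the per-pool replication factor can be inspected and changed like this (pool 'rbd' as in this thread; note that raw space is freed only once the extra replicas are removed):

    ceph osd pool get rbd size      # show the current replica count
    ceph osd pool set rbd size 2    # keep 2 copies of each rbd object instead of 3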

[ceph-users] Issues with fresh 0.93 OSD adding to existing cluster

2015-03-10 Thread Malcolm Haak
Hi all, I've just attempted to add a new node and OSD to an existing ceph cluster (it's a small one I use as a NAS at home, not like the big production ones I normally work on) and it seems to be throwing some odd errors... Just looking for where to poke it next... The log is below. It's a two-node …

[ceph-users] ceph cache tier pool objects not evicted automatically even when reaching full ratio

2015-03-10 Thread Kamil Kuramshin
Hi folks! I'm testing a cache tier for an erasure coded pool with an RBD image on it. And now I'm facing a problem: the cache pool is full and objects are not evicted automatically, only if I manually run rados -p cache cache-flush-evict-all. The client side is: superuser@share:~$ uname -a …

Re: [ceph-users] New eu.ceph.com mirror machine

2015-03-10 Thread Jesus Chavez (jeschave)
So EPEL is not required? Jesus Chavez SYSTEMS ENGINEER-C.SALES jesch...@cisco.com Phone: +52 55 5267 3146 Mobile: +51 1 5538883255 CCIE - 44433 On Mar 9, 2015, at 8:58 AM, HEWLETT, Paul (Paul)** CTR ** wrote: Hi Wido, It s…

Re: [ceph-users] Ceph BIG outage : 200+ OSD are down , OSD cannot create thread

2015-03-10 Thread Sage Weil
On Tue, 10 Mar 2015, Christian Eichelmann wrote: > Hi Sage, > > we hit this problem a few months ago as well and it took us quite a while to > figure out what's wrong. > > As a system administrator I don't like the idea that daemons or even init > scripts are changing system-wide configuration parameters …

Re: [ceph-users] ceph cache tier pool objects not evicted automatically even when reaching full ratio

2015-03-10 Thread LOPEZ Jean-Charles
Hi, you need to set the max dirty bytes and/or max dirty objects, as these 2 parameters default to 0 for your cache pool: ceph osd pool set <cache-pool> target_max_objects x and ceph osd pool set <cache-pool> target_max_bytes x. The ratios you already set (dirty_ratio = 0.4 and full_ratio = 0.7) will be applied …

Re: [ceph-users] ceph cache tier pool objects not evicted automatically even when reaching full ratio

2015-03-10 Thread Kamil Kuramshin
Thanks a lot to Be-El from #ceph (irc://irc.oftc.net/ceph). The problem is resolved after setting 'target_max_bytes' for the cache pool: $ ceph osd pool set cache target_max_bytes 1840 Because setting only 'cache_target_full_ratio' to 0.7 is not sufficient for the cache tiering agent, it …
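
Putting the thread's advice together: the tiering agent only acts relative to an absolute cap, so the ratios do nothing until target_max_bytes and/or target_max_objects are non-zero. A sketch, with the cap values as placeholders (pool name 'cache' as in this thread):

    ceph osd pool set cache target_max_bytes 1099511627776   # e.g. cap the tier at 1 TiB
    ceph osd pool set cache target_max_objects 1000000        # and/or an object-count cap
    ceph osd pool set cache cache_target_dirty_ratio 0.4      # start flushing at 40% of the cap
    ceph osd pool set cache cache_target_full_ratio 0.7       # start evicting at 70% of the cap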

Re: [ceph-users] New eu.ceph.com mirror machine

2015-03-10 Thread HEWLETT, Paul (Paul)** CTR **
Hi Jesus, EPEL is required for the libunwind library. If libunwind were copied to the ceph repo then EPEL would not be required. Regards, Paul Hewlett Senior Systems Engineer Velocix, Cambridge Alcatel-Lucent t: +44 1223 435893 m: +44 7985327353 From: Jesus Chavez …

Re: [ceph-users] Stuck PGs blocked_by non-existent OSDs

2015-03-10 Thread Samuel Just
What do you mean by "unblocked" but still "stuck"? -Sam On Mon, 2015-03-09 at 22:54 +0000, joel.merr...@gmail.com wrote: > On Mon, Mar 9, 2015 at 2:28 PM, Samuel Just wrote: > > You'll probably have to recreate osds with the same ids (empty ones), > > let them boot, stop them, and mark them lost …

Re: [ceph-users] Issues with fresh 0.93 OSD adding to existing cluster

2015-03-10 Thread Samuel Just
Can you reproduce this with debug osd = 20, debug filestore = 20, debug ms = 1 on the crashing osd? Also, what sha1 are the other osds and mons running? -Sam - Original Message - From: "Malcolm Haak" To: ceph-users@lists.ceph.com Sent: Tuesday, March 10, 2015 3:28:26 AM Subject: [ceph-users] …
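
Those debug levels can either go in ceph.conf before restarting the daemon or be injected into the running OSD; a sketch, with osd.12 standing in for the crashing OSD's id:

    ceph tell osd.12 injectargs '--debug-osd 20 --debug-filestore 20 --debug-ms 1'

The resulting output lands in the OSD's log (by default /var/log/ceph/ceph-osd.12.log).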

Re: [ceph-users] Issues with fresh 0.93 OSD adding to existing cluster

2015-03-10 Thread Malcolm Haak
Hi Samuel, the sha1? I'm going to admit ignorance as to what you are looking for. They are all running the same release, if that is what you are asking. Same tarball built into rpms using rpmbuild on both nodes... The only difference being that the other node has been upgraded and the problem node …
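
For completeness: the sha1 Sam is asking about is the git revision baked into the build, which ceph prints alongside the version string. A sketch:

    ceph --version            # sha1 of the locally installed binaries
    ceph tell osd.0 version   # sha1 reported by the running osd.0 daemon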

Re: [ceph-users] flock() supported on CephFS through Fuse ?

2015-03-10 Thread Gregory Farnum
On Tue, Mar 10, 2015 at 4:20 AM, Florent B wrote: > Hi all, > > I'm testing the flock() locking system on CephFS (Giant) using Fuse. > > It seems that locks work per client, and not across clients. > > Am I right, or is it supposed to work across different clients? Does the MDS > have such a locking system …
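
A quick way to test this from two different clients is the flock(1) wrapper from util-linux, assuming CephFS is mounted at /mnt/cephfs on both; a sketch:

    # client A: take the lock and hold it for a minute
    flock /mnt/cephfs/locktest -c 'echo A holds the lock; sleep 60'
    # client B: if locks are cluster-wide, this blocks until A releases
    flock /mnt/cephfs/locktest -c 'echo B got the lock'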

Re: [ceph-users] Stuck PGs blocked_by non-existent OSDs

2015-03-10 Thread joel.merr...@gmail.com
Stuck unclean and stuck inactive. I can fire up a full query and health dump somewhere useful if you want (full pg query info on the ones listed in health detail, tree, osd dump etc). There were blocked_by operations that no longer exist after doing the OSD addition. Side note: I spent some time yesterday …

Re: [ceph-users] Issues with fresh 0.93 OSD adding to existing cluster

2015-03-10 Thread Samuel Just
Joao, it looks like map 2759 is causing trouble; how would he get the full and incremental maps for that out of the mons? -Sam On Tue, 2015-03-10 at 14:12 +0000, Malcolm Haak wrote: > Hi Samuel, > > The sha1? I'm going to admit ignorance as to what you are looking for. They > are all running the …

Re: [ceph-users] Stuck PGs blocked_by non-existent OSDs

2015-03-10 Thread Samuel Just
Yeah, get a ceph pg query on one of the stuck ones. -Sam On Tue, 2015-03-10 at 14:41 +0000, joel.merr...@gmail.com wrote: > Stuck unclean and stuck inactive. I can fire up a full query and > health dump somewhere useful if you want (full pg query info on ones > listed in health detail, tree, osd dump etc) …
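
For readers following along, the commands being traded here look roughly like this (2.1f is a placeholder PG id, taken from your own health output):

    ceph health detail   # lists the stuck PGs by id
    ceph pg 2.1f query   # dumps the full peering/recovery state for one PG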

[ceph-users] ceph v0.80.9 debian source packages???

2015-03-10 Thread Valery Tschopp
Hi guys, the latest trusty version 0.80.9 has been pushed to the "deb http://ceph.com/debian-firefly/ trusty main" repository yesterday. The latest packages have the version 0.80.9-1trusty, but I cannot find the corresponding source packages in http://gitbuilder.ceph.com/ceph-deb-trusty-x86_64 …

Re: [ceph-users] New eu.ceph.com mirror machine

2015-03-10 Thread Jesus Chavez (jeschave)
Thanks! But I still don't quite get it. What can I do to install libunwind? [root@aries ~]# yum install libunwind Loaded plugins: langpacks, priorities, product-id, subscription-manager 7 packages excluded due to repository priority protections No package libunwind available. Error: Nothing to do …

Re: [ceph-users] New eu.ceph.com mirror machine

2015-03-10 Thread Jesus Chavez (jeschave)
Or maybe by installing the RPM directly from http://www.mirrorservice.org/sites/ceph.com/rpm-firefly/rhel7/x86_64/libunwind-1.1-3.el7.x86_64.rpm, but this is not for Giant, it seems to be for Firefly …
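
The more usual route on RHEL 7, sketched here with the standard EPEL bootstrap URL, is to enable EPEL and pull libunwind from there rather than from a release-specific ceph mirror:

    rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
    yum install libunwind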

[ceph-users] increase pg num

2015-03-10 Thread tombo
Hello, I'm running debian 8 with ceph 0.80.6-1 firefly in production and I need to double the count of pgs. I've found that it was an experimental feature; is it safe now? Is there still an --allow-experimental-feature switch for the ceph osd pool set {pool-name} pg_num {pg_num} command? Thanks

Re: [ceph-users] increase pg num

2015-03-10 Thread Weeks, Jacob (RIS-BCT)
I am not sure about v0.80.6-1, but in v0.80.7 the --allow-experimental-feature option is not required. I have increased pg_num and pgp_num in v0.80.7 without any issues. It may be safer to make the change incrementally rather than all at once. Since v0.72, ceph does not allow extreme changes in …
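
A sketch of the incremental approach being suggested, using the 'rbd' pool and an example target; pgp_num is raised only after the new PGs from the pg_num step have settled:

    ceph osd pool set rbd pg_num 1024    # create the new placement groups
    # wait for 'ceph -s' to show the new PGs active+clean, then:
    ceph osd pool set rbd pgp_num 1024   # start rebalancing data onto them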

Re: [ceph-users] increase pg num

2015-03-10 Thread tombo
Thanks for the reply. On 10.03.2015 19:52, Weeks, Jacob (RIS-BCT) wrote: > I am not sure about v0.80.6-1 but in v0.80.7 the --allow-experimental-feature option is not required. I have increased pg_num and pgp_num in v0.80.7 without any issues. On how big a cluster, and how long did it take to recover …

Re: [ceph-users] increase pg num

2015-03-10 Thread Weeks, Jacob (RIS-BCT)
>> I am not sure about v0.80.6-1 but in v0.80.7 the --allow-experimental-feature >> option is not required. I have increased pg_num and pgp_num in v0.80.7 >> without any issues. > On how big a cluster, and how long did it take to recover from this change? The largest pool was roughly 200TB. It took less …

[ceph-users] v0.80.9 Firefly released

2015-03-10 Thread Sage Weil
This is a bugfix release for firefly. It fixes a performance regression in librbd, an important CRUSH misbehavior (see below), and several RGW bugs. We have also backported support for flock/fcntl locks to ceph-fuse and libcephfs. We recommend that all Firefly users upgrade. For more detailed …

Re: [ceph-users] EC Pool and Cache Tier Tuning

2015-03-10 Thread Steffen W Sørensen
On 09/03/2015, at 22.44, Nick Fisk wrote: > Either option #1 or #2 depending on if your data has hot spots or you need > to use EC pools. I'm finding that the cache tier can actually slow stuff > down depending on how much data is in the cache tier vs on the slower tier. > > Writes will be about …

[ceph-users] ceph-deploy option --dmcrypt-key-dir unusable

2015-03-10 Thread Pierre BLONDEAU
Hi, the option "--dmcrypt-key-dir", when you want to activate/create a new OSD, is unusable by default, because the default path "/etc/ceph/dmcrypt-keys/" is hard-coded in the udev rules. I have found and tested two simple ways to solve this: - Change the path of the keys in '/lib/udev/rules.d/95-ceph-osd.rules …

[ceph-users] Now it seems that could not find keyring

2015-03-10 Thread Jesus Chavez (jeschave)
What is going on with ceph? [ceph_deploy.gatherkeys][WARNIN] Unable to find /etc/ceph/ceph.client.admin.keyring on aries [ceph_deploy][ERROR ] KeyNotFoundError: Could not find keyring file: /etc/ceph/ceph.client.admin.keyring on host aries =( gosh Jesus Chavez …

Re: [ceph-users] Now it seems that could not find keyring

2015-03-10 Thread Lindsay Mathieson
On 11 March 2015 at 06:53, Jesus Chavez (jeschave) wrote: > KeyNotFoundError: Could not find keyring file: > /etc/ceph/ceph.client.admin.keyring on host aries Well, have you verified that the keyring is there on host aries and has the right permissions? -- Lindsay
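
A quick check along the lines Lindsay suggests, run from the admin node (hostname 'aries' as in this thread):

    ssh aries ls -l /etc/ceph/ceph.client.admin.keyring   # does the file exist and is it readable?
    ceph-deploy gatherkeys aries                          # re-run key collection once it does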

Re: [ceph-users] S3 RadosGW - Create bucket OP

2015-03-10 Thread Yehuda Sadeh-Weinraub
- Original Message - > From: "Steffen Winther" > To: ceph-users@lists.ceph.com > Sent: Tuesday, March 10, 2015 12:06:38 AM > Subject: Re: [ceph-users] S3 RadosGW - Create bucket OP > > Yehuda Sadeh-Weinraub writes: > > > According to the api specified here > > http://docs.aws.amazon.c

[ceph-users] ceph days

2015-03-10 Thread Tom Deneau
Are the slides or videos from Ceph Day presentations made available somewhere? I noticed some links for the Frankfurt Ceph Day, but not for the other Ceph Days. -- Tom

[ceph-users] Shadow files

2015-03-10 Thread Ben
We have a large number of shadow files in our cluster that aren't being deleted automatically as data is deleted. Is it safe to delete these files? Is there something we need to be aware of when deleting them? Is there a script that we can run that will delete these safely? Is there something we …

Re: [ceph-users] v0.80.9 Firefly released

2015-03-10 Thread Christian Balzer
On Tue, 10 Mar 2015 12:34:14 -0700 (PDT) Sage Weil wrote: > Adjusting CRUSH maps > > * This point release fixes several issues with CRUSH that trigger > excessive data migration when adjusting OSD weights. These are most > obvious when a very small weight change (e.g. …
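
The weight adjustment the release notes refer to is the ordinary CRUSH reweight operation; a sketch, with osd.3 and the weight as placeholders:

    ceph osd crush reweight osd.3 1.82   # the kind of small change that previously triggered excessive migration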

Re: [ceph-users] rados import error: short write

2015-03-10 Thread Francois Lafont
Hi, On 10/03/2015 04:40, Leslie Teo wrote: > we use `rados export poolA /opt/zs.rgw-buckets` to export the ceph cluster pool > named poolA into the local dir /opt/, and import the directory > /opt/zs.rgw-buckets into another ceph cluster pool named hello, and get the > following error: rados …

Re: [ceph-users] v0.80.9 Firefly released

2015-03-10 Thread Sage Weil
On Wed, 11 Mar 2015, Christian Balzer wrote: > On Tue, 10 Mar 2015 12:34:14 -0700 (PDT) Sage Weil wrote: > > > > Adjusting CRUSH maps > > > > > > * This point release fixes several issues with CRUSH that trigger > > excessive data migration when adjusting OSD weights. The …

[ceph-users] PGs stuck unclean "active+remapped" after an osd marked out

2015-03-10 Thread Francois Lafont
Hi, I had a ceph cluster in "HEALTH_OK" state with Firefly 0.80.9. I just wanted to remove an OSD (which worked well). So after: ceph osd out 3 I waited for the rebalancing, but I had "PGs stuck unclean": ~# ceph -s cluster …
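
For diagnosing this state, the standard commands look like this (a sketch; the PG ids come from your own cluster's output):

    ceph pg dump_stuck unclean   # list the PGs stuck in active+remapped
    ceph health detail           # per-PG detail, including which OSDs they currently map to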

[ceph-users] Calamari - Data

2015-03-10 Thread Sumit Gaur
Hi, I have a basic architecture-related question. I know Calamari collects system usage data (diamond collector) using performance counters. I need to know if all the system performance data that calamari shows remains in memory or if it uses files to store it. Thanks, Sumit

Re: [ceph-users] PGs stuck unclean "active+remapped" after an osd marked out

2015-03-10 Thread Francois Lafont
On 11/03/2015 05:44, Francois Lafont wrote: > PS: here is my conf. > [...] I have this too: ~# ceph osd crush show-tunables { "choose_local_tries": 0, "choose_local_fallback_tries": 0, "choose_total_tries": 50, "chooseleaf_descend_once": 1, "chooseleaf_vary_r": 0, "straw_calc_version …
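
The "chooseleaf_vary_r": 0 in that output is worth noting: on Firefly, switching to the firefly tunables profile sets chooseleaf_vary_r = 1, which is a commonly suggested fix for PGs left active+remapped after an OSD is marked out. A sketch (this triggers data movement, so schedule it accordingly):

    ceph osd crush tunables firefly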

[ceph-users] Adding Monitor Stuck

2015-03-10 Thread Jesus Chavez (jeschave)
I am really stuck adding a second monitor =(. ceph-deploy mon create seems to finish with an error like "monitor may not be able to form quorum", and they are not defined in mon initial… I have found there is a way to get it working by running the following commands: ceph mon add tauro 192.168.4.35:6789
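
For reference, the manual procedure the poster is heading toward looks roughly like this (hostname tauro and the address taken from the thread; the new mon's data directory must be prepared with the cluster fsid and mon keyring first):

    ceph mon add tauro 192.168.4.35:6789                  # register the mon in the monmap
    ceph-mon -i tauro --public-addr 192.168.4.35:6789     # start the new monitor daemon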