[ceph-users] resolved - unusual growth in cluster after replacing journal SSDs

2018-02-06 Thread Jogi Hofmüller
Dear all, we finally found the reason for the unexpected growth in our cluster. The data was created by a collectd plugin [1] that measures latency by running rados bench once a minute. Since our cluster was stressed out for a while, removing the objects created by rados bench failed. We comple
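The leftover objects from `rados bench` can normally be removed with the `cleanup` subcommand of the `rados` tool. A minimal sketch, assuming a placeholder pool name and the default `benchmark_data` object prefix; the command is only printed here so it can be reviewed before being run against a live cluster:

```shell
# Build the cleanup command for leftover rados bench objects.
# POOL is a placeholder; bench objects carry the "benchmark_data" prefix
# by default. Printed, not executed.
POOL="libvirt-pool"
CLEANUP_CMD="rados -p ${POOL} cleanup --prefix benchmark_data"
echo "${CLEANUP_CMD}"
```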

Re: [ceph-users] unusual growth in cluster after replacing journal SSDs

2017-11-16 Thread Jogi Hofmüller
Hi, On Thursday, 16.11.2017 at 13:44 +0100, Burkhard Linke wrote: > > What remains is the growth of used data in the cluster. > > > > I put background information of our cluster and some graphs of > > different metrics on a wiki page: > > > >    https://wiki.mur.at/Dokumentation/CephCluste

[ceph-users] unusual growth in cluster after replacing journal SSDs

2017-11-16 Thread Jogi Hofmüller
Dear all, for about a month we have been experiencing something strange in our small cluster. Let me first describe what happened along the way. On Oct 4th smartmon told us that the journal SSDs in one of our two ceph nodes would fail. Since getting replacements took way longer than expected, we decided to plac
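Swapping a filestore journal device is normally done per OSD with a flush/recreate cycle. A hedged sketch of that sequence; OSD_ID is a placeholder, and the commands are only printed so the list can be reviewed first (the new SSD would be partitioned and the journal symlink updated between the flush and the mkjournal step):

```shell
# Hedged sketch: the usual filestore journal-swap sequence for one OSD.
# OSD_ID is a placeholder. Printed, not executed.
OSD_ID=0
SEQUENCE="service ceph stop osd.${OSD_ID}
ceph-osd -i ${OSD_ID} --flush-journal
ceph-osd -i ${OSD_ID} --mkjournal
service ceph start osd.${OSD_ID}"
echo "${SEQUENCE}"
```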

Re: [ceph-users] slow requests and short OSD failures in small cluster

2017-04-20 Thread Jogi Hofmüller
Hi, On Tuesday, 18.04.2017 at 18:34 +, Peter Maloney wrote: > The 'slower with every snapshot even after CoW totally flattens it' > issue I just find easy to test, and I didn't test it on hammer or > earlier, and others confirmed it, but didn't keep track of the > versions. Just make an r

Re: [ceph-users] slow requests and short OSD failures in small cluster

2017-04-18 Thread Jogi Hofmüller
Hi, On Tuesday, 18.04.2017 at 13:02 +0200, mj wrote: > > On 04/18/2017 11:24 AM, Jogi Hofmüller wrote: > > This might have been true for hammer and older versions of ceph. > > From > > what I can tell now, every snapshot taken reduces performance of > > the >

Re: [ceph-users] slow requests and short OSD failures in small cluster

2017-04-18 Thread Jogi Hofmüller
Hi, thanks for all your comments so far. On Thursday, 13.04.2017 at 16:53 +0200, Lionel Bouton wrote: > Hi, > > On 13/04/2017 at 10:51, Peter Maloney wrote: > > Ceph snapshots really slow things down. I can confirm that now :( > We use rbd snapshots on Firefly (and Hammer now) and I d

Re: [ceph-users] slow requests and short OSD failures in small cluster

2017-04-13 Thread Jogi Hofmüller
Dear David, On Wednesday, 12.04.2017 at 13:46 +, David Turner wrote: > I can almost guarantee what you're seeing is PG subfolder splitting. Every day there's something new to learn about ceph ;) > When the subfolders in a PG get X number of objects, it splits into > 16 subfolders. Every c
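The split point David describes is tunable through filestore options in ceph.conf. A hedged sketch (values are illustrative, not recommendations): a PG subdirectory splits at roughly 16 * filestore_split_multiple * abs(filestore_merge_threshold) objects, so raising these delays the splitting storms.

```ini
[osd]
; a subfolder splits at ~ 16 * split_multiple * abs(merge_threshold) objects
; (defaults were 2 and 10, i.e. ~320 objects); values below are only examples
filestore split multiple = 8
filestore merge threshold = 40
```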

[ceph-users] slow requests and short OSD failures in small cluster

2017-04-12 Thread Jogi Hofmüller
Dear all, we run a small cluster [1] that is exclusively used for virtualisation (kvm/libvirt). Recently we started to run into performance problems (slow requests, failing OSDs) for no *obvious* reason (at least not for us). We do nightly snapshots of VM images and keep the snapshots for 14 days
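A nightly rotation like the one described can be sketched as a dry-run that only prints the rbd commands it would issue. Pool, image name, and the snap-YYYY-MM-DD naming scheme are my assumptions, not taken from the thread:

```shell
# Dry-run sketch of a 14-day rbd snapshot rotation.
# POOL/IMG and the snapshot naming scheme are placeholders.
# Uses GNU date for the age arithmetic. Printed, not executed.
POOL="rbd"; IMG="vm-disk"; KEEP_DAYS=14
TODAY=$(date +%F)
EXPIRED=$(date -d "-${KEEP_DAYS} days" +%F)
CREATE_CMD="rbd snap create ${POOL}/${IMG}@snap-${TODAY}"
PRUNE_CMD="rbd snap rm ${POOL}/${IMG}@snap-${EXPIRED}"
echo "${CREATE_CMD}"
echo "${PRUNE_CMD}"
```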

Re: [ceph-users] solved: ceph-deploy mon create-initial fails on Debian/Jessie

2015-11-25 Thread Jogi Hofmüller
Hi all, Well, after repeating the procedure a few times I once ran ceph-deploy forgetkeys and voilà, that did it. Sorry for the noise, -- J.Hofmüller "A literary masterpiece is nothing but a dictionary in disorder." - Jean Cocteau

[ceph-users] ceph-deploy mon create-initial fails on Debian/Jessie

2015-11-25 Thread Jogi Hofmüller
Hi all, I am reinstalling our test cluster and run into problems when running ceph-deploy mon create-initial It fails stating: [ceph_deploy.gatherkeys][WARNIN] Unable to find /var/lib/ceph/bootstrap-osd/ceph.keyring on ceph1 [ceph_deploy][ERROR ] KeyNotFoundError: Could not find keyring file:
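The resolution posted in the solved follow-up of this thread was to drop ceph-deploy's cached keys and retry. As a printed sketch (no hostnames needed for these two subcommands):

```shell
# Recovery sequence from the solved follow-up: discard stale keyrings,
# then retry gathering the keys. Printed, not executed.
STEPS="ceph-deploy forgetkeys
ceph-deploy mon create-initial"
echo "${STEPS}"
```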

Re: [ceph-users] can't get cluster to become healthy. "stale+undersized+degraded+peered"

2015-09-30 Thread Jogi Hofmüller
Hi Kurt, On 2015-09-30 at 17:09, Kurt Bauer wrote: > You have two nodes but repl.size 3 for your test-data pool. With the > default crushmap this won't work as it tries to replicate on different > nodes. > > So either change to rep.size 2, or add another node ;-) Thanks a lot! I did not set a
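Kurt's first suggestion, matching the replica count to the two available nodes, comes down to one pool command. A sketch that only prints what would be run; the min_size line is my addition (often lowered alongside size on two-node setups), not from the thread:

```shell
# Print the commands matching the advice: shrink replication to the
# number of nodes. min_size is an assumption; review before running.
POOL="test-data"
SIZE_CMD="ceph osd pool set ${POOL} size 2"
MIN_SIZE_CMD="ceph osd pool set ${POOL} min_size 1"
echo "${SIZE_CMD}"
echo "${MIN_SIZE_CMD}"
```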

Re: [ceph-users] can't get cluster to become healthy. "stale+undersized+degraded+peered"

2015-09-30 Thread Jogi Hofmüller
Hi, Some more info:

ceph osd tree
ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 3.59998 root default
-2 1.7         host ceph1
 0 0.8             osd.0       up  1.0      1.0
 1 0.8             osd.1       up  1.0      1.0
-3 1.7         host ceph2
 2 0.89

Re: [ceph-users] can't get cluster to become healthy. "stale+undersized+degraded+peered"

2015-09-30 Thread Jogi Hofmüller
Hi, On 2015-09-17 at 19:02, Stefan Eriksson wrote: > I purged all nodes and did purgedata as well and restarted; after this > everything was fine. You are most certainly right; if anyone else has > this error, reinitializing the cluster might be the fastest way forward. Great that it worked for y

Re: [ceph-users] [sepia] debian jessie repository ?

2015-09-30 Thread Jogi Hofmüller
Hi, On 2015-09-29 at 15:54, Gregory Farnum wrote: > Can you create a ceph-deploy ticket at tracker.ceph.com, please? > And maybe make sure you're running the latest ceph-deploy, but > honestly I've no idea what it's doing these days or if this is a > resolved issue. Just file a bug. The ceph-d

Re: [ceph-users] [sepia] debian jessie repository ?

2015-09-29 Thread Jogi Hofmüller
Hi, On 2015-09-25 at 22:23, Udo Lembke wrote: > you can use this sources-list > > cat /etc/apt/sources.list.d/ceph.list > deb http://gitbuilder.ceph.com/ceph-deb-jessie-x86_64-basic/ref/v0.94.3 > jessie main The thing is: whatever I write into ceph.list, ceph-deploy just overwrites it with "d

Re: [ceph-users] [sepia] debian jessie repository ?

2015-09-25 Thread Jogi Hofmüller
Hi, On 2015-09-25 at 22:23, Udo Lembke wrote: > you can use this sources-list > > cat /etc/apt/sources.list.d/ceph.list > deb http://gitbuilder.ceph.com/ceph-deb-jessie-x86_64-basic/ref/v0.94.3 > jessie main Thanks! Will test it as soon as I get back to work next week. Regards, -- j.hofmülle

Re: [ceph-users] [sepia] debian jessie repository ?

2015-09-25 Thread Jogi Hofmüller
Hi, On 2015-09-11 at 13:20, Florent B wrote: > Jessie repository will be available on next Hammer release ;) And how should I continue installing ceph meanwhile? ceph-deploy new ... overwrites the /etc/apt/sources.list.d/ceph.list and hence throws an error :( Any hint appreciated. Cheers, --
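One way around ceph-deploy rewriting ceph.list is to hand it the repository explicitly via its --repo-url option. A sketch that only prints the command; the URL is the gitbuilder one quoted elsewhere in this thread, and the hostname "ceph1" is a placeholder:

```shell
# Point ceph-deploy at an explicit repo instead of letting it rewrite
# ceph.list. Hostname is a placeholder. Printed, not executed.
REPO_URL="http://gitbuilder.ceph.com/ceph-deb-jessie-x86_64-basic/ref/v0.94.3"
INSTALL_CMD="ceph-deploy install --repo-url ${REPO_URL} ceph1"
echo "${INSTALL_CMD}"
```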

Re: [ceph-users] new cluster does not reach active+clean

2013-10-03 Thread Jogi Hofmüller
Hi Tyler, On 2013-10-03 at 13:22, Tyler Brekke wrote: > You can add this to your ceph conf to distribute by device rather than node. > > osd crush chooseleaf type = 0 Great! Thanks for reminding me. I had that in previous setups but forgot it this time. > This information is also available on
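The setting Tyler quotes goes into ceph.conf before the cluster is created. A sketch of where it lives:

```ini
[global]
; 0 = place replicas across OSDs; the default is 1 (host buckets).
; Only sensible for single-node or small test clusters.
osd crush chooseleaf type = 0
```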

Re: [ceph-users] trouble adding OSDs - which documentation to use

2013-10-03 Thread Jogi Hofmüller
Dear all, This is getting weird now ... On 2013-10-03 at 11:18, Jogi Hofmüller wrote: > root@ceph-server1:~# service ceph start > === osd.0 === > No filesystem type defined! This message is generated by /etc/init.d/ceph (OK, most of you know that I guess), which is looking for "
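The init script of that era derived the filesystem type from per-OSD config entries. A hedged ceph.conf sketch; the key names are as I understand the dumpling-era /etc/init.d/ceph and should be verified against your copy of the script, and the device path is a placeholder:

```ini
[osd.0]
; keys as the dumpling-era init script expects them (verify against
; your /etc/init.d/ceph); device path is a placeholder
osd mkfs type = xfs
devs = /dev/sdb1
```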

[ceph-users] new cluster does not reach active+clean

2013-10-03 Thread Jogi Hofmüller
Dear all, Hope I am not getting on everyone's nerves by now ;) Just started over and created a new cluster: one monitor (ceph-mon0), one osd-server (ceph-rd0). After activating the two OSDs on ceph-rd0 the cluster reaches a state of active+degraded and never becomes healthy. Unfortunately this particula

Re: [ceph-users] trouble adding OSDs - which documentation to use

2013-10-03 Thread Jogi Hofmüller
Hi Wolfgang, On 2013-10-02 at 09:01, Wolfgang Hennerbichler wrote: > On 10/01/2013 05:08 PM, Jogi Hofmüller wrote: >> Is this [1] outdated? If not, why are the links to chef-* not >> working? Is chef-* still recommended/used? > > I believe this is a matter of taste. I c

Re: [ceph-users] OSD: Newbie question regarding ceph-deploy osd create

2013-10-01 Thread Jogi Hofmüller
Hi Piers, On 2013-09-27 at 22:59, Piers Dawson-Damer wrote: > I'm trying to set up my first cluster, (have never manually > bootstrapped a cluster) I am about at the same stage here ;) > Is ceph-deploy osd activate/prepare supposed to write to the master > ceph.conf file, specific entries for e

[ceph-users] trouble adding OSDs - which documentation to use

2013-10-01 Thread Jogi Hofmüller
Dear all, I am back to managing the cluster before starting to use it even on a test host. First of all a question regarding the docs: Is this [1] outdated? If not, why are the links to chef-* not working? Is chef-* still recommended/used? After adding a new OSD (with ceph-deploy version 1.2.

[ceph-users] authentication trouble

2013-09-26 Thread Jogi Hofmüller
Dear all, I am fairly new to ceph and just in the process of testing it using several virtual machines. Now I tried to create a block device on a client and fumbled with settings for about an hour or two until the command line rbd --id dovecot create home --size=1024 finally succeeded. The k
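For "rbd --id dovecot" to work, a client.dovecot key with cephx caps on the target pool has to exist. A hedged sketch of creating one; the pool name "rbd" is an assumption, and the command is only printed here:

```shell
# client.dovecot needs cephx caps on the pool rbd writes to.
# Pool name is an assumption. Printed, not executed.
AUTH_CMD="ceph auth get-or-create client.dovecot mon 'allow r' osd 'allow rwx pool=rbd'"
echo "${AUTH_CMD}"
```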