Re: [ceph-users] rbd snap ls: how much locking is involved?

2016-01-21 Thread Christian Kauhaus
… RBD diffs. Yes, we are also doing 'rbd export-diff' on snapshots. So this could be the cause, too. Regards, Christian
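A minimal sketch of the kind of snapshot diff export mentioned above; pool, image, and snapshot names are placeholders:

    rbd snap create rbd/vm-disk@backup-2016-01-21
    rbd export-diff --from-snap backup-2016-01-20 rbd/vm-disk@backup-2016-01-21 vm-disk-2016-01-21.diff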

[ceph-users] rbd snap ls: how much locking is involved?

2016-01-21 Thread Christian Kauhaus
… on a heavily loaded cluster? TIA, Christian
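For reference, the command in question, with placeholder pool/image names; timing it is one crude way to see whether it stalls on a busy cluster:

    time rbd snap ls rbd/vm-disk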

Re: [ceph-users] Blocked requests after "osd in"

2015-12-11 Thread Christian Kauhaus
> … I can get some input for when I do try and tackle the problem next year.
Is there already a ticket for this issue in the bug tracker? I think this is an important issue. Regards, Christian

Re: [ceph-users] Blocked requests after "osd in"

2015-12-10 Thread Christian Kauhaus
… the situation is much appreciated. In the meantime, I'll be experimenting with pre-seeding the VFS cache to speed things up at least a little bit. Regards, Christian
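One hedged way to pre-seed the VFS cache as described above, assuming the OSD's data lives under /var/lib/ceph/osd/ceph-0 (stat-ing every file pulls dentries and inodes into the cache without reading the data itself):

    find /var/lib/ceph/osd/ceph-0 -xdev -type f -print0 | xargs -0 stat > /dev/null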

Re: [ceph-users] Blocked requests after "osd in"

2015-12-09 Thread Christian Kauhaus
> … so make sure they are not under load.
I don't think this is an issue here: our MONs don't use more than 5% CPU during the operation and don't cause significant amounts of disk I/O. Regards, Christian

[ceph-users] Blocked requests after "osd in"

2015-12-09 Thread Christian Kauhaus

Re: [ceph-users] Small Cluster Re-IP process

2014-10-23 Thread Christian Kauhaus
… where an admin refrained from performing a similar change because he was daunted by what he read on ceph.com. Regards, Christian

Re: [ceph-users] nf_conntrack overflow crashes OSDs

2014-08-08 Thread Christian Kauhaus

[ceph-users] nf_conntrack overflow crashes OSDs

2014-08-08 Thread Christian Kauhaus
… we have considered removing nf_conntrack completely. This, however, is not possible since we use host-based firewalling and nf_conntrack is wired quite deeply into Linux's firewall code. Just to share our experience in case someone runs into the same problem. Regards, Christian
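A hedged sketch of the usual mitigations for conntrack table overflow when the module cannot be removed; the limit value is an example only, and 6800-7300 is the default Ceph OSD port range:

    # check current usage and limit
    sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max
    # raise the limit
    sysctl -w net.netfilter.nf_conntrack_max=1048576
    # or exempt OSD traffic from connection tracking entirely
    iptables -t raw -A PREROUTING -p tcp --dport 6800:7300 -j NOTRACK
    iptables -t raw -A OUTPUT -p tcp --sport 6800:7300 -j NOTRACK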

Re: [ceph-users] Ceph RBD and Backup.

2014-07-03 Thread Christian Kauhaus
… making rapid progress. It would be great if you'd try it, spot bugs, contribute code, etc. Help is appreciated. :-)
PyPI page: https://pypi.python.org/pypi/backy/
Pull requests go here: https://bitbucket.org/ctheune/backy
Christian Theune is the primary contact. HTH, Christian

Re: [ceph-users] How to improve performance of ceph objcect storage cluster

2014-06-27 Thread Christian Kauhaus
… default stripe size of RBD volumes. So, in consequence, does this mean going with a larger RBD object size than the default (4 MiB)? Regards, Christian
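If one did want to try a larger object size, a minimal sketch with placeholder pool/image names; --order sets the object size to 2^order bytes, so 23 gives 8 MiB instead of the default 22 (4 MiB):

    rbd create --size 102400 --order 23 rbd/big-objects
    rbd info rbd/big-objects    # should report order 23 / 8192 kB objects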

Re: [ceph-users] Behaviour of ceph pg repair on different replication levels

2014-06-26 Thread Christian Kauhaus
> … reasons we continue to look to btrfs as our long-term goal).
When thinking at petabyte scale, bit rot is going to happen as a matter of fact. So I think Ceph should be prepared, at least when there are more than 2 replicas. Regards, Christian
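For completeness, a sketch of running a pool with three replicas (pool name is a placeholder):

    ceph osd pool set rbd size 3
    ceph osd pool set rbd min_size 2
    ceph osd dump | grep 'replicated size'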

Re: [ceph-users] Behaviour of ceph pg repair on different replication levels

2014-06-25 Thread Christian Kauhaus
… guards against local bit rot (e.g., when a local disk returns incorrect data). Or is there already a voting scheme in place during deep scrub? Regards, Christian
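The operations under discussion, as a minimal sketch with a made-up PG id:

    ceph pg deep-scrub 2.1f
    ceph health detail      # lists PGs reporting scrub errors / inconsistencies
    ceph pg repair 2.1f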

[ceph-users] 403 error on http://ceph.com/docs/master/

2014-06-23 Thread Christian Kauhaus
Hi, the "Documentation" link on the ceph.com home page leads to a 403 error page. Is this a web server malfunction/misconfiguration, or do the docs live under different URLs now? Regards, Christian

[ceph-users] trying to interpret lines in osd.log

2014-06-23 Thread Christian Kauhaus

[ceph-users] error (24) Too many open files

2014-06-12 Thread Christian Kauhaus
ction": "86.37_head", "oid": "a63e7df7\/rbd_data.1933fe2ae8944a.042c\/head\/\/86", "name": "snapset", "length": 31}]} 2014-06-08 22:15:35.255955 7f850ac25700 -1 os/FileStore.cc: In function

Re: [ceph-users] OSDs

2014-06-12 Thread Christian Kauhaus
… used to be 2. A replication factor of 3 incurs significantly more space overhead. Has a replication factor of 2 been proven to be insecure? Regards, Christian

[ceph-users] FAILED assert(_size >= 0) during recovery - need to understand what's going on

2014-06-10 Thread Christian Kauhaus
… corrupted filesystems inside the VMs as well as scrub errors afterwards. How can this be? Isn't Ceph designed to handle network failures? Obviously, running nf_conntrack on Ceph hosts is not a brilliant idea, but it simply was present here. But I don't think that dropping network packets should …

Re: [ceph-users] PG Scaling

2014-03-14 Thread Christian Kauhaus
…guration/pool-pg-config-ref/ comes out with 333, which is certainly not a power of two. If using a power of two for the number of PGs provides a benefit, I would be happy to know more about it. Regards, Christian
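For context, the rule of thumb behind the 333 figure is roughly (number of OSDs * 100) / replica count, rounded up to the next power of two; a made-up example:

    # 10 OSDs, size 3: 10 * 100 / 3 = 333  ->  round up to 512
    ceph osd pool create mypool 512 512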

Re: [ceph-users] smart replication

2014-02-20 Thread Christian Kauhaus
http://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds
The principles shown there can probably be adapted to your use case. Regards, Christian
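A rough sketch of such a rule in the decompiled CRUSH map, assuming a root named 'ssd' and a placeholder pool name; see the linked doc for the full edit/compile/inject cycle:

    rule ssd-only {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
    }

    ceph osd pool set mypool crush_ruleset 1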

Re: [ceph-users] Removing OSD, double data migration

2014-02-13 Thread Christian Kauhaus
… last time I had to take an OSD out of a cluster, I marked it "out" and removed it from the CRUSH map at the same time. Don't know if this is the recommended way, but it seemed to work. Regards, Christian
[1] http://ceph.com/docs/master/rados/operations/add-or-rm-osds/#removing
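For reference, the removal sequence from the linked documentation, sketched for a hypothetical osd.12:

    ceph osd out 12
    # wait for rebalancing to finish, stop the daemon, then remove it
    ceph osd crush remove osd.12
    ceph auth del osd.12
    ceph osd rm 12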

Re: [ceph-users] filesystem fragmentation on ext4 OSD

2014-02-07 Thread Christian Kauhaus
> … spectacularly fragmented due to COW and overwrites. There's a thread from a couple of weeks ago called "rados io hints" that you may want to look at/contribute to.
Thank you for the hint. Sage's proposal on ceph-devel sounds good, so I'll wait for an implementation …

Re: [ceph-users] filesystem fragmentation on ext4 OSD

2014-02-07 Thread Christian Kauhaus

Re: [ceph-users] filesystem fragmentation on ext4 OSD

2014-02-06 Thread Christian Kauhaus
… written so far. But I'm not sure about the exact pattern of OSD/filesystem interaction. HTH, Christian
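One hedged way to measure the fragmentation being discussed, assuming an ext4 OSD mounted at /var/lib/ceph/osd/ceph-0:

    # extent counts for a sample of RBD object files
    find /var/lib/ceph/osd/ceph-0/current -type f -name '*rbd*data*' | head -n 20 | xargs filefrag
    # read-only, filesystem-wide fragmentation report
    e4defrag -c /var/lib/ceph/osd/ceph-0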

[ceph-users] filesystem fragmentation on ext4 OSD

2014-02-06 Thread Christian Kauhaus

Re: [ceph-users] EINVAL: (22) Invalid argument when starting osds

2014-01-30 Thread Christian Kauhaus
> … crush create-or-move -- 0 1.82 root=default host=rokix
EINVAL is rather generic. What do the log files say (usually /var/log/ceph/*.log)? Regards, Christian

Re: [ceph-users] Calculating required number of PGs per pool

2014-01-29 Thread Christian Kauhaus
… single PG is more than ten times the average PG size. So there is no hard-and-fast rule for PG sizing. I see some heuristics which should be observed. @Ceph devs - please correct me if I'm wrong. HTH, Christian

[ceph-users] rbd: how to find volumes with high I/O rates?

2014-01-23 Thread Christian Kauhaus
… How to dig further? As far as I know, the knowledge of which objects map to a specific rbd volume is hidden somewhere inside rbd. Is there a way to summarize per-volume I/O for a given pool? TIA, Christian
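A hedged way to connect objects back to a volume: all data objects of an image share the block_name_prefix that rbd info reports (pool, image name, and prefix below are made up):

    rbd info rbd/vm-disk | grep block_name_prefix   # e.g. rbd_data.102a74b0dc51
    rados -p rbd ls | grep rbd_data.102a74b0dc51 | wc -l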

Re: [ceph-users] Ceph Performance

2014-01-09 Thread Christian Kauhaus
… comparing Ceph on 7.2k rpm SATA disks against iSCSI on 15k rpm SAS disks is not fair. The random access times of 15k SAS disks are hugely better than those of 7.2k SATA disks. What would be far more interesting is to compare Ceph against iSCSI with identical disks. Regards, Christian
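If the comparison were re-run on identical disks, the built-in benchmark is a reasonable starting point (pool name and runtime are placeholders):

    rados bench -p testpool 60 write --no-cleanup
    rados bench -p testpool 60 seq
    rados bench -p testpool 60 rand
    rados -p testpool cleanup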