> performing RBD diffs.
Yes, we are also doing 'rbd export-diff' on snapshots. So this could be the
cause, too.
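For reference, this is roughly what our diff exports look like (pool, image
and snapshot names are made up):

    # full export up to the first snapshot
    rbd export-diff rbd/vm-disk@daily-1 vm-disk-daily-1.diff
    # incremental diff between two snapshots
    rbd export-diff --from-snap daily-1 rbd/vm-disk@daily-2 vm-disk-daily-1-2.diff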
Regards
Christian
on a heavily loaded cluster?
TIA
Christian
> I can get some input for
> when I do try and tackle the problem next year.
Is there already a ticket for this issue in the bug tracker? I think this is
an important issue.
Regards
Christian
the situation is much
appreciated.
In the meantime, I'll be experimenting with pre-seeding the VFS cache to speed
things up at least a little bit.
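A minimal sketch of what I mean by pre-seeding (the backup directory path is
made up; this just reads every file once so dentries, inodes and page cache
are warm before the actual run):

    find /srv/backup -type f -print0 | xargs -0 cat > /dev/null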
Regards
Christian
> so make sure they are not under load.
I don't think this is an issue here. Our MONs don't use more than 5% CPU
during the operation and don't cause significant amounts of disk I/O.
Regards
Christian
where an admin refrained from performing a similar
change because he was daunted by what he read on ceph.com.
Regards
Christian
Alternatively, we have considered removing nf_conntrack completely. This,
however, is not possible since we use host-based firewalling and nf_conntrack
is wired quite deeply into Linux' firewall code.
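A possible middle ground (untested here; the port range is an assumption,
adjust it to your cluster) would be to exempt Ceph traffic from connection
tracking instead of removing nf_conntrack entirely:

    # raw table rules bypass conntrack for matching packets
    iptables -t raw -A PREROUTING -p tcp --dport 6789:6900 -j NOTRACK
    iptables -t raw -A OUTPUT -p tcp --dport 6789:6900 -j NOTRACK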
Just sharing our experience in case someone runs into the same problem.
Regards
Christian
making rapid progress. It would be
great if you'd try it, spot bugs, contribute code etc. Help is appreciated. :-)
PyPI page: https://pypi.python.org/pypi/backy/
Pull requests go here: https://bitbucket.org/ctheune/backy
Christian Theune is the primary contact.
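Installation should be as simple as (assuming the usual PyPI workflow):

    pip install backy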
HTH
Christian
default stripe size of RBD volumes. So in consequence, does this mean we
should go with a larger RBD object size than the default (4 MiB)?
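For illustration, creating an image with 8 MiB instead of 4 MiB objects
(image name and size are made up; --order is log2 of the object size, so
23 gives 2^23 bytes = 8 MiB):

    rbd create --size 102400 --order 23 rbd/testvol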
Regards
Christian
> reasons we continue to
> look to btrfs as our long-term goal).
When thinking at petabyte scale, bit rot is going to happen as a matter of
fact. So I think Ceph should be prepared for it, at least when there are more
than 2 replicas.
Regards
Christian
guards against local bit rot (e.g.,
when a local disk returns incorrect data). Or is there already a voting scheme
in place during deep scrub?
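For the record, deep scrubs and repairs can at least be triggered manually
(the PG id is made up):

    ceph pg deep-scrub 4.7f   # read and compare object data across replicas
    ceph pg repair 4.7f       # try to fix a PG flagged inconsistent

As far as I understand, repair simply overwrites the replicas with the
primary's copy, which is exactly why I'm asking about a voting scheme.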
Regards
Christian
Hi,
the "Documentation" link on the ceph.com home page leads to a 403 error page.
Is this a web server malfunction/misconfiguration, or do the docs live under
different URLs now?
Regards
Christian
ction": "86.37_head",
"oid":
"a63e7df7\/rbd_data.1933fe2ae8944a.042c\/head\/\/86",
"name": "snapset",
"length": 31}]}
2014-06-08 22:15:35.255955 7f850ac25700 -1 os/FileStore.cc: In function
The default replication factor used to be 2. A replication
factor of 3 incurs significantly more space overhead. Has a replication factor
of 2 been proven to be insecure?
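For context, this is the per-pool setting in question (pool name is made up):

    ceph osd pool set rbd size 3      # number of replicas
    ceph osd pool set rbd min_size 2  # replicas required to accept I/O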
Regards
Christian
corrupted
filesystems inside the VMs as well as scrub errors afterwards.
How can this be? Isn't Ceph designed to handle network failures? Obviously,
running nf_conntrack on Ceph hosts is not a brilliant idea, but it simply was
present here. But I don't think that dropping network packets should lead to
data corruption.
guration/pool-pg-config-ref/ comes out
with 333, which is certainly not a power of two.
If using a power of two for the number of PGs provides a benefit, I would be
happy to know more about it.
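(For example, with 10 OSDs and 3 replicas the documented formula gives
10 * 100 / 3 ≈ 333 PGs; rounding up to the next power of two would mean 512.
The OSD count here is only an assumption chosen to make the 333 come out.)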
Regards
Christian
http://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds
The principles shown there can possibly be adapted to your use case.
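As a sketch, the key ingredients are a separate CRUSH root/rule pair and
pointing the pool at the rule (names and ruleset number are made up; the
syntax follows the documentation above):

    rule ssd {
            ruleset 4
            type replicated
            min_size 1
            max_size 10
            step take ssd
            step chooseleaf firstn 0 type host
            step emit
    }

followed by: ceph osd pool set <pool> crush_ruleset 4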
Regards
Christian
The last time I had to take
an OSD out of a cluster, I marked it "out" and removed it from the CRUSH map
at the same time. I don't know if this is the recommended way [1], but it
seemed to work.
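For completeness, the sequence from [1] is roughly (OSD id is made up):

    ceph osd out 12                 # drain the OSD, wait for rebalancing
    # stop the ceph-osd daemon, then:
    ceph osd crush remove osd.12    # remove it from the CRUSH map
    ceph auth del osd.12            # delete its authentication key
    ceph osd rm 12                  # remove the OSD from the cluster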
Regards
Christian
[1]
http://ceph.com/docs/master/rados/operations/add-or-rm-osds/#removing
> spectacularly fragmented due to COW and overwrites. There's a thread from a
> couple of weeks ago called "rados io hints" that you may want to look
> at/contribute to.
Thank you for the hint. Sage's proposal on ceph-devel sounds good, so I'll
wait for an implementation.
Christian
been written so far. But I'm not sure about the exact
pattern of OSD/filesystem interaction.
HTH
Christian
> ceph osd crush
> create-or-move -- 0 1.82
> root=default host=rokix
EINVAL is rather generic. What do the log files say (usually
/var/log/ceph/*.log)?
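For comparison, this form works for me (weight and names taken from your
command, the OSD id is a guess):

    ceph osd crush create-or-move osd.0 1.82 root=default host=rokix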
Regards
Christian
single PG is more than ten times the average
PG size.
So there is no hard-and-fast rule for PG sizing, just some heuristics that
should be observed.
@Ceph devs - please correct me if I'm wrong.
HTH
Christian
How do I dig further? As far as I know, the knowledge of which objects map to a
specific rbd volume is hidden somewhere inside rbd. Is there a way to
summarize per-volume I/O for a given pool?
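For reference, the only mapping I'm aware of: every rbd volume has a
block_name_prefix, and all of its data objects carry that prefix (pool and
image names are made up):

    rbd info rbd/vm-disk | grep block_name_prefix
    # e.g.: block_name_prefix: rbd_data.1933fe2ae8944a
    rados -p rbd ls | grep '^rbd_data.1933fe2ae8944a' | wc -l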
TIA
Christian
comparing Ceph on 7.2k rpm SATA disks against iSCSI on 15k rpm
SAS disks is not fair. The random access times of 15k SAS disks are hugely
better than those of 7.2k SATA disks. It would be far more interesting to
compare Ceph against iSCSI with identical disks.
Regards
Christian