Re: [ceph-users] all three mons segfault at same time

2015-12-17 Thread Arnulf Heimsbakk
That's good to hear. My experience was pretty much the same. But depending on the load on the cluster, I got anywhere from a couple of crashes an hour to one a day after I upgraded everything. I'm interested to hear if your cluster stays stable over time. -Arnulf On 11/10/2015 07:09 PM, Logan V. wrote: > I am

Re: [ceph-users] all three mons segfault at same time

2015-12-17 Thread Arnulf Heimsbakk
/var/log/clusterboot/lsn-mc1007/syslog <==
> Nov 10 10:08:24 lsn-mc1007 kernel: [6392637.614495] init: ceph-mon (ceph/lsn-mc1007) main process (2013418) killed by SEGV signal
> Nov 10 10:08:24 lsn-mc1007 kernel: [6392637.614504] init: ceph-mon (ceph/lsn-mc1007) main process ended, res
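
For context, the quoted lines look like multi-file tail output from the mon hosts' syslogs. A minimal sketch of how one might watch for these segfault messages; the hostname and log paths are taken from the excerpt and are purely illustrative:

  # Scan a mon host's syslog for ceph-mon segfaults like the ones quoted above.
  grep -E 'init: ceph-mon.*killed by SEGV signal' /var/log/syslog

  # Or follow the collected per-host syslogs live; the "... <==" header in the
  # excerpt suggests tail's multi-file output format.
  tail -F /var/log/clusterboot/*/syslog | grep --line-buffered 'killed by SEGV'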

[ceph-users] all three mons segfault at same time

2015-11-02 Thread Arnulf Heimsbakk
When I did an unset noout on the cluster, all three mons got a segmentation fault, then continued as if nothing had happened. Regular segmentation faults started on the mons after upgrading to 0.94.5. Ubuntu Trusty LTS. Has anyone seen anything similar? -Arnulf Backtraces: mon1: #0 0x7f0b2969120b in raise (si
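
A minimal sketch of the flag commands involved and of pulling a backtrace like the ones attached; the binary and core file paths are assumptions for illustration, not details from the post:

  ceph osd set noout        # keep OSDs marked "in" during maintenance
  ceph osd unset noout      # the step that coincided with the mon segfaults

  # Capturing a backtrace from a ceph-mon core dump (paths illustrative; the
  # PID matches the syslog excerpt above):
  ulimit -c unlimited
  gdb /usr/bin/ceph-mon core.2013418 -batch -ex 'bt'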

[ceph-users] ceph-mon segmentation faults after upgrade from 0.94.3 to 0.94.5

2015-10-29 Thread Arnulf Heimsbakk
Hi, we have multiple Ceph clusters. One is used as the backend for an OpenStack installation for developers - this is where we test Ceph upgrades before we upgrade the prod Ceph clusters. The Ceph cluster is 4 nodes with 12 osds each, running Ubuntu Trusty with the latest 3.13 kernel. This time when upgrading fro
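
A rough sketch of the kind of rolling point-release upgrade being tested here (Hammer 0.94.3 to 0.94.5 on Ubuntu Trusty); the package list and upstart job names are assumptions about that era's packaging, not details from the post:

  ceph osd set noout                       # avoid rebalancing while daemons restart
  apt-get update && apt-get install -y ceph ceph-common
  restart ceph-mon-all                     # upstart jobs on Trusty
  restart ceph-osd-all
  ceph -s                                  # wait for HEALTH_OK before the next node
  ceph osd unset noout                     # only after every node is upgraded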

[ceph-users] Question about CRUSH object placement

2014-01-20 Thread Arnulf Heimsbakk
Hi, I'm trying to understand the CRUSH algorithm and how it distributes data. Let's say I simplify a small datacenter setup and map it hierarchically in the crush map as shown below (the ASCII diagram is truncated in the archive: a root, a datacenter level, and branches a and b).
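
A few illustrative, read-only commands for inspecting how CRUSH maps objects onto a hierarchy like the one sketched above; the rule number and replica count are placeholders:

  ceph osd getcrushmap -o crushmap.bin        # dump the cluster's compiled CRUSH map
  crushtool -d crushmap.bin -o crushmap.txt   # decompile to the readable text form
  ceph osd tree                               # show the root/datacenter/host hierarchy
  crushtool -i crushmap.bin --test --show-mappings --rule 0 --num-rep 2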