Hi,
I had a server failure that started with one disk failing:
Oct 14 03:25:04 s3-10-177-64-6 kernel: [1027237.023986] sd 4:2:26:0: [sdaa] Unhandled error code
Oct 14 03:25:04 s3-10-177-64-6 kernel: [1027237.023990] sd 4:2:26:0: [sdaa] Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
Oct 14 03:25:04
Hi,
I have found something.
After the restart, the time on the server was wrong (+2 hours) until NTP fixed it.
I restarted these 3 OSDs - it did not help.
Is it possible that ceph banned these OSDs? Or, after starting with the
wrong time, did the OSD corrupt its own filestore?
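A first diagnostic step here (a sketch, assuming a working `ceph` admin client on the host) is to check whether the cluster has marked those OSDs down or out, and whether it is reporting clock skew:

```shell
# Overall cluster state; clock-skew problems surface here as HEALTH_WARN
ceph -s

# Per-OSD up/down and in/out status - the three restarted OSDs
# should be visible in this listing
ceph osd tree

# Detailed health output, naming the affected daemons
ceph health detail
```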
--
Regards
Dominik
2013/10/14 Dominik Mostowiec
On 14/10/2013 08:31, Dan Bode wrote:
On Sun, Oct 13, 2013 at 4:34 PM, Loic Dachary l...@dachary.org
mailto:l...@dachary.org wrote:
Hi Dan,
I'm looking for the path of least resistance to add rbd support to
https://github.com/CiscoSystems/openstack-installer/ Being
-- All Branches --
Alfredo Deza alfr...@deza.pe
2013-09-27 10:33:52 -0400 wip-5900
Dan Mick dan.m...@inktank.com
2012-12-18 12:27:36 -0800 wip-rbd-striping
2013-07-16 23:00:06 -0700 wip-5634
David Zafman david.zaf...@inktank.com
2013-01-28 20:26:34 -0800
Is osd.47 the one with the bad disk? It should not start.
If there are other osds on the same host that aren't started with 'service
ceph start', you may have to mention them by name (the old version of the
script would stop on the first error instead of continuing). e.g.,
service ceph start
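Naming the daemons explicitly might look like this (a sketch; the OSD IDs below are placeholders for the other OSDs on that host):

```shell
# Start the remaining OSD daemons on this host by name, so the one
# with the failed disk (e.g. osd.47) does not block the others.
# Substitute your own OSD IDs.
service ceph start osd.48
service ceph start osd.49
```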
Is there any more information on an ETA?
--
To unsubscribe from this list: send the line unsubscribe ceph-devel in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On 10/14/13 07:48, Thierry Reding wrote:
Hi all,
I've uploaded today's linux-next tree to the master branch of the
repository below:
git://gitorious.org/thierryreding/linux-next.git
A next-20131014 tag is also provided for convenience.
Gained a few conflicts, but nothing too
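Fetching the announced tree and tag (assuming the gitorious URL above is reachable) would typically be:

```shell
# Clone the announced linux-next tree and check out the dated tag
git clone git://gitorious.org/thierryreding/linux-next.git
cd linux-next
git checkout next-20131014
```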
On 10/07/2013 08:49 PM, Eric Eastman wrote:
I am not sure if this is a bs_rbd, tgt, or zfs issue, but I can reliably
crash my CentOS 6.4 system running tgt 1.0.40 using a bs_rbd backstore
by creating a zpool. Using tgt with a file-backed store does not panic
the system when creating a zpool.
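For reference, a minimal sketch of exporting an RBD image through tgt's bs_rbd backstore (the iqn and pool/image names are placeholders; assumes tgt was built with bs_rbd support):

```shell
# Create an iSCSI target (the iqn is a placeholder)
tgtadm --lld iscsi --op new --mode target --tid 1 \
    -T iqn.2013-10.example:rbd-test

# Attach an existing RBD image as LUN 1 via the bs_rbd backstore
# (pool/image is a placeholder for a real RBD image)
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
    --bstype rbd --backing-store pool/image
```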
On 14/10/2013 19:34, Dan Bode wrote:
On 14/10/2013 08:31, Dan Bode wrote:
On Sun, Oct 13, 2013 at 4:34 PM, Loic Dachary l...@dachary.org
mailto:l...@dachary.org wrote:
Hi Dan,
I'm looking