Re: [ceph-users] Ceph-osd Daemon Receives Segmentation Fault on Trusty After Upgrading to 0.94.10 Release

2017-03-21 Thread Alexey Sheplyakov
> to false and restart Ceph daemons in order.
>
> Thanks for all your support.
>
> Özhan
>
> On Tue, Mar 21, 2017 at 3:27 PM, Alexey Sheplyakov <asheplya...@mirantis.com> wrote:
>>
>> Hi,
>>
>> This looks like a bug [1]. You can work it

Re: [ceph-users] Ceph-osd Daemon Receives Segmentation Fault on Trusty After Upgrading to 0.94.10 Release

2017-03-21 Thread Alexey Sheplyakov
Hi,

This looks like a bug [1]. You can work around it by disabling the fiemap feature, like this:

    [osd]
    filestore fiemap = false

Fiemap should have been disabled by default; perhaps you've explicitly enabled it?

[1] http://tracker.ceph.com/issues/19323

Best regards,
Alexey

On Tue, Mar
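A minimal sketch of applying the workaround cluster-wide; the OSD id, the config path, and the upstart restart syntax (typical for Hammer on Trusty) are assumptions that depend on how the cluster was deployed:

    # add to /etc/ceph/ceph.conf on every OSD host
    [osd]
    filestore fiemap = false

    # restart the OSDs one at a time and verify the setting took effect
    sudo restart ceph-osd id=12
    sudo ceph daemon osd.12 config get filestore_fiemap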

Re: [ceph-users] Ceph Volume Issue

2016-11-17 Thread Alexey Sheplyakov
Hi,

please share some details about your cluster (especially the hardware):

- How many OSDs are there? How many disks per OSD machine?
- Do you use dedicated (SSD) OSD journals?
- RAM size, CPU model, network card bandwidth/model?
- Do you have a dedicated cluster network?
- How many VMs (in
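Most of this information can be collected with standard tools; a sketch, assuming shell access to a monitor node and to one of the OSD hosts:

    ceph osd tree                  # OSD count and layout
    ceph df                        # pools and raw capacity
    lscpu && free -h               # CPU model and RAM on an OSD host
    lsblk -o NAME,SIZE,ROTA,MODEL  # disks per OSD machine, SSD vs HDD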

Re: [ceph-users] Jewel 10.2.2 - Error when flushing journal

2016-09-12 Thread Alexey Sheplyakov
--debug_filestore 20/20 --debug_journal 20/20 -i 10

Best regards,
Alexey

On Fri, Sep 9, 2016 at 11:24 AM, Mehmet <c...@elchaka.de> wrote:
> Hello Alexey,
>
> thank you for your mail - my answers inline :)
>
> On 2016-09-08 16:24, Alexey Sheplyakov wrote:
>
>> Hi,
>>
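These flags appear to belong to a ceph-osd --flush-journal invocation; a sketch of the full command, assuming the OSD (id 10 here, taken from the "-i 10" in the snippet) is already stopped:

    ceph-osd --flush-journal --debug_filestore 20/20 --debug_journal 20/20 -i 10
    # the verbose output lands in /var/log/ceph/ceph-osd.10.log by default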

Re: [ceph-users] Jewel 10.2.2 - Error when flushing journal

2016-09-08 Thread Alexey Sheplyakov
Hi,

> root@:~# ceph-osd -i 12 --flush-journal
> SG_IO: questionable sense data, results may be incorrect
> SG_IO: questionable sense data, results may be incorrect

As far as I understand, these lines are an hdparm warning (the OSD uses the hdparm command to query the journal device's write cache state). The
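The same probe can be reproduced by hand to see where the warning comes from; a sketch, with /dev/sdX standing in for the actual journal device:

    hdparm -W /dev/sdX
    # prints the state of the drive's volatile write cache,
    # e.g. "write-caching = 1 (on)"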

Re: [ceph-users] Turn snapshot of a flattened snapshot into regular image

2016-09-05 Thread Alexey Sheplyakov
Eugen,

> It seems as if the nova snapshot creates a full image (flattened) so it doesn't depend on the base image.

As far as I understand, a (nova) snapshot is actually a standalone image (so you can boot it, convert it to a volume, etc.). The snapshot method of the nova libvirt driver invokes the
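At the RBD level the same result can be reproduced manually; a sketch with made-up pool/image/snapshot names, showing how a clone of a snapshot becomes a regular, self-contained image:

    rbd snap protect images/base@snap1          # a clone requires a protected snapshot
    rbd clone images/base@snap1 images/regular  # the clone still references the parent
    rbd flatten images/regular                  # copy the parent data, drop the dependency
    rbd info images/regular                     # no "parent:" line any more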

Re: [ceph-users] help troubleshooting some osd communication problems

2016-04-29 Thread Alexey Sheplyakov
Hi,

> i also wonder if just taking 148 out of the cluster (probably just marking it out) would help

As far as I understand, this can only harm your data. The acting set of PG 17.73 is [41, 148], so after stopping/taking out OSD 148, OSD 41 will store the only copy of the objects in PG 17.73 (so it
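Before touching either OSD it is worth confirming what the PG actually looks like; a sketch of the relevant queries (the PG id 17.73 comes from the thread):

    ceph pg map 17.73                # up set and acting set, e.g. [41,148]
    ceph pg 17.73 query | less       # detailed peering and recovery state
    ceph osd dump | grep '^pool 17 ' # the pool's replica count (size)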

Re: [ceph-users] OSDs are crashing during PG replication

2016-02-26 Thread Alexey Sheplyakov
> But I set target_max_bytes:
>
> # ceph osd pool set cache target_max_bytes 10000
>
> Could that be the reason?
>
> On Wed, Feb 24, 2016 at 4:08 PM, Alexey Sheplyakov <asheplya...@mirantis.com> wrote:
>>
>> Hi,
>>
>> > 0> 20
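A sketch of commands for checking which cache-tier limits are actually in effect on the pool (the pool name "cache" is taken from the quoted command):

    ceph osd pool get cache target_max_bytes
    ceph osd pool get cache target_max_objects
    ceph osd pool get cache cache_target_dirty_ratio
    ceph osd pool get cache cache_target_full_ratio
    ceph df detail    # current usage of the cache pool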

Re: [ceph-users] Bug in rados bench with 0.94.6 (regression, not present in 0.94.5)

2016-02-26 Thread Alexey Sheplyakov
Christian,

> Note that "rand" works fine, as does "seq" on a 0.94.5 cluster.

Could you please check if a 0.94.5 ("old") *client* works with 0.94.6 ("new") servers, and vice versa?

Best regards,
Alexey

On Fri, Feb 26, 2016 at 9:44 AM, Christian Balzer wrote:
>
> Hello,
>
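A sketch of the cross-version check being suggested, run from a client node against a test pool (the pool name is made up; "seq" and "rand" need objects written beforehand with --no-cleanup):

    rados bench -p testpool 60 write --no-cleanup   # populate benchmark objects
    rados bench -p testpool 60 seq                  # sequential reads
    rados bench -p testpool 60 rand                 # random reads
    rados -p testpool cleanup                       # remove the benchmark objects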

Re: [ceph-users] OSDs are crashing during PG replication

2016-02-24 Thread Alexey Sheplyakov
Hi,

> 0> 2016-02-24 04:51:45.884445 7fd994825700 -1 osd/ReplicatedPG.cc: In
> function 'int ReplicatedPG::fill_in_copy_get(ReplicatedPG::OpContext*,
> ceph::buffer::list::iterator&, OSDOp&, ObjectContextRef&, bool)' thread
> 7fd994825700 time 2016-02-24 04:51:45.870995 osd/ReplicatedPG.cc:

[ceph-users] Fwd: [SOLVED] ceph-disk activate fails (after 33 osd drives)

2016-02-15 Thread Alexey Sheplyakov
[forwarding to the list so people know how to solve the problem]

-- Forwarded message --
From: John Hogenmiller (yt) <j...@yourtech.us>
Date: Fri, Feb 12, 2016 at 6:48 PM
Subject: Re: [ceph-users] ceph-disk activate fails (after 33 osd drives)
To: Alexey Sheplyakov <

Re: [ceph-users] ceph-disk activate fails (after 33 osd drives)

2016-02-12 Thread Alexey Sheplyakov
John,

> 2016-02-12 12:53:43.340526 7f149bc71940 -1 journal FileJournal::_open: unable
> to setup io_context (0) Success

Try increasing aio-max-nr:

    echo 131072 > /proc/sys/fs/aio-max-nr

Best regards,
Alexey

On Fri, Feb 12, 2016 at 4:51 PM, John Hogenmiller (yt)
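The echo takes effect immediately but does not survive a reboot; a sketch of checking current usage and persisting the higher limit with sysctl (the file name under /etc/sysctl.d is an arbitrary choice):

    cat /proc/sys/fs/aio-nr                       # AIO contexts currently in use
    cat /proc/sys/fs/aio-max-nr                   # current limit
    echo 'fs.aio-max-nr = 131072' > /etc/sysctl.d/60-aio.conf
    sysctl -p /etc/sysctl.d/60-aio.conf           # apply without rebooting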