to false and restart
> Ceph daemons in order.
>
> Thanks for all your support.
>
> Özhan
>
>
> On Tue, Mar 21, 2017 at 3:27 PM, Alexey Sheplyakov
> <asheplya...@mirantis.com> wrote:
>>
>> Hi,
>>
>> This looks like a bug [1]. You can work it
Hi,
This looks like a bug [1]. You can work around it by disabling the
fiemap feature, like this:
[osd]
filestore fiemap = false
Fiemap should be disabled by default; perhaps you've explicitly
enabled it?
[1] http://tracker.ceph.com/issues/19323
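After restarting the OSDs you can verify that the option took effect via
the admin socket, e.g. (run on the OSD host; osd.0 is just an example
daemon name):

  ceph daemon osd.0 config get filestore_fiemap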
Best regards,
Alexey
On Tue, Mar
Hi,
please share some details about your cluster (especially the hardware);
a few commands to collect these are sketched after the list.
- How many OSDs are there? How many disks per OSD machine?
- Do you use dedicated (SSD) OSD journals?
- RAM size, CPUs model, network card bandwidth/model
- Do you have a dedicated cluster network?
- How many VMs (in
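A few standard commands can collect most of this (a sketch; eth0 is just
an example interface name):

  ceph osd tree    # number of OSDs and their layout per host
  lsblk            # disks and journal partitions on an OSD node
  free -h; lscpu   # RAM size and CPU model
  ethtool eth0     # NIC speed and link state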
--debug_filestore 20/20 --debug_journal 20/20
-i 10
Best regards,
Alexey
On Fri, Sep 9, 2016 at 11:24 AM, Mehmet <c...@elchaka.de> wrote:
> Hello Alexey,
>
> thank you for your mail - my answers inline :)
>
> On 2016-09-08 16:24, Alexey Sheplyakov wrote:
>
>> Hi,
>>
Hi,
> root@:~# ceph-osd -i 12 --flush-journal
> SG_IO: questionable sense data, results may be incorrect
> SG_IO: questionable sense data, results may be incorrect
As far as I understand these lines are hdparm warnings (the OSD uses the
hdparm command to query the journal device's write cache state).
The
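You can query the write cache state directly with hdparm, e.g. (replace
sdX with the actual journal device):

  hdparm -W /dev/sdX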
Eugen,
> It seems as if the nova snapshot creates a full image (flattened) so it
doesn't depend on the base image.
As far as I understand, a (nova) snapshot is actually a standalone image
(so you can boot it, convert it to a volume, etc.).
The snapshot method of nova libvirt driver invokes the
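For reference, you can check whether an RBD image still depends on a
parent, and detach it if needed (the pool/image names are just examples):

  rbd info images/myimage      # a 'parent:' line indicates a clone
  rbd flatten images/myimage   # copies the parent's data so the image stands alone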
Hi,
> I also wonder if just taking 148 out of the cluster (probably just
marking it out) would help
As far as I understand this can only harm your data. The acting set of PG
17.73 is [41, 148], so after stopping or taking out OSD 148, OSD 41 will
store the only copy of the objects in PG 17.73
(so it
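You can confirm the acting set before changing anything, e.g.:

  ceph pg map 17.73      # prints the up and acting OSD sets
  ceph pg 17.73 query    # detailed state of the placement group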
> But I set target_max_bytes:
>
> # ceph osd pool set cache target_max_bytes 10000
>
> Could that be the reason?
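For reference, target_max_bytes is specified in bytes, so 10000 here is
only ~10 KB; with such a tiny cap the cache tier would likely try to flush
and evict almost everything immediately. A more realistic value might be
set like this (10 GB, as a sketch):

  ceph osd pool set cache target_max_bytes $((10 * 1024 * 1024 * 1024))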
>
> On Wed, Feb 24, 2016 at 4:08 PM, Alexey Sheplyakov
> <asheplya...@mirantis.com> wrote:
>>
>> Hi,
>>
>> > 0> 20
Christian,
> Note that "rand" works fine, as does "seq" on a 0.94.5 cluster.
Could you please check if 0.94.5 ("old") *client* works with 0.94.6
("new") servers, and vice a versa?
Best regards,
Alexey
On Fri, Feb 26, 2016 at 9:44 AM, Christian Balzer wrote:
>
> Hello,
>
>
Hi,
> 0> 2016-02-24 04:51:45.884445 7fd994825700 -1 osd/ReplicatedPG.cc: In
> function 'int ReplicatedPG::fill_in_copy_get(ReplicatedPG::OpContext*,
> ceph::buffer::list::iterator&, OSDOp&, ObjectContextRef&, bool)' thread
> 7fd994825700 time 2016-02-24 04:51:45.870995
osd/ReplicatedPG.cc:
[forwarding to the list so people know how to solve the problem]
-- Forwarded message --
From: John Hogenmiller (yt) <j...@yourtech.us>
Date: Fri, Feb 12, 2016 at 6:48 PM
Subject: Re: [ceph-users] ceph-disk activate fails (after 33 osd drives)
To: Alexey Sheplyakov <
John,
> 2016-02-12 12:53:43.340526 7f149bc71940 -1 journal FileJournal::_open: unable
> to setup io_context (0) Success
Try increasing aio-max-nr:
echo 131072 > /proc/sys/fs/aio-max-nr
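To make the change persist across reboots, the sysctl interface can be
used (a sketch):

  sysctl -w fs.aio-max-nr=131072
  echo 'fs.aio-max-nr = 131072' >> /etc/sysctl.conf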
Best regards,
Alexey
On Fri, Feb 12, 2016 at 4:51 PM, John Hogenmiller (yt)