Good morning,
the OSDs are usually marked out after 10 minutes; that's when
rebalancing starts. But I/O should not drop during that time, so this
could be related to your pool configuration. If you have a replicated
pool of size 3 and also set min_size to 3, I/O would pause if a
node
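The effect of min_size can be illustrated with a toy model (this is only an illustration of the rule, not Ceph's actual code):

```python
def io_allowed(up_replicas: int, min_size: int) -> bool:
    """A PG accepts client I/O only while at least min_size replicas are up."""
    return up_replicas >= min_size

# size=3, min_size=3: losing a single node pauses I/O on the affected PGs
print(io_allowed(2, 3))  # False
# size=3, min_size=2: one node can go down without blocking clients
print(io_allowed(2, 2))  # True
```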
> On Apr 15, 2019, at 5:18 PM, Brian Topping wrote:
>
> If I am correct, how do I trigger the full sync?
Apologies for the noise on this thread. I came to discover the `radosgw-admin
[meta]data sync init` command. That left me with something that looked like
this for several hours:
> [roo
Hi,
I'd like to run a standalone BlueStore instance in order to test and tune its
performance. Are there any tools for this, or any suggestions?
Best,
Can Zhang
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph
Probably an imbalance of data across your OSDs.
Could you show `ceph osd df`?
From there, take the disk with the lowest available space. Multiply that
number by the number of OSDs. How much is it?
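That back-of-the-envelope estimate can be sketched like this (the per-OSD numbers are hypothetical, as if read from `ceph osd df`):

```python
# Hypothetical per-OSD AVAIL figures (GiB) from `ceph osd df`
osd_avail_gib = [820, 1450, 610, 1390]

# Usable raw capacity is bounded by the fullest OSD, so take the smallest
# AVAIL value and multiply by the OSD count rather than summing everything.
estimated_raw_avail = min(osd_avail_gib) * len(osd_avail_gib)
print(estimated_raw_avail)  # 2440
```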
Kind regards,
Sinan Polat
> On 16 Apr. 2019 at 05:21, Igor Podlesny wrote the
> following:
>
>>
On Tue, 16 Apr 2019 at 06:43, Mark Schouten wrote:
[...]
> So where is the rest of the free space? :X
Makes sense to see:
sudo ceph osd df tree
--
Hi,
I have a cluster with 97TiB of storage and one pool on it (size 3), using
17.4TiB, totalling 52.5TiB in use on the cluster. I would expect that to
leave me with 45TiB/3 = 15TiB available, but Ceph tells me the pool only
has 4.57TiB max available, as you can see below.
root@proxm
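The gap is usually explained by how Ceph computes MAX AVAIL: it extrapolates from the fullest OSD under the pool's CRUSH rule and then divides by the replica count. A rough sketch with hypothetical numbers (not taken from the cluster above):

```python
num_osds = 12        # hypothetical
min_free_tib = 1.14  # free space on the most-full OSD (hypothetical)
size = 3             # replicated pool, 3 copies

# An imbalanced cluster therefore reports far less MAX AVAIL than
# total_free / size would suggest.
max_avail_tib = min_free_tib * num_osds / size
print(round(max_avail_tib, 2))  # 4.56
```

Rebalancing the most-full OSDs (e.g. via reweighting) should raise the reported MAX AVAIL.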
I’m starting to wonder if I actually have things configured and working
correctly, but the light traffic I am seeing is that of an incremental
replication. That would make sense, the cluster being replicated does not have
a lot of traffic on it yet. Obviously, without the full replication, the
On Tue, Apr 16, 2019 at 7:38 AM solarflow99 wrote:
>
> Then why doesn't this work?
>
> # ceph tell 'osd.*' injectargs '--osd-recovery-max-active 4'
> osd.0: osd_recovery_max_active = '4' (not observed, change may require
> restart)
> osd.1: osd_recovery_max_active = '4' (not observed, change may
On Sat, Apr 13, 2019 at 9:42 AM Varun Singh wrote:
>
> Thanks Greg. A follow-up question. Will Zone, ZoneGroup and Realm come
> into the picture? While reading the documentation, I inferred that by
> setting different Realms, I should be able to achieve the desired
> result. Is that incorrect?
I think
Then why doesn't this work?
# ceph tell 'osd.*' injectargs '--osd-recovery-max-active 4'
osd.0: osd_recovery_max_active = '4' (not observed, change may require
restart)
osd.1: osd_recovery_max_active = '4' (not observed, change may require
restart)
osd.2: osd_recovery_max_active = '4' (not observe
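The "(not observed)" notice means the OSDs do not register this option as changeable at runtime (in practice the warning is sometimes spurious and the value still takes effect). To be safe, the setting can be made persistent in ceph.conf and the OSDs restarted; on Mimic and later, `ceph config set osd osd_recovery_max_active 4` stores it in the monitors' configuration database instead:

```ini
[osd]
osd_recovery_max_active = 4
```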
On Mon, Apr 15, 2019 at 1:52 PM Brent Kennedy wrote:
>
> I was looking around the web for the reason for some of the default pools in
> Ceph and I cant find anything concrete. Here is our list, some show no use
> at all. Can any of these be deleted ( or is there an article my googlefu
> faile
I was looking around the web for the reason for some of the default pools in
Ceph and I can't find anything concrete. Here is our list; some show no use
at all. Can any of these be deleted (or is there an article my google-fu
failed to find that covers the default pools)?
We only use buckets, s
Hello,
After we upgraded Ceph from version 12.2.7 to 12.2.11, the dashboard started
reporting issues:
2019-04-15 10:46:37.332514 [ERR] Unhandled exception from module
'dashboard' while running on mgr.x-a1: IOError("Port 43795 not free
on 'xxx.xxx.xxx.xx'",)
2019-04-15 10:41:37.078304 [ERR] Un
Thanks!
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
On Mon, Apr 15, 2019 at 12:58 PM Abhishek Lekshmanan wrote:
>
> Paul Emmerich writes:
>
> > I think the most notable
I have an OSD process that throws an assert whenever I boot it (see
traceback below).
I have successfully run ceph-bluestore-tool with the commands repair and
fsck, including with the --deep flag, but this did not fix the problem.
Any ideas how to fix this, apart from deleting the whole OSD and
st
On 4/15/19 2:55 PM, Igor Fedotov wrote:
> Hi Wido,
>
> the main driver for this backport was multiple complaints about write op
> latency increasing over time.
>
> E.g. see thread named: "ceph osd commit latency increase over time,
> until restart" here.
>
> Or http://tracker.ceph.com/issues/38
Hello - Recently we had an issue with a storage node's battery failure,
which caused Ceph client I/O to drop to 0 bytes, meaning the cluster
couldn't perform I/O operations until the node was taken out. This is
not expected from Ceph: when some hardware fails, the respective OSDs
should be marked as out
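How quickly a down OSD is automatically marked out is governed by monitor settings; a ceph.conf sketch (the values shown are the upstream defaults as far as I recall, so verify against your running configuration):

```ini
[mon]
# seconds a down OSD may stay "in" before the monitors mark it out
mon_osd_down_out_interval = 600
# number of peer OSDs that must report an OSD down before it is marked down
mon_osd_min_down_reporters = 2
```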
Hi Wido,
the main driver for this backport was multiple complaints about write op
latency increasing over time.
E.g. see thread named: "ceph osd commit latency increase over time,
until restart" here.
Or http://tracker.ceph.com/issues/38738
Most symptoms showed Stupid Allocator as a root ca
Hi,
With the release of 12.2.12 the bitmap allocator for BlueStore is now
available under Mimic and Luminous.
[osd]
bluestore_allocator = bitmap
bluefs_allocator = bitmap
Before setting this in production: What might the implications be and
what should be thought of?
From what I've read the bi
On 4/15/19 1:13 PM, Alfredo Daniel Rezinovsky wrote:
>
> On 15/4/19 06:54, Jasper Spaans wrote:
>> On 14/04/2019 17:05, Alfredo Daniel Rezinovsky wrote:
>>> autoscale-status reports some of my PG_NUMs are way too big
>>>
>>> I have 256 and need 32
>>>
>>> POOL SIZE TARGET SIZE RA
On 15/4/19 06:54, Jasper Spaans wrote:
On 14/04/2019 17:05, Alfredo Daniel Rezinovsky wrote:
autoscale-status reports some of my PG_NUMs are way too big
I have 256 and need 32
POOL SIZE TARGET SIZE RATE RAW CAPACITY RATIO TARGET
RATIO PG_NUM NEW PG_NUM AUTOSCALE
rbd
Paul Emmerich writes:
> I think the most notable change here is the backport of the new bitmap
> allocator, but that's missing completely from the change log.
Updated the changelog in docs and the blog. The earlier script was
ignoring entries that didn't link to backport tracker following back t
Hey Cephalopods!
This is an early heads up that we are planning a Ceph Day event at
CERN in Geneva, Switzerland on September 16, 2019 [1].
For this Ceph Day, we want to focus on use-cases and solutions for
research, academia, or other non-profit applications [2].
Registration and call for propos
On 14/04/2019 17:05, Alfredo Daniel Rezinovsky wrote:
> autoscale-status reports some of my PG_NUMs are way too big
>
> I have 256 and need 32
>
> POOL SIZE TARGET SIZE RATE RAW CAPACITY RATIO TARGET
> RATIO PG_NUM NEW PG_NUM AUTOSCALE
> rbd 1214G 3
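The autoscaler's suggestion is, roughly, the pool's share of raw capacity converted into a PG budget and rounded to the nearest power of two. A simplified sketch of that rounding (the real pg_autoscaler module has more heuristics, and all the numbers below are hypothetical):

```python
import math

def nearest_pow2(estimate: float) -> int:
    """Round a PG-count estimate to the nearest power of two (minimum 1),
    roughly as the pg_autoscaler does for its NEW PG_NUM suggestion."""
    return max(1, 2 ** round(math.log2(max(estimate, 1))))

# A pool holding ~1.2% of raw capacity with a budget of 100 PGs per OSD
# across 8 OSDs lands on a small power of two:
print(nearest_pow2(0.012 * 8 * 100))  # 8
```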
Hi,
I recently upgraded a cluster to 13.2.5 and right now the restful API
module won't start due to this stacktrace:
2019-04-15 11:32:18.632 7f8797cb6700 0 mgr[restful] Traceback (most
recent call last):
File "/usr/lib64/ceph/mgr/restful/module.py", line 254, in serve
self._serve()
File "/usr/l