And as to the min_size choice -- since you replied only to that part
of my message.
On Sat, 13 Apr 2019 at 06:54, Paul Emmerich wrote:
> On Fri, Apr 12, 2019 at 9:30 PM Igor Podlesny wrote:
> > For example, an EC pool with the default profile (2, 1) has bogus "sizing"
> > params (size=3, min_size=3).
On Sat, 13 Apr 2019 at 06:54, Paul Emmerich wrote:
>
> Please don't use an EC pool with 2+1, that configuration makes no sense.
That's rather ironic, given that (2, 1) is the default EC profile and
is moreover described in the Ceph documentation.
> min_size 3 is the default for that pool, yes. That means your data
> will be unavailable if any OSD is offline.
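For reference, the profile and the pool parameters it produces can be
checked from the CLI; a minimal sketch, with "ecpool" as a placeholder
pool name:

  # Show the default erasure-code profile (k=2, m=1)
  ceph osd erasure-code-profile get default
  # Inspect the resulting sizing parameters on a pool created from it
  ceph osd pool get ecpool size
  ceph osd pool get ecpool min_size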
I think the most notable change here is the backport of the new bitmap
allocator, but that's missing completely from the change log.
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 8
Please don't use an EC pool with 2+1, that configuration makes no sense.
min_size 3 is the default for that pool, yes. That means your data
will be unavailable if any OSD is offline.
Reducing min_size to 2 means you are accepting writes when you cannot
guarantee durability which will cause problem
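For anyone weighing that tradeoff, the knob in question is per-pool; a
minimal sketch, with "ecpool" as a placeholder name:

  ceph osd pool get ecpool min_size
  # Lowering it to 2 trades the durability guarantee described above
  # for availability during a single-OSD outage
  ceph osd pool set ecpool min_size 2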
For example, an EC pool with the default profile (2, 1) has bogus "sizing"
params (size=3, min_size=3).
min_size 3 is wrong as far as I know, and it's been fixed in recent
releases (but not in Luminous).
But besides that, it looks like pool usage isn't calculated according
to the EC overhead, but as if it was
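To make the expected overhead concrete: with k=2, m=1 the raw-to-logical
factor is (k+m)/k = 1.5, so 100 GiB of logical data should show up as
roughly 150 GiB of raw usage. One way to cross-check, as a sketch:

  # Compare per-pool USED against the raw usage the OSDs report
  ceph df detail
  ceph osd df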
Yes, you would do this by setting up separate data pools for segregated
clients, giving those pools a CRUSH rule placing them on their own servers,
and, if using S3, assigning the clients to them using either wholly separate
instances or perhaps separate zones and the S3 placement options.
-Greg
On
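A rough sketch of that approach; the rule name, pool name, and the
"clientA-root" CRUSH subtree are placeholders, and the rule assumes the
client's servers have already been moved under their own root:

  # Rule that keeps replicas inside the client's own CRUSH subtree
  ceph osd crush rule create-replicated clientA-rule clientA-root host
  # Point the client's data pool at that rule
  ceph osd pool set clientA-pool crush_rule clientA-rule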
On Fri, Apr 12, 2019 at 10:48 AM Magnus Grönlund wrote:
>
>
>
> On Fri, 12 Apr 2019 at 16:37, Jason Dillaman wrote:
>>
>> On Fri, Apr 12, 2019 at 9:52 AM Magnus Grönlund wrote:
>> >
>> > Hi Jason,
>> >
>> > Tried to follow the instructions and setting the debug level to 15 worked
>> > OK, but t
We have a user syncing data with some kind of rsync + hardlink based
system creating/removing large numbers of hard links. We've
encountered many of the issues with stray inode re-integration as
described in the thread and tracker below.
As noted, one fix is to increase mds_bal_fragment_size_max s
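On Luminous that can be applied at runtime roughly like this (the value
200000 is only an example, not a recommendation):

  ceph tell mds.* injectargs '--mds_bal_fragment_size_max 200000'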
We are happy to announce the next bugfix release in the v12.2.x Luminous
stable release series. We recommend that all Luminous users upgrade to
this release. Many thanks to everyone who contributed backports, and a
special mention to Yuri for the QE effort put into this release.
Notable Changes
-
Got it. Thanks, Mark!
Kind regards,
Charles Alva
Sent from Gmail Mobile
On Fri, Apr 12, 2019 at 10:53 PM Mark Nelson wrote:
> They have the same issue, but depending on the SSD may be better at
> absorbing the extra IO if network or CPU are bigger bottlenecks. That's
> one of the reasons that a lot of folks like to put the DB on flash for
> HDD based clusters.
They have the same issue, but depending on the SSD may be better at
absorbing the extra IO if network or CPU are bigger bottlenecks. That's
one of the reasons that a lot of folks like to put the DB on flash for
HDD based clusters. It's still possible to oversubscribe them, but
you've got more
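For what it's worth, putting the DB on flash is decided at OSD-creation
time; a minimal sketch, with the device paths as placeholders:

  # HDD for data, a flash partition for the RocksDB DB/WAL
  ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1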
Thanks Mark,
This is interesting. I'll take a look at the links you provided.
Does the rocksdb compaction issue only affect HDDs, or do SSDs have the
same issue?
Kind regards,
Charles Alva
Sent from Gmail Mobile
On Fri, Apr 12, 2019, 9:01 PM Mark Nelson wrote:
> Hi Charles,
>
>
> Basically the goal is to reduce write-amplification as much as possible.
Hi cephers,
We have a Ceph cluster with OpenStack.
Some time ago we set debug_rbd in ceph.conf and then booted VMs,
but those debug settings no longer exist in the config.
Now we find that ceph-client.libvirt.log is 200GB.
But I cannot use ceph --admin-daemon ceph-client.libvirt.asok config set
debug
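A sketch of what usually works here, assuming the daemon's admin socket
lives in the default directory (the option takes a "level/memory-level"
pair):

  ceph --admin-daemon /var/run/ceph/ceph-client.libvirt.asok config set debug_rbd 0/0
  # Verify the running value
  ceph --admin-daemon /var/run/ceph/ceph-client.libvirt.asok config get debug_rbd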
Ok, thanks. Is the expectation that events will be available on that socket as
soon as they occur, or is it more of a best-effort situation? I'm just trying to
nail down which side of the socket might be lagging. It's pretty difficult to
recreate this as I have to hit the cluster very hard to get i
Hi Aaron,
I don't think that exists currently.
Matt
On Fri, Apr 12, 2019 at 11:12 AM Aaron Bassett
wrote:
>
> I have a radosgw log centralizer that we use for an audit trail for data
> access in our ceph clusters. We've enabled the ops log socket and added
> logging of the http_authorization
I have a radosgw log centralizer that we use for an audit trail for data
access in our ceph clusters. We've enabled the ops log socket and added logging
of the http_authorization header to it:
rgw log http headers = "http_authorization"
rgw ops log socket path = /var/run/ceph/rgw-ops.sock
rgw
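For reference, the entries can be consumed by attaching a reader to that
socket; a minimal sketch, assuming radosgw streams JSON entries to
connected clients:

  socat - UNIX-CONNECT:/var/run/ceph/rgw-ops.sock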
On Fri, 12 Apr 2019 at 16:37, Jason Dillaman wrote:
> On Fri, Apr 12, 2019 at 9:52 AM Magnus Grönlund
> wrote:
> >
> > Hi Jason,
> >
> > Tried to follow the instructions and setting the debug level to 15
> worked OK, but the daemon appeared to silently ignore the restart command
> (nothing indic
On Fri, Apr 12, 2019 at 9:52 AM Magnus Grönlund wrote:
>
> Hi Jason,
>
> Tried to follow the instructions and setting the debug level to 15 worked OK,
> but the daemon appeared to silently ignore the restart command (nothing
> indicating a restart seen in the log).
> So I set the log level to 15
Hi Jason,
Tried to follow the instructions and setting the debug level to 15 worked
OK, but the daemon appeared to silently ignore the restart command (nothing
indicating a restart seen in the log).
So I set the log level to 15 in the config file and restarted the rbd
mirror daemon. The output sur
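For the record, the runtime route is via the daemon's admin socket; a
sketch, with the .asok filename a placeholder that varies per instance:

  ceph --admin-daemon /var/run/ceph/ceph-client.rbd-mirror.0.asok config set debug_rbd_mirror 15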
Hi Charles,
Basically the goal is to reduce write-amplification as much as
possible. The deeper that the rocksdb hierarchy gets, the worse the
write-amplification for compaction is going to be. If you look at the
OSD logs you'll see the write-amp factors for compaction in the rocksdb
compac
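Those write-amp factors come from the RocksDB compaction-stats dumps in
the OSD log; a sketch for pulling them out, plus a manual compaction
trigger (osd.0 is a placeholder, and the admin-socket "compact" command
may depend on your release):

  grep -B2 -A20 'Compaction Stats' /var/log/ceph/ceph-osd.0.log | tail -30
  ceph daemon osd.0 compact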
On Thu, Apr 11, 2019 at 4:23 PM Yury Shevchuk wrote:
>
> Hi Igor!
>
> I have upgraded from Luminous to Nautilus and now slow device
> expansion works indeed. The steps are shown below to wrap up the
> topic.
>
> node2# ceph osd df
> ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META
Hi,
We have a requirement to build an object storage solution with a thin
layer of customization on top. This is to be deployed in our own data
centre. We will be using the objects stored in this system at various
places in our business workflow. The solution should support
multi-tenancy. Multiple te
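If the object layer ends up being radosgw, its built-in tenant
namespacing may cover part of that requirement; a minimal sketch, with
the tenant and user names as placeholders:

  radosgw-admin user create --tenant acme --uid alice --display-name "Alice (acme)"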
On 4/11/2019 11:23 PM, Yury Shevchuk wrote:
> Hi Igor!
>
> I have upgraded from Luminous to Nautilus and now slow device
> expansion works indeed. The steps are shown below to wrap up the
> topic.
>
> node2# ceph osd df
> ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE
> VAR
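For context, the expansion step discussed in that thread is typically
done with ceph-bluestore-tool after growing the underlying device; a
sketch, with the OSD id/path as placeholders (stop the OSD first):

  systemctl stop ceph-osd@2
  ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-2
  systemctl start ceph-osd@2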