we were getting about 14k IOPS with about 10–30 ms of latency.
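For reference, a rough fio sketch of the kind of test that produces numbers like
these (pool name, image name and job parameters are placeholders, not from this
thread):

    fio --name=rbd-4k-randwrite --ioengine=rbd --clientname=admin \
        --pool=volumes --rbdname=bench-img --rw=randwrite --bs=4k \
        --iodepth=32 --numjobs=4 --runtime=120 --time_based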
Thanks,
Matthew Stroud
On 7/6/18, 11:12 AM, "Vasu Kulkarni" wrote:
On Fri, Jul 6, 2018 at 8:38 AM, Matthew Stroud
wrote:
>
> Thanks for the reply.
>
>
>
> Actually we are using
We have changed the IO scheduler to NOOP, which seems to yield the best
results. However, I haven’t looked into messing around with tuned. Let me play
with that and see if I get different results.
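For anyone following along, a sketch of the commands involved (device name and
profile are examples, not recommendations from this thread):

    cat /sys/block/sdb/queue/scheduler          # show current scheduler
    echo noop > /sys/block/sdb/queue/scheduler  # not persistent; persist via udev or kernel cmdline
    tuned-adm list                              # list available tuned profiles
    tuned-adm profile throughput-performance    # example profile to compare against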
Thanks,
Matthew Stroud
From: Steffen Winther Sørensen
Date: Friday, July 6, 2018 at 12:22 AM
To
size to 2 instead of 3. However, this
isn’t much of a win for the Pure Storage, which dedupes on the backend, so having
copies of the data is relatively free on that unit. A size of 1 wouldn’t work because this
is hosting a production workload.
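For context, the replica count is a per-pool setting, along these lines (pool
name is a placeholder):

    ceph osd df                          # check capacity impact first
    ceph osd pool set volumes size 2     # replica count
    ceph osd pool set volumes min_size 1 # allows IO with a single copy; riskier on a second failure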
Thanks,
Matthew Stroud
From: Maged Mokhtar
Date: Friday, July 6
Bump. I’m hoping I can get people more knowledgeable than me to take a look.
From: ceph-users on behalf of Matthew
Stroud
Date: Friday, June 29, 2018 at 10:31 AM
To: ceph-users
Subject: [ceph-users] Performance tuning for SAN SSD config
We back some of our ceph clusters with SAN SSD disk
with some information removed for security
reasons.
Thanks ahead of time.
Thanks,
Matthew Stroud
Thanks for the info
From: Piotr Dalek
Date: Friday, June 15, 2018 at 12:33 AM
To: Matthew Stroud , ceph-users
Subject: RE: osd_op_threads appears to be removed from the settings
No, it’s no longer valid.
--
Piotr Dałek
piotr.da...@corp.ovh.com
https://ovhcloud.com/
From: ceph-users On
So I’m trying to update the osd_op_threads setting that was in jewel, which now
doesn’t appear to be in luminous. What’s more confusing is that the docs state
that it is a valid option. Is osd_op_threads still valid?
I’m currently running ceph 12.2.2
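One way to check what an OSD actually recognizes at runtime, assuming you have
access to the admin socket (OSD id is a placeholder):

    ceph daemon osd.0 config show | grep osd_op
    ceph daemon osd.0 config get osd_op_threads   # an unknown option returns an error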
Thanks,
Matthew Stroud
p_priority = 63
30,32d21
< ms_dispatch_throttle_bytes = 1048576000
< objecter_inflight_op_bytes = 1048576000
< osd_deep_scrub_stride=5242880
36c25
< mon_pg_warn_max_object_skew = 10
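For readability, the options in that diff expressed in ceph.conf form (values
copied from the diff; the section placement is my assumption):

    [global]
    ms_dispatch_throttle_bytes = 1048576000
    objecter_inflight_op_bytes = 1048576000

    [osd]
    osd_deep_scrub_stride = 5242880

    [mon]
    mon_pg_warn_max_object_skew = 10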
Thanks,
Matthew Stroud
From: Matthew Stroud
Date: Thursday, January 25, 2018 at 3:15 PM
To: "ceph-users@lists.ceph.
The first and hopefully easy one:
I have a situation where I have two pools that are rarely used (the third will
be in use after I can get through these issues), but they need to be present at
the whims of our cloud team. Is there a way I can turn off the ‘2 pools have many
more objects per pg than average’ warning?
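If I understand that warning correctly, it is driven by
mon_pg_warn_max_object_skew; raising it, or setting it to 0 to disable the check
(on luminous this is read by ceph-mgr, if I recall correctly), should quiet it.
A sketch:

    [mon]
    mon_pg_warn_max_object_skew = 0   # 0 disables the skew check (assumption; default is 10)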
been able to find
another solution.
I have heard that BlueStore handles this better, but that wasn’t stable on the
release we are on.
Thanks,
Matthew Stroud
On 12/13/17, 3:56 AM, "Fulvio Galeazzi" wrote:
Hello Matthew,
I am now facing the same issue and found this
Yeah, I don’t have access to the hypervisors, nor the VMs on said hypervisors.
Having some sort of ceph-top would be awesome; I wish they would implement that.
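For what it’s worth, later Ceph releases (Nautilus and newer, if I recall
correctly) added a per-image view of RBD activity through the mgr, roughly:

    ceph mgr module enable rbd_support
    rbd perf image iotop    # top-like view of per-image IO
    rbd perf image iostat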
Thanks,
Matthew Stroud
On 9/29/17, 11:49 AM, "Jason Dillaman" wrote:
There is a feature in the backlog for a
Yeah, that is the core problem. I have been working with the teams that
manage those. However, it appears there isn’t a way I can check from my side.
From: David Turner
Date: Friday, September 29, 2017 at 11:08 AM
To: Maged Mokhtar , Matthew Stroud
Cc: "ceph-users@lists.ceph.com"
openstack hypervisors. I’m hoping I can figure
out the offending volume with the tool set I have.
Thanks,
Matthew Stroud
wr, 1896 op/s rd, 919 op/s wr
From: David Turner
Date: Friday, September 22, 2017 at 10:12 AM
To: Matthew Stroud , "ceph-users@lists.ceph.com"
Subject: Re: [ceph-users] Stuck IOs
It shows that the blocked requests also reset and are now only a few minutes
old instead of nearly a full
3 [WRN] slow request 120.899415 seconds old,
received at 2017-09-22 10:04:23.131364: osd_op(client.300809948.0:42472
7.e637a4b3 measure [omap-get-vals 0~16] snapc 0=[]
ack+read+balance_reads+skiprwlocks+known_if_redirected e8017) currently waiting
for missing object
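A sketch of how slow requests like this can be inspected on the owning OSD (the
OSD id below is a placeholder):

    ceph health detail                      # lists the OSDs with blocked/slow requests
    ceph daemon osd.12 dump_ops_in_flight   # ops currently in flight on that OSD
    ceph daemon osd.12 dump_historic_ops    # recently completed slow ops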
Thanks,
Matthew Stroud
From:
get these IOs to clear. I
have bounced the OSD multiple times, but they haven’t cleared. Any advice?
Also, if anyone has pro tips on how to setup ceph for gnocchi, I’m all ears.
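Not an answer to the gnocchi question, but a minimal sketch of a dedicated pool
and client key for it (pool name, PG count, and caps are assumptions, not from
this thread):

    ceph osd pool create metrics 64 64
    ceph auth get-or-create client.gnocchi \
        mon 'allow r' \
        osd 'allow rwx pool=metrics' \
        -o /etc/ceph/ceph.client.gnocchi.keyring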
Thanks,
Matthew Stroud
After some troubleshooting, the issues appear to be caused by gnocchi using
rados. I’m trying to figure out why.
Thanks,
Matthew Stroud
From: Brian Andrus
Date: Thursday, September 7, 2017 at 1:53 PM
To: Matthew Stroud
Cc: David Turner , "ceph-users@lists.ceph.com"
Subject: Re: [
slow request 30.452344 seconds old, received at 2017-09-07
13:29:13.527157: osd_op(client.115011.0:483954528 5.e637a4b3 (undecoded)
ack+read+balance_reads+skiprwlocks+known_if_redirected e6511) currently
queued_for_pg
From: David Turner
Date: Thursday, September 7, 2017 at 1:17 PM
To
have slow requests
recovery 4678/1097738 objects degraded (0.426%)
recovery 10364/1097738 objects misplaced (0.944%)
From: David Turner
Date: Thursday, September 7, 2017 at 11:33 AM
To: Matthew Stroud , "ceph-users@lists.ceph.com"
Subject: Re: [ceph-users] Blocked requests
To be
?
Thanks,
Matthew Stroud
Any updates here?
When our cluster hits a failure (e.g. a node going down or an OSD dying), our VMs
pause all IO for about 10 – 20 seconds. I’m curious if there is a way to fix or
mitigate this?
Here is my ceph.conf:
[global]
fsid = fb991e48-c425-4f82-a70e-5ce748ae186b
mon_initial_members = mon01, mon02, mon03
mon_ho
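For context, the length of that IO pause is roughly bounded by how quickly the
failed OSD is marked down. The knobs usually discussed for this are along these
lines (values shown are the defaults, as examples rather than recommendations):

    [global]
    osd_heartbeat_interval = 6
    osd_heartbeat_grace = 20          # lower = faster detection, more false positives
    mon_osd_min_down_reporters = 2    # reporters required before marking an OSD down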