Hi

Replying to the list - earlier I replied directly by accident
-------------------
Sorry, I was camping for a week and was mostly disconnected, without data.

Yes, it is over iSCSI - two iSCSI nodes.
We've set both the iSCSI and VMware hosts to the SUSE recommended settings,
and in addition we've set the round-robin IOPS limit to 1 (from the default
of 1000) on each VMware host.
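For reference, on each ESXi host that's set per device, something like the
following (device ID made up, and the syntax is from memory, so verify it):

    esxcli storage nmp psp roundrobin deviceconfig set \
        --device=naa.6001405abcdef --type=iops --iops=1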

The iSCSI servers don't appear to be busy at any point.


>>> David Byte <db...@suse.com> 9/1/2018 8:31 PM >>>

Is this over iSCSI then?
 
David Byte
Sr. Technology Strategist
SCE Enterprise Linux 
SCE Enterprise Storage
Alliances and SUSE Embedded
db...@suse.com
918.528.4422

>>> "Joe Comeau" <joe.com...@hli.ubc.ca> 9/1/2018 8:21 PM >>>

Yes, I was referring to Windows Explorer copies, as that is what users
typically use, but also Windows robocopy set to 32 threads.

The difference is that we may go from a peak of 300 MB/s, to a more typical
100 MB/s, to a stall at 0-30 MB/s; about every 7-8 seconds it stalls to
0 MB/s, as reported by Windows Resource Monitor.
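The robocopy runs are along these lines (paths made up; /MT:32 is the thread
count, /E copies subdirectories):

    robocopy D:\bigfiles \\target\share /E /MT:32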

These are large files, up to multiple TB each - probably 12 TB in total that
we are copying, in fewer than 100 files.

Thanks Joe





>>> David Byte <db...@suse.com> 8/31/2018 1:17 PM >>>

Are these single-threaded writes that you are referring to?  It certainly 
appears so from the thread, but I thought it would be good to confirm that 
before digging in further.
 
 
David Byte
Sr. Technology Strategist
SCE Enterprise Linux 
SCE Enterprise Storage
Alliances and SUSE Embedded
db...@suse.com
918.528.4422
 
From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of Joe Comeau 
<joe.com...@hli.ubc.ca>
Date: Friday, August 31, 2018 at 1:07 PM
To: "ceph-users@lists.ceph.com" <ceph-users@lists.ceph.com>, Peter Eisch 
<peter.ei...@virginpulse.com>
Subject: Re: [ceph-users] cephfs speed
 
Are you using BlueStore OSDs?
 
If so, my thinking is that the issue we are having is with BlueStore caching.
 
See the thread on BlueStore caching:
"Re: [ceph-users] Best practices for allocating memory to bluestore cache"
 
 
Before, when we were on Jewel and FileStore, we could get much better
sustained writes.
Now, on BlueStore, we are not getting more than about a sustained 2 GB file
write before it drastically slows down.
Then it fluctuates back and forth between 0 KB/s and 100 MB/s as it writes.

Thanks Joe

>>> Peter Eisch <peter.ei...@virginpulse.com> 8/31/2018 10:31 AM >>>
[replying to myself]

I set aside CephFS and created an RBD volume. I get the same splotchy 
throughput with RBD as I was getting with CephFS. (image attached)

So, I'm withdrawing this question here as a CephFS issue.

#backingout

peter



On 8/30/18, 12:25 PM, "Peter Eisch" <peter.ei...@virginpulse.com> wrote:

Thanks for the thought. It’s mounted with this entry in fstab (one line, if 
email wraps it):

cephmon-s01,cephmon-s02,cephmon-s03:/ /loam ceph 
noauto,name=clientname,secretfile=/etc/ceph/secret,noatime,_netdev 0 2

Pretty plain, but I'm open to tweaking!
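For example, one thing I could try (the rasize option name is from the kernel
client docs; treat this as a guess to verify) is a larger client readahead,
again one line:

cephmon-s01,cephmon-s02,cephmon-s03:/ /loam ceph 
noauto,name=clientname,secretfile=/etc/ceph/secret,noatime,rasize=134217728,_netdev 0 2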

peter

From: Gregory Farnum <gfar...@redhat.com>
Date: Thursday, August 30, 2018 at 11:47 AM
To: Peter Eisch <peter.ei...@virginpulse.com>
Cc: "ceph-users@lists.ceph.com" <ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] cephfs speed

How are you mounting CephFS? It may be that the cache settings are just set 
very badly for a 10G pipe. Plus rados bench is a very parallel large-IO 
benchmark and many benchmarks you might dump into a filesystem are definitely 
not. 
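For comparison, rados bench defaults to 16 concurrent 4 MB writes in flight;
forcing a single outstanding op is much closer to what one file copy looks
like:

    rados bench -p cephfs_data 60 write -t 1 -b 4194304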
-Greg

On Thu, Aug 30, 2018 at 7:54 AM Peter Eisch 
<peter.ei...@virginpulse.com> wrote:
Hi,

I have a cluster serving CephFS and it works. It's just slow. The client is 
using the kernel driver. I can 'rados bench' writes to the cephfs_data pool at 
wire speed (9580 Mb/s on a 10G link), but when I copy data into CephFS it is 
rare to get above 100 Mb/s. Large file writes may start fast (2 Gb/s) but slow 
within a minute. In the dashboard, at the OSDs, I see lots of triangles (it 
doesn't stream), which looks like lots of starts and stops. By contrast, the 
graphs show constant flow when using 'rados bench'.

I feel like I'm missing something obvious. What can I do to help diagnose this 
better or resolve the issue? 
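If it helps to reproduce: the copy behaves like a single sequential stream,
roughly what this produces (test path made up):

    dd if=/dev/zero of=/loam/ddtest bs=4M count=2500 conv=fdatasync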

Errata:
Version: 12.2.7 (on everything)
mon: 3 daemons, quorum cephmon-s01,cephmon-s03,cephmon-s02
mgr: cephmon-s02(active), standbys: cephmon-s01, cephmon-s03
mds: cephfs1-1/1/1 up {0=cephmon-s02=up:active}, 2 up:standby
osd: 70 osds: 70 up, 70 in
rgw: 3 daemons active

rados bench summary:
Total time run: 600.043733
Total writes made: 167725
Write size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 1118.09
Stddev Bandwidth: 7.23868
Max bandwidth (MB/sec): 1140
Min bandwidth (MB/sec): 1084
Average IOPS: 279
Stddev IOPS: 1
Max IOPS: 285
Min IOPS: 271
Average Latency(s): 0.057239
Stddev Latency(s): 0.0354817
Max latency(s): 0.367037
Min latency(s): 0.0120791

peter



_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
