Re: [ceph-users] Ceph FS Random Write 4KB block size only 2MB/s?!

2018-07-01 Thread Yu Haiyang
Many thanks, Yan!
I’ll switch to other io engines for fio testing.

On Jun 29, 2018, at 10:23 AM, Yan, Zheng <uker...@gmail.com> wrote:

kernel

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph FS (kernel driver) - Unable to set extended file attributes

2018-06-29 Thread Yu Haiyang
Problem solved.

Seems like I can’t set stripe_unit to a value larger than object_size.
I should increase object_size attribute before increasing stripe_unit attribute.
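The ordering requirement can be sketched as a small validity check (a minimal model, assuming the kernel enforces what the EINVAL suggests: stripe_unit must be a positive multiple of 64 KiB and object_size a positive multiple of stripe_unit, which implies stripe_unit <= object_size; the function name is mine):

```python
# Minimal model of the layout checks (assumed from the observed EINVAL;
# the helper name and the 64 KiB minimum are illustrative).
def layout_is_valid(stripe_unit, object_size, min_stripe_unit=65536):
    """stripe_unit must be a positive multiple of 64 KiB, and
    object_size a positive multiple of stripe_unit -- which
    implies stripe_unit <= object_size."""
    if stripe_unit <= 0 or stripe_unit % min_stripe_unit != 0:
        return False
    if object_size <= 0 or object_size % stripe_unit != 0:
        return False
    return True

# 41943040 against the default 4 MiB object_size fails ...
print(layout_is_valid(41943040, 4194304))    # False
# ... but succeeds once object_size has been grown first.
print(layout_is_valid(41943040, 41943040))   # True
```

So to use a 40 MiB stripe unit, grow ceph.file.layout.object_size to 41943040 first, then set stripe_unit.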

Hope this would help someone. :)

On Jun 29, 2018, at 12:20 PM, Yu Haiyang <haiya...@moqi.ai> wrote:

Hi,

I want to play around with my ceph.file.layout attributes, such as stripe_unit
and object_size, to see how they affect my Ceph FS performance.
However, I've been unable to set any attribute; the command fails with the error below.

$ setfattr -n ceph.file.layout.stripe_unit -v 41943040 file1
setfattr: file1: Invalid argument

Using strace I can see it failed at something related to a missing locale
language pack.
Any suggestions on how to resolve this?

$ strace setfattr -n ceph.file.layout.stripe_unit -v 41943040 file
open("/usr/share/locale/en_HK/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale-langpack/en_HK/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale-langpack/en/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
write(2, "setfattr: file: Invalid argument"..., 33) = 33
exit_group(1)                           = ?

Many thanks,
Haiyang


[ceph-users] Ceph FS (kernel driver) - Unable to set extended file attributes

2018-06-28 Thread Yu Haiyang
Hi,

I want to play around with my ceph.file.layout attributes, such as stripe_unit
and object_size, to see how they affect my Ceph FS performance.
However, I've been unable to set any attribute; the command fails with the error below.

$ setfattr -n ceph.file.layout.stripe_unit -v 41943040 file1
setfattr: file1: Invalid argument

Using strace I can see it failed at something related to a missing locale
language pack.
Any suggestions on how to resolve this?

$ strace setfattr -n ceph.file.layout.stripe_unit -v 41943040 file
open("/usr/share/locale/en_HK/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale-langpack/en_HK/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale-langpack/en/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
write(2, "setfattr: file: Invalid argument"..., 33) = 33
exit_group(1)                           = ?

Many thanks,
Haiyang


Re: [ceph-users] Ceph FS Random Write 4KB block size only 2MB/s?!

2018-06-28 Thread Yu Haiyang
Here you go. Below are the fio job options and the results.

blocksize=4K
size=500MB
directory=[ceph_fs_mount_directory]
ioengine=libaio
iodepth=64
direct=1
runtime=60
time_based
group_reporting

numjobs   Ceph FS Erasure Coding (k=2, m=1)   Ceph FS 3 Replica
1         577KB/s                             765KB/s
2         1.27MB/s                            793KB/s
4         2.33MB/s                            1.36MB/s
8         4.14MB/s                            2.36MB/s
16        6.87MB/s                            4.40MB/s
32        11.07MB/s                           8.17MB/s
64        13.75MB/s                           15.84MB/s
128       10.46MB/s                           26.82MB/s
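For perspective, a fixed 4 KiB block size means the bandwidth figures convert directly to IOPS (a quick helper; it assumes fio's KB/MB figures are decimal, so the numbers may be off by a few percent if fio actually reported KiB/MiB):

```python
def bw_to_iops(bw_bytes_per_s, block_size=4096):
    # For a fixed-size random workload, IOPS = bandwidth / block size.
    return bw_bytes_per_s / block_size

# Single-job erasure-coded result above, 577 KB/s at 4 KiB blocks:
print(round(bw_to_iops(577e3)))    # 141
# 128-job 3-replica result, 26.82 MB/s:
print(round(bw_to_iops(26.82e6)))  # 6548
```

Seen as IOPS, ~141 per job is in the range one would expect when every small synchronous write pays a full network round trip.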

On Jun 28, 2018, at 5:01 PM, Yan, Zheng <uker...@gmail.com> wrote:

On Thu, Jun 28, 2018 at 10:30 AM Yu Haiyang <haiya...@moqi.ai> wrote:

Hi Yan,

Thanks for your suggestion.
No, I didn’t run fio on ceph-fuse. I mounted my Ceph FS in kernel mode.


What command options did you use for fio?

Regards,
Haiyang

On Jun 27, 2018, at 9:45 PM, Yan, Zheng <uker...@gmail.com> wrote:

On Wed, Jun 27, 2018 at 8:04 PM Yu Haiyang <haiya...@moqi.ai> wrote:

Hi All,

Using fio with job number ranging from 1 to 128, the random write speed for 4KB 
block size has been consistently around 1MB/s to 2MB/s.
Random read of the same block size can reach 60MB/s with 32 jobs.

Did you run fio on ceph-fuse? If I remember right, fio does 1-byte writes;
the overhead of passing 1 byte at a time to ceph-fuse is too high.


Our ceph cluster consists of 4 OSDs all running on SSD connected through a 
switch with 9.06 Gbits/sec bandwidth.
Any suggestion please?

Warmest Regards,
Haiyang


[ceph-users] Luminous BlueStore OSD - Still a way to pinpoint an object?

2018-06-27 Thread Yu Haiyang
Hi All,

Previously I read this article about how to locate an object on the OSD disk.
Apparently it was on a FileStore-backed disk partition.

Now that I have upgraded my Ceph to Luminous and host my OSDs on BlueStore
partitions, the OSD directory structure has completely changed.
The data is mapped to a block device as below, and that's as far as I can trace.

lrwxrwxrwx 1 ceph ceph   93 Jun 24 17:03 block -> 
/dev/ceph-0ec01ce9-d397-43e7-ad62-93cd1c62f75a/osd-block-f590b656-e40c-42f7-8cf9-ca846632d046

Hence, is there still a way to pinpoint an object on a BlueStore disk partition?
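One possible approach, sketched with placeholder names (pool cephfs_data, OSD id 0, and an example CephFS data object name; all are assumptions, not taken from this cluster): BlueStore no longer stores one file per object, but `ceph osd map` still locates the PG, and `ceph-objectstore-tool` can read objects from a stopped OSD.

```shell
# Map a RADOS object to its PG and acting OSDs (pool/object are placeholders):
ceph osd map cephfs_data 10000000001.00000000

# With the OSD stopped, list and extract objects straight from BlueStore:
systemctl stop ceph-osd@0
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op list \
    | grep 10000000001
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
    '<object-spec-from-list-output>' get-bytes > object.bin
systemctl start ceph-osd@0
```

This is a sketch of one workflow, not an authoritative answer; the object spec placeholder must be replaced with a line copied from the `--op list` output.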

Best,
Haiyang


Re: [ceph-users] Ceph FS Random Write 4KB block size only 2MB/s?!

2018-06-27 Thread Yu Haiyang
Hi Yan,

Thanks for your suggestion.
No, I didn’t run fio on ceph-fuse. I mounted my Ceph FS in kernel mode.

Regards,
Haiyang

> On Jun 27, 2018, at 9:45 PM, Yan, Zheng  wrote:
> 
> On Wed, Jun 27, 2018 at 8:04 PM Yu Haiyang  wrote:
>> 
>> Hi All,
>> 
>> Using fio with job number ranging from 1 to 128, the random write speed for 
>> 4KB block size has been consistently around 1MB/s to 2MB/s.
>> Random read of the same block size can reach 60MB/s with 32 jobs.
> 
> Did you run fio on ceph-fuse? If I remember right, fio does 1-byte writes;
> the overhead of passing 1 byte at a time to ceph-fuse is too high.
> 
>> 
>> Our ceph cluster consists of 4 OSDs all running on SSD connected through a 
>> switch with 9.06 Gbits/sec bandwidth.
>> Any suggestion please?
>> 
>> Warmest Regards,
>> Haiyang


[ceph-users] Ceph FS Random Write 4KB block size only 2MB/s?!

2018-06-27 Thread Yu Haiyang
Hi All,

Using fio with job number ranging from 1 to 128, the random write speed for 4KB 
block size has been consistently around 1MB/s to 2MB/s.
Random read of the same block size can reach 60MB/s with 32 jobs.

Our ceph cluster consists of 4 OSDs all running on SSD connected through a 
switch with 9.06 Gbits/sec bandwidth.
Any suggestion please?

Warmest Regards,
Haiyang


[ceph-users] What is the theoretical upper bandwidth of my Ceph cluster?

2018-06-19 Thread Yu Haiyang
Hi All,

I have a Ceph cluster that consists of 3 OSDs (each on a different server’s SSD 
disk partition with 500MB/s maximum read/write speed).
The 3 OSDs are connected through a switch which provides a maximum 10 Gbits/sec 
bandwidth between each pair of servers.
My Ceph version is Luminous 12.2.5. I've set up a Ceph FS using BlueStore and an
Erasure Coding pool (k=2, m=1).

What is the theoretical upper bound of read and write speed of my Ceph FS?
Many thanks.
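Not an authoritative answer, but one way to compute a rough ceiling under explicit assumptions (each link caps at 10 Gbit/s, k=2/m=1 EC stores k+m chunks per k data bytes, and disks and network are the only limits); real CephFS throughput will land well below these numbers:

```python
# Back-of-envelope ceilings for the cluster described above (sketch;
# ignores CPU, metadata traffic, and per-request latency).
GBIT = 1e9 / 8            # bytes per second in one gigabit

osd_count = 3
disk_bw   = 500e6         # 500 MB/s per SSD
net_bw    = 10 * GBIT     # 10 Gbit/s per link
k, m      = 2, 1          # EC profile: k+m chunks stored per k data bytes

raw_disk = osd_count * disk_bw                      # 1.5 GB/s aggregate
write_ceiling = min(net_bw, raw_disk * k / (k + m))
read_ceiling  = min(net_bw, raw_disk)               # reads need only k chunks

print(f"write <= {write_ceiling / 1e6:.0f} MB/s")   # write <= 1000 MB/s
print(f"read  <= {read_ceiling / 1e6:.0f} MB/s")    # read  <= 1250 MB/s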

Regards,
Haiyang