961593 WRITE
Duration: 22155 seconds
Data Read: 204875400152 bytes
Data Written: 299728837956 bytes
Kane
On Sep 18, 2013, at 2:45 PM, Anand Avati wrote:
> Can you get the volume profile dumps for both the runs and compare them?
>
> Avati
>
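For reference, a profile dump like the one excerpted at the top of this thread can be collected with the standard gluster CLI (the volume name and output file name here are placeholders):

  gluster volume profile <volname> start
  # ... run the workload ...
  gluster volume profile <volname> info > profile-run1.txt   # save each run to its own file for comparison
  gluster volume profile <volname> stop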
socket options = IPTOS_LOWDELAY SO_RCVBUF=262144 SO_SNDBUF=262144
# max protocol = SMB2
kernel oplocks = no
stat cache = no
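For context, the options quoted above would sit together in the [global] section of smb.conf, roughly as follows (only the tuning lines mentioned in this thread are shown):

  [global]
      socket options = IPTOS_LOWDELAY SO_RCVBUF=262144 SO_SNDBUF=262144
      # max protocol = SMB2
      kernel oplocks = no
      stat cache = no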
thank you
-Kane
On Sep 18, 2013, at 1:38 PM, Anand Avati wrote:
> On 9/17/13 10:34 PM, kane wrote:
>> Hi Anand,
>>
>> I use 2 gluster servers; this is my volume info:
Min xfer= 9406848.00 KB
It shows much higher performance.
Is there anything I did wrong?
thank you
-Kane
On Sep 18, 2013, at 1:19 PM, Anand Avati wrote:
> How are you testing this? What tool are you using?
>
> Avati
>
>
> On Tue, Sep 17, 2013 at 9:02 PM, kane wrote:
> Hi Vijay,
0MB/s
Any advice?
Thank you.
-Kane
On Sep 13, 2013, at 10:37 PM, kane wrote:
> Hi Vijay,
>
> Thank you for posting this message, I will try it soon.
>
> -kane
>
>
>
> On Sep 13, 2013, at 9:21 PM, Vijay Bellur wrote:
>
>> On 09/13/2013 06:10 PM, kane wrote:
>>> Hi
Hi Vijay,
Thank you for posting this message, I will try it soon.
-kane
On Sep 13, 2013, at 9:21 PM, Vijay Bellur wrote:
> On 09/13/2013 06:10 PM, kane wrote:
>> Hi
>>
>> We use the gluster samba vfs for I/O testing, but the read performance via vfs is
>> only half of the write performance,
kane
Email: kai.z...@soulinfo.com
Phone: 0510-85385788-616
Hi
We use the samba gluster vfs in an I/O test, but the gluster server's smbd hits the
OOM killer.
The smbd process uses over 15 GB RES as shown by the top command; at the end is our
simple test code:
gluster server vfs --> smbd --> client mount dir "/mnt/vfs" --> execute vfs test
program "$ ./vfs 1000"
Is gluster-vfs aio & sendfile coming soon?
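Note that stock Samba already exposes generic aio and sendfile knobs in smb.conf; whether the gluster vfs module honors them is exactly the open question above, so the lines below are only a generic illustration, not a confirmed gluster-vfs feature:

  aio read size = 1
  aio write size = 1
  use sendfile = yes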
Thank You!
-kane
On Aug 22, 2013, at 3:31 PM, RAGHAVENDRA TALUR wrote:
> Hi Kane,
>
> 1. Which version of samba are you running?
>
> 2. Can you re-run the test after adding the following lines to smb.conf's
> global section and tell us whether it helps?
> kernel oplocks = no
> stat cache = no
glusterfs:volume = soul
path = /
read only = no
guest ok = yes
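For context, a complete vfs_glusterfs share definition around those lines would look roughly like this; the share name follows the mount command later in the thread, and the volfile server is an assumption:

  [gvol]
      vfs objects = glusterfs
      glusterfs:volume = soul
      glusterfs:volfile_server = localhost
      path = /
      read only = no
      guest ok = yes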
Our Windows 7 client hardware:
Intel® Xeon® E3-1230 @ 3.20GHz
8GB RAM
Linux client hardware:
Intel(R) Xeon(R) CPU X3430 @ 2.40GHz
16GB RAM
Many thanks
-kane
On Aug 21, 2013, at 4:53 PM, Lalatendu Mohanty wrote:
One Linux client mounts the "gvol" share with the command:
[root@localhost current]# mount.cifs //192.168.100.133/gvol /mnt/vfs -o
user=kane,pass=123456
Then I use iozone to test the write performance in the mount dir "/mnt/vfs":
[root@localhost current]# ./iozone -s 10G -r 128k -i0 -t
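The command above is cut off in this excerpt; a full invocation that exercises both write (-i 0) and read (-i 1), so the two can be compared, would look something like the following, with the thread count and file list being illustrative:

[root@localhost current]# ./iozone -s 10G -r 128k -i 0 -i 1 -t 4 -F /mnt/vfs/f1 /mnt/vfs/f2 /mnt/vfs/f3 /mnt/vfs/f4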