Thank you very much. The problem was solved by enabling TCP selective
acknowledgments.
James Robnett wrote:
> Can't really help with your larger question, but I had a similar
> experience with network-appropriate write rates and slower reads.
> You might check that you have enabled TCP selective acknowledgments.
Johann Lombardi wrote:
Hi Olivier,
On Thu, May 20, 2010 at 07:12:45PM +0200, Olivier Hargoaa wrote:
> You couldn't have known, but we had already run the LNET self-test,
> unsuccessfully. I posted the results in my answer to Brian.
OK. To get back to your original question: currently the Lustre network is [...]
Dear All,
We have a cluster with critical data on Lustre. On this cluster there are
three networks on each Lustre server and client: one Ethernet network
for administration (eth0), and two other Ethernet networks configured in
bonding (bond0: eth1 and eth2). On Lustre we get poor read performance [...]
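For context: which interface LNET uses is controlled by the lnet "networks"
module option. A minimal sketch, assuming the intent is to carry Lustre
traffic over the bonded interface rather than the admin network (file path
is the usual location, not confirmed in this thread):

  # e.g. /etc/modprobe.d/lustre.conf
  # bind LNET to the bonded data interface (bond0) instead of eth0,
  # so clients and servers communicate over the tcp0 network
  options lnet networks="tcp0(bond0)"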
Brian J. Murrell wrote:
On Thu, 2010-05-20 at 16:27 +0200, Olivier Hargoaa wrote:
> On Lustre we get poor read performance and good write performance, so we
> decided to modify the Lustre network in order to see if the problem comes
> from the network layer.
Without having any other information other than your statement that [...]
Nate Pearlstein wrote:
Which bonding method are you using? Has the performance always been
this way? Depending on which bonding type you are using and the network
hardware involved, you might see the behavior you are describing.
Nate Pearlstein wrote:
> Which bonding method are you using? Has the performance always been
> this way? Depending on which bonding type you are using and the network
> hardware involved, you might see the behavior you are describing.
Hi,
Here is our bonding configuration:
On the Linux side: [...]
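For illustration only, a typical Linux bonding configuration of that era
(all values hypothetical; mode 4 is 802.3ad/LACP and requires switch
support) looks something like:

  # /etc/modprobe.conf -- hypothetical example
  alias bond0 bonding
  options bond0 mode=4 miimon=100 xmit_hash_policy=layer3+4

  # /etc/sysconfig/network-scripts/ifcfg-eth1 (same pattern for eth2)
  DEVICE=eth1
  MASTER=bond0
  SLAVE=yes
  ONBOOT=yes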
Can't really help with your larger question, but I had a similar
experience with network-appropriate write rates and slower reads.
You might check that you have enabled TCP selective acknowledgments:
echo 1 > /proc/sys/net/ipv4/tcp_sack
or
net.ipv4.tcp_sack = 1 in /etc/sysctl.conf
This can help in [...]
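A minimal sketch of checking and persisting the setting (standard Linux
paths, nothing specific to this cluster):

  # check whether SACK is currently enabled (1 = enabled)
  sysctl net.ipv4.tcp_sack

  # enable it at runtime
  echo 1 > /proc/sys/net/ipv4/tcp_sack

  # persist across reboots: add the line to /etc/sysctl.conf, then reload
  sysctl -p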
Hi Brian and all others,
I'm sorry for not giving you all the details. Here I will send you all
the information I have.
Regarding our configuration:
Lustre I/O nodes are linked with two bonded 10Gb links.
Compute nodes are linked with two bonded 1Gb links.
Raw performance on the servers is fine for both [...]
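For reference, raw link throughput of this kind is typically measured with
a tool like iperf between a compute node and an I/O node (the hostname and
option values below are placeholders):

  # on the Lustre I/O node (server side)
  iperf -s

  # on a compute node: 4 parallel TCP streams for 30 seconds
  iperf -c io-node-1 -P 4 -t 30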
Thanks Johann,
You couldn't have known, but we had already run the LNET self-test,
unsuccessfully. I posted the results in my answer to Brian.
What I do not know is whether the LNET test was good or not with bonding
deactivated. I will ask the administrators to test it.
Regards.
Johann Lombardi wrote:
On Thu, May 20, [...]
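For readers unfamiliar with the LNET self-test mentioned above, a minimal
read-bandwidth run looks roughly like the following (the NIDs are
placeholders, not this cluster's addresses):

  # load the self-test module on the console node and all test nodes
  modprobe lnet_selftest
  # every lst command needs a session identifier
  export LST_SESSION=$$
  lst new_session read_test
  # define groups of client and server NIDs (placeholder addresses)
  lst add_group clients 10.0.0.[11-14]@tcp
  lst add_group servers 10.0.0.[1-2]@tcp
  # bulk read test from clients to servers with 1 MB transfers
  lst add_batch bulk_read
  lst add_test --batch bulk_read --from clients --to servers brw read size=1M
  lst run bulk_read
  # watch throughput for 30 seconds, then stop
  lst stat clients & sleep 30; kill $!
  lst end_session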