Hi,

Thank you very much for the information.

The version of OrangeFS we tested is 2.8.6. I'd be glad to test your fix; I will stay 
tuned.

Best Regards,
Jingwang.

________________________________________
From: Randall Martin [w...@clemson.edu]
Sent: Friday, January 25, 2013 21:33
To: Kyle Schochenmaier; Zhang, Jingwang
Cc: pvfs2-developers@beowulf-underground.org; faibish, sorin
Subject: Re: [Pvfs2-developers] Potential connection starvation in bmi_ib

Yes, I'm testing the modifications I made a few weeks ago, so I have not checked 
the changes in yet.  The race condition appears to be fixed on our Mellanox FDR 
fabric, but I am still seeing issues on our QLogic QDR fabric.  I'll take a 
look at your patch and see if it makes sense to merge my changes with yours.

-Randy

From: Kyle Schochenmaier <kscho...@gmail.com>
Date: Friday, January 25, 2013 8:14 AM
To: "Zhang, Jingwang" <jingwang.zh...@emc.com>
Cc: PVFS2-developers <pvfs2-developers@beowulf-underground.org>, 
"faibish, sorin" <faibish_so...@emc.com>, Randall Martin <w...@clemson.edu>
Subject: Re: [Pvfs2-developers] Potential connection starvation in bmi_ib

Hi Jingwang -

I believe Randy and I stumbled upon the same issue a few weeks ago and we came 
up with a slightly different approach to resolve the race condition.
I'm not sure if that fix has made it back into the main branch yet, but I'll 
double-check and see if we can send you our version first.

If you're able to test our fix and see whether you still hit the race condition, 
that would be great, as this may have been fixed already.
I'll check today and try to send you our patch.

Regards,
~Kyle

Kyle Schochenmaier


On Fri, Jan 25, 2013 at 1:27 AM, Zhang, Jingwang 
<jingwang.zh...@emc.com> wrote:
Hi All,

Recently I ran into a performance issue with bmi_ib. The problem is as follows:
Assume a client-server architecture in which the clients and the server exchange 
many messages through bmi_ib. If multiple client processes are started at the same 
time, they do not run concurrently. Instead, they are serialized and run one after 
another, which is strange behavior and hurts performance greatly.

After some investigation of the source code, I found the reason to be the 
following:
Incoming connections are handled in ib_tcp_server_check_new_connections(), which 
is called inside ib_block_for_activity(). However, ib_block_for_activity() is only 
called from BMI_ib_testcontext() or BMI_ib_testunexpected() when the network is idle.
As a result, while the server is busy serving one client process, the other 
processes cannot establish new connections to the server and therefore cannot 
transfer data to it concurrently.
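
To make the control flow concrete, here is a rough sketch of the path described 
above. Only the names ib_block_for_activity() and 
ib_tcp_server_check_new_connections() come from the actual module; the stub 
helpers, parameters, and return handling are made up purely for illustration and 
do not match the real source:

/* Illustrative sketch only -- not the real OrangeFS code. */
static int poll_completion_queue(void)      { return 0; }  /* stub: progress existing connections */
static int gather_unexpected_messages(void) { return 0; }  /* stub: completed unexpected messages */
static void ib_tcp_server_check_new_connections(void) { }  /* stub: accept pending connects */

static void ib_block_for_activity(int timeout_ms)
{
    /* In the flow described above, this blocks waiting for activity and
     * is the only place that services the listening socket. */
    ib_tcp_server_check_new_connections();
    (void) timeout_ms;
}

static int testunexpected_sketch(int *outcount, int timeout_ms)
{
    int activity = poll_completion_queue();
    int n = gather_unexpected_messages();

    if (activity == 0 && n == 0)
        ib_block_for_activity(timeout_ms);   /* idle path: the only route to
                                                accepting new connections */
    *outcount = n;
    return activity + n;
}

/* While one connected client keeps the completion queue busy, activity
 * stays non-zero, the idle path is never taken, and connect attempts
 * from other clients starve. */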

I made a fairly simple fix for this problem and it works for me. The idea is to 
check for new connections inside testunexpected() so that they can be handled 
promptly and client processes are not starved. Here it is:

diff --git a/src/io/bmi/bmi_ib/ib.c b/src/io/bmi/bmi_ib/ib.c
index 0808797..b349938 100644
--- a/src/io/bmi/bmi_ib/ib.c
+++ b/src/io/bmi/bmi_ib/ib.c
@@ -1436,6 +1436,8 @@ restart:
        }
     }

+    ib_tcp_server_check_new_connections();
+
     *outcount = n;
     return activity + n;
}
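
With the patch applied, the equivalent of the sketch above becomes roughly the 
following (again illustrative only, reusing the stub helpers from the earlier 
sketch; the placement of the new call mirrors the hunk, just before *outcount is 
set):

static int testunexpected_sketch_patched(int *outcount, int timeout_ms)
{
    int activity = poll_completion_queue();
    int n = gather_unexpected_messages();

    if (activity == 0 && n == 0)
        ib_block_for_activity(timeout_ms);

    /* Added call: the listening socket is now serviced on every test
     * pass, so pending connects are accepted even while existing
     * traffic keeps the server busy. */
    ib_tcp_server_check_new_connections();

    *outcount = n;
    return activity + n;
}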

Please feel free to share your thoughts and comments, thank you very much.

Best Regards,
Jingwang.


_______________________________________________
Pvfs2-developers mailing list
Pvfs2-developers@beowulf-underground.org
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-developers


