[ewg] ofa_1_5_kernel 20100216-0200 daily build status

2010-02-16 Thread Vladimir Sokolovsky (Mellanox)
This email was generated automatically, please do not reply


git_url: git://git.openfabrics.org/ofed_1_5/linux-2.6.git
git_branch: ofed_kernel_1_5

Common build parameters: 

Passed:
Passed on i686 with linux-2.6.18
Passed on i686 with linux-2.6.19
Passed on i686 with linux-2.6.21.1
Passed on i686 with linux-2.6.26
Passed on i686 with linux-2.6.24
Passed on i686 with linux-2.6.22
Passed on i686 with linux-2.6.27
Passed on x86_64 with linux-2.6.16.60-0.54.5-smp
Passed on x86_64 with linux-2.6.16.60-0.21-smp
Passed on x86_64 with linux-2.6.18
Passed on x86_64 with linux-2.6.18-128.el5
Passed on x86_64 with linux-2.6.20
Passed on x86_64 with linux-2.6.19
Passed on x86_64 with linux-2.6.18-93.el5
Passed on x86_64 with linux-2.6.21.1
Passed on x86_64 with linux-2.6.24
Passed on x86_64 with linux-2.6.22
Passed on x86_64 with linux-2.6.26
Passed on x86_64 with linux-2.6.27
Passed on x86_64 with linux-2.6.25
Passed on x86_64 with linux-2.6.27.19-5-smp
Passed on x86_64 with linux-2.6.9-89.ELsmp
Passed on x86_64 with linux-2.6.9-67.ELsmp
Passed on x86_64 with linux-2.6.9-78.ELsmp
Passed on ia64 with linux-2.6.18
Passed on ia64 with linux-2.6.21.1
Passed on ia64 with linux-2.6.19
Passed on ia64 with linux-2.6.24
Passed on ia64 with linux-2.6.23
Passed on ia64 with linux-2.6.22
Passed on ia64 with linux-2.6.26
Passed on ia64 with linux-2.6.25
Passed on ppc64 with linux-2.6.18
Passed on ppc64 with linux-2.6.19

Failed:
Build failed on x86_64 with linux-2.6.18-164.el5
Log:
/home/vlad/tmp/ofa_1_5_kernel-20100216-0200_linux-2.6.18-164.el5_x86_64_check/drivers/scsi/scsi_transport_iscsi.c:1832: warning: assignment from incompatible pointer type
/home/vlad/tmp/ofa_1_5_kernel-20100216-0200_linux-2.6.18-164.el5_x86_64_check/drivers/scsi/scsi_transport_iscsi.c: In function 'iscsi_transport_init':
/home/vlad/tmp/ofa_1_5_kernel-20100216-0200_linux-2.6.18-164.el5_x86_64_check/drivers/scsi/scsi_transport_iscsi.c:1935: warning: passing argument 3 of 'netlink_kernel_create' from incompatible pointer type
/home/vlad/tmp/ofa_1_5_kernel-20100216-0200_linux-2.6.18-164.el5_x86_64_check/drivers/scsi/scsi_transport_iscsi.c:1949: error: implicit declaration of function 'netlink_kernel_release'
make[3]: *** [/home/vlad/tmp/ofa_1_5_kernel-20100216-0200_linux-2.6.18-164.el5_x86_64_check/drivers/scsi/scsi_transport_iscsi.o] Error 1
make[2]: *** [/home/vlad/tmp/ofa_1_5_kernel-20100216-0200_linux-2.6.18-164.el5_x86_64_check/drivers/scsi] Error 2
make[1]: *** [_module_/home/vlad/tmp/ofa_1_5_kernel-20100216-0200_linux-2.6.18-164.el5_x86_64_check] Error 2
make[1]: Leaving directory `/home/vlad/kernel.org/x86_64/linux-2.6.18-164.el5'
make: *** [kernel] Error 2
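
For context: linux-2.6.18-164.el5 (RHEL 5.4) predates netlink_kernel_release(), which upstream gained around 2.6.24, and its netlink_kernel_create() expects a different input-callback type, hence the warning and the hard error above. Backports usually handle this with a compat shim rather than a source change. A minimal sketch, where the HAVE_NETLINK_KERNEL_RELEASE guard is an illustrative name, not the actual OFED macro:

/* Hypothetical compat shim for kernels lacking netlink_kernel_release(). */
#ifndef HAVE_NETLINK_KERNEL_RELEASE
#include <linux/net.h>
#include <linux/netlink.h>
#include <net/sock.h>

static inline void netlink_kernel_release(struct sock *sk)
{
	if (sk)
		sock_release(sk->sk_socket);	/* drop the kernel-side socket */
}
#endif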
--
___
ewg mailing list
ewg@lists.openfabrics.org
http://lists.openfabrics.org/cgi-bin/mailman/listinfo/ewg


Re: [ewg] [PATCH OFED-151] ehca fixes

2010-02-16 Thread Vladimir Sokolovsky
Alexander Schmidt wrote:
 Hi Vlad,
 
 please apply the following fixes for OFED-1.5.1, thank you!
 
 Regards,
 Alex
 

Hi Alex,
These fixes require updates in backport patches, at least for SLES10 SP2 and SP3:

/tmp/ofa_1_5_dev_kernel-20100216-1441_linux-2.6.16.60-0.54.5-smp_check/kernel_patches/backport/2.6.16_sles10_sp3/ehca-030-ibmebus_loc_code.patch
/usr/bin/quilt --quiltrc /tmp/ofa_1_5_dev_kernel-20100216-1441_linux-2.6.16.60-0.54.5-smp_check/patches/quiltrc import /tmp/ofa_1_5_dev_kernel-20100216-1441_linux-2.6.16.60-0.54.5-smp_check/kernel_patches/backport/2.6.16_sles10_sp3/ehca-030-ibmebus_loc_code.patch
Importing patch /tmp/ofa_1_5_dev_kernel-20100216-1441_linux-2.6.16.60-0.54.5-smp_check/kernel_patches/backport/2.6.16_sles10_sp3/ehca-030-ibmebus_loc_code.patch (stored as ehca-030-ibmebus_loc_code.patch)
/usr/bin/quilt --quiltrc /tmp/ofa_1_5_dev_kernel-20100216-1441_linux-2.6.16.60-0.54.5-smp_check/patches/quiltrc push patches/ehca-030-ibmebus_loc_code.patch
Applying patch ehca-030-ibmebus_loc_code.patch
patching file drivers/infiniband/hw/ehca/ehca_classes.h
patching file drivers/infiniband/hw/ehca/ehca_eq.c
Hunk #3 FAILED at 170.
1 out of 3 hunks FAILED -- rejects in file drivers/infiniband/hw/ehca/ehca_eq.c
patching file drivers/infiniband/hw/ehca/ehca_main.c
Patch ehca-030-ibmebus_loc_code.patch does not apply (enforce with -f)

You can reproduce it by:
# ./ofed_scripts/ofed_makedist.sh
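
For anyone regenerating the failing backport, a typical quilt fix-up cycle looks like this (a sketch; it assumes the prepared tree that ofed_makedist.sh leaves under /tmp, as in the log above):

cd /tmp/ofa_1_5_dev_kernel-20100216-1441_linux-2.6.16.60-0.54.5-smp_check
quilt push -f    # force-apply the failing patch, leaving .rej files behind
# fix drivers/infiniband/hw/ehca/ehca_eq.c by hand, using ehca_eq.c.rej as a guide
quilt refresh    # regenerate the patch with corrected context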

Regards,
Vladimir

 diff -Nurp ofa_kernel-1.5.1.old/kernel_patches/fixes/ehca-0100-rework_destroy_eq.patch ofa_kernel-1.5.1/kernel_patches/fixes/ehca-0100-rework_destroy_eq.patch
 --- ofa_kernel-1.5.1.old/kernel_patches/fixes/ehca-0100-rework_destroy_eq.patch	1970-01-01 01:00:00.0 +0100
 +++ ofa_kernel-1.5.1/kernel_patches/fixes/ehca-0100-rework_destroy_eq.patch	2010-02-15 11:43:55.0 +0100
 @@ -0,0 +1,63 @@
 +commit 9420269428b3dc80c98e52beac60a3976fbef7d2
 +Author: Alexander Schmidt al...@linux.vnet.ibm.com
 +Date:   Wed Dec 9 10:11:04 2009 -0800
 +
 +IB/ehca: Rework destroy_eq()
 +
 +The ibmebus_free_irq() function, which might sleep, was called with
 +interrupts disabled.  To fix this, make sure that no interrupts are
 +running by killing the interrupt tasklet.  Also lock the
 +shca_list_lock to protect against the poll_eqs_timer running
 +concurrently.
 +
 +Signed-off-by: Alexander Schmidt al...@linux.vnet.ibm.com
 +Signed-off-by: Roland Dreier rola...@cisco.com
 +
 +diff --git a/drivers/infiniband/hw/ehca/ehca_classes.h b/drivers/infiniband/hw/ehca/ehca_classes.h
 +index c825142..0136abd 100644
 +--- a/drivers/infiniband/hw/ehca/ehca_classes.h
 ++++ b/drivers/infiniband/hw/ehca/ehca_classes.h
 +@@ -375,6 +375,7 @@ extern rwlock_t ehca_qp_idr_lock;
 + extern rwlock_t ehca_cq_idr_lock;
 + extern struct idr ehca_qp_idr;
 + extern struct idr ehca_cq_idr;
 ++extern spinlock_t shca_list_lock;
 + 
 + extern int ehca_static_rate;
 + extern int ehca_port_act_time;
 +diff --git a/drivers/infiniband/hw/ehca/ehca_eq.c b/drivers/infiniband/hw/ehca/ehca_eq.c
 +index 523e733..3b87589 100644
 +--- a/drivers/infiniband/hw/ehca/ehca_eq.c
 ++++ b/drivers/infiniband/hw/ehca/ehca_eq.c
 +@@ -169,12 +169,15 @@ int ehca_destroy_eq(struct ehca_shca *shca, struct ehca_eq *eq)
 + 	unsigned long flags;
 + 	u64 h_ret;
 + 
 +-	spin_lock_irqsave(&eq->spinlock, flags);
 + 	ibmebus_free_irq(eq->ist, (void *)shca);
 + 
 +-	h_ret = hipz_h_destroy_eq(shca->ipz_hca_handle, eq);
 ++	spin_lock_irqsave(&shca_list_lock, flags);
 ++	eq->is_initialized = 0;
 ++	spin_unlock_irqrestore(&shca_list_lock, flags);
 + 
 +-	spin_unlock_irqrestore(&eq->spinlock, flags);
 ++	tasklet_kill(&eq->interrupt_task);
 ++
 ++	h_ret = hipz_h_destroy_eq(shca->ipz_hca_handle, eq);
 + 
 + 	if (h_ret != H_SUCCESS) {
 + 		ehca_err(&shca->ib_device, "Can't free EQ resources.");
 +diff --git a/drivers/infiniband/hw/ehca/ehca_main.c b/drivers/infiniband/hw/ehca/ehca_main.c
 +index fb2d83c..129a6be 100644
 +--- a/drivers/infiniband/hw/ehca/ehca_main.c
 ++++ b/drivers/infiniband/hw/ehca/ehca_main.c
 +@@ -123,7 +123,7 @@ DEFINE_IDR(ehca_qp_idr);
 + DEFINE_IDR(ehca_cq_idr);
 + 
 + static LIST_HEAD(shca_list); /* list of all registered ehcas */
 +-static DEFINE_SPINLOCK(shca_list_lock);
 ++DEFINE_SPINLOCK(shca_list_lock);
 + 
 + static struct timer_list poll_eqs_timer;
 + 
 diff -Nurp ofa_kernel-1.5.1.old/kernel_patches/fixes/ehca-0110-dont_turnoff_irq_in_tasklet.patch ofa_kernel-1.5.1/kernel_patches/fixes/ehca-0110-dont_turnoff_irq_in_tasklet.patch
 --- ofa_kernel-1.5.1.old/kernel_patches/fixes/ehca-0110-dont_turnoff_irq_in_tasklet.patch	1970-01-01 01:00:00.0 +0100
 +++ ofa_kernel-1.5.1/kernel_patches/fixes/ehca-0110-dont_turnoff_irq_in_tasklet.patch	2010-02-15 11:43:55.0 +0100
 @@ -0,0 +1,33 @@
 +rq_spinlock is only taken in tasklet context, so it is safe not to
 +disable hardware interrupts.
 +
 +Signed-off

Re: [ewg] MLX4 Strangeness

2010-02-16 Thread Tom Tucker
Tziporet Koren wrote:
 On 2/15/2010 10:24 PM, Tom Tucker wrote:
   
 Hello,

 I am seeing some very strange behavior on my MLX4 adapters running 2.7
 firmware and the latest OFED 1.5.1. Two systems are involved, and each
 has a dual-ported MTHCA DDR adapter and MLX4 adapters.

 The scenario starts with NFSRDMA stress testing between the two systems
 running bonnie++ and iozone concurrently. The test completes and there
 is no issue. Then 6 minutes pass and the server times out the
 connection and shuts down the RC connection to the client.

   From this point on, using the RDMA CM, a new RC QP can be brought up
 and moved to RTS, however, the first RDMA_SEND to the NFS SERVER system
 fails with IB_WC_RETRY_EXC_ERR. I have confirmed:

 - that arp completed successfully and the neighbor entries are
 populated on both the client and server
 - that the QP are in the RTS state on both the client and server
 - that there are RECV WR posted to the RQ on the server and they did not
 error out
 - that no RECV WR completed successfully or in error on the server
 - that there are SEND WR posted to the QP on the client
 - the client side SEND_WR fails with error 12 as mentioned above

 I have also confirmed the following with a different application (i.e.
 rping):

 server# rping -s
 client# rping -c -a 192.168.80.129

 fails with the exact same error, i.e.
 client# rping -c -a 192.168.80.129
 cq completion failed status 12
 wait for RDMA_WRITE_ADV state 10
 client DISCONNECT EVENT...

 However, if I run rping the other way, it works fine, that is,

 client# rping -s
 server# rping -c -a 192.168.80.135

 It runs without error until I stop it.

 Does anyone have any ideas on how I might debug this?



 
 Tom
 What is the vendor syndrome error when you get a completion with error?

   
Hang on... compiling
 Does the issue occur only on the ConnectX cards (mlx4) or also on the 
 InfiniHost cards (mthca)?

   

Only the MLX4 cards.

 Tziporet

 ___
 ewg mailing list
 ewg@lists.openfabrics.org
 http://lists.openfabrics.org/cgi-bin/mailman/listinfo/ewg
   

___
ewg mailing list
ewg@lists.openfabrics.org
http://lists.openfabrics.org/cgi-bin/mailman/listinfo/ewg


Re: [ewg] [PATCH 01/02] RDMA/nes: atomic counters for cm listener create and destroy

2010-02-16 Thread Vladimir Sokolovsky
Faisal Latif wrote:
 When running long iterative MPI tests, the ethtool statistics sometimes
 show a CM Destroy Listener count greater than the CM Create Listener
 count. This inconsistency is fixed by making the counter variables atomic.
 
 Signed-off-by: Faisal Latif faisal.la...@intel.com
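
For reference, the fix applies the standard kernel atomic-counter pattern. A minimal sketch, assuming 2.6.18-era headers; the wrapper function names are illustrative, not the actual nes code:

/* Plain integer counters race when create and destroy run concurrently;
 * counting through an atomic_t keeps the two statistics consistent. */
#include <asm/atomic.h>

static atomic_t cm_listens_created = ATOMIC_INIT(0);
static atomic_t cm_listens_destroyed = ATOMIC_INIT(0);

static void account_listener_create(void)
{
	atomic_inc(&cm_listens_created);	/* safe under concurrency */
}

static void account_listener_destroy(void)
{
	atomic_inc(&cm_listens_destroyed);
}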

Applied,

Regards,
Vladimir
___
ewg mailing list
ewg@lists.openfabrics.org
http://lists.openfabrics.org/cgi-bin/mailman/listinfo/ewg


Re: [ewg] [PATCH 02/02] RDMA/nes: listener destroyed during loopback setup crash

2010-02-16 Thread Vladimir Sokolovsky
Faisal Latif wrote:
 When a listener is destroyed while an MPA response is pending for a
 loopback connection, the active-side cm_node gets destroyed twice: in
 cm_event_connect_error() and again in nes_accept() or nes_reject().
 Increment the cm_node's refcount so it is not destroyed by cm_event_connect_error().
 
 Signed-off-by: Faisal Latif faisal.la...@intel.com
 ---
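
The fix follows the usual refcount-guard idiom. A sketch of the idea with illustrative type and function names (the real code uses the nes cm_node refcount helpers):

/* Hold an extra reference across the path that may free the node, so
 * the later accept/reject teardown still operates on a live object. */
#include <asm/atomic.h>
#include <linux/slab.h>

struct cm_node_sketch {
	atomic_t ref_count;
	/* ... connection state ... */
};

static void guard_against_early_destroy(struct cm_node_sketch *node)
{
	atomic_inc(&node->ref_count);	/* taken before the error path runs */
}

static void put_cm_node_sketch(struct cm_node_sketch *node)
{
	if (atomic_dec_and_test(&node->ref_count))
		kfree(node);		/* last reference frees the node */
}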

Applied,

Regards,
Vladimir
___
ewg mailing list
ewg@lists.openfabrics.org
http://lists.openfabrics.org/cgi-bin/mailman/listinfo/ewg


Re: [ewg] MLX4 Strangeness

2010-02-16 Thread Tom Tucker
Tziporet Koren wrote:
 On 2/15/2010 10:24 PM, Tom Tucker wrote:
   
 Hello,

 I am seeing some very strange behavior on my MLX4 adapters running 2.7
 firmware and the latest OFED 1.5.1. Two systems are involved, and each
 has a dual-ported MTHCA DDR adapter and MLX4 adapters.

 The scenario starts with NFSRDMA stress testing between the two systems
 running bonnie++ and iozone concurrently. The test completes and there
 is no issue. Then 6 minutes pass and the server times out the
 connection and shuts down the RC connection to the client.

   From this point on, using the RDMA CM, a new RC QP can be brought up
 and moved to RTS, however, the first RDMA_SEND to the NFS SERVER system
 fails with IB_WC_RETRY_EXC_ERR. I have confirmed:

 - that arp completed successfully and the neighbor entries are
 populated on both the client and server
 - that the QP are in the RTS state on both the client and server
 - that there are RECV WR posted to the RQ on the server and they did not
 error out
 - that no RECV WR completed successfully or in error on the server
 - that there are SEND WR posted to the QP on the client
 - the client side SEND_WR fails with error 12 as mentioned above

 I have also confirmed the following with a different application (i.e.
 rping):

 server# rping -s
 client# rping -c -a 192.168.80.129

 fails with the exact same error, i.e.
 client# rping -c -a 192.168.80.129
 cq completion failed status 12
 wait for RDMA_WRITE_ADV state 10
 client DISCONNECT EVENT...

 However, if I run rping the other way, it works fine, that is,

 client# rping -s
 server# rping -c -a 192.168.80.135

 It runs without error until I stop it.

 Does anyone have any ideas on how I might debug this?



 
 Tom
 What is the vendor syndrome error when you get a completion with error?

   
Feb 16 15:08:29 vic10 kernel: rpcrdma: connection to 192.168.80.129:20049 closed (-103)
Feb 16 15:51:27 vic10 kernel: rpcrdma: connection to 192.168.80.129:20049 on mlx4_0, memreg 5 slots 32 ird 16
Feb 16 15:52:01 vic10 kernel: rpcrdma_event_process:160 wr_id 81002879a000 status 5 opcode 0 vendor_err 244 byte_len 0 qp 81003c9e3200 ex  src_qp  wc_flags, 0 pkey_index
Feb 16 15:52:06 vic10 kernel: rpcrdma: connection to 192.168.80.129:20049 closed (-103)
Feb 16 15:52:06 vic10 kernel: rpcrdma: connection to 192.168.80.129:20049 on mlx4_0, memreg 5 slots 32 ird 16
Feb 16 15:52:40 vic10 kernel: rpcrdma_event_process:160 wr_id 81002879a000 status 5 opcode 0 vendor_err 244 byte_len 0 qp 81002f2d8400 ex  src_qp  wc_flags, 0 pkey_index

Repeat forever

So the vendor err is 244.
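
For readers decoding the log: the completion "status 5" above and the earlier "error 12" map to upstream ib_wc_status values. A reference subset (values match enum ib_wc_status in <rdma/ib_verbs.h>):

/* Subset of enum ib_wc_status, kept here only to decode the log above. */
enum ib_wc_status_subset {
	IB_WC_WR_FLUSH_ERR  = 5,	/* WR flushed after the QP entered the error state */
	IB_WC_RETRY_EXC_ERR = 12,	/* transport retry counter exceeded */
};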

 Does the issue occur only on the ConnectX cards (mlx4) or also on the 
 InfiniHost cards (mthca)?

 Tziporet

 ___
 ewg mailing list
 ewg@lists.openfabrics.org
 http://lists.openfabrics.org/cgi-bin/mailman/listinfo/ewg
   

___
ewg mailing list
ewg@lists.openfabrics.org
http://lists.openfabrics.org/cgi-bin/mailman/listinfo/ewg


Re: [ewg] MLX4 Strangeness

2010-02-16 Thread Tom Tucker

More info...

Rebooting the client and trying to reconnect to a server that has not 
been rebooted fails in the same way.

It must be an issue with the server. I see no completions on the server 
or any indication that an RDMA_SEND was incoming. Is there some way to 
dump adapter state or otherwise see if there was traffic on the wire?

Tom
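
One low-level check is to watch the server HCA's port counters while the client retries; if the receive counters do not move, nothing is reaching the port. A sketch: mlx4_0 matches the device named in the client log, but the port number 1 is an assumption for this setup:

# perfquery (infiniband-diags) dumps the local port's SMA counters
perfquery
# or read the per-port counters directly from sysfs
cat /sys/class/infiniband/mlx4_0/ports/1/counters/port_rcv_packets
cat /sys/class/infiniband/mlx4_0/ports/1/counters/port_xmit_packets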


Tom Tucker wrote:
 Tom Tucker wrote:
 Tziporet Koren wrote:
 On 2/15/2010 10:24 PM, Tom Tucker wrote:
  
 Hello,

 I am seeing some very strange behavior on my MLX4 adapters running 2.7
 firmware and the latest OFED 1.5.1. Two systems are involved, and each
 has a dual-ported MTHCA DDR adapter and MLX4 adapters.

 The scenario starts with NFSRDMA stress testing between the two 
 systems
 running bonnie++ and iozone concurrently. The test completes and there
 is no issue. Then 6 minutes pass and the server times out the
 connection and shuts down the RC connection to the client.

   From this point on, using the RDMA CM, a new RC QP can be brought up
 and moved to RTS, however, the first RDMA_SEND to the NFS SERVER 
 system
 fails with IB_WC_RETRY_EXC_ERR. I have confirmed:

 - that arp completed successfully and the neighbor entries are
 populated on both the client and server
 - that the QP are in the RTS state on both the client and server
 - that there are RECV WR posted to the RQ on the server and they 
 did not
 error out
 - that no RECV WR completed successfully or in error on the server
 - that there are SEND WR posted to the QP on the client
 - the client side SEND_WR fails with error 12 as mentioned above

 I have also confirmed the following with a different application (i.e.
 rping):

 server# rping -s
 client# rping -c -a 192.168.80.129

 fails with the exact same error, i.e.
 client# rping -c -a 192.168.80.129
 cq completion failed status 12
 wait for RDMA_WRITE_ADV state 10
 client DISCONNECT EVENT...

 However, if I run rping the other way, it works fine, that is,

 client# rping -s
 server# rping -c -a 192.168.80.135

 It runs without error until I stop it.

 Does anyone have any ideas on how I might debug this?



 Tom
 What is the vendor syndrome error when you get a completion with error?

   
 Feb 16 15:08:29 vic10 kernel: rpcrdma: connection to 192.168.80.129:20049 closed (-103)
 Feb 16 15:51:27 vic10 kernel: rpcrdma: connection to 192.168.80.129:20049 on mlx4_0, memreg 5 slots 32 ird 16
 Feb 16 15:52:01 vic10 kernel: rpcrdma_event_process:160 wr_id 81002879a000 status 5 opcode 0 vendor_err 244 byte_len 0 qp 81003c9e3200 ex  src_qp  wc_flags, 0 pkey_index
 Feb 16 15:52:06 vic10 kernel: rpcrdma: connection to 192.168.80.129:20049 closed (-103)
 Feb 16 15:52:06 vic10 kernel: rpcrdma: connection to 192.168.80.129:20049 on mlx4_0, memreg 5 slots 32 ird 16
 Feb 16 15:52:40 vic10 kernel: rpcrdma_event_process:160 wr_id 81002879a000 status 5 opcode 0 vendor_err 244 byte_len 0 qp 81002f2d8400 ex  src_qp  wc_flags, 0 pkey_index

 Repeat forever

 So the vendor err is 244.


 Please ignore this. This log skips the failing WR (:-\). I need to do 
 another trace.



  Does the issue occur only on the ConnectX cards (mlx4) or also on 
  the InfiniHost cards (mthca)?

 Tziporet

 ___
 ewg mailing list
 ewg@lists.openfabrics.org
 http://lists.openfabrics.org/cgi-bin/mailman/listinfo/ewg
   





___
ewg mailing list
ewg@lists.openfabrics.org
http://lists.openfabrics.org/cgi-bin/mailman/listinfo/ewg