Re: [PATCH 1/2] ibmveth: Fix h_free_logical_lan error on pool resize
Brian King wrote:
> When attempting to activate additional rx buffer pools on an ibmveth
> interface that was not yet up, the error below was seen. The patch fixes
> this by only closing and opening the interface to activate the resize if
> the interface is already opened.

applied 1-2 to #upstream-fixes
[PATCH 1/2] ibmveth: Fix h_free_logical_lan error on pool resize
When attempting to activate additional rx buffer pools on an ibmveth
interface that was not yet up, the error below was seen. The patch fixes
this by only closing and opening the interface to activate the resize if
the interface is already opened.

(drivers/net/ibmveth.c:597 ua:3004) ERROR: h_free_logical_lan failed with fffc, continuing with close
Unable to handle kernel paging request for data at address 0x0ff8
Faulting instruction address: 0xd02540e0
Oops: Kernel access of bad area, sig: 11 [#1]
SMP NR_CPUS=128 NUMA PSERIES LPAR
Modules linked in: ip6t_REJECT xt_tcpudp ipt_REJECT xt_state iptable_mangle
 iptable_nat ip_nat iptable_filter ip6table_mangle ip_conntrack nfnetlink
 ip_tables ip6table_filter ip6_tables x_tables ipv6 apparmor aamatch_pcre
 loop dm_mod ibmveth sg ibmvscsic sd_mod scsi_mod
NIP: D02540E0 LR: D02540D4 CTR: 801AF404
REGS: c0001cd27870 TRAP: 0300 Not tainted (2.6.16.46-0.4-ppc64)
MSR: 80009032 CR: 24242422 XER: 0007
DAR: 0FF8, DSISR: 4000
TASK = c0001ca7b4e0[1636] 'sh' THREAD: c0001cd24000 CPU: 0
GPR00: D02540D4 C0001CD27AF0 D0265650 C0001C936500
GPR04: 80009032 0007 0002C2EF
GPR08: C0652A10 C0652AE0
GPR12: 4000 C04A3300 100A
GPR16: 100B8808 100C0F60 10084878
GPR20: 100C0CB0 100AF498 0002
GPR24: 100BA488 C0001C936760 D0258DD0 C0001C936000
GPR28: C0001C936500 D0265180 C0001C936000
NIP [D02540E0] .ibmveth_close+0xc8/0xf4 [ibmveth]
LR [D02540D4] .ibmveth_close+0xbc/0xf4 [ibmveth]
Call Trace:
[C0001CD27AF0] [D02540D4] .ibmveth_close+0xbc/0xf4 [ibmveth] (unreliable)
[C0001CD27B80] [D02545FC] .veth_pool_store+0xd0/0x260 [ibmveth]
[C0001CD27C40] [C012E0E8] .sysfs_write_file+0x118/0x198
[C0001CD27CF0] [C00CDAF0] .vfs_write+0x130/0x218
[C0001CD27D90] [C00CE52C] .sys_write+0x4c/0x8c
[C0001CD27E30] [C000871C] syscall_exit+0x0/0x40
Instruction dump:
419affd8 2fa3 419e0020 e93d e89e8040 38a00255 e87e81b0 80c90018
48001531 e8410028 e93d00e0 7fa3eb78 f81d0430 4bfffdc9 38210090

Signed-off-by: Brian King <[EMAIL PROTECTED]>
---
 linux-2.6-bjking1/drivers/net/ibmveth.c | 53 ++--
 1 file changed, 31 insertions(+), 22 deletions(-)

diff -puN drivers/net/ibmveth.c~ibmveth_large_frames drivers/net/ibmveth.c
--- linux-2.6/drivers/net/ibmveth.c~ibmveth_large_frames	2007-05-14 15:03:06.0 -0500
+++ linux-2.6-bjking1/drivers/net/ibmveth.c	2007-05-15 09:18:46.0 -0500
@@ -1243,16 +1243,19 @@ const char * buf, size_t count)
 	if (attr == &veth_active_attr) {
 		if (value && !pool->active) {
-			if(ibmveth_alloc_buffer_pool(pool)) {
-				ibmveth_error_printk("unable to alloc pool\n");
-				return -ENOMEM;
-			}
-			pool->active = 1;
-			adapter->pool_config = 1;
-			ibmveth_close(netdev);
-			adapter->pool_config = 0;
-			if ((rc = ibmveth_open(netdev)))
-				return rc;
+			if (netif_running(netdev)) {
+				if(ibmveth_alloc_buffer_pool(pool)) {
+					ibmveth_error_printk("unable to alloc pool\n");
+					return -ENOMEM;
+				}
+				pool->active = 1;
+				adapter->pool_config = 1;
+				ibmveth_close(netdev);
+				adapter->pool_config = 0;
+				if ((rc = ibmveth_open(netdev)))
+					return rc;
+			} else
+				pool->active = 1;
 		} else if (!value && pool->active) {
 			int mtu = netdev->mtu + IBMVETH_BUFF_OH;
 			int i;
@@ -1281,23 +1284,29 @@ const char * buf, size_t count)
 		if (value <= 0 || value > IBMVETH_MAX_POOL_COUNT)
 			return -EINVAL;
 		else {
-			adapter->pool_config = 1;
-			ibmveth_close(netdev);
-			adapter->pool_config = 0;
-			pool->size = value;
-			if ((rc = ibmveth_open(netdev)))
-				return rc;
+			if (netif_running(netdev)) {
+				adapter->pool_config =