[PATCH 2/5] ibmveth: Add netpoll function

2006-10-03 Thread Santiago Leon
From: Santiago Leon [EMAIL PROTECTED]

This patch adds the net poll controller function to ibmveth to support 
netconsole and netdump.

Signed-off-by: Santiago Leon [EMAIL PROTECTED]
---
 drivers/net/ibmveth.c |   11 +++
 1 file changed, 11 insertions(+)

diff -urNp a/drivers/net/ibmveth.c b/drivers/net/ibmveth.c
--- a/drivers/net/ibmveth.c 2006-07-12 11:07:41.940202928 -0500
+++ b/drivers/net/ibmveth.c 2006-07-12 11:09:45.344207680 -0500
@@ -925,6 +925,14 @@ static int ibmveth_change_mtu(struct net
return -EINVAL;
 }
 
+#ifdef CONFIG_NET_POLL_CONTROLLER
+static void ibmveth_poll_controller(struct net_device *dev)
+{
+   ibmveth_replenish_task(dev->priv);
+   ibmveth_interrupt(dev->irq, dev, NULL);
+}
+#endif
+
 static int __devinit ibmveth_probe(struct vio_dev *dev, const struct 
vio_device_id *id)
 {
int rc, i;
@@ -997,6 +1005,9 @@ static int __devinit ibmveth_probe(struc
netdev->ethtool_ops   = &netdev_ethtool_ops;
netdev->change_mtu = ibmveth_change_mtu;
SET_NETDEV_DEV(netdev, &dev->dev);
+#ifdef CONFIG_NET_POLL_CONTROLLER
+   netdev->poll_controller = ibmveth_poll_controller;
+#endif
netdev->features |= NETIF_F_LLTX;
spin_lock_init(&adapter->stats_lock);
 


[PATCH 0/5] ibmveth: various fixes

2006-10-03 Thread Santiago Leon
Hi Jeff,

Can you apply the following patches (hopefully for 2.6.19)?  They cover hardening of the 
driver initialisation for kexec, netpoll support, and some small fixes for bugs 
that people have been running into.

Thanks,
-- 
Santiago A. Leon
Power Linux Development
IBM Linux Technology Center


[PATCH 3/5] ibmveth: kdump interrupt fix

2006-10-03 Thread Santiago Leon
From: Santiago Leon [EMAIL PROTECTED]

This patch fixes a race that panics the kernel when opening the device after a 
kdump.  Without this patch there is a window where the hypervisor can send an 
interrupt before all the structures for the kdump ibmveth module are ready 
(because the hypervisor is not aware that the partition crashed and that the 
virtual driver is reloading).  We close this window by disabling the interrupts 
before registering the adapter to the hypervisor.

This patch depends on the "ibmveth: Harden driver initialisation" patch.

Signed-off-by: Santiago Leon [EMAIL PROTECTED]
---
 drivers/net/ibmveth.c |2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/ibmveth.c b/drivers/net/ibmveth.c
--- a/drivers/net/ibmveth.c
+++ b/drivers/net/ibmveth.c
@@ -527,6 +527,8 @@ static int ibmveth_open(struct net_devic
ibmveth_debug_printk("filter list @ 0x%p\n", adapter->filter_list_addr);
ibmveth_debug_printk("receive q   @ 0x%p\n", adapter->rx_queue.queue_addr);
 
+   h_vio_signal(adapter->vdev->unit_address, VIO_IRQ_DISABLE);
+
lpar_rc = ibmveth_register_logical_lan(adapter, rxq_desc, mac_address);
 
if(lpar_rc != H_SUCCESS) {


[PATCH 5/5] ibmveth: fix int rollover panic

2006-10-03 Thread Santiago Leon
From: Santiago Leon [EMAIL PROTECTED]

This patch fixes a nasty bug that has been sitting there since the very first 
versions of the driver, but only started generating a panic after we changed the 
number of 2K buffers for 2.6.16.

The consumer_index and producer_index are u32's that get incremented on every 
buffer emptied and replenished, respectively.  We use the 
{producer,consumer}_index mod'ed with the size of the pool to pick out an entry 
in the free_map.  The problem happens when the u32 rolls over and the number of 
buffers in the pool is not a perfect divisor of 2^32.  i.e. if the number 
of 2K buffers is 0x300, then just before the consumer_index rolls over, our index 
into the free map = 0xffffffff mod 0x300 = 0xff.  The next time a buffer is emptied, we 
want the index into the free map to be 0x100, but 0x0 mod 0x300 is 0x0.

This patch assigns the mod'ed result back to the consumer and producer indexes 
so that they never roll over.  The second chunk of the patch covers the 
unlikely case where the consumer_index has just been reset to 0x0 and the 
hypervisor is not able to accept that buffer.
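
To make the arithmetic concrete, here is a small standalone illustration (not part 
of the patch; the 0x300-buffer pool is just the example size from above).  It shows 
how a free_map slot derived from a raw u32 counter jumps from 0xff back to 0x0 at 
the 2^32 rollover instead of advancing to 0x100:

/* Userspace demo of the rollover described above. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint32_t pool_size = 0x300;       /* 768 two-kilobyte buffers */
    uint32_t consumer_index = 0xfffffffe;   /* just before the u32 rollover */
    int i;

    for (i = 0; i < 4; i++) {
        uint32_t free_index = consumer_index % pool_size;

        /* prints ...fe -> 0xfe, ...ff -> 0xff, 0 -> 0x0 (we wanted 0x100), 1 -> 0x1 */
        printf("consumer_index=0x%08x  free_index=0x%x\n",
               (unsigned int)consumer_index, (unsigned int)free_index);
        consumer_index++;                   /* wraps from 0xffffffff to 0x0 */
    }
    return 0;
}

Assigning the mod'ed result back to the index, as the patch below does, keeps the 
counter inside [0, pool size), so the sequence of free_map slots stays contiguous 
across what used to be the rollover point.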

Signed-off-by: Santiago Leon [EMAIL PROTECTED]
---
 drivers/net/ibmveth.c |7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ibmveth.c b/drivers/net/ibmveth.c
--- a/drivers/net/ibmveth.c
+++ b/drivers/net/ibmveth.c
@@ -213,6 +213,7 @@ static void ibmveth_replenish_buffer_poo
}
 
free_index = pool->consumer_index++ % pool->size;
+   pool->consumer_index = free_index;
index = pool->free_map[free_index];
 
ibmveth_assert(index != IBM_VETH_INVALID_MAP);
@@ -238,7 +239,10 @@ static void ibmveth_replenish_buffer_poo
if(lpar_rc != H_SUCCESS) {
pool->free_map[free_index] = index;
pool->skbuff[index] = NULL;
-   pool->consumer_index--;
+   if (pool->consumer_index == 0)
+   pool->consumer_index = pool->size - 1;
+   else
+   pool->consumer_index--;
dma_unmap_single(&adapter->vdev->dev,
pool->dma_addr[index], pool->buff_size,
DMA_FROM_DEVICE);
@@ -326,6 +330,7 @@ static void ibmveth_remove_buffer_from_p
 DMA_FROM_DEVICE);
 
free_index = adapter->rx_buff_pool[pool].producer_index++ % adapter->rx_buff_pool[pool].size;
+   adapter->rx_buff_pool[pool].producer_index = free_index;
adapter->rx_buff_pool[pool].free_map[free_index] = index;
 
mb();


[PATCH 1/5] ibmveth: Harden driver initialisation

2006-10-03 Thread Santiago Leon
From: Michael Ellerman [EMAIL PROTECTED]

Hi Jeff,

This patch has been floating around for a while now, Santi originally sent it 
in March: http://www.spinics.net/lists/netdev/msg00471.html
 
You replied saying you thought it was bonkers; I think I explained why it 
wasn't, but perhaps you disagree.

I'm resending it now in the hope you can either give us more info on your 
objections, or merge it.

After a kexec the ibmveth driver will fail when trying to register with the 
Hypervisor because the previous kernel has not unregistered.

So if the registration fails, we unregister and then try again.

We don't unconditionally unregister, because we don't want to disturb the 
regular code path for 99% of users.

Signed-off-by: Michael Ellerman [EMAIL PROTECTED]
Acked-by: Anton Blanchard [EMAIL PROTECTED]
Signed-off-by: Santiago Leon [EMAIL PROTECTED]
---
 drivers/net/ibmveth.c |   32 ++--
 1 file changed, 26 insertions(+), 6 deletions(-)

diff -urNp a/drivers/net/ibmveth.c b/drivers/net/ibmveth.c
--- a/drivers/net/ibmveth.c 2006-07-12 10:45:09.787235496 -0500
+++ b/drivers/net/ibmveth.c 2006-07-12 10:43:20.655186616 -0500
@@ -437,6 +437,31 @@ static void ibmveth_cleanup(struct ibmve
 &adapter->rx_buff_pool[i]);
 }
 
+static int ibmveth_register_logical_lan(struct ibmveth_adapter *adapter,
+union ibmveth_buf_desc rxq_desc, u64 mac_address)
+{
+   int rc, try_again = 1;
+
+   /* After a kexec the adapter will still be open, so our attempt to
+   * open it will fail. So if we get a failure we free the adapter and
+   * try again, but only once. */
+retry:
+   rc = h_register_logical_lan(adapter->vdev->unit_address,
+   adapter->buffer_list_dma, rxq_desc.desc,
+   adapter->filter_list_dma, mac_address);
+
+   if (rc != H_SUCCESS && try_again) {
+   do {
+   rc = h_free_logical_lan(adapter->vdev->unit_address);
+   } while (H_IS_LONG_BUSY(rc) || (rc == H_BUSY));
+
+   try_again = 0;
+   goto retry;
+   }
+
+   return rc;
+}
+
 static int ibmveth_open(struct net_device *netdev)
 {
struct ibmveth_adapter *adapter = netdev->priv;
@@ -502,12 +527,7 @@ static int ibmveth_open(struct net_devic
ibmveth_debug_printk("filter list @ 0x%p\n", adapter->filter_list_addr);
ibmveth_debug_printk("receive q   @ 0x%p\n", adapter->rx_queue.queue_addr);
 
-
-   lpar_rc = h_register_logical_lan(adapter->vdev->unit_address,
-adapter->buffer_list_dma,
-rxq_desc.desc,
-adapter->filter_list_dma,
-mac_address);
+   lpar_rc = ibmveth_register_logical_lan(adapter, rxq_desc, mac_address);
 
if(lpar_rc != H_SUCCESS) {
ibmveth_error_printk("h_register_logical_lan failed with %ld\n", lpar_rc);


[PATCH 4/5] ibmveth: rename proc entry name

2006-10-03 Thread Santiago Leon
From: Santiago Leon [EMAIL PROTECTED]

This patch changes the name of the proc file for each ibmveth adapter from the 
network device name to the slot number in the virtual bus.

The proc file is created when the device is probed, so a change in the name of 
the device is not reflected in the name of the proc file, which causes problems 
when identifying and removing the adapter.  The slot number is a property that 
does not change through the life of the adapter, so we use that instead.

Signed-off-by: Santiago Leon [EMAIL PROTECTED]
---
 drivers/net/ibmveth.c |8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ibmveth.c b/drivers/net/ibmveth.c
--- a/drivers/net/ibmveth.c
+++ b/drivers/net/ibmveth.c
@@ -1165,7 +1165,9 @@ static void ibmveth_proc_register_adapte
 {
struct proc_dir_entry *entry;
if (ibmveth_proc_dir) {
-   entry = create_proc_entry(adapter->netdev->name, S_IFREG, ibmveth_proc_dir);
+   char u_addr[10];
+   sprintf(u_addr, "%x", adapter->vdev->unit_address);
+   entry = create_proc_entry(u_addr, S_IFREG, ibmveth_proc_dir);
if (!entry) {
ibmveth_error_printk("Cannot create adapter proc entry");
} else {
@@ -1180,7 +1182,9 @@ static void ibmveth_proc_register_adapte
 static void ibmveth_proc_unregister_adapter(struct ibmveth_adapter *adapter)
 {
if (ibmveth_proc_dir) {
-   remove_proc_entry(adapter->netdev->name, ibmveth_proc_dir);
+   char u_addr[10];
+   sprintf(u_addr, "%x", adapter->vdev->unit_address);
+   remove_proc_entry(u_addr, ibmveth_proc_dir);
}
 }
 


[PATCH] ibmveth change buffer pools dynamically

2006-05-25 Thread Santiago Leon

Jeff,

Can you consider applying this patch?  I haven't received any feedback 
from netdev, but the changes are pretty straightforward (the majority of 
the patch is setting up the sysfs interface).



This patch provides a sysfs interface to change some properties of the 
ibmveth buffer pools (size of the buffers, number of buffers per pool, 
and whether a pool is active).  Ethernet drivers use ethtool to provide 
this type of functionality.  However, the buffers in the ibmveth driver 
can have an arbitrary size (not only regular, mini, and jumbo, which are 
the only sizes that ethtool can change), and ibmveth can also have an 
arbitrary number of buffer pools.


Under heavy load we have seen dropped packets which obviously kills TCP 
performance.  We have created several fixes that mitigate this issue, 
but we definitely need a way of changing the number of buffers for an 
adapter dynamically.  Also, changing the size of the buffers allows 
users to change the MTU to something big (bigger than a jumbo frame) 
greatly improving performance on partition to partition transfers.


The patch creates directories pool1...pool4 in the device directory in 
sysfs, each with files: num, size, and active (which default to the 
values in the mainline version).
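
For orientation only (the posted diff below is truncated before the sysfs plumbing 
itself), here is a minimal, hypothetical sketch of how such a per-pool kobject can 
be wired up with a kobj_type and custom sysfs_ops in a 2.6-era kernel.  The helper 
names (veth_pool_attribute, veth_pool_show, veth_pool_ops) are illustrative, not 
necessarily those used in the patch; struct ibmveth_buff_pool and its embedded 
kobj member come from the patch itself:

/* Hypothetical sketch, not the patch code.  Assumes the struct kobject kobj
 * member added to struct ibmveth_buff_pool by this patch, and the 2.6-era
 * <linux/kobject.h> / <linux/sysfs.h> interfaces. */
struct veth_pool_attribute {
	struct attribute attr;
	ssize_t (*show)(struct ibmveth_buff_pool *pool, char *buf);
};

#define to_pool(k)      container_of(k, struct ibmveth_buff_pool, kobj)
#define to_pool_attr(a) container_of(a, struct veth_pool_attribute, attr)

static ssize_t veth_pool_show(struct kobject *kobj, struct attribute *attr,
			      char *buf)
{
	/* recover the pool from its embedded kobject, then dispatch */
	struct ibmveth_buff_pool *pool = to_pool(kobj);
	struct veth_pool_attribute *pool_attr = to_pool_attr(attr);

	return pool_attr->show ? pool_attr->show(pool, buf) : -EIO;
}

static struct sysfs_ops veth_pool_ops = {
	.show = veth_pool_show,
	/* a .store hook handles writes to num, size and active the same way */
};

static struct kobj_type ktype_veth_pool = {
	.sysfs_ops = &veth_pool_ops,
	/* .default_attrs would list the num, size and active attributes */
};

Registering each pool's kobject under the adapter's device directory is what 
produces the per-pool directories with the num, size, and active files described 
above.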


Signed-off-by: Santiago Leon [EMAIL PROTECTED]
--
 ibmveth.c |  211 +-
 ibmveth.h |7 +-
 2 files changed, 174 insertions(+), 44 deletions(-)

--- a/drivers/net/ibmveth.h 2006-01-02 21:21:10.0 -0600
+++ b/drivers/net/ibmveth.h 2006-04-18 10:20:00.102520432 -0500
@@ -75,10 +75,13 @@
 
 #define IbmVethNumBufferPools 5
 #define IBMVETH_BUFF_OH 22 /* Overhead: 14 ethernet header + 8 opaque handle */
+#define IBMVETH_MAX_MTU 68
+#define IBMVETH_MAX_POOL_COUNT 4096
+#define IBMVETH_MAX_BUF_SIZE (1024 * 128)
 
-/* pool_size should be sorted */
 static int pool_size[] = { 512, 1024 * 2, 1024 * 16, 1024 * 32, 1024 * 64 };
 static int pool_count[] = { 256, 768, 256, 256, 256 };
+static int pool_active[] = { 1, 1, 0, 0, 0};
 
 #define IBM_VETH_INVALID_MAP ((u16)0xffff)
 
@@ -94,6 +97,7 @@ struct ibmveth_buff_pool {
 dma_addr_t *dma_addr;
 struct sk_buff **skbuff;
 int active;
+struct kobject kobj;
 };
 
 struct ibmveth_rx_q {
@@ -118,6 +122,7 @@ struct ibmveth_adapter {
 dma_addr_t filter_list_dma;
 struct ibmveth_buff_pool rx_buff_pool[IbmVethNumBufferPools];
 struct ibmveth_rx_q rx_queue;
+int pool_config;
 
 /* adapter specific stats */
 u64 replenish_task_cycles;
--- a/drivers/net/ibmveth.c 2006-01-02 21:21:10.0 -0600
+++ b/drivers/net/ibmveth.c 2006-04-18 10:19:55.624532480 -0500
@@ -96,6 +96,7 @@ static void ibmveth_proc_register_adapte
 static void ibmveth_proc_unregister_adapter(struct ibmveth_adapter *adapter);
 static irqreturn_t ibmveth_interrupt(int irq, void *dev_instance, struct 
pt_regs *regs);
 static inline void ibmveth_rxq_harvest_buffer(struct ibmveth_adapter *adapter);
+static struct kobj_type ktype_veth_pool;
 
 #ifdef CONFIG_PROC_FS
 #define IBMVETH_PROC_DIR "net/ibmveth"
@@ -133,12 +134,13 @@ static inline int ibmveth_rxq_frame_leng
 }
 
 /* setup the initial settings for a buffer pool */
-static void ibmveth_init_buffer_pool(struct ibmveth_buff_pool *pool, u32 
pool_index, u32 pool_size, u32 buff_size)
+static void ibmveth_init_buffer_pool(struct ibmveth_buff_pool *pool, u32 
pool_index, u32 pool_size, u32 buff_size, u32 pool_active)
 {
pool->size = pool_size;
pool->index = pool_index;
pool->buff_size = buff_size;
pool->threshold = pool_size / 2;
+   pool->active = pool_active;
 }
 
 /* allocate and setup an buffer pool - called during open */
@@ -180,7 +182,6 @@ static int ibmveth_alloc_buffer_pool(str
atomic_set(&pool->available, 0);
pool->producer_index = 0;
pool->consumer_index = 0;
-   pool->active = 0;
 
return 0;
 }
@@ -301,7 +302,6 @@ static void ibmveth_free_buffer_pool(str
kfree(pool->skbuff);
pool->skbuff = NULL;
}
-   pool->active = 0;
 }
 
 /* remove a buffer from a pool */
@@ -433,7 +433,9 @@ static void ibmveth_cleanup(struct ibmve
}
 
for(i = 0; i<IbmVethNumBufferPools; i++)
-   ibmveth_free_buffer_pool(adapter, &adapter->rx_buff_pool[i]);
+   if (adapter->rx_buff_pool[i].active)
+   ibmveth_free_buffer_pool(adapter, 
+&adapter->rx_buff_pool[i]);
 }
 
 static int ibmveth_open(struct net_device *netdev)
@@ -489,9 +491,6 @@ static int ibmveth_open(struct net_devic
adapter->rx_queue.num_slots = rxq_entries;
adapter->rx_queue.toggle = 1;
 
-   /* call change_mtu to init the buffer pools based in initial mtu */
-   ibmveth_change_mtu(netdev, netdev->mtu);
-
memcpy(&mac_address, netdev->dev_addr, netdev->addr_len);
mac_address = mac_address >> 16;
 
@@ -522,6 +521,17

Re: [PATCH] powerpc: ibmveth: Harden driver initialisation for kexec

2006-04-28 Thread Santiago Leon

Michael Ellerman wrote:

Looks like this hit the floor. Any chance of getting it into to 2.6.17
Jeff? AFAICT it should still apply cleanly.
 
/me knocks politely


Actually, it doesn't apply cleanly anymore.  Here's a patch that does.
--
Santiago A. Leon
Power Linux Development
IBM Linux Technology Center
From: Michael Ellerman [EMAIL PROTECTED]

After a kexec the ibmveth driver will fail when trying to register with the 
Hypervisor because the previous kernel has not unregistered.

So if the registration fails, we unregister and then try again.

Signed-off-by: Michael Ellerman [EMAIL PROTECTED]
Acked-by: Anton Blanchard [EMAIL PROTECTED]
Signed-off-by: Santiago Leon [EMAIL PROTECTED]

 ibmveth.c |   31 ++-
 1 file changed, 26 insertions(+), 5 deletions(-)

Index: kexec/drivers/net/ibmveth.c
===
--- a/drivers/net/ibmveth.c 2006-04-28 13:16:22.244724056 -0500
+++ b/drivers/net/ibmveth.c 2006-04-28 13:29:49.429736784 -0500
@@ -436,6 +436,31 @@ static void ibmveth_cleanup(struct ibmve
ibmveth_free_buffer_pool(adapter, &adapter->rx_buff_pool[i]);
 }
 
+static int ibmveth_register_logical_lan(struct ibmveth_adapter *adapter,
+union ibmveth_buf_desc rxq_desc, u64 mac_address)
+{
+   int rc, try_again = 1;
+
+   /* After a kexec the adapter will still be open, so our attempt to
+   * open it will fail. So if we get a failure we free the adapter and
+   * try again, but only once. */
+retry:
+   rc = h_register_logical_lan(adapter->vdev->unit_address,
+   adapter->buffer_list_dma, rxq_desc.desc,
+   adapter->filter_list_dma, mac_address);
+
+   if (rc != H_SUCCESS && try_again) {
+   do {
+   rc = h_free_logical_lan(adapter->vdev->unit_address);
+   } while (H_IS_LONG_BUSY(rc) || (rc == H_BUSY));
+
+   try_again = 0;
+   goto retry;
+   }
+
+   return rc;
+}
+
 static int ibmveth_open(struct net_device *netdev)
 {
struct ibmveth_adapter *adapter = netdev->priv;
@@ -505,11 +530,7 @@ static int ibmveth_open(struct net_devic
ibmveth_debug_printk("receive q   @ 0x%p\n", adapter->rx_queue.queue_addr);
 
 
-   lpar_rc = h_register_logical_lan(adapter->vdev->unit_address,
-adapter->buffer_list_dma,
-rxq_desc.desc,
-adapter->filter_list_dma,
-mac_address);
+   lpar_rc = ibmveth_register_logical_lan(adapter, rxq_desc, mac_address);
 
if(lpar_rc != H_SUCCESS) {
ibmveth_error_printk("h_register_logical_lan failed with %ld\n", lpar_rc);


Re: [PATCH] ibmveth change buffer pools dynamically

2006-04-28 Thread Santiago Leon

Santiago Leon wrote:


This patch provides a sysfs interface to change some properties of the
ibmveth buffer pools (size of the buffers, number of buffers per pool,
and whether a pool is active).  Ethernet drivers use ethtool to provide
this type of functionality.  However, the buffers in the ibmveth driver
can have an arbitrary size (not only regular, mini, and jumbo, which are
the only sizes that ethtool can change), and ibmveth can also have an
arbitrary number of buffer pools.


Under heavy load we have seen dropped packets which obviously kills TCP
performance.  We have created several fixes that mitigate this issue,
but we definitely need a way of changing the number of buffers for an
adapter dynamically.  Also, changing the size of the buffers allows
users to change the MTU to something big (bigger than a jumbo frame)
greatly improving performance on partition to partition transfers.

The patch creates directories pool1...pool4 in the device directory in
sysfs, each with files: num, size, and active (which default to the
values in the mainline version).

Comments and suggestions are welcome...



Jeff, if you don't have any problem with this patch, can you apply it?

Thanks,

--
Santiago A. Leon
Power Linux Development
IBM Linux Technology Center



[PATCH] ibmveth support for netpoll

2006-04-28 Thread Santiago Leon

This patch adds NETPOLL support for the ibmveth driver. Please apply.

Signed-off-by: Santiago Leon [EMAIL PROTECTED]

 ibmveth.c |   11 +++
 1 file changed, 11 insertions(+)
--- a/drivers/net/ibmveth.c 2006-04-28 13:16:22.244724056 -0500
+++ b/drivers/net/ibmveth.c 2006-04-28 13:17:59.971778584 -0500
@@ -918,6 +918,14 @@ static int ibmveth_change_mtu(struct net
return 0;   
 }
 
+#ifdef CONFIG_NET_POLL_CONTROLLER
+static void ibmveth_poll_controller(struct net_device *dev)
+{
+   ibmveth_replenish_task(dev->priv);
+   ibmveth_interrupt(dev->irq, dev, NULL);
+}
+#endif
+
 static int __devinit ibmveth_probe(struct vio_dev *dev, const struct 
vio_device_id *id)
 {
int rc, i;
@@ -989,6 +997,9 @@ static int __devinit ibmveth_probe(struc
netdev->ethtool_ops   = &netdev_ethtool_ops;
netdev->change_mtu = ibmveth_change_mtu;
SET_NETDEV_DEV(netdev, &dev->dev);
+#ifdef CONFIG_NET_POLL_CONTROLLER
+   netdev->poll_controller = ibmveth_poll_controller;
+#endif
netdev->features |= NETIF_F_LLTX;
spin_lock_init(&adapter->stats_lock);
 


[PATCH] ibmveth change buffer pools dynamically

2006-04-25 Thread Santiago Leon
This patch provides a sysfs interface to change some properties of the
ibmveth buffer pools (size of the buffers, number of buffers per pool,
and whether a pool is active).  Ethernet drivers use ethtool to provide
this type of functionality.  However, the buffers in the ibmveth driver
can have an arbitrary size (not only regular, mini, and jumbo, which are
the only sizes that ethtool can change), and ibmveth can also have an
arbitrary number of buffer pools.

Under heavy load we have seen dropped packets which obviously kills TCP
performance.  We have created several fixes that mitigate this issue,
but we definitely need a way of changing the number of buffers for an
adapter dynamically.  Also, changing the size of the buffers allows
users to change the MTU to something big (bigger than a jumbo frame)
greatly improving performance on partition to partition transfers.

The patch creates directories pool1...pool4 in the device directory in
sysfs, each with files: num, size, and active (which default to the
values in the mainline version).

Comments and suggestions are welcome...
-- 
Santiago A. Leon
Power Linux Development
IBM Linux Technology Center
--- a/drivers/net/ibmveth.h	2006-01-02 21:21:10.0 -0600
+++ b/drivers/net/ibmveth.h	2006-04-18 10:20:00.102520432 -0500
@@ -75,10 +75,13 @@
 
 #define IbmVethNumBufferPools 5
 #define IBMVETH_BUFF_OH 22 /* Overhead: 14 ethernet header + 8 opaque handle */
+#define IBMVETH_MAX_MTU 68
+#define IBMVETH_MAX_POOL_COUNT 4096
+#define IBMVETH_MAX_BUF_SIZE (1024 * 128)
 
-/* pool_size should be sorted */
 static int pool_size[] = { 512, 1024 * 2, 1024 * 16, 1024 * 32, 1024 * 64 };
 static int pool_count[] = { 256, 768, 256, 256, 256 };
+static int pool_active[] = { 1, 1, 0, 0, 0};
 
 #define IBM_VETH_INVALID_MAP ((u16)0xffff)
 
@@ -94,6 +97,7 @@ struct ibmveth_buff_pool {
 dma_addr_t *dma_addr;
 struct sk_buff **skbuff;
 int active;
+struct kobject kobj;
 };
 
 struct ibmveth_rx_q {
@@ -118,6 +122,7 @@ struct ibmveth_adapter {
 dma_addr_t filter_list_dma;
 struct ibmveth_buff_pool rx_buff_pool[IbmVethNumBufferPools];
 struct ibmveth_rx_q rx_queue;
+int pool_config;
 
 /* adapter specific stats */
 u64 replenish_task_cycles;
--- a/drivers/net/ibmveth.c	2006-01-02 21:21:10.0 -0600
+++ b/drivers/net/ibmveth.c	2006-04-18 10:19:55.624532480 -0500
@@ -96,6 +96,7 @@ static void ibmveth_proc_register_adapte
 static void ibmveth_proc_unregister_adapter(struct ibmveth_adapter *adapter);
 static irqreturn_t ibmveth_interrupt(int irq, void *dev_instance, struct pt_regs *regs);
 static inline void ibmveth_rxq_harvest_buffer(struct ibmveth_adapter *adapter);
+static struct kobj_type ktype_veth_pool;
 
 #ifdef CONFIG_PROC_FS
 #define IBMVETH_PROC_DIR "net/ibmveth"
@@ -133,12 +134,13 @@ static inline int ibmveth_rxq_frame_leng
 }
 
 /* setup the initial settings for a buffer pool */
-static void ibmveth_init_buffer_pool(struct ibmveth_buff_pool *pool, u32 pool_index, u32 pool_size, u32 buff_size)
+static void ibmveth_init_buffer_pool(struct ibmveth_buff_pool *pool, u32 pool_index, u32 pool_size, u32 buff_size, u32 pool_active)
 {
 	pool->size = pool_size;
 	pool->index = pool_index;
 	pool->buff_size = buff_size;
 	pool->threshold = pool_size / 2;
+	pool->active = pool_active;
 }
 
 /* allocate and setup an buffer pool - called during open */
@@ -180,7 +182,6 @@ static int ibmveth_alloc_buffer_pool(str
 	atomic_set(&pool->available, 0);
 	pool->producer_index = 0;
 	pool->consumer_index = 0;
-	pool->active = 0;
 
 	return 0;
 }
@@ -301,7 +302,6 @@ static void ibmveth_free_buffer_pool(str
 		kfree(pool->skbuff);
 		pool->skbuff = NULL;
 	}
-	pool->active = 0;
 }
 
 /* remove a buffer from a pool */
@@ -433,7 +433,9 @@ static void ibmveth_cleanup(struct ibmve
 	}
 
 	for(i = 0; i<IbmVethNumBufferPools; i++)
-		ibmveth_free_buffer_pool(adapter, &adapter->rx_buff_pool[i]);
+		if (adapter->rx_buff_pool[i].active)
+			ibmveth_free_buffer_pool(adapter, 
+		 &adapter->rx_buff_pool[i]);
 }
 
 static int ibmveth_open(struct net_device *netdev)
@@ -489,9 +491,6 @@ static int ibmveth_open(struct net_devic
 	adapter->rx_queue.num_slots = rxq_entries;
 	adapter->rx_queue.toggle = 1;
 
-	/* call change_mtu to init the buffer pools based in initial mtu */
-	ibmveth_change_mtu(netdev, netdev->mtu);
-
 	memcpy(&mac_address, netdev->dev_addr, netdev->addr_len);
 	mac_address = mac_address >> 16;
 
@@ -522,6 +521,17 @@ static int ibmveth_open(struct net_devic
 		return -ENONET; 
 	}
 
+	for(i = 0; i<IbmVethNumBufferPools; i++) {
+		if(!adapter->rx_buff_pool[i].active)
+			continue;
+		if (ibmveth_alloc_buffer_pool(&adapter->rx_buff_pool[i])) {
+			ibmveth_error_printk("unable to alloc pool\n");
+			adapter->rx_buff_pool[i].active = 0;
+			ibmveth_cleanup(adapter);
+			return -ENOMEM;
+		}
+	}
+
+	ibmveth_debug_printk("registering irq 0x%x\n", netdev->irq);
+	if((rc = request_irq(netdev->irq, ibmveth_interrupt, 0, netdev->name, netdev)) != 0) {
 		ibmveth_error_printk(unable to request 

[RFC] ibmveth buffer pool sizes

2006-04-04 Thread Santiago Leon
The current ibmveth driver has a fixed number of buffers per buffer pool 
and under certain workloads, it's running out of buffers. So I would 
like to be able to change the number of buffers in each pool at runtime.


The way most drivers do it is with ethtool -G and/or module parameters. 
 However, this is not appropriate for the ibmveth driver because the 
driver can use buffers of any arbitrary size, not only regular, mini and 
jumbo.  For instance, at the moment we have buffers of 512 bytes, 2kB, 8kB, and 
16kB (which get allocated based on the MTU).


I was thinking that I can achieve the flexibility of having X number of 
buffer pools, each with Y(X) number of buffers of Z(X) size with sysfs 
like this:


/sys/devices/vio/300X/num_pools
 /pool0
 /pool0/size
 /pool0/num
 /pool1
 /pool1/size
 /pool1/num
.
.
 /pool(num_pools-1)
 /pool(num_pools-1)/size
 /pool(num_pools-1)/num

Would that be an acceptable way of providing the functionality?

Another less flexible way that comes to mind is fixing the number of 
pools, fixing the size of buffers for each pool, and providing module 
parameters that would determine the number of buffers in each pool.


Any ideas/thoughts would be greatly appreciated.

--
Santiago A. Leon
Power Linux Development
IBM Linux Technology Center
