RE: ML403 gigabit ethernet bandwidth - 2.6 kernel

2007-07-04 Thread Mohammad Sadegh Sadri

Dear All,

Our tests show that CPU usage is always 100% during the netperf test, so
the speed of the CPU limits the overall performance of the gigabit link.
If we can increase the CPU core clock frequency, we may achieve better
results with the existing hardware/software configuration.
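
For reference, netperf can report CPU utilization on both ends itself, which
is an easy way to confirm the core really is saturated. A minimal sketch (the
-c/-C flags are standard netperf options; 10.10.10.250 is the PC address from
our earlier tests):

# report local (board) and remote (PC) CPU utilization during the run
netperf -H 10.10.10.250 -t TCP_STREAM -l 30 -c -C -- -m 16384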

I know that the PPC core inside the FX12 can run at clock frequencies of up
to 450MHz; however, Base System Builder for the ML403 only allows frequencies
of up to 300MHz for the PPC core. Does anybody here know how I can make the
PPC core run at 400MHz on the ML403?

thanks






RE: ML403 gigabit ethernet bandwidth - 2.6 kernel

2007-06-26 Thread Ming Liu
Dear Mohammad,

>ML403--->PC : 410Mbits/s
>PC--->ML403 : 210Mbits/s

These results are interesting. In principle, board-to-PC will be less than 
PC-to-board. Also, your board-to-PC speed is quite fast; I never got that 
high before. :)

>We have described the characteristics of our base system in previous posts 
here
>
>In addition we have:
>1- enabled the ppc caches

This will help performance quite a lot.

>2- we have set BD_IN_BRAM in adapter.c to 1. ( default is 0 )

Actually I didn't try modifying this before; my previous results are based 
on the BDs NOT being in BRAM. :) From my understanding, enabling this option 
puts the buffer descriptors in BRAM rather than DDR. Perhaps I can also try 
it and see if there is any improvement on my system.

>TX_THRESHOLD is 16 and RX_THRESHOLD is 2.
>
>the virtex4 fx12 device on ML403 is now completely full, we do not have 
any free block memories nor any logic slices. Maybe if we had more space we 
could choose higher values for XTE_SEND_BD_CNT and XTE_RECV_BD_CNT i.e. 
384. Do you think this will improve performance?

Probably yes, but I have never modified these numbers before. My defaults 
are 512 for each.

>There is also another interesting test,
>We executed netperf on both the PC and the ML403 simultaneously. When we do 
not put the BDs in BRAM, the performance of the ML403-->PC link drops from 
390Mbits/s to 45Mbits/s, but when using PLB BRAMs for the BDs the performance 
decreases from 410Mbits/s to just 130Mbits/s. It is important when the user 
wants to transfer data in both directions simultaneously.

Definitely! The bottleneck is CPU processing capability, so if you send and 
receive data at the same time the results will be much worse. I think another 
reason is that TCP is a guaranteed-delivery protocol, so acknowledgements come 
back while you are sending packets out and take a little of your bandwidth 
away. Compared with the CPU consumption, though, this is probably trivial.

BR
Ming





RE: ML403 gigabit ethernet bandwidth - 2.6 kernel

2007-06-26 Thread Mohammad Sadegh Sadri

Dear Ming,

Thanks to your comments, our tests now give the following results:

ML403--->PC : 410Mbits/s
PC--->ML403 : 210Mbits/s

We have described the characteristics of our base system in previous posts here.

In addition we have:
1- enabled the PPC caches
2- set BD_IN_BRAM in adapter.c to 1 (the default is 0)

TX_THRESHOLD is 16 and RX_THRESHOLD is 2.

The Virtex-4 FX12 device on the ML403 is now completely full; we do not have 
any free block RAMs or logic slices left. Maybe if we had more space we could 
choose higher values for XTE_SEND_BD_CNT and XTE_RECV_BD_CNT, e.g. 384. Do you 
think this would improve performance?

There is also another interesting test. We executed netperf on both the PC 
and the ML403 simultaneously. When we do not put the BDs in BRAM, the 
performance of the ML403-->PC link drops from 390Mbits/s to 45Mbits/s, but 
when using PLB BRAMs for the BDs the performance decreases from 410Mbits/s to 
just 130Mbits/s. This matters when the user wants to transfer data in both 
directions simultaneously.
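
For anyone who wants to reproduce the bidirectional load from the board side
alone, a rough sketch (TCP_MAERTS is the standard netperf test that pulls data
from the remote netserver, so the two runs together exercise both directions
at once; 10.10.10.250 is the PC from our setup):

# ML403 --> PC stream in the background, PC --> ML403 stream in the foreground
netperf -H 10.10.10.250 -t TCP_STREAM -l 30 &
netperf -H 10.10.10.250 -t TCP_MAERTS -l 30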

Thanks






RE: ML403 gigabit ethernet bandwidth - 2.6 kernel

2007-06-26 Thread Ming Liu
Actually I have asked the Xilinx experts about the statistics. With the 
PLB_TEMAC we can also get a result like that, say 300Mbps for TCP. (From 
their numbers, the throughput is even higher.)

Some reminders from my experience: remember to enable everything in the 
hardware and software that can improve performance, such as checksum 
offloading, the data realignment engines (DRE), large FIFOs, and so on, in 
the PLB_TEMAC configuration in EDK. Also remember to enable the caches. On 
the software side, interrupt coalescing will help as well. At that point we 
normally get more than 100Mbps for TCP, and a jumbo-frame MTU of 8982 will 
almost double this number. 
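
A quick way to double-check from Linux what actually got enabled (a sketch
only; it assumes the TEMAC driver implements the usual ethtool hooks):

ethtool -k eth0            # shows rx/tx checksum and scatter-gather offload state
ifconfig eth0 | grep MTU   # confirms the jumbo-frame MTU really took effect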

Have fun.

BR
Ming




RE: ML403 gigabit ethernet bandwidth - 2.6 kernel

2007-06-25 Thread Greg Crocker

I was able to achieve a ~320 Mbit/sec data rate using the Gigabit System
Reference Design (GSRD, XAPP535/536) from Xilinx.  This utilizes the
LocalLink TEMAC to perform the transfers.  The reference design provides
Linux 2.4 drivers that can be ported to Linux 2.6 with a little effort.

This implementation did not use checksum offloading and the data rates were
achieved using TCP_STREAM on netperf.

Greg

RE: ML403 gigabit ethernet bandwidth - 2.6 kernel

2007-06-25 Thread Glenn . G . Hart

All,

I am also very interested in the network throughput.  I am using the Avnet
Mini-Module, which has a V4FX12.  The ML403 is very close to the Mini-Module.
I am getting a throughput of about 100 Mbps.  The biggest difference was
turning on the cache; 100 MHz vs. 300 MHz only improved the performance
slightly.  Using the checksum offloading was also a big help in getting the
throughput up.  The RX threshold also helped, but the jumbo frames did not
seem to help.  I am not sure what I can do to get the 300 Mbps Ming is
getting.  I saw in a previous post that someone was using a 128k FIFO depth;
I am using a 32k depth.

Glenn




 



Re: ML403 gigabit ethernet bandwidth - 2.6 kernel

2007-06-25 Thread Bhupender Saharan

Hi,

We need to find out where the bottleneck is.

1. Run vmstat on the ML403 board and find out what percentage of the CPU is
busy while you are transferring the file. That will show whether the CPU is
saturated or not.
2. Run oprofile and find out which routines are eating away the CPU time.

Once we have data from both of the above, we can find the bottlenecks; a
rough sketch of both steps is below.
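
Roughly like this (a sketch only; the exact oprofile invocation depends on how
oprofile and the vmlinux image are set up on the board):

# 1. watch CPU load while the transfer runs (100 minus the "id" column = busy %)
vmstat 1

# 2. profile a 30-second netperf run (assumes oprofile support in this kernel)
opcontrol --vmlinux=/path/to/vmlinux --start
netperf -H 10.10.10.250 -t TCP_STREAM -l 30
opcontrol --shutdown
opreport -l | head -20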


Regards
Bhupi


On 6/23/07, Mohammad Sadegh Sadri <[EMAIL PROTECTED]> wrote:



Dear all,

Recently we did a set of tests on the performance of the Virtex-4 FX hard
TEMAC module using the ML403.

We studied all of the posts here carefully. These are the system
characteristics:

Board : ML403
EDK: EDK9.1SP2
Hard TEMAC version and PLTEMAC version are both 3.0.a
PPC clock frequency :  300MHz
Kernel : 2.6.21-rc7, downloaded from Grant's git tree about one week ago
DMA type: 3 (sg dma)
DRE : enabled for TX and RX, (2)
CSUM offload is enabled for both of TX and RX
tx and rx fifo sizes : 131072 bits

The board comes up over an NFS root file system completely and without any
problems.

The PC system used for these tests is: P4 Dual Core CPU, 3.4GHz, 2 GB of
memory, dual gigabit ethernet ports, running Linux 2.6.21.3.
We have tested the PC system's bandwidth and it can easily reach 966Mbits/s
when connected to the same PC (using the same crossover cable used for the
ML403 test).

Netperf is compiled with TCP sendfile support enabled (-DHAVE_SENDFILE).

(from board to PC)
netperf -t TCP_SENDFILE -H 10.10.10.250 -F /boot/zImage.elf -- -m 16384 -s
87380 -S 87380

The measured bandwidth for this test was just 40.66Mbits/s.
The same is true for netperf from the PC to the board.

We do not have any more ideas about what we should do to improve the
bandwidth.
Any help or ideas are appreciated...


RE: ML403 gigabit ethernet bandwidth - 2.6 kernel

2007-06-25 Thread Ming Liu
Dear Mohammad,

>The results are as follows:
>PC-->ML403
>TCP_SENDFILE : 38Mbps
>
>ML403--->PC
>TCP_SENDFILE: 155Mbps

This result is unreasonable: because the PC is more powerful than your board, 
PC->board should be faster than board->PC.

>The transfer rate from ML403 to PC has improved by a factor of 2.
>I see in the posts here in the mailing list that you have reached a 
bandwidth of 301Mbps.

Yes, with all the features which could improve performance enabled, we can 
get around 300Mbps for TCP transfer. One more hint: did you enable the caches 
on your system? Perhaps that will help. Anyway, double-check your hardware 
design to make sure all features are enabled. That's all I can suggest.

BR
Ming



RE: ML403 gigabit ethernet bandwidth - 2.6 kernel

2007-06-24 Thread Mohammad Sadegh Sadri

Dear Ming

We have changed our system characteristics to have TX_THRESHOLD=16 and 
RX_THRESHOLD=8 and in addition we enabled jumbo frames of 8982 bytes.

The results are as follows:
PC-->ML403
TCP_SENDFILE : 38Mbps

ML403--->PC
TCP_SENDFILE: 155Mbps

The transfer rate from ML403 to PC has improved by a factor of 2.
I see in the posts here in the mailing list that you have reached a bandwidth 
of 301Mbps.

We are also wondering why we do not see any improvement in the PC-to-ML403 bandwidth.

We also observed that if TX_THRESHOLD=16 and RX_THRESHOLD=2, then the 
PC-to-ML403 bandwidth increases to something near 60Mbps. 









RE: ML403 gigabit ethernet bandwidth - 2.6 kernel

2007-06-23 Thread Ming Liu
Please use the following command in Linux:

ifconfig eth0 mtu 8982

You should do the same on your PC for the measurement.

Ming




RE: ML403 gigabit ethernet bandwidth - 2.6 kernel

2007-06-23 Thread Mohammad Sadegh Sadri

Dear Ming,

Thanks a lot for the reply,

About the thresholds and waitbound: OK! I'll adjust them in adapter.c.

But what about enabling jumbo frames? Should I do anything special to enable 
jumbo frame support? 

We were thinking that it is enabled by default. Is it?

thanks







RE: ML403 gigabit ethernet bandwidth - 2.6 kernel

2007-06-23 Thread Ming Liu
Dear Mohammad,
There are some parameters which could be adjusted to improve the 
performance: TX/RX_Threshold and TX/RX_waitbound. In my system, we use 
TX_Threshold=16, RX_Threshold=8, and both waitbounds=1.

Also, a jumbo frame MTU of 8982 could be enabled.

Try those hints and share your improvement with us.
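
(These knobs live in the driver's adapter.c mentioned elsewhere in this
thread; the path below is only an assumed example for a typical 2.6 Xilinx
driver tree, so adjust it to wherever adapter.c sits in your kernel:)

# locate the coalescing/threshold defines in the TEMAC driver (hypothetical path)
grep -inE "threshold|waitbound" drivers/net/xilinx_temac/adapter.c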

BR
Ming

