OK, I will do these tests tomorrow.
On Wed, Apr 22, 2009 at 9:30 PM, Michael Blizek
mic...@michaelblizek.twilightparadox.com wrote:
Hi!
On 10:56 Wed 22 Apr , Devesh Sharma wrote:
On Tue, Apr 21, 2009 at 9:48 PM, Michael Blizek
mic...@michaelblizek.twilightparadox.com wrote:
Hi!
On
The following link explains how to use oprofile:
http://www.serpentine.com/blog/2006/12/17/make-linux-performance-analysis-easier-with-oprofile/
Vishal
Devesh Sharma wrote:
On Tue, Apr 21, 2009 at 9:48 PM, Michael Blizek
mic...@michaelblizek.twilightparadox.com wrote:
Hi!
On 18:37 Tue 21 Apr
One more query I have in mind: let's say the user wants to transfer 1 MB of
data (calling send() only once), and the socket buffer size is set to
4096, for example. How, and by whom, is the packaging of the data handled?
On Wed, Apr 22, 2009 at 10:56 AM, Devesh Sharma deves...@gmail.com wrote:
On Tue, Apr 21, 2009 at
It's handled by the TCP stack within the Linux kernel. If you want to take a look
at the code, open linux/net/ipv4/tcp.c and look at tcp_sendmsg().
Vishal
Devesh Sharma wrote:
One more query I have in mind: let's say the user wants to transfer 1 MB of
data (calling send() only once), and the socket
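Not part of the original thread, just a minimal userspace sketch of what the sending side looks like, assuming an already-connected blocking TCP socket; the helper name send_all() and the 4096-byte SO_SNDBUF value are made up for illustration. The point is that the application only hands bytes to the kernel: tcp_sendmsg() copies them into the socket send buffer and the TCP stack cuts them into MSS-sized segments, so the only packaging user space has to do is loop over short writes.

/* Illustrative sketch only (not from the thread): a connected, blocking TCP
 * socket with a deliberately small 4096-byte send buffer.  The application
 * never splits the 1 MB into packets itself; the kernel's tcp_sendmsg()
 * copies what fits into the socket send buffer and TCP segments it into
 * MSS-sized packets on the wire.  send_all() is a hypothetical helper name. */
#include <sys/types.h>
#include <sys/socket.h>
#include <errno.h>
#include <stddef.h>

static ssize_t send_all(int fd, const char *buf, size_t len)
{
    int sndbuf = 4096;   /* small socket send buffer, as in the question */
    size_t sent = 0;

    if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf)) < 0)
        return -1;

    while (sent < len) {
        /* Even a blocking send() may accept fewer bytes than requested,
         * so loop until the whole buffer has been handed to the kernel. */
        ssize_t n = send(fd, buf + sent, len - sent, 0);
        if (n < 0) {
            if (errno == EINTR)
                continue;
            return -1;
        }
        sent += (size_t)n;
    }
    return (ssize_t)sent;
}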
Hi!
On 10:56 Wed 22 Apr , Devesh Sharma wrote:
On Tue, Apr 21, 2009 at 9:48 PM, Michael Blizek
mic...@michaelblizek.twilightparadox.com wrote:
Hi!
On 18:37 Tue 21 Apr , Devesh Sharma wrote:
Hello michi, sorry for the late reply. Comments are inline:
...
ping -f -s 4096 say? How
On Tue, Apr 21, 2009 at 06:49:13AM +0200, Michael Blizek wrote:
The CPU causes problems only when you test with small packets, say 64-byte
packets. Small packets cause more frequent interrupts for the CPU.
If you can pass 24 Mbps with a 2k data size, then you should at least be able to pass
24 Mbps
Hello michi, sorry for the late reply. Comments are inline:
On Sat, Apr 18, 2009 at 12:23 AM, Michael Blizek
mic...@michaelblizek.twilightparadox.com wrote:
Hi!
On 14:01 Fri 17 Apr , Devesh Sharma wrote:
Hi michi,
It is using the TCP/IP protocol; the program is Intel's MPI benchmark IMB,
and the CPU
Hi!
On 14:31 Tue 21 Apr , Jeffrey Cao wrote:
On Tue, Apr 21, 2009 at 06:49:13AM +0200, Michael Blizek wrote:
The CPU causes problems only when you test with small packets, say 64-byte
packets. Small packets cause more frequent interrupts for the CPU.
If you can pass 24 Mbps
Hi!
On 18:37 Tue 21 Apr , Devesh Sharma wrote:
Hello michi, sorry for the late reply. Comments are inline:
...
Is there any packet loss on the link? What does ping -f and
Packet loss is very low; after transferring terabits of data I observed
only 5 tx packet drops.
ping -f -s 4096 say? How
On Tue, Apr 21, 2009 at 9:48 PM, Michael Blizek
mic...@michaelblizek.twilightparadox.com wrote:
Hi!
On 18:37 Tue 21 Apr , Devesh Sharma wrote:
Hello michi, sorry for the late reply. Comments are inline:
...
Is there any packet loss on the link? What does ping -f and
packet loss is very
Hi!
On 15:57 Sun 19 Apr , Jeffrey Cao wrote:
On 2009-04-17, Michael Blizek mic...@michaelblizek.twilightparadox.com
wrote:
Hi!
On 11:03 Fri 17 Apr , Devesh Sharma wrote:
Hello list,
I have written a network device driver, and to validate it, I am
running a bandwidth
On 2009-04-17, Devesh Sharma deves...@gmail.com wrote:
Hi michi,
It is using the TCP/IP protocol; the program is Intel's MPI benchmark IMB,
and the CPU is surely not a bottleneck because it's a quad-core, quad-socket
machine with 64 GB of physical memory. It's a proprietary device for
System Area Networks
On 2009-04-17, Michael Blizek mic...@michaelblizek.twilightparadox.com wrote:
Hi!
On 11:03 Fri 17 Apr , Devesh Sharma wrote:
Hello list,
I have written a network device driver, and to validate it, I am
running a bandwidth measurement tool on it.
I am encountering a strange drop in
Hello list,
I have written a network device driver, and to validate it, I am
running a bandwidth measurement tool on it.
I am encountering a strange drop in bandwidth when the data size reaches
4096: the bandwidth figures drop from 24 Mbps (for a 2k data size)
to 0.43 Mbps. What can be the problem? The
Hi!
On 11:03 Fri 17 Apr , Devesh Sharma wrote:
Hello list,
I have written a network device driver, and to validate it, I am
running a bandwidth measurement tool on it.
I am encountering a strange drop in bandwidth when the data size reaches
4096: the bandwidth figures drop from 24 Mbps (for
Hi michi,
It is using the TCP/IP protocol; the program is Intel's MPI benchmark IMB,
and the CPU is surely not a bottleneck because it's a quad-core, quad-socket
machine with 64 GB of physical memory. It's a proprietary device for
System Area Networks used for cluster computing. But the main problem
that I have
Thanks for replying, michi. BTW, how does the layer above the network device
driver manage the skb cache? How can we tune this skb cache size? Any
idea?
On Fri, Apr 17, 2009 at 2:01 PM, Devesh Sharma deves...@gmail.com wrote:
Hi michi,
It is using the TCP/IP protocol; the program is Intel's MPI benchmark IMB,
Hi!
On 14:01 Fri 17 Apr , Devesh Sharma wrote:
Hi michi,
It is using the TCP/IP protocol; the program is Intel's MPI benchmark IMB,
and the CPU is surely not a bottleneck because it's a quad-core, quad-socket
machine with 64 GB of physical memory. It's a proprietary device for
System Area Networks used
Hi!
On 14:54 Fri 17 Apr , Devesh Sharma wrote:
Thanks for replying, michi. BTW, how does the layer above the network device
driver manage the skb cache? How can we tune this skb cache size? Any
idea?
See http://lxr.linux.no/linux+v2.6.27.7/net/core/skbuff.c#L179
The struct sk_buff and the data area
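Not from Michael's mail, just a hedged sketch of what that allocation looks like from a driver's receive path, assuming a hypothetical Ethernet-style device and 2.6-era APIs; the function name example_rx() is made up. dev_alloc_skb() ends up in the __alloc_skb() path referenced above, which takes the struct sk_buff header from a slab cache and kmalloc()s the data area separately.

/* Hedged sketch (hypothetical driver, 2.6-era APIs, Ethernet-style device):
 * a typical receive path.  dev_alloc_skb() goes through __alloc_skb(), which
 * allocates the struct sk_buff header from a slab cache and kmalloc()s the
 * data area separately -- see the skbuff.c link above. */
#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/string.h>

static void example_rx(struct net_device *dev, const void *frame, unsigned int len)
{
    struct sk_buff *skb;

    skb = dev_alloc_skb(len + NET_IP_ALIGN);  /* header + data area allocated here */
    if (!skb) {
        dev->stats.rx_dropped++;              /* no buffer available: drop the frame */
        return;
    }

    skb_reserve(skb, NET_IP_ALIGN);           /* keep the IP header aligned */
    memcpy(skb_put(skb, len), frame, len);    /* copy the frame into the data area */

    skb->protocol = eth_type_trans(skb, dev); /* set protocol, strip the MAC header */
    netif_rx(skb);                            /* hand the skb up to the stack */
}

As the referenced file shows, skbuff_head_cache is an ordinary slab cache created in skb_init(), so recycling of the sk_buff heads is handled by the slab allocator rather than by a separately tunable cache in the driver.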