Rick Steeves wrote:
>Roger Heflin wrote:
>> That would be network then.
>>
>> The mvpmc does have issues if the block size is too large; it loses
>> packets because the network chip is rather cheap.
> 
> I don't see how it's the network. I'm moving effectively the same amount 
> of data across the network using either method (MythTV protocol or CIFS 
> directly), right?  That it works with CIFS means that the network is 
> handling the load just fine, right?

CIFS, at the application layer, may bundle data into chunks that are 
smaller than, or otherwise different from, those of the MythTV protocol, 
and apparently better handled by your network.

That's, of course, just speculation, but Roger points out how with NFS 
you can adjust the block size, and it makes a difference.
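
For example, if you were mounting the recordings over NFS, you could 
experiment with the rsize/wsize mount options (the values, server 
address, and export path below are just illustrative):

# mount -t nfs -o rsize=4096,wsize=4096 192.168.0.1:/video /video

Smaller values mean smaller bursts on the wire, which a cheap network 
chip may cope with better.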


> Do you mean the block size of the file system?

No, he is referring to how much data gets bundled into each network packet.


> I don't see how using CIFS successfully doesn't remove the question 
> of the server's performance and bring the entire issue back to the 
> mvpmc code that handles receipt of data via the MythTV protocol.

While the code may have problems, it does seem to work for the vast 
majority of users most of the time. So there must be something about 
your environment that is different.

Have you looked to see if the MTU is set the same on the MVP as on your 
server?

# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:...:28
           inet addr:192.168.0.242  Bcast:192.168.0.255  Mask:255.255.255.0
           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
           RX packets:1899738 errors:3924 dropped:0 overruns:3924 frame:0
           TX packets:840712 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:1000
           RX bytes:2710291352 (2.5 GiB)  TX bytes:0 (0.0 B)
           Interrupt:27 Base address:0xd300 DMA chan:1
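
If the values differ, you can lower the MTU on the server to match 
(assuming eth0 is the interface facing the MVP):

# ifconfig eth0 mtu 1500

or, with iproute2:

# ip link set dev eth0 mtu 1500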


This could also be related to TCP window scaling: CIFS may have enough 
acknowledgment points in its protocol to naturally throttle the 
connection, whereas a simpler protocol that depends on TCP flow control 
would have problems if something is breaking TCP window scaling, 
resulting in overruns.

See:
http://lwn.net/Articles/92727/

Ways to work around the problem:
http://wiki.archlinux.org/index.php/Configuring_Network#The_TCP_window_scaling_issue
(you can temporarily disable window scaling on your server)
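
On a Linux server that would be something like this (takes effect 
immediately and reverts on reboot):

# sysctl -w net.ipv4.tcp_window_scaling=0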

Background on TCP flow control:
http://www.linuxplanet.com/linuxplanet/tutorials/6539/1/

Linux TCP tuning (you should be able to experiment with some of these 
settings on the MVP side):
http://fasterdata.es.net/TCP-tuning/linux.html
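
For instance, assuming sysctl is available on the MVP's Linux image, 
you could try shrinking the maximum TCP receive buffer so the window it 
advertises to the server stays small (the values are just a starting 
point):

# sysctl -w net.core.rmem_max=65536
# sysctl -w net.ipv4.tcp_rmem="4096 65536 65536"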

  -Tom

