Ethernet MII still negotiating status?

2015-12-02 Thread Phillip Susi

I'm looking into a problem with the mediatomb package not starting up
correctly on recent Ubuntu releases. The service waits to start until it
gets a network-up event generated by network-manager, but what it
actually depends on is eth0 being up. n-m decides that "networking" is
up as soon as it sees no interface with a phy link that still has work
to do (which seems to mean negotiating a DHCP lease), so it tends to
declare networking "up" as soon as lo is configured, because eth0's phy
status is still down.

This seems to be pulled from the MII info for the link, which as far as
I can see only indicates up/down, rather than any intermediate state
like "something seems to be there, give me a second to negotiate the
LLC" or "I don't know yet, give me a second".  For my e1000e, this
negotiation takes about 10 seconds, by which time n-m has decided that
all plugged-in interfaces are configured and networking is "up".

Is there no way for an ethernet adapter to indicate that it does not
yet know for sure whether it is plugged in and is still trying to
negotiate a link, so that user space should wait a moment instead of
concluding that everything likely to ever come up is already
configured?
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [RFC 0/6] TCP socket splice

2006-09-22 Thread Phillip Susi
How is this different than just having the application mmap() the file 
and recv() into that buffer?
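For reference, a minimal sketch of that mmap()+recv() combination from user space. The file, sizes, and socketpair are all illustrative stand-ins for a real file and TCP connection; note this still makes one kernel-to-user copy inside recv(), which is what splice() avoids:

```python
import mmap
import os
import socket
import tempfile

# Map the output file and recv() straight into the mapping,
# skipping a separate application buffer.
size = 4096
fd, path = tempfile.mkstemp()
os.ftruncate(fd, size)
buf = mmap.mmap(fd, size)

# A socketpair stands in for a real TCP connection.
a, b = socket.socketpair()
a.sendall(b"x" * size)
a.close()

view = memoryview(buf)
received = 0
while received < size:
    # recv_into() writes directly into the file-backed mapping.
    n = b.recv_into(view[received:], size - received)
    if n == 0:
        break
    received += n
b.close()
```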


Ashwini Kulkarni wrote:

My name is Ashwini Kulkarni and I have been working at Intel Corporation for
the past 4 months as an engineering intern. I have been working on the 'TCP
socket splice' project with Chris Leech. This is a work-in-progress version
of the project with scope for further modifications.

TCP socket splicing:
It allows a TCP socket to be spliced to a file via a pipe buffer. First,
to splice data from a socket to a pipe buffer, up to 16 source pages are
pulled into the pipe buffer. Then, to splice data from the pipe buffer
to a file, those pages are migrated into the address space of the target
file. This takes place entirely within the kernel and thus results in
zero memory copies. It is the receive-side complement to sendfile(), but
unlike sendfile() it is possible to splice from a socket as well, not
just to a socket.

Current Method:
    +-----> Application Buffer -----+
    |                               |
 ___|_______________________________|___
    |                               |
    |  Receive or                   |  Write
    |  I/OAT DMA                    |
    |                               |
    |                               v
 Network                       File System
  Buffer                         Buffer
    ^                               |
    |                               |
 ___|_______________________________|___
    |  DMA                          |  DMA
    |                               |
    |           Hardware            |
    |                               v
   NIC                            SATA

In the current method, the packet is DMA'd from the NIC into the network
buffer. A read on the socket then copies the packet data from the
network buffer into the application buffer in user space. A write
operation moves the data from the application buffer to the file system
buffer, which is then DMA'd to the disk. Thus, the current method makes
one full copy of all the data through user space.

Using TCP socket splice:

    Application Control
              |
 _____________|__________________________
              |
              |   TCP socket splice
              |  +--------------------+
              |  |    Direct path     |
              v  |                    v
           Network               File System
            Buffer                 Buffer
              ^                       |
              |                       |
 _____________|_______________________|___
              |  DMA                  |  DMA
              |                       |
              |       Hardware        |
              |                       v
             NIC                    SATA

In this method, the objective is to use TCP socket splicing to create a
direct path in the kernel from the network buffer to the file system
buffer via a pipe buffer. The pages migrate from the network buffer
(which is associated with the socket) into the pipe buffer for an
optimized path. From the pipe buffer, the pages are then migrated into
the page cache of the output file's address space. This makes it
possible to build a LAN-to-file-system API that avoids the memcpy
operations in user space, creating a fast path from the network buffer
to the storage buffer.

Open Issues (currently being addressed):
There is a performance drop when transferring larger files (usually
above 65536 bytes), and the drop grows with the size of the file. Work
is in progress to identify the source of this issue.

We encourage the community to review our TCP socket splice project. Feedback
would be greatly appreciated.

--
Ashwini Kulkarni




Re: [3/4] kevent: AIO, aio_sendfile() implementation.

2006-07-26 Thread Phillip Susi

Christoph Hellwig wrote:

Networking and disk AIO have significantly different needs.

Therefore, I really don't see it as reasonable to expect
a merge of these two things.  It doesn't make any sense.


I'm not sure about that.  The current aio interface isn't exactly nice
for disk I/O either.  I'm more than happy to have a discussion about
that aspect.




I agree that a merger makes perfect sense, because aio and networking
have very similar needs.  In both cases, the caller hands the kernel a
buffer and wants the kernel to either fill it or consume it, and to be
able to do so asynchronously.  In both cases you also want to maximize
performance by taking advantage of zero-copy IO.


I wonder, though: why do you say the current aio interface isn't nice
for disk IO?  It seems to work rather nicely to me, and is much better
than the posix aio interface.




Re: [PATCH 0/2] NET: Accurate packet scheduling for ATM/ADSL

2006-06-14 Thread Phillip Susi

Jesper Dangaard Brouer wrote:

The Linux traffic control engine inaccurately calculates
transmission times for packets sent over ADSL links.  For
some packet sizes the error rises to over 50%.  This occurs
because ADSL uses ATM as its link-layer transport, and ATM
transmits packets in fixed-size 53-byte cells.



I could have sworn that DSL uses its own framing protocol, similar to
the frame/superframe structure of HDSL (T1) lines, and that over that
you can run ATM or ethernet.  Or is it typically ethernet -> ATM ->
HDSL?


In any case, why does the kernel care about the exact time that the IP 
packet has been received and reassembled on the headend?
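For what it's worth, the cell quantisation the patch description refers to is easy to sketch numerically. Assuming AAL5 framing (an 8-byte trailer, padded out to a whole number of 48-byte cell payloads, each carried in a 53-byte cell):

```python
import math

ATM_CELL = 53       # bytes on the wire per ATM cell
ATM_PAYLOAD = 48    # payload bytes per cell
AAL5_TRAILER = 8    # AAL5 appends an 8-byte trailer, then pads to a cell

def atm_wire_bytes(pdu_len: int) -> int:
    """Bytes actually transmitted for a PDU of pdu_len bytes over AAL5/ATM."""
    cells = math.ceil((pdu_len + AAL5_TRAILER) / ATM_PAYLOAD)
    return cells * ATM_CELL

def naive_error(pdu_len: int) -> float:
    """Relative error of assuming wire bytes == PDU bytes."""
    return (atm_wire_bytes(pdu_len) - pdu_len) / pdu_len

# A 41-byte PDU needs ceil(49/48) = 2 cells = 106 wire bytes,
# so the naive estimate is off by well over 50%.
```

This is where the "over 50% for some packet sizes" figure comes from: adding one byte past a cell boundary costs a whole extra 53-byte cell, which is most painful for small packets.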


