When sendfile was integrated into Solaris, the rcp and ftp daemons
were modified to use it. This has resulted in numerous problems over
the years, often having to do with locking the files. As a result,
several constraints were placed on the use of sendfile, such that the
caller is expected to prevent modification of the file for the
duration of the sendfile call. These constraints are not required
under the original semantics of the rcp and ftp daemons.
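
For anyone who hasn't looked at the interface, the sendfile path in
the daemons amounts to something like the sketch below. It is not the
actual daemon source (the helper name is made up); it is just meant to
show where the constraint bites: sendfile(3EXT), linked from
libsendfile, pulls the data straight from the file, so the caller is
on the hook for keeping the file stable while the loop runs.

#include <sys/types.h>
#include <sys/stat.h>
#include <sys/sendfile.h>
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>

/* Hypothetical helper, not the actual daemon code. */
static int
send_whole_file(int sock, const char *path)
{
	struct stat st;
	off_t off = 0;
	int fd;

	if ((fd = open(path, O_RDONLY)) == -1)
		return (-1);
	if (fstat(fd, &st) == -1) {
		(void) close(fd);
		return (-1);
	}

	/*
	 * sendfile(3EXT) advances off past the bytes it has sent.
	 * The file is assumed not to change while the loop runs.
	 */
	while (off < st.st_size) {
		if (sendfile(sock, fd, &off,
		    (size_t)(st.st_size - off)) == -1) {
			if (errno == EINTR)
				continue;
			(void) close(fd);
			return (-1);
		}
	}
	(void) close(fd);
	return (0);
}
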
My contention is that the use of the sendfile call in these two
programs constituted a change in the user-visible semantics of the
daemons. Luckily, this type of locking issue doesn't come up too
often, but it does often enough to be a nuisance. And for the most
part I have not seen a performance improvement, although my testing
has not been rigorous. This pretty much jibes with what Andrew is
saying.
So, my proposal is to convert these two programs back and remove the
usage of sendfile. Does anybody see any reason not to do this,
assuming that the performance is comparable? And does anybody have any
data to indicate that the performance is not comparable?
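
For comparison, what I have in mind as the replacement is just the
ordinary read/write loop, along these lines. Again, this is a sketch
rather than a proposed patch:

#include <unistd.h>
#include <errno.h>

/* Hypothetical helper: copy everything from in_fd to out_fd. */
static int
copy_fd(int in_fd, int out_fd)
{
	char buf[64 * 1024];
	ssize_t nread, nw;
	char *p;

	for (;;) {
		nread = read(in_fd, buf, sizeof (buf));
		if (nread == 0)
			return (0);	/* EOF */
		if (nread == -1) {
			if (errno == EINTR)
				continue;
			return (-1);
		}
		/* write() may be partial, so keep going until done */
		for (p = buf; nread > 0; p += nw, nread -= nw) {
			nw = write(out_fd, p, (size_t)nread);
			if (nw == -1) {
				if (errno == EINTR) {
					nw = 0;
					continue;
				}
				return (-1);
			}
		}
	}
}

Since the data goes through a private buffer, the transfer does not
require the file to stay unmodified, which is the older behavior.
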
Andrew Gallatin wrote:
Peter Memishian wrote:
>> The problem is that, due to what I consider to be hacks to work
>> around broken drivers, packets sent by sendfile are queued up and
>> freed later from a different context.
>
> This is *not* to work around broken drivers; it's to deal with the
> fact that freeing the mblks synchronously means that drivers cannot
> hold locks across freemsg(), which is not something we've ever
> documented as a constraint.

I thought the policy, or at least good common sense, was to avoid
holding locks across code that did allocations and frees.

> Further, the asynchronous logic has been improved and I believe it
> is of comparable performance now.

I haven't tested much since Crossbow hit. That reduced performance
so much that it's hard to measure other things. However, I find it
hard to believe that the performance could be made comparable with a
hack like this.

Any time you're queuing mblks at roughly 300K enqueue/dequeue
operations per second (assuming 4K pages and line-rate 10GbE), you're
going to see bad performance characteristics, especially cache
misses, which mblks are particularly prone to.
Drew
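
To spell out the freemsg() point a bit: if the mblks are freed
synchronously, then whatever free routine is attached to them
(presumably something hung off esballoc(9F) for the sendfile pages)
can run in the driver's own context, so the driver has to drop its
locks before freeing. A made-up fragment, not taken from any real
driver, of what that pattern looks like:

#include <sys/types.h>
#include <sys/ksynch.h>
#include <sys/stream.h>
#include <sys/strsun.h>

typedef struct tx_ring {		/* hypothetical ring structure */
	kmutex_t	tr_lock;
	mblk_t		*tr_done;	/* chain of completed mblks */
} tx_ring_t;

static void
tx_ring_reclaim(tx_ring_t *trp)
{
	mblk_t *mp;

	mutex_enter(&trp->tr_lock);
	mp = trp->tr_done;		/* detach the completed chain */
	trp->tr_done = NULL;
	mutex_exit(&trp->tr_lock);	/* drop the lock ... */

	if (mp != NULL)
		freemsgchain(mp);	/* ... before the free routines run */
}

For what it's worth, Drew's 300K figure is just line rate over the
page size: 10 Gb/s is about 1.25 GB/s, which works out to roughly
305,000 4K pages per second.
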
--
blu
It's bad civic hygiene to build technologies that could someday be
used to facilitate a police state. - Bruce Schneier
----------------------------------------------------------------------
Brian Utterback - Solaris RPE, Sun Microsystems, Inc.
Ph:877-259-7345, Em:brian.utterback-at-ess-you-enn-dot-kom
_______________________________________________
networking-discuss mailing list
[email protected]