:In my view, the problem can be described like this.
:
:Some applications need to process data from their VA space on some
:devices. If the data is going to/from a file, it is perfectly fine
:to copy it into kernel buffers, since the kernel does caching and
:improves disk I/O performance. However, there are cases where the
:kernel has no reason to handle the data itself. For example, I have
:an encryption/compression processor on a PCI board. For each
:operation, this processor needs two separate data buffers and
:performs busmaster DMA. The user program is supposed to prepare the
:buffers and communicate their location to the kernel-mode driver via
:an IOCTL.
:
:Which is more efficient - copying the data to/from locked kernel
:buffers, or locking the user buffers "in place" and processing them
:there? (In my case I don't even need to _remap_ the buffers; I only
:need their physical addresses.)
:
:I'd prefer the latter, but I don't have enough FreeBSD knowledge to
:be sure I'm right. There may be principles of this O/S that I don't
:currently see and that I would be violating by doing this.
:
:It would be nice if somebody could give an analysis of the problem.
:
:Stan

    Well, all the system buffer paradigm does is wire the pages and
    associate them with a struct buf.  You do not have to map the pages
    into KVM.  It also usually write-protects pages in user space for
    the duration of the I/O.  Even if the pages are mapped into KVM,
    the overhead is virtually nil if you do not actually touch the associated
    KVM.  I don't think you would notice the difference between using 
    the existing buffer code and rolling something custom.

                                        -Matt
                                        Matthew Dillon 
                                        <dil...@backplane.com>
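
For concreteness, here is a minimal sketch of the "lock in place" approach
Stan asks about, using vslock()/vsunlock() and pmap_extract() rather than
the struct buf machinery Matt describes. The struct dc_sg segment type and
dc_wire_user_buf() are hypothetical names invented for the example, and the
exact signatures of vslock() and pmap_extract() have varied between FreeBSD
releases, so treat this as an illustration of the idea, not a drop-in driver:

    /*
     * Illustrative sketch only.  struct dc_sg and dc_wire_user_buf() are
     * hypothetical; vslock()/vsunlock() and pmap_extract() exist in
     * FreeBSD but their exact signatures differ between releases.
     *
     * Idea: wire the user buffer in place, then walk it page by page and
     * collect physical addresses for the board's scatter/gather list.
     * No mapping into KVM is created.
     */
    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/proc.h>
    #include <sys/systm.h>
    #include <vm/vm.h>
    #include <vm/pmap.h>
    #include <vm/vm_extern.h>       /* vslock(), vsunlock() */
    #include <vm/vm_map.h>          /* vmspace_pmap() */

    struct dc_sg {                  /* one DMA segment for the board */
            vm_paddr_t      paddr;
            size_t          len;
    };

    static int
    dc_wire_user_buf(void *uaddr, size_t len, struct dc_sg *sg, int maxsg)
    {
            pmap_t pmap = vmspace_pmap(curproc->p_vmspace);
            vm_offset_t va, end;
            int nseg = 0;

            /* Wire (lock) the user pages in place. */
            if (vslock(uaddr, len) != 0)
                    return (-1);

            end = (vm_offset_t)uaddr + len;
            for (va = (vm_offset_t)uaddr; va < end;
                 va = trunc_page(va) + PAGE_SIZE) {
                    size_t chunk = trunc_page(va) + PAGE_SIZE - va;

                    if (chunk > end - va)
                            chunk = end - va;
                    if (nseg == maxsg) {            /* too fragmented */
                            vsunlock(uaddr, len);
                            return (-1);
                    }
                    sg[nseg].paddr = pmap_extract(pmap, va);
                    sg[nseg].len = chunk;
                    nseg++;
            }
            /*
             * Caller programs the board from sg[], waits for the DMA to
             * complete, then calls vsunlock(uaddr, len).
             */
            return (nseg);
    }

The trade-off Matt points out still applies: the existing buffer code does
essentially the same wiring for you, so a custom route like this mainly buys
the freedom to hand raw physical addresses straight to the board.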

