> Given that the system encourages one to perceive files as having
> arbitrary semantics (as opposed to having regular sequential file
> semantics), it would make sense (to me) for reads and writes at
> arbitrary offsets to have arbitrary semantics as well -- that's,
> after all, what offset (kind of) does on a regular file, too,
> although in a rather trivial way.

the nearest i've seen to this (abusing offset semantics)
in the system is in the usb device. from usb(3):

: S is the size of the DMA operations on the device (i.e., the
: minimum useful size for reads and writes), b is the number
: of bytes currently buffered for input or output, and o and t
: should be interpreted to mean that byte offset o was/will be
: reached at time t (nanoseconds since the epoch).  The data
: can be used to map file offsets to points in time.  To play
: or record samples exactly at some predetermined time, use o
: and t with the sampling rate to calculate the offset to seek
: to.

this works ok if you're always getting exactly the same number
of bytes per second, but wouldn't if you were streaming variable bit-rate
data.
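to make the quoted recipe concrete, here's a sketch of the arithmetic
for the constant-rate case (the function name and parameters are my
own invention, not part of any real interface): seek to the offset
this returns, then write your samples.

```c
/* sketch of the offset arithmetic usb(3) describes, assuming a
 * constant-rate stream.
 *
 * o, t:  the pair from the status file -- byte offset o was/will
 *        be reached at time t (nanoseconds since the epoch).
 * when:  the time (ns since the epoch) a sample should play at.
 * rate:  sample frames per second.
 * framebytes: bytes per sample frame.
 */
long long
timetooffset(long long o, long long t, long long when,
	int rate, int framebytes)
{
	long long frames;

	/* sample frames between t and the target time */
	frames = (when - t) * rate / 1000000000LL;
	return o + frames * framebytes;
}
```

e.g. at 44100Hz, 16-bit stereo (framebytes = 4), a target one second
past t lands 176400 bytes past o.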

the advantage of this over using a control file is that there's no
additional latency or syscall overhead in specifying an exact sample
time, which could make a significant difference, i guess, particularly
if you were using the device over a network.

is this an abuse? i'm not sure, though rob pike definitely seemed to think
it was when i mentioned this to him at usenix one time.
