James Carlson wrote:
> Darren Reed writes:
>> Picking up a packet from below is fine, and it can be put into a
>> buffer attached to the q_ptr ok.
> putq (perhaps with noenable if necessary) is the usual way to do this,
> but you're also free to invent your own mechanisms if you must.
>> Copying the data into the buffer would need to be protected by
>> holding some lock so that this is synchronised with the reader
>> from above.
> putq handles these issues. If you do it yourself, you'll need to
> reimplement this.
Understood. Using putq is obviously the answer here.
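For the archive, here is a rough sketch of the putq approach on the
read side. This is kernel-only STREAMS code, the module and function
names are hypothetical, and it won't compile outside the kernel:

```c
#include <sys/stream.h>
#include <sys/ddi.h>

/*
 * Read-side put routine for a hypothetical bpfmod: queue the
 * message with putq() and let the framework schedule the service
 * routine.  putq/getq serialise access to the queue, so no
 * private lock is needed around the buffer handoff.
 */
static int
bpfmod_rput(queue_t *q, mblk_t *mp)
{
	putq(q, mp);	/* enables the srv routine unless noenable'd */
	return (0);
}

/*
 * If open() called noenable(q), messages accumulate on the queue
 * until the module explicitly calls qenable(q) -- e.g. when the
 * hold buffer fills or a timeout fires.
 */
```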
>> For reference, in the immediate delivery mode, the buffer is
>> checked against a BPF filter before being passed along to
>> the next module.
> OK.
>> With delayed wait we don't want to just call "putnext" to
>> deliver the message onwards and upwards; rather we want
>> to either:
>> 1) tell the system that a buffer is waiting to be read (with
>> 0 or more bytes ready) so that select/poll wake up;
> I'm confused. You do that implicitly by calling putnext. The stream
> head will then enable the waiter(s) if necessary.
> Why would you want to wake up the reader without actually delivering
> any data? Why does it need to be divorced from putnext?
See below.
>> Clues anyone?
> Can you provide more details on _exactly_ what sort of behavior you
> desire?
When asking bpfmod for packet data, there are two boundary
conditions that I am interested in.
The first is when it fills its local buffer (say 64k). This is easy
enough to handle.
The second is when a certain amount of time elapses. I want
this because I don't want to wait for a potentially unbounded
period of time. I want to receive data from the NIC, and for
efficiency I'd like to receive it buffered, but I don't want the
buffering to delay delivery for too long, as that may impact my
ability to process packet data in a timely fashion.
To put it another way, a typical requirement might be to display
a list of currently open TCP sessions on the network, with the
display no more than 1 second out of date. If there is one
TCP handshake every 10 seconds and only a small number of
associated packets in that time, I don't want to wait 30 seconds
for the 64k buffer to fill.
The BPF driver found in BSD systems today lets you set both a
maximum buffer size and a timeout value (in milliseconds) after
which the waiter is woken up regardless.
I imagine the trick might be to schedule a qtimeout() when the
timeout gets set, and to have the timeout handler successively
reschedule itself. When you say that the queue service procedure
will be enabled for the bulk putnext, what exactly happens here
with respect to the timeout callback and the service procedure?
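A guess at how that might look (kernel-only sketch; struct bpf_state
and the field names are hypothetical, while qtimeout(9F), qenable(9F)
and drv_usectohz(9F) are the documented interfaces):

```c
static void
bpfmod_timer(void *arg)
{
	queue_t *q = arg;
	struct bpf_state *sp = q->q_ptr;

	/*
	 * Run the service procedure now so whatever is in the hold
	 * buffer gets putnext'd even if it isn't full.  A qtimeout
	 * callback runs synchronised with the queue, so it doesn't
	 * race the service procedure itself.
	 */
	qenable(q);

	/* Reschedule ourselves for the next interval. */
	sp->timer_id = qtimeout(q, bpfmod_timer, q,
	    drv_usectohz(sp->timeout_ms * 1000));
}
```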
I mentioned select/poll because calling these functions from an
application is the usual way to handle devices like BPF in an
event-driven program.
Darren
_______________________________________________
networking-discuss mailing list
[email protected]