On Wed, 22 Nov 2006, Pete Zaitcev wrote:

> On Wed, 22 Nov 2006 11:45:52 +0100, Paolo Abeni <[EMAIL PROTECTED]> wrote:
> 
> BTW, I'm putting the cc: back. See, Marcel already replied. We ought
> to keep people in the loop. Although generally others are interested
> in patches only, it's good to have a record of the design process.

Although I haven't tried to follow this in any detail, it is nevertheless 
interesting to see what you're doing -- except that I can't follow a lot 
of the discussion, quite possibly as a result of all the off-list history.  
Examples below.

Don't feel any need to reply or explain -- this is meant mainly to
illustrate that eavesdropping sometimes doesn't provide much useful
information...


> > One possible solution is to add another ioctl operation to remove a
> > specified number of records (header+data) from the buffer. The user
> > would call this ioctl after processing at least one URB.

How would the user know how many records to remove?  There might be other 
unexpected URBs in among the expected ones; removing them would be a 
mistake.
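
(The only safe scheme I can see is for the reader to count exactly the
records it has walked over and pass that count back -- roughly the
sketch below, where MON_IOCH_FLUSH and the record-walking helpers are
names I am making up.)

	int consumed = 0;
	struct record_hdr *hdr;

	/* Walk whatever is currently visible, counting as we go. */
	while ((hdr = next_record(buf, consumed)) != NULL) {
		process_record(hdr);	/* expected or not, we saw it */
		consumed++;
	}
	/* Flush only what we actually consumed. */
	if (consumed > 0)
		ioctl(fd, MON_IOCH_FLUSH, consumed);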

> Right, this is what I was going to do. It's part of what I call
> "mfetch". The mfetch takes this struct:

What on earth is "mfetch"?  Is that a made-up name for this combination of 
reading and flushing records?

> struct mfetch_info {
>       unsigned int *offvec;   /* Vector of events fetched */
>       int nfetch;             /* Number of events to fetch (out: fetched) */
>       int nflush;             /* Number of events to flush */
> };
> 
> The ioctl works like this:
>  - Drop up to nflush events
>  - Wait if !O_NONBLOCK and buffer is empty
>  - Extract up to nfetch offsets, store them in offvec, and return
>    how many were fetched in nfetch.
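
If I read that right, a consumer drives it with a loop something like
the sketch below.  This is only my reconstruction: MON_IOCX_MFETCH is
a name I am guessing at, and "base" stands for the start of the
mmap'ed buffer.

	unsigned int offvec[64];
	struct mfetch_info mfi;
	int i, consumed = 0;

	for (;;) {
		mfi.offvec = offvec;
		mfi.nfetch = 64;		/* batch size */
		mfi.nflush = consumed;		/* drop the previous batch */
		if (ioctl(fd, MON_IOCX_MFETCH, &mfi) < 0)
			break;			/* e.g. EAGAIN with O_NONBLOCK */
		for (i = 0; i < mfi.nfetch; i++)
			process_event(base + offvec[i]);
		consumed = mfi.nfetch;		/* kernel wrote back the count */
	}

(Note that the flush count has to come from what the reader itself
consumed, which is the same question I raised above about how many
records to remove.)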

Why make this a single operation?  Why not have one operation to drop 
nflush events and another operation to do everything else?

> The idea here is that polling without any syscalls is a no-goal,

"no-goal"?  Does that mean nobody would ever want to do it so there's no 
point implementing it?

> considering systemic overhead elsewhere. By getting a bunch of
> mmap offsets, applications use a "fraction of syscall per event"
> model and do not need to think about wrapped buffers

Why should applications have to think about wrapped buffers in any case?

> (they see
> the filler packets, but it's a very small overhead).

What are "filler packets"?

> I'm wondering whether I should add a "force O_NONBLOCK" flag somehow,
> in case someone wants to flush all events. If you find an application
> which can make an intelligent use of it, please let me know.
> 
> I am going to write some test code and validate if this works.
> 
> > 		/*
> > 		 * Remove args events or fillers from the buffer. If args is
> > 		 * greater than the number of events present in the buffer,
> > 		 * fail with an error and leave the buffer unmodified.
> > 		 */
> > 		if (rp->cnt == 0) {
> > 			rp->b_cnt = cnt;
> > 			rp->b_out = out;
> > 			ret = -EINVAL;
> > 			break;
> > 		}
> 
> Why is that? I thought it might be useful to start with INT_MAX
> events to flush.

Does that mean you start with INT_MAX made-up events in the buffer just so 
that the user can flush them?  That doesn't make any sense...
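
(Unless the intent is for the kernel to clamp the count instead of
failing, in which case INT_MAX would just mean "flush everything
that's there".  Kernel-side that would be something like the lines
below; the names follow your snippet, and drop_event() is invented.)

	/* Clamp instead of returning -EINVAL. */
	if (nflush > rp->cnt)
		nflush = rp->cnt;
	while (nflush-- > 0)
		drop_event(rp);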

Alan Stern

