Hello all,

I'm implementing a Linux device driver for a piece of hardware I'm working on. It's important to me that it behave in a classic UNIX way, whatever exactly that means.


Now the dilemma: suppose the read() method is called with a requested byte count of 512. The driver checks its internal buffer and discovers that it can either supply 10 bytes right away and return, or block until it has all 512 bytes and return only once the request has been satisfied in full. The hardware data source is a stream, with no guarantee of whether or when the data will arrive.
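For concreteness, here's a minimal sketch of what the first option (block only while the buffer is completely empty, then return whatever is there) might look like in the driver's read() method. The mydev structure and the mydev_bytes_available()/mydev_copy_out() helpers are invented for this sketch; only wait_event_interruptible() is the real kernel API:

#include <linux/fs.h>
#include <linux/wait.h>
#include <linux/kernel.h>

/* Hypothetical per-device state; the names are made up for illustration. */
struct mydev {
	wait_queue_head_t readq;   /* woken whenever the hardware delivers data */
	/* ... buffer, spinlock, etc. ... */
};

/* Assumed helpers, not real APIs: how many bytes are buffered, and
 * copy up to n of them to userspace, returning the number copied. */
static size_t mydev_bytes_available(struct mydev *dev);
static ssize_t mydev_copy_out(struct mydev *dev, char __user *buf, size_t n);

static ssize_t mydev_read(struct file *filp, char __user *buf,
			  size_t count, loff_t *ppos)
{
	struct mydev *dev = filp->private_data;
	size_t avail;

	/* Option 1: block only while there is nothing at all, then
	 * return a possibly short count -- like a pipe or a tty. */
	if (wait_event_interruptible(dev->readq,
				     mydev_bytes_available(dev) > 0))
		return -ERESTARTSYS;	/* a signal interrupted the wait */

	avail = mydev_bytes_available(dev);
	if (avail > count)
		avail = count;
	return mydev_copy_out(dev, buf, avail);
}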


Or, the driver can try waiting a little (what is "a little"?), then time out and return with whatever it has got (à la TCP/IP).
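A sketch of that timeout variant, for comparison, using the same invented struct and helpers as above; the 100 ms figure is an arbitrary placeholder, and msecs_to_jiffies() comes from <linux/jiffies.h>. wait_event_interruptible_timeout() returns the remaining jiffies if the condition came true, 0 on timeout, or a negative value if a signal arrived:

static ssize_t mydev_read_with_timeout(struct file *filp, char __user *buf,
				       size_t count, loff_t *ppos)
{
	struct mydev *dev = filp->private_data;
	size_t avail;
	long left;

	/* Wait until the full request can be satisfied, but give up
	 * after 100 ms (an arbitrary choice) and return what we have. */
	left = wait_event_interruptible_timeout(dev->readq,
			mydev_bytes_available(dev) >= count,
			msecs_to_jiffies(100));
	if (left < 0)
		return -ERESTARTSYS;	/* interrupted by a signal */

	avail = mydev_bytes_available(dev);
	if (avail > count)
		avail = count;
	return mydev_copy_out(dev, buf, avail);	/* possibly zero bytes */
}

One catch with this approach: if the timeout expires while the buffer is still empty, the function returns 0, which userspace will normally interpret as end-of-file. That may or may not be what you want.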


I suppose all three possibilities are legal. The question is what will work most naturally.


The overall package I'm working on is somewhat general-purpose, and it's the package's user who decides how the hardware input behaves. I can't know anything about the data flow in advance, and still I want reads from the relevant file descriptor to behave as one would expect. For example, a user may choose to read from the file descriptor with scanf or fgets, and would expect these to return as soon as sufficient data has been fed into the hardware side. On the other hand, if data trickles into the hardware slowly, read()s will return one byte at a time, which may not be desirable either.
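For what it's worth, stdio tends to smooth this over on its own: as far as I recall, glibc's fgets() keeps calling read() until it has seen a newline or filled its buffer, so short reads from the driver mostly just cost extra syscalls. And a caller that uses raw read() and really needs a fixed-size record normally loops over short reads anyway, along these lines (read_all is just an illustrative name):

#include <errno.h>
#include <unistd.h>

/* Read up to 'count' bytes, looping over short reads; returns the
 * number of bytes read, which is less than 'count' only on EOF,
 * or -1 on error. */
ssize_t read_all(int fd, void *buf, size_t count)
{
	size_t done = 0;

	while (done < count) {
		ssize_t n = read(fd, (char *)buf + done, count - done);
		if (n < 0) {
			if (errno == EINTR)
				continue;	/* retry after a signal */
			return -1;		/* real error */
		}
		if (n == 0)
			break;			/* EOF: return what we have */
		done += n;
	}
	return done;
}

If callers can reasonably be expected to cope like this, returning short reads from the driver seems like the least surprising choice.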


Any insights?


Thanks,

   Eli


--
Web: http://www.billauer.co.il

