On Friday 13 November 2009 16:35, McKown, John wrote:
>Thanks for the reply. I'm very new to all this, so I appreciate the thoughts
> of those who are steeped in the "whys" of UNIX. Actually, my original
> solution was to use an environment variable to list the files to be read
> (didn't think of the seeking around in the set of files - yuck!). But I
> guess something like:
>
>command --input1=file1:file2:file3 --input2=otherfile:andmore regular.way
>
>would be more UNIXy to implement in my code. This would assume that for some
> reason, I must know of multiple files which contain compatible information
> and keep them separate from other sets of files with differently compatible
> information. That, in itself, may not be very UNIXy.

That's correct: having the kernel know anything about the contents of files is 
A Bad Thing (tm) in the UNIX world.  Interpreting file contents is what 
user-space processes are for.  To the kernel, files are just containers for 
bytes.
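
To illustrate, a minimal user-space sketch of the colon-list idea might look 
like the following (the --input1 option and the cat_list() helper are just 
placeholder names, not anyone's actual interface).  It splits the list and 
reads the files back-to-back as one byte stream; the kernel never knows the 
files are related:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Read every file in a colon-separated list as one continuous
     * byte stream, copying it to stdout.  Returns 0 on success. */
    static int cat_list(const char *list)
    {
        char buf[8192];
        size_t n;
        char *copy, *name;
        FILE *fp;

        copy = strdup(list);          /* strtok() modifies its argument */
        if (copy == NULL)
            return -1;

        for (name = strtok(copy, ":"); name != NULL;
             name = strtok(NULL, ":")) {
            fp = fopen(name, "rb");
            if (fp == NULL) {
                perror(name);
                free(copy);
                return -1;
            }
            while ((n = fread(buf, 1, sizeof buf, fp)) > 0)
                fwrite(buf, 1, n, stdout);
            fclose(fp);
        }
        free(copy);
        return 0;
    }

    /* Hypothetical usage:  prog --input1=file1:file2:file3 */
    int main(int argc, char **argv)
    {
        int i;

        for (i = 1; i < argc; i++)
            if (strncmp(argv[i], "--input1=", 9) == 0)
                return cat_list(argv[i] + 9) ? EXIT_FAILURE : EXIT_SUCCESS;
        fprintf(stderr, "usage: %s --input1=file:file...\n", argv[0]);
        return EXIT_FAILURE;
    }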

The exception is directory entries: one could argue they're known only to the 
filesystem layer so it's OK, but they are still file contents that the kernel 
itself interprets.  That's why I'm thinking of a new file type, the 
"meta-file".

I thought the original poster rejected the idea of named pipes out of concern 
about the I/O overhead?  Named pipes are the UNIX-style solution to this 
problem, but can they match the performance of a concatenated dataset, or is 
the extra copy through the pipe buffer significant in this context?
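
For comparison, here is a minimal sketch of the FIFO approach (the FIFO path 
and file names are whatever the caller supplies): one process feeds several 
files into a named pipe while another reads them back as a single stream.  
The overhead in question is the extra copy through the pipe buffer, plus the 
scheduling between the two processes:

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Writer: copy each named file into the FIFO, back to back. */
    static void feed(const char *fifo, char **files, int nfiles)
    {
        char buf[8192];
        ssize_t n;
        int i, in, out;

        out = open(fifo, O_WRONLY);   /* blocks until a reader opens */
        for (i = 0; i < nfiles; i++) {
            in = open(files[i], O_RDONLY);
            if (in < 0) {
                perror(files[i]);
                continue;
            }
            while ((n = read(in, buf, sizeof buf)) > 0)
                write(out, buf, n);
            close(in);
        }
        close(out);
    }

    int main(int argc, char **argv)
    {
        char buf[8192];
        ssize_t n;
        int in;

        if (argc < 3) {
            fprintf(stderr, "usage: %s fifo file...\n", argv[0]);
            return EXIT_FAILURE;
        }
        if (mkfifo(argv[1], 0600) < 0) {
            perror("mkfifo");
            return EXIT_FAILURE;
        }
        if (fork() == 0) {            /* child feeds the FIFO ... */
            feed(argv[1], argv + 2, argc - 2);
            _exit(0);
        }
        in = open(argv[1], O_RDONLY); /* ... parent reads one stream */
        if (in < 0) {
            perror(argv[1]);
            return EXIT_FAILURE;
        }
        while ((n = read(in, buf, sizeof buf)) > 0)
            write(STDOUT_FILENO, buf, n);
        close(in);
        wait(NULL);
        unlink(argv[1]);
        return EXIT_SUCCESS;
    }

Whether that extra copy matters compared to the disk I/O is probably 
something only a measurement would settle.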
        - MacK.
-----
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street · Newton, MA 02466-2272 · USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com  
