I've developed a script, similar to the following, for each of the sites 
we're indexing.  The scripts appear to work as intended; I have a couple of 
questions about controlling the execution of multiple instances. 

# -a: use the alternate (.work) databases; -s: print stats; -v: verbose
../bin/htdig    -asvc   pbbr.conf    # dig using this client's config (-c)
../bin/htmerge  -asvc   pbbr.conf    # sort this client's own databases
../bin/htmerge  -asvm   pbbr.conf    # merge (-m) them into the common databases

All files altered in the first two steps are uniquely prefixed by the 
client id (pbbr in this case).  The last step, however, merges into 
db.docdb.work, etc.  
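
For reference, the per-client prefixing comes from each client's conf file -- 
something like this line in pbbr.conf (paraphrasing from memory, so the exact 
attribute spelling may be off):

database_base:  ${database_dir}/pbbr    # per-client prefix for the db files

so the first two steps read and write only the pbbr.* files.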

I would presume that any number of instances of the first two steps could 
run concurrently, since no common output files are involved.  
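
For instance, I'd like to be able to launch the per-site steps along these 
lines (the other client ids are invented for illustration):

for site in pbbr acme widget
do
    ( ../bin/htdig    -asvc   $site.conf
      ../bin/htmerge  -asvc   $site.conf ) &    # each client's steps in the background
done
wait    # all per-client digs and sorts finished; final merges still to run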

My specific question: what happens if more than one instance of the third 
step (the final htmerge) attempts to run at the same time?  Unless 
"something" prevents concurrent write access, the files are quite likely to 
be corrupted.
Do I have to schedule the merges one at a time, or will Unix enforce this 
on its own?   
