Got a question for you guys out there. I currently have a list of records that I need to search for across multiple databases. I was wondering whether Perl would be faster to process this, or whether I should stick with my .ksh script that uses SQL*Plus. Here's how the current setup works:
1. cat the input file and read a record number.
2. sqlplus into db1 and run a SELECT to find which database the account is located in. (This database just contains basic info on where each account is actually stored.)
3. Exit SQL*Plus.
4. Open another SQL*Plus session to the database found in the previous SELECT.
5. Select out the data needed and write it to a file.
6. Exit SQL*Plus.
7. Repeat the loop for the next record.

I needed to process about 8000 records today. It would get about 30% of the way through and then the process would just stop. I don't know if it's a buffer issue (that was one suggestion), or if Perl would be better because it would connect directly to the DB. Got any ideas?
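For what it's worth, one thing worth trying before rewriting in Perl: the loop above starts two SQL*Plus sessions per record (16,000 process startups for 8000 records), and that per-session overhead alone can make a run crawl or stall. A common workaround is to generate one SQL script from the input file and run the whole lookup stage in a single session. Below is a minimal sketch of that idea; the file names, the `account_directory` table, and its column names are all assumptions standing in for your real schema:

```shell
#!/bin/sh
# Sketch: batch all lookups into ONE SQL*Plus session instead of
# opening a new session per record.
# INPUT, SQLFILE, table and column names are placeholders.

INPUT=accounts.txt
SQLFILE=lookup.sql

# Sample input; in practice this is your existing record-number file.
printf 'A1001\nA1002\n' > "$INPUT"

# Build a single SQL script containing a lookup for every record.
{
  echo "set heading off feedback off pagesize 0"
  echo "spool locations.lst"
  while read acct; do
    echo "select account_id, db_name from account_directory where account_id = '$acct';"
  done < "$INPUT"
  echo "spool off"
  echo "exit"
} > "$SQLFILE"

# Then run the whole batch in one session (credentials are placeholders):
# sqlplus -s user/password@db1 @"$SQLFILE"
```

The second stage could be handled the same way: sort the spooled `locations.lst` by target database, then run one SQL*Plus session per database with a generated script, rather than one per account. Perl with DBI would avoid the session startups entirely, but batching may get you most of the win without a rewrite.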