John, thank you very much for your quick response.
Please let me know which tools I can use instead of Perl.

Thank you very much,
Regards,
Mahesh

On Fri, Aug 27, 2010 at 1:54 AM, John R Pierce <pie...@hogranch.com> wrote:

> On 08/26/10 10:59 AM, Tim Bunce forwarded:
>
>> Hope you are doing well. I am a software programmer struggling to solve
>> problems in my code. Please find below a summary of my problem:
>>
>> 1. I am migrating data from one MSSQL DB to another MSSQL DB.
>> 2. I fetch data from the source DB, manipulate it, and insert it into
>>    the target DB.
>> 3. The data manipulation involves some complex business logic.
>>    Note: the SQL statements are prepared. (More than 100 tables are
>>    involved.)
>>    a. Data has to be queried from tables X, Y, and Z, and the sequence
>>       changes according to the business logic.
>>    b. One row is fetched from the source and inserted into the target
>>       at a time.
>> 4. The data models of the two DBs are different.
>> 5. I am using Perl with DBI and DBD::ODBC to migrate the data.
>> 6. I am only able to transfer 100,000 records in 60 minutes.
>>
>> The performance is very slow and I am struggling to overcome the
>> problem. Can you please shed some light on it? In case of any queries,
>> please get back to me.
>>
>> Thank you very much,
>> Regards,
>> Mahesh
>
> Doing a single insert per transaction will greatly slow things down, as
> it forces a disk commit on each transaction. If you can organize your
> code so you batch some number of inserts per commit, you will likely
> speed things up.
>
> However, what Brian said was on my mind too: Microsoft SQL Server
> provides a rather powerful tool for this exact sort of job. You can
> implement business logic within the data extraction system using any
> .NET language. I'd run this on the source server if you're reading more
> than you're writing, and on the destination server if you're writing
> more than you're reading.
>
> Without a whole lot more specific information, it's impossible to give
> any better advice. "100+ tables" and "the sequence of queries changes
> according to the business logic" leave lots of gaps. Is the source
> database reasonably well normalized, and do you use proper joins to
> query it? Is the destination database normalized? Or do you just read
> everything into your Perl code, mash it about, then insert into your
> target tables?
>
> ODBC is not the fastest way to access MS SQL Server, either, as it adds
> additional translation and abstraction. ADO is generally quite a bit
> faster.
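
For illustration, here is a minimal sketch of the batched-commit
migration loop John describes, using DBI with DBD::ODBC. The DSNs,
credentials, table, and column names are hypothetical, and the business
logic is reduced to a placeholder comment:

    use strict;
    use warnings;
    use DBI;

    # Hypothetical DSNs and credentials.
    my $src = DBI->connect('dbi:ODBC:DSN=SourceDB', 'user', 'pass',
                           { RaiseError => 1 });
    my $dst = DBI->connect('dbi:ODBC:DSN=TargetDB', 'user', 'pass',
                           { RaiseError => 1, AutoCommit => 0 }); # manual commits

    my $sel = $src->prepare('SELECT id, name, amount FROM src_table');
    my $ins = $dst->prepare(
        'INSERT INTO dst_table (id, name, amount) VALUES (?, ?, ?)');

    $sel->execute;
    my $rows = 0;
    while (my @row = $sel->fetchrow_array) {
        # ... apply the business-logic transformation to @row here ...
        $ins->execute(@row);
        $dst->commit if ++$rows % 1000 == 0;  # one commit per 1000 inserts
    }
    $dst->commit;                             # flush the final partial batch
    $dst->disconnect;
    $src->disconnect;

The batch size is a tuning knob; somewhere between a few hundred and a
few thousand rows per commit is a common starting point.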
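If the transformation can be applied a chunk at a time, DBI's
execute_array can also cut per-row call overhead on the insert side. By
default DBI emulates it with repeated execute calls, but drivers that
support native array binding can make it substantially faster. Again,
the names are hypothetical and the chunk size is arbitrary:

    # Accumulate a chunk of transformed rows into per-column arrays,
    # then insert them in one call.
    my (@ids, @names, @amounts);
    while (my @row = $sel->fetchrow_array) {
        # ... apply the business-logic transformation to @row here ...
        push @ids,     $row[0];
        push @names,   $row[1];
        push @amounts, $row[2];
        if (@ids == 1000) {
            $ins->execute_array({ ArrayTupleStatus => \my @status },
                                \@ids, \@names, \@amounts);
            $dst->commit;
            @ids = @names = @amounts = ();
        }
    }
    if (@ids) {   # flush the final partial chunk
        $ins->execute_array({ ArrayTupleStatus => \my @status },
                            \@ids, \@names, \@amounts);
        $dst->commit;
    }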
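On John's last point: from Perl, one way to try ADO instead of ODBC
without rewriting the migration logic is the DBD::ADO driver, which is
Windows-only (it works through Win32::OLE). Whether it is actually
faster for this workload would need to be measured; the provider,
server, and database names below are placeholders:

    use DBI;

    # DBD::ADO takes an ADO/OLE DB connection string after "dbi:ADO:".
    my $dbh = DBI->connect(
        'dbi:ADO:Provider=SQLOLEDB;Data Source=myserver;'
      . 'Initial Catalog=mydb;Integrated Security=SSPI',
        '', '',
        { RaiseError => 1 },
    );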