On 4/4/06, Nathan Kurz <[EMAIL PROTECTED]> wrote:
>
> > >> 3. The performance for inserts is really bad. Around 40k entries
> > >>    takes a few hours. What might I be doing wrong? I do a commit
> > >>    after all the inserts.
> > >
> > > A few things to help with speed:
> > >
> > > 1. Use DBI's prepared statements; e.g., one prepare() and many execute().
> >
> >  Yes, this is what I do.
> > >
> > > 2. Don't commit for each row inserted but batch them so, say, you
> > >    commit once per 1000 rows.
> > >
> >  Unfortunately, I cannot commit till I do all the inserts.
>
> That doesn't seem right for speed.  In addition to using "commit", are
> you beginning a transaction with "begin"?  Are your inserts
> particularly complex or large?  More details about what you are doing
> would probably help, since something odd is happening here.
> Maybe you could post a tiny test program along with the time it takes?
>
> --nate
>
I don't begin the transaction with begin. My assumption was that the first
insert operation would automatically begin a transaction.
My inserts are fairly simple; two of the columns are long strings of length
255.


my @values = ($task_info_gid, $file_type_gid, $extracted_path, $media_path,
              $size, $ctime, $mtime, $job_id, $is_in_du);
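
For reference, DBI's default is AutoCommit => 1, in which case every
execute() commits its own transaction and a final commit() does nothing
extra; an explicit begin_work (or connecting with AutoCommit => 0) keeps
all the inserts in one transaction, which is usually what makes bulk
inserts fast. Here is a minimal sketch along those lines, using a
hypothetical SQLite file test.db, a made-up files table, and a
placeholder get_rows() data source (none of these names are from the
original code):

use strict;
use warnings;
use DBI;

# Hypothetical database file and table, for illustration only.
my $dbh = DBI->connect('dbi:SQLite:dbname=test.db', '', '',
                       { RaiseError => 1, AutoCommit => 1 });
$dbh->do('CREATE TABLE IF NOT EXISTS files (path TEXT, size INTEGER)');

# One prepare(), many execute() -- as in the code above.
my $sth = $dbh->prepare('INSERT INTO files (path, size) VALUES (?, ?)');

$dbh->begin_work;               # explicit transaction: no per-row commits
for my $row (get_rows()) {      # get_rows() is a stand-in for the real source
    $sth->execute(@$row);
}
$dbh->commit;                   # one commit after all the inserts
$dbh->disconnect;

# Placeholder data source for the sketch.
sub get_rows {
    return ([ '/tmp/a', 100 ], [ '/tmp/b', 200 ]);
}

If committing mid-stream were acceptable, the same loop could commit and
call begin_work again every 1000 rows, per the earlier suggestion.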

Raj
