Hi everyone,

Should inserts be so slow?

I've written a Perl script to insert 10 million records for testing
purposes, and it looks like it's going to take a LONG time with Postgres.
MySQL is about 150 times faster! I don't have any indexes on either, and
I'm using DBI with the relevant DBD for both.

For Postgres 6.5.2 it's slow with either of the following table structures:
create table central (counter serial, number varchar(12), name text,
    address text);
create table central (counter serial, number varchar(12), name varchar(80),
    address varchar(80));

For MySQL I used:
create table central (counter int not null auto_increment primary key,
number varchar(12), name varchar(80), address varchar(80));

The relevant Perl portion is (same for both):
        $SQL = <<"EOT";
insert into central (number,name,address) values (?,?,?)
EOT
        $cursor = $dbh->prepare($SQL);

        $c = 0;
        $d = 0;
        while ($c < 10000000) {
                $number  = $c;
                $name    = "John Doe the number " . $c;
                $address = "$c, Jalan SS$c/$c, Petaling Jaya";
                $rv = $cursor->execute($number, $name, $address)
                        or die("Error executing insert!", $DBI::errstr);
                if ($rv == 0) {
                        die("Error inserting a record with database!", $DBI::errstr);
                }
                $c++;
                $d++;
                if ($d >= 1000) {      # progress report every 1000 rows
                        print "$c\n";
                        $d = 0;
                }
        }
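
One thing I've been wondering about trying: since each execute() presumably runs as its own transaction by default, turning off AutoCommit and committing in batches might help. A minimal sketch of that variant (assuming DBD::Pg and a database named "test" -- names are placeholders, adjust to taste):

        #!/usr/bin/perl -w
        # Sketch only: batch commits so each insert isn't its own transaction.
        use strict;
        use DBI;

        my $dbh = DBI->connect("dbi:Pg:dbname=test", "", "",
                               { AutoCommit => 0, RaiseError => 1 });
        my $cursor = $dbh->prepare(
                "insert into central (number,name,address) values (?,?,?)");

        my $c = 0;
        while ($c < 10000000) {
                $cursor->execute($c, "John Doe the number $c",
                                 "$c, Jalan SS$c/$c, Petaling Jaya");
                $c++;
                if ($c % 1000 == 0) {
                        $dbh->commit;      # commit every 1000 rows
                        print "$c\n";
                }
        }
        $dbh->commit;                      # flush the final partial batch
        $dbh->disconnect;

No idea yet how much difference it makes here; I'd be curious if anyone has numbers.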


