If @names were only a few records, you could write something like

   select ... where name in (?,?,?,?,?)

using a bit of Perl to generate the placeholders ( join(',', ('?') x @names) ). But with 100,000 records, I think this would push DBI, DBD::Pg, and the Pg SQL parser to their limits.

Usually, I strongly oppose using temporary tables. But in your case, I would create a temporary table (using a unique identifier, such as hostname + PID + random number, as the table name), insert @names into that table, and then let Pg do the job:

   create table temptable2744143 ....;

   insert into temptable2744143 (name) values (?);
      -- executed once per record in @names

    SELECT j.job_log_id
    FROM job_log j, node n , temptable2744143 t
    WHERE n.name = t.name
    AND   n.type = ?
    AND   n.usage = ?
    AND   n.node_id = j.node_id;

    drop table temptable2744143;
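In DBI terms, the approach above might be sketched like this. This only builds the SQL strings (the table name embeds hostname + PID + a random number, as suggested); the actual connection, @names, and the job_log/node schema are assumed from the thread, so the DBI flow is shown as a comment rather than run:

```perl
use strict;
use warnings;

# Build the statements for the temporary-table approach. The table name
# embeds hostname + PID + a random number so parallel runs don't collide.
sub temp_table_sql {
    my ($host) = @_;
    ($host ||= 'localhost') =~ s/\W//g;    # keep the name a valid identifier
    my $tmp = sprintf 'temptable_%s_%d_%d', $host, $$, int rand 1_000_000;
    return (
        $tmp,
        "CREATE TABLE $tmp (name text)",
        "INSERT INTO $tmp (name) VALUES (?)",
        <<"EOSQL",
SELECT j.job_log_id
FROM job_log j, node n, $tmp t
WHERE n.name = t.name
AND   n.type = ?
AND   n.usage = ?
AND   n.node_id = j.node_id
EOSQL
        "DROP TABLE $tmp",
    );
}

# Typical DBI flow (not run here; $dbh and @names come from the caller):
#   my ($tmp, $create, $insert, $select, $drop) = temp_table_sql(hostname());
#   $dbh->do($create);
#   my $sth = $dbh->prepare($insert);
#   $sth->execute($_) for @names;
#   my $ids = $dbh->selectcol_arrayref($select, undef, 'B', 'LIVE');
#   $dbh->do($drop);
```

Preparing the INSERT once and executing it per name keeps the per-row overhead down; for 100,000 rows, wrapping the inserts in a single transaction would also help.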


Where does @names come from?

Alexander

On 01.05.2006 14:21, Bealach Na Bo wrote:

Hi folks,

Apologies if this has been asked and resolved - I've spent half a day
searching various archives, but have not found anything.

My problem is the following:

I have a very large postgresql table (ca. 3 million records) and need to
replace a nested sql query with an "iterative" one, so I end up with:



@names is a very large array - of the order of 100000 elements.


sub prep_sql {
   my $stmt = <<EOSQL;
     SELECT j.job_log_id
     FROM job_log j, node n
     WHERE n.name in ?       <============ (@names) should be here
DBI documentation says this must be a scalar
     AND   n.type = ?
     AND   n.usage = ?
     AND   n.node_id = j.node_id
EOSQL

   my @bind = (@names,'B','LIVE'); <=========can't use @names here
   return ($stmt,@bind);
} # end sub prep_sql
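A placeholder-per-element version of prep_sql (workable only for modest @names sizes, as the reply notes) might look like this sketch; the sample @names here is made up, and the schema is taken from the thread:

```perl
use strict;
use warnings;

# Expand the IN clause to one placeholder per name, then bind
# each name individually -- DBI placeholders bind scalars only.
sub prep_sql {
    my (@names) = @_;
    my $in = join ',', ('?') x @names;    # e.g. "?,?,?"
    my $stmt = <<"EOSQL";
SELECT j.job_log_id
FROM job_log j, node n
WHERE n.name IN ($in)
AND   n.type = ?
AND   n.usage = ?
AND   n.node_id = j.node_id
EOSQL
    my @bind = (@names, 'B', 'LIVE');
    return ($stmt, @bind);
}

# Hypothetical sample data; the real @names comes from elsewhere.
my ($stmt, @bind) = prep_sql(qw(alpha beta gamma));
print scalar(@bind), "\n";    # 5 binds: three names plus type and usage
```

With 100,000 names this generates 100,000 placeholders in one statement, which is exactly the scaling problem the temporary-table approach avoids.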


I know how to use SQL subqueries to do this in one go in PostgreSQL,
but I think an iterative approach might be a lot faster?

Has anyone tried the same sort of thing?

Regards,

Bealach



--
Alexander Foken
mailto:[EMAIL PROTECTED]  http://www.foken.de/alexander/
