"Chris Hoover" <[EMAIL PROTECTED]> writes:
> The "not in (subselect)" is very slow in postgresql.
It's OK as long as the subselect result is small enough to hash, but
with 550 rows that's not going to happen :-(.
Another issue is that if there are any NULLs in the subselect then you
will prob
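(For illustration only; this tiny example is not from the thread and the
tables are made up:)

create temp table t (x int);
insert into t values (1);
insert into t values (2);

create temp table s (y int);
insert into s values (1);
insert into s values (null);

-- Returns no rows: once s contains a NULL, "x NOT IN (select y from s)"
-- evaluates to NULL (unknown) rather than TRUE for the non-matching row.
select * from t where x not in (select y from s);

-- Returns the row with x = 2: NOT EXISTS is not confused by the NULL.
select * from t where not exists (select 1 from s where s.y = t.x);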
WITH: select * from partes where cedula not in (select cedula from sujetos)

 Seq Scan on partes  (cost=0.00..168063925339.69 rows=953831 width=109)
   Filter: (NOT (subplan))
   SubPlan
     ->  Seq Scan on sujetos  (cost=0.00..162348.43 rows=5540143 width=15)

WITH: select * from partes where not exists (
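(The NOT EXISTS side of the comparison is cut off in the archive.
Presumably it was the equivalent anti-join form, roughly like the sketch
below; this assumes cedula is the matching column on both sides, which the
truncated query suggests but does not show:)

select *
from   partes p
where  not exists (select 1
                   from   sujetos s
                   where  s.cedula = p.cedula);

On the releases current at the time (7.4/8.0) this runs the subselect as a
correlated subplan, one probe per outer row, so it only really helps if
sujetos.cedula is indexed.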
PG version?
Maybe it's worth trying NOT EXISTS instead of NOT IN.
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of Sabio - PSQL
Sent: Tuesday, March 22, 2005 7:23 AM
To: PostgreSQL Admin
Subject: [ADMIN] Too slow
How can I improve the speed of my queries? For example, this query has
been executing for one day and has still not finished!!!

"create table tmp_partes as select * from partes where identificacion
not in (select cedula from sujetos)"

partes has 1888000 rows and an index on identificacion
sujetos has 5
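(Applying the NOT EXISTS suggestion to that statement would look roughly
like the sketch below. This is untested against the poster's schema and
assumes sujetos.cedula is, or can be, indexed:)

create table tmp_partes as
select p.*
from   partes p
where  not exists (select 1
                   from   sujetos s
                   where  s.cedula = p.identificacion);

-- Alternatively, the classic anti-join spelling, often recommended on the
-- releases of that era, gives the planner a merge or hash join to work
-- with instead of a per-row subplan:
create table tmp_partes as
select p.*
from   partes p
       left join sujetos s on s.cedula = p.identificacion
where  s.cedula is null;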
On Tue, 8 Feb 2005 11:51:22 -0600, Shashi Gireddy <[EMAIL PROTECTED]> wrote:

I recently migrated from MySQL. The database size in MySQL was 1.4GB (it
is a static database). It generated a dump file (.sql) of size 8GB, and
it took 2 days to import the whole thing into postgres. After all that,
the response from postgres is a disaster: it took 40 seconds to run a
select count(logrecno)
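(Not shown in the thread, but a 2 day import of an 8GB dump usually means
row-by-row INSERT statements run with default settings. The usual recipe
for that era is a bulk COPY, building indexes afterwards and temporarily
raising maintenance_work_mem and checkpoint_segments; the file and table
names below are made up for illustration:)

-- Load with COPY instead of millions of individual INSERTs:
copy census_data from '/tmp/census_data.dat';

-- Build indexes and gather statistics only after the data is in:
create index census_data_logrecno_idx on census_data (logrecno);
analyze census_data;

Note also that count() in PostgreSQL always scans the table; there is no
MyISAM-style stored row count, so a count over millions of rows will
never come back instantly, index or not.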
Shashi Gireddy wrote:
> I recently migrated from MySQL, ... It took 2 days to import the whole
> thing into postgres.

Well, that sounds like you don't have your postgresql.conf configured
properly.
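(The rest of that reply is cut off in the archive. The settings usually
behind this complaint are the memory ones; a hedged example for a machine
with roughly 1GB of RAM, using 8.0-era parameter names and units. The
actual values have to be sized to the poster's hardware, which the thread
does not show:)

# postgresql.conf (values illustrative)
shared_buffers = 10000          # 8kB pages, ~80MB; the shipped default is tiny
work_mem = 16384                # kB per sort/hash (called sort_mem in 7.4)
maintenance_work_mem = 131072   # kB; speeds up VACUUM and CREATE INDEX
effective_cache_size = 65536    # 8kB pages, ~512MB; how much the OS caches

Remember to restart the server for shared_buffers to take effect, and run
VACUUM ANALYZE after a big load so the planner has statistics to work with.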