...analyze) the use of indexes was gone again on certain tables...
Any other suggestions?
-Original Message-
From: bernd pinter [mailto:[EMAIL PROTECTED]]
Sent: Thursday 31 May 2001 8:53
To: Koen Antonissen
Subject: Re: [SQL] primary key scans in sequence
The problem is the optimizer.
You u...
Scan on teams (cost=0.00..1.09 rows=1 width=173)
EXPLAIN
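For anyone following along, a minimal sketch of how this is usually diagnosed (table and column names here are hypothetical): refresh the planner's statistics, then compare plans, optionally disabling sequential scans to see whether the planner is merely making a cost-based choice:

    VACUUM ANALYZE teams;
    EXPLAIN SELECT * FROM teams WHERE id = 42;
    -- hoped for: Index Scan using teams_id_idx on teams ...
    SET enable_seqscan TO off;   -- for testing only, not for production
    EXPLAIN SELECT * FROM teams WHERE id = 42;
    SET enable_seqscan TO on;

Note that on a one-row table like the plan above (rows=1), a sequential scan really is cheaper than an index scan, so the optimizer may be right to skip the index.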
I really don't understand the difference between the two, and it didn't
work before I created an extra index on id...
Kind regards,
Koen Antonissen
-Original Message-
From: Richard Poole [mailto:[EMAIL PROTECTED]]
I was searching for the same thing, but I couldn't find it. :(
-Original Message-
From: Kovacs Zoltan [mailto:[EMAIL PROTECTED]]
Sent: Wednesday 11 April 2001 16:37
To: Poul L. Christiansen
Cc: [EMAIL PROTECTED]
Subject: Re: [SQL] enumerating rows
> Use the "serial" column type.
Unfortunately...
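For context, a minimal sketch of the suggestion being quoted (table and column names are hypothetical):

    CREATE TABLE items (
        id   serial PRIMARY KEY,  -- numbered automatically from an implicit sequence
        name text
    );
    INSERT INTO items (name) VALUES ('first');  -- id is assigned by the sequence
    SELECT id, name FROM items;

One caveat: sequence numbers are not reused, so gaps appear after rolled-back inserts; serial enumerates insertions rather than guaranteeing a gap-free numbering.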
Hi Postgres people ;-)
This is probably a simple question, still I need to know:
When restoring data from dumps:
1. Will the indexes be restored when loading a COPY-based dump?
2. Does vacuumdb rebuild them?
3. If vacuumdb does not, is there something that does?
Kind regards,
Koen Antonissen
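For reference, a sketch of the usual round trip (database and file names are hypothetical). As far as I know, pg_dump writes the CREATE INDEX statements after the COPY data, so the indexes are rebuilt as part of the restore itself; vacuumdb only vacuums/analyzes and creates nothing:

    pg_dump mydb > mydb.dump        # schema, COPY data, then CREATE INDEX statements
    createdb restored
    psql restored < mydb.dump       # indexes are built during the restore
    vacuumdb restored               # vacuum/analyze only; does not touch indexes

REINDEX (run from psql) rebuilds existing indexes in place if they are ever suspect.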
Hi there
I just received this error:
'Warning: PostgreSQL query failed: ERROR: Tuple is too big: size 13872,
max size 8140 '
Is there anything I can do about that, other than telling my users not to type
so much text into the field?
Kind regards,
Koen Antonissen
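For what it's worth, that limit is the pre-7.1 restriction that a whole row must fit in one disk block (8 KB by default, minus the page header, hence 8140); PostgreSQL 7.1's TOAST storage lifts it. Until an upgrade is possible, one workaround is to split long text across several rows, sketched here with hypothetical names:

    CREATE TABLE note_chunks (
        note_id integer,
        seq     integer,             -- order of the pieces
        body    text,                -- keep each piece well under 8 KB
        PRIMARY KEY (note_id, seq)
    );
    -- reassemble in order:
    SELECT body FROM note_chunks WHERE note_id = 1 ORDER BY seq;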
-
...problems remain when the result is more than one record (unlike a lookup on a unique email address...).
Is there anyone out there with ideas on how to write faster queries involving
table joins?
I already tried INNER JOIN, NATURAL JOIN and JOIN ... ON, which didn't seem
to affect the performance in any way...
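A hedged sketch (tables and columns are hypothetical): the INNER JOIN spellings are all semantically equivalent, so swapping between them rarely helps; join performance usually hinges on having indexes on the join and filter columns, which EXPLAIN will confirm or deny:

    CREATE INDEX members_team_id_idx ON members (team_id);
    EXPLAIN
    SELECT t.name, m.email
    FROM teams t JOIN members m ON m.team_id = t.id
    WHERE t.name = 'alpha';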
The problem is solved: there was a syntax error in the code that I failed to spot
time after time...
Thanks anyway,
Koen Antonissen