One last question: using the "weather report" example, will adding an index on
the location and time columns improve performance for the DISTINCT ON query?
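For reference, a rough sketch of what I have in mind, assuming the
weather_reports table from the documentation's DISTINCT ON example (names here
are illustrative, not my actual schema):

    SELECT DISTINCT ON (location) location, time, report
    FROM weather_reports
    ORDER BY location, time DESC;

    -- The index in question; whether the planner can use it for the
    -- DISTINCT ON / ORDER BY above is exactly what I'm asking.
    CREATE INDEX weather_reports_location_time_idx
        ON weather_reports (location, time);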
Thanks a lot,
Travis
On 3/16/07, Tom Lane <[EMAIL PROTECTED]> wrote:
"Travis Whitton" <[EMAIL PR
Given the following test table, I want to grab only the newest record and
disregard any older duplicates based on name. Is this the most efficient way
to do it? Will the indices even make a difference? The table below
demonstrates a simple proof of concept. My final table will have millions of
records.
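Roughly, the proof of concept is along these lines (a sketch only, since the
original listing is cut off here; the table definition is assumed):

    -- Hypothetical test table; the real one may differ.
    CREATE TABLE test (
        name varchar(255),
        ts   timestamp
    );
    CREATE INDEX test_name_ts_idx ON test (name, ts);

    -- Newest record per name, discarding older duplicates.
    SELECT DISTINCT ON (name) name, ts
    FROM test
    ORDER BY name, ts DESC;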
I don't remember where I read it, but I saw something online a while back
comparing vacuum strategies vs dumping with pg_dump and then reloading. The
pg_dump and restore ended up compacting the database significantly more. I
don't know if that still applies with 8.2, but it might be worth a try. I
c
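For comparison, a sketch of the in-place alternative (VACUUM FULL compacts
tables but takes exclusive locks, so it's not a drop-in replacement for a
dump and reload):

    -- Compact all tables in the current database in place.
    VACUUM FULL VERBOSE;
    -- Refresh planner statistics afterwards.
    ANALYZE;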
why losing
pg_xlog would be a big problem if I were inserting and updating data
continuously throughout the day though.
Thanks,
Travis
On 12/4/06, Markus Schaber <[EMAIL PROTECTED]> wrote:
Hi, Travis,
Travis Whitton wrote:
> Hey guys, sorry if this is slightly OT for this list, bu
Hey guys, sorry if this is slightly OT for this list, but I figure it's a
simple question. If I'm storing pg_xlog on a second non-redundant drive
using the symlink method and the journal drive were to crash, how difficult
is recovery? Will PostgreSQL simply be able to reinitialize the journal on a
From everything I've been able to find, it seems that the only way to get
front-anchored regular expressions or LIKE patterns beginning with constants
to use an index is to have previously initialized your database using the C
locale. Is this still true? I'm trying to do something like:
SELECT k
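(The query is cut off above. For context, a sketch of the commonly cited
alternative to relying on the C locale, using illustrative table and column
names: an index built with the varchar_pattern_ops operator class, which
compares byte-wise regardless of locale.)

    -- Illustrative names; varchar_pattern_ops lets left-anchored LIKE
    -- patterns use the index even in a non-C locale.
    CREATE INDEX keywords_keyword_pattern_idx
        ON keywords (keyword varchar_pattern_ops);

    -- A left-anchored pattern that can use the index above:
    SELECT keyword FROM keywords WHERE keyword LIKE 'foo%';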
I took off the USING clause like so, and it worked like a charm!

DELETE FROM keywords
WHERE keyword ILIKE ANY (SELECT '%' || badword || '%' FROM badwords)
Thanks so much,
Travis

On 11/10/06, Erik Jones <[EMAIL PROTECTED]> wrote:
Hi everybody,

I have two tables of the following structure:

Table "keywords"
 column  |     type
---------+--------------
 id      | integer
 keyword | varchar(255)

and

Table "badwords"
 column  |     type
---------+--------------
 badword | varchar(255)

I need to delete all the rows from the keywords table where a badword appears
within the keyword.
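In DDL terms, the structures described above are roughly (a sketch restating
the same information):

    CREATE TABLE keywords (
        id      integer,
        keyword varchar(255)
    );

    CREATE TABLE badwords (
        badword varchar(255)
    );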
Hello all, I'm running the following query on about 6,000 records worth
of data, and it takes about 8 seconds to complete. Can anyone provide
any suggestions to improve performance? I have an index on
two columns in the transacts table (program_id, customer_id). If I specify a number
for customer.i
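(The query and the rest of the message are cut off above. As a sketch of the
kind of detail that usually matters here, with the index as described and
illustrative values:)

    -- The two-column index mentioned in the post.
    CREATE INDEX transacts_program_customer_idx
        ON transacts (program_id, customer_id);

    -- EXPLAIN ANALYZE shows whether the planner actually uses it
    -- when a specific customer id is supplied.
    EXPLAIN ANALYZE
    SELECT *
    FROM transacts
    WHERE program_id = 5          -- illustrative value
      AND customer_id = 12345;    -- illustrative value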