kevin kempter wrote:
One of the things we need to query is the min date from the master table.
We may explore alternatives for this particular query; however, even if we
fix it, I think we have a fundamental issue with the planner's use of
indexes (actually, its non-use of them).
We had a
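For the min-date query itself, the usual workaround (min() and max() were
not index-optimized before PostgreSQL 8.1) is to rewrite it so an ordinary
index on the date column can be walked; the table and column names below
are stand-ins, not the poster's actual schema:

    -- equivalent to SELECT min(some_date) FROM master_table,
    -- but able to use an index on (some_date):
    SELECT some_date
    FROM   master_table
    ORDER  BY some_date ASC
    LIMIT  1;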
Bruce Momjian wrote:
Ravindran G-TLS,Chennai. wrote:
Note: please bear with us regarding the disclaimer; it is added
automatically by the Exchange server.
Regards,
Ravi
FYI, we are getting closer to rejecting any email with such a
disclaimer, or emailing you back every time saying we are ignoring the
Adam Rich wrote:
Doesn't sound like you want postgres at all. Try mysql.
Could you explain your reason for suggesting mysql? I'm simply curious
why you would offer that as a solution.
Merlin Moncure wrote:
On 2/5/05, Dirk Lutzebaeck <[EMAIL PROTECTED]> wrote:
Was the original message actually from 2/5/05?
David Gagnon wrote:
> " -> Index Scan using cr_pk on cr (cost=0.00..6.02 rows=1
> width=828) (actual time=0.073..0.077 rows=1 loops=13587)"
> "Index Cond: (((cr.crypnum)::text = 'M'::text) AND
> (cr.crnum = "outer".cscrnum))"
> "Filter: ((crda
evgeny gridasov wrote:
> Hi Everybody.
>
> I am going to replace some 'select count(*) from ... where ...' queries
> which run on large tables (10M+ rows) with something like
> 'explain select * from ... where ...' and parse the planner output after
> that to find out its forecast of the number of rows.
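A server-side sketch of that parsing (estimate_rows is a made-up name; it
runs EXPLAIN on the supplied query text and pulls the planner's row
estimate out of the top plan line):

    CREATE OR REPLACE FUNCTION estimate_rows(query text) RETURNS bigint AS $$
    DECLARE
        plan_line text;
    BEGIN
        FOR plan_line IN EXECUTE 'EXPLAIN ' || query LOOP
            -- the first line holds the top node, e.g. "... rows=12345 ..."
            RETURN substring(plan_line FROM 'rows=([0-9]+)')::bigint;
        END LOOP;
        RETURN NULL;
    END;
    $$ LANGUAGE plpgsql;

    -- SELECT estimate_rows('SELECT * FROM big_table WHERE some_flag');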
Dennis Bjorklund wrote:
> On Thu, 23 Jun 2005, Bricklen Anderson wrote:
>
>
>>iii. UNIQUE constraint on table "t1". This didn't seem to perform too
>>badly with fewer rows (preliminary tests), but as you'd expect, on error
>>the whole transaction was rolled back.
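One way around that all-or-nothing behaviour, a sketch assuming PostgreSQL
8.0 or later and stand-in table/column names: a PL/pgSQL exception block
opens a subtransaction, so a duplicate key aborts only the one INSERT, not
the whole surrounding transaction.

    CREATE OR REPLACE FUNCTION insert_ignore_dup(k integer, v text)
    RETURNS void AS $$
    BEGIN
        INSERT INTO t1 (key_col, val_col) VALUES (k, v);
    EXCEPTION WHEN unique_violation THEN
        NULL;  -- duplicate key: skip it (or UPDATE here instead)
    END;
    $$ LANGUAGE plpgsql;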
Jacques Caron wrote:
>
> I have a similar situation, and the solution I use (though I haven't
> really tested many different situations):
> - have a trigger ON INSERT which does:
> UPDATE SET whatever_value=NEW.whatever_value,... WHERE
> whatever_key=NEW.whatever_key AND...
> IF FOUND THEN
>     RETURN NULL;  -- row updated in place: suppress the INSERT
> ELSE
>     RETURN NEW;   -- no match: let the INSERT proceed
> END IF;
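Filled out as a runnable sketch of that pattern (t1, whatever_key and
whatever_value are stand-ins; note that without extra locking, two
concurrent inserts of the same key can still race):

    CREATE OR REPLACE FUNCTION t1_upsert() RETURNS trigger AS $$
    BEGIN
        UPDATE t1
           SET whatever_value = NEW.whatever_value
         WHERE whatever_key = NEW.whatever_key;
        IF FOUND THEN
            RETURN NULL;   -- updated an existing row: cancel the INSERT
        END IF;
        RETURN NEW;        -- no existing row: let the INSERT proceed
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER t1_upsert_trg
        BEFORE INSERT ON t1
        FOR EACH ROW EXECUTE PROCEDURE t1_upsert();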
Meetesh Karia wrote:
> I don't know what this will change wrt how often you need to run VACUUM
> (I'm a SQL Server guy), but instead of an update and insert, try a
> delete and insert. You'll only have to find the duplicate rows once and
> your insert doesn't need a where clause.
>
> Meetesh
>
Situation:
I'm trying to optimize an ETL process with many upserts (~100k aggregated
rows, no duplicates allowed). The source table (t2) holds around 14 million
rows; I'm grabbing them 100,000 rows at a time from t2, resulting in about
100,000 distinct rows in the destination table (t1).
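Meetesh's delete-then-insert suggestion, applied to one 100k slice, might
look like the sketch below; the key and paging columns are hypothetical,
and DELETE ... USING requires PostgreSQL 8.1 or later:

    BEGIN;
    -- drop any destination rows that the incoming slice will replace
    DELETE FROM t1
     USING t2
     WHERE t1.the_key = t2.the_key
       AND t2.id BETWEEN 1 AND 100000;    -- current 100k slice of t2

    -- then insert the slice; no WHERE clause needed to dodge duplicates
    INSERT INTO t1 (the_key, the_value)
    SELECT DISTINCT the_key, the_value
      FROM t2
     WHERE t2.id BETWEEN 1 AND 100000;
    COMMIT;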
Yves Vindevogel wrote:
> Hi,
>
> rvponp=# explain select * from tblprintjobs order by loginuser,
> desceventdate, desceventtime offset 25 limit 25 ;
>
> I have a very simple query on a big table. When I issue a "limit"
> and/or "offset" clause, the query is not using the index.
> Can anyone explain this?
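A composite index matching the ORDER BY exactly usually lets the planner
satisfy such a LIMIT/OFFSET query with a short index scan instead of
sorting the whole table; the index name here is invented for the example:

    CREATE INDEX tblprintjobs_sort_idx
        ON tblprintjobs (loginuser, desceventdate, desceventtime);

    EXPLAIN ANALYZE
    SELECT * FROM tblprintjobs
    ORDER BY loginuser, desceventdate, desceventtime
    OFFSET 25 LIMIT 25;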
Colton A Smith wrote:
Hi:
I have a table called sensors:
Table "public.sensor"
Column | Type |Modifiers
-+--+-
senso
Christopher Kings-Lynne wrote:
Another trick you can use with large data sets like this, when you want
results back in seconds, is to have regularly updated tables that aggregate
the data along each column normally aggregated against the main data set.
Maybe some bright person will prove me wrong
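A sketch of that aggregate-table idea; every name below is hypothetical,
and the refresh would typically run from cron:

    -- small rollup kept next to the big fact table
    CREATE TABLE sales_summary AS
    SELECT sale_day, region, count(*) AS n_sales, sum(amount) AS total
    FROM   big_sales_table
    GROUP  BY sale_day, region;

    -- periodic refresh (e.g. every few minutes):
    TRUNCATE sales_summary;
    INSERT INTO sales_summary
    SELECT sale_day, region, count(*), sum(amount)
    FROM   big_sales_table
    GROUP  BY sale_day, region;

Queries that only need the aggregates then hit the small summary table and
come back in milliseconds, at the cost of slightly stale data.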
Richard Huxton wrote:
Ken Egervari wrote:
I've tried to use Dan Tow's tuning method
Who? What?
http://www.singingsql.com/
Dan has written some remarkable papers on SQL tuning. Some of it is pretty
complex, but his book "SQL Tuning" is an excellent resource.
Greg Stark wrote:
Kevin Brown <[EMAIL PROTECTED]> writes:
Ouch. Is this really a reasonable assumption? I figured the primary
use of a cursor was to fetch small amounts of data at a time from a
large table, so 10% seems extremely high as an average fetch size. Or
is the optimization based on th
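The 10% figure being questioned is the planner's assumed cursor fetch
fraction. For what it's worth, later PostgreSQL releases (8.4 and up, so
well after this thread) expose the assumption as a setting:

    -- the default is 0.1, i.e. the 10% discussed above
    SET cursor_tuple_fraction = 1.0;  -- plan cursors as if fully fetched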