I have a query that runs quite quickly using a hash join when run
standalone.
When I use this query as a subquery, the planner always seems to
pick a different plan with an order of magnitude worse performance.
This bad plan is chosen even when the outer SQL statement is
a trivial expression like
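A workaround often suggested on this list is to fence the subquery off with OFFSET 0 so the planner cannot flatten it into the outer query and discard the fast hash-join plan. A minimal sketch; the table and column names here are invented for illustration:

```sql
-- "OFFSET 0" is a no-op for results but acts as an optimization fence:
-- the subquery keeps its own plan instead of being rewritten by the planner.
SELECT *
FROM (
    SELECT t1.id, t2.status
    FROM t1
    JOIN t2 ON t2.t1_id = t1.id
    OFFSET 0          -- fence: preserves the standalone (hash join) plan
) AS sub
WHERE sub.status = 'X';
```

Comparing EXPLAIN ANALYZE output with and without the fence should show whether plan flattening is really the cause.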
Ron Mayer [EMAIL PROTECTED] writes:
- Seq Scan on points (cost=0.00..444.43 rows=1 width=82) (actual
time=0.096..132.255 rows=15743 loops=1)
Filter: (the_geom
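The plan above estimates 1 row but actually returns 15743, which suggests the planner's statistics for the filtered column are badly off. A common first step (a sketch, assuming the table and column shown in the plan) is to raise the per-column statistics target and re-analyze:

```sql
-- Estimated rows=1 vs actual rows=15743: the selectivity estimate is wrong.
-- More detailed statistics often correct this kind of misestimate.
ALTER TABLE points ALTER COLUMN the_geom SET STATISTICS 1000;
ANALYZE points;
```

For geometry columns the achievable accuracy also depends on the PostGIS version's selectivity estimator, so this may only partly close the gap.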
Richard,
What do you mean by summary table? Basically a cache of the query into a
table with replicated column names of all the joins? I'd probably have to
wipe out the table every minute and re-insert the data for each carrier in
the system. I'm not sure how expensive this operation would
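The summary-table idea described above can be sketched as a cached join that is rebuilt on a schedule. All table and column names below are invented; TRUNCATE is transactional in PostgreSQL, so readers never see an empty cache:

```sql
-- Hypothetical cache of the expensive join; refreshed e.g. every minute from cron.
CREATE TABLE carrier_summary AS
SELECT c.carrier_id, s.shipment_id, ss.status
FROM carrier c
JOIN shipment s        ON s.carrier_id = c.carrier_id
JOIN shipment_status ss ON ss.shipment_id = s.shipment_id;

BEGIN;
TRUNCATE carrier_summary;                 -- cheap: no per-row delete
INSERT INTO carrier_summary
SELECT c.carrier_id, s.shipment_id, ss.status
FROM carrier c
JOIN shipment s        ON s.carrier_id = c.carrier_id
JOIN shipment_status ss ON ss.shipment_id = s.shipment_id;
COMMIT;
```

Whether a once-a-minute rebuild is cheap enough depends on the size of the join, which is exactly the open question in the message above.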
Ken Egervari wrote:
Josh,
...
I thought about this, but it's very important since shipment and
shipment_status are both updated in real time 24/7/365. I think I
might be able to cache it within the application for 60 seconds at
most, but it would make little difference since people tend to
I face a problem when running the following PL/pgSQL
function. The problem is that it takes more than 24 hours to
complete the calculation. The EMP table has about 200,000
records. I execute the function through psql: SELECT
calculate();
(There is no cyclic link inside the data).
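The calculate() function itself is not shown, but if it walks PARENT links one row at a time in a loop, a single set-based traversal is usually orders of magnitude faster. A sketch using a recursive query (available in PostgreSQL 8.4 and later; the schema emp(id, parent, value) is assumed):

```sql
-- Set-based walk of the whole hierarchy in one statement, instead of
-- a per-row PL/pgSQL loop over 200,000 records.
WITH RECURSIVE chain AS (
    SELECT id, parent, value, 1 AS depth
    FROM emp
    WHERE parent IS NULL                  -- roots of the hierarchy
  UNION ALL
    SELECT e.id, e.parent, e.value, c.depth + 1
    FROM emp e
    JOIN chain c ON e.parent = c.id       -- descend one level per iteration
)
SELECT id, depth, value FROM chain;
```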
Ken,
I did everything you said and my query does perform a bit better. I've
been getting speeds from 203 to 219 to 234 milliseconds now. I tried
increasing the work mem and the effective cache size from the values you
provided, but I didn't see any more improvement. I've tried looking
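The settings mentioned above can be experimented with per session before touching postgresql.conf. The values here are purely illustrative, not recommendations:

```sql
SET work_mem = '32MB';              -- memory available to each sort/hash step
SET effective_cache_size = '1GB';   -- hint about how much the OS is caching
-- then re-run the query under EXPLAIN ANALYZE to compare plans and timings
```

Past a certain point these settings stop helping, as seen above, and the remaining cost is in the plan itself or in I/O.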
Charles Joseph [EMAIL PROTECTED] writes:
I face a problem when running the following PL/pgSQL
function. The problem is that it takes more than 24 hours to
complete the calculation. The EMP table has about 200,000
records.
Sure there are no infinite loops of PARENT links in your table?
Also,
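The loop question above can be sanity-checked with plain queries. These catch the two shortest cycle forms; the schema emp(id, parent) is assumed:

```sql
-- Rows that are their own parent make a naive parent-walk run forever.
SELECT id FROM emp WHERE parent = id;

-- Two-node cycles: A's parent is B and B's parent is A.
SELECT a.id, b.id
FROM emp a
JOIN emp b ON a.parent = b.id AND b.parent = a.id
WHERE a.id < b.id;
```

Longer cycles need a full recursive walk (e.g. WITH RECURSIVE on PostgreSQL 8.4+), but self-loops and two-node cycles are the most common data errors.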
I notice that by default, postgres sets numeric fields to
storage MAIN. What exactly does that mean? Does that mean
it stores it in some type of compressed BCD format? If so,
how much performance gain can I expect by setting the storage
to PLAIN? Also, the docs say that char(n) is implemented
Jeremiah Jahn wrote:
I have about 5M names stored on my DB. Currently the searches are very
quick unless, they are on a very common last name ie. SMITH. The Index
is always used, but I still hit 10-20 seconds on a SMITH or Jones
search, and I average about 6 searches a second and max out at about
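When an index on the last name alone matches tens of thousands of SMITH rows, a composite index that also covers the next search column lets a single index scan narrow much further. A sketch; the table and column names are assumed:

```sql
-- One index scan can then use both columns, instead of fetching every
-- SMITH row and filtering afterwards.
CREATE INDEX idx_name_full ON people (last_name, first_name);

SELECT * FROM people
WHERE last_name = 'SMITH' AND first_name = 'JOHN';
```

If searches are frequently on last name only, this still helps nothing for bare SMITH lookups; those are limited by how many matching rows must be fetched.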
[EMAIL PROTECTED] wrote:
I would like to know whether there is a command by which the server will return the
record ID to the client when the client inserts data and the server generates
an autoincrement ID for that record.
For example, if many clients try to send money data to the server and
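Two standard answers, sketched with assumed table and sequence names. Since PostgreSQL 8.2, INSERT ... RETURNING hands the generated ID straight back; on older versions, currval() after the insert is session-local and therefore safe even with many concurrent clients:

```sql
-- PostgreSQL 8.2+: one round trip, ID comes back with the insert.
INSERT INTO payments (amount) VALUES (100.00) RETURNING id;

-- Older versions: currval() returns the value nextval() produced
-- in THIS session, unaffected by other clients inserting concurrently.
INSERT INTO payments (amount) VALUES (100.00);
SELECT currval('payments_id_seq');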
On Tue, Mar 01, 2005 at 10:34:29AM +0100, Ramon Bastiaans wrote:
Hi all,
I am doing research for a project of mine where I need to store several
billion values for a monitoring and historical tracking system for a big
computer system. My current estimate is that I have to store (somehow)
Not true - with fsync on I get nearly 500 tx/s, with it off I'm as
high as 1600/sec with dual opteron and 14xSATA drives and 4GB RAM on a
3ware Escalade. Database has 3 million rows.
As long as queries use indexes, multi-billion rows shouldn't be too
bad. A full table scan will suck, though.
Alex
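The point above, that index-driven access is what keeps huge tables usable, can be sketched as a narrow monitoring table with a composite index. Names and types are illustrative:

```sql
-- Narrow rows keep the table and index compact at billions of rows.
CREATE TABLE samples (
    metric_id integer          NOT NULL,
    ts        timestamptz      NOT NULL,
    value     double precision NOT NULL
);

-- "One metric over a time range" becomes a tight index range scan
-- instead of a full table scan.
CREATE INDEX samples_metric_ts ON samples (metric_id, ts);
```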
Dave Held [EMAIL PROTECTED] writes:
I notice that by default, postgres sets numeric fields to
storage MAIN. What exactly does that mean?
See http://developer.postgresql.org/docs/postgres/storage-toast.html
There isn't any amazingly strong reason why numeric defaults to MAIN
rather than
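In terms of the TOAST documentation linked above: MAIN keeps the value in the main table but allows compression, while PLAIN disables both compression and out-of-line storage. The setting is changed per column; the table and column names here are assumed:

```sql
-- Avoids (de)compression work for this column; the gain is usually small
-- unless the numeric values are large enough to be compressed at all.
ALTER TABLE accounts ALTER COLUMN balance SET STORAGE PLAIN;
```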