Hi,
I did a vacuum with -z and it fixed the issue. I was not aware that
vacuumdb didn't ANALYZE by default. Thanks everybody for all of the help!
Benjamin
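
For reference, the distinction that tripped things up here (mydb is a
placeholder database name; vacuumdb and its -z/--analyze flag are the
standard PostgreSQL client tool):

    vacuumdb mydb       # plain VACUUM; planner statistics are not updated
    vacuumdb -z mydb    # VACUUM ANALYZE; also refreshes planner statistics

The equivalent from psql is simply "VACUUM ANALYZE;".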
Benjamin Arai <[EMAIL PROTECTED]> writes:
> -> Index Scan using mutualfd_weekday_qbid_pkey_idx on
> mutualfd_weekday_qbid (cost=0.00..6.01 rows=1 width=19) (actual
> time=34.579..8510.801 rows=253 loops=1)
>    Index Cond: ((pkey >= '2005-12-15'::date) AND (pkey <=
> '2006-12-15'::date))
Fabricio Peñuelas wrote:
> Is the configuration I have correct?
Hmm, not really; I'd say there are quite a few little things that need
tweaking. But this list is in English.
> Any suggestions?
I'd advise you to send your message (in English) to the pgsql-performance
list, where they can help you configure it.
On Sat, 23 Dec 2006, Benjamin Arai wrote:
> -> Index Scan using mutualfd_weekday_qbid_pkey_idx on mutualfd_weekday_qbid
> (cost=0.00..6.01 rows=1 width=19) (actual time=34.579..8510.801 rows=253
> loops=1)
You're right that this is the problem: it shows that the planner was
expecting a very low row count (rows=1) when it actually got 253.
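
A sketch of the corresponding fix, using the table and date range from the
plan above:

    ANALYZE mutualfd_weekday_qbid;   -- refresh the planner's statistics
    EXPLAIN ANALYZE
      SELECT * FROM mutualfd_weekday_qbid
      WHERE pkey >= '2005-12-15' AND pkey <= '2006-12-15';
    -- the estimated row count should now be close to the actual 253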
Yes, ANALYZE should definitely improve the performance of the query.
--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)
Just to make things clearer, I ran EXPLAIN ANALYZE on the slow query.
I got:
Merge Full Join (cost=62.33..73.36 rows=1000 width=19) (actual
time=39.205..8521.644 rows=272 loops=1)
  Merge Cond: ("outer".pkey = "inner".d1)
  -> Index Scan using mutualfd_weekday_qbid_pkey_idx on
     mutualfd_weekday_qbid (cost=0.00..6.01 rows=1 width=19) (actual
     time=34.579..8510.801 rows=253 loops=1)
Adding to the last email: for now, try the work_mem change, but you should
also be running ANALYZE along with the VACUUM you do regularly (with a cron
job, I guess).
--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)
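
A minimal sketch of such a cron job (the schedule, user, and database name
are all placeholders):

    # system crontab entry: VACUUM ANALYZE the database every night at 3am
    0 3 * * *  postgres  vacuumdb --analyze mydb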
Try increasing work_mem first and see if anything changes; that might help.
--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)
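
A sketch of how to test that suggestion for a single session before touching
postgresql.conf (the 32MB figure is an arbitrary example):

    SHOW work_mem;          -- check the current value
    SET work_mem = '32MB';  -- this session only; pre-8.2 servers take
                            -- kilobytes instead: SET work_mem = 32768;
    -- then rerun the slow query under EXPLAIN ANALYZE and compare timings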
I have been running pieces of my PL function by hand and I have found
that the following queries work by themselves, each taking less than a
second to execute.
getDateRange('12/1/2005','12/1/2006') <- simply generates a date
list. It doesn't even access a table.
SELECT * FROM mutualfd_weekday_qbid
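
The getDateRange function itself isn't shown in the thread; here is a
hypothetical sketch of a set-returning function with the behavior described
(the name is Benjamin's, the argument types and body are assumed):

    CREATE OR REPLACE FUNCTION getDateRange(start_date date, end_date date)
    RETURNS SETOF date AS $$
    DECLARE
        d date := start_date;
    BEGIN
        -- emit one row per day in the closed range [start_date, end_date]
        WHILE d <= end_date LOOP
            RETURN NEXT d;
            d := d + 1;
        END LOOP;
        RETURN;
    END;
    $$ LANGUAGE plpgsql;

    SELECT * FROM getDateRange('12/1/2005', '12/1/2006');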
On Sat, 23 Dec 2006, Benjamin Arai wrote:
> I thought that you only need to use the -z flag if the distribution of the
> data is changing.
You're absolutely correct. Have you not been inserting, updating or deleting
data? It sounds like you have been, based on the followup email you just
sent:
One more note about my problem: when you run a query on older data in
the table it works great, but if you query newer data it is very slow.
Ex.
SELECT * FROM my_table WHERE date >= '12/1/2005' AND date <= '12/1/2006'; <- slow
SELECT * FROM my_table WHERE date >= '12/1/2002' AND date <= '12/1/200
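
One way to see why only the newer range suffers (pg_stats is a standard
system view; the table and column names are from the example above):

    SELECT histogram_bounds FROM pg_stats
    WHERE tablename = 'my_table' AND attname = 'date';
    -- if the table hasn't been ANALYZEd since the new rows arrived, the
    -- highest bound predates them, so the planner expects almost no rows
    -- to match a recent date range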
I thought that you only needed to use the -z flag if the distribution of
the data is changing.
On Sat, 23 Dec 2006, Benjamin Arai wrote:
> The largest table in my database (30GB) has mysteriously gone from taking
> milliseconds to perform a query to minutes. The disks are fine and I have
> 4GB of shared memory. Could this slowdown have to do with max_fsm_pages
> or something else like that?
Hi,
The largest table in my database (30GB) has mysteriously gone from
taking milliseconds to perform a query to minutes. The disks are fine
and I have 4GB of shared memory. Could this slowdown have to do with
max_fsm_pages or something else like that? I made it larger but the
queries
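
A quick way to inspect the settings in question (SHOW is standard; note the
parameter is spelled max_fsm_pages, and it was removed in PostgreSQL 8.4):

    SHOW shared_buffers;  -- the usual shared-memory knob
    SHOW max_fsm_pages;   -- free-space-map size; if too small, VACUUM
                          -- cannot track all the dead rows in a big table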