You may want to consider performing vacuums more frequently during the week,
or consider enabling autovacuum if it makes sense for your transaction
volume.
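For example, a minimal sketch (the scheduling machinery, e.g. cron, is assumed):

```sql
-- Sketch: a database-wide vacuum/analyze run during a quiet window
-- (e.g. nightly from cron via psql or vacuumdb), instead of one large
-- weekly pass.
VACUUM ANALYZE;
-- From 8.1 on, the integrated autovacuum daemon can do this automatically
-- (autovacuum = on in postgresql.conf).
```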
Regards,
Husam
-Original Message-
From: Gnanakumar gna...@zoniac.com
Sent: Saturday, March 27, 2010 6:06 AM
To:
Have you run vacuum/analyze on the table?
--
Husam
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of James Cloos
Sent: Wednesday, December 13, 2006 10:48 AM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Optimizing a query
I've currently got
. Tweaking such parameters is
usually done as a last resort.
--
Husam
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of James Cloos
Sent: Wednesday, December 13, 2006 2:35 PM
To: Tomeh, Husam
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM
Like many decent RDBMSs, the PostgreSQL server allocates its own shared
memory area in which data is cached. When it receives a query, the
engine first checks its shared memory buffers; if the data is not found
there, it performs disk I/O to retrieve it from the PostgreSQL data
files and places it in the shared buffers.
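The size of that buffer cache can be inspected from psql; a minimal sketch (the value shown is illustrative):

```sql
-- shared_buffers is set in postgresql.conf (a server restart is required
-- to change it); in 8.x the value is a number of 8 kB pages.
SHOW shared_buffers;
-- e.g. 50000 pages = roughly 390 MB of shared buffer cache
```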
therefore take time? Because an execution plan is created
beforehand..
Sincerely
Adnan DURSUN
- Original Message -
From: Tomeh, Husam [EMAIL PROTECTED]
To: Adnan DURSUN [EMAIL PROTECTED];
pgsql-performance@postgresql.org
Sent: Wednesday, October 04, 2006 1:11 AM
Subject: Re: [PERFORM
, the above from the server log may shed some light
on the problem.
Thanks again,
Husam Tomeh
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Tuesday, February 14, 2006 3:49 PM
To: Tomeh, Husam
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] 0ut
to think about.
Husam Tomeh
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Tomeh,
Husam
Sent: Thursday, February 23, 2006 11:57 AM
To: Tom Lane
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] 0ut of Memory Error during Vacuum Analyze
This is the second time I'm getting an out-of-memory error when I start a
database vacuum or try to vacuum any table. Note that this machine has
been used for batch data-loading purposes.
=# vacuum analyze code;
ERROR: out of memory
DETAIL: Failed on request of size 1073741820.
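The failing request size (1073741820 bytes, just under 1 GB) suggests a maintenance_work_mem set near 1 GB; a hedged sketch of a workaround, assuming that is the cause:

```sql
-- Assumption: maintenance_work_mem was set near 1 GB. Lowering it for the
-- session makes VACUUM request a smaller allocation. In 8.x the value is
-- an integer number of kilobytes (unit suffixes like '128MB' arrived in 8.2).
SET maintenance_work_mem = 131072;  -- 128 MB
VACUUM ANALYZE code;
```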
I'm running Postgres
To: Tomeh, Husam
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] 0ut of Memory Error during Vacuum Analyze and
Create Index
Tomeh, Husam [EMAIL PROTECTED] writes:
I have run pg_dump and had no errors. I also got this error when
creating one index but not another. When I lowered
Postgres 8.1 performance rocks (compared with 8.0), especially with the use
of in-memory index bitmaps. Complex queries that used to take 30+ minutes
now complete in a few minutes in 8.1. Many thanks to all the wonderful
developers for the huge 8.1 performance boost.
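A hedged illustration of the feature (the table and index names here are hypothetical): 8.1 can combine several bitmap index scans in one query:

```sql
EXPLAIN SELECT *
FROM orders
WHERE customer_id = 42 AND status = 'open';
-- A typical 8.1 plan combines both indexes in memory:
--   Bitmap Heap Scan on orders
--     -> BitmapAnd
--          -> Bitmap Index Scan on orders_customer_idx
--          -> Bitmap Index Scan on orders_status_idx
```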
---
Husam
Have you tried adjusting effective_cache_size? With it set appropriately,
the planner may produce a better plan for you without your needing to set
enable_seqscan to off.
--
Husam
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Jean-Pierre
Pelletier
Sent:
The recommendation for effective_cache_size is about 2/3 of your
server's physical RAM (if the server is dedicated only for postgres).
This should have a significant impact on whether the Postgres planner
chooses indexes over sequential scans.
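For example, a sketch assuming a dedicated server with 32 GB of RAM:

```sql
-- 2/3 of 32 GB is about 21 GB; in 8.x the setting is a number of
-- 8 kB pages, and 21 GB / 8 kB = 2752512 pages.
SET effective_cache_size = 2752512;
```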
--
Husam
-Original Message-
From: [EMAIL PROTECTED]
I have an 8.0.2 PostgreSQL database about 180 GB in size, running on a
2.6 Red Hat kernel with 32 GB of RAM and 2 CPUs. I'm running the vacuum
full analyze command, and it has been running for at least two consecutive
days with no other processes running (it's an offline loading server). I
tweaked
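A hedged alternative to a multi-day VACUUM FULL: a plain VACUUM reclaims dead space for reuse without VACUUM FULL's exclusive locks and row shuffling, and typically finishes far faster on a large database:

```sql
-- VERBOSE reports per-table progress, which is useful on a 180 GB database.
VACUUM VERBOSE ANALYZE;
```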