> William,
>
> You need to increase your fsm settings. The database is telling you it is
> trying to store 177K+ pages, but you have only provided it with 20K. Since
> these pages are cheap, I would set your fsm up with at least the following.
>
> max_fsm_pages 200000
> max_fsm_relations 5000
>
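A sketch of where these settings go, sized against the VACUUM output quoted later in this thread (885 relations, 177632 total pages needed). On 7.4 they live in postgresql.conf and need a server restart to take effect; each page slot costs roughly six bytes of shared memory, so generous values really are cheap:

```
# postgresql.conf (7.4-era; these parameters were removed in 8.4)
max_fsm_pages = 200000       # must be >= "total pages needed" from VACUUM VERBOSE
max_fsm_relations = 5000     # must be >= the number of relations reported
```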
On 7/7/06, William Scott Jordan <[EMAIL PROTECTED]> wrote:
Hi Jeff,
Ah, okay. I see what information you were looking for. Doing a VACUUM
on the full DB, we get the following results:
INFO: free space map: 885 relations, 8315 pages stored; 177632 total
pages needed
DETAIL
On Friday 07 July 2006 17:48, William Scott Jordan wrote:
> Hi Jeff,
>
> Ah, okay. I see what information you were looking for. Doing a
> VACUUM on the full DB, we get the following results:
>
>
> INFO: free space map: 885 relations, 8315 pages stored; 177632 total
>
Hi Jeff,
Ah, okay. I see what information you were looking for. Doing a
VACUUM on the full DB, we get the following results:
INFO: free space map: 885 relations, 8315 pages stored; 177632 total
pages needed
DETAIL: Allocated FSM size: 1000 relations + 2 pa
On Fri, 7 Jul 2006, William Scott Jordan wrote:
Hi Jeff,
We are running ANALYZE with the hourly VACUUMs. Most of the time the VACUUM
for this table looks like this:
INFO: vacuuming "public.event_sums"
INFO: index "event_sums_event_available" now contains 56121 row versions in
2256 pages
Hi Jeff,
We are running ANALYZE with the hourly VACUUMs. Most of the time the
VACUUM for this table looks like this:
INFO: vacuuming "public.event_sums"
INFO: index "event_sums_event_available" now contains 35669 row
versions in 1524 pages
DETAIL: 22736 index
On Fri, 7 Jul 2006, William Scott Jordan wrote:
Hi all!
Can anyone explain to me what VACUUM does that REINDEX doesn't? We have a
frequently updated table on Postgres 7.4 on FC3 with about 35000 rows which
we VACUUM hourly and VACUUM FULL once per day. It seems like the table still
slows to
> I'm trying to decide now if we need to include a daily REINDEX along
> with our daily VACUUM FULL, and more importantly I'm just curious to
> know why we should or shouldn't do that.
>
> Any information on this subject would be appreciated.
My understanding is that vacuum full frees all of th
Hi all!
Can anyone explain to me what VACUUM does that REINDEX doesn't? We
have a frequently updated table on Postgres 7.4 on FC3 with about
35000 rows which we VACUUM hourly and VACUUM FULL once per day. It
seems like the table still slows to a crawl every few weeks. Running
a REINDEX by i
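For reference, a sketch of the command under discussion, using the index and table names from the VACUUM output elsewhere in this thread; on 7.4 both forms take an exclusive lock on the table while they run:

```sql
-- Rebuild one bloated index:
REINDEX INDEX event_sums_event_available;
-- Or every index on the table at once:
REINDEX TABLE event_sums;
```

On 7.4, plain VACUUM never shrinks indexes and VACUUM FULL can actually grow them while it moves tuples, which is why a periodic REINDEX is often paired with VACUUM FULL on heavily updated tables.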
On Fri, 7 Jul 2006, andy rost wrote:
is that 100k context switches over 10 seconds or one second? that
might be something to check out. pg 8.1 is regarded as the solution
to any cs problem, though.
According to man top, that's 100K per second. I'm interested in your
recommendation but am not
I've got a problem where Deletes on a certain table are taking very long (>5
sec) (PG 8.1.3, linux). Explain Analyze on the delete shows that two
(automatically created) triggers caused by foreign keys are responsible for
99% of the time.
* The two tables are large (>1.5mm and >400k rows), so se
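The usual culprit for this pattern is a missing index on the referencing columns: PostgreSQL indexes the referenced (primary key) side automatically, but not the referencing side, so each deleted parent row costs a sequential scan of each child table. A sketch with hypothetical table and column names:

```sql
-- Hypothetical schema: big_child.parent_id REFERENCES parent(id).
-- Index the *referencing* column so the FK trigger does an index scan:
CREATE INDEX big_child_parent_id_idx ON big_child (parent_id);

-- Then re-check where the time goes:
EXPLAIN ANALYZE DELETE FROM parent WHERE id = 42;
```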
Hi Merlin,
Thanks for the input. Please see below ...
Merlin Moncure wrote:
On 7/5/06, andy rost <[EMAIL PROTECTED]> wrote:
fsync = on # turns forced synchronization
have you tried turning this off and measuring performance?
No, not yet. We're trying a couple
Mark,
Thanks for the insight. I increased the value of effective_cache_size to
3 Gigs and will monitor the performance over the weekend. Prior to this
change we discovered that we are filling up WALs to the tune of 2400 per
day. Moving the pg_xlog subdirectory to its own drive seemed to boost
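For reference, a sketch of two 8.1-era settings relevant to the changes described above (values illustrative; effective_cache_size is a planner hint about OS caching, not a memory allocation, and checkpoint_segments is an assumption here given the heavy WAL traffic mentioned):

```
# postgresql.conf, PostgreSQL 8.1 (pre-8.2 units: 8 kB pages, 16 MB segments)
effective_cache_size = 393216   # ~3 GB expressed in 8 kB pages
checkpoint_segments = 32        # spreads checkpoints out under heavy WAL traffic
```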
On 7/4/06, Luckys <[EMAIL PROTECTED]> wrote:
Hi all,
I got this query. I have indexes on the PropertyId and Dates columns across
all the tables, but it still takes ages to get me the result. What indexes
would you propose for this, or am I helpless?
I would suggest posting your table schemas a
On Fri, Jul 07, 2006 at 03:51:38AM -0400, Gene wrote:
> Starting off with:
>
> Parent (Rule on insert instead insert into Child2)
> Child1 (Constraint date <= somedate1)
> Child2 (Constraint date > somedate1)
>
> Now I want to create another Partition:
>
> Create Table Child3
> BEGIN
> Update
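The 8.1-era pattern the pseudo code above is reaching for looks roughly like this (the literal date and the rule name are hypothetical; note that Child2's CHECK constraint also needs an upper bound added so the partitions stay disjoint):

```sql
-- New partition for the next date range:
CREATE TABLE Child3 (
    CHECK (date > DATE '2006-07-01')
) INHERITS (Parent);

-- Redirect new inserts to the newest partition:
CREATE OR REPLACE RULE parent_insert AS
    ON INSERT TO Parent DO INSTEAD
    INSERT INTO Child3 VALUES (NEW.*);
```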
Adaptec's RAID controllers are all underwhelming.
The best commodity RAID controllers in terms of performance, size of available
BBC, connectivity technologies (all of IDE, SCSI, SATA and FC are supported),
etc are made by Areca.
Get one of Areca's RAID controllers that hold up to 2 GB of BBC.
AR
On 29 Jun 2006 10:00:35 -0700, [EMAIL PROTECTED]
<[EMAIL PROTECTED]> wrote:
I have an SP which iterates over a cursor. I need to call another SP for
every loop iteration of the cursor. The pseudo code is as follows..
i would suggest converting your code to pl/pgsql and reposting. that
looks awful
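In that spirit, a minimal pl/pgsql shape for "call a function per cursor row" (all names hypothetical); an explicit cursor is rarely needed, since FOR ... IN ... LOOP opens one implicitly:

```sql
CREATE OR REPLACE FUNCTION process_all() RETURNS void AS $$
DECLARE
    rec RECORD;
BEGIN
    -- Implicit cursor over the driving query:
    FOR rec IN SELECT id FROM work_queue ORDER BY id LOOP
        PERFORM process_one(rec.id);   -- call the other function per row
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```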
Hi, all.
i'm trying to tune an application which makes a lot of queries
with these semantics (find the LONGEST PREFIX MATCH in a string), like:
SELECT cost
FROM tarif
WHERE $1 LIKE prefix
ORDER BY length(prefix) DESC
LIMIT 1
from table like:
CREATE TABLE tarif (
id bigint NOT NULL,
prefix varchar(55)
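One common rewrite for this workload, assuming prefix holds plain digit strings rather than patterns with wildcards: compare with equality against every leading substring of the input (enumerated in the application), which a btree index can serve, instead of putting every row's prefix on the right-hand side of LIKE, which forces a full scan:

```sql
CREATE INDEX tarif_prefix_idx ON tarif (prefix);

-- For an input of '4951234', the application generates its prefixes:
SELECT cost
FROM tarif
WHERE prefix IN ('4', '49', '495', '4951', '49512', '495123', '4951234')
ORDER BY length(prefix) DESC
LIMIT 1;
```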
Hi, Gene,
Gene wrote:
> I'm using PostgreSQL 8.1.4 in a Hibernate Application and I am
> attempting to use partitioning via Inherited tables. At first I was
> going to create a rule per sub-table based on a date range, but found
> out with multiple rules postgres will only return the affected-row
Hi, Craig,
Craig A. James wrote:
> If I insert a bunch of rows, then truncate, then insert a bunch more
> rows, do I need to vacuum? I've been assuming that TRUNCATE TABLE is a
> brute-force technique that more-or-less tosses the old table and starts
> fresh so that no vacuum is necessary.
>
> S
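For what it's worth, that assumption is essentially right: TRUNCATE gives the table fresh storage and the old files are removed outright, so no VACUUM is needed to reclaim the space. An ANALYZE after the reload is still worthwhile (table name hypothetical):

```sql
TRUNCATE TABLE log_staging;   -- old storage is dropped outright, no vacuum needed
-- ... bulk insert the new rows ...
ANALYZE log_staging;          -- refresh planner statistics for the new contents
```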
Hi, Eugeny,
Eugeny N Dzhurinsky wrote:
>> Do you add / remove tables a lot? Could be you've got system catalog
>> bloat.
>
> Yes, almost each table is dropped and re-created in 3-5 days.
If you really recreate the same table, TRUNCATE may be a better
solution than dropping and recreating it.
HT
I'm using PostgreSQL 8.1.4 in a Hibernate Application and I am
attempting to use partitioning via Inherited tables. At first I was
going to create a rule per sub-table based on a date range, but found
out with multiple rules postgres will only return the affected-row
count on the last rule which gi