Cliff Pratt wrote:
What are the main 'knobs' that can be used to tune PostgreSQL?
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
--
Greg Smith 2ndQuadrant US Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com www.2ndQuadrant.us
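The page boils down to a handful of settings. As a rough sketch (illustrative values for a dedicated server with, say, 8 GB of RAM; not recommendations):

# postgresql.conf
shared_buffers = 2GB            # ~25% of RAM is a common starting point
effective_cache_size = 6GB      # estimate of RAM available for caching (OS + PG)
work_mem = 16MB                 # per sort/hash node, per backend, so keep modest
maintenance_work_mem = 256MB    # VACUUM and CREATE INDEX
checkpoint_segments = 32        # pre-9.5 setting; spreads out checkpoint I/O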
Alvaro Herrera wrote:
> Also, keep in mind that autovacuum_max_workers is a new setting in 8.3. Since the
> OP is running 8.2, he can only get one "worker". Presumably he needs to
> disable autovac for those two very active tables and set up a cron job to
> process them on their own schedule.
Err, sorry
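On 8.2 that combination looks roughly like this (a sketch; busy_table and mydb are hypothetical names, and the pg_autovacuum column list should be checked with \d pg_autovacuum on your minor version):

-- disable autovacuum for one table; -1 means "use the global default"
INSERT INTO pg_autovacuum
    (vacrelid, enabled, vac_base_thresh, vac_scale_factor,
     anl_base_thresh, anl_scale_factor, vac_cost_delay, vac_cost_limit)
VALUES ('busy_table'::regclass, false, -1, -1, -1, -1, -1, -1);

plus a cron entry for the postgres user to vacuum it on its own schedule:

*/10 * * * *  vacuumdb --analyze --table=busy_table mydb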
Scott Marlowe wrote:
> On Tue, Mar 23, 2010 at 5:28 PM, Bhella Paramjeet-PFCW67
> wrote:
> > Hi Scott,
> >
> > Thanks for replying.
> > Can you explain what you mean by increasing the number of threads, or how
> > I can increase the number of threads? I just have 2 tables that are very
> > active.
> Thanks,
> Paramjeet Kaur
-Original Message-
From: Scott Marlowe [mailto:scott.marl...@gmail.com]
Sent: Tuesday, March 23, 2010 2:42 PM
To: Bhella Paramjeet-PFCW67
Cc: pgsql-admin@postgresql.org; pgsql-performa...@postgresql.org
Subject: Re: [ADMIN] tuning auto vacuum for highly active tables
On Tue, Mar 23, 2010
We ran into the same problem: we had a big table and played with vacuum cost
and delay, but couldn't shrink it much because of the heavy inserts and
deletes. We ended up using Slony for an upgrade; the initial replication
also gave us a fresh copy of the data, which shrank our large table, with
minimal controlled downtime.
On Tue, Mar 23, 2010 at 2:54 PM, Bhella Paramjeet-PFCW67
wrote:
> Hi All,
>
>
>
> We have a postgres database in which a couple of tables get bloated due to
> heavy inserts and deletes. Auto vacuum is running. My question is how can I
> make auto vacuum more aggressive? I am thinking of enabling
>
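For reference, the global knobs that make autovacuum more aggressive look like this (a sketch with illustrative values, not recommendations; all of these exist in 8.2):

# postgresql.conf
autovacuum = on
autovacuum_naptime = 15                 # seconds between wakeups; lower = checks more often
autovacuum_vacuum_threshold = 500       # minimum dead rows before a vacuum
autovacuum_vacuum_scale_factor = 0.1    # vacuum after ~10% of the table has changed
autovacuum_vacuum_cost_delay = 10       # ms; lower lets vacuum work faster at more I/O cost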
"Allgood, John" wrote:
> What exactly am I looking for
> INFO: free space map contains 2667 pages in 22 relations
> DETAIL: A total of 3008 page slots are in use (including overhead).
> 3008 page slots are required to track all free space.
> Current limits are: 153600 page slots, 1000 relations
jallg...@ohl.com
www.ohl.com
-Original Message-
From: Glyn Astill [mailto:glynast...@yahoo.co.uk]
Sent: Wednesday, June 24, 2009 9:46 AM
To: Barbara Stephenson; Tom Lane; Allgood, John
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] tuning our database by increasing shared buffer
Run a vacuum verbose and look at the output at the end.
Word is that as of 8.4 these parameters will autotune themselves.
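For reference, the parameters in question (pre-8.4 only; the values shown match the limits in the VACUUM VERBOSE output above):

# postgresql.conf
max_fsm_pages = 153600      # must be >= the "page slots required" that VACUUM VERBOSE reports
max_fsm_relations = 1000    # must be >= the number of relations it reports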
--- On Wed, 24/6/09, Allgood, John wrote:
> From: Allgood, John
> Subject: Re: [ADMIN] tuning our database by increasing shared buffer
> To: "Barbara Step
pgsql-admin@postgresql.org
Subject: Re: [ADMIN] tuning our database by increasing shared buffer
Thank you!
Tom Lane wrote:
Barbara Stephenson writes:
We will be consolidating from 4 databases to 2 and want to make sure that
these parameters are the only ones that need changing. Please advise.
Current Future
=
Barbara Stephenson writes:
> We will be consolidating from 4 databases to 2 and want to make sure that
> these parameters are the only ones that need changing. Please advise.
> Current Future
> =
On Sat, Feb 21, 2009 at 12:14 AM, Guillaume Lelarge
wrote:
> Tino Schwarze wrote:
>> [...]
>> I'm going to pg_restore a database dump of about 220 GiB (uncompressed,
>> but most data is BLOBs). The machine has 8 GiB of memory and 8 cores.
> Is there any advice to speed up restoring, postgresql.conf-wise?
Tino Schwarze wrote:
> [...]
> I'm going to pg_restore a database dump of about 220 GiB (uncompressed,
> but most data is BLOBs). The machine has 8 GiB of memory and 8 cores.
> Is there any advice to speed up restoring, postgresql.conf-wise?
>
> I already have a script which does the data loadi
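A common recipe for a one-off restore, as a sketch (revert the settings afterwards; mydb and dump.custom are hypothetical names):

# postgresql.conf, for the duration of the restore only
maintenance_work_mem = 1GB    # faster index builds
checkpoint_segments = 64      # fewer checkpoints during the bulk load
fsync = off                   # unsafe in general; tolerable only because a failed restore can be redone
autovacuum = off

# parallel restore needs pg_restore 8.4+ and a custom-format dump
pg_restore -j 8 -d mydb dump.custom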
On Mon, Jan 19, 2009 at 5:35 AM, BRAHMA PRAKASH TIWARI
wrote:
> Hi Francesco
>
> Most probably this is due to the autovacuum option being on; if autovacuum
> is on, then under a high transaction load it slows down the whole
> database. Set it off in postgresql.conf and vacuum
On Monday, 19 January 2009, BRAHMA PRAKASH TIWARI wrote:
| Most probably this is due to the autovacuum option being on; if autovacuum
| is on, then under a high transaction load it slows down the whole database.
| Set it off in postgresql.conf and vacuum and reindex transactional tables
| manually every two million transactions.
Hi Francesco
Most probably this is due to the autovacuum option being on; if autovacuum
is on, then under a high transaction load it slows down the whole database.
Set it off in postgresql.conf and vacuum and reindex transactional tables
manually every two million transactions.
On Fri, Jan 16, 2009 at 4:37 PM, Oleg Bartunov wrote:
> Just a quick question - why do you use quite old release 8.1 instead of
> 8.3.5 ?
I presume it's because it is the version that comes by default in
Debian Etch. So I'll take this opportunity to comment about
backports.org. If you use that r
Just a quick question - why do you use quite old release 8.1 instead of
8.3.5 ?
Post only your changes to postgresql.conf and the EXPLAIN ANALYZE output of
your slow query.
Oleg
On Fri, 16 Jan 2009, Francesco Andreozzi wrote:
Hi all, I send this message because I just made a migration from my old MySQL
db on
On Fri, Jan 16, 2009 at 04:18:02PM +0100, Francesco Andreozzi wrote:
> Hi all, I send this message because I just made a migration from my old
> MySQL db to a more robust Postgres server.
> I'm using version 8.1 on Linux Debian 4.0.
> I have a dual core XEON 2.33GHz and 2 GB of RAM ... 2 disks, 15k m
On Dienstag, 8. April 2008 paul rivers wrote:
> I don't see a direct way to monitor bloat from pg_stat*. If I'm
> wrong, please set me straight.
>
> For example, monitoring index bloat would involve deciding how many
> pages an index would ideally consume, based on either sampling the
> table you
> Are there scripts to automate this look at stats?
> Or could one draw graphs from those values, to visualize how well the db
> is doing? E.g., if you could calculate a % value, you could make RRD stats
> to see its change over time. Is there any project on this?
I don't know about rrd graphing it,
Michael Monnerie wrote:
On Monday, 7 April 2008, Scott Marlowe wrote:
You can monitor things like index and table bloat with the pg_stat_*
series of views.
Are there scripts to automate this look at stats?
Or could one draw graphs from those values, to visualize how well the db
is doing? E.g
On Monday, 7 April 2008, Scott Marlowe wrote:
> You can monitor things like index and table bloat with the pg_stat_*
> series of views.
Are there scripts to automate this look at stats?
Or could one draw graphs from those values, to visualize how well the db
is doing? E.g., if you could calculate a % va
On Mon, Apr 07, 2008 at 12:45:38PM -0400, Chris Browne wrote:
> versions of PostgreSQL. e.g. - with v7.2, there were patterns of
> updates that would leave portions of indexes not usable, but the issue
> was rectified in ~7.4, and people have not been observing problems
> relating to this former s
[EMAIL PROTECTED] (Carol Walter) writes:
> I have some questions about tuning. The PostgreSQL documentation
> says that you don't need to worry about index maintenance and tuning
> with PostgreSQL. I'm used to systems that work a lot better if they
> are periodically re-orged or re-indexed. Is i
On Mon, Apr 7, 2008 at 9:33 AM, Carol Walter <[EMAIL PROTECTED]> wrote:
> I have some questions about tuning. The PostgreSQL documentation says that
> you don't need to worry about index maintenance and tuning with PostgreSQL.
> I'm used to systems that work a lot better if they are periodically r
On Mon, Mar 24, 2008 at 6:56 PM, Jack <[EMAIL PROTECTED]> wrote:
> Is there a Postgres equivalent of tkprofs for tuning?
No. But what you could do is combine several Postgres things to try
and get the same information.
IIRC, you could use:
- log_parser_stats
- log_planner_stats
- log_executor_stats
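For reference, those go in postgresql.conf and dump per-stage timing statistics into the server log:

log_parser_stats = on
log_planner_stats = on
log_executor_stats = on
# or the combined figure; this one cannot be enabled together with the
# per-stage settings above:
# log_statement_stats = on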
Peter Kovacs wrote:
[...]
BTW, I had a bookmark for a good tutorial on explain analyze, but the
page is no longer available:
http://www.pervasive-postgres.com/instantkb13/article.aspx?id=10120 .
Can you suggest a good tutorial?
I'm not sure what the tutorial at the above URL was, but this is what
tkprof generates: performance characteristics for an arbitrary number
of SQL statements. The output includes measurements for the individual
SQL statements as well as cumulative values for the sequence of SQL
statements being measured. Data is presented in a two-dimensional
format with the first dimension
On Mon, Mar 24, 2008 at 4:56 PM, Jack <[EMAIL PROTECTED]> wrote:
> Is there a Postgres equivalent of tkprofs for tuning?
you might want to provide an explanation of what tkprofs is and which
parts you're interested in emulating in pgsql.
Have you read up on explain, explain analyze, and t
At 03:47 PM 8/25/2003, Jonathan Ellis wrote:
>(Posted to General a while back but perhaps this is a more appropriate
>list since I got no responses. :)
>
>How can I tell if writing to the WAL is a bottleneck? In other words, how
>can I tell if upgrading the disk the WAL is on will improve perform
On Tue, 29 Jan 2002 17:45:34 + (UTC), [EMAIL PROTECTED] ("Zhang,
Anna") wrote:
>Thanks Peter Darley, Ross J. Reedstrom and Tom Lane!!
>How silly of me! Your messages reminded me. Actually I want to insert rows of
>the contact_discard table which do not exist in the contact table into the
>contact table (so
Just to complete the thread.
Ross
- Forwarded message from "Zhang, Anna" <[EMAIL PROTECTED]> -
From: "Zhang, Anna" <[EMAIL PROTECTED]>
To: "'Ross J. Reedstrom'" <[EMAIL PROTECTED]>
Subject: RE: [ADMIN] tuning SQL
Date: Tue,
On Tue, Jan 29, 2002 at 12:23:17PM -0500, Zhang, Anna wrote:
> Thanks Peter Darley, Ross J. Reedstrom and Tom Lane!!
> How silly of me! Your messages reminded me. Actually I want to insert rows of
> the contact_discard table which do not exist in the contact table into the
> contact table (some duplicates in tw
takes only a few
minutes.
Thanks!
Anna Zhang
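The query the thread converges on, as a sketch (it assumes the two tables have compatible column lists, with contacthandle as the key):

INSERT INTO contact
SELECT d.*
FROM contact_discard d
WHERE NOT EXISTS (SELECT 1
                  FROM contact c
                  WHERE c.contacthandle = d.contacthandle);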
-Original Message-
From: Ross J. Reedstrom [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 29, 2002 11:39 AM
To: Zhang, Anna
Cc: [EMAIL PROTECTED]
Subject: Re: [ADMIN] tuning SQL
On Tue, Jan 29, 2002 at 10:57:01AM -0500, Zhang, Anna wrote
"Zhang, Anna" <[EMAIL PROTECTED]> writes:
> select count(*) from contact a, contact_discard b where a.contacthandle <>
> b.contacthandle;
Did you really mean "<>" here? If so, the plan you showed us is not for
this query. The above query is going to take approximately forever to
execute :-(
I
On Tue, Jan 29, 2002 at 10:57:01AM -0500, Zhang, Anna wrote:
> Hi,
> I am running a query on postgres 7.1.3 at Red Hat 7.2 (2 CPUs, 1.5G RAM, 2
> drive disk array).
> select count(*) from contact a, contact_discard b where a.contacthandle <>
> b.contacthandle;
What are you trying to do with th
Anna,
I'm not sure that this query is doing what you think it's doing.
Since your tables aren't joined on anything, you'll end up with
count(contact) * count(contact_discard) rows to evaluate, or
9,000,000 * 259,000 = 2,331,000,000,000 (2.3 trillion) rows.
Assuming that you want to find the
This is an interesting effect, one that I have run into more than once, and
I have a question concerning it:
We see that the cost for this query went from roughly 12,000,000 to about
12,000. Of course, we cannot assume that the time of execution will be
directly proportional to this, and also, the weight fa
Thanks, that sped things up a bit, from 7.6 sec. to about 5.5 sec. However
the plan still includes a sequential scan on ssa_candidate:
Aggregate (cost=12161.11..12161.11 rows=1 width=35)
-> Merge Join (cost=11611.57..12111.12 rows=19996 width=35)
-> Sort (cost=11488.27..11488.27 r
Not sure about tuning, but it seems to me that this query would be much more
effective if it's rewritten like this (especially if style_id columns on both
tables are indexed):
SELECT count(DISTINCT song_id) AS X
FROM ssa_candidate SC JOIN station_subgenre SS ON SC.style_id = SS.style_id
WHERE SS.
"Michael T. Halligan" <[EMAIL PROTECTED]> writes:
> The query sorts through about 80k rows.. here's the query
> --
> SELECT count(*) FROM (
> SELECT DISTINCT song_id FROM ssa_candidate WHERE
> style_id IN (
>
> 2) Increase the default linux shared memory
> (this may be necesary for the previous
> item to work)
> (for example, in the postgresql.init script
> add the line:
> echo "1" > /proc/sys/kernel/shmmax
> )
Does anyone know how this would be changed in the kernel config?
I can't find it i
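Note that the value written to /proc/sys/kernel/shmmax is in bytes, so echoing "1" would actually set a one-byte limit. A sketch for a 128 MB limit:

echo 134217728 > /proc/sys/kernel/shmmax
# or persistently, in /etc/sysctl.conf:
#   kernel.shmmax = 134217728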
-- Brian Baquiran <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I'm using Postgres 6.5.2 in a high-volume
> application that has on the order of a
> hundred inserts and selects (but mostly inserts) a
> minute. My load average hovers
> around 1.0, and can go much, much higher (50-80)
> during vacuum and que
I have an application that did many inserts also, only I was running
on a much lower-powered machine and could get nowhere near your 100
per minute rate. What I did was re-write the application to batch the
inserts and then use COPY. COPY, it turns out, is _much_ faster than
INSERT, maybe by an order of magnitude.
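A sketch of the batch-and-COPY approach (mytable and the file path are hypothetical; a server-side COPY reads a file on the database server and needs superuser rights):

-- one COPY replaces a whole batch of INSERTs;
-- the input file is tab-delimited by default
COPY mytable FROM '/tmp/batch.dat';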
Hi!
On Mon, 13 Mar 2000 17:30:10 +0800
Brian Baquiran <[EMAIL PROTECTED]> wrote:
[snipped]
> I'm running postmaster with the options -B600 -N300. I'm using the RedHat
> RPMs from a pretty-much-stock RH6.1 installation. Can I tune postgres to
> load more data into memory?
You should increase
On Wed, 15 Mar 2000, Brian Baquiran wrote:
> > 3) Turn off flush to disk
> > (option -F )
> > (very little risk)
If this parameter is set, it greatly increases the speed of frequent periodic
inserts and updates, because if postgres runs without -F, every single
statement generates a sync() syscall.
> I'd rather
In case anyone is still interested, I was able to improve the load considerably
by wrapping a group of 10-15 INSERTs into a single transaction. I didn't know
that each insert/query not explicitly wrapped in a transaction got its own
transaction (and the associated overhead) by default.
Load is n
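A sketch of the same idea (samples is a hypothetical table):

-- one explicit transaction around the batch replaces 10-15 implicit
-- single-statement transactions and their per-commit overhead
BEGIN;
INSERT INTO samples (value) VALUES (1);
INSERT INTO samples (value) VALUES (2);
-- ... the rest of the batch ...
COMMIT;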
"Hernan J. Gonzalez" wrote:
>
> -- Brian Baquiran <[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> > I'm using Postgres 6.5.2 in a high-volume
> > application that has on the order of a
> > hundred inserts and selects (but mostly inserts) a
> > minute.
> Some suggestions:
>
> 1) Increase the -B and -S