Re: [GENERAL] Config for fast huge cascaded updates

2017-07-02 Thread Craig de Stigter
Thanks everyone. Sorry for the late reply.


Do you have indexes on all the referencing columns?


I had thought so, but it turns out no, and this appears to be the main
cause of the slowness. After adding a couple of extra indexes in the bigger
tables, things are going much more smoothly.
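
For anyone who finds this thread later: the fix was nothing more exotic than a
plain btree index on each referencing (foreign key) column that lacked one,
along these lines (table and column names invented for the example):

-- one per un-indexed foreign key column pointing at the rewritten table
CREATE INDEX orders_user_id_idx ON orders (user_id);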


write the whole thing into a new SQL schema


This is a really interesting approach I hadn't thought of! We can currently
afford a little bit of downtime, but it's helpful to keep this in mind if
we ever do this kind of thing again in future.

The two changes we've made are:

   - Add a few indexes so that the cascades operate more efficiently
   - Move some of the tables (whose ID values don't matter so much to our
   app) into a separate migration, which can be run before we take down the
   site. Then only the tables whose IDs matter to the app/user are done while
   the site is down.

With those changes it looks like we can fit the downtime into the window we
have. Thanks for all the advice, much appreciated!


On 28 June 2017 at 01:28, Andrew Sullivan  wrote:

> On Mon, Jun 26, 2017 at 07:26:08PM -0700, Joshua D. Drake wrote:
>
> > Alternatively, and ONLY do this if you take a backup right beforehand,
> > you can set the table unlogged, make the changes and, assuming success,
> > make the table logged again. That will greatly increase the write speed
> > and reduce WAL segment churn.
>
> Note that this is not for just that table, but for all of the
> implicated ones because of the CASCADE statements.  It sounds like the
> OP is basically rewriting a significant chunk of the entire database,
> so nothing is going to be super fast: all those CASCADEs have to fire
> and all those other tables need to be updated too.
>
> > However, if that fails, the table is dead. You will have to reload it
> > from backup.
>
> Right, and that goes for all the affected tables.
>
> Best regards,
>
> A
>
> --
> Andrew Sullivan
> a...@crankycanuck.ca
>
>



-- 
Regards,
Craig

Developer
Koordinates

+64 21 256 9488 / koordinates.com / @koordinates



Re: [GENERAL] Config for fast huge cascaded updates

2017-06-27 Thread Andrew Sullivan
On Mon, Jun 26, 2017 at 07:26:08PM -0700, Joshua D. Drake wrote:

> Alternatively, and ONLY do this if you take a backup right beforehand, you
> can set the table unlogged, make the changes and, assuming success, make the
> table logged again. That will greatly increase the write speed and reduce
> WAL segment churn.

Note that this is not for just that table, but for all of the
implicated ones because of the CASCADE statements.  It sounds like the
OP is basically rewriting a significant chunk of the entire database,
so nothing is going to be super fast: all those CASCADEs have to fire
and all those other tables need to be updated too.
 
> However, if that fails, the table is dead. You will have to reload it from
> backup.

Right, and that goes for all the affected tables.

Best regards,

A

-- 
Andrew Sullivan
a...@crankycanuck.ca




Re: [GENERAL] Config for fast huge cascaded updates

2017-06-26 Thread Joshua D. Drake

On 06/26/2017 06:29 PM, Andrew Sullivan wrote:

On Tue, Jun 27, 2017 at 10:17:49AM +1200, Craig de Stigter wrote:

We're doing a large migration on our site which involves changing most of
the primary key values. We've noticed this is a *very* slow process.




You can make it faster through a number of simple changes:

1. make sure synchronous_commit is off
2. make sure you have lots of checkpoint_segments (or a very large 
max_wal_size)
3. make sure your checkpoint_timeout is some ridiculously high value (2
hours)
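
For reference, a rough translation of those three knobs into commands, as a
sketch only: it assumes a server new enough to have ALTER SYSTEM and
max_wal_size (9.5+, and a checkpoint_timeout above one hour needs 9.6+); on
older releases you would edit postgresql.conf and reload instead, and the
values here are purely illustrative:

ALTER SYSTEM SET synchronous_commit = off;
ALTER SYSTEM SET max_wal_size = '16GB';        -- illustrative value
ALTER SYSTEM SET checkpoint_timeout = '2h';    -- illustrative value
SELECT pg_reload_conf();                       -- none of these need a restart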


Alternatively, and ONLY do this if you take a backup right beforehand,
you can set the table unlogged, make the changes and, assuming success,
make the table logged again. That will greatly increase the write speed
and reduce WAL segment churn.


However, if that fails, the table is dead. You will have to reload it 
from backup.
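
A minimal sketch of that sequence, assuming a 9.5-or-newer server where
ALTER TABLE ... SET UNLOGGED / SET LOGGED exist; the table name is made up,
and (as noted elsewhere in the thread) every table the cascades touch is
implicated, not just this one:

-- ONLY with a fresh backup in hand; an unlogged table's contents are
-- not crash-safe.
ALTER TABLE users SET UNLOGGED;
-- ... run the big cascading UPDATE here ...
ALTER TABLE users SET LOGGED;   -- rewrites the table and WAL-logs it again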


JD


--
Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc

PostgreSQL Centered full stack support, consulting and development.
Advocate: @amplifypostgres || Learn: https://pgconf.us
* Unless otherwise stated, opinions are my own.   *




Re: [GENERAL] Config for fast huge cascaded updates

2017-06-26 Thread Andrew Sullivan
On Tue, Jun 27, 2017 at 10:17:49AM +1200, Craig de Stigter wrote:
> We're doing a large migration on our site which involves changing most of
> the primary key values. We've noticed this is a *very* slow process.

Indeed.

Does the database need to be online when this is happening?

If it were me, I'd try to find a way to dump it, modify the data in a
dump file, and then reload it.  I think that'd be faster.

Another way you might try, if you need to be online while doing this,
is to write the whole thing into a new SQL schema.  Make the mods you
need.  When you think you're close to done, put a trigger in the "old
schema" to update data in the new schema, then do a last pass to catch
anything you missed in the interim, then cut your application over to
the new schema (update the search_path, force everything to disconnect
and reconnect, and when they reconnect they have the new data in
place).  A variation on this technique is also useful for gradual
roll-out of new features -- you don't have to upgrade everything at
once and you have a natural rollback strategy (but you need a more
complicated set of triggers that keeps the two schemas in sync during the
cutover period).

This second approach isn't faster, and it's hard on I/O and disk space,
but it keeps you up and you can make the changes at a leisurely pace.
Just make sure you have the I/O and space before you do it :)
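
A bare-bones outline of that second approach, with every object name invented
for the sake of the example:

CREATE SCHEMA staging;
CREATE TABLE staging.users (LIKE public.users INCLUDING ALL);
INSERT INTO staging.users SELECT * FROM public.users;  -- apply the key rewrites here

-- ... triggers on public.users keep staging.users in sync in the interim ...

-- cutover: new sessions resolve unqualified table names in the new schema first
ALTER DATABASE appdb SET search_path = staging, public;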

Hope that helps,

A

-- 
Andrew Sullivan
a...@crankycanuck.ca




Re: [GENERAL] Config for fast huge cascaded updates

2017-06-26 Thread Tom Lane
Craig de Stigter  writes:
> We're doing a large migration on our site which involves changing most of
> the primary key values. We've noticed this is a *very* slow process.

> Firstly we've set up all the foreign keys to use `on update cascade`. Then
> we essentially do this on every table:

> UPDATE users SET id = id + 100;

> Since this cascades via about 40 foreign keys to most of the other tables
> in the database, this update on our fairly small table takes about five
> hours.

Do you have indexes on all the referencing columns?

The core problem here is that the updates will be cascaded one row at a
time.  As long as the referencing rows can be found by an indexscan,
that might be tolerable, but it's certainly not as fast as a bulk
update.

If you can guarantee no other updates while you're doing the migration,
it might be practical to drop the foreign key constraints, run all the
bulk updates by hand (on referencing tables too!), and then re-establish
the constraints.  Of course there's a lot of potential for errors of
omission here, but if you can script it and test the script in advance,
it's worth considering.
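
For the archives, a sketch of that sequence with made-up table and constraint
names; note the re-added constraint is validated (fully scanned) when it is
created:

BEGIN;
ALTER TABLE orders DROP CONSTRAINT orders_user_id_fkey;

UPDATE users  SET id      = id + 100;        -- the referenced table
UPDATE orders SET user_id = user_id + 100;   -- each referencing table, by hand

ALTER TABLE orders
  ADD CONSTRAINT orders_user_id_fkey
  FOREIGN KEY (user_id) REFERENCES users (id) ON UPDATE CASCADE;
COMMIT;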

regards, tom lane




[GENERAL] Config for fast huge cascaded updates

2017-06-26 Thread Craig de Stigter
Hi folks

We're doing a large migration on our site which involves changing most of
the primary key values. We've noticed this is a *very* slow process.

Firstly we've set up all the foreign keys to use `on update cascade`. Then
we essentially do this on every table:

UPDATE users SET id = id + 100;


Since this cascades via about 40 foreign keys to most of the other tables
in the database, this update on our fairly small table takes about five
hours.

This is understandable (it's rewriting most of the database) but what
settings can we tweak to make this process faster?

So far we have experimented with the following:

   - checkpoint_timeout : 3600
   - autovacuum: 0
   - max_wal_size: 128 (2GB)
   - synchronous_commit: off

What other things would you recommend to improve performance of this sort
of thing?


-- 
Regards,
Craig

Developer
Koordinates

+64 21 256 9488 / koordinates.com / @koordinates



Re: [GENERAL] config file question between versions 7.4 - 9.1

2012-05-10 Thread Albe Laurenz
Randy Johnson wrote:
 in the config file for 7.4 we have an entry:
 
 shared_buffers = 1000 # min 16, at least max_connections*2, 8KB each
 
 in 9.1 the default is:
 
 shared_buffers = 32MB
 
 
 max connections is the default 100
 
 Do I need to make any adjustments or can I leave it at the default?
 
 The machine is dedicated to Postgres and has 8GB of memory; the default
 installs of 9.1 and 7.4 don't appear to have any custom configuration.

Read the documentation at
http://www.postgresql.org/docs/current/static/runtime-config-resource.html#GUC-SHARED-BUFFERS

I'd set it to something between 500MB and 2GB.
You can use the pg_buffercache contrib module to check how
the buffer cache is used and adjust accordingly.
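
For example, a query along these lines (a variant of the usual pg_buffercache
example, so treat it as a sketch; it assumes the extension is installed in the
database you are inspecting) shows which relations are occupying the cache:

CREATE EXTENSION pg_buffercache;   -- once per database, as a superuser

SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE b.reldatabase = (SELECT oid FROM pg_database
                       WHERE datname = current_database())
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;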

Yours,
Laurenz Albe



[GENERAL] config file question between versions 7.4 - 9.1

2012-05-09 Thread Randy Johnson
Hello,

in the config file for 7.4 we have an entry:

shared_buffers = 1000 # min 16, at least max_connections*2, 8KB each

in 9.1 the default is:

shared_buffers = 32MB


max connections is the default 100

Do I need to make any adjustments or can I leave it at the default?

The machine is dedicated to Postgres and has 8GB of memory; the default
installs of 9.1 and 7.4 don't appear to have any custom configuration.

Thanks!

Randy


[GENERAL] Config Changes Broke Postgres Service (Windows)

2010-05-28 Thread Tom Wilcox
Hi,

I am fighting with Postgres on a 64-bit Windows (Server 2008) machine with
96GB of RAM, trying to get it to use as much memory as possible (I am the only
user and I am running complex queries on large tables). [See my previous
thread, "Out of Memory and Configuration Problems (Big Computer)", for
details.]

I have changed my postgresql.conf file, setting shared_buffers to 1024,
work_mem to 1024 and effective_cache_size to 2703601.

Now my postgres service won't start from the Windows Services dialog; however,
using pgAdmin I can start the service and connect to my database, run
queries, etc.

So my problem is that Windows no longer seems to think my postgres service
is running, even though it appears to be working fine through a client.

Can anyone tell me what might be going on and how I can fix it so that
postgres uses as much memory and processing power as poss... in a stable
manner?

Cheers,
Tom

P.S. Sorry if this is a repeat, but I have been searching for an answer to
this for a LONG time (3 months).


Re: [GENERAL] Config Changes Broke Postgres Service (Windows)

2010-05-28 Thread Stephen Frost
* Tom Wilcox (hungry...@googlemail.com) wrote:
 Can anyone tell me what might be going on and how I can fix it so that
 postgres uses as much memory and processing power as poss... in a stable
 manner?

I realize this probably isn't the answer you're looking for, and
hopefully someone can come up with a better answer for you, but my first
reaction to this would be to move to a non-Windows platform, e.g. Linux,
BSD, Solaris, etc.

Thanks,

Stephen




Re: [GENERAL] Config Changes Broke Postgres Service (Windows)

2010-05-28 Thread Tom Wilcox

Hi Stephen,

Thanks for the response. Unfortunately, we are somewhat tied to a
Windows platform, and I would expect us to switch to SQL Server sooner
than move to Linux/Unix/BSD. Although (in complete contrast to what I
just said), I am toying with the idea of dual-boot or virtualisation
options.


Cheers,
Tom

On 28/05/2010 15:44, Stephen Frost wrote:

* Tom Wilcox (hungry...@googlemail.com) wrote:
   

Can anyone tell me what might be going on and how I can fix it so that
postgres uses as much memory and processing power as poss... in a stable
manner?
 

I realize this probably isn't the answer you're looking for, and
hopefully someone can come up with a better answer for you, but my first
reaction to this would be move to a non-Windows platform, eg: Linux,
BSD, Solaris, etc..

Thanks,

Stephen
   





Re: [GENERAL] Config Changes Broke Postgres Service (Windows)

2010-05-28 Thread Alban Hertroys
On 28 May 2010, at 15:40, Tom Wilcox wrote:

 Hi,
 
 I am fighting with Postgres on a 64-bit Windows (Server 2008) machine with 
 96GB trying to get it to use as much memory as possible (I am the only user 
 and I am running complex queries on large tables). [See my previous thread 
 for details Out of Memory and Configuration Problems (Big Computer)].

There is no 64-bit Windows version of the postgresql binaries, so you probably 
won't be able to use more than 2 or 4 GB of all that memory...

There will be one once Postgres 9 is released: 
http://wiki.postgresql.org/wiki/64bit_Windows_port

 I have changed my postgres.conf file by changing the values of shared_buffers 
 to 1024, work_mem = 1024 and effective_cache_size=2703601.

I'm pretty sure those values are in bytes, so these are awfully small values 
for a database server.

 Now my postgres service wont start from the windows Services dialog, however 
 using pgAdmin I can start the service and connect to my database, run queries 
 etc.

Sorry, can't help you there - I rarely use Windows.

Alban Hertroys

--
Screwing up is an excellent way to attach something to the ceiling.







Re: [GENERAL] Config help

2009-11-16 Thread Lew

BuyAndRead Test wrote:

This is a virtual server, so I could give it as much as 8 GB of memory if
this will give much higher performance. What should shared_buffers be set to
if I use 8 GB, as much as 4 GB?


John R Pierce wrote:
I'd keep it around 1-2GB shared_buffers, and let the rest of the memory 
be used as file system cache.  postgres works quite happily that way.


From what I understand, database tuning is one of the Dark Arts.  PG is
unique in that it's enterprise-grade but its standard settings work well
across a wide range of usage scenarios.  If you are dealing with unusually
large numbers of connections and/or unusually large working sets (I'm guessing
as you approach terabyte-scale DBs and up), it pays to go to even larger
shared_buffers and work_mem and do other arcane tuning magic.


--
Lew



[GENERAL] Config help

2009-11-15 Thread BuyAndRead Test
Hi

I need some help with our postgresql.conf file. I would appreciate it if
someone could look at the values and tell me if they look alright or if I
need to change anything.

The db server has 4 GB of memory and one quad-core CPU (2.53 GHz).
The hard drives are on an iSCSI array, configured as follows:
DB data: 4 x SAS (10.000 rpm) disks in RAID 10
DB WAL: 2 x SAS (10.000 rpm) disks in RAID 1
OS: Linux (Debian Lenny)
DB: PostgreSQL 8.4

The DB is used by a website. It has 75 tables and about 10 million rows in
total. The total size of the DB data (data+indexes?) is reported to be about
4 GB when I use the \l+ command in version 8.4.

I have used the following link as a guide:
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server 
This is the changes I have done to the default postgresql.conf file:
shared_buffers = 2048MB
work_mem = 6MB
wal_buffers = 256kB
checkpoint_segments = 20
random_page_cost = 3.0
default_statistics_target = 50

Should I change the default value of temp_buffers or maintenance_work_mem as
well, and what value should I choose? Are there any other values that should
be changed from the default?

And another question: Is there a way to find out the maximum number of
simultaneous connections that have been used? I think that I could reduce the
max number of connections to save some memory.

Regards

Bjørn Håkon






Re: [GENERAL] Config help

2009-11-15 Thread Scott Marlowe
On Sun, Nov 15, 2009 at 2:43 PM, BuyAndRead Test t...@buyandread.com wrote:
 Hi

 I need some help with our postgresql.conf file. I would appreciate if
 someone could look at the values and tell me if it looks alright or if I
 need to change anything.

 The db server has 4 GB of memory and one quad core CPU (2,53 GHz).
 The hard drives is on a iSCSI array and is configured as follows:
 DB data: 4 x SAS (10.000 rpm) disks in RAID 10
 DB WAL: 2 x SAS (10.000 rpm) disks in RAID 1

Is there a battery backed cache in there somewhere?  That would help
on handling high write loads.

 OS: Linux (Debian Lenny)
 DB: PostgreSQL 8.4

 The DB is used by a website. It has 75 tables and about a total of 10 mill
 rows. The total size of the DB data (data+indexes?) is reported to be about
 4 GB when I use the \l+ command in version 8.4.

The cheapest performance boost would be more memory.  Going to 8Gigs
would let the whole db get cached and leave enough memory over for
sorts and OS etc.

 I have used the following link as a guide:
 http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
 This is the changes I have done to the default postgresql.conf file:
 shared_buffers = 2048MB

A little high for a machine with only 4G of RAM.  With 8G, if you
allocate 4G for shared_buffers you'd leave 4G for OS and pg.  Here
you're only leaving 2G.

 work_mem = 6MB

Depending on your workload it might be better to raise this and lower
shared_buffers.

 wal_buffers = 256kB
 checkpoint_segments = 20
 random_page_cost = 3.0
 default_statistics_target = 50

The new default is 100, I'd tend to stick with that unless you have
very uniform data.

 Should I change the default value of temp_buffers or maintenance_work_mem as
 well, and what value should I choose? Is there any other values that should
 be changed from the default?

Always consider cranking up maint work mem because not many things use
it and the things that do can really use it.
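
(If it helps: maintenance_work_mem can also be raised just for the session
that is doing the heavy lifting; the value below is only an example.)

SET maintenance_work_mem = '512MB';   -- session-local, illustrative value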

 And another question: Is there a way to find out the maximum simultaneous
 connections that has been used? I think that I could reduce the max number
 of connection to save some memory.

You'd really need to track that yourself with some kind of simple
script.   (bash)

while true; do psql mydb -c "select count(*) from pg_stat_activity"; sleep 60; done | tee myconn.log

or something like that.



Re: [GENERAL] Config help

2009-11-15 Thread BuyAndRead Test
Thanks for the quick and helpful reply.

Yes, the storage array has a battery-backed cache; it's a Dell PowerVault
MD3000i with dual controllers.

This is a virtual server, so I could give it as much as 8 GB of memory if
this will give much higher performance. What should shared_buffers be set to
if I use 8 GB, as much as 4 GB?

-Bjørn


 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of Scott Marlowe
 Sent: 15 November 2009 23:21
 To: BuyAndRead Test
 Cc: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Config help
 
 On Sun, Nov 15, 2009 at 2:43 PM, BuyAndRead Test t...@buyandread.com
 wrote:
  Hi
 
  I need some help with our postgresql.conf file. I would appreciate if
  someone could look at the values and tell me if it looks alright or
 if I
  need to change anything.
 
  The db server has 4 GB of memory and one quad core CPU (2,53 GHz).
  The hard drives is on a iSCSI array and is configured as follows:
  DB data: 4 x SAS (10.000 rpm) disks in RAID 10
  DB WAL: 2 x SAS (10.000 rpm) disks in RAID 1
 
 Is there a battery backed cache in there somewhere?  That would help
 on handling high write loads.
 
  OS: Linux (Debian Lenny)
  DB: PostgreSQL 8.4
 
  The DB is used by a website. It has 75 tables and about a total of 10
 mill
  rows. The total size of the DB data (data+indexes?) is reported to be
 about
  4 GB when I use the \l+ command in version 8.4.
 
 The cheapest performance boost would be more memory.  Going to 8Gigs
 would let the whole db get cached and leave enough memory over for
 sorts and OS etc.
 
  I have used the following link as a guide:
  http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
  This is the changes I have done to the default postgresql.conf file:
  shared_buffers = 2048MB
 
 A little high for a machine with only 4G ram.  With 8 G if you
 allocate 4G for share_buffers you'd leave 4G for OS and pg.  Here
 you're only leaving 2G.
 
  work_mem = 6MB
 
 Depending on your workload it might be better to raise this and lower
 shared_buffers.
 
  wal_buffers = 256kB
  checkpoint_segments = 20
  random_page_cost = 3.0
  default_statistics_target = 50
 
 The new default is 100, I'd tend to stick with that unless you have
 very uniform data.
 
  Should I change the default value of temp_buffers or
 maintenance_work_mem as
  well, and what value should I choose? Is there any other values that
 should
  be changed from the default?
 
 Always consider cranking up maint work mem because not many things use
 it and the things that do can really use it.
 
  And another question: Is there a way to find out the maximum
 simultaneous
  connections that has been used? I think that I could reduce the max
 number
  of connection to save some memory.
 
 You'd really need to track that yourself with some kind of simple
 script.   (bash)
 
 while true; do psql mydb -c "select count(*) from pg_stat_activity"; sleep 60; done | tee myconn.log
 
 or something like that.
 





Re: [GENERAL] Config help

2009-11-15 Thread John R Pierce

BuyAndRead Test wrote:

This is a virtual server, so I could give it as much as 8 GB of memory if
this will give much higher performance. What should shared_buffers be set to
if I use 8 GB, as much as 4 GB?
  



I'd keep it around 1-2GB shared_buffers, and let the rest of the memory 
be used as file system cache.  postgres works quite happily that way.






Re: [GENERAL] Config settings for large restore

2007-11-28 Thread Tomasz Ostrowski
On Tue, 27 Nov 2007, Erik Jones wrote:

 I'm just wondering what is considered the general wisdom on config setting 
 for large pg_restore runs.

I think the first thing you can do is to set fsync=off temporarily. But
do remember to turn this back on when you're done restoring.
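
At the time of this thread that means editing postgresql.conf and reloading;
on a server new enough to have ALTER SYSTEM (9.4+), the same temporary toggle
looks roughly like this:

ALTER SYSTEM SET fsync = off;
SELECT pg_reload_conf();
-- ... run the pg_restore ...
ALTER SYSTEM RESET fsync;
SELECT pg_reload_conf();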

Regards
Tometzky
-- 
...although Eating Honey was a very good thing to do, there was a
moment just before you began to eat it which was better than when you
were...
  Winnie the Pooh



Re: [GENERAL] Config settings for large restore

2007-11-28 Thread Vivek Khera


On Nov 27, 2007, at 3:30 PM, Erik Jones wrote:

I'm just wondering what is considered the general wisdom on config  
setting for large pg_restore runs.  I know to increase  
maintenance_work_mem and turn off autovacuum and stats collection.   
Should checkpoint_segments and checkpoint_timeout be
increased?  Would twiddling shared_buffers help?  What about


At least with 8.0 testing I did a while back, I found that bumping  
checkpoint segments was the biggest benefit.  I use 256 segments as a  
matter of course now, even for normal operations.




=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.MailerMailer, LLC Rockville, MD
http://www.MailerMailer.com/ +1-301-869-4449 x806





Re: [GENERAL] Config settings for large restore

2007-11-28 Thread Erik Jones

Thanks, we're at 128 now but I'll see how bumping that up goes.

On Nov 28, 2007, at 9:46 AM, Vivek Khera wrote:



On Nov 27, 2007, at 3:30 PM, Erik Jones wrote:

I'm just wondering what is considered the general wisdom on config  
setting for large pg_restore runs.  I know to increase  
maintenance_work_mem and turn off autovacuum and stats  
collection.  Should checkpoint_segments and
checkpoint_timeout be increased?  Would twiddling shared_buffers  
help?  What about


At least with 8.0 testing I did a while back, I found that bumping  
checkpoint segments was the biggest benefit.  I use 256 segments as  
a matter of course now, even for normal operations.




=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.MailerMailer, LLC Rockville, MD
http://www.MailerMailer.com/ +1-301-869-4449 x806




Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com





[GENERAL] Config settings for large restore

2007-11-27 Thread Erik Jones

Hi,

I'm just wondering what is considered the general wisdom on config  
setting for large pg_restore runs.  I know to increase  
maintenance_work_mem and turn off autovacuum and stats collection.   
Should checkpoint_segments and checkpoint_timeout be
increased?  Would twiddling shared_buffers help?  What about the  
bgwriter (I'm thinking this should just follow the checkpoint and  
shared_buffers settings)?  I realize that the actual setting will be  
hardware dependent, I'm just looking for general rules of thumb for  
what to tweak wrt increased restore speed/performance.



Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com





Re: [GENERAL] config

2000-10-13 Thread Tom Lane

Michael Engelhart [EMAIL PROTECTED] writes:
 Thanks Adam.  Yeah, I know that it uses a mach kernel and variant of
 freebsd runs atop the kernel.  I would attempt the FreeBSD template
 but the other snag is that it has to compile on PowerPC.

FreeBSD template seems like it'd be a good starting point.  7.0.*
will probably not work on PPC unless you compile with optimization
level -O0, so tweak CFLAGS in the template that way before running
configure.  Otherwise it seems like it'd more or less work --- give
it a shot and let us know how it goes.

BTW, I think the PPC optimization issues are solved in current sources.
From a development perspective it'd be more interesting to hear your
report on what happens with a current nightly snapshot instead of 7.0.2.

regards, tom "didn't order my OS X yet :-(" lane



Re: [GENERAL] config

2000-10-13 Thread Michael Engelhart

Thanks to everyone for giving me a starting point. 

here's what I tried so far:
changed the CFLAGS in the src/template/freebsd file to:
CFLAGS='-O0 -pipe'

did
./configure --with-template=freebsd

configure succeeded.

Did a make and started to build.  During the build, there were a ton of messages of 
this type:
../../../../src/include/catalog/pg_type.h:414: stray '\' in program

make exited with this:

cc -c  -I../../../src/include  -O0 -pipe -Wall -Wmissing-prototypes 
-Wmissing-declarations -o dynloader.o dynloader.c
dynloader.c:42: header file 'link.h' not found
dynloader.c:43: header file 'dlfcn.h' not found
../../../src/include/dynloader.h:19: header file 'link.h' not found
make[3]: *** [dynloader.o] Error 1
make[2]: *** [port-recursive] Error 2
make[1]: *** [all] Error 2
make: *** [all] Error 2


Not sure if configure didn't move the correct files or if there was something else 
wrong.  I was using the latest developer snapshot for testing.  I also tried it on a
stock 7.0.2 source but got the same error.

Any ideas?  I'm willing to try and get this working but unfortunately my grasp of the  
process is pretty weak.  

Mike


On Friday, October 13, 2000, at 11:42 PM, Tom Lane wrote:

Michael Engelhart [EMAIL PROTECTED] writes:
 Thanks Adam.  Yeah, I know that it uses a mach kernel and variant of
 freebsd runs atop the kernel.  I would attempt the FreeBSD template
 but the other snag is that it has to compile on PowerPC.

FreeBSD template seems like it'd be a good starting point.  7.0.*
will probably not work on PPC unless you compile with optimization
level -O0, so tweak CFLAGS in the template that way before running
configure.  Otherwise it seems like it'd more or less work --- give
it a shot and let us know how it goes.

BTW, I think the PPC optimization issues are solved in current sources.
From a development perspective it'd be more interesting to hear your
report on what happens with a current nightly snapshot instead of 7.0.2.

regards, tom "didn't order my OS X yet :-(" lane



[GENERAL] config

2000-10-12 Thread Michael Engelhart

Hi,
I just got the Mac OS X public beta running on my home computer  and want to compile 
postgresql for it but don't know where to start.  I have installed Postgresql on linux 
boxes but they always just work because there are configs for them.   Since v7.0.2 
doesn't know about Mac OS X I'm assuming I need to get my hands dirty to make it 
compile.  Does anyone have any pointers on where to begin?   I don't know where to 
start.

Out of the box when I run config, it says (which is  to be expected):

checking host system type... configure: error: can not guess host type; you must 
specify one


Thanks for any advice

Mike