[GENERAL] Error with Application Stack Builder 3.0.0

2011-11-13 Thread alextc
Hi all,

I am new to PostgreSQL. I have recently installed PostgreSQL 9.1 with
Application Stack Builder 3.0.0. However, I have never been able to get the
Stack Builder to work when trying to install new software.

The error message is as below.
http://postgresql.1045698.n5.nabble.com/file/n4986863/postgreSQL.jpg 

PostgreSQL runs on a home computer with Vista. I am not on a LAN, and I have
unticked the LAN settings in IE.

I have googled around for a while, but none of the results quite solved the
problem. Could anyone please give me some tips? Thanks a lot in advance.

Cheers, Alex




Re: [GENERAL] Help with Stack Builder

2011-11-13 Thread alextc
Hi Ray,

Have you had any luck getting around this issue?

I am having the same issue. I just installed PostgreSQL 9.1 with Stack
Builder 3.0.0.

Every time I try to install additional software, an error message pops up
saying that ...http://www.postgresql.org/application-v2.xml cannot be opened.

http://postgresql.1045698.n5.nabble.com/file/n4986851/postgreSQL.jpg 

I am on a home computer running Vista, not on a LAN. I am able to
display the XML file in the browser.

Could anybody please help out?

Thanks,

Cheers, Alex




Re: [GENERAL] Error with Application Stack Builder 3.0.0

2011-11-13 Thread John R Pierce

On 11/12/11 5:00 AM, alextc wrote:

Hi all,

I am new to PostgreSQL. I have recently installed PostgreSQL 9.1 with
Application Stack Builder 3.0.0. However, I have never been able to get the
Stack Builder to work when trying to install new software.

The error message is as below.
http://postgresql.1045698.n5.nabble.com/file/n4986863/postgreSQL.jpg





Stack Builder is not actually part of the PostgreSQL database server.
It's a third-party package from EnterpriseDB that they bundle with their MS
Windows port of Postgres. You'd likely be better off asking
EnterpriseDB for help.


--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast




[GENERAL] Incremental backup with RSYNC or something?

2011-11-13 Thread Phoenix Kiula
Hi.

I currently have a cronjob that does a full pg_dump of the database every
day, and then gzips it for saving to my backup drive.

However, my db is now 60GB in size, so this daily operation is making
less and less sense. (Some of you may think this is foolish to begin
with).

Question: what can I do to rsync only the new additions in every table
starting 00:00:01 until 23:59:59 for each day?

Searching Google leads to complex things like incremental WAL and
whatnot, or talk of stuff like pgcluster. I'm hoping there's a more
straightforward core solution that doesn't require additional software or a
PhD.

Many thanks for any ideas!
PK



Re: [GENERAL] Incremental backup with RSYNC or something?

2011-11-13 Thread Robins Tharakan

Hi,

Well, the 'complex' stuff is only there for larger or high-traffic
DBs. Besides, at 60GB that is a largish DB in itself, and you should begin
to try out a few other backup methods nonetheless. All the more so because,
if you are taking entire DB backups every day, you would save considerably
on (backup) storage.


Anyway, as for pg_dump, we have a DB 20x bigger than yours (1.3TB),
and it takes only half a day to do a pg_dump+gzip (both). One thing that
comes to mind: how are you compressing? I hope you are doing this in one
operation (or at least piping pg_dump to gzip before writing to disk)?


--
Robins Tharakan

On 11/13/2011 05:08 PM, Phoenix Kiula wrote:

Hi.

I currently have a cronjob that does a full pg_dump of the database every
day, and then gzips it for saving to my backup drive.

However, my db is now 60GB in size, so this daily operation is making
less and less sense. (Some of you may think this is foolish to begin
with).

Question: what can I do to rsync only the new additions in every table
starting 00:00:01 until 23:59:59 for each day?

Searching Google leads to complex things like incremental WAL and
whatnot, or talk of stuff like pgcluster. I'm hoping there's a more
straightforward core solution that doesn't require additional software or a
PhD.

Many thanks for any ideas!
PK





Re: [GENERAL] Incremental backup with RSYNC or something?

2011-11-13 Thread Phoenix Kiula
On Sun, Nov 13, 2011 at 8:42 PM, Robins Tharakan
robins.thara...@comodo.com wrote:
 Hi,

 Well, the 'complex' stuff is only there for larger or high-traffic DBs.
 Besides, at 60GB that is a largish DB in itself, and you should begin to try
 out a few other backup methods nonetheless. All the more so because, if you
 are taking entire DB backups every day, you would save considerably on
 (backup) storage.


Thanks. I usually keep only the last 6 days of backups, plus monthly
backups as of day 1. So it's not piling up or anything.

What other methods do you recommend? That was in fact my question.
Do I need to install some modules?



 Anyway, as for pg_dump, we have a DB 20x bigger than yours (1.3TB), and
 it takes only half a day to do a pg_dump+gzip (both). One thing that comes to
 mind: how are you compressing? I hope you are doing this in one operation
 (or at least piping pg_dump to gzip before writing to disk)?



I'm gzipping with these commands (this is my backup.sh) --


BKPFILE=/backup/pg/dbback-${DATA}.sql
pg_dump MYDB -U MYDB_MYDB -f ${BKPFILE}
gzip --fast ${BKPFILE}


Is this good enough? Sadly, this takes up over 97% of the CPU when it's running!



Re: [GENERAL] CLONE DATABASE (with copy on write?)

2011-11-13 Thread Gregg Jaskiewicz
NVM the implementation, but ability to clone the database without
disconnects would be very good for backups and testing.
We also create loads of templates, so that would make it more practical.



Re: [GENERAL] FK dissapearing

2011-11-13 Thread Gregg Jaskiewicz
I know it's a no-no to respond to my own posts, but here's what I'm
going to do.
I'll test newer revisions of 8.3 and also 9.1 in the out-of-disk-space
scenario and report back :P



Re: [GENERAL] Incremental backup with RSYNC or something?

2011-11-13 Thread Robins Tharakan

What other methods do you recommend? That was in fact my question.
Do I need to install some modules?


Well, depending on your PG version, you could read up on the various
backup methods. I believe you'll be interested in section 24.3 there, since
you ask about WAL archiving. The good thing is, it's useful for much bigger
DBs, and especially for those that 'can't' go down for even a minute; but
yes, it has its trade-offs. (It's not that bad actually, but it's a call you
need to take.)


http://www.postgresql.org/docs/8.4/static/backup.html

 I'm gzipping with these commands (this is my backup.sh) --


  BKPFILE=/backup/pg/dbback-${DATA}.sql
  pg_dump MYDB -U MYDB_MYDB -f ${BKPFILE}
  gzip --fast ${BKPFILE}

You could club the pg_dump / gzip into one step, thereby avoiding extra
writes to disk. The URL below should help you with that (pg_dump dbname |
gzip > file.gz).


http://www.postgresql.org/docs/8.4/static/backup-dump.html#BACKUP-DUMP-LARGE

You could also do a

pg_dump -Fc | gzip -1 -c > dumpfile.gz

at the cost of a slightly larger (but faster) backup.
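
For instance, a minimal backup.sh along those lines -- just a sketch, reusing
the MYDB names and backup path from your script (the date stamping is an
assumption on my part):

#!/bin/sh
# Sketch: dump and compress in one pipeline, so the uncompressed SQL
# never hits the disk.
DATE=$(date +%Y%m%d)               # hypothetical; your script used ${DATA}
BKPFILE=/backup/pg/dbback-${DATE}.sql.gz

pg_dump MYDB -U MYDB_MYDB | gzip --fast > "${BKPFILE}"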

--
Robins Tharakan



Re: [GENERAL] Incremental backup with RSYNC or something?

2011-11-13 Thread Robins Tharakan

You could also do a

pg_dump -Fc | gzip -1 -c > dumpfile.gz

at the cost of a slightly larger (but faster) backup.


Actually, if you're going this route, you could even skip the pg_dump
compression as well...


pg_dump db | gzip -1 -c > dumpfile.gz

--
Robins Tharakan



Re: [GENERAL] Error with Application Stack Builder 3.0.0

2011-11-13 Thread alextc
Thanks for your help, John.

I am working on Windows, but is there any official (i.e. not third-party,
like the EnterpriseDB one) PostgreSQL installer for Windows?

Thanks.




Re: [GENERAL] Incremental backup with RSYNC or something?

2011-11-13 Thread Gregg Jaskiewicz
pg_dump -Fc already compresses, no need to pipe through gzip



Re: [GENERAL] Incremental backup with RSYNC or something?

2011-11-13 Thread Craig Ringer
On Nov 13, 2011 7:39 PM, Phoenix Kiula wrote:

 Question: what can I do to rsync only the new additions in every table
 starting 00:00:01 until 23:59:59 for each day?

You can't, really. You can rsync the whole thing, and that can be faster, but
you can't just copy the last changes as a diff.

That's because Pg writes all over the data files; it doesn't just append.
There isn't any 'last changed' timestamp on records, and even if there were,
Pg would have no way to know which records to delete in the replication
target.

If you want differential backups you'll need to use a row-based replication
system like Slony or Bucardo.

It'd be nice if Pg offered easier differential backups, but at this point
there isn't really anything.

 Searching Google leads to complex things like incremental WAL and
 whatnot, or talk of stuff like pgcluster. I'm hoping there's a more
 straightforward core solution that doesn't require additional software or a
 PhD.

Nothing really basic. You'll need to use PITR (WAL shipping), streaming
replication, or a row-level replication solution.

 Many thanks for any ideas!
 PK



Re: [GENERAL] Incremental backup with RSYNC or something?

2011-11-13 Thread Andy Colson

On 11/13/2011 07:51 AM, Gregg Jaskiewicz wrote:

pg_dump -Fc already compresses, no need to pipe through gzip



I don't think that'll use two cores if you have 'em.  The pipe method will
use two cores, so it should be faster (assuming you are not I/O bound).

-Andy



Re: [GENERAL] CLONE DATABASE (with copy on write?)

2011-11-13 Thread Clark C. Evans
On Sunday, November 13, 2011 7:33 AM, Simon Riggs
si...@2ndquadrant.com wrote:
On Sat, Nov 12, 2011 at 9:40 PM, Clark C. Evans c...@clarkevans.com wrote:
  [We] should be using CREATE DATABASE ... WITH TEMPLATE. 
  However, this has two big disadvantages.  First, it only works 
  if you can kick the users off the clone.  Secondly, it still 
  takes time, uses disk space, etc.  We have some big databases.
 
 An interesting proposal. Thanks for taking the time to raise this.

Thank you for responding Simon.

 The existing situation is that you need to either:
 1) quiesce the database so it can be copied locally
 2) take a hot backup to create a clone on another server
 
 (1) currently involves disconnection. Would a command to quiesce
 sessions without disconnection be useful? We could get sessions to
 sleep until woken after the copy. With large databases we would still
 need to copy while sessions sleep to ensure a consistent database
 after the copy.

Could there be a way to put the database in read only mode,
where it rejects all attempts to change database state with an
appropriate application level error message?  We could then 
update our application to behave appropriately while the copy 
is being performed.   Something like this could be broadly 
useful in other contexts as well, for example, having a replica
that you brought up for reporting purposes.

Even so, CREATE DATABASE ... WITH TEMPLATE still has a set of
additional issues.  It ties up the hard drive with activity while it
duplicates the data, and then takes up extra space.  Further, it causes
the shared memory cache to be split between the original and the
replica, which makes both databases much slower.  Finally,
it creates a ton of WAL traffic (perhaps we could suspend this?)

 Is (2) a problem for you? In what way?

Due to our configuration, yes.  Being able to CLONE the
database in the same cluster is much preferred.  Our user
configuration, deliberately, does not involve hot backups.

Hot backups to another server won't work for us since our
servers are encrypted and isolated behind client firewalls.
Data that leaves the box has to be encrypted where the 
decrypt key is only available upon hardware failure, etc.
Our upstream pipe isn't huge... which is why the WAL traffic
for backups is also problematic.  

Perhaps we could create two PostgreSQL clusters on each server.
One of them would be production, the other would be for staging.  
This involves some logistics...  the advantage of this approach 
is that we could limit resource usage on the slave and turn off
backups on it, reducing our disk usage and WAL traffic.   We'd
keep shared memory on the slave to a minimum.   This solution
still chews up 2x disk space and doubles the disk activity.

Could WITH TEMPLATE reach into another cluster's storage?

Best,

Clark




Re: [GENERAL] Incremental backup with RSYNC or something?

2011-11-13 Thread Phoenix Kiula
On Sun, Nov 13, 2011 at 10:45 PM, Andy Colson a...@squeakycode.net wrote:
 On 11/13/2011 07:51 AM, Gregg Jaskiewicz wrote:

 pg_dump -Fc already compresses, no need to pipe through gzip


 I don't think that'll use two cores if you have 'em.  The pipe method will
 use two cores, so it should be faster (assuming you are not I/O bound).



I am likely I/O bound. Anyway, what's the right command for the pipe
method? I think the earlier recommendation had a problem, as -Fc
already does compression.

Is this the right command for the FASTEST possible backup, if I don't care
about the size of the dump and all I want is that it's not CPU-intensive
(with the tables I wish excluded) --


   BKPFILE=/backup/pg/dbback-${DATA}.sql
   pg_dump MYDB -T excludetable1 -T excludetable2 -U MYDB_MYDB | gzip --fast > ${BKPFILE}


Thanks!



Re: [GENERAL] CLONE DATABASE (with copy on write?)

2011-11-13 Thread Simon Riggs
On Sun, Nov 13, 2011 at 3:07 PM, Clark C. Evans c...@clarkevans.com wrote:

 Could there be a way to put the database in read only mode,
 where it rejects all attempts to change database state with an
 appropriate application level error message?  We could then
 update our application to behave appropriately while the copy
 is being performed.   Something like this could be broadly
 useful in other contexts as well, for example, having a replica
 that you brought up for reporting purposes.

 Even so, CREATE DATABASE ... WITH TEMPLATE still has a set of
 additional issues.  It ties up the hard drive with activity while it
 duplicates the data, and then takes up extra space.  Further, it causes
 the shared memory cache to be split between the original and the
 replica, which makes both databases much slower.  Finally,
 it creates a ton of WAL traffic (perhaps we could suspend this?)

It would be possible to suspend writes to a particular database and
then copy the database without writing WAL. It's probably possible to
wait for all write transactions to complete first.

Yes, it would use up disk space and shared_buffers to cache the new db.

Allowing writes to continue while we copy is more complex.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



[GENERAL] Preserving ORDER of TEMP Tables during transaction

2011-11-13 Thread Ludo Smissaert

Greetings,

Within a PL/PgSQL function I do a

CREATE TEMPORARY TABLE v_temp ON COMMIT DROP
AS
SELECT ctime FROM source ORDER BY ctime
WITH DATA;

Then I use the v_temp in the same transaction block:

FOR v_ctime IN
SELECT ctime FROM v_temp
LOOP

END LOOP;

Now I am curious: will the loop return values for ctime in the *same order*
as the query that created the temporary table, or is this undefined?

In other words: can I rely on the ORDER BY of the query that defined
the temporary table? Is there a way to do that?


Regards,
Ludo Smissaert




Re: [GENERAL] CLONE DATABASE (with copy on write?)

2011-11-13 Thread Tom Lane
Clark C. Evans c...@clarkevans.com writes:
 Even so, CREATE DATABASE ... WITH TEMPLATE still has a set of
 additional issues.  It ties up the hard drive with activity while it
 duplicates the data, and then takes up extra space.  Further, it causes
 the shared memory cache to be split between the original and the
 replica, which makes both databases much slower.  Finally,
 it creates a ton of WAL traffic (perhaps we could suspend this?)

That last claim is false ...

regards, tom lane



Re: [GENERAL] Preserving ORDER of TEMP Tables during transaction

2011-11-13 Thread David Johnston
On Nov 13, 2011, at 11:13, Ludo Smissaert l...@ludikidee.com wrote:

 Greetings,
 
 Within a PL/PgSQL function I do a
 
 CREATE TEMPORARY TABLE v_temp ON COMMIT DROP
 AS
 SELECT ctime FROM source ORDER BY ctime
 WITH DATA;
 
 Then I use the v_temp in the same transaction block:
 
 FOR v_ctime IN
SELECT ctime FROM v_temp
 LOOP

 END LOOP;
 
 Now I am curious: will the loop return values for ctime in the *same order*
 as the query that created the temporary table, or is this undefined?
 
 In other words: can I rely on the ORDER BY of the query that defined the
 temporary table? Is there a way to do that?
 
 Regards,
 Ludo Smissaert
 
 

Why risk basing your query's success on an implementation artifact?  Put an 
explicit ORDER BY on the SELECT FROM v_temp.
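
That is, something along these lines (a sketch of the loop from above):

FOR v_ctime IN
    SELECT ctime FROM v_temp ORDER BY ctime  -- re-state the ordering on every read
LOOP
    -- loop body
END LOOP;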

Related question, though: does the time to perform a sort vary based upon the
entropy of the input data?  If the original ORDER BY results in the records
being provided to the sorter already in order, does the sort basically finish
immediately, or is the algorithm strictly dependent upon the number of records
to sort?

David J.





Re: [GENERAL] Preserving ORDER of TEMP Tables during transaction

2011-11-13 Thread Ludo Smissaert

On 11/13/11 17:58, David Johnston wrote:

 On Nov 13, 2011, at 11:13, Ludo Smissaert l...@ludikidee.com wrote:
 Within a PL/PgSQL function I do a

 CREATE TEMPORARY TABLE v_temp ON COMMIT DROP AS SELECT ctime FROM
 source ORDER BY ctime WITH DATA;

 Then I use the v_temp in the same transaction block:

 FOR v_ctime IN SELECT ctime FROM v_temp LOOP  END LOOP;

 Now I am curious: will the loop return values for ctime in the *same
 order* as the query that created the temporary table, or is this
 undefined?

 In other words: can I rely on the ORDER BY of the query that
 defined the temporary table? Is there a way to do that?

 Why risk basing your query's success on an implementation artifact?
 Put an explicit ORDER BY on the SELECT FROM v_temp.

 Related question, though: does the time to perform a sort vary based
 upon the entropy of the input data? If the original ORDER BY results in
 the records being provided to the sorter already in order, does the sort
 basically finish immediately, or is the algorithm strictly dependent
 upon the number of records to sort?


The situation is that I am returning a SETOF cursors pointing
to two different tables, and the data of these two tables will be
printed by the client like this:

row 1 of table a
   set of rows from table b, depending on value of preceding a
row 2 of table a
   set depending on ... etc.

The first cursor encompasses all rows of a and is needed
by the client for alignment.

dummy_cursor_a   -- all rows, for alignment
next_of_a        -- first row
details_from_b_depending_on_previous_a
next_of_a
details_from_b

The client receives instructions on what to do with the cursors,
and basically does not know anything about the sort of data
it prints. It is just told how to handle the cursors.

Further, the entire result set of a depends on a dynamically
generated WHERE clause:

EXECUTE 'CREATE TEMPORARY TABLE v_temp ON COMMIT DROP AS '
    || 'SELECT projection FROM view WHERE ' || v_filter || ' ORDER BY '
    || ' WITH DATA;';


Well, I guess I will think of something simpler.

Thanks for answering.

Regards,

Ludo Smissaert





Re: [GENERAL] Preserving ORDER of TEMP Tables during transaction

2011-11-13 Thread Scott Marlowe
On Sun, Nov 13, 2011 at 12:28 PM, Ludo Smissaert l...@ludikidee.com wrote:
 The algorithm is that I am returning a SETOF cursors pointing
 to two different tables and data of these two tables will be
 printed by the client like this:

Have you actually measured the cost of adding the ORDER BY to the
SELECT from the view?



Re: [GENERAL] Error with Application Stack Builder 3.0.0

2011-11-13 Thread John R Pierce

On 11/13/11 3:45 AM, alextc wrote:

I am working on Windows, but is there any official (i.e. not third-party,
like the EnterpriseDB one) PostgreSQL installer for Windows?


not any more... but you don't have to use Stack Builder to run
Postgres.


--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast




[GENERAL] Restore db

2011-11-13 Thread Alexander Burbello
Hi folks,

My server has a daily routine to import a dump file; however, it's taking a
long time to finish.
The original DB is around 200 MB and takes 3-4 minutes to export (there
are many blob fields), yet it takes 4 hours to import using pg_restore.

What can I do to tune this database to speed up this restore??

My current db parameters are:
shared_buffers = 256MB
maintenance_work_mem = 32MB

Any suggestion is very welcome.
Thank you.
Alex


Re: [GENERAL] Restore db

2011-11-13 Thread Ondrej Ivanič
Hi,

On 14 November 2011 11:09, Alexander Burbello burbe...@yahoo.com.br wrote:
 What can I do to tune this database to speed up this restore??
 My current db parameters are:
 shared_buffers = 256MB
 maintenance_work_mem = 32MB

You should increase maintenance_work_mem as much as you can.
full_page_writes, archive_mode and autovacuum should be disabled
during the restore. Increase checkpoint_segments (for example, to 64) and
set wal_buffers to 16MB. 8.4 introduced parallel restore (pg_restore's
-j <num jobs> option).

Maybe you can't do anything mentioned above because it is not possible
to restart the server (though you can change maintenance_work_mem via
PGOPTIONS), or there is only a single table to import (-j is not
applicable) -- in that case, try to drop indexes and recreate them after the
import.
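
As a rough sketch of the whole sequence (the file name, size and job count
are placeholders, and some of these settings need a server restart):

# postgresql.conf, for the duration of the restore only
maintenance_work_mem = 512MB     # as much as the box allows
checkpoint_segments  = 64
wal_buffers          = 16MB
full_page_writes     = off
archive_mode         = off
autovacuum           = off

# then restore with parallel jobs (8.4+, custom-format dumps only)
pg_restore -d mydb -j 4 dump.custom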

-- 
Ondrej Ivanic
(ondrej.iva...@gmail.com)



Re: [GENERAL] Restore db

2011-11-13 Thread Andy Colson

On 11/13/2011 06:09 PM, Alexander Burbello wrote:

Hi folks,

My server has a daily routine to import a dump file; however, it's taking a
long time to finish.
The original DB is around 200 MB and takes 3-4 minutes to export (there are
many blob fields), yet it takes 4 hours to import using pg_restore.

What can I do to tune this database to speed up this restore??

My current db parameters are:
shared_buffers = 256MB
maintenance_work_mem = 32MB

Any suggestion is very welcome.
Thank you.
Alex


Are you doing this over a network?

If you watch it restore with vmstat (or top), are you I/O bound?  If so,
temporarily turn off fsync, restore the db, then turn it back on.

something like:
autovacuum = off
fsync = off
synchronous_commit = off
full_page_writes = off
bgwriter_lru_maxpages = 0


On the other hand, if you are CPU bound, use the multi-core restore option -j.

Or use both.


My current db parameters are:
shared_buffers = 256MB
maintenance_work_mem = 32MB


This is useless information without knowing anything about your computer.  If
you have 512 MB of RAM, it's a lot different than if you have 32 GB.


-Andy
 




Re: [GENERAL] Help with Stack Builder

2011-11-13 Thread Craig Ringer
On 12/11/11 20:51, alextc wrote:
 Hi Ray,
 
 Have you had any luck getting around this issue?
 
 I am having the same issue. I just installed PostgreSQL 9.1 with Stack
 Builder 3.0.0.
 
 Every time I try to install additional software, an error message pops up
 saying that ...http://www.postgresql.org/application-v2.xml
 cannot be opened.

Your computer is behind a proxy server or firewall that is limiting
access to the file. If it works in your browser, you'll need to check
what the proxy server settings are there and ensure the
installer/stackbuilder uses the same ones.

The installer's error message really needs to be improved to suggest
proxy server settings.

--
Craig Ringer



[GENERAL] Where to get PG 9.0.5 SLES RPM's !?

2011-11-13 Thread David Morton
I'm desperately trying to get hold of the latest RPMs for PostgreSQL 9.0.5
for SLES 11 SP1 x86_64. I simply cannot find these anywhere!

It seems that the good folk over at software.opensuse.org are only compiling
9.1.x now. Rather annoying, to say the least, for those of us who don't want
to upgrade the data format just to keep up with bug fixes.

Anyone have ideas where these can be found or built? I don't want to start
building from source if it can be avoided ...

[GENERAL] partitioning a dataset + employing hysteresis condition

2011-11-13 Thread Amit Dor-Shifer
Hi,
I've got this table:
create table phone_calls
(
start_time timestamp,
device_id integer,
term_status integer
);

It describes phone call events. A 'term_status' is a sort of exit status
for the call, whereby a value != 0 indicates some sort of error.
Given that, I wish to retrieve data on devices with a persisting error on
them, of a specific type, i.e. whose last term_status was, say, 2. I'd
like to employ some hysteresis in the query: only consider a device as
erroring if:
1. the last good (0) term_status pre-dates a bad (2) term_status.
2. it has at least N bad term_status events following the last good one.
3. the time span between the first bad term_status event and the last one
is >= T minutes

For instance, w/the following data set:

INSERT INTO phone_calls(start_time, device_id, term_status) VALUES (now() -
interval '10 minutes', 1, 2, 0);
INSERT INTO phone_calls(start_time, device_id, term_status) VALUES (now() -
interval '9 minutes', 1, 2, 1);
INSERT INTO phone_calls(start_time, device_id, term_status) VALUES (now() -
interval '7 minutes', 1, 2, 1);
INSERT INTO phone_calls(start_time, device_id, term_status) VALUES (now() -
interval '6 minutes', 1, 2, 1);
INSERT INTO phone_calls(start_time, device_id, term_status) VALUES (now() -
interval '5 minutes', 1, 2, 0);
INSERT INTO phone_calls(start_time, device_id, term_status) VALUES (now() -
interval '4 minutes', 1, 2, 2);
INSERT INTO phone_calls(start_time, device_id, term_status)VALUES (now() -
interval '3 minutes', 1, 2, 2);
INSERT INTO phone_calls(start_time, device_id, term_status)VALUES (now() -
interval '2 minutes', 1, 2, 2);

with N=3, T=3,
the query should return device_id 2 as erroring, as it registered 3 bad
events for at least 3 minutes.

I assume some partitioning needs to be employed here, but am not very
sure-footed on the subject.

Would appreciate some guidance.
10x,


Re: [GENERAL] partitioning a dataset + employing hysteresis condition

2011-11-13 Thread Amit Dor-Shifer
On Mon, Nov 14, 2011 at 4:29 PM, Amit Dor-Shifer
amit.dor.shi...@gmail.com wrote:

 Hi,
 I've got this table:
 create table phone_calls
 (
 start_time timestamp,
 device_id integer,
 term_status integer
 );

 It describes phone call events. A 'term_status' is a sort of exit
 status for the call, whereby a value != 0 indicates some sort of error.
 Given that, I wish to retrieve data on devices with a persisting error on
 them, of a specific type, i.e. whose last term_status was, say, 2. I'd
 like to employ some hysteresis in the query: only consider a device as
 erroring if:
 1. the last good (0) term_status pre-dates a bad (2) term_status.
 2. it has at least N bad term_status events following the last good
 one.
 3. the time span between the first bad term_status event and the last
 one is >= T minutes

 For instance, w/the following data set:

 INSERT INTO phone_calls(start_time, device_id, term_status) VALUES (now()
 - interval '10 minutes', 1, 2, 0);
 INSERT INTO phone_calls(start_time, device_id, term_status) VALUES (now()
 - interval '9 minutes', 1, 2, 1);
 INSERT INTO phone_calls(start_time, device_id, term_status) VALUES (now()
 - interval '7 minutes', 1, 2, 1);
 INSERT INTO phone_calls(start_time, device_id, term_status) VALUES (now()
 - interval '6 minutes', 1, 2, 1);
 INSERT INTO phone_calls(start_time, device_id, term_status) VALUES (now()
 - interval '5 minutes', 1, 2, 0);
 INSERT INTO phone_calls(start_time, device_id, term_status) VALUES (now()
 - interval '4 minutes', 1, 2, 2);
 INSERT INTO phone_calls(start_time, device_id, term_status)VALUES (now() -
 interval '3 minutes', 1, 2, 2);
 INSERT INTO phone_calls(start_time, device_id, term_status)VALUES (now() -
 interval '2 minutes', 1, 2, 2);

 with N=3, T=3,
 the query should return device_id 2 as erroring, as it registered 3 bad
 events for at least 3 minutes.

 I assume some partitioning needs to be employed here, but am not very
 sure-footed on the subject.

 Would appreciate some guidance.
 10x,


... fixed data set:

INSERT INTO phone_calls(start_time, device_id, term_status) VALUES (now() -
interval '10 minutes', 2, 0);
INSERT INTO phone_calls(start_time, device_id, term_status) VALUES (now() -
interval '9 minutes', 2, 1);
INSERT INTO phone_calls(start_time, device_id, term_status) VALUES (now() -
interval '7 minutes', 2, 1);
INSERT INTO phone_calls(start_time, device_id, term_status) VALUES (now() -
interval '6 minutes', 2, 1);
INSERT INTO phone_calls(start_time, device_id, term_status) VALUES (now() -
interval '5 minutes', 2, 0);
INSERT INTO phone_calls(start_time, device_id, term_status) VALUES (now() -
interval '4 minutes', 2, 2);
INSERT INTO phone_calls(start_time, device_id, term_status)VALUES (now() -
interval '3 minutes', 2, 2);
INSERT INTO phone_calls(start_time, device_id, term_status)VALUES (now() -
interval '2 minutes', 2, 2);


Re: [GENERAL] Incremental backup with RSYNC or something?

2011-11-13 Thread Venkat Balaji

 Question: what can I do to rsync only the new additions in every table
 starting 00:00:01 until 23:59:59 for each day?


A table-level replication system (like Slony) should help here.

Or

A trigger-based approach with dblink would be another (but a bit more
complex) option.

Thanks
VB


Re: [GENERAL] partitioning a dataset + employing hysteresis condition

2011-11-13 Thread David Johnston
On Nov 14, 2011, at 0:35, Amit Dor-Shifer amit.dor.shi...@gmail.com wrote:
 On Mon, Nov 14, 2011 at 4:29 PM, Amit Dor-Shifer amit.dor.shi...@gmail.com 
 wrote:
 Hi, 
 I've got this table:
 create table phone_calls
 (
 start_time timestamp,
 device_id integer,
 term_status integer
 );
 
 It describes phone call events. A 'term_status' is a sort of exit status
 for the call, whereby a value != 0 indicates some sort of error.
 Given that, I wish to retrieve data on devices with a persisting error on
 them, of a specific type, i.e. whose last term_status was, say, 2. I'd
 like to employ some hysteresis in the query: only consider a device as
 erroring if:
 1. the last good (0) term_status pre-dates a bad (2) term_status.
 2. it has at least N bad term_status events following the last good one.
 3. the time span between the first bad term_status event and the last one
 is >= T minutes
 
 For instance, w/the following data set:
 
 INSERT INTO phone_calls(start_time, device_id, term_status) VALUES (now() - 
 interval '10 minutes', 1, 2, 0);
 INSERT INTO phone_calls(start_time, device_id, term_status) VALUES (now() - 
 interval '9 minutes', 1, 2, 1);
 INSERT INTO phone_calls(start_time, device_id, term_status) VALUES (now() - 
 interval '7 minutes', 1, 2, 1);
 INSERT INTO phone_calls(start_time, device_id, term_status) VALUES (now() - 
 interval '6 minutes', 1, 2, 1);
 INSERT INTO phone_calls(start_time, device_id, term_status) VALUES (now() - 
 interval '5 minutes', 1, 2, 0);
 INSERT INTO phone_calls(start_time, device_id, term_status) VALUES (now() - 
 interval '4 minutes', 1, 2, 2);
 INSERT INTO phone_calls(start_time, device_id, term_status)VALUES (now() - 
 interval '3 minutes', 1, 2, 2);
 INSERT INTO phone_calls(start_time, device_id, term_status)VALUES (now() - 
 interval '2 minutes', 1, 2, 2);
 
 with N=3, T=3,
 the query should return device_id 2 as erroring, as it registered 3 bad
 events for at least 3 minutes.
 
 I assume some partitioning needs to be employed here, but am not very 
 sure-footed on the subject.
 
 Would appreciate some guidance.
 10x,
 
 ... fixed data set:
 
 INSERT INTO phone_calls(start_time, device_id, term_status) VALUES (now() - 
 interval '10 minutes', 2, 0);
 INSERT INTO phone_calls(start_time, device_id, term_status) VALUES (now() - 
 interval '9 minutes', 2, 1);
 INSERT INTO phone_calls(start_time, device_id, term_status) VALUES (now() - 
 interval '7 minutes', 2, 1);
 INSERT INTO phone_calls(start_time, device_id, term_status) VALUES (now() - 
 interval '6 minutes', 2, 1);
 INSERT INTO phone_calls(start_time, device_id, term_status) VALUES (now() - 
 interval '5 minutes', 2, 0);
 INSERT INTO phone_calls(start_time, device_id, term_status) VALUES (now() - 
 interval '4 minutes', 2, 2);
 INSERT INTO phone_calls(start_time, device_id, term_status)VALUES (now() - 
 interval '3 minutes', 2, 2);
 INSERT INTO phone_calls(start_time, device_id, term_status)VALUES (now() - 
 interval '2 minutes', 2, 2);

While a query is doable, how about having another table which you update via a
trigger on this one?  Whenever you insert a zero for a device, you reset the
support table.  Upon inserting a non-zero value, you update a second timestamp
with when the error occurred.  At any point you can query this table for all
devices whose error duration is longer than desired.  You could include a
counter field to track log entry counts as well.  Build a third table where
you can define N and T on a per-device basis, and maybe have the support table
use a trigger to send out a NOTIFY instead of constantly polling the table.
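
A rough sketch of that support table and trigger (names are made up and it is
untested; written against 2011-era PostgreSQL, hence the manual upsert):

-- per-device error window, maintained by the trigger below
CREATE TABLE device_error_state (
    device_id  integer PRIMARY KEY,
    first_bad  timestamp,            -- first bad event since the last good one
    last_bad   timestamp,
    bad_count  integer NOT NULL DEFAULT 0
);

CREATE OR REPLACE FUNCTION track_term_status() RETURNS trigger AS $$
BEGIN
    IF NEW.term_status = 0 THEN
        -- a good call resets the error window for this device
        DELETE FROM device_error_state WHERE device_id = NEW.device_id;
    ELSE
        -- manual upsert: update, and insert if no row was there yet
        UPDATE device_error_state
           SET last_bad = NEW.start_time, bad_count = bad_count + 1
         WHERE device_id = NEW.device_id;
        IF NOT FOUND THEN
            INSERT INTO device_error_state (device_id, first_bad, last_bad, bad_count)
            VALUES (NEW.device_id, NEW.start_time, NEW.start_time, 1);
        END IF;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER phone_calls_track
    AFTER INSERT ON phone_calls
    FOR EACH ROW EXECUTE PROCEDURE track_term_status();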

For a raw query you want the most recent 0 timestamp for each device and then, 
in the main query, select and count any later entries for the same device.  Use 
the MAX aggregate on those same records and compare it to the 0 timestamp.
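
As a raw query, that idea might look like this (N and T hard-coded to 3 and
3 minutes; a sketch only -- note it skips devices that have never logged a
good call):

WITH last_good AS (
    SELECT device_id, MAX(start_time) AS good_time
    FROM phone_calls
    WHERE term_status = 0
    GROUP BY device_id
)
SELECT p.device_id
FROM phone_calls p
JOIN last_good g ON g.device_id = p.device_id
WHERE p.start_time > g.good_time
  AND p.term_status = 2
GROUP BY p.device_id
HAVING COUNT(*) >= 3                              -- at least N bad events
   AND MAX(p.start_time) - MIN(p.start_time)
       >= interval '3 minutes';                   -- spanning at least T minutes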

David J.