[GENERAL] Online recovery of Tablespace

2014-05-21 Thread Sameer Kumar
Hi,

I am designing a backup strategy for a PostgreSQL database (v9.3). I have a
scenario for recovery of tablespaces:

1. Backup of whole database (including individual tablespaces which are
stored on different disks) has been taken at 11AM

2. My disk which stores tablespace- tblspc1 crashed at 2:00PM

3. Can I restore the backup of 11AM (only for one tablespace) and then
recover that tablespace to 2:00PM state?


Is this possible? I have attached the steps I tried (I believe my steps are
logically wrong, since I am using recovery.conf but I am not replacing the
data directory).

But is there any way to specify, in recovery.conf or otherwise, something
that would allow me to do recovery of transactions of a particular
tablespace? A custom solution which occurs to me is using the pg_xlogdump
contrib module. Has anyone tried something similar?
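Roughly what I have in mind (the archived segment name below is just an
example; 41918 is the OID of the lost tablespace, taken from the pg_tblspc
error in the attached log):

# Dump the WAL records in an archived segment and keep only those touching
# relations in the damaged tablespace; its OID appears as the first number
# in the "rel spcOid/dbOid/relfilenode" part of each record's description.
pg_xlogdump /opt/PostgresPlus/arch_dir/000000010000000000000015 | grep 'rel 41918/'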

Best Regards,

Sameer Kumar | Database Consultant

ASHNIK PTE. LTD.

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069533

M: +65 8110 0350  T: +65 6438 3504 | www.ashnik.com

Step1:

List of Tablespace:

    Name     |  Owner   |             Location
-------------+----------+-----------------------------------
 acct_tbsp   | postgres | /opt/PostgreSQL/tbspc
 pg_default  | postgres | 
 pg_global   | postgres | 
 test_tblspc | postgres | /opt/PostgresPlus/9.2AS/tblspc_1


Step2:

postgres=# select pg_start_backup('online_backup');
 pg_start_backup 
-----------------
 0/1528
(1 row)


Step3:
Take backup of each tablespace location
cd /opt/PostgresPlus/9.2AS
tar -czvf tblspc_1.tar.gz tblspc_1

cd /opt/PostgreSQL
tar -czvf tbspc.tar.gz tbspc


Step4:
Take Backup of data directory.

cd $PGDATA/..
tar -czvf data.tar.gz data


Step5:
postgres=# select pg_stop_backup()
postgres-# ;
NOTICE:  pg_stop_backup complete, all required WAL segments have been archived
 pg_stop_backup 
----------------
 0/15015228
(1 row)


Step6: Create test tables on both the tablespaces

postgres=# create table after_online_backup_at_acct_tbsp(col1 int) tablespace 
acct_tbsp;
CREATE TABLE

postgres=# create table after_online_backup_at_test_tblspc(col1 int) tablespace 
test_tblspc;
CREATE TABLE



Step7:
Remove the directory for tablespace in another window:

rm -rf /opt/PostgreSQL/tbspc


Step8: Try to access the table which points to removed file

postgres=# select * from after_online_backup_at_tblspc1;
ERROR:  relation "after_online_backup_at_tblspc1" does not exist
LINE 1: select * from after_online_backup_at_tblspc1;


Step9: Restart the server- pg_ctl restart -m fast 

Check logs: Error noted-

2014-05-20 20:57:24 SGT LOG:  database system was shut down at 2014-05-20 
20:57:23 SGT
2014-05-20 20:57:24 SGT LOG:  could not open tablespace directory 
"pg_tblspc/41918/PG_9.3_201306121": No such file or directory
2014-05-20 20:57:24 SGT LOG:  autovacuum launcher started
2014-05-20 20:57:24 SGT LOG:  database system is ready to accept connections


List the tablespaces:
postgres=# \db
List of tablespaces
    Name     |  Owner   |             Location
-------------+----------+-----------------------------------
 acct_tbsp   | postgres | /opt/PostgreSQL/tbspc
 pg_default  | postgres | 
 pg_global   | postgres | 
 test_tblspc | postgres | /opt/PostgresPlus/9.2AS/tblspc_1


Step10: Prepare for a recovery:
Stop the server
pg_ctl stop -m fast


Go to the data directory
cd $PGDATA
Create recovery.conf with the content below:

restore_command = 'cp /opt/PostgresPlus/arch_dir/%f %p'


restore the tablespace backup:
cd /opt/PostgreSQL
tar -xzvf tbspc.tar.gz 


start PostgreSQL 
pg_ctl start
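
For comparison, a whole-cluster point-in-time recovery.conf aimed at the
state just before the 2:00PM crash would look roughly like this (the target
time is illustrative); note that this form recovers the entire cluster from
the full base backup, not a single tablespace:

restore_command           = 'cp /opt/PostgresPlus/arch_dir/%f %p'
recovery_target_time      = '2014-05-21 13:59:00'
recovery_target_inclusive = false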



[GENERAL] Need pg_dump not to dump extension-created triggers

2014-05-21 Thread Moshe Jacobson
I have an extension which, after installation, creates triggers and their
associated functions dynamically and adds the functions to the extension so
that they will be dropped with the extension.

However, there is no way to add the trigger to the extension, so pg_dump's
output still includes the definition of the trigger, even though it does
not include the definition of the corresponding function. This is causing
pg_restore to emit warnings on the CREATE TRIGGER commands.

The behavior does not change when I manually insert into pg_depend a
dependency from the trigger to the extension.
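
For reference, the manual insert was roughly along these lines (a sketch
with placeholder object names, not the exact statement I ran):

-- Record an extension ('e') dependency for a trigger by hand.
-- my_table, my_trigger and my_extension are placeholders.
INSERT INTO pg_depend
    (classid, objid, objsubid, refclassid, refobjid, refobjsubid, deptype)
SELECT 'pg_trigger'::regclass, t.oid, 0,
       'pg_extension'::regclass, e.oid, 0,
       'e'
FROM pg_trigger t, pg_extension e
WHERE t.tgname  = 'my_trigger'
  AND t.tgrelid = 'my_table'::regclass
  AND e.extname = 'my_extension';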

I'd like to know if the following changes can be made, as I think they will
fix this issue:

   1. Add command ALTER EXTENSION ... ADD TRIGGER.
   2. Modify pg_dump not to dump triggers that are dependent on an
   extension.

Thanks.

P.S. I'm here at PGCon and will be speaking on Friday about this very
extension. Come say hi!

Moshe Jacobson
Manager of Systems Engineering, Nead Werx Inc. 
2323 Cumberland Parkway · Suite 201 · Atlanta, GA 30339

"Quality is not an act, it is a habit." -- Aristotle


Re: [GENERAL] Do foreign key triggers get ran even if the key's value doesn't change?

2014-05-21 Thread Joe Van Dyk
On Wednesday, May 21, 2014, Jeff Janes wrote:

>
> On Wed, May 21, 2014 at 1:11 PM, Joe Van Dyk wrote:
>
>> I came across http://bonesmoses.org/2014/05/14/foreign-keys-are-not-free/
>> which seems to indicate so.
>>
>> When I run the following test script, having 50 foreign keys takes
>> about twice as long to do the update. Is there a reason for that?
>> Seems like the RI triggers wouldn't have to run on updates if the
>> value doesn't change.
>>
>
> That's kind of a question of definitions.  Perhaps the trigger itself
> doesn't need to run, but the code that decides whether the trigger needs to
> run does need to run.  Where do you draw the line around what is the
> trigger proper and what is just infrastructure?
>
> However you wish to define it, change your function so that it actually
> does change the key field, and see how much slower that is than the
> behavior where you update the row without updating the key.
>
>

I was expecting that the RI update triggers would have a "when (new.key is
distinct from old.key)" condition on them, which would mean that the number
of referencing tables wouldn't matter.
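
The kind of guard I had in mind, written as a user-level trigger (the table,
column, and function names below are made up for the example; the internal
RI triggers are not actually defined this way):

CREATE TABLE parent (id bigint PRIMARY KEY, note text);

CREATE OR REPLACE FUNCTION parent_key_changed() RETURNS trigger AS
$$
BEGIN
    RAISE NOTICE 'key changed from % to %', OLD.id, NEW.id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- The WHEN clause keeps the trigger function from being called at all when
-- the key is untouched.
CREATE TRIGGER parent_key_update
    AFTER UPDATE ON parent
    FOR EACH ROW
    WHEN (OLD.id IS DISTINCT FROM NEW.id)
    EXECUTE PROCEDURE parent_key_changed();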



> Cheers,
>
> Jeff
>


Re: [GENERAL] Lock during insert statement

2014-05-21 Thread Jeff Janes
On Wednesday, May 21, 2014, AI Rumman  wrote:

> Got it.
> Thanks.
> Any special parameter to tune it? Like wal_buffers or shared_buffers?
>

Possible but unlikely.  You probably just need  better hardware (or more
patience).  How fast is the database growing? How much is actually being
written (sar, vmstat)?  What is the expected theoretical write throughput
of your RAID?

Cheers,

Jeff


Re: [GENERAL] Question regarding DEALLOCATE pdo_stmt_00000001

2014-05-21 Thread David G Johnston
Laurentius Purba wrote
> Hello all,
> 
> I've been seeing lots of this processes in my database DEALLOCATE
> pdo_stmt_0001 with idle state.
> 
> Executing *select * from pg_stat_activity where query like
> 'DEALLOCATE%';*I can see some of them are idle for +/- 30 minutes, and
> there is one idle
> for 4 days.
> 
> When I checked one of the pid by executing *select * from pg_stat_activity
> where pid = 63882;* (for example), after few minutes (can be longer), it
> became *DISCARD ALL* and after that it was removed from pg_stat_activity.
> 
> I was wondering if this is the normal behavior, since some of them are
> idle
> for +/- 30 minutes.
> 
> I am using PostgreSQL 9.3.2 and PgBouncer.
> 
> Any help is appreciated.
> 
> Regards,
> 
> -Laurentius

Yes.

9.3 - On an idle connection the value of query is the last executed query,
which in this case is some form of session cleanup command run before
returning the connection to the pool.
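
A query along these lines separates the two: state says whether the backend
is idle, while query is only the last statement it ran:

SELECT pid,
       state,
       now() - state_change AS idle_for,   -- how long it has been in this state
       query                               -- last statement executed, e.g. DISCARD ALL
FROM pg_stat_activity
WHERE state = 'idle'
ORDER BY idle_for DESC;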

David J.









Re: [GENERAL] Lock during insert statement

2014-05-21 Thread AI Rumman
Got it.
Thanks.
Any special parameter to tune it? Like wal_buffers or shared_buffers?


On Wed, May 21, 2014 at 3:28 PM, Jeff Janes  wrote:

> On Wed, May 21, 2014 at 3:14 PM, AI Rumman  wrote:
>
>> Could any one please tell me why my system is waiting to get lock for an
>> INSERT statement?
>>
>> 2014-05-21 07:52:49.965 PDT [9-1]LOG:  process 31407 acquired
>> ExclusiveLock on extension of relation 429298276 of database 21497 after
>> 3219.963 ms
>>
>
> "on extension of relation" means that it needs to add 8KB to the end of
> the table.  That it takes so long to obtain that locks suggests you have a
> some serious IO congestion.
>
> Cheers,
>
> Jeff
>


Re: [GENERAL] Lock during insert statement

2014-05-21 Thread Jeff Janes
On Wed, May 21, 2014 at 3:14 PM, AI Rumman  wrote:

> Could any one please tell me why my system is waiting to get lock for an
> INSERT statement?
>
> 2014-05-21 07:52:49.965 PDT [9-1]LOG:  process 31407 acquired
> ExclusiveLock on extension of relation 429298276 of database 21497 after
> 3219.963 ms
>

"on extension of relation" means that it needs to add 8KB to the end of the
table.  That it takes so long to obtain that locks suggests you have a some
serious IO congestion.
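
A query along these lines, run while an INSERT is stuck, shows which
backends are waiting on (or holding) relation extension locks:

-- 'extend' is the locktype taken when a backend needs to add a new page
-- to a relation.
SELECT l.pid, l.relation::regclass AS rel, l.granted, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.locktype = 'extend';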

Cheers,

Jeff


[GENERAL] Lock during insert statement

2014-05-21 Thread AI Rumman
Could anyone please tell me why my system is waiting to get a lock for an
INSERT statement?

2014-05-21 07:52:49.965 PDT [9-1]LOG:  process 31407 acquired ExclusiveLock
on extension of relation 429298276 of database 21497 after 3219.963 ms
2014-05-21 07:52:49.965 PDT [10-1]STATEMENT:  INSERT INTO table1
(end_id,account_id,create_time,event_type,details) VALUES($1,$2,$3,$4,$5)
2014-05-21 07:52:49.965 PDT [3-1]LOG:  duration: 4590.048 ms  execute
: INSERT INTO table1
(end_id,account_id,create_time,event_type,details) VALUES($1,$2,$3,$4,$5)


Thanks.


[GENERAL] Question regarding DEALLOCATE pdo_stmt_00000001

2014-05-21 Thread Laurentius Purba
Hello all,

I've been seeing lots of these processes in my database: DEALLOCATE
pdo_stmt_0001, with idle state.

Executing select * from pg_stat_activity where query like
'DEALLOCATE%'; I can see some of them are idle for +/- 30 minutes, and
there is one idle for 4 days.

When I checked one of the pids by executing select * from pg_stat_activity
where pid = 63882; (for example), after a few minutes (it can be longer), it
became DISCARD ALL and after that it was removed from pg_stat_activity.

I was wondering if this is the normal behavior, since some of them are idle
for +/- 30 minutes.

I am using PostgreSQL 9.3.2 and PgBouncer.

Any help is appreciated.

Regards,

-Laurentius


Re: [GENERAL] Do foreign key triggers get ran even if the key's value doesn't change?

2014-05-21 Thread Jeff Janes
On Wed, May 21, 2014 at 1:11 PM, Joe Van Dyk  wrote:

> I came across http://bonesmoses.org/2014/05/14/foreign-keys-are-not-free/
> which seems to indicate so.
>
> When I run the following test script, having 50 foreign keys takes
> about twice as long to do the update. Is there a reason for that?
> Seems like the RI triggers wouldn't have to run on updates if the
> value doesn't change.
>

That's kind of a question of definitions.  Perhaps the trigger itself
doesn't need to run, but the code that decides whether the trigger needs to
run does need to run.  Where do you draw the line around what is the
trigger proper and what is just infrastructure?

However you wish to define it, change your function so that it actually
does change the key field, and see how much slower that is than the
behavior where you update the row without updating the key.

Cheers,

Jeff


Re: [GENERAL] postgres 9.2.4 - ERROR: invalid input syntax for type numeric: ""

2014-05-21 Thread Raymond O'Donnell
On 20/05/2014 11:39, Khangelani Gama wrote:
> Sorry
> 
> I found the problem, it's a data problem in another database: it's trying
> to convert to numeric a value that is not NULL but blank (an empty string).

Hi there,

I'm glad you were able to find the solution to your problem. I have to
say, though, that I found that monster of a query which you posted
completely unreadable; maybe it's just me, but it scared me off even
attempting to help you! :-)

Some basic formatting on the query - indentation and what not - would
have made it much less impenetrable, and people might have some chance
of being able to help.

Ray.


-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie




[GENERAL] Do foreign key triggers get ran even if the key's value doesn't change?

2014-05-21 Thread Joe Van Dyk
I came across http://bonesmoses.org/2014/05/14/foreign-keys-are-not-free/
which seems to indicate so.

When I run the following test script, having 50 foreign keys takes
about twice as long to do the update. Is there a reason for that?
Seems like the RI triggers wouldn't have to run on updates if the
value doesn't change.

begin;

set client_min_messages='warning';

CREATE OR REPLACE FUNCTION fnc_check_fk_overhead(key_count INT)
RETURNS INTERVAL AS
$$
DECLARE
  i INT;
  start_time TIMESTAMP;
  end_time TIMESTAMP;
BEGIN
  DROP TABLE if exists test_fk CASCADE;

  CREATE TABLE test_fk
  (
id   BIGINT PRIMARY KEY,
junk VARCHAR
  );

  INSERT INTO test_fk
  SELECT generate_series(1, 10), repeat(' ', 20);

  CLUSTER test_fk_pkey ON test_fk;

  FOR i IN 1..key_count LOOP
EXECUTE 'CREATE TABLE test_fk_ref_' || i ||
' (test_fk_id BIGINT REFERENCES test_fk (id) ON UPDATE NO ACTION)';
  END LOOP;

  start_time = clock_timestamp();

  FOR i IN 1..10 LOOP
UPDATE test_fk SET junk = ''
 WHERE id = i;
  END LOOP;

  end_time = clock_timestamp();

  FOR i IN 1..key_count LOOP
EXECUTE 'DROP TABLE test_fk_ref_' || i;
  END LOOP;

  RETURN end_time - start_time;

END;
$$ LANGUAGE plpgsql VOLATILE;

SELECT fnc_check_fk_overhead(1);
SELECT fnc_check_fk_overhead(50);




Re: [GENERAL] Function performance drops during execution of loop

2014-05-21 Thread Seref Arikan
Just for the record, I've increased the data volume tenfold and observed
only a quite small performance drop: the average time per inner function
call increased from 12.6 ms to 13.3 ms.

Regards
Seref



On Wed, May 21, 2014 at 5:19 PM, Seref Arikan <
serefari...@kurumsalteknoloji.com> wrote:

> Thanks a lot for the hint Tom! I've replaced deletes with TRUNCATE and it
> gave a performance of 50.950 sec which is twice as fast as the drop temp
> table method, with the added benefit of not having to raise the
> max_locks_per_transaction.
>
> I also think I can't see the performance decrease pattern anymore, or the
> operation is completing before that happens, will generate more data and
> try again.
>
> Regards
> Seref
>
>
>
> On Wed, May 21, 2014 at 4:52 PM, Tom Lane  wrote:
>
>> Seref Arikan  writes:
>> > What may be building up here? I suspect deleting all rows from the temp
>> > tables is not really deleting them since this is all happening in a
>> > transaction, but it is my uneducated guess only.
>>
>> I suspect you suspect correctly.  Autovacuum does not touch temp tables,
>> so it won't help you deal with deleted tuples.  Given the usage pattern
>> you're describing, I think that using a TRUNCATE rather than
>> delete-all-the-rows would help ... but if you're already doing that,
>> we need more info.
>>
>> regards, tom lane
>>
>
>


Re: [GENERAL] Need help on triggers - postgres 9.1.2

2014-05-21 Thread Khangelani Gama
Thank you very much, I will have a look.

-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
Sent: Wednesday, May 21, 2014 3:20 PM
To: Khangelani Gama; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Need help on triggers - postgres 9.1.2

On 05/21/2014 02:54 AM, Khangelani Gama wrote:
> Hi
>
> I have a postgres 9 database, inside this database I need to create a
> new table called *center_changed* that gets inserted by any change
> that take place in a table called *center*. So I need to create
> trigger to do this.
>
> *Example: *
>
> Table name*: center*(c_cde, c_dsc, ops_cde, grp_cde);
>
> Now on the table called *center , *I need to create an INSERT and
> UPDATE trigger will insert the *c_cde * of the inserted or updated
> *center* into the *center_changed* table
>
> Please help me
>
> I have this syntax below, but please help me with the overall query.
>
> CREATE TRIGGER check_center
>
> BEFORE INSERT OR UPDATE
>
> ON *cente*r FOR EACH ROW
>
> EXECUTE PROCEDURE check_center_changes();

See here:

http://www.postgresql.org/docs/9.3/interactive/plpgsql-trigger.html

Example 40-4. A PL/pgSQL Trigger Procedure For Auditing

>
> Thanks
>

--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Function performance drops during execution of loop

2014-05-21 Thread Seref Arikan
Thanks a lot for the hint Tom! I've replaced the deletes with TRUNCATE and
it gave a run time of 50.950 sec, which is twice as fast as the
drop-temp-table method, with the added benefit of not having to raise
max_locks_per_transaction.

I also think I can't see the performance decrease pattern anymore, or the
operation is completing before that happens; I will generate more data and
try again.

Regards
Seref



On Wed, May 21, 2014 at 4:52 PM, Tom Lane  wrote:

> Seref Arikan  writes:
> > What may be building up here? I suspect deleting all rows from the temp
> > tables is not really deleting them since this is all happening in a
> > transaction, but it is my uneducated guess only.
>
> I suspect you suspect correctly.  Autovacuum does not touch temp tables,
> so it won't help you deal with deleted tuples.  Given the usage pattern
> you're describing, I think that using a TRUNCATE rather than
> delete-all-the-rows would help ... but if you're already doing that,
> we need more info.
>
> regards, tom lane
>


Re: [GENERAL] Function performance drops during execution of loop

2014-05-21 Thread Seref Arikan
Hi Tom,
Thanks for the feedback. I've moved the temp tables to the inner function,
increased max_locks_per_transaction, and dropped the tables instead of
deleting them.

The performance drop is still there, but it is much, much less than in the
previous case. Previously the whole execution took 04:36:14; with the
drop-table approach it takes 00:01:44.

Less than 2 minutes compared to 4 and a half hours...

I will try TRUNCATE as well and see what happens then. Oh, btw, I realised
I used PERFORM in the version of the code I included in my initial message;
it was simply to eliminate the main temp table in the outer query as a
suspect.

Regards
Seref



On Wed, May 21, 2014 at 4:52 PM, Tom Lane  wrote:

> Seref Arikan  writes:
> > What may be building up here? I suspect deleting all rows from the temp
> > tables is not really deleting them since this is all happening in a
> > transaction, but it is my uneducated guess only.
>
> I suspect you suspect correctly.  Autovacuum does not touch temp tables,
> so it won't help you deal with deleted tuples.  Given the usage pattern
> you're describing, I think that using a TRUNCATE rather than
> delete-all-the-rows would help ... but if you're already doing that,
> we need more info.
>
> regards, tom lane
>


Re: [GENERAL] Function performance drops during execution of loop

2014-05-21 Thread Tom Lane
Seref Arikan  writes:
> What may be building up here? I suspect deleting all rows from the temp
> tables is not really deleting them since this is all happening in a
> transaction, but it is my uneducated guess only.

I suspect you suspect correctly.  Autovacuum does not touch temp tables,
so it won't help you deal with deleted tuples.  Given the usage pattern
you're describing, I think that using a TRUNCATE rather than
delete-all-the-rows would help ... but if you're already doing that,
we need more info.

regards, tom lane
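
A minimal, self-contained sketch of the point being made here, using a
placeholder temp table rather than the poster's schema: inside one
transaction, DELETE leaves dead tuples that autovacuum will never reclaim
from a temp table, while TRUNCATE discards the old heap outright.

DO $$
DECLARE
    i int;
BEGIN
    CREATE TEMP TABLE scratch (id bigint, payload text) ON COMMIT DROP;

    FOR i IN 1..1000 LOOP
        -- A DELETE here would let dead tuples accumulate for the life of
        -- the transaction; TRUNCATE starts each iteration with an empty heap.
        TRUNCATE scratch;

        INSERT INTO scratch
        SELECT g, repeat('x', 20)
        FROM generate_series(1, 100) AS g;
    END LOOP;
END
$$;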




[GENERAL] Function performance drops during execution of loop

2014-05-21 Thread Seref Arikan
Greetings,
I have a function that loops over a set of ids, calls a function inside the
loop using the current id as the parameter, and adds the result of the
function call to a temp table.

When the loop terminates, the temp table has all the rows from the function
call(s) and it is returned.

I've noticed that the function was taking longer than it should, so I've
used RAISE to write to the postgresql log and watch performance with tail -f
/.../logfile.
What I'm seeing is that the performance drops as the loop progresses. This
is a bit confusing for me due to the following reasons:

This database contains synthetic data, so the ids returned by the loop
would point at sets of rows which are repetitions of the same pattern such
as row1, row2, row3 and then again row1, row2, row3 so the loop is not
dealing with changing data size or complexity.

The inner function simply selects a set of rows from source tables, inserts
them into a temp table created by the outer function (where the loop is)
performs joins and returns the result. The inner function deletes all rows
from the temp tables at entry, so the temp table should not be increasing
in size (or so I think)

But something is building up during execution which is leading to increased
times for processing of the same number of records. First the outer
function, then the log file:

CREATE OR REPLACE FUNCTION public.iterate_groups()
RETURNS TABLE (
--function return type defined here
)
AS
$BODY$
DECLARE
temp_row RECORD;
timer TEXT;
rec_counter INTEGER;
BEGIN
CREATE TEMP TABLE RESULTS_TABLE(
--this temp table matches the function return type
) ON COMMIT DROP;

--nodes temp table
CREATE TEMP TABLE criterianodes (
  --table 1 to be used by function called from the loop
) ON COMMIT DROP;

 --structure temp table
--table 2 to be used by function called from the loop
  ) ON COMMIT DROP;


RAISE NOTICE 'Starting loop over all documents';
select timeofday() into timer;
RAISE NOTICE '%', timer;

rec_counter := 0;
FOR temp_row in  select distinct id1, id2 from temp_eav_table_global LOOP

rec_counter := rec_counter + 1;

PERFORM query_instance_graph2(temp_row.id2);

IF  (rec_counter % 100) = 0 THEN
  RAISE NOTICE '%:', rec_counter;
  select timeofday() into timer;
  RAISE NOTICE '%', timer;
END IF;

END LOOP;
RETURN query select * from RESULTS_TABLE;
END
$BODY$
Language 'plpgsql';


log file shows:

NOTICE:  Starting loop over all documents
NOTICE:  Wed May 21 16:18:51.075245 2014 BST
NOTICE:  100:
NOTICE:  Wed May 21 16:19:04.306767 2014 BST
NOTICE:  200:
NOTICE:  Wed May 21 16:19:08.499653 2014 BST
NOTICE:  300:
NOTICE:  Wed May 21 16:19:14.917691 2014 BST
NOTICE:  400:
NOTICE:  Wed May 21 16:19:23.566721 2014 BST
NOTICE:  500:
NOTICE:  Wed May 21 16:19:34.369962 2014 BST
NOTICE:  600:
NOTICE:  Wed May 21 16:19:47.464242 2014 BST
NOTICE:  700:
NOTICE:  Wed May 21 16:20:02.598676 2014 BST
NOTICE:  800:
NOTICE:  Wed May 21 16:20:20.083649 2014 BST


After a while the time between calls gets longer and longer.

I kept the temp tables in the outer function or else I'd have to drop them
at entry to the inner function, and that would give an out of shared memory
error (which it did in the previous version of the code) due to locks being
created as the table is created and dropped thousands of times.

I'm trying to fit a large computation into limited memory by using the
cursor (loop) and repeatedly calling the inner function, which will operate
with limited data using the temp tables.

What may be building up here? I suspect deleting all rows from the temp
tables is not really deleting them since this is all happening in a
transaction, but it is my uneducated guess only.

Cheers
Seref


Re: [GENERAL] Convert an XML database

2014-05-21 Thread Adrian Klaver

On 05/21/2014 08:03 AM, Aram Fingal wrote:


a generic ETL (extract, transform, load) tool that supports XML and postgresql 
should be able to do it, but its not something built into postgresql as-is.
I can't recommend any specific ETL tool to do this as I've never needed to do 
this exactly.


Thanks.  That at least tells me that I am not missing something which would 
make this really easy.


The other thing to keep in mind is this note at the link you provided:

"The XML schema is currently in beta and is likely to change in the 
coming weeks."





-Aram




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Convert an XML database

2014-05-21 Thread Aram Fingal
> 
> a generic ETL (extract, transform, load) tool that supports XML and 
> postgresql should be able to do it, but its not something built into 
> postgresql as-is.I can't recommend any specific ETL tool to do this as 
> I've never needed to do this exactly.

Thanks.  That at least tells me that I am not missing something which would 
make this really easy.

-Aram



Re: [GENERAL] pgtune and massive shared_buffers recommendation

2014-05-21 Thread Bill Moran
On Wed, 21 May 2014 21:39:05 +0700 Stuart Bishop  
wrote:
> 
> I've got some boxes with 128GB of RAM and up to 750 connections, just
> upgraded to 9.3 so I'm revising my tuning. I'm getting a
> recommendation from pgtune to bump my shared_buffers up to 30GB and
> work_mem to 80MB. Is a shared_buffers this high now sane?
> 
> The PostgreSQL reference doesn't make recommendations on limits, but
> it didn't either with earlier versions of PostgreSQL where more than a
> few GB was normally a bad thing to do. The most recent blob posts I
> see mentioning 9.3 and modern RAM sizes still seem to cap it at 8GB.
> 
> (and yes, I am using pgbouncer but stuck in session mode and up to 750
> connections for the time being)

My experience with a busy database server over the last year or so demonstrated
that values much _higher_ than that result in occasional stalls on the part of
PostgreSQL.  My guess is that the code that manages shared_buffers doesn't
scale effectively to 64G (which is where we saw the problem) and would
occasionally stall waiting for some part of the code to rearrange some
memory, or write it to disk, or something else.  Other tuning attempts did
not alleviate the problem (such as tweaking various checkpoint settings), but
the problem completely disappeared when we lowered shared_buffers to (I think)
32G.

Unfortunately, I don't have access to exact details because I no longer work
at that job, so I'm just pulling from memory.

We never did get an opportunity to test whether there was any performance
change from 64G -> 32G.  I can tell you that if performance decreased, it
didn't decrease enough for it to be noticeable from the application.

So my advice is that 30G might be just fine for shared_buffers, but if you
experience stalls (i.e., the database stops responding for an uncomfortably
long time) keep that in mind and lower it to see if it fixes the stalls.
Another important data point when considering this: we never experienced
any crashes or errors with shared_buffers set at 64G ... just the stalls, so
setting it too high appears to endanger performance, but nothing else.

A bit of advice coming from the other direction: shared_buffers doesn't really
need to be any larger than the working set of your data.  If you can estimate
that, and (for example) it's only 4G, you don't need to set shared_buffers
nearly that high, even if you have 4T of total data.  Of course, estimating
your working set can be difficult, but it's worth a look.
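
One rough way to size that working set, assuming the pg_buffercache contrib
extension is available, is to look at how much of shared_buffers is actually
occupied after the server has been under normal load for a while:

-- Requires the pg_buffercache contrib module; 8192 assumes the default
-- block size.
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

SELECT count(*)                        AS buffers_in_use,
       pg_size_pretty(count(*) * 8192) AS approx_size
FROM pg_buffercache
WHERE relfilenode IS NOT NULL;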

-- 
Bill Moran 




[GENERAL] pgtune and massive shared_buffers recommendation

2014-05-21 Thread Stuart Bishop
Hi.

I've got some boxes with 128GB of RAM and up to 750 connections, just
upgraded to 9.3 so I'm revising my tuning. I'm getting a
recommendation from pgtune to bump my shared_buffers up to 30GB and
work_mem to 80MB. Is a shared_buffers this high now sane?

The PostgreSQL reference doesn't make recommendations on limits, but
it didn't either with earlier versions of PostgreSQL where more than a
few GB was normally a bad thing to do. The most recent blog posts I
see mentioning 9.3 and modern RAM sizes still seem to cap it at 8GB.

(and yes, I am using pgbouncer but stuck in session mode and up to 750
connections for the time being)

-- 
Stuart Bishop 
http://www.stuartbishop.net/




Re: [GENERAL] Need help on triggers - postgres 9.1.2

2014-05-21 Thread Adrian Klaver

On 05/21/2014 02:54 AM, Khangelani Gama wrote:

Hi

I have a postgres 9 database; inside this database I need to create a
new table called center_changed that gets a row inserted for any change that
takes place in a table called center. So I need to create a trigger to do
this.

Example:

Table name: center(c_cde, c_dsc, ops_cde, grp_cde);

Now on the table called center, I need to create an INSERT and UPDATE
trigger that will insert the c_cde of the inserted or updated center row
into the center_changed table.

Please help me

I have this syntax below, but please help me with the overall query.

CREATE TRIGGER check_center

BEFORE INSERT OR UPDATE

ON center FOR EACH ROW

EXECUTE PROCEDURE check_center_changes();


See here:

http://www.postgresql.org/docs/9.3/interactive/plpgsql-trigger.html

Example 40-4. A PL/pgSQL Trigger Procedure For Auditing
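
Adapted to the table in the question, a minimal sketch might look like this
(center_changed's definition is assumed here to hold just the code and a
timestamp, and c_cde is assumed to be text):

CREATE TABLE center_changed (
    c_cde      text,                       -- assumed type; match center.c_cde
    changed_at timestamptz DEFAULT now()
);

CREATE OR REPLACE FUNCTION check_center_changes() RETURNS trigger AS
$$
BEGIN
    -- Record the code of the row that was inserted or updated.
    INSERT INTO center_changed (c_cde) VALUES (NEW.c_cde);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER check_center
    AFTER INSERT OR UPDATE ON center
    FOR EACH ROW
    EXECUTE PROCEDURE check_center_changes();

An AFTER trigger is used here so the audit row reflects the values actually
stored.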



Thanks



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] LDAP authentication not working

2014-05-21 Thread Jürgen Fuchsberger

>  Original Message 
> Subject:  Re: [GENERAL] LDAP authentication not working
> Resent-From:  
> Date: Wed, 14 May 2014 06:47:45 -1000
> From: Stephan Fabel 
> To:   Magnus Hagander 
> CC:   Postgres List , Jürgen Fuchsberger
> 
>
>
>
> On May 14, 2014 12:56 AM, "Magnus Hagander"  > wrote:
>> On Wed, May 14, 2014 at 11:48 AM, Jürgen Fuchsberger
>  > wrote:
>>>
>>>
>>> On 05/14/2014 09:10 AM, Magnus Hagander wrote:
 On Wed, May 14, 2014 at 8:35 AM, Stephan Fabel  
 >> wrote:

  I don't think SSL support for LDAP is supported. Have you tried TLS
  on port 389?
>>> This does not work with our LDAP server (seems it is not configured to
>>> support TLS)
> Enabling TLS on OpenLDAP is trivial, especially if you have SSL enabled
> already. Ask your SysAdmin.
>
>
I did - as far as he knows we can not use both at the same time.

Juergen.




[GENERAL] Need help on triggers - postgres 9.1.2

2014-05-21 Thread Khangelani Gama
Hi



I have a postgres 9 database; inside this database I need to create a new
table called center_changed that gets a row inserted for any change that
takes place in a table called center. So I need to create a trigger to do
this.

Example:

Table name: center(c_cde, c_dsc, ops_cde, grp_cde);

Now on the table called center, I need to create an INSERT and UPDATE
trigger that will insert the c_cde of the inserted or updated center row
into the center_changed table.

Please help me

I have this syntax below, but please help me with the overall query.

CREATE TRIGGER check_center
   BEFORE INSERT OR UPDATE
   ON center FOR EACH ROW
   EXECUTE PROCEDURE check_center_changes();

Thanks





Re: [GENERAL] Force specific index disuse

2014-05-21 Thread Oleg Bartunov
We routinely use plantuner. http://www.sai.msu.su/~megera/wiki/plantuner
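
Roughly, the usage looks like this (the index and table names below are
placeholders, and the module has to be installed first; the wiki page above
has the full set of options):

LOAD 'plantuner';
SET plantuner.disable_index = 'big_table_candidate_idx';

-- The planner now pretends that index does not exist, so EXPLAIN and timing
-- runs show what the queries would do after dropping it.
EXPLAIN ANALYZE
SELECT * FROM big_table WHERE some_col = 42;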



On Tue, May 20, 2014 at 12:38 PM, Steve Crawford wrote:
> Is there a way to force a specific index to be removed from consideration in
> planning a single query?
>
> Specifically, on a 60-million-row table I have an index that is a candidate
> for removal. I have identified the sets of nightly queries that use the
> index but before dropping it I would like to run EXPLAIN and do timing tests
> on the queries to see the impact of not having that index available and
> rewrite the query to efficiently use other indexes if necessary.
>
> Cheers,
> Steve
>
>
>

