Re: [GENERAL] postgres and xquery

2013-05-14 Thread John R Pierce

On 5/13/2013 9:31 PM, kristian kvilekval wrote:
  I saw on the mailing list that back in 2010 there was some 
discussion of using Zorba to add xquery capabilities to postgres.  I 
was wondering if  any progress had been made on this and what the 
current interest levels of supporting xml in postgres are?


Zorba has a JDBC database interface, and JDBC can talk to postgres.
This would let you use Zorba to query a postgres database via XQuery.
Isn't that what you want?



--
john r pierce  37N 122W
somewhere on the middle of the left coast



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] FATAL: database a/system_data does not exist

2013-05-14 Thread sumita
I set log_statement to 'all' and turned on log_connections too.
I get the following:
May 14 09:18:45 LOG:  execute unnamed: SELECT gid FROM pg_prepared_xacts
May 14 09:18:45 LOG:  connection received: host=127.0.0.1 port=55618
May 14 09:18:45 LOG:  connection authorized: user=postgres
database=a/system_data
May 14 09:18:45 FATAL:  database a/system_data does not exist


Do you think defining search_path for the postgres user role would do the trick?
Will search_path lead to any other issues?



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/FATAL-database-a-system-data-does-not-exist-tp5754839p5755384.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] FATAL: database a/system_data does not exist

2013-05-14 Thread sumita
Further analysis of my product code does not show this query being fired from
the product code at an interval.
I am not sure which application is invoking this every 130 seconds.



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/FATAL-database-a-system-data-does-not-exist-tp5754839p5755390.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] FATAL: database a/system_data does not exist

2013-05-14 Thread sumita
Has anything changed in terms of search_path in postgres 9.2.4?
I have upgraded postgres from 9.1.3, where I do not see these errors in the
log file.



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/FATAL-database-a-system-data-does-not-exist-tp5754839p5755399.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Vacuum problem

2013-05-14 Thread S H
 I wonder if you've got a bloated pg catalog then. Certainly sounds
 like it's a possibility.
 So other than vacuuming when you recreate this, is the server working
 hard? What is vacuum vacuuming when this happens (pg_stat_activity
 should show that)
Is vacuum full required to avoid bloating? I am running vacuum analyze
regularly but not vacuum full.
Could that be the cause of the bloating?

[GENERAL] Running out of memory on vacuum

2013-05-14 Thread Ioana Danes
Hi all,

I have a production database that sometimes runs out of memory at nightly vacuum.

The application runs typically with around 40 postgres connections but there
are times when the connections increase because of some queries going on. The
reason is that the operations are slow, the terminals time out and try to
reconnect using new connections. Some time ago I started to have problems with
too many connections being open so I lowered the limit to 300 connections. It
was all good until recently when even with 180 connections I am running out of
memory on vacuum... So the connections increase to 180 and the system still
runs properly for another 2 days but then the nightly vacuum runs out of memory.
The fix is to restart postgres ... If I only close the connections the problem
is still there so I need to restart postgres.
If I don't restart postgres then the system will run out of memory on queries
at a point...
Another important thing is that during vacuum at 1am nothing else is going on
that server so all the connections are idle.

2013-05-14 06:53:51.449
CST,postgres,abrazo,8741,[local],519233dc.2225,3,VACUUM,2013-05-14
06:53:48 CST,174/67143,0,ERROR,53200,out of memory,Failed on
request of size 668141820.,vacuum;,,,psql

OS:

SUSE Linux Enterprise Server 11 (x86_64)
VERSION = 11
PATCHLEVEL = 2

SUSE-compiled postgres version:

 PostgreSQL 9.0.3 on x86_64-suse-linux-gnu, compiled
by GCC gcc (SUSE Linux) 4.3.4 [gcc-4_3-branch revision 152973], 64-bit

Postgres configuration parameters:

max_connections = 300
shared_buffers = 2GB
maintenance_work_mem = 2GB
effective_cache_size = 4GB
work_mem = 2MB
wal_buffers = 1MB


checkpoint_segments = 16
autovacuum = on


wal_level = hot_standby
archive_mode = on
archive_command = 'test -f
/cbnDBscripts/tmp/PITR_primarydb_stop_backup || rsync --timeout=30 -atz %p
spsnidb1:/data01/wal_files/%f'
archive_timeout = 60


free
                 total       used      free     shared    buffers     cached
Mem:           7880512    7825136      55376          0      72376    4537644
-/+ buffers/cache:        3215116    4665396
Swap:          2097148          0    2097148

/etc/sysctl.conf

kernel.shmmax=2684354560
kernel.shmall=2684354560
vm.swappiness=0
vm.overcommit_memory=2

postgres log:
TopMemoryContext: 179088 total in 18 blocks; 8608 free (13 chunks); 170480 used
  TopTransactionContext: 24576 total in 2 blocks; 21888 free (43 chunks); 2688 used
  Attopt cache: 57344 total in 3 blocks; 25824 free (5 chunks); 31520 used
  Type information cache: 24576 total in 2 blocks; 11888 free (5 chunks); 12688 used
  MessageContext: 8192 total in 1 blocks; 4952 free (1 chunks); 3240 used
  Operator class cache: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  smgr relation table: 57344 total in 3 blocks; 21984 free (10 chunks); 35360 used
  TransactionAbortContext: 32768 total in 1 blocks; 32736 free (0 chunks); 32 used
  Portal hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  PortalMemory: 8192 total in 1 blocks; 7888 free (0 chunks); 304 used
    PortalHeapMemory: 2048 total in 1 blocks; 2016 free (3 chunks); 32 used
  Vacuum: 24576 total in 2 blocks; 15392 free (0 chunks); 9184 used
  Relcache by OID: 24576 total in 2 blocks; 6592 free (3 chunks); 17984 used
  CacheMemoryContext: 2390704 total in 22 blocks; 641048 free (4 chunks); 1749656 used
    pk_cbntransaction: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
    rel_transactionretailerid_fk: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
    rel_transaction_promotion_fk: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
    reference_64_fk: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
    reference_49_fk: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
    reference_46_fk: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
    reference_28_fk: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
    ix_transaction_date: 2048 total in 1 blocks; 504 free (0 chunks); 1544 used
    ix_referenceid_transac: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
    ix_onlinetick_cbntrans: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
    ix_drawid_transaction: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
    ix_cbntransaction_terminalid_transactiondate: 2048 total in 1 blocks; 656 free (0 chunks); 1392 used
    ix_cbntransaction_purchaseid: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
    pk_cardactivity: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
    pk_business_type: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
    pg_toast_207856_index: 2048 total in 1 blocks; 656 free (0 chunks); 1392 used
    pk_bank: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
    ix_audit_1: 2048 total in 1

Re: [GENERAL] FATAL: database a/system_data does not exist

2013-05-14 Thread Adrian Klaver

On 05/14/2013 03:30 AM, sumita wrote:

I set log_statement to 'all' and turned on log_connections too.
I get the following:
May 14 09:18:45 LOG:  execute unnamed: SELECT gid FROM pg_prepared_xacts
May 14 09:18:45 LOG:  connection received: host=127.0.0.1 port=55618
May 14 09:18:45 LOG:  connection authorized: user=postgres
database=a/system_data

   ^



May 14 09:18:45 FATAL:  database a/system_data does not exist


Do you think defining search_path for the postgres user role would do the trick?
Will search_path lead to any other issues?


I don't think this has anything to do with search_path. search_path is 
used to find schemas within a database. The error is reporting that it 
cannot even find the database.

So:

1) Do you actually have a database named a/system_data?

2) In order for me to create such a name I had to quote it:
test=# create database a/system_data;
ERROR:  syntax error at or near "/"
LINE 1: create database a/system_data;
                         ^
test=# create database "a/system_data";
CREATE DATABASE

Are you sure that when it was quoted there was not an empty space put in?
Or that when it was created there was mixed case involved?
From psql, do \l to get a listing of the databases.
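
A related check (a minimal sketch; the brackets are only there to expose any
leading or trailing spaces hiding in a stored name):

    select '[' || datname || ']' as bracketed_name from pg_database;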





--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/FATAL-database-a-system-data-does-not-exist-tp5754839p5755384.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.





--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Running out of memory on vacuum

2013-05-14 Thread Igor Neyman
 Subject: [GENERAL] Running out of memory on vacuum
 
 Hi all,
 
 I have a production database that sometimes runs out of memory at
 nightly vacuum.
 
 The application runs typically with around 40 postgres connections but
 there are times when the connections increase because of some queries
 going on. The reason is that the operations are slow, the terminals
 time out and try to reconnect using new connections. Some time ago I
 started to have problems with too many connections being open so I
 lowered the limit to
 300 connections. It was all good until recently when even with 180
 connections I am running out of memory on vacuum... So the connections
 increase to 180 and the system still runs properly for another 2 days
 but then the nightly vacuum runs out of memory.
 The fix is to restart postgres ... If I only close the connections the
 problem is still there so I need to restart postgres.
 If I don't restart postgres then the system will run out of memory on
 queries at a point...
 Another important thing is that during vacuum at 1am nothing else is
 going on that server so all the connections are idle.
 
 2013-05-14 06:53:51.449
 CST,postgres,abrazo,8741,[local],519233dc.2225,3,VACUUM,2013-05-14
 06:53:48 CST,174/67143,0,ERROR,53200,out of memory,Failed on request
 of size 668141820.,vacuum;,,,psql
 
 OS:
 
 SUSE Linux Enterprise Server 11 (x86_64) VERSION = 11 PATCHLEVEL = 2
 
 SUSE-compiled postgres version:
 
  PostgreSQL 9.0.3 on x86_64-suse-linux-gnu, compiled by GCC gcc (SUSE
 Linux) 4.3.4 [gcc-4_3-branch revision 152973], 64-bit
 
 Postgres configuration parameters:
 
 max_connections = 300
 shared_buffers = 2GB
 maintenance_work_mem = 2GB
 effective_cache_size = 4GB
 work_mem = 2MB
 wal_buffers = 1MB
 
 
 checkpoint_segments = 16
 autovacuum = on
 
 
 wal_level = hot_standby
 archive_mode = on
 archive_command = 'test -f
 /cbnDBscripts/tmp/PITR_primarydb_stop_backup || rsync --timeout=30 -atz
 %p spsnidb1:/data01/wal_files/%f'
 archive_timeout = 60
 
 
 free
 
                  total       used      free     shared    buffers     cached
 Mem:           7880512    7825136      55376          0      72376    4537644
 -/+ buffers/cache:        3215116    4665396
 Swap:          2097148          0    2097148
 
 /etc/sysctl.conf
 
 kernel.shmmax=2684354560
 kernel.shmall=2684354560
 vm.swappiness=0
 vm.overcommit_memory=2
 
.
 
 Thanks,
 ioana


1. You have autovacuum running.  Is there a specific reason to run manual 
vacuum nightly?
2. Reduce maintenance_work_mem; you may have manual vacuum and autovacuum 
running at the same time, each requesting 2GB (your current setting).
3. Use connection pooling software (e.g. PgBouncer) and reduce max_connections.
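
For point 2 above, a minimal sketch of lowering the setting for a single
manual run (my_table is a hypothetical table name; on 9.0 the cluster-wide
default still has to be changed in postgresql.conf):

    show maintenance_work_mem;
    set maintenance_work_mem = '256MB';  -- session-local, affects only this session's VACUUM
    vacuum verbose analyze my_table;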

Igor Neyman




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Running out of memory on vacuum

2013-05-14 Thread Ioana Danes
Hi Igor,

1. I could remove the nightly vacuum but I think that is not the cause. The
vacuum is only catching the problem. If I ignore the vacuum message for a few
days the system is going to run out of memory on queries...
2. There is no autovacuum running at the same time. I tried to run vacuum
verbose manually and checked what else was going on on the server.
I also reduced the maintenance work mem to 1 GB but I get the same error.
3. I do use connection pooling. I have 1500 terminals selling and at busy
times I might need more than 100 active connections, but just occasionally...

Thanks for the quick response,



- Original Message -
From: Igor Neyman iney...@perceptron.com
To: Ioana Danes ioanasoftw...@yahoo.ca; PostgreSQL General 
pgsql-general@postgresql.org
Cc: 
Sent: Tuesday, May 14, 2013 10:11:19 AM
Subject: Re: [GENERAL] Running out of memory on vacuum

 Subject: [GENERAL] Running out of memory on vacuum
 
 Hi all,
 
 I have a production database that sometimes runs out of memory at
 nightly vacuum.
 
 The application runs typically with around 40 postgres connections but
 there are times when the connections increase because of some queries
 going on. The reason is that the operations are slow, the terminals
 time out and try to reconnect using new connections. Some time ago I
 started to have problems with too many connections being open so I
 lowered the limit to
 300 connections. It was all good until recently when even with 180
 connections I am running out of memory on vacuum... So the connections
 increase to 180 and the system still runs properly for another 2 days
 but then the nightly vacuum runs out of memory.
 The fix is to restart postgres ... If I only close the connections the
 problem is still there so I need to restart postgres.
 If I don't restart postgres then the system will run out of memory on
 queries at a point...
 Another important thing is that during vacuum at 1am nothing else is
 going on that server so all the connections are idle.
 
 2013-05-14 06:53:51.449
 CST,postgres,abrazo,8741,[local],519233dc.2225,3,VACUUM,2013-05-14
 06:53:48 CST,174/67143,0,ERROR,53200,out of memory,Failed on request
 of size 668141820.,vacuum;,,,psql
 
 OS:
 
 SUSE Linux Enterprise Server 11 (x86_64) VERSION = 11 PATCHLEVEL = 2
 
 SUSE-compiled postgres version:
 
  PostgreSQL 9.0.3 on x86_64-suse-linux-gnu, compiled by GCC gcc (SUSE
 Linux) 4.3.4 [gcc-4_3-branch revision 152973], 64-bit
 
 Postgres configuration parameters:
 
 max_connections = 300
 shared_buffers = 2GB
 maintenance_work_mem = 2GB
 effective_cache_size = 4GB
 work_mem = 2MB
 wal_buffers = 1MB
 
 
 checkpoint_segments = 16
 autovacuum = on
 
 
 wal_level = hot_standby
 archive_mode = on
 archive_command = 'test -f
 /cbnDBscripts/tmp/PITR_primarydb_stop_backup || rsync --timeout=30 -atz
 %p spsnidb1:/data01/wal_files/%f'
 archive_timeout = 60
 
 
 free
 
                  total       used      free     shared    buffers     cached
 Mem:           7880512    7825136      55376          0      72376    4537644
 -/+ buffers/cache:        3215116    4665396
 Swap:          2097148          0    2097148
 
 /etc/sysctl.conf
 
 kernel.shmmax=2684354560
 kernel.shmall=2684354560
 vm.swappiness=0
 vm.overcommit_memory=2
 
.
 
 Thanks,
 ioana


1. You have autovacuum running.  Is there a specific reason to run manual 
vacuum nightly?
2. Reduce maintenance_work_mem; you may have manual vacuum and autovacuum 
running at the same time, each requesting 2GB (your current setting).
3. Use connection pooling software (e.g. PgBouncer) and reduce max_connections.

Igor Neyman




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgres and xquery

2013-05-14 Thread kristian kvilekval
I was really hoping to be able to store several tens of millions of XML
documents in postgres, but I would also like to use XQuery to retrieve
results.  Back in 2010 there was some mailing list discussion about
integrating the XQuery processor of Zorba into postgres.  I was trying to
gauge the interest level and whether anybody had attempted it.  As you say, JSON
has stolen all the thunder, and in fact the Zorba people have worked on
JSONiq (an XQuery processor for JSON data structures), but our project uses
XML.  We like the flexibility you get with XQuery and I am looking around
for some solutions.


On Tue, May 14, 2013 at 12:37 AM, John R Pierce pie...@hogranch.com wrote:

 On 5/13/2013 9:31 PM, kristian kvilekval wrote:

   I saw on the mailing list that back in 2010 there was some discussion
 of using Zorba to add xquery capabilities to postgres.  I was wondering if
  any progress had been made on this and what the current interest levels of
 supporting xml in postgres are?


 Zorba has a JDBC database interface, and JDBC can talk to postgres.   This
 would let you use Zorba to query a postgres database via XQuery.  Isn't
 that what you want?



 --
 john r pierce  37N 122W
 somewhere on the middle of the left coast



 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



[GENERAL] bloating vacuum

2013-05-14 Thread S H
I am doing some experiments to understand the behaviour of manual vacuum.

I created a small table and started doing insertions/deletions/updates on 2
rows in an infinite loop. It bloated to around 844 times its size, but after
that it stopped bloating. What could be the reason?
In between, I am running manual vacuum analyze (without the full option).



  

Re: [GENERAL] bloating vacuum

2013-05-14 Thread Reid Thompson
On Tue, 2013-05-14 at 14:51 +, S H wrote:
 I am doing some experiments to understand the behaviour of manual
 vacuum.
 
 I created a small table and started doing insertions/deletions/updates on
 2 rows in an infinite loop. It bloated to around 844 times its size, but
 after that it stopped bloating. What could be the reason?
 
 In between, I am running manual vacuum analyze (without the full option)

The explanation is described here:
 http://www.postgresql.org/docs/9.1/static/sql-vacuum.html




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgres and xquery

2013-05-14 Thread Rob Sargent

On 05/14/2013 08:48 AM, kristian kvilekval wrote:
I was really hoping to be able to store several tens of millions of 
XML documents in postgres, but I would also like to use XQuery to 
retrieve results.  Back in 2010 there was some mailing list discussion 
about integrating the XQuery processor of Zorba into postgres. 
 I was trying to gauge the interest level and whether anybody had attempted 
it.  As you say, JSON has stolen all the thunder, and in fact the 
Zorba people have worked on JSONiq (an XQuery processor for JSON data 
structures), but our project uses XML.  We like the flexibility you 
get with XQuery and I am looking around for some solutions.



On Tue, May 14, 2013 at 12:37 AM, John R Pierce pie...@hogranch.com wrote:


On 5/13/2013 9:31 PM, kristian kvilekval wrote:

  I saw on the mailing list that back in 2010 there was some
discussion of using Zorba to add xquery capabilities to
postgres.  I was wondering if  any progress had been made on
this and what the current interest levels of supporting xml in
postgres are?


Zorba has a JDBC database interface, and JDBC can talk to
postgres.   This would let you use Zorba to query a postgres
database via XQuery.  Isn't that what you want?



-- 
john r pierce  37N 122W

somewhere on the middle of the left coast



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Have you tried xpath in postgres yet?  We store xml, but do very little 
xpathing against it.  I do it mainly for troubleshooting. We have a 
problem with multiple namespaces used in our content. Because of that 
one is often left searching for '//*/name()=some-value' to get around 
the namespace differences inter- and intra-record.  But xpath and 
particularly xpath_table work quite well even with that constraint.
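
For reference, a minimal sketch of the namespace-agnostic local-name() idiom
being described (the table and column names here are hypothetical, assuming a
table documents with an xml column doc):

    select xpath('//*[local-name()=''storename'']/text()', doc)
      from documents;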


Re: [GENERAL] bloating vacuum

2013-05-14 Thread bricklen
On Tue, May 14, 2013 at 7:51 AM, S H msq...@live.com wrote:

   I created a small table and started doing insertions/deletions/updates on
 2 rows in an infinite loop. It bloated to around 844 times its size, but after
 that it stopped bloating. What could be the reason?


Did autovacuum kick in and clean up the table?
Is autovacuum on? Type:
show autovacuum;

You can watch the dead tuple count using pgstattuple* (an extension which
can be installed via create extension pgstattuple;).
Once installed:
select * from pgstattuple('name_of_your_table');

* http://www.postgresql.org/docs/current/static/pgstattuple.html


 In between, I am running manual vacuum analyze (without the full option)

After every iteration?




Re: [GENERAL] Running out of memory on vacuum

2013-05-14 Thread Igor Neyman


 -Original Message-
 From: Ioana Danes [mailto:ioanasoftw...@yahoo.ca]
 Sent: Tuesday, May 14, 2013 10:30 AM
 To: Igor Neyman; PostgreSQL General
 Subject: Re: [GENERAL] Running out of memory on vacuum
 
 Hi Igor,
 
 1. I could remove the nightly vacuum but I think that is not the
 cause. The vacuum is only catching the problem. If I ignore the vacuum
 message for a few days the system is going to run out of memory on
 queries...
 2. There is no autovacuum running at the same time. I tried to run
 vacuum verbose manually and checked what else was going on on the
 server.
 I also reduced the maintenance work mem to 1 GB but I get the same
 error.
 3. I do use connection pooling. I have 1500 terminals selling and at busy
 times I might need more than 100 active connections, but just
 occasionally...
 
 Thanks for the quick response,
 
 

You still didn't explain why you need manual vacuuming.
You have autovacuum set on, so it'll wake up every so often and do its job based 
on the other autovacuum config parameters.

What kind of connection pooling are you using?
Is it set for session pooling, or transaction, or statement pooling?

Having more than 100 active connections at a time does not mean that all of 
them are executing queries at the same time.
Unless you have a lot of processor cores (and I mean a LOT!), it does not make 
sense to set the pool at 180, even less so at 300.

Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] bloating vacuum

2013-05-14 Thread S H
Got some information from the following:
http://www.depesz.com/2011/07/06/bloat-happens/
What is the general solution to avoid bloating?
 On Tue, 2013-05-14 at 14:51 +, S H wrote:
  I am doing some experiments to understand the behaviour of manual
  vacuum.
  
  I created a small table and started doing insertions/deletions/updates on
  2 rows in an infinite loop. It bloated to around 844 times its size, but
  after that it stopped bloating. What could be the reason?
  
  In between, I am running manual vacuum analyze (without the full option)
 
 Explanation is described here
  http://www.postgresql.org/docs/9.1/static/sql-vacuum.html
 
 
 
  

Re: [GENERAL] postgres and xquery

2013-05-14 Thread Merlin Moncure
On Tue, May 14, 2013 at 9:48 AM, kristian kvilekval k...@cs.ucsb.edu wrote:

 I was really hoping to be able to store several tens of millions of XML
 documents in postgres, but I would also like to use XQuery to retrieve
 results.  Back in 2010 there was some mailing list discussion about
 integrating the XQuery processor of Zorba into postgres.  I was trying to
 gauge the interest level and whether anybody had attempted it.  As you say,
 JSON has stolen all the thunder, and in fact the Zorba people have worked on
 JSONiq (an XQuery processor for JSON data structures), but our project uses
 XML.  We like the flexibility you get with XQuery and I am looking around
 for some solutions.

I'd advise refining your requirements so that you can search your xml with
xpath and then do transformations in sql -- this is more powerful than
it looks on the surface.  If you must use xquery, the shortest path to
get there is possibly through a backend pl (say perl or python) so
that you can tap into fancy xml support there.  If not xpath, then
xslt -- xquery is just a mess IMNSHO and should be avoided, especially
for new programming efforts.

JSON is superior in just about every way -- less verbose, more regular
syntax, faster parsing.  JSON is also very tightly integrated to
postgres (especially with coming 9.3) so that serialization to/from
database data is automagical and essentially eliminates the need for
the querying/construction steps that make dealing with xml such a bear
(especially when serializing from database).  So much so that if I was
tasked with complex xml transformations in the database I'd maybe
consider converting them to json, doing the transformation, then back
to xml again.
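
As a tiny illustration of the serialization he mentions (a sketch;
row_to_json() appeared in 9.2, so this assumes a 9.2-or-later server):

    select row_to_json(r) from (select 42 as id, 'widget' as name) r;
    -- {"id":42,"name":"widget"}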

merlin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Vacuum problem

2013-05-14 Thread Scott Marlowe
On Tue, May 14, 2013 at 7:27 AM, S H msq...@live.com wrote:
 I wonder if you've got a bloated pg catalog then. Certainly sounds
 like it's a possibility.
 So other than vacuuming when you recreate this, is the server working
 hard? What is vacuum vacuuming when this happens (pg_stat_activity
 should show that)

 Is vacuum full required to avoid bloating? I am running vacuum analyze
 regularly but not vacuum full.

Normally vacuum full is NOT required on a regular basis. However, if
you did something like creating 100M tables and then dropping them, or
did it one at a time real fast, you could outrun your autovacuum
daemons and get bloat in the pg catalog tables.

Just offering a possibility for why a connection might be taking a
long time. There's plenty of other possibilities I'm sure.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Running out of memory on vacuum

2013-05-14 Thread Ioana Danes


Hi Igor,

I don't need the manual vacuum. I just don't want to remove it now because it 
gives me a clue that something is wrong and I need to restart postgres to free 
up the memory. Otherwise I run out of memory later in the day and that is 
something I want to avoid.

Even if I don't need it I don't think it is normal to run out of memory in 
these conditions. (I had it off for a while and I did run out of memory on 
pg_dump).


We use jdbc connection pooling and it does make sense to have it higher than 
100 at some points during the day. Anyhow at the time of the vacuum there is 
nothing else going on on the database. Sales are off.

Thanks,



- Original Message -
From: Igor Neyman iney...@perceptron.com
To: Ioana Danes ioanasoftw...@yahoo.ca; PostgreSQL General 
pgsql-general@postgresql.org
Cc: 
Sent: Tuesday, May 14, 2013 11:06:25 AM
Subject: Re: [GENERAL] Running out of memory on vacuum



 -Original Message-
 From: Ioana Danes [mailto:ioanasoftw...@yahoo.ca]
 Sent: Tuesday, May 14, 2013 10:30 AM
 To: Igor Neyman; PostgreSQL General
 Subject: Re: [GENERAL] Running out of memory on vacuum
 
 Hi Igor,
 
 1. I could remove the nightly vacuum but I think that is not the
 cause. The vacuum is only catching the problem. If I ignore the vacuum
 message for a few days the system is going to run out of memory on
 queries...
 2. There is no autovacuum running at the same time. I tried to run
 vacuum verbose manually and checked what else was going on on the
 server.
 I also reduced the maintenance work mem to 1 GB but I get the same
 error.
 3. I do use connection pooling. I have 1500 terminals selling and at busy
 times I might need more than 100 active connections, but just
 occasionally...
 
 Thanks for the quick response,
 
 

You still didn't explain why you need manual vacuuming.
You have autovacuum set on, so it'll wake up every so often and do its job based 
on the other autovacuum config parameters.

What kind of connection pooling are you using?
Is it set for session pooling, or transaction, or statement pooling?

Having more than 100 active connections at a time does not mean that all of 
them are executing queries at the same time.
Unless you have a lot of processor cores (and I mean a LOT!), it does not make 
sense to set the pool at 180, even less so at 300.

Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] bloating vacuum

2013-05-14 Thread Scott Marlowe
On Tue, May 14, 2013 at 9:06 AM, S H msq...@live.com wrote:
 Got some information from the following:

 http://www.depesz.com/2011/07/06/bloat-happens/

 What is the general solution to avoid bloating?

1: Don't do massive deletes
2: Make sure your autovacuum is tuned aggressively enough to keep up
with your workload
3: Make sure your hardware is fast enough to allow autovacuum to be
tuned aggressively enough to keep up with your workload.

Some minor bloating is fine. 10, 20, 50% dead space in a table is no
big deal. OTOH, 99.99% dead space probably is.
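
A rough way to see whether autovacuum is keeping up (a sketch using the
standard statistics views; the 10-row cutoff is arbitrary):

    select relname, n_live_tup, n_dead_tup, last_autovacuum
      from pg_stat_user_tables
     order by n_dead_tup desc
     limit 10;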


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Running out of memory on vacuum

2013-05-14 Thread Igor Neyman


 -Original Message-
 From: Ioana Danes [mailto:ioanasoftw...@yahoo.ca]
 Sent: Tuesday, May 14, 2013 11:29 AM
 To: Igor Neyman; PostgreSQL General
 Subject: Re: [GENERAL] Running out of memory on vacuum
 
 
 
 Hi Igor,
 
 I don't need the manual vacuum. I just don't want to remove it now
 because it gives me a clue that something is wrong and I need to
 restart postgres to free up the memory. Otherwise I run out of memory
 later in the day and that is something I want to avoid.
 
 Even if I don't need it I don't think it is normal to run out of
 memory in these conditions. (I had it off for a while and I did run out
 of memory on pg_dump).
 
 
 We use jdbc connection pooling and it does make sense to have it higher
 than 100 at some points during the day. Anyhow at the time of the
 vacuum there is nothing else going on on the database. Sales are off.
 
 Thanks,
 

Ioana,

You are probably correct that manual vacuum just highlights the problem, 
causing it to occur sooner rather than later.

The real problem is the number of connections; your hardware isn't good enough 
to support over 100 connections.
A connection pooler is needed, but not JDBC connection pooling, which probably 
does nothing for you.

I'd suggest that a server-side connection pooler would fix your problem, if set 
up properly.
Take a look at PgBouncer.  It is lightweight, very easy to install and 
configure.  I've been using it for years and have nothing but praise for it.

Igor Neyman 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Running out of memory on vacuum

2013-05-14 Thread Scott Marlowe
On Tue, May 14, 2013 at 8:30 AM, Ioana Danes ioanasoftw...@yahoo.ca wrote:
 Hi Igor,

 1. I could remove the nightly vacuum but I think that is
  not the cause. The vacuum is only catching the problem. If I ignore the
  vacuum message for a few days the system is going to run out of memory on
 queries...

You should be able to run vacuum any time really, so yeah no great
need to kill it off right away.

 2. There is no autovacuum running at the same time. I
 tried to run vacuum verbose manually and checked what else was going on
 on the server.
 I also reduced the maintenance work mem to 1 GB but I get the same error.

Set it to something MUCH lower. Like 256MB or something.
Also set your shared_buffers lower if you can; 2G on an 8G machine (I
think that's what you have) is pretty high, and if you're running out
of memory, it's definitely not helping. By the time you run out of
memory the OS is likely swapping out your shared_buffers (it doesn't
know any better, it's just one more thing to swap out, and if some
part isn't getting accessed a lot it gets swapped out to make room)
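
A quick way to see what the server is actually running with (a sketch against
the standard pg_settings view):

    select name, setting, unit
      from pg_settings
     where name in ('shared_buffers', 'work_mem', 'maintenance_work_mem');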

How big is your swap? How much is getting used? How much memory can
you put in this machine? My laptop has 12G, my netbook has 8G, my
production servers have 512GB to 1TB of memory. 8GB was a typical
memory size for a medium sized DB server about 8 or so years ago.
Memory is cheap, downtime and troubleshooting are not.

 3. I do use connection pooling. I have 1500 terminals selling and at busy
 times I might need more than 100 active connections but just
 occationally...

You can pool and they should just queue, depending on what method of
pooling you're using. That said I've seen perfectly reasonable
behaviour on larger machines with ~500 or more connections (when
you've got HUNDREDS of servers needing a dozen persistent connections
each you just have to deal sometimes I guess).

So, tl;dr: Get more memory or lower your shared_buffers /
maintenance_work_mem to something like 512MB each and see if that helps. Also
see what ELSE is using all your memory when this happens. A stack
trace is ok, but something like top with M for sorting is probably
more useful.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Vacuum problem

2013-05-14 Thread S H
I am doing regular insertions/deletions on the same tables and running vacuum
every hour...
I suspect there is bloating in my tables, but how bloating in the pg catalog
is happening is not clear...

 Normally vacuum full is NOT required on a regular basis. However, if
 you did something like creating 100M tables and then dropped them, or
 did it one at a time real fast, you could outrun your autovacuum
 daemons and get bloat in the pg catalog tables.
 
 Just offering a possibility for why a connection might be taking a
 long time. There's plenty of other possibilities I'm sure.
  

Re: [GENERAL] Vacuum problem

2013-05-14 Thread Scott Marlowe
Not saying you HAVE bloating there, saying you MIGHT.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Running out of memory on vacuum

2013-05-14 Thread Ioana Danes


Hi Scott,


I have been running with the same configuration for 2 years and this only 
started to happen 2-3 months ago. 

I agree that in some conditions that configuration might not be enough for over 
100 connections, and in that case I would expect the system to complain during 
those queries, but that does not happen. 

During the vacuum analyze there is nothing else going on on that server. Top 
shows only one process running and that is postgres with vacuum analyze (I also 
checked pg_stat_activity). The few connections from the application are 
IDLE and the processes are sleeping so they should not use much memory.
I think it is a memory leak when too many connections are open; otherwise I 
can't explain it. 

I will try to lower the parameters as you suggested but I have a feeling that 
this is only going to delay the behavior...

Thanks a lot for your response,
ioana



- Original Message -
From: Scott Marlowe scott.marl...@gmail.com
To: Ioana Danes ioanasoftw...@yahoo.ca
Cc: Igor Neyman iney...@perceptron.com; PostgreSQL General 
pgsql-general@postgresql.org
Sent: Tuesday, May 14, 2013 12:14:18 PM
Subject: Re: [GENERAL] Running out of memory on vacuum

On Tue, May 14, 2013 at 8:30 AM, Ioana Danes ioanasoftw...@yahoo.ca wrote:
 Hi Igor,

 1. I could remove the nightly vacuum but I think that is
  not the cause. The vacuum is only catching the problem. If I ignore the
  vacuum message for a few days the system is going to run out of memory on
 queries...

You should be able to run vacuum any time really, so yeah no great
need to kill it off right away.

 2. There is no autovacuum running at the same time. I
 tried to run vacuum verbose manually and checked what else was going on
 on the server.
 I also reduced the maintenance work mem to 1 GB but I get the same error.

Set it to something MUCH lower. Like 256MB or something.
Also set your shared_buffers lower if you can; 2G on an 8G machine (I
think that's what you have) is pretty high, and if you're running out
of memory, it's definitely not helping. By the time you run out of
memory the OS is likely swapping out your shared_buffers (it doesn't
know any better, it's just one more thing to swap out, and if some
part isn't getting accessed a lot it gets swapped out to make room)

How big is your swap? How much is getting used? How much memory can
you put in this machine? My laptop has 12G, my netbook has 8G, my
production servers have 512GB to 1TB of memory. 8GB was a typical
memory size for a medium sized DB server about 8 or so years ago.
Memory is cheap, downtime and troubleshooting are not.

 3. I do use connection pooling. I have 1500 terminals selling and at busy
 times I might need more than 100 active connections but just
 occasionally...

You can pool and they should just queue, depending on what method of
pooling you're using. That said I've seen perfectly reasonable
behaviour on larger machines with ~500 or more connections (when
you've got HUNDREDS of servers needing a dozen persistent connections
each you just have to deal sometimes I guess).

So, tl;dr: Get more memory or lower your shared_buffers /
maintenance_work_mem to something like 512MB each and see if that helps. Also
see what ELSE is using all your memory when this happens. A stack
trace is ok, but something like top with M for sorting is probably
more useful.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Running out of memory on vacuum

2013-05-14 Thread Scott Marlowe
Well definitely look at getting more memory in it if you can. 8G is
seriously pretty small.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Storing Special Characters

2013-05-14 Thread Rebecca Clarke
Hi there.

This may be the wrong forum to inquire in, but I'd be grateful if I could
be directed in the right direction if that is the case.

I am currently using Postgresql 9.1.

I have a table in which I want to store shop names. Some of the shop names
contain 'é' and '£'.

The query below works when I run it through pg_admin:

insert into retail(storename) values ('£'::character varying)

However, when I run the same query through pg_query in PHP, I get:

PHP Warning:  pg_query(): Query failed: ERROR:  invalid byte sequence for
encoding UTF8: 0xa3


I'd really like to find a solution to getting this code to work through
PHP. My PHP code is correct, as it executes other queries (which have
no special characters) without complaint. Do I need to make any alterations
in Postgresql? Is my sql wrong? What is the difference in how it's executed
through pgAdmin and PHP?

I've been relentlessly searching google for a solution to no avail.

Many thanks in advance.

R Clarke


Re: [GENERAL] Storing Special Characters

2013-05-14 Thread Leif Biberg Kristensen
On Tuesday 14 May 2013 at 18.05.05, Rebecca Clarke wrote:
 Hi there.
 
 This may be the wrong forum to inquire in, but I'd be grateful if I could
 be directed in the right direction if that is the case.
 
 I am currently using Postgresql 9.1.
 
 I have a table in which I want to store shop names. Some of the shop names
 contain 'é' and '£'.
 
 The query below works when I run it through pg_admin:
 
 insert into retail(storename) values ('£'::character varying)
 
 However, when I run the same query through pg_query in PHP, I get:
 
 PHP Warning:  pg_query(): Query failed: ERROR:  invalid byte sequence for
 encoding UTF8: 0xa3

It's possibly a client encoding problem. See the PHP documentation on

http://php.net/manual/en/function.pg-set-client-encoding.php

regards, Leif
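
The same check can be made from SQL (a minimal sketch; 'LATIN1' is only an
example -- use whatever encoding the PHP source file is actually saved in):

    show client_encoding;
    set client_encoding to 'LATIN1';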


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to clone a running master cluster?

2013-05-14 Thread Jeff Janes
On Sun, May 12, 2013 at 5:54 AM, Moshe Jacobson mo...@neadwerx.com wrote:

 That option doesn't seem to exist, but pg_basebackup does seem to be the
 way to go.
 Thanks for the help, Jeff!


--xlog-method was added in 9.2.

Cheers,

Jeff


Re: [GENERAL] Running out of memory on vacuum

2013-05-14 Thread Ioana Danes
I agree and I will. 
Now let me ask you this: how much memory would it be decent to put on a server 
with 2000 users creating transactions every 4-10 seconds (2 to 20 inserts) at 
peak times? I know more should be considered when taking such a decision but I 
would like to know your point of view at first sight...

Thanks



- Original Message -
From: Scott Marlowe scott.marl...@gmail.com
To: Ioana Danes ioanasoftw...@yahoo.ca
Cc: Igor Neyman iney...@perceptron.com; PostgreSQL General 
pgsql-general@postgresql.org
Sent: Tuesday, May 14, 2013 1:04:06 PM
Subject: Re: [GENERAL] Running out of memory on vacuum

Well definitely look at getting more memory in it if you can. 8G is
seriously pretty small.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Storing Special Characters

2013-05-14 Thread Paul Jungwirth
The UTF-8 encoding for a pound sign is 0xc2a3, not just 0xa3. You
might want to make sure your PHP file is correct. If you're on Linux,
you can use a command like `od --format=ax1 foo.php` to see the actual
byte values. If that shows the wrong value, then the problem is your
text editor is saving the file in the wrong encoding. If the value is
right, then the problem is further downstream.

Good luck!
Paul
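
The byte difference is also easy to demonstrate from psql (a small sketch; the
hex bytea literals assume standard_conforming_strings is on, the default in 9.1):

    select convert_from('\xc2a3'::bytea, 'UTF8');  -- returns £
    select convert_from('\xa3'::bytea, 'UTF8');
    -- ERROR:  invalid byte sequence for encoding "UTF8": 0xa3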


On Tue, May 14, 2013 at 10:15 AM, Leif Biberg Kristensen
l...@solumslekt.org wrote:
 On Tuesday 14 May 2013 at 18.05.05, Rebecca Clarke wrote:
 Hi there.

 This may be the wrong forum to inquire in, but I'd be grateful if I could
 be directed in the right direction if that is the case.

 I am currently using Postgresql 9.1.

 I have a table in which I want to store shop names. Some of the shop names
 contain 'é' and '£'.

 The query below works when I run it through pg_admin:

 insert into retail(storename) values ('£'::character varying)

 However, when I run the same query through pg_query in PHP, I get:

 PHP Warning:  pg_query(): Query failed: ERROR:  invalid byte sequence for
 encoding UTF8: 0xa3

 It's possibly a client encoding problem. See the PHP documentation on

 http://php.net/manual/en/function.pg-set-client-encoding.php

 regards, Leif


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



-- 
_
Pulchritudo splendor veritatis.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] check_postgres_last_vacuum for all databases

2013-05-14 Thread AI Rumman
Hi,

I have been setting up a Nagios alert for check_postgres_last_vacuum.
I went through the code and saw that I have to define the databases in
order to check all database tables. In my environment, database
creation and deletion is dynamic, so at any time anyone can create or delete
a database for their needs.
And I configured a manual vacuum process for all databases each night.

In this scenario, I was looking for a tool to check the last vacuum time on
the servers so that if any table's last vacuum date is more than N days old,
Nagios will send an alert.

Any ideas?
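
For a single database, the underlying check is straightforward (a sketch; it
has to be run in each database, which is exactly the limitation described
above; 7 days stands in for N):

    select relname,
           greatest(last_vacuum, last_autovacuum) as last_vacuumed
      from pg_stat_user_tables
     where coalesce(greatest(last_vacuum, last_autovacuum),
                    'epoch'::timestamptz) < now() - interval '7 days';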

Thanks,
AI


[GENERAL] How to convert numbers into words in postgresql

2013-05-14 Thread Jashaswee
I want to convert numbers into words in PostgreSQL. Is there any query for it?
If yes, please reply soon.



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/How-to-convert-numbers-into-words-in-postgresql-tp5755370.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Running out of memory at vacuum

2013-05-14 Thread Ioana Danes
Hi all,

I have a production database that sometimes runs out of memory at nightly 
vacuum.

The application runs typically with around 40 postgres connections but there 
are times when the connections increase because of some queries going on. The 
reason is that the operations are slow, the terminals time out and try to 
reconnect using new connections. Some time ago I started to have problems with 
too many connections being open so I lowered the limit to 300 connections. It 
was all good until recently when even with 180 connections I am running out of 
memory on vacuum... So the connections increase to 180 and the system still 
runs properly for another 2 days but then the nightly vacuum runs out of memory.
The fix is to restart postgres ... If I only close the connections the problem 
is still there so I need to restart postgres.
If I don't restart postgres then the system will run out of memory on queries 
at a point...
Another important thing is that during vacuum at 1am nothing else is going on 
that server so all the connections are idle. 

2013-05-14 06:53:51.449 
CST,postgres,abrazo,8741,[local],519233dc.2225,3,VACUUM,2013-05-14 
06:53:48 CST,174/67143,0,ERROR,53200,out of memory,Failed on request of size 
668141820.,vacuum;,,,psql

OS:

SUSE Linux Enterprise Server 11 (x86_64)
VERSION = 11
PATCHLEVEL = 2

SUSE-compiled postgres version:
  version

 PostgreSQL 9.0.3 on x86_64-suse-linux-gnu, compiled by GCC gcc (SUSE Linux) 
4.3.4 [gcc-4_3-branch revision 152973], 64-bit



Postgres configuration parameters:

max_connections = 300
shared_buffers = 2GB
maintenance_work_mem = 2GB
effective_cache_size = 4GB
work_mem = 2MB
wal_buffers = 1MB


checkpoint_segments = 16
autovacuum = on


wal_level = hot_standby
archive_mode = on
archive_command = 'test -f /cbnDBscripts/tmp/PITR_primarydb_stop_backup || 
rsync --timeout=30 -atz %p spsnidb1:/data01/wal_files/%f'
archive_timeout = 60


free
                 total       used      free     shared    buffers     cached
Mem:           7880512    7825136      55376          0      72376    4537644
-/+ buffers/cache:        3215116    4665396
Swap:          2097148          0    2097148

/etc/sysctl.conf

kernel.shmmax=2684354560
kernel.shmall=2684354560
vm.swappiness=0
vm.overcommit_memory=2

postgres log:
TopMemoryContext: 179088 total in 18 blocks; 8608 free (13 chunks); 170480 used
  TopTransactionContext: 24576 total in 2 blocks; 21888 free (43 chunks); 2688 
used
  Attopt cache: 57344 total in 3 blocks; 25824 free (5 chunks); 31520 used
  Type information cache: 24576 total in 2 blocks; 11888 free (5 chunks); 12688 
used
  MessageContext: 8192 total in 1 blocks; 4952 free (1 chunks); 3240 used
  Operator class cache: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  smgr relation table: 57344 total in 3 blocks; 21984 free (10 chunks); 35360 
used
  TransactionAbortContext: 32768 total in 1 blocks; 32736 free (0 chunks); 32 
used
  Portal hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  PortalMemory: 8192 total in 1 blocks; 7888 free (0 chunks); 304 used
    PortalHeapMemory: 2048 total in 1 blocks; 2016 free (3 chunks); 32 used
  Vacuum: 24576 total in 2 blocks; 15392 free (0 chunks); 9184 used
  Relcache by OID: 24576 total in 2 blocks; 6592 free (3 chunks); 17984 used
  CacheMemoryContext: 2390704 total in 22 blocks; 641048 free (4 chunks); 
1749656 used
    pk_cbntransaction: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
    rel_transactionretailerid_fk: 2048 total in 1 blocks; 752 free (0 chunks); 
1296 used
    rel_transaction_promotion_fk: 2048 total in 1 blocks; 752 free (0 chunks); 
1296 used
    reference_64_fk: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
    reference_49_fk: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
    reference_46_fk: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
    reference_28_fk: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
    ix_transaction_date: 2048 total in 1 blocks; 504 free (0 chunks); 1544 used
    ix_referenceid_transac: 2048 total in 1 blocks; 752 free (0 chunks); 1296 
used
    ix_onlinetick_cbntrans: 2048 total in 1 blocks; 752 free (0 chunks); 1296 
used
    ix_drawid_transaction: 2048 total in 1 blocks; 752 free (0 chunks); 1296 
used
    ix_cbntransaction_terminalid_transactiondate: 2048 total in 1 blocks; 656 
free (0 chunks); 1392 used
    ix_cbntransaction_purchaseid: 2048 total in 1 blocks; 752 free (0 chunks); 
1296 used
    pk_cardactivity: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
    pk_business_type: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
    pg_toast_207856_index: 2048 total in 1 blocks; 656 free (0 chunks); 1392 
used
    pk_bank: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
    ix_audit_1: 2048 total in 1 

Re: [GENERAL] How to convert numbers into words in postgresql

2013-05-14 Thread Raymond O'Donnell
On 14/05/2013 07:27, Jashaswee wrote:
 I want to convert numbers into words in PostgreSQL. Is there any query
 for it? If yes, please reply soon.

What exactly are you trying to do?

Ray.


-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to convert numbers into words in postgresql

2013-05-14 Thread John R Pierce

On 5/13/2013 11:27 PM, Jashaswee wrote:

I want to convert numbers into words in PostgreSQL. Is there any query for it?
If yes, please reply soon.


You mean, like 123 -> one hundred twenty three?  That would be 
better done in your client software.




--
john r pierce  37N 122W
somewhere on the middle of the left coast



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Running out of memory on vacuum

2013-05-14 Thread John R Pierce

On 5/14/2013 10:25 AM, Ioana Danes wrote:

I agree and I will.
Now let me ask you this: how much memory would it be decent to put on a server 
with 2000 users creating transactions every 4-10 seconds (2 to 20 inserts) at 
peak times? I know more should be considered when taking such a decision but I 
would like to know your point of view at first sight...


how many 100s of CPU cores do you have to execute those 1000+ concurrent 
transactions?



--
john r pierce  37N 122W
somewhere on the middle of the left coast



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] In what situations do you use -U of initdb?

2013-05-14 Thread Kevin Grittner
Tsunakawa, Takayuki tsunakawa.ta...@jp.fujitsu.com wrote:

 I'd like to hear other cases like:
 
 - Packaged applications (not OS) that embed or use PostgreSQL
 - The corporate environment where some security policy is
   enforced that the OS user and the database administrator user
   must be different

Well, where I used to work, we had many instances of PostgreSQL
running on a server, and found it to be *very* good policy to use a
different OS user to run each cluster.  We wanted the initial
superuser login to match the OS user, for trust login.  (The
superuser login had no password; you had to log in as yourself and
use sudo to run as a database superuser, or root could schedule
crontab jobs to run as a database superuser.)  So, essentially, the
database superuser was always a name meaningful for the cluster --
never postgres.

We never liked to allow any OS login except as an identifiable
person, and then we could track who was logged in when and what
they ran through sudo.

--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to convert numbers into words in postgresql

2013-05-14 Thread Alvaro Herrera
Jashaswee wrote:
 I want to convert numbers into words in PostgreSQL. Is there any query for it?
 If yes, please reply soon.

Try cash_words()
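
cash_words() is an undocumented built-in, but a quick sketch of what it
returns (the exact wording of the output may vary by version and locale):

    select cash_words('123.45'::money);
    -- e.g. One hundred twenty three dollars and forty five cents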

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Running out of memory on vacuum

2013-05-14 Thread Scott Marlowe
On Tue, May 14, 2013 at 11:25 AM, Ioana Danes ioanasoftw...@yahoo.ca wrote:
 I agree and I will.
 Now let me ask you this: how much memory would it be decent to put on a server 
 with 2000 users creating transactions every 4-10 seconds (2 to 20 inserts) at 
 peak times? I know more should be considered when taking such a decision but I 
 would like to know your point of view at first sight...

2000 users running a transaction every 4 seconds each is 2000/4 tps or
500 tps.  500 tps is no big deal for most servers with a decent RAID
array and battery backed controller or running on a single SSD. Memory
wise if you need to have a connection open and just waiting for the
next transaction, you'll need ~6MB free per connection for the basic
backend, plus extra memory for sorts etc. Let's say 10MB. Double that
for a fudge factor. Times 2000. That's 4GB just to hold all that state
in memory. After that you want maint work mem, shared buffers and then
add all that up and double it so the OS can do a lot of caching. So,
I'd say look at going to at least 16G. Again, I'd fudge factor that to
32G just to be sure.

I have built servers that held open ~1000 connections, most idle but
persistent on 8-core 32G machines with 16 drives on a RAID controller
with a battery-backed cache that were plenty fast in that situation. 32G
is pretty darned cheap, assuming your server can hold that much
memory. If it can hold more great, if it's not too much look at 64G
and more. How big is your data store? The more of it you can fit in
kernel cache the better. If you're dealing with a 10G database great,
if it's 500GB then try to get as much memory as possible up to 512GB
or so into that machine.

On Tue, May 14, 2013 at 3:32 PM, John R Pierce wrote:

 how many 100s of CPU cores do you have to execute those 1000+ concurrent 
 transactions?

I think you're misreading the OP's post. 2000 clients running a
transaction every 4 seconds == 500 tps. With an SSD my laptop could do
that with 16G RAM probably.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Running out of memory on vacuum

2013-05-14 Thread Scott Marlowe
Meant to add: I'd definitely be looking at using pgbouncer if you can
to pool locally. Makes a huge difference in how the machine behaves
should things go badly (i.e. it starts to slow down and connections
want to pile up)
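
A minimal pgbouncer.ini sketch (names and sizes hypothetical, untested):

[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
default_pool_size = 40
max_client_conn = 2000

Point the clients at port 6432 and let the pooler feed the real server a
few dozen connections at a time.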

On Tue, May 14, 2013 at 4:15 PM, Scott Marlowe scott.marl...@gmail.com wrote:
 [snip]



--
To understand recursion, one must first understand recursion.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to convert numbers into words in postgresql

2013-05-14 Thread John R Pierce

On 5/14/2013 2:57 PM, Alvaro Herrera wrote:

Jashaswee wrote:

i want to convert numbers into words in postgresql. is there any query for it?
if yes please reply soon

Try cash_words()



um, where is that documented?  I've searched the various 'functions' 
pages, and not seen it, nor is it in the manual Index.




--
john r pierce  37N 122W
somewhere on the middle of the left coast



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Storing Special Characters

2013-05-14 Thread CR Lender
On 2013-05-14 19:32, Paul Jungwirth wrote:
 The UTF-8 encoding for a pound sign is 0xc2a3, not just 0xa3. You
 might want to make sure your PHP file is correct.

Just for the record, the Unicode code point for the pound symbol (£) is
actually 0x00A3. 0xC2A3 is the Hangul syllable Syuh (슣).
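
(Easy enough to check from psql in a UTF-8 database, e.g.:

SELECT convert_to('£', 'UTF8');  -- \xc2a3

though that shows the UTF-8 bytes rather than the code point.)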

- crl


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to convert numbers into words in postgresql

2013-05-14 Thread Merlin Moncure
On Tue, May 14, 2013 at 4:32 PM, John R Pierce pie...@hogranch.com wrote:
 On 5/13/2013 11:27 PM, Jashaswee wrote:

 i want to convert numbers into words in postgresql. is there any query for
 it?
 if yes please reply soon


 you mean, like 123 -> 'one hundred twenty three'?  that would be better
 done in your client software.

idk: it only took a few minutes to convert this routine:

http://www.sqlusa.com/bestpractices2008/number-to-words/

CREATE OR REPLACE FUNCTION fnNumberToWords(n BIGINT) RETURNS TEXT AS
$$
DECLARE
  e TEXT;
BEGIN

  WITH Below20(Word, Id) AS
  (
VALUES
  ('Zero', 0), ('One', 1),( 'Two', 2 ), ( 'Three', 3),
  ( 'Four', 4 ), ( 'Five', 5 ), ( 'Six', 6 ), ( 'Seven', 7 ),
  ( 'Eight', 8), ( 'Nine', 9), ( 'Ten', 10), ( 'Eleven', 11 ),
  ( 'Twelve', 12 ), ( 'Thirteen', 13 ), ( 'Fourteen', 14),
  ( 'Fifteen', 15 ), ('Sixteen', 16 ), ( 'Seventeen', 17),
  ('Eighteen', 18 ), ( 'Nineteen', 19 )
   ),
   Below100(Word, Id) AS
   (
  VALUES
   ('Twenty', 2), ('Thirty', 3),('Forty', 4), ('Fifty', 5),
   ('Sixty', 6), ('Seventy', 7), ('Eighty', 8), ('Ninety', 9)
   )
   SELECT
 CASE
  WHEN n = 0 THEN  ''
  WHEN n BETWEEN 1 AND 19
THEN (SELECT Word FROM Below20 WHERE ID=n)
 WHEN n BETWEEN 20 AND 99
   THEN  (SELECT Word FROM Below100 WHERE ID=n/10) ||  '-'  ||
 fnNumberToWords( n % 10)
 WHEN n BETWEEN 100 AND 999
   THEN  (fnNumberToWords( n / 100)) || ' Hundred ' ||
   fnNumberToWords( n % 100)
     WHEN n BETWEEN 1000 AND 999999
       THEN  (fnNumberToWords( n / 1000)) || ' Thousand ' ||
       fnNumberToWords( n % 1000)
     WHEN n BETWEEN 1000000 AND 999999999
       THEN  (fnNumberToWords( n / 1000000)) || ' Million ' ||
       fnNumberToWords( n % 1000000)
     WHEN n BETWEEN 1000000000 AND 999999999999
       THEN  (fnNumberToWords( n / 1000000000)) || ' Billion ' ||
       fnNumberToWords( n % 1000000000)
     WHEN n BETWEEN 1000000000000 AND 999999999999999
       THEN  (fnNumberToWords( n / 1000000000000)) || ' Trillion ' ||
       fnNumberToWords( n % 1000000000000)
     WHEN n BETWEEN 1000000000000000 AND 999999999999999999
       THEN  (fnNumberToWords( n / 1000000000000000)) || ' Quadrillion ' ||
       fnNumberToWords( n % 1000000000000000)
     WHEN n BETWEEN 1000000000000000000 AND 9223372036854775807
       THEN  (fnNumberToWords( n / 1000000000000000000)) || ' Quintillion ' ||
       fnNumberToWords( n % 1000000000000000000)
  ELSE ' INVALID INPUT' END INTO e;

  e := RTRIM(e);

  IF RIGHT(e,1)='-' THEN
e := RTRIM(LEFT(e,length(e)-1));
  END IF;

  RETURN e;
END;
$$ LANGUAGE PLPGSQL;
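
A quick smoke test (result spacing may differ):

SELECT fnNumberToWords(123456);
-- One Hundred Twenty-Three Thousand Four Hundred Fifty-Six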

merlin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to convert numbers into words in postgresql

2013-05-14 Thread Adrian Klaver

On 05/14/2013 03:17 PM, John R Pierce wrote:

On 5/14/2013 2:57 PM, Alvaro Herrera wrote:

Jashaswee wrote:

i want to convert numbers into words in postgresql. is there any
query for it?
if yes please reply soon

Try cash_words()



um, where is that documented?  I've searched the various 'functions'
pages, and not seen it, nor is it in the manual Index.



production=> \df cash_words
                           List of functions
   Schema   |    Name    | Result data type | Argument data types |  Type
------------+------------+------------------+---------------------+--------
 pg_catalog | cash_words | text             | money               | normal

Seems to only work with money type.








--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to convert numbers into words in postgresql

2013-05-14 Thread Merlin Moncure
On Tue, May 14, 2013 at 5:24 PM, Adrian Klaver adrian.kla...@gmail.com wrote:
 On 05/14/2013 03:17 PM, John R Pierce wrote:

 On 5/14/2013 2:57 PM, Alvaro Herrera wrote:

 Jashaswee wrote:

 i want to convert numbers into words in postgresql. is there any
 query for it?
 if yes please reply soon

 Try cash_words()



 um, where is that documented?  I've searched the various 'functions'
 pages, and not seen it, nor is it in the manual Index.



 production=> \df cash_words
                            List of functions
    Schema   |    Name    | Result data type | Argument data types |  Type
 ------------+------------+------------------+---------------------+--------
  pg_catalog | cash_words | text             | money               | normal

 Seems to only work with money type.

from the source:
/* cash_words()
 * This converts a int4 as well but to a representation using words
 * Obviously way North American centric - sorry
 */
Datum
cash_words(PG_FUNCTION_ARGS)
{

so, it comes down to this: the money type is one of the more bizarre
things left over from postgres's past, and I suppose we try not to
advertise it too loudly.

merlin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Storing Special Characters

2013-05-14 Thread Chris Angelico
On Wed, May 15, 2013 at 8:20 AM, CR Lender crlen...@gmail.com wrote:
 On 2013-05-14 19:32, Paul Jungwirth wrote:
 The UTF-8 encoding for a pound sign is 0xc2a3, not just 0xa3. You
 might want to make sure your PHP file is correct.

 Just for the record, the Unicode code point for the pound symbol (£) is
 actually 0x00A3. 0xC2A3 is the Hangul syllable Syuh (슣).

Which, in UTF-8, is represented by the byte sequence C2 A3. (The
latter would be represented as EC 8A A3.)

ChrisA


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to convert numbers into words in postgresql

2013-05-14 Thread John R Pierce

On 5/14/2013 3:24 PM, Adrian Klaver wrote:



um, where is that documented?  I've searched the various 'functions'
pages, and not seen it, nor is it in the manual Index.



production=> \df cash_words
                           List of functions
   Schema   |    Name    | Result data type | Argument data types |  Type
------------+------------+------------------+---------------------+--------
 pg_catalog | cash_words | text             | money               | normal

Seems to only work with money type.


not sure I'd call that documentation.  I was asking, where is that 
described in the postgresql manual?   I looked in every category I 
thought was appropriate here, 
http://www.postgresql.org/docs/current/static/functions.html and didn't 
find it.





--
john r pierce  37N 122W
somewhere on the middle of the left coast



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Storing Special Characters

2013-05-14 Thread CR Lender
On 2013-05-15 00:31, Chris Angelico wrote:
 Which, in UTF-8, is represented by the byte sequence C2 A3. (The
 latter would be represented as EC 8A A3.)

Right, my bad.
I read Unicode instead of UTF-8.

- crl


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to convert numbers into words in postgresql

2013-05-14 Thread Alvaro Herrera
John R Pierce wrote:

 not sure I'd call that documentation.  I was asking, where is that
 described in the postgresql manual?   I looked in every category I
 thought was appropriate here,
 http://www.postgresql.org/docs/current/static/functions.html and
 didn't find it.

A quick grep in doc/src/sgml/ says that cash_words only appears once:

./release-old.sgml:<listitem><para>Fix cash_words() to not overflow buffer
(Tom)</para></listitem>

This is a release note item for the 7.3 release.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training  Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to convert numbers into words in postgresql

2013-05-14 Thread Julian
On 15/05/13 08:27, Merlin Moncure wrote:
 On Tue, May 14, 2013 at 5:24 PM, Adrian Klaver adrian.kla...@gmail.com 
 wrote:
 [snip]

 Seems to only work with money type.
 
 from the source:
 /* cash_words()
  * This converts a int4 as well but to a representation using words
  * Obviously way North American centric - sorry
  */
 Datum
 cash_words(PG_FUNCTION_ARGS)
 {
 
 so, it comes down to the money type is one of the more bizarre things
 left from postgres past and we try not to advertise too loudly I
 suppose.
 
 merlin
 
 
What the?
I currently use a Python number-to-words module (n2w.py; if the OP is
interested, let me know). I've been using it for years and never knew
about the cash_words() function; what other secret functions does
postgres have? :)
Now there is a use for the money type. Perhaps this function could be
upgraded to not need it?

jules.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to convert numbers into words in postgresql

2013-05-14 Thread Adrian Klaver

On 05/14/2013 03:27 PM, Merlin Moncure wrote:

On Tue, May 14, 2013 at 5:24 PM, Adrian Klaver adrian.kla...@gmail.com wrote:



Seems to only work with money type.


from the source:
/* cash_words()
  * This converts a int4 as well but to a representation using words
  * Obviously way North American centric - sorry
  */
Datum
cash_words(PG_FUNCTION_ARGS)
{


Well, Pg 9.0 did not get that memo. :)

test=> SELECT version();
                                            version
----------------------------------------------------------------------------------------------
 PostgreSQL 9.0.13 on i686-pc-linux-gnu, compiled by GCC gcc (SUSE Linux)
 4.7.1 20120723 [gcc-4_7-branch revision 189773], 32-bit
(1 row)

test=> SELECT cash_words(123);
ERROR:  function cash_words(integer) does not exist
LINE 1: SELECT cash_words(123);
               ^
HINT:  No function matches the given name and argument types. You might 
need to add explicit type casts.


Though using text works:

test=> SELECT cash_words('123');
                   cash_words
-------------------------------------------------
 One hundred twenty three dollars and zero cents
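
(An explicit cast may work too, e.g. SELECT cash_words(123::numeric::money);
assuming your version has the numeric-to-money cast.)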




so, it comes down to the money type is one of the more bizarre things
left from postgres past and we try not to advertise too loudly I
suppose.


Agreed.



merlin




--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] PostgreSQL TCL extension - Redhat 5

2013-05-14 Thread Oscar Calderon
Hi everybody, this is my first email to this list. I'm asking here because
I couldn't find accurate information about it on Google. Our company
currently provides PostgreSQL maintenance for another company. They have a
64-bit Red Hat 5 server with PostgreSQL 9.1.1. The server has Tcl 8.4
installed.

They asked for help installing the Tcl extension in PostgreSQL because they
developed a procedure written in Tcl that sends an email with data
extracted from the database. In development they easily installed
PostgreSQL 9.2 on a CentOS virtual machine from the
http://yum.postgresql.org/ repositories. But on the server we have a
problem: when I tried to install the extension for Red Hat 5 after updating
the repos, it asked me to install postgresql91 and postgresql91-libs,
which I think is because of the version it requires in order to work.

So I searched for an older version and found this:

http://yum.postgresql.org/8.4/redhat/rhel-5-x86_64/postgresql-tcl-1.6.2-1PGDG.rhel5.x86_64.rpm

I installed it without problems, and it appears when I execute SELECT *
FROM pg_available_extensions; but when I try to install it, it gives me an
error because it looks for the .so file of Tcl 8.5, and the server has Tcl
8.4. And because I'm not responsible for maintenance of the Red Hat server
(that's the other company's task), they notified us that that is the
highest Tcl version they can upgrade to. So my question is whether there is
a version of the PostgreSQL Tcl extension that works with Tcl 8.4, or
whether there is another way to solve this.

Regards.

***
Oscar Calderon
Analista de Sistemas
Soluciones Aplicativas S.A. de C.V.
www.solucionesaplicativas.com
Cel. (503) 7741 7850


Re: [GENERAL] PostgreSQL TCL extension - Redhat 5

2013-05-14 Thread John R Pierce

On 5/14/2013 5:02 PM, Oscar Calderon wrote:

So i searched for an older version and i found this:

http://yum.postgresql.org/8.4/redhat/rhel-5-x86_64/postgresql-tcl-1.6.2-1PGDG.rhel5.x86_64.rpm

I had installed it without problems, and it appears when i execute 
SELECT * FROM pg_available_extensions; but when i try to install it, 
it gives me error because it looks for the .so file of tcl 8.5, and 
the server has tcl 8.4 . And because i'm not responsible of 
maintenance on the redhat server (that's another company task), they 
notified us that that's the maximum version of TCL that they can 
upgrade, so my doubt is if there's a version of postgresql tcl 
extension that works with tcl 8.4 or if there's another way to solve it.


that file is for postgresql 8.4, not 9.1

you want the postgresql91-pltcl package, most likely.   I just checked
the one for RHEL5 with Postgres 9.2, and it's linked to libtcl8.4.so


# rpm -ql postgresql92-pltcl
/usr/pgsql-9.2/bin/pltcl_delmod
/usr/pgsql-9.2/bin/pltcl_listmod
/usr/pgsql-9.2/bin/pltcl_loadmod
/usr/pgsql-9.2/lib/pltcl.so
..

# ldd /usr/pgsql-9.2/lib/pltcl.so
linux-gate.so.1 =>  (0x4000)
libtcl8.4.so => /usr/lib/libtcl8.4.so (0x40018000)
libdl.so.2 => /lib/libdl.so.2 (0x400d8000)
libpthread.so.0 => /lib/libpthread.so.0 (0x400dd000)
libm.so.6 => /lib/libm.so.6 (0x400f7000)
libc.so.6 => /lib/libc.so.6 (0x40121000)
/lib/ld-linux.so.2 (0x0035d000)




--
john r pierce  37N 122W
somewhere on the middle of the left coast



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL TCL extension - Redhat 5

2013-05-14 Thread John R Pierce

On 5/14/2013 5:38 PM, ocalde...@solucionesaplicativas.com wrote:

Can you provide me the link to that rpm please? Because I tried some rpms of
postgresql91 tcl that require the 8.5 version. Also, the packages I tried
asked me to install the postgresql91 and postgresql91-libs dependencies, as
if I didn't have an instance of postgresql installed.


I got it from yum.postgresql.org via the yum command.

yum install postgresql91-pltcl

it WILL want to install a newer 9.1 sub-release; you said you're running
9.1.1, which is about two years' worth of patches behind.   The current
supported 9.1 release is 9.1.9... in particular, there's a data
corruption bug in 9.1.5 and earlier, which can lead to corrupt
indexes.   you should reindex after upgrading to 9.1.6+
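
Roughly, once the packages are updated (database name hypothetical):

yum install postgresql91-pltcl
reindexdb -a                                  # per the 9.1.6 release notes
psql -d yourdb -c 'CREATE EXTENSION pltcl;'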




--
john r pierce  37N 122W
somewhere on the middle of the left coast



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Running out of memory at vacuum

2013-05-14 Thread Jeff Janes
On Tuesday, May 14, 2013, Ioana Danes wrote:

 Hi all,

 I have a production database that sometimes runs out of memory at nightly
 vacuum.

 The application typically runs with around 40 postgres connections, but
 there are times when the connections increase because of some queries
 going on.


You should consider not allowing that to happen.  If 40 connections is the
sweet spot for your hardware, then you can't change reality simply by
changing the connection pool setting.


 The reason is that the operations are slow, the terminals time out and try
 to reconnect using new connections.



So, "the beatings will continue until morale improves."  You should
consider not allowing that to happen, either.  If the original operation is
slow, why would trying it again on a new connection (while leaving the
original one behind to clog things up) be any better?




 Some time ago I started to have problems with too many connections being
 open so I lowered the limit to 300 connections. It was all good until
 recently when even with 180 connections I am running out of memory on
 vacuum... So the connections increase to 180 and the system still runs
 properly for another 2 days, but then the nightly vacuum runs out of memory.
 The fix is to restart postgres ... If I only close the connections the
 problem is still there, so I need to restart postgres.


How are you closing the connections?



 If I don't restart postgres then the system will run out of memory on
 queries at some point...
 Another important thing is that during the vacuum at 1am nothing else is
 going on on that server, so all the connections are idle.


Truly idle, or idle in transaction, or not even that?  If the abandoned
connections have completed whatever long running thing caused them to be
abandoned, then they should no longer exist.  What are the largest
processes according to top?
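
On 9.2+ something like this will show them (older versions expose
procpid/current_query instead of pid/state):

SELECT pid, state, xact_start
FROM pg_stat_activity
WHERE state LIKE 'idle%';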

You seem to have a series of unfortunate events here, but I think you are
tackling them from the wrong end.  What are these slow queries that take up
a lot of memory, and why can't they be improved?  Given that you have slow
queries that take a lot of memory, why does your app respond to this by
launching a Denial of Service attack against its own database, and why do
you let it do that?

Cheers,

Jeff


[GENERAL] dblink does not resolve DNS, but works with IP

2013-05-14 Thread Mike Christensen
If I have this:

CREATE OR REPLACE VIEW Link.Foo AS
  select * from dblink(
'hostaddr=123.123.123.123 dbname=KitchenPC user=Website
password=secret',
'select * from Foo') as ...

Then it works.  However, if I do:

CREATE OR REPLACE VIEW Link.Foo AS
  select * from dblink(
'hostaddr=db.domain.com dbname=KitchenPC user=Website password=secret',
'select * from Foo') as ...

Then I get:

ERROR:  could not establish connection
DETAIL:  could not translate host name db.domain.com to address: Unknown
host

However, from a command prompt I can ping db.domain.com and get
123.123.123.123.

Does dblink just not support DNS resolution?  I really don't want to hard
code IP addresses in my scripts.  Thanks!


Re: [GENERAL] How to convert numbers into words in postgresql

2013-05-14 Thread Greg Sabino Mullane

-----BEGIN PGP SIGNED MESSAGE-----
Hash: RIPEMD160

Jashaswee asked:

 i want to convert numbers into words in postgresql.
 is there any query for it?

Easy enough with PlPerl:

$ sudo apt-get install liblingua-en-inflect-perl

$ createlang plperlu

$ psql <<eot
 create or replace function numwords(int)
   returns text
   language plperlu
   immutable
 as '
 use Lingua::EN::Inflect qw( NUMWORDS );
 return NUMWORDS(shift);
 ';
eot
CREATE FUNCTION

$ psql -tc 'select numwords(1234)'

 one thousand, two hundred and thirty-four


- -- 
Greg Sabino Mullane g...@turnstep.com
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 201305150015
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
-----BEGIN PGP SIGNATURE-----

iEYEAREDAAYFAlGTDFwACgkQvJuQZxSWSsgpIACgyXX3Bt3SMDje/5V+tzSMESD+
HdsAmwZpYqWgnZeZvmEn8jclUCQzdKTG
=x9DW
-----END PGP SIGNATURE-----




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] dblink does not resolve DNS, but works with IP

2013-05-14 Thread Adrian Klaver

On 05/14/2013 09:17 PM, Mike Christensen wrote:

If I have this:

CREATE OR REPLACE VIEW Link.Foo AS
   select * from dblink(
 'hostaddr=123.123.123.123 dbname=KitchenPC user=Website
password=secret',
 'select * from Foo') as ...

Then it works.  However, if I do:

CREATE OR REPLACE VIEW Link.Foo AS
   select * from dblink(
 'hostaddr=db.domain.com dbname=KitchenPC user=Website password=secret',
 'select * from Foo') as ...

Then I get:

ERROR:  could not establish connection
DETAIL:  could not translate host name db.domain.com to address: Unknown host

However, from a command prompt I can ping db.domain.com and get
123.123.123.123.

Does dblink just not support DNS resolution?  I really don't want to
hard code IP addresses in my scripts.  Thanks!


See below for explanation of hostaddr and host. Short version, you are 
looking for host:


http://www.postgresql.org/docs/9.2/interactive/libpq-connect.html#LIBPQ-PARAMKEYWORDS
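
i.e. the same connection string with host instead of hostaddr:

CREATE OR REPLACE VIEW Link.Foo AS
  select * from dblink(
    'host=db.domain.com dbname=KitchenPC user=Website password=secret',
    'select * from Foo') as ...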


--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] dblink does not resolve DNS, but works with IP

2013-05-14 Thread Mike Christensen
Excellent!  Thanks so much.


On Tue, May 14, 2013 at 9:25 PM, Adrian Klaver adrian.kla...@gmail.com wrote:

 [snip]



Re: [GENERAL] dblink does not resolve DNS, but works with IP

2013-05-14 Thread Mike Christensen
Though I'm a bit curious why there are both host and hostaddr.  Why can't
it just resolve whatever you give it?


On Tue, May 14, 2013 at 9:31 PM, Mike Christensen m...@kitchenpc.com wrote:

 [snip]