[GENERAL] How can I run a PostgreSQL database outside /var/run/postgresql?

2013-11-02 Thread Frank Church
How can I run a PostgreSQL database independently of the normal
package-based configuration?

I just want to start up a postgres or pg_ctl process using a different port,
pointing to a different data directory, and get it running with permissions
etc. working okay.


I don't want it to depend on the existing package structure in
/etc/postgresql and /var/run/postgresql. The only thing required is
database compatibility with the existing package-based installations,
which I may be able to install if they are absent.

The main points are:

1. Which data and configuration files should be copied, i.e. which files
and directories from /etc/postgresql/xxx and /var/run/postgresql.

2. Which executables are required, and whether they can be extracted from
existing packages (rpm/deb) directly, or only after they have been
installed to their locations.

3. My preference is to use existing package-based installations, or to
install them if possible in case of dependencies not present in my files,
with my own executables as the second option.

Thanks

voipfc


Re: [GENERAL] Table with Field Serial - Problem

2013-11-02 Thread Francisco Olarte
On Thu, Oct 31, 2013 at 5:13 PM, Adrian Klaver adrian.kla...@gmail.com wrote:
 Table1
    Column |  Type   | Modifiers
 ----------+---------+-------------------------------------------------------------
  id       | integer | not null default nextval('test_table_id_fld_seq'::regclass)

 Table2
    Column   |  Type   | related
 ------------+---------+-----------------
  id_table1  | integer | FK of Table1.id
  id_lang    | integer | FK of lang.id
  name       | varchar |


 I may be having one of my dumb moments, but what does the above accomplish
 that including the serial column in Table2 does not?

The default constraint puzzles me a bit, but you can have duplicate
values in table2 and check that they are in t1. Imagine something like
this: you store message ids and translations. When a new message is
needed you insert it into t1, put this id wherever it's needed, and
communicate the id to the translators, who can then insert the
translations into t2 at their own pace. It has its uses.
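
Francisco's scheme can be sketched in SQL. The table and column names
below are hypothetical stand-ins, not taken from the thread:

```sql
-- t1 only reserves message ids; t2 holds per-language translations.
CREATE TABLE t1 (
    id serial PRIMARY KEY
);

CREATE TABLE lang (
    id   serial PRIMARY KEY,
    code text NOT NULL
);

CREATE TABLE t2 (
    id_t1   integer NOT NULL REFERENCES t1 (id),   -- same id may repeat across languages
    id_lang integer NOT NULL REFERENCES lang (id),
    name    varchar,
    UNIQUE (id_t1, id_lang)                        -- at most one translation per language
);

-- Reserve an id up front and hand it to the translators...
INSERT INTO t1 DEFAULT VALUES RETURNING id;
-- ...who later insert translations at their own pace, e.g.:
-- INSERT INTO t2 (id_t1, id_lang, name) VALUES (1, 1, 'hello');
```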

Francisco Olarte.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table with Field Serial - Problem

2013-11-02 Thread Adrian Klaver

On 11/02/2013 04:58 AM, Francisco Olarte wrote:

On Thu, Oct 31, 2013 at 5:13 PM, Adrian Klaver adrian.kla...@gmail.com wrote:

Table1
   Column |  Type   | Modifiers
----------+---------+-------------------------------------------------------------
 id       | integer | not null default nextval('test_table_id_fld_seq'::regclass)

Table2
   Column   |  Type   | related
------------+---------+-----------------
 id_table1  | integer | FK of Table1.id
 id_lang    | integer | FK of lang.id
 name       | varchar |



I may be having one of my dumb moments, but what does the above accomplish
that including the serial column in Table2 does not?


The default constraint puzzles me a bit, but you can have duplicate
values in table2 and check that they are in t1. Imagine something like
this: you store message ids and translations. When a new message is
needed you insert it into t1, put this id wherever it's needed, and
communicate the id to the translators, who can then insert the
translations into t2 at their own pace. It has its uses.


I understand the need to generate uniqueness; what I am not 
understanding is this method. Table1 is just a series of numbers, so 
where is the context that tells you what the numbers mean? To me it boils 
down to: if you just want to generate numbers, use a sequence directly; 
if the numbers have meaning, supply context. I have probably spent too 
much time on this already, it's just one of those things that puzzle :)




Francisco Olarte.




--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Why release index relation lock

2013-11-02 Thread Tom Lane
DT kurt...@hotmail.com writes:
 1. What's the rule of index relation locking?
 2. Releasing the lock is for higher concurrency, but for INSERT/UPDATE I
 did not find any operation that could benefit from releasing the index
 relation lock. Or to say: what will happen if we treat an index relation
 lock like a heap relation lock?

The reason we hold relation locks till end of transaction is mainly to
avoid transactional behavior surprises, eg an in-progress transaction
finding that a relation's schema has changed underneath it.  There is
no corresponding risk for indexes, because there is no such thing as
a schema-definition change for an index --- short of dropping it,
which we disallow without having AccessExclusiveLock on the parent rel.
However, there are *physical* changes to indexes, such as REINDEX or
ALTER INDEX TABLESPACE, which require locking out other accesses till
they finish.  So the point of locking indexes in use by a query is
just to interlock against those types of operations, and there's no
need to continue holding the lock once the query is done.

VACUUM might look like an exception, but it's not since it can't
run inside a transaction block.  There's no meaningful difference
between statement end and transaction end for it.
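
The behavior Tom describes can be observed from a psql session via the
pg_locks view; a sketch (the table name is hypothetical):

```sql
BEGIN;
SELECT count(*) FROM some_table;   -- takes AccessShareLock on the table
                                   -- (and on any index the plan uses)

-- After that statement completes, the heap relation lock is still held,
-- but locks on indexes used by the query have already been released:
SELECT locktype, relation::regclass, mode, granted
FROM pg_locks
WHERE pid = pg_backend_pid()
  AND locktype = 'relation';

COMMIT;   -- heap relation locks are held to this point
```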

regards, tom lane




Re: [GENERAL] changing port numbers so pgbouncer can read geoserver and postgres

2013-11-02 Thread Rowan Collins

On 01/11/2013 13:58, Adrian Klaver wrote:

On 11/01/2013 06:29 AM, Birta Levente wrote:

geoserver is using port 8080; somehow they don't seem to be speaking to
each other.

I don't know what this geoserver is, but I think port 8080 is not
relevant here.



If I am following the OP's chain of connections correctly, as originally 
set up and I believe still in place:

End User -- port 8080 (Tomcat) -- port 5432 (Postgres)

and they are trying to get to:

End User -- port 8080 (Tomcat) -- port 6432 (pgBouncer) -- port 5432 
(Postgres)


That sounds right, but port 8080 is definitely irrelevant in this case, 
as it is just a detail of the application - it could just as well be a 
desktop application with no associated network port.


For the sake of what needs to be configured, the chain is just:

Before:
(application using PostgreSQL) -- port 5432 (Postgres)

After:
(application using PostgreSQL) -- port 6432 (pgBouncer) -- port 5432 
(Postgres)


So there are two things to configure:

a) in the application, tell it to connect to port 6432 for its database 
connections, instead of port 5432
b) in pgBouncer, make sure it can connect properly to the postgres 
server on port 5432


It sounds like (b) is currently the issue.
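
As a rough sketch of what those two steps look like in practice (names,
paths, and addresses are hypothetical): for (a), a JDBC application would
change its URL to something like jdbc:postgresql://localhost:6432/mydb;
for (b), the relevant pgbouncer.ini fragment is roughly:

```ini
[databases]
; forward connections for "mydb" to the real Postgres server
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type   = md5
auth_file   = /etc/pgbouncer/userlist.txt
```

With that in place, `psql -h 127.0.0.1 -p 6432 mydb` should reach Postgres
through the bouncer, which is a quick way to test step (b) in isolation.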

--
Rowan Collins
[IMSoP]





[GENERAL] pg_dump with lots and lots of tables

2013-11-02 Thread Andy Colson


pg_dump is upset that my max_locks_per_transaction is too low.  I've bumped it 
up several times (up to 600 so far) but am not sure how many it needs.

I'm merging 90 databases into a single database with 90 schemas.  Each schema 
can have 500-ish tables.  Do I need to set max_locks_per_transaction to 
(90*500) = 45,000?  Will that even work?

Will I ever need to bump up sysctl kernel.shmmax?

Oh, I'm on Slackware 64, PG 9.3.1.  I'm trying to get my db from the test box 
back to the live box.  For regular backup I think I'll be switching to 
streaming replication.

Thanks for your time,

-Andy




[GENERAL] Curious question about physical files to store database

2013-11-02 Thread Patrick Dung
As I have seen, some databases create or pre-allocate large physical files on 
the file system to serve as the backing store for the database tablespace.

For PostgreSQL, I have observed that it creates several files in the base and 
global directories.

This may be by design; what are the pros and cons of this behavior?

Thanks and regards,
Patrick


Re: [GENERAL] changing port numbers so pgbouncer can read geoserver and postgres

2013-11-02 Thread Adrian Klaver

On 11/02/2013 08:11 AM, Rowan Collins wrote:

On 01/11/2013 13:58, Adrian Klaver wrote:

On 11/01/2013 06:29 AM, Birta Levente wrote:

If I am following the OP's chain of connections correctly, as originally
set up and I believe still in place:

End User -- port 8080 (Tomcat) -- port 5432 (Postgres)

and they are trying to get to:

End User -- port 8080 (Tomcat) -- port 6432 (pgBouncer) -- port 5432
(Postgres)





That sounds right, but port 8080 is definitely irrelevant in this case,
as it is just a detail of the application - it could just as well be a
desktop application with no associated network port.

For the sake of what needs to be configured, the chain is just:

Before:
(application using PostgreSQL) -- port 5432 (Postgres)

After:
(application using PostgreSQL) -- port 6432 (pgBouncer) -- port 5432
(Postgres)


Well, what I showed is supposition on my part and is not necessarily the 
truth. Part of the issue is discerning how information actually flows 
through the system, in particular what exactly the application(s) talking 
to Postgres/pgBouncer are. It is still unknown, at least to me, where 
geoserver fits into the above and whether it is connecting directly to 
Postgres or going through the Tomcat server. Once some sort of schematic 
of the connections (current and desired) is provided, it would be possible 
to do as you suggest. The confusion, from what I am following, is that the 
(application using PostgreSQL) -- port 5432 (Postgres) part is not known.






So there are two things to configure:

a) in the application, tell it to connect to port 6432 for its database
connections, instead of port 5432
b) in pgBouncer, make sure it can connect properly to the postgres
server on port 5432

It sounds like (b) is currently the issue.




--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] changing port numbers so pgbouncer can read geoserver and postgres

2013-11-02 Thread John R Pierce

On 11/2/2013 9:56 AM, Adrian Klaver wrote:
Well, what I showed is supposition on my part and is not necessarily 
the truth. Part of the issue is discerning the truth of how 
information flows through the system, in particular what exactly 
is/are the application(s) talking to Postgres/pgBouncer. It is still 
unknown, at least to me, where geoserver fits into the above and 
whether it is connecting directly to Postgres or going through the 
Tomcat server. Once some sort of schematic for connection(current, 
desired) is provided then it would be possible to do as you suggest. 
The confusion from what I am following is that the (application using 
PostgreSQL) -- port 5432 (Postgres) part is not known.


Tomcat itself has no database layer; it's purely a web appserver. The 
Java apps running on Tomcat use JDBC to connect to a database server 
such as PostgreSQL.




--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] pg_dump with lots and lots of tables

2013-11-02 Thread Tom Lane
Andy Colson a...@squeakycode.net writes:
 pg_dump is upset that my max_locks_per_transaction is too low.  I've bumped 
 it up several times (up to 600 so far) but am not sure how many it needs.

 I'm merging 90 databases into a single database with 90 schemas.  Each schema 
 can have 500-ish tables.  Do I need to set max_locks_per_transaction to 
 (90*500) = 45,000?  Will that even work?

The pg_dump will need about 45000 locks altogether, so anything north of
45000/max_connections should work (more if you have other sessions going
on at the same time).

Basically the lock table is sized at max_locks_per_transaction*max_connections,
and transactions can use as many entries as they want --- there's no
attempt to hold a session to its fair share of the table.  The parameter
is only defined as it is to ensure that if you bump up max_connections the
lock table will get bigger automatically, so you won't starve sessions of
locks accidentally.
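
As a worked example of that sizing rule (the values here are illustrative,
not recommendations), a postgresql.conf fragment satisfying Tom's formula
could look like:

```
# lock table size = max_locks_per_transaction * max_connections
max_connections           = 100
max_locks_per_transaction = 512    # 512 * 100 = 51200 slots > 45000 needed
```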

 Will I ever need to bump up sysctl kernel.shmmax?

If the postmaster fails to start with the larger setting, then yes.
But lock entries aren't that large so probably it won't matter.
If it does matter, and increasing shmmax is inconvenient, you could
back off shared_buffers to make room.

regards, tom lane




[GENERAL] Memory usage per postmaster process

2013-11-02 Thread Grzegorz Tańczyk

Hello,

Recently I had a problem with tsearch2 in Postgres 8.3. The application 
was using a JDBC connection pool, and each connection from the pool could 
end up using tsearch2 (with a dictionary) in some query. This resulted in 
almost every postmaster process using 0.5 GB of memory.
The solution to this particular problem was simple: create a separate 
small pool for tsearch2-related queries.


Unfortunately some other queries also make postmaster processes grow, and 
while this does not cause much trouble in the production environment, it 
makes it difficult to set up a dev instance, because the application and 
Postgres take more RAM than I currently have. 4 GB of memory in swap kills 
everything, even though I have the swap on software RAID 0 across 6 disks. 
I run Postgres and the application in the same LXC instance.


Now I could play around with the application code even more, but it is old 
and ugly, so instead I am looking for a solution in postgresql.conf, which 
I don't fully understand.


A Postgres upgrade could also be a solution, but was there any significant 
improvement in system resource management within 8.x? An upgrade to 9.x 
might be too big an effort at this time: I want to avoid compatibility 
problems, because tracking them down in the application code, which is of 
very low quality, would take a lot of time.


Is there any way to limit total memory usage by Postgres while keeping the 
maximum connections limit? The postgresql.conf settings are the defaults 
for 8.3.23. I need to have 100 connections in the pool.
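
For reference, the postgresql.conf knobs that dominate shared and
per-backend memory are these (the values shown are illustrative defaults,
not recommendations for this workload):

```
shared_buffers       = 128MB   # one shared pool for all backends
work_mem             = 4MB     # per sort/hash operation, per backend;
                               # a single query may use several
maintenance_work_mem = 64MB    # VACUUM, CREATE INDEX, etc.
max_connections      = 100     # each backend adds its own overhead
```

There is no single "total memory cap" setting; the practical ceiling is
roughly shared_buffers plus max_connections times the per-backend usage,
which is why shrinking the pool is the usual lever.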


Thanks!

--
Regards,
  Grzegorz







Re: [GENERAL] Memory usage per postmaster process

2013-11-02 Thread John R Pierce

On 11/2/2013 11:03 AM, Grzegorz Tańczyk wrote:


Is there any way to limit total memory usage by postgres and keep 
maximum connections limit? Postgresql.conf settings are default for 
8.3.23. I need to have 100 connections in pool. 


the size of your connection pool shouldn't be much more than 2-3 times 
the CPU core count on the server for optimal throughput... 100 queries 
running at once will grind ANY non-monster server to a standstill.





--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [100% SPAM] Re: [GENERAL] Memory usage per postmaster process

2013-11-02 Thread Grzegorz Tańczyk

On 11/02/2013 07:47 PM, John R Pierce wrote:

On 11/2/2013 11:03 AM, Grzegorz Tańczyk wrote:


Is there any way to limit total memory usage by postgres and keep 
maximum connections limit? Postgresql.conf settings are default for 
8.3.23. I need to have 100 connections in pool. 


the size of your connection pool shouldn't be much more than 2-3 times 
the CPU core count on the server for optimal throughput... 100 queries 
running at once will grind ANY non-monster server to a standstill
In fact, that's what happened when the tsearch2 problem occurred, even 
though only a few queries were running at once. A group of idle connections 
was using resources, and that's the thing I don't understand.
Did the tsearch2 dictionary caching implementation improve after 8.3 on 
this matter?


Making a small connection pool will help, but how small should it be? 
1 connection max, 0 connections minimum? Connections will get closed 
after they are released by the application code, but there will still be 
some group of postmaster processes, and how can I be sure that none of 
them will take 1 GB of system memory? I can't control this (other than 
grepping ps output and manually calling pg_cancel_backend on them once 
they grow too much).


Thanks!

--
Regards,
  Grzegorz





Re: [100% SPAM] Re: [GENERAL] Memory usage per postmaster process

2013-11-02 Thread Tom Lane
Grzegorz Tańczyk golia...@polzone.pl writes:
 Did the tsearch2 dictionary caching implementation improve after 8.3 on 
 this matter?

Well, there was this:

Author: Tom Lane t...@sss.pgh.pa.us
Branch: master Release: REL9_1_BR [3e5f9412d] 2010-10-06 19:31:05 -0400

Reduce the memory requirement for large ispell dictionaries.

This patch eliminates per-chunk palloc overhead for most small allocations
needed in the representation of an ispell dictionary.  This saves close to
a factor of 2 on the current Czech ispell data.  While it doesn't cover
every last small allocation in the ispell code, we are at the point of
diminishing returns, because about 95% of the allocations are covered
already.

Pavel Stehule, rather heavily revised by Tom

If you're not using ispell, it's not relevant, and I'm not sure whether
the savings were significant for anything but Czech.

regards, tom lane




Re: [GENERAL] autovaccum task got cancelled

2013-11-02 Thread Gary Fu

On 11/01/13 03:23, Kevin Grittner wrote:

Sergey Konoplev gray...@gmail.com wrote:


As far as I know, the application programs do not make any
specific lock on the 'file' table.  I'm not sure if it is caused
by the pgpool or something else.

[...]


2013-10-31 18:01:30 UTCLOG:  sending cancel to blocking autovacuum PID 8614
2013-10-31 18:01:30 UTCDETAIL:  Process 8677 waits for ShareRowExclusiveLock on 
relation 11959608 of database 596746.
2013-10-31 18:01:30 UTCSTATEMENT:  LOCK TABLE file IN SHARE ROW EXCLUSIVE MODE
2013-10-31 18:01:30 UTCERROR:  canceling autovacuum task
2013-10-31 18:01:30 UTCCONTEXT:  automatic vacuum of table sd3ops1.public.file

 From the release notes to 9.0.12:

Fix performance problems with autovacuum truncation in busy
workloads (Jan Wieck)

I don't think the problem described here has anything to do with
that.  It looks to me like there is an explicit LOCK TABLE
statement being executed for a mode which conflicts with a normal
vacuum or analyze, even without truncation.  The cited change
*avoids* this sort of cancellation for the truncation phase, so it
is not getting that far.
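
Kevin's reading is easy to reproduce: per the lock conflict table, SHARE
ROW EXCLUSIVE conflicts with the SHARE UPDATE EXCLUSIVE lock that a
(auto)vacuum holds, so a transaction like the one in the log will trigger
the cancellation. A sketch, using the table name from the log:

```sql
BEGIN;
-- Conflicts with autovacuum's ShareUpdateExclusiveLock on "file",
-- so a running autovacuum of "file" gets cancelled to let this through:
LOCK TABLE file IN SHARE ROW EXCLUSIVE MODE;
-- ... inserts protected by the lock (the pgpool insert_lock pattern) ...
COMMIT;
```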

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Thanks for all the replies.  I'm pretty sure now that it is pgpool,
since I searched the pgpool source code and found those strings.
Also, I have the pgpool configuration option 'insert_lock' on (by
default), but without having applied 'insert_lock.sql' as pgpool suggests.

However, I don't know why it did not happen before.  By the way,
I think Kevin is right, since the problem also happened on our test
instance, which is on Postgres 9.2.4.

For pgpool: does anyone know whether I can apply 'insert_lock.sql' while
pgpool is still running (maybe I should ask this in the pgpool groups)?

Thanks,
Gary





Re: [GENERAL] How can I run a PostgreSQL database outside /var/run/postgresql?

2013-11-02 Thread Kevin Grittner
Frank Church voi...@gmail.com wrote:

 I just want to start up postgres or pg_ctl process using a
 different port, pointing to a different data directory and get it
 running, with permissions etc working okay.

I recommend you spend some time looking over the relevant
documentation, trying to make it work, and posting a more specific
question if you have problems with that.  These links might be good
places to start:

http://www.postgresql.org/docs/current/interactive/tutorial-start.html

http://www.postgresql.org/docs/current/interactive/app-initdb.html
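
Putting those two docs together, a minimal self-contained cluster looks
roughly like this (the paths and port are hypothetical; point PGBIN at
wherever your packaged binaries actually live):

```shell
PGBIN=/usr/lib/postgresql/9.3/bin      # Debian-style layout; adjust as needed
PGDATA=$HOME/pg-standalone/data

"$PGBIN/initdb" -D "$PGDATA"           # new cluster, owned by the current user
"$PGBIN/pg_ctl" -D "$PGDATA" -l "$PGDATA/server.log" \
    -o "-p 5433 -k /tmp" start         # custom port and socket directory
"$PGBIN/psql" -h /tmp -p 5433 postgres -c "SELECT version();"
```

Because the cluster lives entirely under $PGDATA and uses its own port and
socket directory, it never touches /etc/postgresql or /var/run/postgresql.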

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[GENERAL] Re: How can I run a PostgreSQL database outside /var/run/postgresql?

2013-11-02 Thread David Johnston
Kevin Grittner-5 wrote
 Frank Church voipfc@ wrote:
 
 I just want to start up postgres or pg_ctl process using a
 different port, pointing to a different data directory and get it
 running, with permissions etc working okay.
 
 I recommend you spend some time looking over the relevant
 documentation, trying to make it work, and posting a more specific
 question if you have problems with that.  These links might be good
 places to start:
 
 http://www.postgresql.org/docs/current/interactive/tutorial-start.html
 
 http://www.postgresql.org/docs/current/interactive/app-initdb.html

Or this one:

http://www.postgresql.org/docs/current/interactive/installation.html

It's hard to tell what your ultimate goal is.  Do you want to create your
own custom file layout instead of the one chosen by one of the popular and
well-supported distros?  If so, the short answer is to, in effect, create
your own custom package.  That ultimately requires that you compile
PostgreSQL yourself with your own custom configuration.
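
The source build that implies is roughly the following (the prefix is
hypothetical; the installation docs linked above cover the details):

```shell
# From an unpacked PostgreSQL source tree: build under your own prefix,
# independent of any distro packaging.
./configure --prefix=$HOME/pgsql
make
make install
$HOME/pgsql/bin/initdb -D $HOME/pgsql/data   # then start with pg_ctl
```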

David J.





