Re: [GENERAL] pgdump (9.2.4) not dumping all tables

2014-11-20 Thread Marcos Cano
so i fixed it and got it working!!! i followed the best practice of doing the
dump with the newest pg_dump version.

and now it is working

thanks everyone for your help



--
View this message in context: 
http://postgresql.nabble.com/pgdump-9-2-4-not-dumping-all-tables-tp5827736p5827821.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] better architecture?

2014-11-20 Thread Adrian Klaver

On 11/20/2014 04:57 PM, zach cruise wrote:

On 11/20/14, Adrian Klaver  wrote:

On 11/20/2014 12:30 PM, zach cruise wrote:


For more info see:

http://www.postgresql.org/docs/9.3/interactive/continuous-archiving.html

to be clear- i changed my 2-VM setup {"1. master (dev) - 2. slave (prod)"}
to 3 VMs {"1. master (dev) - 2. slave (prod) - 3. archive (wal)"}.

but what do i gain?


Extra protection against failure, maybe.

So:

        ---> WAL Archive ----
       |                     |
       |                     v
master ----- Streaming ----> slave

If the direct link between the master and slave goes down, the slave can
still get WALs from the archive. If the archive machine goes down you
still have the direct link. If you take the slave down the master can
still push WALs to the archive. This assumes the 'machines' are actually
separated and connecting through different networks. You say you are
using VMs, but not where they are running. If they are all running on
the same machine running through the same network link then you really
do not have protection against network issues. The same if the host
machine goes down. This is one of those pen and paper times, when you
sketch out the arrangement and start doing what ifs.
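
As a concrete sketch of the slave side of that diagram: a 9.3-era standby that can use both the stream and the archive has both primary_conninfo and restore_command set. The host name and archive path below are hypothetical:

```
# recovery.conf on the standby -- host and path are hypothetical
standby_mode = 'on'
# direct link: streaming replication from the master
primary_conninfo = 'host=master.example.com port=5432 user=replicator'
# fallback: fetch archived segments when the stream is unavailable
restore_command = 'cp /mnt/wal_archive/%f "%p"'
```

If the streaming connection drops, the standby falls back to restore_command, which is exactly the redundancy the diagram is describing.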


First of all, the below is really in need of a whiteboard/paper diagram to 
keep track of the moving parts. That being said, here it goes:




master, slave and archive can be 3 separate VMs on 1 host, with their
clones on 2nd and 3rd hosts.


I can see the above being a potential nightmare. I am not sure how you 
ensure that the cloning process results in clones that exactly mirror 
the state of the originals at a particular point in time. Failing in 
that would seem to me to lead to no end of issues in the replication 
process.




a follow-up question on WAL recycling: ("When WAL archiving is being
done, the log segments must be archived before being recycled or
removed" from http://www.postgresql.org/docs/9.3/static/wal-configuration.html)

say streaming is off-
* if both master and archive are down, slave is still up and running. yes?


Yes.


* if master writes data when archive is down, it will copy over to
slave when archive is back up. yes?


If streaming is off and you are doing archiving then it will copy over 
to the archive.



* but if WAL is recycled before archive is back up, it will not copy
over to slave. yes?


The issue here, as pointed out previously, is that the WALs will stack up 
on the master because it will not be able to archive them. So then you 
run into a potential out-of-space issue on the master. From here:


http://www.postgresql.org/docs/9.3/interactive/continuous-archiving.html

"It is important that the archive command return zero exit status if and 
only if it succeeds. Upon getting a zero result, PostgreSQL will assume 
that the file has been successfully archived, and will remove or recycle 
it. However, a nonzero status tells PostgreSQL that the file was not 
archived; it will try again periodically until it succeeds."


...

"While designing your archiving setup, consider what will happen if the 
archive command fails repeatedly because some aspect requires operator 
intervention or the archive runs out of space. For example, this could 
occur if you write to tape without an autochanger; when the tape fills, 
nothing further can be archived until the tape is swapped. You should 
ensure that any error condition or request to a human operator is 
reported appropriately so that the situation can be resolved reasonably 
quickly. The pg_xlog/ directory will continue to fill with WAL segment 
files until the situation is resolved. (If the file system containing 
pg_xlog/ fills up, PostgreSQL will do a PANIC shutdown. No committed 
transactions will be lost, but the database will remain offline until 
you free some space.)"
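
The zero/nonzero exit-status contract quoted above can be sketched with a toy archive command; the paths and segment name here are throwaway examples, not anything PostgreSQL requires:

```shell
#!/bin/sh
# Toy stand-in for archive_command: succeed (exit 0) only if the
# segment is not already archived and the copy completes.
mkdir -p /tmp/waldemo/archive
echo 'segment data' > /tmp/waldemo/000000010000000000000001

archive_one() {
    test ! -f "/tmp/waldemo/archive/$1" && cp "/tmp/waldemo/$1" "/tmp/waldemo/archive/$1"
}

archive_one 000000010000000000000001 && echo "first try: archived" || echo "first try: failed"
# A second attempt on the same segment must NOT report success, or
# PostgreSQL would recycle a segment the archive never re-received.
archive_one 000000010000000000000001 && echo "second try: archived" || echo "second try: failed"
```

On failure PostgreSQL keeps the segment in pg_xlog/ and retries, which is the stacking-up behavior discussed above.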




see my concern with a separate archive is if archive is down and
master gets stuck retrying to push the same segment again and again,
there may be a problem in recovery when archive is back up. no?


See above.







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] better architecture?

2014-11-20 Thread zach cruise
On 11/20/14, Adrian Klaver  wrote:
> On 11/20/2014 12:30 PM, zach cruise wrote:
>>>
>>> For more info see:
>>>
>>> http://www.postgresql.org/docs/9.3/interactive/continuous-archiving.html
>> to be clear- i change my 2 VMs setup {"1. master (dev) - 2. slave
>> (prod) setup"} to 3 VMs {"1. master (dev) - 2. slave (prod) setup - 3.
>> archive (wal)"}.
>>
>> but what do i gain?
>
> Extra protection against failure, maybe.
>
> So:
>
>         ---> WAL Archive ----
>        |                     |
>        |                     v
> master ----- Streaming ----> slave
>
> If the direct link between the master and slave goes down, the slave can
> still get WALs from the archive. If the archive machine goes down you
> still have the direct link. If you take the slave down the master can
> still push WALs to the archive. This assumes the 'machines' are actually
> separated and connecting through different networks. You say you are
> using VMs, but not where they are running. If they are all running on
> the same machine running through the same network link then you really
> do not have protection against network issues. The same if the host
> machine goes down. This is one of those pen and paper times, when you
> sketch out the arrangement and start doing what ifs.

master, slave and archive can be 3 separate VMs on 1 host, with their
clones on 2nd and 3rd hosts.

a follow-up question on WAL recycling: ("When WAL archiving is being
done, the log segments must be archived before being recycled or
removed" from http://www.postgresql.org/docs/9.3/static/wal-configuration.html)

say streaming is off-
* if both master and archive are down, slave is still up and running. yes?
* if master writes data when archive is down, it will copy over to
slave when archive is back up. yes?
* but if WAL is recycled before archive is back up, it will not copy
over to slave. yes?
see my concern with a separate archive is if archive is down and
master gets stuck retrying to push the same segment again and again,
there may be a problem in recovery when archive is back up. no?




Re: [GENERAL] deferring ForeignKey checks when you didn't set a deferrable constraint ?

2014-11-20 Thread Melvin Davidson
Try the following queries. They will generate two .sql files
(create_fkeys.sql & drop_fkeys.sql).


First review them to make sure they look ok.
Then execute the drop_fkeys.sql
ie: \i drop_fkeys.sql

Do your deletes, then rebuild your fk's with
\i create_fkeys.sql

Good luck.


==
-- CREATE STATEMENT FOR ALL FK's

\o create_fkeys.sql

SELECT E'\\timing';

SELECT 'ALTER TABLE ' || n.nspname || '.' || '"' || t.relname || '"'
   || ' ADD CONSTRAINT ' || '"' || c.conname || '"'
   || ' '
   || pg_get_constraintdef( c.oid)
   || ';'
  FROM pg_class t
  JOIN pg_constraint c ON ( c.conrelid = t.OID AND c.contype = 'f')
  JOIN pg_namespace n ON (n.oid = t.relnamespace)
  JOIN pg_class f ON (f.oid = c.confrelid)
 WHERE c.contype = 'f'
   AND t.relkind = 'r'
   AND t.relname NOT LIKE 'pg_%'
   AND t.relname NOT LIKE 'sql_%'
   ORDER BY n.nspname,
t.relname;


-- DROP FK's

\o drop_fkeys.sql

SELECT E'\\timing';

SELECT 'ALTER TABLE ' || n.nspname || '.' || '"' || t.relname || '"'
   || ' DROP CONSTRAINT ' || '"' || c.conname || '"' || ' CASCADE;'
  FROM pg_class t
  JOIN pg_constraint c ON ( c.conrelid = t.OID AND c.contype = 'f')
  JOIN pg_namespace n ON (n.oid = t.relnamespace)
  JOIN pg_class f ON (f.oid = c.confrelid)
 WHERE c.contype = 'f'
   AND t.relkind = 'r'
   AND t.relname NOT LIKE 'pg_%'
   AND t.relname NOT LIKE 'sql_%'
   ORDER BY 1;
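
As a sketch of the workflow those two files enable (the DELETE target below is hypothetical, not from the thread):

```sql
\i drop_fkeys.sql               -- drop all FK constraints first

BEGIN;
DELETE FROM core_table          -- hypothetical table and criteria
 WHERE created_at < '2013-01-01';
COMMIT;

\i create_fkeys.sql             -- re-adding the FKs re-validates the data
```

Note that re-creating the constraints rescans the referencing tables, so integrity is still checked at the end, just once instead of per-row.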


On Thu, Nov 20, 2014 at 5:13 PM, Jonathan Vanasco  wrote:

>
> I have a core table with tens-of-millions of rows, and need to delete
> about a million records.
>
> There are 21 foreign key checks against this table.  Based on the current
> performance, it would take a few days to make my deletions.
>
> None of the constraints were defined as `DEFERRABLE INITIALLY IMMEDIATE',
> so I'm out of luck on deferring them.
>
> Dropping/redefining constraints looks to be an ordeal --  and something
> I'm scared to make a mistake on.
>
> i looked into disabling triggers on a table, but I couldn't find any info
> on how to trigger at the end of the transaction so I can ensure integrity.
>
> does anyone have suggestions on things that might work?
>
>
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


[GENERAL] deferring ForeignKey checks when you didn't set a deferrable constraint ?

2014-11-20 Thread Jonathan Vanasco

I have a core table with tens-of-millions of rows, and need to delete about a 
million records.

There are 21 foreign key checks against this table.  Based on the current 
performance, it would take a few days to make my deletions.

None of the constraints were defined as `DEFERRABLE INITIALLY IMMEDIATE', so 
I'm out of luck on deferring them.

Dropping/redefining constraints looks to be an ordeal --  and something I'm 
scared to make a mistake on.

i looked into disabling triggers on a table, but I couldn't find any info on 
how to trigger at the end of the transaction so I can ensure integrity.

does anyone have suggestions on things that might work?
 



Re: [GENERAL] better architecture?

2014-11-20 Thread Adrian Klaver

On 11/20/2014 12:30 PM, zach cruise wrote:



For more info see:

http://www.postgresql.org/docs/9.3/interactive/continuous-archiving.html

to be clear- i changed my 2-VM setup {"1. master (dev) - 2. slave (prod)"}
to 3 VMs {"1. master (dev) - 2. slave (prod) - 3. archive (wal)"}.

but what do i gain?


Extra protection against failure, maybe.

So:


        ---> WAL Archive ----
       |                     |
       |                     v
master ----- Streaming ----> slave


If the direct link between the master and slave goes down, the slave can 
still get WALs from the archive. If the archive machine goes down you 
still have the direct link. If you take the slave down the master can 
still push WALs to the archive. This assumes the 'machines' are actually 
separated and connecting through different networks. You say you are 
using VMs, but not where they are running. If they are all running on 
the same machine running through the same network link then you really 
do not have protection against network issues. The same if the host 
machine goes down. This is one of those pen and paper times, when you 
sketch out the arrangement and start doing what ifs.





as it is, in the worst case, VMs can always be restored "fairly
quickly" for our use.


For failover see:

http://www.postgresql.org/docs/9.3/interactive/warm-standby-failover.html

"PostgreSQL does not provide the system software required to identify a
failure on the primary and notify the standby database server. Many such
tools exist and are well integrated with the operating system facilities
required for successful failover, such as IP address migration."

So if you are looking for auto-promote you will need to look at third
party tools or writing your own script.

while i can always use "pg_ctl promote", any recommendations for windows?


Not from me, I do not run Postgres on Windows so I will be of no help
there.
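
For what it's worth, promotion itself is the same single command on any platform; a minimal sketch with hypothetical data-directory paths:

```shell
# On a Unix-like standby:
pg_ctl promote -D /var/lib/postgresql/9.3/main

# On Windows, the same binary from a command prompt (path hypothetical):
#   "C:\Program Files\PostgreSQL\9.3\bin\pg_ctl" promote -D "C:\pgdata"
```

The part PostgreSQL does not supply is deciding *when* to run it; that is what the third-party failover tools or a custom script have to handle.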





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] better architecture?

2014-11-20 Thread zach cruise
On 11/20/14, Adrian Klaver  wrote:
> On 11/20/2014 11:02 AM, zach cruise wrote:
>> On 11/20/14, Adrian Klaver  wrote:
>>> On 11/20/2014 08:00 AM, zach cruise wrote:
 combining replies for the list:

>
>>> Well it would depend on your setup and the load on the master. Assuming
>>> streaming replication. Simple explanation:
>> yes streaming replication.
>>>
>>> 1) If the master is down and slave is up then the slave will stall at
>>> whatever the last WAL was sent. When the master comes back up it will
>>> catch up as new WALs are generated.
>>>
>>> 2) If the slave is down and the master is up, the master will keep on
>>> creating WALs. The issue is that WALs are recycled over time, so given a
>>> significant load on the master and extended downtime for the slave it is
>>> possible that when the slave comes back up a WAL it needs is no longer
>>> available and it will start throwing errors. One way to tune this is
>>> modify wal_keep_segments (integer):
>>>
>>> http://www.postgresql.org/docs/9.3/interactive/runtime-config-replication.html
>> while the WAL archive directory has to be shared with both master and
>> slave, should the WAL archive directory be independent of them ie
>> should it not go down with either of them? if it has to go down with
>> one, it seems it'd best for the WAL archive directory to go down with
>> slave?
>
> So I am to understand that you have WAL archiving set up also?
yes, slave gets updates from stream and WAL. if either fails, the
other will update.

> Again a simplified version:
>
> The ideal situation is you have a third machine that has the WAL
> archives. The issue is that if the master cannot archive a WAL it will
> keep it around until it can. So depending on load and outage you can end
> up with disk space issues on the master should it not be able to clear the WALs.
>
> For more info see:
>
> http://www.postgresql.org/docs/9.3/interactive/continuous-archiving.html
to be clear- i changed my 2-VM setup {"1. master (dev) - 2. slave (prod)"}
to 3 VMs {"1. master (dev) - 2. slave (prod) - 3. archive (wal)"}.

but what do i gain?

as it is, in the worst case, VMs can always be restored "fairly
quickly" for our use.

>>> For failover see:
>>>
>>> http://www.postgresql.org/docs/9.3/interactive/warm-standby-failover.html
>>>
>>> "PostgreSQL does not provide the system software required to identify a
>>> failure on the primary and notify the standby database server. Many such
>>> tools exist and are well integrated with the operating system facilities
>>> required for successful failover, such as IP address migration."
>>>
>>> So if you are looking for auto-promote you will need to look at third
>>> party tools or writing your own script.
>> while i can always use "pg_ctl promote", any recommendations for windows?
>
> Not from me, I do not run Postgres on Windows so I will be of no help
> there.




Re: [GENERAL] Transactions to create pg_multixact members and offsets

2014-11-20 Thread Alvaro Herrera
Dev Kumkar wrote:
> On Thu, Nov 20, 2014 at 11:15 PM, Alvaro Herrera 
> wrote:

> > Merely waiting does not, but more than one lock being acquired on a
> > tuple does cause a multixact to be created.  Try SELECT FOR SHARE on two
> > transactions on the same tuple.
>
> Sure.
> Also what if there are only inserts/updates which transaction have and
> there are no explicit select queries?

If there are foreign keys on the tables, the system internally runs some
SELECT FOR KEY SHARE queries on the referenced tables (the ones
containing the primary or unique keys).  You can get some multixacts
that way too.
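
A minimal way to observe that is two overlapping share locks on one row (the table name is hypothetical):

```sql
-- Session 1:
BEGIN;
SELECT * FROM accounts WHERE id = 1 FOR SHARE;

-- Session 2, before session 1 commits:
BEGIN;
SELECT * FROM accounts WHERE id = 1 FOR SHARE;

-- Two share locks now coexist on the same tuple, so PostgreSQL records
-- the set of lockers as a multixact. COMMIT both sessions when done.
```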

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] better architecture?

2014-11-20 Thread Adrian Klaver

On 11/20/2014 11:02 AM, zach cruise wrote:

On 11/20/14, Adrian Klaver  wrote:

On 11/20/2014 08:00 AM, zach cruise wrote:

combining replies for the list:





Well it would depend on your setup and the load on the master. Assuming
streaming replication. Simple explanation:

yes streaming replication.


1) If the master is down and slave is up then the slave will stall at
whatever the last WAL was sent. When the master comes back up it will
catch up as new WALs are generated.

2) If the slave is down and the master is up, the master will keep on
creating WALs. The issue is that WALs are recycled over time, so given a
significant load on the master and extended downtime for the slave it is
possible that when the slave comes back up a WAL it needs is no longer
available and it will start throwing errors. One way to tune this is
modify wal_keep_segments (integer):

http://www.postgresql.org/docs/9.3/interactive/runtime-config-replication.html
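
As a hedged example, retaining extra segments for a slow-to-return standby is a one-line change; the value below is only illustrative and should be sized to your write load and expected downtime:

```
# postgresql.conf on the master (9.3-era setting; value is illustrative)
wal_keep_segments = 256    # segments are 16 MB each, so ~4 GB retained
```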

while the WAL archive directory has to be shared with both master and
slave, should the WAL archive directory be independent of them ie
should it not go down with either of them? if it has to go down with
one, it seems it'd best for the WAL archive directory to go down with
slave?




So I am to understand that you have WAL archiving set up also?

Again a simplified version:

The ideal situation is you have a third machine that has the WAL 
archives. The issue is that if the master cannot archive a WAL it will 
keep it around until it can. So depending on load and outage you can end up 
with disk space issues on the master should it not be able to clear the WALs.


For more info see:

http://www.postgresql.org/docs/9.3/interactive/continuous-archiving.html








For failover see:

http://www.postgresql.org/docs/9.3/interactive/warm-standby-failover.html

"PostgreSQL does not provide the system software required to identify a
failure on the primary and notify the standby database server. Many such
tools exist and are well integrated with the operating system facilities
required for successful failover, such as IP address migration."

So if you are looking for auto-promote you will need to look at third
party tools or writing your own script.

while i can always use "pg_ctl promote", any recommendations for windows?


Not from me, I do not run Postgres on Windows so I will be of no help there.







--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pgdump (9.2.4) not dumping all tables

2014-11-20 Thread Bill Moran
On Thu, 20 Nov 2014 11:52:23 -0700 (MST)
Marcos Cano  wrote:

> i found this in the file... 
> 
> ERROR:  could not access file "$libdir/rtpostgis-2.0": No such file or
> directory
> 
> 
> which according to the first link I found
> 
>   

Are you upgrading to a system that also has PostGIS 2.0? Or does it have a
different version of PostGIS installed?

Opinions may differ, but I wouldn't call this a bug. If you're trying to
upgrade two different pieces of software at the same time, you're going to
have to know the internals well enough to work things out. Have you tried
upgrading PostgreSQL in one step and PostGIS in another? Seems like the more
practical way to handle things. Probably the best bet is to upgrade PostGIS
first, then migrate to the new version of PostgreSQL -- but that depends on
your situation.

If that's impractical, you can probably do a pg_dump/restore in 2 stages:
first do a pg_dump -s to only dump the schema objects. Manually edit the
resultant file to adjust any version-specific PostGIS stuff before loading
that into the new server. Then do a pg_dump -a to dump all the data and load
that. Whether that works depends on how much has changed between PostGIS
versions -- I haven't worked with PostGIS in almost a year, so I don't know
for sure if it will work or not.
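
The two-stage dump/restore described above might look like this; the ports, database name, and file names are hypothetical:

```shell
# Stage 1: schema only, from the old cluster; hand-edit before loading
pg_dump -s -p 5433 mydb > schema.sql
# ...adjust version-specific PostGIS objects in schema.sql...
psql -p 5434 -d mydb -f schema.sql

# Stage 2: data only
pg_dump -a -p 5433 mydb > data.sql
psql -p 5434 -d mydb -f data.sql
```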

-- 
Bill Moran
I need your help to succeed:
http://gamesbybill.com




Re: [GENERAL] pgdump (9.2.4) not dumping all tables

2014-11-20 Thread Marcos Cano
yes i'd better look at the postgis list.. i thought for one moment that this
was encoding related.

thanks for your help







Re: [GENERAL] pgdump (9.2.4) not dumping all tables

2014-11-20 Thread Adrian Klaver

On 11/20/2014 10:52 AM, Marcos Cano wrote:

i found this in the file...


What file?

Remember the list needs context for your statements. You are at the 
computer and see all that goes on. We only know what you tell us, and 
statements without supporting data are hard to troubleshoot.




ERROR:  could not access file "$libdir/rtpostgis-2.0": No such file or
directory



Looks like something is looking for the old version of 
PostGIS (postgis-2.0.3) on the new database cluster where you have 
postgis-2.1.3 installed.




which according to the first link I found, is a postgis bug..

i don't know if that is the reason though.

but i guess so. :/


I would say you will probably have better luck pursuing this issue on 
the PostGIS list:


http://lists.osgeo.org/mailman/listinfo/postgis-users.

It seems there are things you need to sort out about migrating PostGIS 
versions before you can get to the dump/restore process.












--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] better architecture?

2014-11-20 Thread zach cruise
On 11/20/14, Adrian Klaver  wrote:
> On 11/20/2014 08:00 AM, zach cruise wrote:
>> combining replies for the list:
>>
>>
>> On 11/19/14, Charles Zaffery  wrote:
>>> 2 and 3 can be covered by this:
>>> http://clusterlabs.org/wiki/PgSQL_Replicated_Cluster
>> does something similar exist for windows?
>>
>>
>> On 11/20/14, Michael Paquier  wrote:
>>> On Thu, Nov 20, 2014 at 10:58 AM, zach cruise 
>>> wrote:
 2. what happens if master-slave are rebooted at different times?
>>> What do you mean by that? If replication is impacted?
>> eg if i were to reboot the vmware running the master in the evening,
>> and reboot the vmware running the slave in the night, how would they
>> sync up?
>
> Well it would depend on your setup and the load on the master. Assuming
> streaming replication. Simple explanation:
yes streaming replication.
>
> 1) If the master is down and slave is up then the slave will stall at
> whatever the last WAL was sent. When the master comes back up it will
> catch up as new WALs are generated.
>
> 2) If the slave is down and the master is up, the master will keep on
> creating WALs. The issue is that WALs are recycled over time, so given a
> significant load on the master and extended downtime for the slave it is
> possible that when the slave comes back up a WAL it needs is no longer
> available and it will start throwing errors. One way to tune this is
> modify wal_keep_segments (integer):
>
> http://www.postgresql.org/docs/9.3/interactive/runtime-config-replication.html
while the WAL archive directory has to be shared with both master and
slave, should the WAL archive directory be independent of them ie
should it not go down with either of them? if it has to go down with
one, it seems it'd best for the WAL archive directory to go down with
slave?


>> On 11/20/14, Adrian Klaver wrote:
>>> What replication method are you using?
>>> The built in methods, Slony. Bucardo, etc?
>> built in
>>
>>> The production users cannot enter or update records?
>> they can't. slave is read-only.
>>
>>> If you have replication set up master -> slave, how can there be a
>>> difference between the two?
>> there isn't. both contain dev and prod databases. users connect to the
>> dev databases from the dev web server, and to the prod databases from
>> the prod web server.
>
> Crossed wires on my part, I was reading databases and thinking database
> clusters.
>
>>
>>> Not sure where the mssql databases into this?
>> our corporate partners use them. when i need to query against them, i
>> import.
>>
 2. what happens if master-slave are rebooted at different times?

 3. i also need to auto-promote slave to master if master fails (without
 using repmgr or postgres-r or even postgres-xl). how?
>>> Answers for 2 & 3 are dependent on answers to the above questions.
>
> For failover see:
>
> http://www.postgresql.org/docs/9.3/interactive/warm-standby-failover.html
>
> "PostgreSQL does not provide the system software required to identify a
> failure on the primary and notify the standby database server. Many such
> tools exist and are well integrated with the operating system facilities
> required for successful failover, such as IP address migration."
>
> So if you are looking for auto-promote you will need to look at third
> party tools or writing your own script.
while i can always use "pg_ctl promote", any recommendations for windows?




Re: [GENERAL] Transactions to create pg_multixact members and offsets

2014-11-20 Thread Dev Kumkar
On Thu, Nov 20, 2014 at 11:15 PM, Alvaro Herrera 
wrote:

> Search for "burnmulti" in the archives, which is a contrib module to
> test pg_multixact.
>
Thanks, got some links. Will give a try and get back.

> Merely waiting does not, but more than one lock being acquired on a
> tuple does cause a multixact to be created.  Try SELECT FOR SHARE on two
> transactions on the same tuple.
>
Sure.
Also what if there are only inserts/updates which transaction have and
there are no explicit select queries?

Regards...


Re: [GENERAL] pgdump (9.2.4) not dumping all tables

2014-11-20 Thread Marcos Cano
i found this in the file... 

ERROR:  could not access file "$libdir/rtpostgis-2.0": No such file or
directory


which according to the first link I found, is a postgis bug..

i don't know if that is the reason though.

but i guess so. :/







Re: [GENERAL] pgdump (9.2.4) not dumping all tables

2014-11-20 Thread Bill Moran
On Thu, 20 Nov 2014 11:25:10 -0700 (MST)
Marcos Cano  wrote:

> i did it again... and the file shows exactly the same, lots of:
> 
> invalid command \N
> invalid command \N
> invalid command \N
> invalid command \N
> invalid command \N
> invalid command \N
> invalid command \N
> invalid command \N

Those errors are unrelated. Based on your earlier message, the table it's
trying to insert into doesn't exist, which means that each line of the copy
statement becomes that error as the script tries to continue executing.
Essentially, all of those are just chain-reaction symptoms of the real
error, which should be reported at the very beginning.

Go back to the top of that list of errors and find the one that precedes
all of them, and you'll start getting to the real cause of things. Based on
your earlier post, the table failed to be created ... find the reason that
fails and you'll be making headway.

There is a switch to psql (-v ON_ERROR_STOP=1) that causes psql to stop at
the first error it encounters instead of trying to execute what's in the
rest of the file. I don't know why that isn't the default anyway, but
enabling that will remove a lot of the red herrings from your error output.
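
A restore invocation using that switch might look like this (the file and database names are hypothetical):

```shell
# Stop at the first real error instead of cascading "invalid command \N"
psql -v ON_ERROR_STOP=1 -d newdb -f db_dump.sql
```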

-- 
Bill Moran
I need your help to succeed:
http://gamesbybill.com




Re: [GENERAL] pgdump (9.2.4) not dumping all tables

2014-11-20 Thread Marcos Cano
to answer Adrian:


1) i am using the old version to dump (i will try with the latest)

2) no the postgis version is different. pg9.2.4->  postgis-2.0.3  and
pg9.3.5->   postgis-2.1.3


3) the schema is public

\d+ al_shared_place

                              Table "public.al_shared_place"
   Column   |         Type          |                           Modifiers                            | Storage  | Stats target | Description
------------+-----------------------+----------------------------------------------------------------+----------+--------------+-------------
 id         | integer               | not null default nextval(('al_landmark_seq'::text)::regclass)  | plain    |              |
 x          | character varying(15) |                                                                | extended |              |
 y          | character varying(15) |                                                                | extended |              |
 name       | character varying(50) |                                                                | extended |              |
 placetype  | integer               |                                                                | plain    |              |
 point_geom | geometry              |                                                                | main     |              |
Indexes:
    "al_shared_place_pkey" PRIMARY KEY, btree (id)
    "al_sharedplace_geom_idx" gist (point_geom)
Check constraints:
    "$1" CHECK (srid(point_geom) = 4326)
    "$2" CHECK (geometrytype(point_geom) = 'POINT'::text OR point_geom IS NULL)
Has OIDs: no







--
View this message in context: 
http://postgresql.nabble.com/pgdump-9-2-4-not-dumping-all-tables-tp5827736p5827747.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] pgdump (9.2.4) not dumping all tables

2014-11-20 Thread Marcos Cano
i did it again... and the file shows exactly the same, lots of:

invalid command \N
invalid command \N
invalid command \N
invalid command \N
invalid command \N
invalid command \N
invalid command \N
invalid command \N








Re: [GENERAL] pgdump (9.2.4) not dumping all tables

2014-11-20 Thread Adrian Klaver

On 11/20/2014 10:06 AM, Marcos Cano wrote:

might be awkward but there is no file, so i assume 2 things:

1) there were no upgrade errors
2) the script + command is not writing to stderr (i think it is doing it)


Aah, meant to add:
Is there anything in the Postgres log for the time period of the restore 
that would help?













--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] pgdump (9.2.4) not dumping all tables

2014-11-20 Thread Adrian Klaver

On 11/20/2014 10:06 AM, Marcos Cano wrote:

might be awkward but there is no file, so i assume 2 things:

1) there were no upgrade errors


So to be clear the file below does not exist?:

2>$backup_path/$db_upgrade_errors.txt"  

or it exists but there is nothing in it?
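An aside worth checking here (an editor's observation, not something raised in the thread): a POSIX shell parses `$db_upgrade_errors` as a single variable name, so the redirect target may not be the file name intended:

```shell
# A POSIX shell reads "$db_upgrade_errors" as ONE variable name, not as
# "$db" followed by the literal "_upgrade_errors". If db_upgrade_errors
# is unset, the stderr redirect silently targets "$backup_path/.txt".
unset db_upgrade_errors            # simulate the script's environment
db=mydb                            # hypothetical database name
echo "$db_upgrade_errors.txt"      # prints ".txt"
echo "${db}_upgrade_errors.txt"    # prints "mydb_upgrade_errors.txt"
```

Writing the redirect as `2> "$backup_path/${db}_upgrade_errors.txt"` would make the intent unambiguous.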


2) the script + command is not writing to stderr (i think it is doing it)










--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] pgdump (9.2.4) not dumping all tables

2014-11-20 Thread Adrian Klaver

On 11/20/2014 09:47 AM, Marcos Cano wrote:

hello, i'm trying to dump a complete DB; i've been doing something like this.
(i'm in the process of upgrading from 9.2.4 to 9.3.5)


my current DB looks like this:
  Name |  Owner   | Encoding |   Collate   | Ctype |   Access privileges
 ------+----------+----------+-------------+-------+-----------------------
  DB   | postgres | UTF8     | en_US.UTF-8 | C     | =Tc/postgres         +
       |          |          |             |       | postgres=CTc/postgres+
       |          |          |             |       | jp=CTc/postgres

having 171 tables

my dump has been done with this:

  0) su postgres -c "/usr/local/pgsql9.2.4/bin/pg_dump  -p 5433 -Fc  -v $db >
$backup_path/$db.bkp"


Best practice is to use the newer version of pg_dump (9.3.5) to dump the 
older database.
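In this thread's terms, that would look something like the following (port and install paths as posted earlier; a sketch, not a drop-in script):

```shell
# Sketch: run the NEW 9.3.5 pg_dump binary against the OLD 9.2.4 server
# still listening on port 5433. $db and $backup_path are the poster's
# own variables.
su postgres -c "/usr/local/pgsql9.3.5/bin/pg_dump -p 5433 -Fc -v $db > $backup_path/$db.bkp"
```

The newer pg_dump knows how to talk to older servers and emits a dump the newer psql/pg_restore will accept cleanly; the reverse direction is not supported.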




1) then i create the DB (i postgis enable it)


Are you using the same versions of PostGIS on both servers?


2) and then do the restore with a postgis perl script
su postgres -c "export
PATH=/usr/local/pgsql9.3.5/bin/:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
&& perl /dls/gis/postgis-2.1.3/utils/postgis_restore.pl $baclup_file |
/usr/local/pgsql9.3.5/bin/psql -h localhost -p 5432 -U postgres $db 2>
$backup_path/$db_upgrade_errors.txt"   


everything seems to work fine until i noticed that i'm missing 5 tables;
doing a diff on two files i found out which 5 tables are missing. there is
nothing special about these tables except that I noticed some empty fields,
like this:

  id | x | y |       name       | placetype |              point_geom
 ----+---+---+------------------+-----------+--------------------------------------
   1 |   |   | Km. 223123 RN-09 |         1 | 010120E610F75BB76C0C1A57DCasdaas2F40
   2 |   |   | Km. 223120 RN-09 |         1 | 010120E610ECFCasdasdasd1D3FC122F40


What is the schema definition for al_shared_place?

Or to be more specific what are the data types for the fields?



then i tried to do an individual dump of the missing tables (which is not so
tedious given that there are only 5 tables)

but when i see the dump file those empty fields are translated to something
like this :

COPY al_shared_place (id, x, y, name, placetype, point_geom) FROM stdin;
1   \N  \N  Km. 223123 RN-09   1   010120E610F75BB76C0C1A57DCasdaas2F40
2   \N  \N  Km. 223120 RN-09   1   010120E610ECFCasdasdasd1D3FC122F40


so when i do the restore of the table

SET
SET
SET
SET
SET
SET
ERROR:  relation "al_shared_place" does not exist
invalid command \N
invalid command \N
invalid command \N
invalid command \N
invalid command \N
invalid command \N



thanks for your help.

NOTE: it is important to mention that my DB is a postgis-enabled DB. i don't
think this is the issue; that's why i'm asking here, because i think it's more
an encoding mismatch or something like that.








--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] pgdump (9.2.4) not dumping all tables

2014-11-20 Thread Marcos Cano
might be awkward but there is no file, so i assume 2 things:

1) there were no upgrade errors
2) the script + command is not writing to stderr (i think it is doing it)









Re: [GENERAL] pgdump (9.2.4) not dumping all tables

2014-11-20 Thread Tom Lane
Marcos Cano  writes:
> everything seems to work fine until i noticed that i'm missing 5 tables,

Did you look at the error output from the restore to see if there were
any complaints?

regards, tom lane




[GENERAL] pgdump (9.2.4) not dumping all tables

2014-11-20 Thread Marcos Cano
hello, i'm trying to dump a complete DB; i've been doing something like this.
(i'm in the process of upgrading from 9.2.4 to 9.3.5)


my current DB looks like this:
  Name |  Owner   | Encoding |   Collate   | Ctype |   Access privileges
 ------+----------+----------+-------------+-------+-----------------------
  DB   | postgres | UTF8     | en_US.UTF-8 | C     | =Tc/postgres         +
       |          |          |             |       | postgres=CTc/postgres+
       |          |          |             |       | jp=CTc/postgres

having 171 tables

my dump has been done with this:

 0) su postgres -c "/usr/local/pgsql9.2.4/bin/pg_dump  -p 5433 -Fc  -v $db >
$backup_path/$db.bkp"

1) then i create the DB (i postgis enable it)
2) and then do the restore with a postgis perl script
su postgres -c "export
PATH=/usr/local/pgsql9.3.5/bin/:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
&& perl /dls/gis/postgis-2.1.3/utils/postgis_restore.pl $baclup_file |
/usr/local/pgsql9.3.5/bin/psql -h localhost -p 5432 -U postgres $db 2>
$backup_path/$db_upgrade_errors.txt"


everything seems to work fine until i noticed that i'm missing 5 tables;
doing a diff on two files i found out which 5 tables are missing. there is
nothing special about these tables except that I noticed some empty fields,
like this:

  id | x | y |       name       | placetype |              point_geom
 ----+---+---+------------------+-----------+--------------------------------------
   1 |   |   | Km. 223123 RN-09 |         1 | 010120E610F75BB76C0C1A57DCasdaas2F40
   2 |   |   | Km. 223120 RN-09 |         1 | 010120E610ECFCasdasdasd1D3FC122F40

then i tried to do an individual dump of the missing tables (which is not so
tedious given that there are only 5 tables)

but when i see the dump file those empty fields are translated to something
like this :

COPY al_shared_place (id, x, y, name, placetype, point_geom) FROM stdin;
1   \N  \N  Km. 223123 RN-09   1   010120E610F75BB76C0C1A57DCasdaas2F40
2   \N  \N  Km. 223120 RN-09   1   010120E610ECFCasdasdasd1D3FC122F40


so when i do the restore of the table 

SET
SET
SET
SET
SET
SET
ERROR:  relation "al_shared_place" does not exist
invalid command \N
invalid command \N
invalid command \N
invalid command \N
invalid command \N
invalid command \N



thanks for your help.

NOTE: it is important to mention that my DB is a postgis-enabled DB. i don't
think this is the issue; that's why i'm asking here, because i think it's more
an encoding mismatch or something like that.







Re: [GENERAL] Transactions to create pg_multixact members and offsets

2014-11-20 Thread Alvaro Herrera
Dev Kumkar wrote:
> Hellos,
> 
> How to manually increase pg_multixact members and offsets?

Search for "burnmulti" in the archives, which is a contrib module to
test pg_multixact.

> Does a transaction waiting for an exclusive or shared lock result in an
> entry being created in pg_multixact?

Merely waiting does not, but more than one lock being acquired on a
tuple does cause a multixact to be created.  Try SELECT FOR SHARE on two
transactions on the same tuple.
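That can be reproduced like this (table t and id = 1 are placeholders):

```sql
-- Session 1:
BEGIN;
SELECT * FROM t WHERE id = 1 FOR SHARE;  -- single locker: a plain XID in xmax

-- Session 2, while session 1 is still open:
BEGIN;
SELECT * FROM t WHERE id = 1 FOR SHARE;  -- second locker: a MultiXactId is
                                         -- created to hold both locker XIDs
```

Since a tuple's xmax field can name only one transaction, the second share lock forces the two XIDs to be packed into a multixact.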

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] better architecture?

2014-11-20 Thread Adrian Klaver

On 11/20/2014 08:00 AM, zach cruise wrote:

combining replies for the list:


On 11/19/14, Charles Zaffery  wrote:

2 and 3 can be covered by this:
http://clusterlabs.org/wiki/PgSQL_Replicated_Cluster

does something similar exist for windows?


On 11/20/14, Michael Paquier  wrote:

On Thu, Nov 20, 2014 at 10:58 AM, zach cruise  wrote:

2. what happens if master-slave are rebooted at different times?

What do you mean by that? If replication is impacted?

eg if i were to reboot the vmware running the master in the evening,
and reboot the vmware running the slave in the night, how would they
sync up?


Well it would depend on your setup and the load on the master. Assuming 
streaming replication. Simple explanation:


1) If the master is down and slave is up then the slave will stall at 
whatever the last WAL was sent. When the master comes back up it will 
catch up as new WALs are generated.


2) If the slave is down and the master is up, the master will keep on 
creating WALs. The issue is that WALs are recycled over time, so given a 
significant load on the master and extended downtime for the slave it is 
possible that when the slave comes back up a WAL it needs is no longer 
available and it will start throwing errors. One way to tune this is to 
modify wal_keep_segments (integer):


http://www.postgresql.org/docs/9.3/interactive/runtime-config-replication.html
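For example, on the master (the value here is an assumption; size it to your WAL volume and the longest slave outage you want to survive):

```
# postgresql.conf on the master -- each segment is 16 MB, so a rough
# sizing rule is (WAL bytes generated during the longest expected slave
# outage) / 16 MB, with headroom.
wal_keep_segments = 256   # hypothetical value, keeps ~4 GB of WAL around
```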




On 11/20/14, Adrian Klaver wrote:

What replication method are you using?
The built-in methods, Slony, Bucardo, etc.?

built in


The production users cannot enter or update records?

they can't. slave is read-only.


If you have replication set up master -> slave, how can there be a difference 
between the two?

there isn't. both contain dev and prod databases. users connect to the
dev databases from the dev web server, and to the prod databases from
the prod web server.


Crossed wires on my part, I was reading databases and thinking database 
clusters.





Not sure where the mssql databases fit into this?

our corporate partners use them. when i need to query against them, i import.


2. what happens if master-slave are rebooted at different times?

3. i also need to auto-promote slave to master if master fails (without using 
repmgr or postgres-r or even postgres-xl). how?

Answers for 2 & 3 are dependent on answers to the above questions.


For failover see:

http://www.postgresql.org/docs/9.3/interactive/warm-standby-failover.html

"PostgreSQL does not provide the system software required to identify a 
failure on the primary and notify the standby database server. Many such 
tools exist and are well integrated with the operating system facilities 
required for successful failover, such as IP address migration."


So if you are looking for auto-promote you will need to look at third 
party tools or writing your own script.
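A bare-bones sketch of what such a script ends up doing (paths are assumptions; the failure-detection logic, which is the hard part, is omitted):

```
# On the 9.x standby, recovery.conf can name a trigger file:
#   trigger_file = '/tmp/pg_promote_trigger'
# Whatever monitoring decides the master is really dead then either
# touches that file:
touch /tmp/pg_promote_trigger
# or promotes directly (available since 9.1):
pg_ctl promote -D /path/to/standby/data
```

Everything around those two lines, deciding that the master is down rather than merely unreachable, and fencing it so it cannot come back as a second master, is what tools like repmgr exist for.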








--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] Transactions to create pg_multixact members and offsets

2014-11-20 Thread Dev Kumkar
Hellos,

How to manually increase pg_multixact members and offsets?

Does a transaction waiting for an exclusive or shared lock result in an
entry being created in pg_multixact?

Excerpt of multixact.c:
/*-
 *
 * multixact.c
 *     PostgreSQL multi-transaction-log manager
 *
 * The pg_multixact manager is a pg_clog-like manager that stores an
 * array of MultiXactMember for each MultiXactId. It is a fundamental
 * part of the shared-row-lock implementation.

Regards...


Re: [GENERAL] pgsql_tmp consuming all inodes

2014-11-20 Thread Adrian Klaver

On 11/20/2014 08:20 AM, Nestor A. Diaz wrote:

Hello People.

I have installed a postgres engine:  PostgreSQL 8.4.22 on
x86_64-unknown-linux-gnu, compiled by GCC gcc (Debian 4.7.2-5) 4.7.2,
64-bit from debian packages at postgresql official repository running on
debian wheezy 7.7 64 bit.

I have a couple of databases running on a cluster, and two of them have
been experiencing the following behavior since installed yesterday:

They create a lot of files under the "pgsql_tmp" directory, up to the point
that they consume all the inodes; when I do a 'df -i' I find the partition
assigned to the tablespace where the database is located is out of inodes.

I don't know what postgres internals trigger this behavior but at least
we can replicate the behavior with a query.


What query?

How is it executed?



Python database clients got this error message:

"DatabaseError: could not create temporary file
"pg_tblspc/16575/pgsql_tmp/pgsql_tmp27225.105148": No space left on device"

I work on postgresql 8.4 because of issues we found working with 9.1
when it comes to partitioning, however we can not switch to 9.3 because
we use postgis 1.5 which is supported up to 9.1.

On the other side our experience with 8.4 have been good, except that
today the database goes in recovery mode due to an out of memory error:

Nov 20 09:10:52 feline1 kernel: [125090.855205] Out of memory: Kill
process 14314 (postgres) score 145 or sacrifice child
Nov 20 09:10:52 feline1 kernel: [125090.855267] Killed process 14314
(postgres) total-vm:3579868kB, anon-rss:2708kB, file-rss:2668600kB
Nov 20 09:10:52 feline1 kernel: [125091.592295] postgres invoked
oom-killer: gfp_mask=0x280da, order=0, oom_adj=0, oom_score_adj=0
Nov 20 09:10:52 feline1 kernel: [125091.592367] postgres cpuset=/
mems_allowed=0
Nov 20 09:10:52 feline1 kernel: [125091.592402] Pid: 21339, comm:
postgres Tainted: G   O 3.2.0-4-amd64 #1 Debian 3.2.63-2+deb7u1


Well this is coming from the kernel OOM(Out Of Memory) manager.

For how to deal with this see here:

http://www.postgresql.org/docs/8.4/static/kernel-resources.html#AEN24213



Any ideas on how to fix the creation of too many files under the 'pgsql_tmp'
directory? Any hints on the out-of-memory error?





Slds.




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] [sfpug] Linuxfest 2015 Call for Papers

2014-11-20 Thread Adrian Klaver

On 11/20/2014 08:21 AM, David Gallagher wrote:

Hmm, end of January in Bellingham... Sounds like a great excuse to ski Mt Baker 
and beyond :D.


FYI the Bellingham event is at end of April. Trying to avoid the "Have 
You Seen Me?" being repeated at Heather Meadows:)



-David

On Nov 19, 2014, at 12:09 PM, Josh Berkus  wrote:

On 11/19/2014 11:57 AM, Adrian Klaver wrote:


So as not to have lost souls wandering around Seattle in April, LFNW is
actually in Bellingham(about 90 miles N):)


I'm now imagining geeks wandering around Pike Place Market with posters
of the Postgres elephant that say "Have You Seen Me?"

;-)

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] pgsql_tmp consuming all inodes

2014-11-20 Thread Tom Lane
"Nestor A. Diaz"  writes:
> I have a couple of databases running on a cluster, and two of them have
> been experiencing the following behavior since installed yesterday:
> They create a lot of files under directory "pgsql_tmp" up to the point
> they consume all the inodes, when I do a 'df -i' I found the partition
> assigned to the tablespace where the database is located is out of inodes.

I'd bet money that what you've got is a broken query, ie something that's
trying to compute a ridiculously large join.  PG will faithfully try to
do it, until it runs out of disk space to hold the intermediate result.
It's not exactly hard to do that in SQL --- forget a join condition,
say, and suddenly you've got a Cartesian-product intermediate result.
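A toy version of that failure mode in SQL (temp tables and names are hypothetical):

```sql
CREATE TEMP TABLE a AS SELECT generate_series(1, 1000) AS id;
CREATE TEMP TABLE b AS SELECT generate_series(1, 1000) AS id;

SELECT count(*) FROM a JOIN b ON a.id = b.id;  -- 1000 rows
SELECT count(*) FROM a, b;  -- join condition forgotten: 1,000,000 rows;
                            -- at real table sizes an intermediate result
                            -- like this spills to pgsql_tmp once it
                            -- exceeds work_mem
```

With two 1,000-row inputs the damage is a million rows; with two million-row inputs it is 10^12, which is how pgsql_tmp eats a partition.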

> On the other side our experience with 8.4 have been good, except that
> today the database goes in recovery mode due to an out of memory error:

Sounds a bit like the same problem in a different guise.

regards, tom lane




Re: [GENERAL] [sfpug] Linuxfest 2015 Call for Papers

2014-11-20 Thread Joshua D. Drake


On 11/20/2014 08:21 AM, David Gallagher wrote:


Hmm, end of January in Bellingham... Sounds like a great excuse to ski Mt Baker 
and beyond :D.


What I find funny is that Mt. Baker (mtbaker.us) is actually a resort 
on Mt. Shuksan, which is inarguably more beautiful :D


JD

--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, @cmdpromptinc
"If we send our children to Caesar for their education, we should
 not be surprised when they come back as Romans."




Re: [GENERAL] [sfpug] Linuxfest 2015 Call for Papers

2014-11-20 Thread David Gallagher
Hmm, end of January in Bellingham... Sounds like a great excuse to ski Mt Baker 
and beyond :D.
-David
> On Nov 19, 2014, at 12:09 PM, Josh Berkus  wrote:
> 
> On 11/19/2014 11:57 AM, Adrian Klaver wrote:
>> 
>> So as not to have lost souls wandering around Seattle in April, LFNW is
>> actually in Bellingham(about 90 miles N):)
> 
> I'm now imagining geeks wandering around Pike Place Market with posters
> of the Postgres elephant that say "Have You Seen Me?"
> 
> ;-)
> 
> -- 
> Josh Berkus
> PostgreSQL Experts Inc.
> http://pgexperts.com
> 





[GENERAL] pgsql_tmp consuming all inodes

2014-11-20 Thread Nestor A. Diaz
Hello People.

I have installed a postgres engine:  PostgreSQL 8.4.22 on
x86_64-unknown-linux-gnu, compiled by GCC gcc (Debian 4.7.2-5) 4.7.2,
64-bit from debian packages at postgresql official repository running on
debian wheezy 7.7 64 bit.

I have a couple of databases running on a cluster, and two of them have
been experiencing the following behavior since installed yesterday:

They create a lot of files under the "pgsql_tmp" directory, up to the point
that they consume all the inodes; when I do a 'df -i' I find the partition
assigned to the tablespace where the database is located is out of inodes.

I don't know what postgres internals trigger this behavior but at least
we can replicate the behavior with a query.

Python database clients got this error message:

"DatabaseError: could not create temporary file
"pg_tblspc/16575/pgsql_tmp/pgsql_tmp27225.105148": No space left on device"

I work on postgresql 8.4 because of issues we found working with 9.1
when it comes to partitioning, however we can not switch to 9.3 because
we use postgis 1.5 which is supported up to 9.1.

On the other side our experience with 8.4 have been good, except that
today the database goes in recovery mode due to an out of memory error:

Nov 20 09:10:52 feline1 kernel: [125090.855205] Out of memory: Kill
process 14314 (postgres) score 145 or sacrifice child
Nov 20 09:10:52 feline1 kernel: [125090.855267] Killed process 14314
(postgres) total-vm:3579868kB, anon-rss:2708kB, file-rss:2668600kB
Nov 20 09:10:52 feline1 kernel: [125091.592295] postgres invoked
oom-killer: gfp_mask=0x280da, order=0, oom_adj=0, oom_score_adj=0
Nov 20 09:10:52 feline1 kernel: [125091.592367] postgres cpuset=/
mems_allowed=0
Nov 20 09:10:52 feline1 kernel: [125091.592402] Pid: 21339, comm:
postgres Tainted: G   O 3.2.0-4-amd64 #1 Debian 3.2.63-2+deb7u1

Any ideas on how to fix the creation of too many files under the 'pgsql_tmp'
directory? Any hints on the out-of-memory error?

Slds.

-- 
Nestor A Diaz





Re: [GENERAL] better architecture?

2014-11-20 Thread zach cruise
combining replies for the list:


On 11/19/14, Charles Zaffery  wrote:
> 2 and 3 can be covered by this:
> http://clusterlabs.org/wiki/PgSQL_Replicated_Cluster
does something similar exist for windows?


On 11/20/14, Michael Paquier  wrote:
> On Thu, Nov 20, 2014 at 10:58 AM, zach cruise  wrote:
>> 2. what happens if master-slave are rebooted at different times?
> What do you mean by that? If replication is impacted?
eg if i were to reboot the vmware running the master in the evening,
and reboot the vmware running the slave in the night, how would they
sync up?


On 11/20/14, Adrian Klaver wrote:
> What replication method are you using?
> The built-in methods, Slony, Bucardo, etc.?
built in

> The production users cannot enter or update records?
they can't. slave is read-only.

> If you have replication set up master -> slave, how can there be a difference 
> between the two?
there isn't. both contain dev and prod databases. users connect to the
dev databases from the dev web server, and to the prod databases from
the prod web server.

> Not sure where the mssql databases fit into this?
our corporate partners use them. when i need to query against them, i import.

>> 2. what happens if master-slave are rebooted at different times?
>>
>> 3. i also need to auto-promote slave to master if master fails (without 
>> using repmgr or postgres-r or even postgres-xl). how?
> Answers for 2 & 3 are dependent on answers to the above questions.




Re: [GENERAL] Modeling Friendship Relationships

2014-11-20 Thread Alvaro Herrera
Robert DiFalco wrote:
> I have a question about modeling a mutual relationship. It seems basic but
> I can't decide, maybe it is 6 of one a half dozen of the other.
> 
> In my system any user might be friends with another user, that means they
> have a reciprocal friend relationship.
> 
> It seems I have two choices for modeling it.

Have you considered having an array with all friends of each person?
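A minimal sketch of that idea (table and column names are assumptions, not from the thread):

```sql
-- Store each user's friends as an integer array on the users row.
ALTER TABLE users ADD COLUMN friend_ids integer[] NOT NULL DEFAULT '{}';

-- "friends of user 7" becomes a single-row fetch:
SELECT friend_ids FROM users WHERE id = 7;

-- the reverse question ("who lists user 7?") stays indexable with GIN:
CREATE INDEX users_friend_ids_idx ON users USING gin (friend_ids);
SELECT id FROM users WHERE friend_ids @> ARRAY[7];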

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] better architecture?

2014-11-20 Thread Adrian Klaver

On 11/19/2014 05:58 PM, zach cruise wrote:

i need some advice:

1. for our small business, i have a master (dev) - slave (prod) setup.
i develop using the master. i get data from other people's dev
(mssql) databases. i also get data from their prod (mssql) databases. i
replicate everything on slave.

apps using the master can connect only to dev databases, and apps using
the slave can connect only to prod databases. so when it's time to go
from dev to prod, i simply change that connection. no restart.

comments?


Yes I am confused.

What replication method are you using?
The built-in methods, Slony, Bucardo, etc.?

The production users cannot enter or update records?

If you have replication set up master -> slave, how can there be a 
difference between the two?


Not sure where the mssql databases fit into this?



2. what happens if master-slave are rebooted at different times?

3. i also need to auto-promote slave to master if master fails (without
using repmgr or postgres-r or even postgres-xl). how?


Answers for 2 & 3 are dependent on answers to the above questions.






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Modeling Friendship Relationships

2014-11-20 Thread Robert DiFalco
Thanks Jonathan. So in your use case would you put non-approved friend
requests in this table as non-reciprocal? If so, did the person requesting
friendship get the row in there or the person receiving the friend request?
Also, if A and B are friends, and B decided to remove A as a friend, are
you saying that you would not remove both rows?

Thanks!

On Thu, Nov 13, 2014 at 8:10 AM, Jonathan Vanasco  wrote:

>
> On Nov 11, 2014, at 5:38 PM, Robert DiFalco wrote:
>
> > Thoughts? Do I just choose one or is there a clear winner? TIA!
>
>
> I prefer this model
>
> user_id__a INT NOT NULL REFERENCES user(id),
> user_id__b INT NOT NULL REFERENCES user(id),
> is_reciprocal BOOLEAN
> primary key (user_id__a, user_id__b)
>
> if a relationship is confirmed (or dropped) I toggle is_reciprocal.
> having that value saves a lot of work doing joins or analyzing friendship
> sets
>
> if you have multiple relationship types, then things get tricky.
>
> you can either
> - treat the row as a triplet ( user_id__a, user_id__b,
> relationship_type_id)   [i still recommend the reciprocal bool]
> - if you have a finite set of relationship types, you could just
> use each one as a bool column within the a2b row
>
> I've tried doing the "one row per relationship" approach, and didn't like
> it.   the time savings on simple searches were marginally faster, but the
> sql was increasingly more complex and slower to execute as we leveraged the
> table into other queries.
>
>
>
>


Re: [GENERAL] Performance question

2014-11-20 Thread Anil Menon
Thanks Adrian


On Thu, Nov 20, 2014 at 3:46 AM, Adrian Klaver 
wrote:

> On 11/19/2014 08:26 AM, Anil Menon wrote:
>
>> Hello,
>>
>> I would like to ask from your experience which would be the best
>> "generic" method for checking if row sets of a certain condition exists
>> in a PLPGSQL function.
>>
>> I know of 4 methods so far (please feel free to add if I missed out any
>> others)
>>
>> 1) get a count (my previous experience with ORCL shaped this option)
>>
>> select count(*) into vcnt
>> from table
>> where <>
>> if vcnt >0 then
>>do X
>> else
>>do y
>> end if
>> Cons : It seems doing a count(*) is not the best option for PG
>>
>
>
> Well that would depend on the table size, whether it was 100 rows vs
> 1,000,000 rows
>
>

The table is estimated/guesstimated to be ~900 million rows (~30M a day,
90 days of history, though initially it would be ~30M), though the <>
part of the query would return between 0 and ~2 rows.




>
>> 2) Use a non-count option
>> select primary_key_Col into vcnt
>> from table
>> where <>
>> if found then
>>do X
>> else
>>do y
>> end if
>> Cons :Some people seems not to prefer this as (AFAIU) it causes a
>> plpgsql->sql->plpgsql switches
>>
>
> plpgsql is fairly tightly coupled to SQL, so I have not really seen any
> problems. But then I am not working on large datasets.
>

I think that ~900M rows would most likely constitute a large data set.


>
>
>> 3) using perform
>> perform primary_key_Col into vcnt
>> from table
>> where <>
>> if found then
>>do X
>> else
>>do y
>> end if
>>
>> Seems to remove the above (item 2) issues (if any)
>>
>
> AFAIK, you cannot do the above as written. PERFORM does not return a
> result:
>
> http://www.postgresql.org/docs/9.3/interactive/plpgsql-statements.html#PLPGSQL-STATEMENTS-SQL-NORESULT
>
> It would have to be more like:
>
> perform primary_key_Col from table where <>
>
>
You are absolutely right - my bad.

>
>> 4) using exists
>> if exists ( select 1 from table where <> ) then
>>   do x
>> else
>>   do y
>> end if
>>
>>
>> My question is what would be the best (in terms of performance) method
>> to use? My gut feeling is to use option 4 for PG. Am I right or is there
>> any other method?
>>
>
> All of the above is context specific. To know for sure you will need to
> test on actual data.
>

Absolutely right; it's just that I want to ensure I follow the most
optimal method before the DB goes into production, after which priorities
change on what needs to be changed.


>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>

I guess the best answer would be "it's very context specific", but picking
the brains of experienced resources helps :-)

Thanks again
Anil
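For reference, option 4 from the question as a complete sketch (big_table and some_col are placeholders, not names from the thread):

```sql
-- EXISTS can stop at the first matching row, so with an index on
-- some_col it never pays for counting every match -- the property that
-- matters most on a ~900M-row table where the filter hits 0-2 rows.
CREATE OR REPLACE FUNCTION act_on_match(p_val bigint) RETURNS text AS $$
BEGIN
    IF EXISTS (SELECT 1 FROM big_table WHERE some_col = p_val) THEN
        RETURN 'do X';
    ELSE
        RETURN 'do y';
    END IF;
END;
$$ LANGUAGE plpgsql;
```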