[GENERAL] Problem with pl/python procedure connecting to the internet

2015-08-20 Thread Igor Sosa Mayor
Hi,

I'm a beginner with pgsql and have the following problem: I have written
a pl/python procedure that queries geolocations (with the geopy library)
and inserts them into a table. All this works perfectly.

The problem is the network connection configuration. I'm using
pgsql on a private laptop. In order to let the procedure get results
from the internet I had to set

listen_addresses = '*' (in postgresql.conf) and
host all all 0.0.0.0/0 trust (in pg_hba.conf)

I don't think this is a good solution. Moreover, it doesn't work everywhere
(work, home, etc.), since I imagine it depends on whether the port is open.

My question is therefore:
1. is there a way to permit pl/python to connect to the internet all
   the time, with a better configuration?
2. or should I forget the procedure and write a Python script outside
   the database (as in the sketch below)?
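
For reference, option 2 could look roughly like the following standalone
sketch (it assumes geopy's Nominatim geocoder and psycopg2; the table and
column names are made up):

import psycopg2
from geopy.geocoders import Nominatim

# Geocode a place name over the internet (plain outbound HTTP from this script).
geolocator = Nominatim(user_agent="geocoding-sketch")  # user_agent is required by recent geopy
location = geolocator.geocode("Salamanca, Spain")

if location is not None:
    # Store the result in a (made-up) table over a local connection.
    conn = psycopg2.connect("dbname=mydb")
    cur = conn.cursor()
    cur.execute(
        "INSERT INTO geolocations (place, latitude, longitude) VALUES (%s, %s, %s)",
        ("Salamanca, Spain", location.latitude, location.longitude),
    )
    conn.commit()
    cur.close()
    conn.close()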

Many thanks in advance.

Igor Sosa Mayor 

-- 
:: Igor Sosa Mayor :: joseleopoldo1...@gmail.com ::
:: GnuPG: 0x1C1E2890   :: http://www.gnupg.org/  ::
:: jabberid: rogorido  ::::





Re: [GENERAL] repmgr won't update witness after failover

2015-08-20 Thread Aviel Buskila
Hey,
Thanks for the reply, this helped me very much.

Kind Regards,
Aviel Buskila.
On Mon, Aug 17, 2015 at 08:49, "Jony Cohen"  wrote:

> Hi,
> The clone command just clones the data from node2 to node1; you also need
> to register it with the `force` option to override the old record (as if
> you were building a new replica node...)
> see:
>
> https://github.com/2ndQuadrant/repmgr#converting-a-failed-master-to-a-standby
>
> Regards,
>  - Jony
>
>
> On Sun, Aug 16, 2015 at 3:19 PM, Aviel Buskila  wrote:
>
>> Hey,
>> I think I know what the problem is: after the first failover, when I clone
>> the old master to be a standby with the 'repmgr standby clone' command, it
>> seems that nothing updates the repl_nodes table with the new standby in my
>> cluster, so on the next failover repmgrd fails to find a standby to promote..
>>
>> I confirmed this by manually updating the repl_nodes table after the clone
>> so that the old master is now listed as a standby database.
>>
>> Now my question is:
>> where is it supposed to happen that, after I issue 'repmgr standby clone',
>> repl_nodes also gets updated with the new standby server?
>>
>> Best regards,
>> Aviel Buskila
>>
>>
>>
>> 2015-08-16 12:11 GMT+03:00 Aviel Buskila :
>>
>>> hey,
>>>
>>> I have tried to set the configuration all over again, now the status of
>>> 'repl_nodes' before the failover is:
>>>
>>>  id |  type   | upstream_node_id |   cluster    | name  |                    conninfo                     | priority | active
>>> ----+---------+------------------+--------------+-------+-------------------------------------------------+----------+--------
>>>   1 | master  |                  | cluster_name | node1 | host=node1 dbname=repmgr port=5432 user=repmgr  |      100 | t
>>>   2 | standby |                1 | cluster_name | node2 | host=node2 dbname=repmgr port=5432 user=repmgr  |      100 | t
>>>   3 | witness |                  | cluster_name | node3 | host=node3 dbname=repmgr port=5499 user=repmgr  |      100 | t
>>>
>>>
>>> repmgr is started on node2 and node3 (standby and witness). Now, when I
>>> kill the postgres master process, I can see the following messages in the
>>> repmgrd log:
>>>
>>> [WARNING] connection to master has been lost, trying to recover... 60
>>> seconds before failover decision
>>>
>>> [WARNING] connection to master has been lost, trying to recover... 50
>>> seconds before failover decision
>>>
>>> [WARNING] connection to master has been lost, trying to recover... 40
>>> seconds before failover decision
>>>
>>> [WARNING] connection to master has been lost, trying to recover... 30
>>> seconds before failover decision
>>>
>>> [WARNING] connection to master has been lost, trying to recover... 20
>>> seconds before failover decision
>>>
>>> [WARNING] connection to master has been lost, trying to recover... 10
>>> seconds before failover decision
>>>
>>>
>>> and then, when it tried to elect node2 to be promoted, it shows the
>>> following messages:
>>>
>>> [DEBUG] connecting to: 'host=node2 user=repmgr dbname=repmgr
>>> fallback_application_name='repmgr''
>>>
>>> [WARNING] unable to determine a valid master server; waiting 10 seconds
>>> to retry...
>>>
>>> [ERROR] unable to determine a valid master node, terminating...
>>>
>>> [INFO] repmgrd terminating..
>>>
>>>
>>>
>>> what am I doing wrong?
>>>
>>>
>>> On 14/08/15 at 04:14, Aviel Buskila wrote:
>>> > Hey,
>>> > yes I did .. and still it won't fail back..
>>>
>>> Can you send over the output of "repmgr cluster show" before and after
>>> the failover process?
>>>
>>> Also send the output of SELECT * FROM repmgr_schema.repl_nodes; after the
>>> failover (replace repmgr_schema with the schema you have configured).
>>>
>>> Also, which version of repmgr are you running?
>>>
>>> > 2015-08-13 16:23 GMT+03:00 Jony Vesterman Cohen <
>>> jony.cohe...@gmail.com>:
>>> >
>>> >> Hi, did you make the old master follow the new one using repmgr?
>>> >>
>>> >> It doesn't update itself automatically...
>>> >> From the looks of it repmgr thinks you have 2 masters - the old one
>>> >> offline and the new one online.
>>>
>>> Regards,
>>>
>>> --
>>> Martín Marqués                 http://www.2ndQuadrant.com/
>>> PostgreSQL Development, 24x7 Support, Training & Services
>>>
>>
>
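
For reference, the re-registration step Jony describes is a single command
run on the re-cloned node, roughly like the following (the config file path
is an assumption):

repmgr -f /etc/repmgr.conf standby register --force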


[GENERAL] Dangers of mislabelled immutable functions

2015-08-20 Thread Jeff Janes
I want to index the textual representations of a table's rows.

You can cast a row to text by using the name of the table where you would
usually use the name of a column, like "table_name::text".  But this is not
immutable and so can't be used in an expression index.

I can easily make a pl/perl function which does the same thing as the text
cast, label it as immutable, and use it in an index. But what are the
dangers of doing this?
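
For concreteness, a minimal sketch of the pattern, driven through psycopg2
and using a SQL-language wrapper instead of PL/Perl (the table, function and
index names are made up; the IMMUTABLE label is exactly the mislabelling in
question):

import psycopg2

ddl = """
CREATE TABLE docs (id int, title text, body text);

-- Wrap the (non-immutable) row-to-text cast and hand-label it IMMUTABLE.
CREATE FUNCTION docs_as_text(docs) RETURNS text
    LANGUAGE sql IMMUTABLE AS 'SELECT $1::text';

-- Expression index over the textual representation of each row.
CREATE INDEX docs_as_text_idx ON docs (docs_as_text(docs));
"""

conn = psycopg2.connect("dbname=test")
cur = conn.cursor()
cur.execute(ddl)
conn.commit()
cur.close()
conn.close()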

I know that configuration changes to the date and time formats could change
the way the data is rendered, in ways that would cause rows inserted under a
different configuration to be missed. That doesn't bother me.

I know that if I drop a table column, the contents of the column will still
be in the index.  That doesn't bother me either, as all the indexes I plan
to use are lossy and so will do a recheck anyway.

What other problems am I setting myself up for by doing this?

Cheers,

Jeff


Re: [GENERAL] hot backup with zfs ?

2015-08-20 Thread Joseph Kregloh
On Thu, Aug 20, 2015 at 9:50 AM, Laurent Laborde 
wrote:

> Friendly greetings !
>
> I'm wondering if anyone has tried the following hot backup process, and if
> it works:
> pg_start_backup()
> zfs snapshot
> pg_stop_backup()
>
> copy the snapshot to the backup server.
>

That is how I currently handle our backup scheme. I take a couple of daily
ZFS snapshots as well as Barman backups. In the event that a rollback is
required, I can go back to the latest usable snapshot and replay the logs to
the point in time I want to recover, or even just go back to the snapshot's
point in time.
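
A minimal sketch of that sequence from a script, for reference (psycopg2 is
assumed; the dataset name tank/pgdata, the snapshot name and the backup host
are made up; it uses the pre-9.6 exclusive backup calls):

import subprocess
import psycopg2

snapname = "tank/pgdata@pg-backup"

conn = psycopg2.connect("dbname=postgres")
conn.autocommit = True
cur = conn.cursor()

# Tell PostgreSQL a base backup is starting (pg_backup_start on PG 15+).
cur.execute("SELECT pg_start_backup('zfs-snapshot', true)")
try:
    # Atomic filesystem snapshot of the dataset holding the data directory.
    subprocess.check_call(["zfs", "snapshot", snapname])
finally:
    cur.execute("SELECT pg_stop_backup()")

cur.close()
conn.close()

# Ship the snapshot to the backup server.
subprocess.check_call(
    "zfs send %s | ssh backuphost zfs recv backuppool/pgdata" % snapname,
    shell=True,
)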

ZFS snapshots are useful on their own for other reasons as well: testing,
for example, or transferring the DB from one machine to another, be it for
replication or to give devs a copy.

Please note that you should delete snapshots periodically, as they do take
up space and increase scrub time. I normally keep 10 days' worth of snapshots.


> Thx :)
>
> PS : the WAL will still be archived the usual way.
>
> --
> Laurent "ker2x" Laborde
>


Re: [GENERAL] hot backup with zfs ?

2015-08-20 Thread Yves Dorfsman
On 2015-08-20 07:50, Laurent Laborde wrote:
> I'm wondering if anyone has tried the following hot backup process, and if
> it works:
> pg_start_backup()
> zfs snapshot
> pg_stop_backup()
> 
> copy the snapshot to the backup server.

I have done this with AWS EBS snapshots, created new volumes from the
snapshots and created a new server with those volumes, and it works well.

I can't see why it'd be different with ZFS.

-- 
http://yves.zioup.com
gpg: 4096R/32B0F416





[GENERAL] hot backup with zfs ?

2015-08-20 Thread Laurent Laborde
Friendly greetings !

I'm wondering if anyone has tried the following hot backup process, and if it
works:
pg_start_backup()
zfs snapshot
pg_stop_backup()

copy the snapshot to the backup server.
Thx :)

PS: the WAL will still be archived the usual way.

-- 
Laurent "ker2x" Laborde


Re: [GENERAL] Preparing statement using PQexec vs PQprepare

2015-08-20 Thread Merlin Moncure
On Wed, Aug 19, 2015 at 5:55 AM, tonymark09  wrote:
> Hello,
>
> Is preparing a statement using PQexec and executing it much slower than
> using other APIs such as PQprepare and PQexecPrepared?

PQexec can be faster than a combination of PQprepare and
PQexecPrepared, particularly if you only need to execute the query
once, but I don't recommend using it for the most part. If the query
is to be executed many times, or has a lot of parameters, you'll want
to use the prepared interface.
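
The same trade-off is also visible without touching libpq directly; a rough
sketch at the SQL level, using server-side PREPARE/EXECUTE through psycopg2
(psycopg2 and the query shown are assumptions, not part of the discussion
above):

import psycopg2

conn = psycopg2.connect("dbname=test")
cur = conn.cursor()

# One-shot query: parsed, planned and executed in a single call.
cur.execute("SELECT count(*) FROM pg_class WHERE relname = %s", ("pg_proc",))
print(cur.fetchone()[0])

# Repeated query: prepare once, then execute with different parameters,
# so the statement is parsed once and its plan can be reused.
cur.execute("PREPARE count_rel (text) AS "
            "SELECT count(*) FROM pg_class WHERE relname = $1")
for name in ("pg_proc", "pg_attribute", "pg_index"):
    cur.execute("EXECUTE count_rel (%s)", (name,))
    print(cur.fetchone()[0])

cur.close()
conn.close()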

merlin

