How to install pgAgent on Windows for postgresql-bigsql-10.5

2018-09-05 Thread jimmy
I used PostgreSQL-10.5-1-win64-bigsql.exe to install the PostgreSQL database.
How can I install pgAgent on Windows for postgresql-bigsql-10.5?
I have been searching for articles on installing pgAgent,
but they do not work.
I found that there is no Windows version of pgAgent on the website
'www.pgadmin.org',
and the PostgreSQL-10.5-1-win64-bigsql.exe installer does not include pgAgent either.
When I execute CREATE EXTENSION pgagent, it throws: ERROR:  could not open
extension control file
"H:/PostgreSQL/pg10/../pg10/share/postgresql/extension/pgagent.control": No
such file or directory.
How can I resolve these problems?
Thank you.



Re: PostgreSQL internal scheduler?

2018-09-05 Thread Jeremy Finzel
If I follow your use case, we have written something that may just fit your
scenario, and we plan to open-source it rather soon.

It has several layers, but let me boil it down. First, we use an open-source
auditing system to log changes to the source tables. This becomes your
queue. A Postgres background worker then asynchronously processes these
changes based on your configuration, which is highly configurable. It also
handles the concurrency you are asking about.

This allows you to build history tables without requiring you, for example,
to do it directly via a trigger. It also removes redundancy if you have the
same key updated multiple times. It assumes we are fine with the built data
not being 100% up to date, because these updates obviously don’t all
happen in the same transaction as the source data change.

Let me know if this interests you and I can share more.

Thanks,
Jeremy

On Wed, Sep 5, 2018 at 10:07 AM Thiemo Kellner 
wrote:

>
> Hi all
>
> I am designing a framework for historisation implementation (SCD). One
> feature I would like to provide is a table into which the actual state of
> an entity is put and, once this is complete, this history table is
> "updated":
>
>     --------------       ---------------
> ==> | ENTITY_ACT |  ==>  | ENTITY_HIST |
>     --------------       ---------------
>
> I plan to use instead-of triggers on the hist table that read the
> actual table and perform all necessary inserts and updates on the
> history table. If I want the termination of a record version (actually
> the record of a specific business key with a specific payload) to get
> propagated up and/or down referential integrities (no overlapping
> validities), I have to make sure that only one of those processes is
> modifying a table. I was thinking of a scheduler queue into which the
> trigger would put a process request and which PostgreSQL would work
> through. Is there a scheduler within PostgreSQL? I read the
> documentation and searched the web but could not find a hint. But
> before going another road or implementing something myself, I ask.
> Maybe this design is no good at all.
>
> Kind regards
>
> Thiemo
>
>
>


Re: PostgreSQL internal scheduler?

2018-09-05 Thread Thiemo Kellner
Because I am looking for a FIFO queue, not a time-based schedule.
Thanks anyway.


Quoting Tim Clarke :



Why not just call your "do a scheduled run" code from cron?

Tim Clarke







Re: PostgreSQL internal scheduler?

2018-09-05 Thread Thiemo Kellner
This is a queue, but if I am not mistaken it lives outside PostgreSQL,
where I am very reluctant to go. I will look at it in more depth.
Thanks!


Quoting Ron :


Maybe https://github.com/chanks/que is what you need.

On 09/05/2018 02:35 PM, Thiemo Kellner wrote:
I have seen pg_cron but it is not what I am looking for. It
schedules tasks only by time; I am looking for a FIFO queue.
Nor does pg_cron prevent simultaneous runs, I believe.







Re: PostgreSQL internal scheduler?

2018-09-05 Thread Ron



Maybe https://github.com/chanks/que is what you need.

On 09/05/2018 02:35 PM, Thiemo Kellner wrote:
I have seen pg_cron but it is not what I am looking for. It schedules
tasks only by time; I am looking for a FIFO queue. Nor does pg_cron
prevent simultaneous runs, I believe.


Quoting Thomas Kellerer :


There is no built-in scheduler, but there is an extension that supplies that

https://github.com/citusdata/pg_cron







--
Angular momentum makes the world go 'round.



Re: PostgreSQL internal scheduler?

2018-09-05 Thread Tim Clarke
On 05/09/18 16:06, Thiemo Kellner wrote:
>
> Hi all
>
> I am designing a framework for historisation implementation (SCD). One
> feature I would like to provide is a table into which the actual state of
> an entity is put and, once this is complete, this history table is
> "updated":
>
>     --------------       ---------------
> ==> | ENTITY_ACT |  ==>  | ENTITY_HIST |
>     --------------       ---------------
>
> I plan to use instead-of triggers on the hist table that read the
> actual table and perform all necessary inserts and updates on the
> history table. If I want the termination of a record version (actually
> the record of a specific business key with a specific payload) to get
> propagated up and/or down referential integrities (no overlapping
> validities), I have to make sure that only one of those processes is
> modifying a table. I was thinking of a scheduler queue into which the
> trigger would put a process request and which PostgreSQL would work
> through. Is there a scheduler within PostgreSQL? I read the
> documentation and searched the web but could not find a hint. But
> before going another road or implementing something myself, I ask.
> Maybe this design is no good at all.
>
> Kind regards
>
> Thiemo
>
>

Why not just call your "do a scheduled run" code from cron?

Tim Clarke



Re: PostgreSQL internal scheduler?

2018-09-05 Thread Thiemo Kellner
I have seen pg_cron but it is not what I am looking for. It schedules
tasks only by time; I am looking for a FIFO queue. Nor does pg_cron
prevent simultaneous runs, I believe.


Quoting Thomas Kellerer :


There is no built-in scheduler, but there is an extension that supplies that

https://github.com/citusdata/pg_cron







Re: Max number of WAL files in pg_xlog directory for Postgres 9.2 version

2018-09-05 Thread Andreas Kretschmer




Am 05.09.2018 um 19:39 schrieb Raghavendra Rao J S V:

Hi All,

We are using a Postgres 9.2 version database.


9.2 is out of support. Please consider an upgrade, soon!



Please let me know: what is the maximum number of WAL files in the pg_xlog directory?


That depends on the workload and on several settings. There is no rule of thumb.


Regards, Andreas
--

2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Max number of WAL files in pg_xlog directory for Postgres 9.2 version

2018-09-05 Thread Ron

On 09/05/2018 12:39 PM, Raghavendra Rao J S V wrote:

Hi All,

We are using a Postgres 9.2 version database.

Please let me know: what is the maximum number of WAL files in the pg_xlog directory?

What is the formula? I am seeing different formulas. Could you provide me 
the one which decides the maximum number of WAL files in the pg_xlog directory 
for a Postgres 9.2 database, please?


If you're doing WAL file replication, and -- for whatever reason -- the 
standby system isn't applying them fast enough, there could be tens of 
thousands of files in pg_xlog.  (It would drain pretty quickly when you 
resolve the problem, though.)



--
Angular momentum makes the world go 'round.


Re: unaccent(text) fails depending on search_path (WAS: pg_upgrade fails saying function unaccent(text) doesn't exist)

2018-09-05 Thread Tom Lane
Gunnlaugur Thor Briem  writes:
> SET search_path = "$user"; SELECT public.unaccent('fóö');
> SET
> ERROR:  text search dictionary "unaccent" does not exist

Meh.  I think we need the attached, or something just about like it.

It's barely possible that there's somebody out there who's relying on
setting the search path to allow choosing among multiple "unaccent"
dictionaries.  But there are way more people whose functions are
broken due to the recent search-path-tightening changes.

regards, tom lane

diff --git a/contrib/unaccent/unaccent.c b/contrib/unaccent/unaccent.c
index 247c202..5f6ad8a 100644
*** a/contrib/unaccent/unaccent.c
--- b/contrib/unaccent/unaccent.c
***
*** 20,25 
--- 20,26 
  #include "tsearch/ts_locale.h"
  #include "tsearch/ts_public.h"
  #include "utils/builtins.h"
+ #include "utils/lsyscache.h"
  #include "utils/regproc.h"
  
  PG_MODULE_MAGIC;
*** unaccent_dict(PG_FUNCTION_ARGS)
*** 376,382 
  
  	if (PG_NARGS() == 1)
  	{
! 		dictOid = get_ts_dict_oid(stringToQualifiedNameList("unaccent"), false);
  		strArg = 0;
  	}
  	else
--- 377,393 
  
  	if (PG_NARGS() == 1)
  	{
! 		/*
! 		 * Use the "unaccent" dictionary that is in the same schema that this
! 		 * function is in.
! 		 */
! 		Oid			procnspid = get_func_namespace(fcinfo->flinfo->fn_oid);
! 		char	   *procnsp = get_namespace_name(procnspid);
! 		List	   *dictname;
! 
! 		dictname = list_make2(makeString(procnsp),
! 			  makeString(pstrdup("unaccent")));
! 		dictOid = get_ts_dict_oid(dictname, false);
  		strArg = 0;
  	}
  	else


Re: scram-sha-256 authentication broken in FIPS mode

2018-09-05 Thread Michael Paquier
On Wed, Sep 05, 2018 at 01:19:39PM +, Alessandro Gherardi wrote:
> Hi Michael, I'm actually running Postgres on Windows.

First you may want to avoid top-posting.  This is not the style of the
community lists and this breaks the logic of a thread.

> I added code to fe-secure-openssl.c and be-secure-openssl.c that reads
> the Windows "standard" FIPS registry entry, and if FIPS is enabled
> calls FIPS_mode_set(1). This is to mimic to behavior of the .NET
> framework.

That's rather uncharted territory, as you are patching both the backend
*and* the client.  If we could prove that sha2-openssl.c is actually
unreliable even if FIPS is enabled system-wide with either SCRAM
authentication or any of the other hashing functions, then I would be
ready to accept a patch.  Now, as far as I can see and heard from other
folks for at least Linux, if FIPS is enabled at the OS level, then
Postgres would use it automatically and SCRAM is able to work.  I have
yet to hear that this part is broken.  As far as I know from companies
within the community which worked on STIG requirements, the thing
works.

> Below is the code I added to fe-secure-openssl.c, the code in
> be-secure-openssl.c is similar: 
> Thoughts? I can try to fix the scram-sha-256 issue by using EVP and
> send you a merge request for the patch and the code below if you think
> my approach is correct.

That's a bit unreadable I am afraid :)
You may want to attach a patch after producing it with for example "git
format-patch -1".
--
Michael




Re: Autovacuum degrades all other operations by keeping all buffers dirty?

2018-09-05 Thread David Pacheco
On Fri, Aug 31, 2018 at 3:50 PM, Andres Freund  wrote:

> On 2018-08-31 19:31:47 -0300, Alvaro Herrera wrote:
> > On 2018-Aug-31, David Pacheco wrote:
> >
> > > From reading the 9.6.3 source, it looks like the autovacuum process
> > > itself is single-threaded, and it reads pages essentially linearly
> > > from the relation (possibly skipping some).  When the autovacuum
> > > process needs to modify a page, it doesn't write it directly, but
> > > rather marks the buffer dirty.  The page will be written later,
> >
> > Unless there's some bug, there is a BufferAccessStrategy that only lets
> > a few dozen buffers go unwritten before the autovac worker process
> > itself is forced to write some.
>
> I've not re-checked, but I'm not sure that's true if the buffer is
> already in s_b, which it'll be for many workloads.
>


Does that mean this analysis from above is accurate?

> It looks to me like the autovacuum process is effectively generating work
> (in the form of async writes) that's being distributed implicitly to the
> various backend processes, creating latency for any other query that
> happens to require a buffer (including read-only queries).


Thanks,
Dave


Re: Full table lock dropping a foreign key

2018-09-05 Thread Tom Lane
Paul Jungwirth  writes:
> I noticed that Postgres takes an AccessExclusiveLock (a lock on the 
> whole table) against the *referenced* table when dropping a foreign key. 

Yeah, that's because it involves removing a trigger.

Adding a trigger used to require AEL as well, but it was successfully
argued that that end of things could use a weaker lock.  I find the
reasoning rather dubious myself.

regards, tom lane



RE: Max number of WAL files in pg_xlog directory for Postgres 9.2 version

2018-09-05 Thread Johnes Castro
One WAL file occupies 16MB by default.
The parameter in version 9.2 that controls this is wal_keep_segments.

By setting the parameter to 10, the maximum size of the pg_xlog directory 
will be 160MB.
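The arithmetic can be sketched quickly. Note this is a rough estimate only: it assumes the default 16MB segment size, and in 9.2 checkpoint_segments and checkpoint activity also influence how many files actually sit in pg_xlog.

```python
# Rough estimate of the WAL volume retained for standbys by
# wal_keep_segments, assuming the default 16MB segment size.
# This is not a hard cap: in 9.2, checkpoint_segments and
# checkpoint timing also affect the pg_xlog file count.
SEGMENT_SIZE_MB = 16

def retained_wal_mb(wal_keep_segments: int) -> int:
    """Approximate MB of WAL kept in pg_xlog for wal_keep_segments."""
    return wal_keep_segments * SEGMENT_SIZE_MB

print(retained_wal_mb(10))  # → 160, matching the 160MB figure above
```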

Best Regards,
Johnes Castro



From: Johnes Castro 
Sent: Wednesday, September 5, 2018 15:48
To: Raghavendra Rao J S V; pgsql-general@lists.postgresql.org
Subject: RE: Max number of WAL files in pg_xlog directory for Postgres 9.2 
version

Hi,

This page in the documentation can help you.
https://www.postgresql.org/docs/9.2/static/wal-configuration.html

Best Regards,
Johnes Castro



From: Raghavendra Rao J S V 
Sent: Wednesday, September 5, 2018 15:39
To: pgsql-general@lists.postgresql.org
Subject: Max number of WAL files in pg_xlog directory for Postgres 9.2 version

Hi All,

We are using a Postgres 9.2 version database.

Please let me know: what is the maximum number of WAL files in the pg_xlog directory?

What is the formula? I am seeing different formulas. Could you provide me the one 
which decides the maximum number of WAL files in the pg_xlog directory for a 
Postgres 9.2 database, please?


--
Regards,
Raghavendra Rao J S V



RE: Max number of WAL files in pg_xlog directory for Postgres 9.2 version

2018-09-05 Thread Johnes Castro
Hi,

This page in the documentation can help you.
https://www.postgresql.org/docs/9.2/static/wal-configuration.html

Best Regards,
Johnes Castro



From: Raghavendra Rao J S V 
Sent: Wednesday, September 5, 2018 15:39
To: pgsql-general@lists.postgresql.org
Subject: Max number of WAL files in pg_xlog directory for Postgres 9.2 version

Hi All,

We are using a Postgres 9.2 version database.

Please let me know: what is the maximum number of WAL files in the pg_xlog directory?

What is the formula? I am seeing different formulas. Could you provide me the one 
which decides the maximum number of WAL files in the pg_xlog directory for a 
Postgres 9.2 database, please?


--
Regards,
Raghavendra Rao J S V



Max number of WAL files in pg_xlog directory for Postgres 9.2 version

2018-09-05 Thread Raghavendra Rao J S V
Hi All,

We are using a Postgres 9.2 version database.

Please let me know: what is the maximum number of WAL files in the pg_xlog directory?

What is the formula? I am seeing different formulas. Could you provide me
the one which decides the maximum number of WAL files in the pg_xlog directory
for a Postgres 9.2 database, please?


-- 
Regards,
Raghavendra Rao J S V


Re: increasing HA

2018-09-05 Thread Dimitri Maziuk
On 09/05/2018 12:04 PM, Thomas Poty wrote:
> I want to get :
> Automatic failover (promoting a slave)
> Automatic Routing traffic to master
> Fencing in case of node failure.

Why would you want to fence a node that's already failed? -- You want to
fence off the master during the failover so it does not accept any more
DB updates.

Anyway, here's how we run:
- We provide services to customers; customer-facing service access
points are where we want to reduce downtime.
- We buy decent-quality hardware with redundant disks and power
circuits. 99.% of the downtime is maintenance reboots.
- We carefully partition our services so that most of them require
read-only access. Those run off slaves.
- The master is not even visible to the customers. Its downtime does not
affect service availability.

Obviously, this works because we can run read-only. If you need
read-write on a public-facing database, the above won't work for you.

-- 
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu





Re: increasing HA

2018-09-05 Thread Nicolas Karolak
I have a setup kind of like yours, one primary and two standbys, but
with streaming replication.

Here is what I use:
-> on the application (Django webapp) servers:
|---> a local HAProxy as a frontend for the database servers, doing the
fencing through a check on a custom daemon running on the database servers
-> on the PostgreSQL servers:
|---> repmgr for the automatic failover
|---> the custom daemon, which exposes the state of the node (by trying
to retrieve its state according to each node's repmgr status table and
doing a kind of vote, for example "if according to me I'm the primary
but not according to the two other nodes, I'm fenced")
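The voting rule described boils down to a simple majority check. A hypothetical sketch — the function name and inputs are illustrative assumptions, not the actual daemon:

```python
# Hypothetical sketch of the fencing vote: a node fences itself when
# the majority of its peers disagree with its own view of its role
# (e.g. it believes it is the primary, but the other nodes, per their
# repmgr status tables, say it is not).
def is_fenced(my_view: str, peer_views: list[str]) -> bool:
    """my_view: the role this node believes it holds ('primary'/'standby').
    peer_views: the role each peer reports for this node."""
    agree = sum(1 for view in peer_views if view == my_view)
    return len(peer_views) - agree > agree

# "I think I'm primary, but both other nodes disagree" -> fenced.
print(is_fenced("primary", ["standby", "standby"]))  # True
```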

Regards,
Nicolas KAROLAK | SysAdmin
6-8, rue André Voguet - 94200 Ivry-sur-Seine
+33 1 77 56 77 96 | www.ubicast.eu | www.ubicast.tv

On Wed, Sep 5, 2018 at 18:39, Dmitri Maziuk wrote:
>
> On Wed, 5 Sep 2018 17:45:05 +0200
> Thomas Poty  wrote:
>
> > We have one master and 2 slaves so 3 nodes
>
> So what is the "HA" that you're trying to "increase"? Are you adding a 3rd 
> slave? A 2nd master? A hot standby? All of the above?
>
> --
> Dmitri Maziuk 
>



Re: increasing HA

2018-09-05 Thread Thomas Poty
I want to get :
Automatic failover (promoting a slave)
Automatic Routing traffic to master
Fencing in case of node failure.

I already have 2 asynchronous slaves in hot standby mode + Replication
slot. I don't want to add a new node.



On Wed, Sep 5, 2018 at 18:39, Dmitri Maziuk wrote:

> On Wed, 5 Sep 2018 17:45:05 +0200
> Thomas Poty  wrote:
>
> > We have one master and 2 slaves so 3 nodes
>
> So what is the "HA" that you're trying to "increase"? Are you adding a 3rd
> slave? A 2nd master? A hot standby? All of the above?
>
> --
> Dmitri Maziuk 
>
>


Re: increasing HA

2018-09-05 Thread Dmitri Maziuk
On Wed, 5 Sep 2018 17:45:05 +0200
Thomas Poty  wrote:

> We have one master and 2 slaves so 3 nodes

So what is the "HA" that you're trying to "increase"? Are you adding a 3rd 
slave? A 2nd master? A hot standby? All of the above? 

-- 
Dmitri Maziuk 



Full table lock dropping a foreign key

2018-09-05 Thread Paul Jungwirth

Hello,

I noticed that Postgres takes an AccessExclusiveLock (a lock on the 
whole table) against the *referenced* table when dropping a foreign key. 
I wasn't expecting that, and some experimentation showed it does *not* 
take one when creating the FK. For example:


pjtest=# create table parent (id integer primary key);
CREATE TABLE
pjtest=# create table child (id integer primary key, parent_id integer);
CREATE TABLE
pjtest=# begin;
BEGIN
pjtest=# alter table child add constraint pfk foreign key (parent_id) 
references parent (id);

ALTER TABLE

And now pg_locks has this:

pjtest=# select locktype, relation::regclass, mode from pg_locks;
   locktype    |  relation   |         mode
---------------+-------------+-----------------------
 relation      | parent_pkey | AccessShareLock
 relation      | child_pkey  | AccessShareLock
 virtualxid    |             | ExclusiveLock
 relation      | pg_locks    | AccessShareLock
 virtualxid    |             | ExclusiveLock
 relation      | parent      | AccessShareLock
 relation      | parent      | RowShareLock
 relation      | parent      | ShareRowExclusiveLock
 transactionid |             | ExclusiveLock
 relation      | child       | AccessShareLock
 relation      | child       | ShareRowExclusiveLock
(11 rows)

But after dropping it:

pjtest=# commit;
COMMIT
pjtest=# begin;
BEGIN
pjtest=# alter table child drop constraint pfk;
ALTER TABLE

Now my locks are:

pjtest=# select locktype, relation::regclass, mode from pg_locks;
   locktype    | relation |        mode
---------------+----------+---------------------
 virtualxid    |          | ExclusiveLock
 relation      | pg_locks | AccessShareLock
 virtualxid    |          | ExclusiveLock
 relation      | parent   | AccessExclusiveLock
 relation      | child    | AccessExclusiveLock
 transactionid |          | ExclusiveLock
 object        |          | AccessExclusiveLock
 object        |          | AccessExclusiveLock
 object        |          | AccessExclusiveLock
 object        |          | AccessExclusiveLock
 object        |          | AccessExclusiveLock
(11 rows)

I noticed this on 9.5 but confirmed it on 10.5.

I was surprised because the docs give a pretty short list of things that 
take an AccessExclusiveLock 
(https://www.postgresql.org/docs/current/static/explicit-locking.html). 
It mentions ALTER TABLE, and it makes sense when I recall that foreign 
keys are implemented by putting triggers on *both* the referencing and 
referenced tables, but it still caught me off guard. Also, I don't 
understand why the lock is not necessary when adding a foreign key.


Anyway I don't have much of a question, although I wouldn't mind adding 
a note to the docs that dropping an FK takes this lock on both tables, 
if others agree that is a good idea.


Yours,

--
Paul  ~{:-)
p...@illuminatedcomputing.com



Re: increasing HA

2018-09-05 Thread Thomas Poty
> How many nodes do you want to run? Keep in mind that with only 2 nodes,
fencing off the right one is by definition an undecidable problem. That's
one of the features of the current linux HA stack that makes one want to go
BSD/CARP.
>
> And if you want to run fully distributed, you might want to look at redis
in the first place.
We have one master and 2 slaves so 3 nodes

On Wed, Sep 5, 2018 at 16:42, Dmitri Maziuk wrote:

> On Wed, 5 Sep 2018 13:23:41 +0200
> Thomas Poty  wrote:
>
> > At first glance, I may use PAF for automatic failover and an HAProxy
> > proxy; for fencing I am a bit disappointed, I don't know what to do/use.
>
> How many nodes do you want to run? Keep in mind that with only 2 nodes,
> fencing off the right one is by definition an undecidable problem. That's
> one of the features of the current linux HA stack that makes one want to go
> BSD/CARP.
>
> And if you want to run fully distributed, you might want to look at redis
> in the first place.
>
> --
> Dmitri Maziuk 
>
>


Re: PostgreSQL internal scheduler?

2018-09-05 Thread Thomas Kellerer
Thiemo Kellner wrote on 05.09.2018 at 17:06:
> I am designing a framework for historisation implementation (SCD).
> One feature I would like to provide is a table into which the actual
> state of an entity is put and, once this is complete, this history
> table is "updated":
> 
>     --------------       ---------------
> ==> | ENTITY_ACT |  ==>  | ENTITY_HIST |
>     --------------       ---------------
> 
> I plan to use instead-of triggers on the hist table that read the
> actual table and perform all necessary inserts and updates on the
> history table. If I want the termination of a record version
> (actually the record of a specific business key with a specific
> payload) to get propagated up and/or down referential integrities (no
> overlapping validities), I have to make sure that only one of those
> processes is modifying a table. I was thinking of a scheduler queue
> into which the trigger would put a process request and which
> PostgreSQL would work through. Is there a scheduler within
> PostgreSQL? I read the documentation and searched the web but could
> not find a hint. But before going another road or implementing
> something myself, I ask. Maybe this design is no good at all.


There is no built-in scheduler, but there is an extension that supplies that

https://github.com/citusdata/pg_cron





PostgreSQL internal scheduler?

2018-09-05 Thread Thiemo Kellner



Hi all

I am designing a framework for historisation implementation (SCD). One
feature I would like to provide is a table into which the actual state of
an entity is put and, once this is complete, this history table is
"updated":

    --------------       ---------------
==> | ENTITY_ACT |  ==>  | ENTITY_HIST |
    --------------       ---------------

I plan to use instead-of triggers on the hist table that read the
actual table and perform all necessary inserts and updates on the
history table. If I want the termination of a record version (actually
the record of a specific business key with a specific payload) to get
propagated up and/or down referential integrities (no overlapping
validities), I have to make sure that only one of those processes is
modifying a table. I was thinking of a scheduler queue into which the
trigger would put a process request and which PostgreSQL would work
through. Is there a scheduler within PostgreSQL? I read the
documentation and searched the web but could not find a hint. But
before going another road or implementing something myself, I ask.
Maybe this design is no good at all.
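For reference, a FIFO queue like this can live inside PostgreSQL itself: a queue table that the triggers fill and that a worker drains under row locks. `FOR UPDATE SKIP LOCKED` (available since 9.5) lets concurrent workers each claim a different request, in insertion order. A minimal sketch — the table and column names are illustrative assumptions:

```sql
-- Hypothetical queue table: triggers enqueue process requests here.
CREATE TABLE process_queue (
    id         bigserial PRIMARY KEY,
    payload    jsonb NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now()
);

-- Worker loop body: claim and remove the oldest request. Rows locked by
-- another worker are skipped, so each request is handled by exactly one
-- process, in id (insertion) order.
BEGIN;
DELETE FROM process_queue
WHERE id = (
    SELECT id
    FROM process_queue
    ORDER BY id
    LIMIT 1
    FOR UPDATE SKIP LOCKED
)
RETURNING payload;
COMMIT;
```

Draining this table through a single worker (or serializing workers per target table) gives the "only one process modifies the table" guarantee without an external scheduler.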


Kind regards

Thiemo




Re: increasing HA

2018-09-05 Thread Dmitri Maziuk
On Wed, 5 Sep 2018 13:23:41 +0200
Thomas Poty  wrote:

> At first glance, I may use PAF for automatic failover and an HAProxy proxy;
> for fencing I am a bit disappointed, I don't know what to do/use.

How many nodes do you want to run? Keep in mind that with only 2 nodes, fencing 
off the right one is by definition an undecidable problem. That's one of the 
features of the current linux HA stack that makes one want to go BSD/CARP.

And if you want to run fully distributed, you might want to look at redis in 
the first place.

-- 
Dmitri Maziuk 



Re: unaccent(text) fails depending on search_path (WAS: pg_upgrade fails saying function unaccent(text) doesn't exist)

2018-09-05 Thread Gunnlaugur Thor Briem
Yep, a neater workaround for sure!

Cheers,
Gulli

On Wed, Sep 5, 2018 at 2:00 PM Adrian Klaver 
wrote:

> On 09/05/2018 01:49 AM, Gunnlaugur Thor Briem wrote:
> > OK, I found the cause of the unaccent dictionary problem, and a
> workaround.
> >
> > It's not the vacuumdb version, not the unaccent version, and it's not
> > even a pg_upgrade problem: I get this error also with PG 9.4.18 running
> > on the old cluster, with both the 10.5 vacuumdb and the 9.4.18 vacuumdb,
> > and I get the same error in both.
> >
> > And it's not strictly a vacuumdb problem, though vacuumdb triggers it.
> >
> > Here's a very minimal test case, unrelated to my DB, that you ought to
> > be able to reproduce:
> >
> > SET search_path = "$user"; SELECT public.unaccent('fóö');
> > SET
> > ERROR:  text search dictionary "unaccent" does not exist
> >
> > and here's a workaround:
> >
> > SET search_path = "$user"; SELECT public.unaccent(tsdict.oid, 'fóö')
> > FROM pg_catalog.pg_ts_dict tsdict WHERE dictname='unaccent';
> > SET
> >   unaccent
> > --
> >   foo
> > (1 row)
> >
> > The workaround avoids the OID lookup of the dictionary ... that lookup
> > (in the single-argument unaccent function) is done by unqualified name:
> >
> >
> https://github.com/postgres/postgres/blob/fb8697b31aaeebe6170c572739867dcaa01053c6/contrib/unaccent/unaccent.c#L377
> >
> >  dictOid =
> > get_ts_dict_oid(stringToQualifiedNameList("unaccent"), false);
> >
> > and that fails if the search path doesn't include public. >
> > So it is indeed triggered by the security changes that Bruce mentioned;
> > those were backported into 9.4.17:
> > https://www.postgresql.org/docs/9.4/static/release-9-4-17.html ... and
> > so got pulled in by my Macports upgrades. So nothing to do with
> pg_upgrade.
> >
> > So the workaround for my vacuumdb/function-index problem is to give
> > unaccent the OID of the text search dictionary, so that the search path
> > isn't in play:
> >
> > CREATE OR REPLACE FUNCTION public.semantic_normalize(title text)
> > RETURNS text
> >   LANGUAGE sql
> >   IMMUTABLE STRICT
> > AS $function$
> >SELECT lower(public.unaccent(16603, btrim(regexp_replace($1, '\s+', '
> > ', 'g'), ' "')))
> > $function$;
> >
> > and that makes vacuumdb -z work in both 9.4.18 and 10.5, and makes
> > ./analyze_new_cluster.sh complete without problems.
>
>
> Nice investigation. Working off the above, I offer a suggestion:
>
> SET search_path = "$user"; SELECT public.unaccent('unaccent', 'fóö');
> SET
> ERROR:  text search dictionary "unaccent" does not exist
> LINE 1: SELECT public.unaccent('unaccent', 'fóö');
>
>
> SET search_path = "$user"; SELECT public.unaccent('public.unaccent',
> 'fóö');
> SET
>   unaccent
> --
>   foo
>
> That eliminates hard wiring the OID.
>
> >
> > The proper fix is, I suppose, to make the single-argument unaccent
> > function explicitly look up the dictionary in the same schema as the
> > function itself is in.
> >
> > Cheers,
> > Gulli
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: increasing HA

2018-09-05 Thread Thomas Poty
> OK, so either patch PAF yourself (not recommended) or choose something
> else. Note that two other ways are working with Pacemaker:
>   * the pgsql resource agent (see FAQ of PAF)
>   * a shared disk architecture (no pgsql replication)
Probably I will be interested in the "patroni-etcd-haproxy" solution you
suggested.

> I suppose you could find fencing agents for:

> * the blade itself, but it would fence all the container running on it
> * the access to the SAN from the failing container

> I don't know if fencing agent exists for a container itself. Note that
I'm not
> familiar with the container world, I lack a lot of knowledge on this
> technology.

I am not familiar with this either. I heard of it for the first time a few
days ago while reading about how to improve HA ... :-)

Many thanks for your opinions/advices

On Wed, Sep 5, 2018 at 15:44, Jehan-Guillaume (ioguix) de Rorthais
<iog...@free.fr> wrote:

> On Wed, 5 Sep 2018 15:06:21 +0200
> Thomas Poty  wrote:
>
> > > In fact, PAF does not support slots. So it is not a good candidate if
> > > slots are a requirement.
> > Indeed, slots are a requirement we prefer to keep
>
> OK, so either patch PAF yourself (not recommended) or choose something
> else. Note that two other ways are working with Pacemaker:
>   * the pgsql resource agent (see FAQ of PAF)
>   * a shared disk architecture (no pgsql replication)
>
> > > > a proxy HAproxy and
> > > > for fencincg, i am a bit disappointed, i don't know what to do/use
> > > Depend on your hardware or your virtualization technology.
> > Our production cluster (master and slave) runs on LXC container. Each LXC
> > container runs on a HPE Blade Server. The storage is on a SAN 3PAR array.
> > Any advice ?
>
> I suppose you could find fencing agents for:
>
> * the blade itself, but it would fence all the container running on it
> * the access to the SAN from the failing container
>
> I don't know if fencing agent exists for a container itself. Note that I'm
> not
> familiar with the container world, I lack a lot of knowledge on this
> technology.
>
> ++
>


Re: unaccent(text) fails depending on search_path (WAS: pg_upgrade fails saying function unaccent(text) doesn't exist)

2018-09-05 Thread Adrian Klaver

On 09/05/2018 01:49 AM, Gunnlaugur Thor Briem wrote:

OK, I found the cause of the unaccent dictionary problem, and a workaround.

It's not the vacuumdb version, not the unaccent version, and it's not 
even a pg_upgrade problem: I get this error also with PG 9.4.18 running 
on the old cluster, with both the 10.5 vacuumdb and the 9.4.18 vacuumdb, 
and I get the same error in both.


And it's not strictly a vacuumdb problem, though vacuumdb triggers it.

Here's a very minimal test case, unrelated to my DB, that you ought to 
be able to reproduce:


SET search_path = "$user"; SELECT public.unaccent('fóö');
SET
ERROR:  text search dictionary "unaccent" does not exist

and here's a workaround:

SET search_path = "$user"; SELECT public.unaccent(tsdict.oid, 'fóö')
FROM pg_catalog.pg_ts_dict tsdict WHERE dictname='unaccent';
SET
  unaccent
--
  foo
(1 row)

The workaround avoids the OID lookup of the dictionary ... that lookup 
(in the single-argument unaccent function) is done by unqualified name:


https://github.com/postgres/postgres/blob/fb8697b31aaeebe6170c572739867dcaa01053c6/contrib/unaccent/unaccent.c#L377

         dictOid = get_ts_dict_oid(stringToQualifiedNameList("unaccent"), false);


and that fails if the search path doesn't include public.
So it is indeed triggered by the security changes that Bruce mentioned; 
those were backported into 9.4.17: 
https://www.postgresql.org/docs/9.4/static/release-9-4-17.html ... and 
so got pulled in by my Macports upgrades. So nothing to do with pg_upgrade.


So the workaround for my vacuumdb/function-index problem is to give 
unaccent the OID of the text search dictionary, so that the search path 
isn't in play:


CREATE OR REPLACE FUNCTION public.semantic_normalize(title text)
RETURNS text
  LANGUAGE sql
  IMMUTABLE STRICT
AS $function$
   SELECT lower(public.unaccent(16603, btrim(regexp_replace($1, '\s+', ' ', 'g'), ' "')))

$function$;

and that makes vacuumdb -z work in both 9.4.18 and 10.5, and makes 
./analyze_new_cluster.sh complete without problems.



Nice investigation. Working off the above, I offer a suggestion:

SET search_path = "$user"; SELECT public.unaccent('unaccent', 'fóö');
SET
ERROR:  text search dictionary "unaccent" does not exist
LINE 1: SELECT public.unaccent('unaccent', 'fóö');


SET search_path = "$user"; SELECT public.unaccent('public.unaccent', 'fóö');
SET
 unaccent
--
 foo

That eliminates hard wiring the OID.



The proper fix is, I suppose, to make the single-argument unaccent 
function explicitly look up the dictionary in the same schema as the 
function itself is in.


Cheers,
Gulli




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: increasing HA

2018-09-05 Thread Jehan-Guillaume (ioguix) de Rorthais
On Wed, 5 Sep 2018 15:06:21 +0200
Thomas Poty  wrote:

> > In fact, PAF does not support slots. So it is not a good candidate if  
> > slot are a requirement.  
> Effectively slots are a requirement we prefer to keep

OK, so either patch PAF yourself (not recommended) or choose something
else. Note that two other approaches work with Pacemaker:
  * the pgsql resource agent (see FAQ of PAF)
  * a shared disk architecture (no pgsql replication)

> > > a proxy HAproxy and
> > > for fencincg, i am a bit disappointed, i don't know what to do/use  
> > Depend on your hardware or your virtualization technology.  
> Our production cluster (master and slave) runs on LXC container. Each LXC
> container runs on a HPE Blade Server. The storage is on a SAN 3PAR array.
> Any advice ?

I suppose you could find fencing agents for:

* the blade itself, but it would fence all the containers running on it
* the access to the SAN from the failing container

I don't know whether a fencing agent exists for a container itself. Note that
I'm not familiar with the container world; I lack a lot of knowledge of this
technology.

++



Re: scram-sha-256 authentication broken in FIPS mode

2018-09-05 Thread Alessandro Gherardi
Hi Michael,

I'm actually running Postgres on Windows.

I added code to fe-secure-openssl.c and be-secure-openssl.c that reads the
Windows "standard" FIPS registry entry and, if FIPS is enabled, calls
FIPS_mode_set(1). This mimics the behavior of the .NET framework.

Below is the code I added to fe-secure-openssl.c; the code in
be-secure-openssl.c is similar.

Thoughts? I can try to fix the scram-sha-256 issue by using EVP and send you a
merge request for the patch and the code below if you think my approach is
correct.

Thanks,
Alessandro
int
pgtls_init(PGconn *conn)
{
	...
	if (!ssl_lib_initialized) {
		if (pq_init_ssl_lib) {
#ifdef WIN32
			HKEY	rootKey;
			DWORD	fipsEnabled = 0;
			DWORD	fipsEnabledSize = sizeof(DWORD);

			if (RegOpenKeyEx(HKEY_LOCAL_MACHINE,
					"System\\CurrentControlSet\\Control\\Lsa\\FipsAlgorithmPolicy",
					0, KEY_READ, &rootKey) != ERROR_SUCCESS) {
				printfPQExpBuffer(&conn->errorMessage,
					libpq_gettext("could not open FIPS registry key"));
				return -1;
			}
			if (RegQueryValueEx(rootKey, "Enabled", 0, 0, (LPBYTE) &fipsEnabled,
					&fipsEnabledSize) != ERROR_SUCCESS) {
				RegCloseKey(rootKey);
				printfPQExpBuffer(&conn->errorMessage,
					libpq_gettext("could not read FIPS registry entry"));
				return -1;
			}
			RegCloseKey(rootKey);

			if (fipsEnabled == 1 && FIPS_mode() == 0) {
				if (FIPS_mode_set(1) != 1) {
					printfPQExpBuffer(&conn->errorMessage,
						libpq_gettext("could not enable FIPS mode"));
					return -1;
				}
			}
#endif
#ifdef HAVE_OPENSSL_INIT_SSL
			OPENSSL_init_ssl(OPENSSL_INIT_LOAD_CONFIG, NULL);
#else
			OPENSSL_config(NULL);
			SSL_library_init();
			SSL_load_error_strings();
#endif
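If the fix ends up going through the OpenSSL EVP interface as suggested, the refactored digest path must keep producing standard SHA-256 values; SCRAM-SHA-256 (RFC 7677) then derives its SaltedPassword from them with PBKDF2-HMAC-SHA-256. A quick, purely illustrative cross-check of those reference values (plain Python hashlib here, not the PostgreSQL code):

```python
import hashlib

# SHA-256 of the empty string -- a well-known reference value that any
# refactored digest path (low-level API or EVP) must keep producing.
print(hashlib.sha256(b"").hexdigest())
# -> e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855

# SCRAM-SHA-256 (RFC 7677) derives SaltedPassword with PBKDF2-HMAC-SHA-256,
# so a failing SHA-256 primitive breaks every scram login attempt.
salted = hashlib.pbkdf2_hmac("sha256", b"secret", b"somesalt", 4096)
print(len(salted))  # -> 32 (one SHA-256 output block)
```

Whatever digest implementation ends up in sha2_openssl.c, these values are fixed by the standards, so they make a cheap regression check.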


On Tuesday, September 4, 2018, 10:27:22 PM MDT, Michael Paquier wrote:
 On Wed, Sep 05, 2018 at 03:29:31AM +, Alessandro Gherardi wrote:
> It looks like scram-sha-256 doesn't work when postgres is linked
> against FIPS-enabled OpenSSL and FIPS mode is turned on.
> 
> Specifically, all login attempts fail with an OpenSSL error saying
> something along the lines of "Low level API call to digest SHA256
> forbidden in fips mode".

The error comes from libc, right?  Postgres can of course be configured
to work with FIPS without patching it, it just needs to be enabled
system-wide, which is what RedHat does, and what you are doing I guess?

> I think this issue could be solved by refactoring the code in
> sha2_openssl.c to use the OpenSSL EVP interface
> (see https://wiki.openssl.org/index.php/EVP_Message_Digests ). 
> Any thoughts? Is this a known issue?

This report is the first of this kind since Postgres 10, which is where
the SHA2 interface for OpenSSL was introduced.  So likely we'd need
to look into that more deeply...  This has the strong smell of a bug.  If
your system is new enough, you should have sha256() & co as system
functions, so you would see the failure as well?  The regression tests
would have likely complained.
--
Michael  

Re: increasing HA

2018-09-05 Thread Thomas Poty
> In fact, PAF does not support slots. So it is not a good candidate if
slot are
> a requirement.
Indeed, slots are a requirement we prefer to keep.

> > a proxy HAproxy and
> > for fencincg, i am a bit disappointed, i don't know what to do/use

> Depend on your hardware or your virtualization technology.
Our production cluster (master and slave) runs on LXC container. Each LXC
container runs on a HPE Blade Server. The storage is on a SAN 3PAR array.
Any advice ?

> > How about you, do you have any preference about tools/solutions to use ?

> If you want a simple and well community adopted solution, pick Patroni.
It deals
> with slots, rely on etcd or zookeeper, fit nicely with haproxy, deal with
> watchdog to keep itself under monitor. However, it lacks of fencing and
its
> callback are asynchronous. You would have to take special care of your
> network and master connectivity upon primary failure.

I am looking for some information about this solution in their docs/IRC...
Your opinion about it matters to me, knowing that you maintain PAF :-)

> If you want something able to keep multiple services avaliable
(PostgreSQL, vIP,
> storage, pgBouncer, apache, whatever...), deal with dependencies,
locations,
> constraints, rules etc, pick Pacemaker (and a larger coffee machine). I
would
> (obviously) recommend PAF as resource agent for PgSQL, but you would have
to
> design your cluster without slots :/

many thanks

Le mer. 5 sept. 2018 à 14:15, Jehan-Guillaume (ioguix) de Rorthais <
iog...@free.fr> a écrit :

> On Wed, 5 Sep 2018 13:23:41 +0200
> Thomas Poty  wrote:
>
> > Hi Jehan-Guillaume,
>
> Hello,
>
> > Thanks for your opinion.
> >
> > At first glance, i may use for automatic failover PAF,
>
> In fact, PAF does not support slots. So it is not a good candidate if slot
> are
> a requirement.
>
> > a proxy HAproxy and
> > for fencincg, i am a bit disappointed, i don't know what to do/use
>
> Depend on your hardware or your virtualization technology.
>
> > How about you, do you have any preference about tools/solutions to use ?
>
> If you want a simple and well community adopted solution, pick Patroni. It
> deals
> with slots, rely on etcd or zookeeper, fit nicely with haproxy, deal with
> watchdog to keep itself under monitor. However, it lacks of fencing and its
> callback are asynchronous. You would have to take special care of your
> network and master connectivity upon primary failure.
>
> If you want something able to keep multiple services avaliable
> (PostgreSQL, vIP,
> storage, pgBouncer, apache, whatever...), deal with dependencies,
> locations,
> constraints, rules etc, pick Pacemaker (and a larger coffee machine). I
> would
> (obviously) recommend PAF as resource agent for PgSQL, but you would have
> to
> design your cluster without slots :/
>
> ++
>


Re: increasing HA

2018-09-05 Thread Jehan-Guillaume (ioguix) de Rorthais
On Wed, 5 Sep 2018 13:23:41 +0200
Thomas Poty  wrote:

> Hi Jehan-Guillaume,

Hello,

> Thanks for your opinion.
> 
> At first glance, i may use for automatic failover PAF, 

In fact, PAF does not support slots. So it is not a good candidate if slots
are a requirement.

> a proxy HAproxy and
> for fencincg, i am a bit disappointed, i don't know what to do/use

Depend on your hardware or your virtualization technology.

> How about you, do you have any preference about tools/solutions to use ?

If you want a simple and widely adopted community solution, pick Patroni. It
deals with slots, relies on etcd or ZooKeeper, fits nicely with HAProxy, and
uses a watchdog to keep itself monitored. However, it lacks fencing and its
callbacks are asynchronous. You would have to take special care of your
network and master connectivity upon primary failure.

If you want something able to keep multiple services available (PostgreSQL, vIP,
storage, pgBouncer, Apache, whatever...) and deal with dependencies, locations,
constraints, rules, etc., pick Pacemaker (and a larger coffee machine). I would
(obviously) recommend PAF as the resource agent for PgSQL, but you would have to
design your cluster without slots :/

++



Re: increasing HA

2018-09-05 Thread Thomas Poty
Hi Jehan-Guillaume,

Thanks for your opinion.

At first glance, I may use PAF for automatic failover and HAProxy as a proxy;
for fencing, I am a bit at a loss, I don't know what to do/use.

How about you, do you have any preference about tools/solutions to use ?

Now, I am aware that I will have to check and document the limitations/risks...

Le mer. 5 sept. 2018 à 11:38, Jehan-Guillaume (ioguix) de Rorthais <
iog...@free.fr> a écrit :

> Hi all,
>
> On Tue, 4 Sep 2018 15:09:51 +
> ROS Didier  wrote:
>
> > Hi
> >I have made a lot of PostgreSQL High Availability tests
> (more
> > than 20 by solution) and the two following products respond well to the
> need :
> >
> > (1)Repmgr (2ndQuadrant)
> >
> > (2)Pglookout (aiven)
>
> Both solutions use a simple and naive implementation, which makes them easy
> to use and administer. However, it leaves the admin responsible for dealing
> with fencing, which is a mandatory piece in almost every kind of DB cluster
> if you want to cover most failure cases and avoid split-brain.
>
> So yes, they are simple, because complexity is left to the admin's skills. It
> kind of requires you to rewrite and test yourself part of the fencing stack of
> Pacemaker. Good luck.
>
> And I'm not speaking about the watchdog here; I just fail to imagine how the
> admin could implement it himself.
>
> Just consider how messy it is to deal with "logical fencing" when
> considering
> doing it with pgbouncer.
>
> In short: if you are ready to spend many dev/admin hours to build a safe HA
> cluster for your DB and set strict requirements, those are fine.
>
> > About PAF, the product is hard to install and set up. It needs a Linux
> > cluster and a team of system engineers to use it.
>
> Indeed, Pacemaker has a steep learning curve and its documentation still
> requires some improvement. But HA is not an easy subject. Just look at RHEL
> or SUSE requirements before their teams accept to support your DB cluster
> (spoiler: fencing).
>
> Whatever solution you pick, you must **know and document** its limitations
> and
> risks.
>


unaccent(text) fails depending on search_path (WAS: pg_upgrade fails saying function unaccent(text) doesn't exist)

2018-09-05 Thread Gunnlaugur Thor Briem
OK, I found the cause of the unaccent dictionary problem, and a workaround.

It's not the vacuumdb version, not the unaccent version, and it's not even
a pg_upgrade problem: I get this error also with PG 9.4.18 running on the
old cluster, with both the 10.5 vacuumdb and the 9.4.18 vacuumdb, and I get
the same error in both.

And it's not strictly a vacuumdb problem, though vacuumdb triggers it.

Here's a very minimal test case, unrelated to my DB, that you ought to be
able to reproduce:

SET search_path = "$user"; SELECT public.unaccent('fóö');
SET
ERROR:  text search dictionary "unaccent" does not exist

and here's a workaround:

SET search_path = "$user"; SELECT public.unaccent(tsdict.oid, 'fóö')
FROM pg_catalog.pg_ts_dict tsdict WHERE dictname='unaccent';
SET
 unaccent
--
 foo
(1 row)

The workaround avoids the OID lookup of the dictionary ... that lookup (in
the single-argument unaccent function) is done by unqualified name:

https://github.com/postgres/postgres/blob/fb8697b31aaeebe6170c572739867dcaa01053c6/contrib/unaccent/unaccent.c#L377

dictOid = get_ts_dict_oid(stringToQualifiedNameList("unaccent"), false);

and that fails if the search path doesn't include public.
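That lookup failure is just ordinary search_path resolution at work: an unqualified name is tried schema by schema along the path, with search_path = "$user" the public schema is never consulted, and a schema-qualified name bypasses the walk entirely. A toy sketch of that resolution order (illustrative Python, not PostgreSQL's actual catalog code):

```python
def resolve(name, search_path, catalog):
    """Resolve an object name the way PostgreSQL resolves e.g. a text
    search dictionary: qualified names skip the search_path walk."""
    if "." in name:                          # schema-qualified lookup
        schema, obj = name.split(".", 1)
        if obj in catalog.get(schema, set()):
            return f"{schema}.{obj}"
        raise LookupError(f'text search dictionary "{name}" does not exist')
    for schema in search_path:               # unqualified: walk the path
        if name in catalog.get(schema, set()):
            return f"{schema}.{name}"
    raise LookupError(f'text search dictionary "{name}" does not exist')

# The unaccent dictionary lives in the public schema.
catalog = {"public": {"unaccent"}}

print(resolve("unaccent", ["public"], catalog))         # -> public.unaccent
print(resolve("public.unaccent", ["$user"], catalog))   # -> public.unaccent
try:
    resolve("unaccent", ["$user"], catalog)              # public not on path
except LookupError as e:
    print(e)  # -> text search dictionary "unaccent" does not exist
```

This is why the single-argument unaccent() breaks under the restricted search_path while the explicitly qualified forms keep working.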

So it is indeed triggered by the security changes that Bruce mentioned;
those were backported into 9.4.17:
https://www.postgresql.org/docs/9.4/static/release-9-4-17.html ... and so
got pulled in by my Macports upgrades. So nothing to do with pg_upgrade.

So the workaround for my vacuumdb/function-index problem is to give
unaccent the OID of the text search dictionary, so that the search path
isn't in play:

CREATE OR REPLACE FUNCTION public.semantic_normalize(title text)
RETURNS text
 LANGUAGE sql
 IMMUTABLE STRICT
AS $function$
  SELECT lower(public.unaccent(16603, btrim(regexp_replace($1, '\s+', ' ',
'g'), ' "')))
$function$;

and that makes vacuumdb -z work in both 9.4.18 and 10.5, and makes
./analyze_new_cluster.sh complete without problems.

The proper fix is, I suppose, to make the single-argument unaccent function
explicitly look up the dictionary in the same schema as the function itself
is in.

Cheers,
Gulli