Re: PG SQL and LIKE clause

2019-09-13 Thread Ron

On 9/13/19 12:28 AM, Matthias Apitz wrote:

Hello,

We're porting a huge Library Management System, written using all kinds
of languages one can think of (C, C++, ESQL/C, Perl, Java, ...), on Linux
from the DBMS Sybase to PG, millions of lines of code, which also works
with the DBMS Oracle and in the past with INFORMIX-SE and -ONLINE.

We have learned that with CHAR columns containing trailing blanks, a

SELECT ... FROM ... WHERE name LIKE 'Ali'

does not match when 'name' contains 'Ali '.


Did you forget the "%"?  Because the SQL standard which PostgreSQL follows is:

SELECT ... FROM ... WHERE name LIKE 'Ali%'


I glanced through our code with grep pipelines and found a few hundred
places which would be affected by this problem. I'm not interested in a
religious discussion about whether this behaviour of PG is more correct or
better than in Sybase. It's just different from Sybase.


SQL Server derives from Sybase, and it would also fail on this statement.


Any hints to address this problem? Or is there any compile time option
for the PG server to address this?

Thanks

matthias


--
Angular momentum makes the world go 'round.




Re: PG SQL and LIKE clause

2019-09-13 Thread Thomas Kellerer
Matthias Apitz schrieb am 13.09.2019 um 07:28:
> We're porting a huge Library Management System, written using all kinds
> of languages one can think of (C, C++, ESQL/C, Perl, Java, ...), on Linux
> from the DBMS Sybase to PG, millions of lines of code, which also works
> with the DBMS Oracle and in the past with INFORMIX-SE and -ONLINE.
> 
> We have learned that with CHAR columns containing trailing blanks, a

In a nutshell: do not use the CHAR data type (in either Oracle or Postgres).

Unless your application actively checks the data type name of columns in a 
table, you can simply change the type of those columns to varchar (or text) and 
get rid of the annoying (but required) behaviour of the CHAR type. 
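
A minimal sketch of such a type change (table and column names invented for
illustration; the cast from char strips the trailing blanks from existing
values, but verify on a copy first):

ALTER TABLE borrower ALTER COLUMN name TYPE varchar;
-- or, if an upper length limit is wanted:
ALTER TABLE borrower ALTER COLUMN name TYPE varchar(50);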

To be honest, I am surprised this didn't show up in Oracle, as I think the CHAR 
behaviour there is the same as in Postgres.

Thomas





Re: pgbouncer with ldap

2019-09-13 Thread Ayub M
Stumbled at the first step - PAM authentication via pgbouncer. After
compiling pgbouncer with the PAM plug-in, I am unable to log in to the db
- it throws a PAM error message. Please help.

User created with the same password as linux user --
localhost:~$ psql -h dbhost -p 3306 -U admin -W db1
db1=> create user testuser password 'hello123';
CREATE ROLE

[ec2-user@ip-1.1.1.1 pam.d]$ psql -h localhost -p 5432 testdb -U testuser
Password for user testuser:
psql: ERROR:  auth failed

Log entries - pgbouncer.log
2019-09-13 06:51:47.180 UTC [5752] LOG C-0x1243020: testdb/testuser@[::1]:52408
login attempt: db=testdb user=testuser tls=no
2019-09-13 06:51:47.180 UTC [5752] NOISE safe_send(12, 9) = 9
2019-09-13 06:51:47.180 UTC [5752] NOISE resync(12): done=86, parse=86,
recv=86
2019-09-13 06:51:47.180 UTC [5752] NOISE resync(12): done=0, parse=0, recv=0
2019-09-13 06:51:47.180 UTC [5752] NOISE safe_recv(12, 4096) = 14
2019-09-13 06:51:47.180 UTC [5752] NOISE C-0x1243020:
testdb/testuser@[::1]:52408
read pkt='p' len=14
2019-09-13 06:51:47.180 UTC [5752] DEBUG C-0x1243020:
testdb/testuser@[::1]:52408
pam_auth_begin(): pam_first_taken_slot=1, pam_first_free_slot=1
2019-09-13 06:51:47.180 UTC [5752] DEBUG pam_auth_worker(): processing slot
1
2019-09-13 06:51:47.180 UTC [5752] WARNING pam_authenticate() failed:
Authentication failure
2019-09-13 06:51:47.181 UTC [5752] DEBUG pam_auth_worker(): authorization
completed, status=3
2019-09-13 06:51:47.386 UTC [5752] LOG C-0x1243020: testdb/testuser@[::1]:52408
closing because: auth failed (age=0s)
2019-09-13 06:51:47.386 UTC [5752] WARNING C-0x1243020:
testdb/testuser@[::1]:52408
pooler error: auth failed

Able to log in as testuser
[ec2-user@ip-1.1.1.1 pam.d]$ su - testuser
Password:
Last login: Fri Sep 13 06:21:12 UTC 2019 on pts/1
[testuser@ip-1.1.1.1 ~]$ id
uid=1001(testuser) gid=1001(testuser) groups=1001(testuser)
context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

The user was created as follows
[root@ip-1.1.1.1 ~]# adduser -p hello123 testuser
[root@ip-1.1.1.1 ~]# id testuser
uid=1001(testuser) gid=1001(testuser) groups=1001(testuser)

Here is the pgbouncer.ini config
[ec2-user@ip-1.1.1.1 etc]$ less pgbouncer.ini | grep -v '^$' | grep -v '^;'
[databases]
testdb = host=dbhost port=3306 dbname=db1
[users]
[pgbouncer]
logfile = /var/log/pgbouncer/pgbouncer.log
pidfile = /var/run/pgbouncer/pgbouncer.pid
listen_addr = *
listen_port = 5432
auth_type = pam

Am I missing something? Any permissions?

On Thu, Sep 12, 2019 at 4:54 AM Ayub M  wrote:

> Okay, thanks for the response. Unfortunately Aurora does not expose these
> files, or I should say there is no concept of these files in the AWS managed
> Aurora DB service. Anyway, I will give it a try and let you know.
>
> On Thu, Sep 12, 2019 at 1:52 AM Achilleas Mantzios <
> ach...@matrix.gatewaynet.com> wrote:
>
>> On 11/9/19 2:47 μ.μ., Ayub M wrote:
>>
>> Achilleas, for this setup to work are changes to postgresql.conf and
>> pg_hba.conf needed? I am trying to implement this for AWS rds Aurora where
>> these files are not accessible.
>>
>> Those files are needed in any case if you work with postgresql.
>> Unfortunately I have no experience with Aurora. We have been building from
>> source for ages.
>>
>> On Mon, Sep 9, 2019, 6:46 AM Achilleas Mantzios <
>> ach...@matrix.gatewaynet.com> wrote:
>>
>>> On 9/9/19 12:41 μ.μ., Laurenz Albe wrote:
>>> > Christoph Moench-Tegeder wrote:
>>> >>> It has hba and via hba file one can specify ldap connections
>>> >>>
>>> >>> https://www.postgresql.org/docs/9.3/auth-pg-hba-conf.html
>>> >> https://pgbouncer.github.io/config.html#hba-file-format
>>> >> "Auth-method field: Only methods supported by PgBouncer’s auth_type
>>> >> are supported", and "ldap" is not supported.
>>> >> When there's no ldap support in pgbouncer, there's no ldap support
>>> >> in pgbouncer.
>>> > To throw in something less tautological:
>>> >
>>> > PgBouncer supports PAM authentication, so if you are on UNIX,
>>> > you could use PAM's LDAP module to do what you want.
>>> Right, I had written a blog about it :
>>>
>>> https://severalnines.com/database-blog/one-security-system-application-connection-pooling-and-postgresql-case-ldap
>>>
>>> However, I always wished (since my first endeavors with pgbouncer) it
>>> was less complicated.
>>> >
>>> > Yours,
>>> > Laurenz Albe
>>>
>>>
>>> --
>>> Achilleas Mantzios
>>> IT DEV Lead
>>> IT DEPT
>>> Dynacom Tankers Mgmt
>>>
>>>
>>>
>>>
>>
>> --
>> Achilleas Mantzios
>> IT DEV Lead
>> IT DEPT
>> Dynacom Tankers Mgmt
>>
>>
>
> --
> Regards,
> Ayub
>


-- 
Regards,
Ayub


Re: Referncing a calculated column in a select?

2019-09-13 Thread Peter J. Holzer
On 2019-09-13 11:49:28 +0900, Kyotaro Horiguchi wrote:
> At Thu, 12 Sep 2019 23:16:01 +0200, "Peter J. Holzer"  
> wrote in <20190912211601.ga3...@hjp.at>
> > On 2019-09-12 15:35:56 -0500, Ron wrote:
> > > On 9/12/19 2:23 PM, stan wrote:
> > > > I am creating some views that have columns with fairly complex
> > > > calculations in them. Then I want to do further calculations using the
> > > > result of this calculation. Right now, I am just duplicating the first
> > > > calculation in the select for the 2nd calculated column. There must be
> > > > a better way to do this, right?
[...]
> > I think he wants to refer to other columns in the view. 
> ..
> > What you can do is nest views:
> 
> Doesn't a subquery work?

Yes, a subquery would work, too.

> Or CTE (WITH clause) might look better.

CTEs (which I quite like in general) are optimisation barriers, so using
them in a view may lead to inefficient plans (depending on what you do
with the view). So, yes, it would work, but you have to check the plans
of your queries (and be prepared to rewrite your view if they aren't
good enough), which is why I didn't suggest it.
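
For reference, a minimal sketch of the nested-view/subquery approach (table
and column names invented):

CREATE VIEW v AS
SELECT base,
       doubled,
       doubled + 1 AS plus_one  -- reuses the computed column by name
FROM (SELECT x AS base, x * 2 AS doubled
      FROM t) AS s;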

hp

-- 
   _  | Peter J. Holzer| we build much bigger, better disasters now
|_|_) || because we have much more sophisticated
| |   | h...@hjp.at | management tools.
__/   | http://www.hjp.at/ | -- Ross Anderson 




why postgres process takes so much memory during day time

2019-09-13 Thread Marcin Giedz
Hi all, is there any way to figure out why a particular postgres process takes 
more and more memory during the day? This process performs order-data queries 
coming from an order management system (an exchange system). Normally it's about 
0.5 million messages per day, but after approx. 4 hours the postgres process occupies: 

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 
11920 pgsql 20 0 17,6g 9,7g 566572 S 2,6 10,4 2:40.47 postgres 

Perhaps the dev guys (our team) should change something inside our system, or is 
it a postgresql.conf matter? The Postgres engine is 9.6.7. Any hint on this, please? 


Thx 
Marcin 




How to handle things that change over time?

2019-09-13 Thread stan


I am working on a system that will support internal business work for a
company. Periodically things will change in their "world". For example, they
periodically recalculate the "mark up" of various components of their
business, such as labor and/or purchased materials. Presently I am keeping
these constants in a table, and I have the constant and an effective start
and end date for these factors. In this scenario, the number of rows will
just grow over time, and by using the proper conditions in our select
statement, I can apply the correct factors as they change over time.

In another scenario, a column of the employee table is the pay rate.
Obviously this will change over time, also. It does not appear to me that
this lends itself to this same treatment, as most of the data associated
with a given employee is fairly static, and if I add an entire row just
because the pay rate changed, this looks overly complex.

This cannot be the first time this issue has been addressed. What have
others done in this scenario?




Re: pgbouncer with ldap

2019-09-13 Thread Achilleas Mantzios

On 13/9/19 10:19 π.μ., Ayub M wrote:

Stumbled at the first step - PAM authentication via pgbouncer. After compiling 
pgbouncer with the PAM plug-in, I am unable to log in to the db - it throws a PAM 
error message. Please help.

User created with the same password as linux user --
localhost:~$ psql -h dbhost -p 3306 -U admin -W db1
db1=> create user testuser password 'hello123';
CREATE ROLE

[ec2-user@ip-1.1.1.1 pam.d]$ psql -h localhost -p 5432 testdb -U testuser
Password for user testuser:
psql: ERROR:  auth failed


OK, pgbouncer should be able to read the /etc/pam* files.
Did you miss the

# chown root:staff ~pgbouncer/pgbouncer-1.9.0/pgbouncer
# chmod +s ~pgbouncer/pgbouncer-1.9.0/pgbouncer

part?
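
For completeness: PAM also needs a service file for pgbouncer. A minimal
sketch, assuming pgbouncer's PAM code uses the service name "pgbouncer" and
that plain UNIX accounts should be accepted:

# /etc/pam.d/pgbouncer
auth     required   pam_unix.so
account  required   pam_unix.so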



Log entries - pgbouncer.log
2019-09-13 06:51:47.180 UTC [5752] LOG C-0x1243020: testdb/testuser@[::1]:52408 
login attempt: db=testdb user=testuser tls=no
2019-09-13 06:51:47.180 UTC [5752] NOISE safe_send(12, 9) = 9
2019-09-13 06:51:47.180 UTC [5752] NOISE resync(12): done=86, parse=86, recv=86
2019-09-13 06:51:47.180 UTC [5752] NOISE resync(12): done=0, parse=0, recv=0
2019-09-13 06:51:47.180 UTC [5752] NOISE safe_recv(12, 4096) = 14
2019-09-13 06:51:47.180 UTC [5752] NOISE C-0x1243020: 
testdb/testuser@[::1]:52408 read pkt='p' len=14
2019-09-13 06:51:47.180 UTC [5752] DEBUG C-0x1243020: 
testdb/testuser@[::1]:52408 pam_auth_begin(): pam_first_taken_slot=1, 
pam_first_free_slot=1
2019-09-13 06:51:47.180 UTC [5752] DEBUG pam_auth_worker(): processing slot 1
2019-09-13 06:51:47.180 UTC [5752] WARNING pam_authenticate() failed: 
Authentication failure
2019-09-13 06:51:47.181 UTC [5752] DEBUG pam_auth_worker(): authorization 
completed, status=3
2019-09-13 06:51:47.386 UTC [5752] LOG C-0x1243020: testdb/testuser@[::1]:52408 
closing because: auth failed (age=0s)
2019-09-13 06:51:47.386 UTC [5752] WARNING C-0x1243020: 
testdb/testuser@[::1]:52408 pooler error: auth failed

Able to log in as testuser
[ec2-user@ip-1.1.1.1 pam.d]$ su - testuser
Password:
Last login: Fri Sep 13 06:21:12 UTC 2019 on pts/1
[testuser@ip-1.1.1.1 ~]$ id
uid=1001(testuser) gid=1001(testuser) groups=1001(testuser) 
context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

The user was created as follows
[root@ip-1.1.1.1 ~]# adduser -p hello123 testuser
[root@ip-1.1.1.1 ~]# id testuser
uid=1001(testuser) gid=1001(testuser) groups=1001(testuser)

Here is the pgbouncer.ini config
[ec2-user@ip-1.1.1.1 etc]$ less pgbouncer.ini | grep -v '^$' | grep -v '^;'
[databases]
testdb = host=dbhost port=3306 dbname=db1
[users]
[pgbouncer]
logfile = /var/log/pgbouncer/pgbouncer.log
pidfile = /var/run/pgbouncer/pgbouncer.pid
listen_addr = *
listen_port = 5432
auth_type = pam

Am I missing something? Any permissions?

On Thu, Sep 12, 2019 at 4:54 AM Ayub M mailto:hia...@gmail.com>> wrote:

Okay, thanks for the response. Unfortunately Aurora does not expose these 
files, or I should say there is no concept of these files in the AWS managed Aurora 
DB service. Anyway, I will give it a try and let you know.

On Thu, Sep 12, 2019 at 1:52 AM Achilleas Mantzios mailto:ach...@matrix.gatewaynet.com>> wrote:

On 11/9/19 2:47 μ.μ., Ayub M wrote:

Achilleas, for this setup to work are changes to postgresql.conf and 
pg_hba.conf needed? I am trying to implement this for AWS rds Aurora where 
these files are not accessible.


Those files are needed in any case if you work with postgresql. 
Unfortunately I have no experience with Aurora. We have been building from source 
for ages.

On Mon, Sep 9, 2019, 6:46 AM Achilleas Mantzios mailto:ach...@matrix.gatewaynet.com>> wrote:

On 9/9/19 12:41 μ.μ., Laurenz Albe wrote:
> Christoph Moench-Tegeder wrote:
>>> It has hba and via hba file one can specify ldap connections
>>>
>>> https://www.postgresql.org/docs/9.3/auth-pg-hba-conf.html
>> https://pgbouncer.github.io/config.html#hba-file-format
>> "Auth-method field: Only methods supported by PgBouncer’s 
auth_type
>> are supported", and "ldap" is not supported.
>> When there's no ldap support in pgbouncer, there's no ldap 
support
>> in pgbouncer.
> To throw in something less tautological:
>
> PgBouncer supports PAM authentication, so if you are on UNIX,
> you could use PAM's LDAP module to do what you want.
Right, I had written a blog about it :

https://severalnines.com/database-blog/one-security-system-application-connection-pooling-and-postgresql-case-ldap

However, I always wished (since my first endeavors with pgbouncer) 
it was less complicated.
>
> Yours,
> Laurenz Albe


-- 
Achilleas Mantzios

IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt






-- 
Achilleas Mantzios

IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt

Re: How to handle things that change over time?

2019-09-13 Thread Robert Heinen
I"m a big fan of a table / table_history combo. Meaning, if you have a
person and that person can have different states that change frequently,
you can do something like this:

create extension if not exists btree_gist;  -- assumption: needed for the '=' operator in the exclusion constraint below

create table people(
    person_id serial primary key,  -- column types added; the original sketch omitted them
    person_name text
);

create table people_state_history(
    person_id integer references people,
    effective tsrange not null
        default tsrange(clock_timestamp()::timestamp without time zone,
        'infinity'),
    primary key (person_id, effective),
    person_state text
);

Use a constraint so you can't have overlapping effective ranges:

alter table people_state_history add
    constraint no_overlapping_people_state_history
    exclude using gist (
        person_id with =,
        effective with &&);


Then, to see a person's state at the current time, you can do something
like this:

select person_id, person_state from people_state_history where effective @>
now()::timestamp without time zone
(and add your joins in as well).

We've also created a trigger, so any new insert on the state_history table
will automatically close the last row.

create or replace function set_event_history_in_use ()
returns trigger
as
$$

begin

update people_state_history
set effective = tsrange(lower(effective),
clock_timestamp()::timestamp without time zone)
where clock_timestamp()::timestamp without time zone <@ effective
and person_id = NEW.person_id;
NEW.effective := tsrange(clock_timestamp()::timestamp without time
zone, 'infinity');
return NEW;

end;
$$
language plpgsql;
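
The excerpt above defines the function but not the trigger itself; a minimal
sketch of the missing trigger definition (trigger name assumed):

create trigger bi_people_state_history
before insert on people_state_history
for each row
execute procedure set_event_history_in_use();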

Then, depending on business requirements, you can create this dual table to
track history across time. If salaries are something people would like to
see over time, then you could do a person_salary_history table, etc.

Hope this helps!

Best,
Rob Heinen

On Fri, Sep 13, 2019 at 11:57 AM stan  wrote:

>
> I am working on a system that will support internal business work for a
> company. Periodically things will change in their "world". For example, they
> periodically recalculate the "mark up" of various components of their
> business, such as labor and/or purchased materials. Presently I am keeping
> these constants in a table, and I have the constant and an effective start
> and end date for these factors. In this scenario, the number of rows will
> just grow over time, and by using the proper conditions in our select
> statement, I can apply the correct factors as they change over time.
>
> In another scenario, a column of the employee table is the pay rate.
> Obviously this will change over time, also. It does not appear to me that
> this lends itself to this same treatment, as most of the data associated
> with a given employee is fairly static, and if I add an entire row just
> because the pay rate changed, this looks overly complex.
>
> This cannot be the first time this issue has been addressed. What have
> others done in this scenario?
>
>
>


Re: Web GUI for PG table ?

2019-09-13 Thread George Neuner
On Thu, 12 Sep 2019 10:56:21 -0400, David Gauthier
 wrote:

>Many good visualization options but I need one that runs on the web AND
>allows insert/update/delete records.

Most browser GUI toolkits have some kind of spreadsheet-like "grid"
widget that allows editing the displayed data.  Some frameworks also
have "database" access widgets that can connect to the DBMS and
execute queries there without needing additional middleware on the
server.

But understand that there likely will be some significant Javascript
(or compatible) programming on the frontend to make it work as you
desire.

I've dabbled a bit in frontend web programming and found it to be a
PITA ... I much prefer the server side.  I've done a fair amount of
complex desktop GUI programming over 20+ years (on Macintosh and on
Windows, some of it before the introduction of the GUI frameworks),
and personally I find browser application development to be quite
unfriendly. YMMV.

If you want to go the web route and you don't already have someone
that is pretty well versed in frontend web coding, you might want to
hire a professional.

George





backing up the data from a single table?

2019-09-13 Thread stan
My development methodology is to create scripts that init the db, and load
test data, as I make changes.

Now, I am starting to move toward a test instance with real data. The end
user has provided "real" test data, in spreadsheets. I have the inputting of
data from these working OK (it takes a bit of hand work). What I would like
to do is continue to init the "test" database. To make that easy, I would
like to be able to "export" individual tables, so I can use the scripts to
reload them. I prefer not to have to convert these to my load script
format.

Is there a way to "export" a single table that can be easily
re-"imported"?


-- 
"They that would give up essential liberty for temporary safety deserve
neither liberty nor safety."
-- Benjamin Franklin




Re: PG SQL and LIKE clause

2019-09-13 Thread Adrian Klaver

On 9/12/19 11:11 PM, Ron wrote:

On 9/13/19 12:28 AM, Matthias Apitz wrote:

Hello,

We're porting a huge Library Management System, written using all kinds
of languages one can think of (C, C++, ESQL/C, Perl, Java, ...), on Linux
from the DBMS Sybase to PG, millions of lines of code, which also works
with the DBMS Oracle and in the past with INFORMIX-SE and -ONLINE.

We have learned that with CHAR columns containing trailing blanks, a

SELECT ... FROM ... WHERE name LIKE 'Ali'

does not match when 'name' contains 'Ali '.


Did you forget the "%"?  Because the SQL standard which PostgreSQL 
follows is:


SELECT ... FROM ... WHERE name LIKE 'Ali%'


The above is for finding a string starting with 'Ali'. If you 
are looking for the complete string 'Ali', then the original pattern is 
appropriate. The OP is looking for a way to automatically match a complete 
string against a right-stripped string from a CHAR field.
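
A small illustration of the difference, assuming a table t with a column
name char(10) holding 'Ali' (stored blank-padded):

SELECT name = 'Ali' FROM t;            -- true: char(n) comparison ignores trailing blanks
SELECT name LIKE 'Ali' FROM t;         -- false: LIKE treats the padding as significant
SELECT rtrim(name) LIKE 'Ali' FROM t;  -- true: strip the padding first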





I glanced through our code with grep pipelines and found a few hundred
places which would be affected by this problem. I'm not interested in a
religious discussion about whether this behaviour of PG is more correct or
better than in Sybase. It's just different from Sybase.


SQL Server derives from Sybase, and it would also fail on this statement.


Any hints to address this problem? Or is there any compile time option
for the PG server to address this?

Thanks

matthias





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: backing up the data from a single table?

2019-09-13 Thread Ray O'Donnell

On 13/09/2019 15:13, stan wrote:

My development methodology is to create scripts that init the db, and load
test data, as I make changes.

Now, I am starting to move toward a test instance with real data. The end
user has provided "real" test data, in spreadsheets. I have the inputting of
data from these working OK (it takes a bit of hand work). What I would like
to do is continue to init the "test" database. To make that easy, I would
like to be able to "export" individual tables, so I can use the scripts to
reload them. I prefer not to have to convert these to my load script
format.

Is there a way to "export" a single table that can be easily
re-"imported"?


Yes, absolutely:

  pg_dump -t <table_name>  (etc)

Ray.


--
Raymond O'Donnell // Galway // Ireland
r...@rodonnell.ie




Re: PG SQL and LIKE clause

2019-09-13 Thread Tom Lane
Matthias Apitz  writes:
> We have learned that with CHAR columns containing trailing blanks, a

> SELECT ... FROM ... WHERE name LIKE 'Ali'

> does not match when 'name' contains 'Ali '.

> I glanced through our code with grep pipelines and found a few hundred
> places which would be affected by this problem. I'm not interested in a
> religious discussion about whether this behaviour of PG is more correct or
> better than in Sybase. It's just different from Sybase.

> Any hints to address this problem?

Don't use char(n) for anything more complicated than cases like
it-must-be-a-2-character-state-abbreviation.  You can use varchar(n)
if you must have an upper limit on the field length.  Or text.

regards, tom lane




Re: backing up the data from a single table?

2019-09-13 Thread Ron

On 9/13/19 9:13 AM, stan wrote:

My development methodology is to create scripts that init the db, and load
test data, as I make changes.

Now, I am starting to move toward a test instance with real data. The end
user has provided "real" test data, in spreadsheets. I have the inputting of
data from these working OK (it takes a bit of hand work). What I would like
to do is continue to init the "test" database. To make that easy, I would
like to be able to "export" individual tables, so I can use the scripts to
reload them. I prefer not to have to convert these to my load script
format.

Is there a way to "export" a single table that can be easily
re-"imported"?


The COPY command is what you want.

http://postgresguide.com/utilities/copy.html
https://www.postgresql.org/docs/9.6/sql-copy.html
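
For example, a minimal psql sketch (file path assumed) that round-trips the
employee table from this thread:

\copy employee TO '/tmp/employee.csv' WITH (FORMAT csv, HEADER)
\copy employee FROM '/tmp/employee.csv' WITH (FORMAT csv, HEADER)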

--
Angular momentum makes the world go 'round.




Re: PG SQL and LIKE clause

2019-09-13 Thread Ron

On 9/13/19 9:14 AM, Adrian Klaver wrote:

On 9/12/19 11:11 PM, Ron wrote:

On 9/13/19 12:28 AM, Matthias Apitz wrote:

Hello,

We're porting a huge Library Management System, written using all kinds
of languages one can think of (C, C++, ESQL/C, Perl, Java, ...), on Linux
from the DBMS Sybase to PG, millions of lines of code, which also works
with the DBMS Oracle and in the past with INFORMIX-SE and -ONLINE.

We have learned that with CHAR columns containing trailing blanks, a

SELECT ... FROM ... WHERE name LIKE 'Ali'

does not match when 'name' contains 'Ali '.


Did you forget the "%"?  Because the SQL standard which PostgreSQL 
follows is:


SELECT ... FROM ... WHERE name LIKE 'Ali%'


The above is if you want to find a string starting with 'Ali'. If you are 
looking for the complete string 'Ali' then it is appropriate. The OP is 
looking for a way to automatically match a complete string against a right 
stripped string from a CHAR field.


This is highly dependent on implementation.  On the RDBMS that I used to 
work on, trailing whitespace was automatically stripped from CHAR(xx) fields.


--
Angular momentum makes the world go 'round.




Re: backing up the data from a single table?

2019-09-13 Thread Olivier Gautherot
Hi Stan,

On Fri., September 13, 2019, 11:14, stan  wrote:

> My development methodology is to create scripts that init the db, and load
> test data, as I make changes.
>
> Now, I am starting to move toward a test instance with real data. The end
> user has provided "real" test data, in spreadsheets. I have the inputting of
> data from these working OK (it takes a bit of hand work). What I would like
> to do is continue to init the "test" database. To make that easy, I would
> like to be able to "export" individual tables, so I can use the scripts to
> reload them. I prefer not to have to convert these to my load script
> format.
>
> Is there a way to "export" a single table that can be easily
> re-"imported"?
>
>
> --
> "They that would give up essential liberty for temporary safety deserve
> neither liberty nor safety."
> -- Benjamin Franklin
>

It may be a strange suggestion, but did you consider containers like Docker?
It's been a great help on a 100GB+ test database. Successful changes were
committed; failed changes were rolled back by creating a new instance based on
the same image (with data preloaded). It takes seconds to do a rollback.

Olivier

>


Re: backing up the data from a single table?

2019-09-13 Thread Hans Schou
On Fri, Sep 13, 2019 at 4:14 PM stan  wrote:

> Is there a way to "export" a single table that can be easily
> re-"imported"?
>

Export:

pg_dump --table=foo > foo.sql

Import:

cat foo.sql | psql


PANIC: could not write to file "pg_wal/xlogtemp.11399": No space left on device

2019-09-13 Thread Jason Ralph
Hello list,
[10:47:13] [postgres@host] $  psql --version
psql (PostgreSQL) 11.5
[11:06:36] [postgres@host] $ cat /etc/redhat-release
CentOS release 6.10 (Final)
[11:06:33] [postgres@host] $ uname -a
Linux host 2.6.32-754.18.2.el6.x86_64 #1 SMP Wed Aug 14 16:26:59 UTC 2019 
x86_64 x86_64 x86_64 GNU/Linux

I have been in the process of upgrading my pg9.3 systems to pg11.5; all went 
well with my upgrade using the pg_upgrade method.  Once the upgrade was 
complete I ran the following:
/usr/pgsql-11/bin/vacuumdb -v -j 6 --all --analyze

I have pg_wal on a separate partition, which ran out of space. When I saw this, I 
issued what I think was a premature shutdown (service postgresql-11 stop), then 
moved pg_wal to a larger partition and started postgres. The database did 
recover after the recovery process ran for some time, which is great!

My question is: should I be concerned about data loss when a vacuumdb with 
analyze crashed due to lack of space, and I restarted the database possibly while 
it was trying to recover?  The hinted message at 2019-09-12 23:26:15.978 in the 
log scares me a bit.

2019-09-12 23:15:23.519 EDT [10101] ERROR:  canceling autovacuum task
2019-09-12 23:15:23.519 EDT [10101] CONTEXT:  automatic vacuum of table 
"famnet5.public.private_tablename"
2019-09-12 23:24:51.640 EDT [11399] PANIC:  could not write to file 
"pg_wal/xlogtemp.11399": No space left on device
2019-09-12 23:24:51.640 EDT [11399] CONTEXT:  writing block 288 of relation 
base/16402/4190122
2019-09-12 23:24:51.640 EDT [11399] STATEMENT:  VACUUM (VERBOSE, ANALYZE) 
public.private_table_name;
2019-09-12 23:24:51.686 EDT [9792] LOG:  server process (PID 11399) was 
terminated by signal 6: Aborted
2019-09-12 23:24:51.686 EDT [9792] DETAIL:  Failed process was running: VACUUM 
(VERBOSE, ANALYZE) public.private_tablename;
2019-09-12 23:24:51.687 EDT [9792] LOG:  terminating any other active server 
processes
2019-09-12 23:24:51.689 EDT [11396] WARNING:  terminating connection because of 
crash of another server process
2019-09-12 23:24:51.689 EDT [11396] DETAIL:  The postmaster has commanded this 
server process to roll back the current transaction and exit, because another 
server process exited abnormally and possibly corrupted shared memory.
2019-09-12 23:24:51.689 EDT [11396] HINT:  In a moment you should be able to 
reconnect to the database and repeat your command.
2019-09-12 23:24:51.689 EDT [9799] WARNING:  terminating connection because of 
crash of another server process
2019-09-12 23:24:51.689 EDT [9799] DETAIL:  The postmaster has commanded this 
server process to roll back the current transaction and exit, because another 
server process exited abnormally and possibly corrupted
 shared memory.
2019-09-12 23:24:51.689 EDT [9799] HINT:  In a moment you should be able to 
reconnect to the database and repeat your command.
2019-09-12 23:24:51.689 EDT [11398] WARNING:  terminating connection because of 
crash of another server process
2019-09-12 23:24:51.689 EDT [11398] DETAIL:  The postmaster has commanded this 
server process to roll back the current transaction and exit, because another 
server process exited abnormally and possibly corrupted shared memory.
2019-09-12 23:24:51.689 EDT [11398] HINT:  In a moment you should be able to 
reconnect to the database and repeat your command.
2019-09-12 23:24:51.689 EDT [11397] WARNING:  terminating connection because of 
crash of another server process
2019-09-12 23:24:51.689 EDT [11397] DETAIL:  The postmaster has commanded this 
server process to roll back the current transaction and exit, because another 
server process exited abnormally and possibly corrupted shared memory.
2019-09-12 23:24:51.689 EDT [11397] HINT:  In a moment you should be able to 
reconnect to the database and repeat your command.
2019-09-12 23:24:51.691 EDT [12616] LOG:  PID 11398 in cancel request did not 
match any process
2019-09-12 23:24:51.692 EDT [9792] LOG:  all server processes terminated; 
reinitializing
2019-09-12 23:24:51.758 EDT [12618] LOG:  PID 11399 in cancel request did not 
match any process
2019-09-12 23:24:51.758 EDT [12617] LOG:  database system was interrupted; last 
known up at 2019-09-12 23:21:01 EDT
2019-09-12 23:24:54.110 EDT [12617] LOG:  database system was not properly shut 
down; automatic recovery in progress
2019-09-12 23:24:54.121 EDT [12617] LOG:  redo starts at 119/3A8EAF50
2019-09-12 23:25:34.712 EDT [9792] LOG:  received fast shutdown request
2019-09-12 23:25:34.735 EDT [9792] LOG:  abnormal database system shutdown
2019-09-12 23:25:34.779 EDT [9792] LOG:  database system is shut down
2019-09-12 23:26:15.978 EDT [12972] LOG:  database system was interrupted while 
in recovery at 2019-09-12 23:24:54 EDT
2019-09-12 23:26:15.978 EDT [12972] HINT:  This probably means that some data 
is corrupted and you will have to use the last backup for recovery.
2019-09-12 23:26:19.213 EDT [12972] LOG:  database system was not properly shut 
down; automatic recovery in progress
2019-

Re: How to handle things that change over time?

2019-09-13 Thread Paul Jungwirth

On 9/13/19 2:57 AM, stan wrote:


I am working on a system that will support internal business work for a
company. Periodically things will change in their "world".


Oh this is my favorite topic. :-) It's a common problem, although 
solutions don't seem to be well-known in the programming community. 
Typically you'd use start/end times on your table, as you suggested. 
Postgres is a great RDBMS for this since it has range types and 
exclusion constraints. The SQL:2011 standard also has temporal primary 
keys, foreign keys, SELECTs, and UPDATE/DELETEs, and we're working on 
adding those too. But your use case sounds easy to build even without 
those extra features. Here is a talk I gave this summer about temporal 
features in Postgres:


https://github.com/pjungwir/postgres-temporal-talk

And here is an annotated bibliography to the main writings & tools out 
there, including some projects that might help you implement what you 
want in Postgres:


https://illuminatedcomputing.com/posts/2017/12/temporal-databases-bibliography/

> In another scenario, a column of the employee table is the pay rate.
> Obviously this will change over time, also. It does not appear to me
> that this lends itself to this same treatment, as most of the data
> associated with a given employee, is fairly static, and if I add an
> entire row, just because the pay rate changed, this looks overly
> complex.

Normally people would indeed just add another row. I wouldn't call it 
complex (especially compared to the alternatives), but maybe a little 
wasteful. The Date/Darwen/Lorentzos book gives a lot of attention to 
avoiding the problem, though. Essentially you'd adopt a sixth normal form 
where each attribute (column) gets its own table.
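
As a rough illustration of that attribute-per-table idea for the pay rate case
(names and types assumed; the exclusion constraint needs the btree_gist
extension):

CREATE TABLE employee_pay_rate (
    employee_id integer NOT NULL,
    pay_rate    numeric(8,2) NOT NULL,
    effective   tsrange NOT NULL,
    EXCLUDE USING gist (employee_id WITH =, effective WITH &&)
);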


Good luck. I'm always happy to talk about temporal databases if you 
like. :-)


Yours,

--
Paul  ~{:-)
p...@illuminatedcomputing.com




Re: PANIC: could not write to file "pg_wal/xlogtemp.11399": No space left on device

2019-09-13 Thread Tom Lane
Jason Ralph  writes:
> My question is: should I be concerned about data loss when a vacuumdb with 
> analyze crashed due to lack of space, and I restarted the database possibly 
> while it was trying to recover?  The hinted message at 2019-09-12 23:26:15.978 
> in the log scares me a bit.

No, that's unsurprising given that you did a forced shutdown while
recovery from the initial crash was still running.  The HINT is
probably not terribly well worded; it really only applies if
recovery crashed, which doesn't seem to be the case here.

regards, tom lane




No primary key table

2019-09-13 Thread Ertan Küçükoglu
Hello,

We are using PostgreSQL 10.0 on a Windows VM.

There is one database on that server.
There are several tables that will be used for data warehouse purposes.

There are daily inserts and relatively heavy bulk reads (a whole month's data at 
once) at the end of each month. Reads will be from several hundred clients and 
will be over the internet, with no local network connection.

The daily data-saving application checks for duplicate entries of a single 
record using an SQL query before each insert. That is the only SELECT statement 
during in-month days.

End-of-month queries will select bulk data from the previous month's records and 
will filter on a GUID field and a varchar(25) field.

There is one primary key, a bigint identity column, on each table. The primary 
key won't be used for any purpose in any queries.

We wonder whether it would be advisable to remove the primary key index and 
column. There is no unique key index on these tables otherwise.

Thanks & regards,
Ertan Küçükoğlu

Sent from my iPhone



Re: No primary key table

2019-09-13 Thread Ron

On 9/13/19 10:34 AM, Ertan Küçükoglu wrote:

Hello,

We are using PostgreSQL 10.0 on a Windows VM.

There is one database on that server.
There are several tables that will be used for data warehouse purposes.

There are daily inserts and relatively heavy bulk reads (a whole month's data at 
once) at the end of each month. Reads will be from several hundred clients and 
will be over the internet, with no local network connection.

The daily data-saving application checks for duplicate entries of a single 
record using an SQL query before each insert. That is the only SELECT statement 
during in-month days.

End-of-month queries will select bulk data from the previous month's records and 
will filter on a GUID field and a varchar(25) field.

There is one primary key, a bigint identity column, on each table. The primary 
key won't be used for any purpose in any queries.

We wonder whether it would be advisable to remove the primary key index and 
column. There is no unique key index on these tables otherwise.


Don't you need an index to "check for duplicate entries of a single record 
using an SQL query before each insert"?
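
If the duplicate check filters on the same two columns as the month-end
queries, a unique index would serve both purposes; a sketch with assumed
names:

CREATE UNIQUE INDEX records_guid_code_idx ON records (guid_col, code_col);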


--
Angular momentum makes the world go 'round.




Suitable licence for schema components

2019-09-13 Thread Bernard Quatermass
I’ve started on extracting some of the DB-backed tables and stored procedures 
I’ve developed for use behind my jpigd helper 
(https://gitlab.quatermass.co.uk/jpig/jpigd) and was wondering what informed 
folks here felt as to the most appropriate licence for releasing frameworks 
that live in their own schemas.

The most obvious ones that come to mind would be GPLv3+ or the PostgreSQL 
License, and I was wondering about the applicability of GPLv3 if the framework 
essentially constitutes a library, and its possible pollution of the using 
project.

Regarding the frameworks themselves, they’re not tied to the daemon itself but 
are designed to provide specific elements of backend functionality that I’ve 
needed as part of some ongoing development.

* An application password test/encode framework that knows about a lot 
of common formats but can automatically upgrade to a default (leaving only the 
unused accounts in old formats).
* A JSON-RPC decode/dispatch framework (bi-mode: handles a direct DB 
connection doing JSON-RPC calls, and also a similar mechanism behind jpigd with 
web session cookie management).

I expect to get the first drop of these beasts in a week or two once I’ve 
cleanly extricated them from the original project and got some form of 
test-case stuff in place.

So, what does everybody feel about the suitable licences for this ?

— 
Bernard Quatermass.

Re: No primary key table

2019-09-13 Thread SERHAD ERDEM
Hi,
If you have not seen any benefit, of course you can remove the identity column 
from a DWH table; there is a sequence and a trigger for an identity/serial 
column.
ID columns are generally used for base tables which are under end-user 
operations.



From: Ertan Küçükoglu 
Sent: Friday, September 13, 2019 3:34 PM
To: pgsql-general@lists.postgresql.org 
Subject: No primary key table

Hello,

We are using PostgreSQL 10.0 on a Windows VM.

There is one database on that server.
There are several tables that will be used for data warehouse purposes.

There are daily inserts and relatively heavy bulk reads (a whole month's data at 
once) at the end of each month. Reads will be from several hundred clients and 
will be over the internet, with no local network connection.

The daily data-saving application checks for duplicate entries of a single 
record using an SQL query before each insert. That is the only SELECT statement 
during in-month days.

End-of-month queries will select bulk data from the previous month's records and 
will filter on a GUID field and a varchar(25) field.

There is one primary key, a bigint identity column, on each table. The primary 
key won't be used for any purpose in any queries.

We wonder whether it would be advisable to remove the primary key index and 
column. There is no unique key index on these tables otherwise.

Thanks & regards,
Ertan Küçükoğlu

Sent from my iPhone



problems importing from csv

2019-09-13 Thread stan


I am trying to import some data from spreadsheets. Included in the data
sets are US monetary values. These appear in the CSV file like this: $1.00
The column is defined like this: NUMERIC(5,2) NOT NULL. When I try to
import this data using the \copy functionality, I get the following error:

stan@smokey:~/pm_db/live_data$ psql < import_employee.sql
ERROR:  invalid input syntax for type numeric: "$1.00"
CONTEXT:  COPY employee, line 2, column hourly_rate: "$1.00"

How can I work around this?

-- 
"They that would give up essential liberty for temporary safety deserve
neither liberty nor safety."
-- Benjamin Franklin




Re: problems importing from csv

2019-09-13 Thread Melvin Davidson
>ERROR:  invalid input syntax for type numeric: "$1.00"
>CONTEXT:  COPY employee, line 2, column hourly_rate: "$1.00"

Try changing the format of the column from currency to numeric BEFORE
exporting.
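
If re-exporting from the spreadsheet is not an option, a SQL-side sketch
(staging table and column names assumed) is to load the raw text and strip
the currency symbol while inserting:

CREATE TEMP TABLE employee_stage (hourly_rate text);
\copy employee_stage FROM 'employee.csv' WITH (FORMAT csv, HEADER)
INSERT INTO employee (hourly_rate)
SELECT replace(hourly_rate, '$', '')::numeric(5,2)
FROM employee_stage;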

On Fri, Sep 13, 2019 at 2:22 PM stan  wrote:

>
> I am trying to import some data from spreadsheets. Included in the data
> sets are US monetary values. These appear in the CSV file like this: $1.00
> The column is defined like this: NUMERIC(5,2) NOT NULL. When I try to
> import this data using the \copy functionality, I get the following error:
>
> stan@smokey:~/pm_db/live_data$ psql < import_employee.sql
> ERROR:  invalid input syntax for type numeric: "$1.00"
> CONTEXT:  COPY employee, line 2, column hourly_rate: "$1.00"
>
> How can I work around this?
>
> --
> "They that would give up essential liberty for temporary safety deserve
> neither liberty nor safety."
> -- Benjamin Franklin
>
>
>

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


Cascade Trigger Not Firing

2019-09-13 Thread Judy Loomis
I have a trigger that updates a target column when some other columns
change.

There is another trigger on the target column to update another table (the
column can be changed in other ways besides the first trigger).

If I update the target column directly the expected trigger fires.

But if the 1st trigger changes the target column and it wasn't in the list
of updated columns, the 2nd trigger doesn't fire.

Is this expected behavior? I thought that ANY change to the column would
fire the trigger.

Note that I've got a work-around by making the first trigger an AFTER
trigger and calling UPDATE instead of just changing NEW. But it was a while
before we caught this and it's worrisome to me that a column can change
without a trigger noticing.

Here's about the smallest example I could come up with:
---
CREATE TABLE IF NOT EXISTS table1(
  id SERIAL,
  total INTEGER DEFAULT 0
);
CREATE TABLE IF NOT EXISTS table2(
  id SERIAL,
  t1_id INTEGER,
  col1 INTEGER DEFAULT 0,
  col2 INTEGER DEFAULT 0
);
CREATE OR REPLACE FUNCTION update_total()
RETURNS TRIGGER AS $$
DECLARE
BEGIN
RAISE WARNING '### in update_total: %',NEW;
UPDATE table1
   SET total = NEW.col2
 WHERE id = NEW.t1_id;

RETURN NULL;
END;
$$ LANGUAGE 'plpgsql';


CREATE OR REPLACE FUNCTION update_col2()
RETURNS TRIGGER AS $$
DECLARE
BEGIN
RAISE WARNING ' in update_col2: %', NEW;
NEW.col2 = NEW.col1 * 3;

RETURN NEW;
END;
$$ LANGUAGE 'plpgsql';

CREATE TRIGGER au_update_total
  AFTER UPDATE OF col2
  ON table2
  FOR EACH ROW
  EXECUTE PROCEDURE update_total();

CREATE TRIGGER biu_update_col2
  BEFORE INSERT OR UPDATE OF col1
  ON table2
  FOR EACH ROW
  EXECUTE PROCEDURE update_col2();

INSERT INTO table1 VALUES (DEFAULT, -99);
INSERT INTO table2 VALUES (DEFAULT, 1, 10, 10); -- fires col2 trigger
SELECT * FROM table1;
SELECT * FROM table2;

UPDATE table2 SET col2 = 99; -- fires total trigger
SELECT * FROM table1;
SELECT * from table2;

UPDATE table2 SET col1 = 5; -- ** only col2 trigger is fired; expected total trigger to fire ***
SELECT * FROM table1;
SELECT * from table2;

UPDATE table2 SET col1 = 3, col2 = col2; -- fires both triggers
SELECT * FROM table1;
SELECT * from table2;


Re: Cascade Trigger Not Firing

2019-09-13 Thread Tom Lane
Judy Loomis  writes:
> I have a trigger that updates a target column when some other columns
> change.
> There is another trigger on the target column to update another table (the
> column can be changed in other ways besides the first trigger).
> If I update the target column directly the expected trigger fires.
> But if the 1st trigger changes the target column and it wasn't in the list
> of updated columns, the 2nd trigger doesn't fire.
> Is this expected behavior?

Per the manual (NOTES section of the CREATE TRIGGER man page):

A column-specific trigger (one defined using the UPDATE OF column_name
syntax) will fire when any of its columns are listed as targets in the
UPDATE command's SET list. It is possible for a column's value to
change even when the trigger is not fired, because changes made to the
row's contents by BEFORE UPDATE triggers are not
considered. Conversely, a command such as UPDATE ... SET x = x ...
will fire a trigger on column x, even though the column's value
did not change.

It's not really practical for trigger firings to depend on what other
triggers did or might do --- you'd soon end up with circularities.

regards, tom lane




Re: PG SQL and LIKE clause

2019-09-13 Thread Adrian Klaver

On 9/13/19 7:35 AM, Ron wrote:

On 9/13/19 9:14 AM, Adrian Klaver wrote:

On 9/12/19 11:11 PM, Ron wrote:

On 9/13/19 12:28 AM, Matthias Apitz wrote:

Hello,

We're porting a huge Library Management System, written using all kinds
of languages one can think of (C, C++, ESQL/C, Perl, Java, ...), on Linux
from the DBMS Sybase to PG, millions of lines of code, which also works
with the DBMS Oracle and in the past with INFORMIX-SE and -ONLINE.

We have learned that with CHAR columns containing trailing blanks, a

SELECT ... FROM ... WHERE name LIKE 'Ali'

does not match when 'name' contains 'Ali '.


Did you forget the "%"?  Because the SQL standard which PostgreSQL 
follows is:


SELECT ... FROM ... WHERE name LIKE 'Ali%'


The above is for finding a string starting with 'Ali'. If you 
are looking for the complete string 'Ali', then the original pattern is 
appropriate. The OP is looking for a way to automatically match a complete 
string against a right-stripped string from a CHAR field.


This is highly dependent on implementation.  On the RDBMS that I used to 
work on, trailing whitespace was automatically stripped from CHAR(xx) 
fields.




That is what is driving the OP's question. On Sybase the trailing 
whitespace is apparently stripped; on Postgres it is not, at least for 
the purposes of LIKE, though for other comparisons trailing whitespace is 
ignored. See below for more information:


https://www.postgresql.org/docs/11/datatype-character.html

--
Adrian Klaver
adrian.kla...@aklaver.com




Re: problems importing from csv

2019-09-13 Thread Adrian Klaver

On 9/13/19 11:22 AM, stan wrote:


I am trying to import some data from spreadsheets. Included in the data


What program?

Usually there is an option to save the raw values not the formatted 
ones. In LibreOffice you uncheck the Save content as shown option.



sets are US monetary values. These appear in the CSV file, like this: $1.00
The column is defined like this: NUMERIC(5,2) NOT NULL. When I try to
import this data using the \copy functionality, i get the following error;

stan@smokey:~/pm_db/live_data$ psql < import_employee.sql
ERROR:  invalid input syntax for type numeric: "$1.00"
CONTEXT:  COPY employee, line 2, column hourly_rate: "$1.00"

How can I work around this?




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: PG SQL and LIKE clause

2019-09-13 Thread John W Higgins
Is this a possibility?

From
https://www.postgresql.org/docs/current/functions-matching.html#FUNCTIONS-POSIX-REGEXP

~ 'ali[ ]*$' matches strings ending in ali that have zero or more spaces
after ali

this would match

'bali'
'ali'
'ali '
'bali '

If you need full string then it would be like this

~ '^ali[ ]*$' matches strings containing only ali plus zero or more spaces
after ali

this would match

'ali'
'ali '

but not match

'bali'
'bali '

Also, switching ~* for ~ makes it case-insensitive if necessary.
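
In query form, the anchored pattern might look like this (table and column
names assumed):

SELECT * FROM t WHERE name ~ '^ali[ ]*$';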


John


On Thu, Sep 12, 2019 at 10:29 PM Matthias Apitz  wrote:

>
> Hello,
>
> We're porting a huge Library Management System, written using all kinds
> of languages one can think of (C, C++, ESQL/C, Perl, Java, ...), on Linux
> from the DBMS Sybase to PG, millions of lines of code, which also works
> with the DBMS Oracle and in the past with INFORMIX-SE and -ONLINE.
>
> We have learned that with CHAR columns containing trailing blanks, a
>
> SELECT ... FROM ... WHERE name LIKE 'Ali'
>
> does not match when 'name' contains 'Ali '.
>
> I glanced through our code with grep pipelines and found a few hundred
> places which would be affected by this problem. I'm not interested in a
> religious discussion about whether this behaviour of PG is more correct or
> better than in Sybase. It's just different from Sybase.
>
> Any hints to address this problem? Or is there any compile time option
> for the PG server to address this?
>
> Thanks
>
> matthias
> --
> Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/
> +49-176-38902045
> Public GnuPG key: http://www.unixarea.de/key.pub
>
> Mientras haya voluntad de lucha habrá esperanza de vencer.
>
>
>


Re: Cascade Trigger Not Firing

2019-09-13 Thread Judy Loomis
I thought that might be the answer, but it's a pretty big hole when we're
using triggers for audit purposes on financial data.

I'm going to have to really look at all my BEFORE UPDATE triggers and make
sure we're not missing any more.

And I have to stop telling management that a trigger means we always know
when a value changes.

Thanks,
Judy

On Fri, Sep 13, 2019 at 2:08 PM Tom Lane  wrote:

> Judy Loomis  writes:
> > I have a trigger that updates a target column when some other columns
> > change.
> > There is another trigger on the target column to update another table
> (the
> > column can be changed in other ways besides the first trigger).
> > If I update the target column directly the expected trigger fires.
> > But if the 1st trigger changes the target column and it wasn't in the
> list
> > of updated columns, the 2nd trigger doesn't fire.
> > Is this expected behavior?
>
> Per the manual (NOTES section of the CREATE TRIGGER man page):
>
> A column-specific trigger (one defined using the UPDATE OF column_name
> syntax) will fire when any of its columns are listed as targets in the
> UPDATE command's SET list. It is possible for a column's value to
> change even when the trigger is not fired, because changes made to the
> row's contents by BEFORE UPDATE triggers are not
> considered. Conversely, a command such as UPDATE ... SET x = x ...
> will fire a trigger on column x, even though the column's value
> did not change.
>
> It's not really practical for trigger firings to depend on what other
> triggers did or might do --- you'd soon end up with circularities.
>
> regards, tom lane
>


-- 

--
*Judy Loomis*
469.235.5839


Re: Cascade Trigger Not Firing

2019-09-13 Thread Tom Lane
Judy Loomis  writes:
> I'm going to have to really look at all my BEFORE UPDATE triggers and make
> sure we're not missing any more.
> And I have to stop telling management that a trigger means we always know
> when a value changes.

Well, you can rely on that, just not like this.  Use an AFTER trigger
(else, you can't be sure it fires after all the BEFORE triggers)
and instead of triggering it with a column parameter, have it do
something like "if old.col is distinct from new.col".
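
For instance, a minimal sketch adapted to the example tables upthread
(replacing the column-specific trigger; an illustration, not the OP's exact
code):

CREATE OR REPLACE FUNCTION update_total()
RETURNS TRIGGER AS $$
BEGIN
    -- fire only when the value actually changed, regardless of the SET list
    IF OLD.col2 IS DISTINCT FROM NEW.col2 THEN
        UPDATE table1 SET total = NEW.col2 WHERE id = NEW.t1_id;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

DROP TRIGGER IF EXISTS au_update_total ON table2;
CREATE TRIGGER au_update_total
    AFTER UPDATE ON table2  -- no "OF col2" column list
    FOR EACH ROW
    EXECUTE PROCEDURE update_total();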

Yeah, it's a bit slower that way, but there's no free lunch,
especially if you don't trust your other triggers.  (Although,
if you have so many triggers that that's a problem, I think you
might have some other design issues.)

regards, tom lane




Re: Cascade Trigger Not Firing

2019-09-13 Thread Judy Loomis
At the very least that note about this behavior should be highlighted,
probably on the Trigger Behavior page and not buried in a bunch of notes on
the Create Trigger page.



On Fri, Sep 13, 2019 at 4:03 PM Tom Lane  wrote:

> Judy Loomis  writes:
> > I'm going to have to really look at all my BEFORE UPDATE triggers and
> make
> > sure we're not missing any more.
> > And I have to stop telling management that a trigger means we always know
> > when a value changes.
>
> Well, you can rely on that, just not like this.  Use an AFTER trigger
> (else, you can't be sure it fires after all the BEFORE triggers)
> and instead of triggering it with a column parameter, have it do
> something like "if old.col is distinct from new.col".
>
> Yeah, it's a bit slower that way, but there's no free lunch,
> especially if you don't trust your other triggers.  (Although,
> if you have so many triggers that that's a problem, I think you
> might have some other design issues.)
>
> regards, tom lane
>


-- 

--
*Judy Loomis*
469.235.5839


Re: pgbouncer with ldap

2019-09-13 Thread Ayub M
Yes, I did set that; here is what pgbouncer looks like:

-rwsrwsr-x. 1 root root 2087504 Sep 13 00:45 pgbouncer

On Fri, Sep 13, 2019 at 6:50 AM Achilleas Mantzios <
ach...@matrix.gatewaynet.com> wrote:

> On 13/9/19 10:19 π.μ., Ayub M wrote:
>
> Stumbled at the first step - PAM authentication via pgbouncer. After
> compiling pgbouncer with the PAM plug-in, I am unable to log in to the db
> - it throws a PAM error message. Please help.
>
> User created with the same password as linux user --
> localhost:~$ psql -h dbhost -p 3306 -U admin -W db1
> db1=> create user testuser password 'hello123';
> CREATE ROLE
>
> [ec2-user@ip-1.1.1.1 pam.d]$ psql -h localhost -p 5432 testdb -U testuser
> Password for user testuser:
> psql: ERROR:  auth failed
>
>
> ok, pgbouncer should be able to read /etc/pam* files.
> Did you miss the
> # chown root:staff ~pgbouncer/pgbouncer-1.9.0/pgbouncer
> # chmod +s ~pgbouncer/pgbouncer-1.9.0/pgbouncer
> part?
>
>
> Log entries - pgbouncer.log
> 2019-09-13 06:51:47.180 UTC [5752] LOG C-0x1243020: 
> testdb/testuser@[::1]:52408
> login attempt: db=testdb user=testuser tls=no
> 2019-09-13 06:51:47.180 UTC [5752] NOISE safe_send(12, 9) = 9
> 2019-09-13 06:51:47.180 UTC [5752] NOISE resync(12): done=86, parse=86,
> recv=86
> 2019-09-13 06:51:47.180 UTC [5752] NOISE resync(12): done=0, parse=0,
> recv=0
> 2019-09-13 06:51:47.180 UTC [5752] NOISE safe_recv(12, 4096) = 14
> 2019-09-13 06:51:47.180 UTC [5752] NOISE C-0x1243020: 
> testdb/testuser@[::1]:52408
> read pkt='p' len=14
> 2019-09-13 06:51:47.180 UTC [5752] DEBUG C-0x1243020: 
> testdb/testuser@[::1]:52408
> pam_auth_begin(): pam_first_taken_slot=1, pam_first_free_slot=1
> 2019-09-13 06:51:47.180 UTC [5752] DEBUG pam_auth_worker(): processing
> slot 1
> 2019-09-13 06:51:47.180 UTC [5752] WARNING pam_authenticate() failed:
> Authentication failure
> 2019-09-13 06:51:47.181 UTC [5752] DEBUG pam_auth_worker(): authorization
> completed, status=3
> 2019-09-13 06:51:47.386 UTC [5752] LOG C-0x1243020: 
> testdb/testuser@[::1]:52408
> closing because: auth failed (age=0s)
> 2019-09-13 06:51:47.386 UTC [5752] WARNING C-0x1243020: 
> testdb/testuser@[::1]:52408
> pooler error: auth failed
>
> Able to log in as testuser
> [ec2-user@ip-1.1.1.1 pam.d]$ su - testuser
> Password:
> Last login: Fri Sep 13 06:21:12 UTC 2019 on pts/1
> [testuser@ip-1.1.1.1 ~]$ id
> uid=1001(testuser) gid=1001(testuser) groups=1001(testuser)
> context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
>
> The user was created as follows
> [root@ip-1.1.1.1 ~]# adduser -p hello123 testuser
> [root@ip-1.1.1.1 ~]# id testuser
> uid=1001(testuser) gid=1001(testuser) groups=1001(testuser)
>
> Here is the pgbouncer.ini config
> [ec2-user@ip-1.1.1.1 etc]$ less pgbouncer.ini | grep -v '^$' | grep -v
> '^;'
> [databases]
> testdb = host=dbhost port=3306 dbname=db1
> [users]
> [pgbouncer]
> logfile = /var/log/pgbouncer/pgbouncer.log
> pidfile = /var/run/pgbouncer/pgbouncer.pid
> listen_addr = *
> listen_port = 5432
> auth_type = pam
>
> Am I missing something? Any permissions?
>
> On Thu, Sep 12, 2019 at 4:54 AM Ayub M  wrote:
>
>> Okay, thanks for the response. Unfortunately Aurora does not expose these
>> files, or I should say there is no concept of these files in the AWS managed
>> Aurora DB service. Anyway, I will give it a try and let you know.
>>
>> On Thu, Sep 12, 2019 at 1:52 AM Achilleas Mantzios <
>> ach...@matrix.gatewaynet.com> wrote:
>>
>>> On 11/9/19 2:47 μ.μ., Ayub M wrote:
>>>
>>> Achilleas, for this setup to work are changes to postgresql.conf and
>>> pg_hba.conf needed? I am trying to implement this for AWS rds Aurora where
>>> these files are not accessible.
>>>
>>> Those files are needed in any case if you work with postgresql.
>>> Unfortunately I have no experience with Aurora. We have been building from
>>> source for ages.
>>>
>>> On Mon, Sep 9, 2019, 6:46 AM Achilleas Mantzios <
>>> ach...@matrix.gatewaynet.com> wrote:
>>>
 On 9/9/19 12:41 μ.μ., Laurenz Albe wrote:
 > Christoph Moench-Tegeder wrote:
 >>> It has hba and via hba file one can specify ldap connections
 >>>
 >>> https://www.postgresql.org/docs/9.3/auth-pg-hba-conf.html
 >> https://pgbouncer.github.io/config.html#hba-file-format
 >> "Auth-method field: Only methods supported by PgBouncer’s auth_type
 >> are supported", and "ldap" is not supported.
 >> When there's no ldap support in pgbouncer, there's no ldap support
 >> in pgbouncer.
 > To throw in something less tautological:
 >
 > PgBouncer supports PAM authentication, so if you are on UNIX,
 > you could use PAM's LDAP module to do what you want.
 Right, I had written a blog about it :

 https://severalnines.com/database-blog/one-security-system-application-connection-pooling-and-postgresql-case-ldap

 However, I always wished (since my first endeavors with pgbouncer) it
 was less complicated.
 >
 > Yours,
 > Laurenz Albe


 --
 Achilleas Mant