Re: [GENERAL] column names and dollar sign

2017-05-17 Thread Armand Pirvu (home)
Thanks  Tom


Armand

On May 17, 2017, at 4:14 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:

> "Armand Pirvu (home)" <armand.pi...@gmail.com> writes:
>> Ran into the following statement
> 
>> CREATE TABLE test(
>>  Date$ date,
>>  Month_Number$ int,
>>  Month$ varchar(10),
>>  Year$ int
>> );
> 
>> While it does execute, I wonder if the $ has any special meaning ?
> 
> Postgres thinks it's a valid identifier character, as long as it's
> not the first character.  I don't believe it's standard SQL, but
> we hacked it in a long time ago for compatibility with Oracle or
> somebody like that.
> 
>   regards, tom lane
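A quick sketch illustrating Tom's point (table names here are made up): a dollar sign is accepted in an identifier except as the first character, unless the identifier is double-quoted.

```sql
-- Dollar signs are accepted after the first character of an identifier:
CREATE TABLE t1 (month$ int);        -- works

-- But a leading dollar sign is not a valid identifier start:
-- CREATE TABLE t2 ($month int);     -- fails with a syntax error

-- A leading $ can still be used if the identifier is double-quoted:
CREATE TABLE t2 ("$month" int);
```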



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] column names and dollar sign

2017-05-17 Thread Armand Pirvu (home)
Thank you Steve

— Armand

On May 17, 2017, at 4:10 PM, Steve Atkins <st...@blighty.com> wrote:

> 
>> On May 17, 2017, at 2:02 PM, Armand Pirvu (home) <armand.pi...@gmail.com> 
>> wrote:
>> 
>> Hi 
>> 
>> Ran into the following statement
>> 
>> CREATE TABLE test(
>>  Date$ date,
>>  Month_Number$ int,
>>  Month$ varchar(10),
>>  Year$ int
>> );
>> 
>> 
>> While it does execute, I wonder if the $ has any special meaning ?
>> 
>> Can anyone shed some light please ?
> 
> No special meaning to postgresql - in postgresql a dollar sign is a valid 
> character in an identifier.
> 
> It might have some special meaning to the app that was using it, perhaps.
> 
> Cheers,
>  Steve
> 
> 
> 





[GENERAL] column names and dollar sign

2017-05-17 Thread Armand Pirvu (home)
Hi 

Ran into the following statement

CREATE TABLE test(
Date$ date,
Month_Number$ int,
Month$ varchar(10),
Year$ int
);


While it does execute, I wonder if the $ has any special meaning ?

Can anyone shed some light please ?


Thanks
Armand






[GENERAL] pglogical and slony

2017-05-11 Thread Armand Pirvu (home)
Thanks to Adrian, I got pointed in the direction of enabling triggers for use 
with pglogical

In other words, whatever comes down the pglogical stream in terms of 
inserts/updates/deletes, I can track in some audit table or process otherwise

That got me thinking: why not put Slony, which is trigger based, at the 
receiving end of pglogical ?


Postgres 9.5.5

edb1 192.168.5.251  -   master pglogical
edb2 192.168.5.252  -   slave pglogical


edb2 192.168.5.252  -   master slony
edb3 192.168.5.253  -   slave slony



sudo -u postgres /usr/pgsql-9.5/bin/slonik << _EOF_
cluster name=armandpc123;
node 1 admin conninfo = 'dbname=csidb host=192.168.5.252 user=postgres';
node 2 admin conninfo = 'dbname=csidb host=192.168.5.253 user=postgres';
init cluster (id=1, comment='Node 1');
store node (id=2, comment='Node 2',event node=1);
store path (server=1, client=2, conninfo='dbname=csidb host=192.168.5.252 
user=postgres', connretry=10);
store path (server=2, client=1, conninfo='dbname=csidb host=192.168.5.253 
user=postgres', connretry=10);
create set (id=1, origin=1, comment='some test tables');
set add table (id=1, set id=1, origin=1, fully qualified name='public.tbl2', 
comment='tbl2 table', key='tbl2_pkey');
_EOF_


- master (edb2 192.168.5.252)
sudo -u postgres /usr/pgsql-9.5/bin/slon armandpc123 "dbname=csidb 
user=postgres host=192.168.5.252"
- slave (edb3 192.168.5.253)
sudo -u postgres /usr/pgsql-9.5/bin/slon armandpc123 "dbname=csidb 
user=postgres host=192.168.5.253"


sudo -u postgres /usr/pgsql-9.5/bin/slonik << _EOF_
cluster name=armandpc123;
node 1 admin conninfo = 'dbname=csidb host=192.168.5.252 user=postgres';
node 2 admin conninfo = 'dbname=csidb host=192.168.5.253 user=postgres';
subscribe set (id=1, provider=1, receiver=2, forward=no);
_EOF_



csidb=# \d tbl2;
Table "public.tbl2"
 Column |     Type      | Modifiers 
--------+---------------+-----------
 col1   | integer       | not null
 col2   | character(10) | 
Indexes:
"tbl2_pkey" PRIMARY KEY, btree (col1)
Triggers:
_armandpc123_denyaccess BEFORE INSERT OR DELETE OR UPDATE ON tbl2 FOR EACH 
ROW EXECUTE PROCEDURE _armandpc123.denyaccess('_armandpc123')
_armandpc123_truncatedeny BEFORE TRUNCATE ON tbl2 FOR EACH STATEMENT 
EXECUTE PROCEDURE _armandpc123.deny_truncate()
Disabled user triggers:
_armandpc123_logtrigger AFTER INSERT OR DELETE OR UPDATE ON tbl2 FOR EACH 
ROW EXECUTE PROCEDURE _armandpc123.logtrigger('_armandpc123', '1', 'k')
_armandpc123_truncatetrigger BEFORE TRUNCATE ON tbl2 FOR EACH STATEMENT 
EXECUTE PROCEDURE _armandpc123.log_truncate('1')




csidb=# alter table tbl2 ENABLE ALWAYS TRIGGER _armandpc123_logtrigger;
ALTER TABLE
csidb=# \d tbl2
Table "public.tbl2"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 col1   | integer               | not null
 col2   | character varying(10) | 
Indexes:
"tbl2_pkey" PRIMARY KEY, btree (col1)
Triggers:
_armandpc123_truncatetrigger BEFORE TRUNCATE ON tbl2 FOR EACH STATEMENT 
EXECUTE PROCEDURE _armandpc123.log_truncate('1')
Disabled user triggers:
_armandpc123_denyaccess BEFORE INSERT OR DELETE OR UPDATE ON tbl2 FOR EACH 
ROW EXECUTE PROCEDURE _armandpc123.denyaccess('_armandpc123')
_armandpc123_truncatedeny BEFORE TRUNCATE ON tbl2 FOR EACH STATEMENT 
EXECUTE PROCEDURE _armandpc123.deny_truncate()
Triggers firing always:
_armandpc123_logtrigger AFTER INSERT OR DELETE OR UPDATE ON tbl2 FOR EACH 
ROW EXECUTE PROCEDURE _armandpc123.logtrigger('_armandpc123', '1', 'k')
trig_hist_aud_prev_tbl2 AFTER INSERT OR DELETE OR UPDATE ON tbl2 FOR EACH 
ROW EXECUTE PROCEDURE func_audit_tname()


csidb=# alter table tbl2 disable trigger _armandpc123_denyaccess;

A manual insert into edb2.csidb.tbl2 replicates fine to edb3.csidb.tbl2, but 
rows arriving in edb2.csidb.tbl2 from edb1.csidb.tbl2 via pglogical are not 
replicated onward, even though the triggers are enabled

Is this possible ? 
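For context, pglogical's apply worker runs with session_replication_role set to replica, which suppresses ordinarily-enabled triggers such as Slony's log trigger; only ENABLE REPLICA / ENABLE ALWAYS triggers fire during apply. A sketch of the statements involved, using the trigger name from above:

```sql
-- pglogical's apply process runs with this setting, which keeps
-- ordinarily-enabled triggers (including Slony's log trigger) from firing:
SET session_replication_role = replica;

-- To make a trigger fire even while changes are being applied by replication:
ALTER TABLE tbl2 ENABLE ALWAYS TRIGGER _armandpc123_logtrigger;

-- Normal sessions use the default role again:
SET session_replication_role = origin;
```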


Thank you

Armand






Re: [GENERAL] views and fdw usage and performance

2017-05-09 Thread Armand Pirvu (home)

On May 9, 2017, at 7:11 PM, Adrian Klaver  wrote:

> On 05/09/2017 05:02 PM, armand pirvu wrote:
>> Well
>> Jt1 is prod and jt2 is dev
> 
> You are talking schemas, not databases, correct?
> 
> 

Correct

>> Before someone pushes to prod it does work in dev. The jdbc connection 
> 
> That would concern me, as anything bad that happened in the dev schema could 
> bring the entire database to its knees, including the prod schema.
> 
> How does data get into the prod schema if the connection is to the dev schema?

If you are a user in say category  B you get to dev where you do your thing. If 
you deem okay you push to prod.
If you are a user in say category A you get to prod



> 
>> routes to jt2. In the meantime it was needed that some tables in prod are 
>> synced at all times from dev. Hence the view/fdw.
>> What I meant by connections was more to say the type of load or users doing 
>> something in each schema.
> 
> The issue being that if you are pushing data from jt2 --> jt1 you are also 
> pushing the load in the same direction.

I see but short of using something like Slony in between the  two schemas I 
don’t see a pretty simple choice


> 
>> So my questions still remain

And about the plan from the fdw: am I right or wrong ? Based on the numbers in 
the timings, I am inclined to say I am right


>> Sent from my iPhone
>> On May 9, 2017, at 6:52 PM, Adrian Klaver wrote:
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com





[GENERAL] views and fdw usage and performance

2017-05-09 Thread Armand Pirvu (home)
Hi 

I have two schemas jt1, and jt2 in the same db
In both I have the same table tbl3
The idea is to keep in sync jt1.tbl3 from jt2.tbl3 each time I have an 
insert/update/delete on jt2.tbl3

So I was thinking about the following cases to avoid replication 

1) in jt2 rather than have the tbl3 table, have a view named tbl3 based on 
jt1.tbl3

2) use the postgres fdw and in jt2 have a foreign table tbl3 to jt1.tbl3

create table tbl3 (col1 integer, col2 integer, col3 integer, primary key 
(col1));

insert into tbl3 
select generate_series(1, 1000), 111,222;

Questions:

q1 - Any issues with this logic ? Or any better ways to do this ? jt2 usage in 
terms of concurrent users and so on is far lower than jt1's, by at least an 
order of magnitude

q2 - query performance (select * from tbl3 where col1=499123;). With views the 
EXPLAIN output is clear cut; with the fdw, not so much



explain analyze select * from jt2.tbl3 where col1=874433;
                         QUERY PLAN                          
-------------------------------------------------------------
 Foreign Scan on tbl3  (cost=100.00..138.66 rows=11 width=12) (actual 
time=0.204..0.205 rows=1 loops=1)
 Planning time: 0.043 ms
 Execution time: 0.374 ms
(3 rows)


explain analyze select * from jt1.tbl3 where col1=874433;
                                QUERY PLAN                                 
---------------------------------------------------------------------------
 Index Scan using tbl3_pkey on tbl3  (cost=0.43..8.45 rows=1 width=12) (actual 
time=0.010..0.011 rows=1 loops=1)
   Index Cond: (col1 = 874433)
 Planning time: 0.035 ms
 Execution time: 0.021 ms
(4 rows)


Do I understand correctly that the output of (explain analyze select * from 
jt2.tbl3 where col1=874433) is in essence (for all practical purposes) the same 
as the one from (explain analyze select * from jt1.tbl3 where col1=874433;) and 
not a sequential scan like the following ?

explain analyze select * from jt1.tbl3 where col2=874433;
                         QUERY PLAN                          
-------------------------------------------------------------
 Seq Scan on tbl3  (cost=0.00..179053.25 rows=1 width=12) (actual 
time=498.020..498.020 rows=0 loops=1)
   Filter: (col2 = 874433)
   Rows Removed by Filter: 998
 Planning time: 0.030 ms
 Execution time: 498.034 ms
(5 rows)
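One way to confirm what the foreign scan actually does on the remote side is to add VERBOSE, which makes postgres_fdw print the query it ships to the remote server; a sketch:

```sql
-- The Remote SQL line in the VERBOSE output shows the shipped query;
-- the col1 = 874433 condition should appear there, confirming the filter
-- is pushed down and can use the remote index rather than a local seq scan.
EXPLAIN (ANALYZE, VERBOSE)
SELECT * FROM jt2.tbl3 WHERE col1 = 874433;
```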



Thanks
Armand





Re: [GENERAL] data transformation and replication

2017-05-09 Thread Armand Pirvu (home)
9.5 both

But I had missed the ENABLE ALWAYS trigger


Once that was set, it ran


Thank you for your help


Armand

On May 9, 2017, at 8:26 AM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:

> On 05/08/2017 08:31 PM, Armand Pirvu (home) wrote:
>> My bad
>> db1 I have two tables t1 and t2 (or more)
>> db2 has one table t3 for example which can get data aggregated from one or 
>> more multiple tables from the above set . I can updates/inserts/deletes in 
>> db1.t1 and/or db1.t2 which combined may mean related data in db.t3 would 
>> need to be inserted/deleted/updated. Think of it like ETL processing if you 
>> will. This is what I mean by data massaging/transformation
>> db1 and db2 are two different servers.
> 
> What are the Postgres versions?
> 
>> So I was initially thinking that I can have on db2 the same set of tables 
>> from db1, replication being done using pglogical. Once data gets to db2 t1 
>> and t2, I can have on db2 a set of functions/triggers which can transform 
>> the data and as such do the relevant inserts/updates/delete from db2.t3
>> Apparently though that is not possible unless I am missing something
> 
> Probably this:
> 
> https://www.2ndquadrant.com/en/resources/pglogical/pglogical-docs/
> 
> "4.11 Triggers
> 
> Apply process and the initial COPY process both run with 
> session_replication_role set to replica which means that ENABLE REPLICA and 
> ENABLE ALWAYS triggers will be fired."
> 
> https://www.postgresql.org/docs/9.6/static/sql-altertable.html
> 
> "DISABLE/ENABLE [ REPLICA | ALWAYS ] TRIGGER
> 
>  ...  The trigger firing mechanism is also affected by the configuration 
> variable session_replication_role. Simply enabled triggers will fire when the 
> replication role is "origin" (the default) or "local". Triggers configured as 
> ENABLE REPLICA will only fire if the session is in "replica" mode, and 
> triggers configured as ENABLE ALWAYS will fire regardless of the current 
> replication mode.
> ...
> "
> 
> So did you ENABLE REPLICA or ALWAYS on the db2 table triggers?
> 
> 
>> I reached that conclusion by using a trigger and a function like the 
>> auditing one to track insers/updates/deletes in an audit table
>> Having these said I was thinking
>> (a) -
>> On db1 I will have the t3 table as is on dsb2. All data transformation goes 
>> into db1.t3 which on it's turn will replicate to db2.t3 using pglogical
>> (b) -
>> On db2 I will have the t1 t2 as they are on db1. Those are replicated using 
>> Slony/Bucardo. Once data lands on db2.t1 and db2.t2 another set of 
>> triggers/functions responsible for data transformation will do the 
>> inserts/deletes/updates in db2.t3
>> I wold much prefer pglogical approach as stated in the what I see as a 
>> failed case
>> If the only options is Slony/Bucardo , so be it. but that begs the following 
>> questions
>> - which one has the smallest overhead ?
>> - which one is the easiest to manage ?
>> - which one is the most reliable ?
>> - I recall data transformation can be used in Bucardo but did not see any 
>> examples on that. Any pointers ?
>> Thanks
>> Armand
>> On May 8, 2017, at 4:49 PM, Adrian Klaver <adrian.kla...@aklaver.com 
>> <mailto:adrian.kla...@aklaver.com>> wrote:
>>> On 05/08/2017 12:46 PM, Armand Pirvu (home) wrote:
>>>> Hi
>>>> 
>>>> Here it is a scenario which I am faced with  and I am hoping to find a 
>>>> pointer/tip/help
>>>> 
>>>> db1 is the OLTP system
>>>> db2 is the Reporting system
>>>> 
>>>> The data from db1 needs to get to db2, but the database on those two have 
>>>> tables with different layout/structure and hence data will need to suffer 
>>>> some transformation in between in real time
>>>> 
>>>> I was looking at something like
>>>> 
>>>> db1 -> db2 replicates the same set of tables and with the same structures 
>>>> using pglogical for example
>>>> db2.tbl1 -> db2.tbl2 data gets massages/transformed based on what 
>>>> replicates from db1.tbl1 using triggers and functions
>>>> 
>>>> 
>>>> Other than that I reckon db1 -> db2 would be trigger based using something 
>>>> like slonik maybe (?) and data massage/transformation gets moved from db2 
>>>> to db1 machine and then db1.tbl2 -> db2.tbl2 using pglogical
>>> 
>>> I was following you until the last part, "... moved from db2 to db1 machine 
>>> and then db1.tbl2 -> db2.tbl2 ..."
>>> 
>>> Is this correct?
>>> 
>>> If so why db1 --> db2 --> db1 --> db2?
>>> 
>>> A complete answer is going to depend on at least an outline of what you 
>>> mean by massage/transform?
>>> 
>>>> 
>>>> 
>>>> Is this doable ? If so any pointers as to where to look about it ?
>>>> 
>>>> 
>>>> Many thanks
>>>> Armand
>>>> 
>>>> 
>>>> 
>>>> 
>>> 
>>> 
>>> --
>>> Adrian Klaver
>>> adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com



Re: [GENERAL] data transformation and replication

2017-05-08 Thread Armand Pirvu (home)


My bad


In db1 I have two tables t1 and t2 (or more)
db2 has one table t3, for example, which gets data aggregated from one or more 
of the tables above. Inserts/updates/deletes in db1.t1 and/or db1.t2 may, 
combined, mean related data in db2.t3 needs to be inserted/deleted/updated. 
Think of it like ETL processing if you will. This is what I mean by data 
massaging/transformation


db1 and db2 are two different servers.


So I was initially thinking that I could have on db2 the same set of tables as 
db1, replication being done using pglogical. Once data gets to db2's t1 and t2, 
I could have on db2 a set of functions/triggers which transform the data and do 
the relevant inserts/updates/deletes in db2.t3

Apparently, though, that is not possible, unless I am missing something

I reached that conclusion by using a trigger and a function like the auditing 
one to track inserts/updates/deletes in an audit table

Having these said I was thinking

(a) - 
On db1 I will have the t3 table as it is on db2. All data transformation goes 
into db1.t3, which in its turn will replicate to db2.t3 using pglogical

(b) -
On db2 I will have the t1 t2 as they are on db1. Those are replicated using 
Slony/Bucardo. Once data lands on db2.t1 and db2.t2 another set of 
triggers/functions responsible for data transformation will do the 
inserts/deletes/updates in db2.t3

I would much prefer the pglogical approach, as stated in what I see as a failed 
case 


If the only option is Slony/Bucardo, so be it, but that begs the following 
questions
- which one has the smallest overhead ?
- which one is the easiest to manage ?
- which one is the most reliable ?
- I recall data transformation can be used in Bucardo but did not see any 
examples on that. Any pointers ?

Thanks
Armand



On May 8, 2017, at 4:49 PM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:

> On 05/08/2017 12:46 PM, Armand Pirvu (home) wrote:
>> Hi
>> 
>> Here it is a scenario which I am faced with  and I am hoping to find a 
>> pointer/tip/help
>> 
>> db1 is the OLTP system
>> db2 is the Reporting system
>> 
>> The data from db1 needs to get to db2, but the database on those two have 
>> tables with different layout/structure and hence data will need to suffer 
>> some transformation in between in real time
>> 
>> I was looking at something like
>> 
>> db1 -> db2 replicates the same set of tables and with the same structures 
>> using pglogical for example
>> db2.tbl1 -> db2.tbl2 data gets massages/transformed based on what replicates 
>> from db1.tbl1 using triggers and functions
>> 
>> 
>> Other than that I reckon db1 -> db2 would be trigger based using something 
>> like slonik maybe (?) and data massage/transformation gets moved from db2 to 
>> db1 machine and then db1.tbl2 -> db2.tbl2 using pglogical
> 
> I was following you until the last part, "... moved from db2 to db1 machine 
> and then db1.tbl2 -> db2.tbl2 ..."
> 
> Is this correct?
> 
> If so why db1 --> db2 --> db1 --> db2?
> 
> A complete answer is going to depend on at least an outline of what you mean 
> by massage/transform?
> 
>> 
>> 
>> Is this doable ? If so any pointers as to where to look about it ?
>> 
>> 
>> Many thanks
>> Armand
>> 
>> 
>> 
>> 
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com



[GENERAL] data transformation and replication

2017-05-08 Thread Armand Pirvu (home)
Hi 

Here is a scenario I am faced with, and I am hoping to find a 
pointer/tip/help

db1 is the OLTP system
db2 is the Reporting system

The data from db1 needs to get to db2, but the databases on those two have 
tables with different layouts/structures, and hence the data will need to 
undergo some transformation in between, in real time

I was looking at something like 

db1 -> db2 replicates the same set of tables and with the same structures using 
pglogical for example 
db2.tbl1 -> db2.tbl2 data gets massaged/transformed based on what replicates 
from db1.tbl1 using triggers and functions


Other than that I reckon db1 -> db2 would be trigger based using something like 
slonik maybe (?) and data massage/transformation gets moved from db2 to db1 
machine and then db1.tbl2 -> db2.tbl2 using pglogical 


Is this doable ? If so any pointers as to where to look about it ?


Many thanks
Armand






Re: [GENERAL] getting column names

2017-04-04 Thread Armand Pirvu (home)
Thanks David


Worked like a charm and results are correct



Armand

On Apr 4, 2017, at 5:00 PM, David G. Johnston <david.g.johns...@gmail.com> 
wrote:

> On Tue, Apr 4, 2017 at 2:51 PM, Armand Pirvu (home) <armand.pi...@gmail.com> 
> wrote:
> 
> But if I join back to foo1 like below I get
> 
> select a.audit_id, a.table_name, b[1],b[2]
> from
> foo1 a,
> (select
> array(
> select
> column_name::text from
> information_schema.columns
> where
> table_name=a.table_name
> and
> (
> column_name like '%add_by%'
> or
> column_name like '%add_date%'
> )) b) as foo
> ;
> 
> ERROR:  invalid reference to FROM-clause entry for table "a"
> LINE 10: table_name=a.table_name
> ^
> HINT:  There is an entry for table "a", but it cannot be referenced from this 
> part of the query.
> 
> ​​https://www.postgresql.org/docs/9.6/static/sql-select.html​
> ​
> ​Adding LATERAL before the second "from_item" should get rid of the error - 
> whether it results in a working and/or correct query I don't know.​
> 
> David J.
> 
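For reference, a sketch of the query with LATERAL added as David suggested (untested against the original schema):

```sql
-- LATERAL lets the subquery reference columns of an earlier FROM item
-- (here foo1's alias a), which the plain subselect could not do:
SELECT a.audit_id, a.table_name, b[1], b[2]
FROM foo1 a,
     LATERAL (
       SELECT array(
                SELECT column_name::text
                FROM information_schema.columns
                WHERE table_name = a.table_name
                  AND (column_name LIKE '%add_by%'
                       OR column_name LIKE '%add_date%')
              ) AS b
     ) AS foo;
```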



[GENERAL] getting column names

2017-04-04 Thread Armand Pirvu (home)
Hi 

Can somebody please tell me if the below is possible ? 


I may not be seeing the forest for the trees


Thanks
Armand




levregdb=# select * from foo1;
 audit_id | table_name 
----------+------------
 6012 | foo2
 6013 | foo2
 6014 | foo2

select * from foo2;

levregdb=# select * from foo2;
 foo2_add_by | foo2_add_date 
-------------+---------------
(0 rows)

levregdb=# \d foo2;
   Table "csiprev.foo2"
    Column     |     Type      | Modifiers 
---------------+---------------+-----------
 foo2_add_by   | character(10) | 
 foo2_add_date | character(10) | 



My intention is to have an output like

 6012 | foo2|foo2_add_by|foo2_add_date
 6013 | foo2|foo2_add_by|foo2_add_date
 6014 | foo2|foo2_add_by|foo2_add_date


select a.audit_id, a.table_name, b[1],b[2]
from 
foo1 a,
(select
array(
select 
column_name::text from 
information_schema.columns 
where 
table_name='foo2'
and 
(
column_name like '%add_by%'
or
column_name like '%add_date%'
)) b) as foo
;


 audit_id | table_name |      b      |       b       
----------+------------+-------------+---------------
 6012 | foo2   | foo2_add_by | foo2_add_date
 6013 | foo2   | foo2_add_by | foo2_add_date
 6014 | foo2   | foo2_add_by | foo2_add_date


But if I join back to foo1 like below I get 

select a.audit_id, a.table_name, b[1],b[2]
from 
foo1 a,
(select
array(
select 
column_name::text from 
information_schema.columns 
where 
table_name=a.table_name
and 
(
column_name like '%add_by%'
or
column_name like '%add_date%'
)) b) as foo
;

ERROR:  invalid reference to FROM-clause entry for table "a"
LINE 10: table_name=a.table_name
^
HINT:  There is an entry for table "a", but it cannot be referenced from this 
part of the query.


Any idea what am I doing wrong ?
Can I do it ?


Thanks
Armand





Re: [GENERAL] store key name pattern search

2017-04-04 Thread home
Thank you Jeff 

Nice . I also looked at  akeys/skeys 


My goal is to get the key by pattern matching and then pass it back, as in 
col1->key

Armand




On Apr 4, 2017, at 12:42 PM, Jeff Janes <jeff.ja...@gmail.com> wrote:

> On Tue, Apr 4, 2017 at 8:41 AM, Armand Pirvu (home) <armand.pi...@gmail.com> 
> wrote:
> Hi
> 
> I have the following case
> 
> 
> select * from foo;
>                                    col1
> --------------------------------------------------------------------------
>  "show_id"=>"1", "group_id"=>"32", "group_name"=>"slb", 
> "group_add_by"=>"557651"
>  "show_id"=>"2", "group_id"=>"33", "group_name"=>"slc", 
> "item_add_by"=>"557652"
> (2 rows)
> 
> Is there anyway I can do a pattern search by hstore key name something like
> 
> select * from foo where skeys(col1) like '%add_by%';
> 
> I looked on the doc but did not see anything , or did I miss it ?
> 
> select * from foo where array_to_string(akeys(x),';') like '%add\_by%';
> 
> Note that I back-slashed the underscore, otherwise it acts as a wildcard and 
> may match more than you bargained for.
> 
> Cheers,
> 
> Jeff
> 
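An alternative sketch, assuming the hstore column is col1 as above: use skeys as a set-returning function inside a correlated EXISTS instead of flattening the keys into a string:

```sql
-- skeys() returns the hstore keys as a set; EXISTS keeps rows that have
-- at least one key matching the pattern (underscore escaped so it is not
-- treated as a single-character LIKE wildcard):
SELECT *
FROM foo
WHERE EXISTS (
  SELECT 1
  FROM skeys(col1) AS k
  WHERE k LIKE '%add\_by%'
);
```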



Re: [GENERAL] store key name pattern search

2017-04-04 Thread Armand Pirvu (home)
Thank you 


— Armand


On Apr 4, 2017, at 10:50 AM, Oleg Bartunov <obartu...@gmail.com> wrote:

> 
> On Tue, Apr 4, 2017 at 11:41 AM, Armand Pirvu (home) <armand.pi...@gmail.com> 
> wrote:
> Hi
> 
> I have the following case
> 
> 
> select * from foo;
>                                    col1
> --------------------------------------------------------------------------
>  "show_id"=>"1", "group_id"=>"32", "group_name"=>"slb", 
> "group_add_by"=>"557651"
>  "show_id"=>"2", "group_id"=>"33", "group_name"=>"slc", 
> "item_add_by"=>"557652"
> (2 rows)
> 
> Is there anyway I can do a pattern search by hstore key name something like
> 
> select * from foo where skeys(col1) like '%add_by%';
> 
> I looked on the doc but did not see anything , or did I miss it ?
> 
> No. You may convert to json and use jsquery 
> https://github.com/postgrespro/jsquery 
> 
> Thanks
> 
> Armand
> 
> 
> 
> 



[GENERAL] store key name pattern search

2017-04-04 Thread Armand Pirvu (home)
Hi

I have the following case


select * from foo;
                                      col1                                       
---------------------------------------------------------------------------------
 "show_id"=>"1", "group_id"=>"32", "group_name"=>"slb", "group_add_by"=>"557651"
 "show_id"=>"2", "group_id"=>"33", "group_name"=>"slc", "item_add_by"=>"557652"
(2 rows)

Is there any way I can do a pattern search by hstore key name, something like 

select * from foo where skeys(col1) like '%add_by%';

I looked on the doc but did not see anything , or did I miss it ?

Thanks

Armand





Re: [GENERAL] audit function and old.column

2017-04-04 Thread Armand Pirvu (home)
Sorry for the delay

This is the one I used
https://wiki.postgresql.org/wiki/Audit_trigger



And the cfg_global_audit



      Column      |           Type           |       Modifiers        
------------------+--------------------------+------------------------
 audit_id | bigint   | not null
 table_name   | text | not null
 show_id  | bigint   | 
 user_name| character varying(100)   | 
 action_timestamp | timestamp with time zone | not null default now()
 action   | text | not null
 query| text | 
 pflag| boolean  | 
 pjob | bigint   | 
 old_values   | hstore   | 
 new_values   | hstore   | 



I have OLD and NEW that is true

I went this way

  select hstore(new.*)->(select * from (select skeys(hstore(new.*)) ) as 
cfg_skeys where  skeys like ('%_add_by%')) into skeys_add_by;
  insert into cfg_global_audit
  values (audit_tx_id, tg_table_name::text, new.show_id, skeys_add_by, 
current_timestamp, 'U', current_query(),'N', 0,hstore(old.*), hstore(new.*));


See, the problem is that I might have group_add_by as a key, but I can also 
have item_add_by instead of group_add_by

If I had a function that could search an hstore by key-name pattern matching, 
i.e. get me the value for a key like '%add_by%', it would be great


In that case I could work around it

I will post another question related to hstore search since, although it has 
common ground, the goal is different


Thanks
Armand



On Mar 23, 2017, at 6:38 PM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:

> On 03/23/2017 02:00 PM, Armand Pirvu (home) wrote:
> 
> Try number two.
> 
>> Hello
>> 
>> I am using in a project the audit trigger from the wiki
> 
> Witch one?
> (It seems the other spelling is banned)
> 
>> One particular aspect is that not all tables that I am after have the same 
>> columns .
>> And it is one in particular I am stumbling onto , and it ends i n _add_by. 
>> It can be group_add_by, car_add_by and so on.
>> 
>> The old value of this column I need to record in the case of an update for 
>> example. The bad thing is also that it's position is not always the same. 
>> Meaning it can be 4th , 2nd and so on
>> 
>> Just trying to avoid to have a function for each table and a bunch of hard 
>> coding which would be less than ideal
> 
> So in the below what is the table schema for cfg_global_audit?
> 
>> 
>> 
>> 
>> drop function func_global_audit();
>> create or replace function func_global_audit() returns trigger as $$
>> declare
>> audit_tx_id bigint;
>> begin
>>select nextval('seq_aud_hist_prev'::regclass) into audit_tx_id;
>>if tg_op = 'UPDATE' then
>>insert into cfg_global_audit
>>values (audit_tx_id, tg_table_name::text, current_user::text, 
>> current_timestamp, 'U', current_query(),'N', hstore(old.*), hstore(new.*), 
>> akeys(hstore(new.*) - hstore(old.*)));
>>return new;
>>end if;
>> end;
>> $$
>> language plpgsql security definer;
>> 
>> drop trigger trig_cfg_group on cfg_group;
>> create trigger trig_cfg_group after insert or update or delete on cfg_group 
>> for each row execute procedure func_global_audit();
>> 
>> 
>> 
>> levregdb=# select old_values from cfg_global_audit;
>>                                old_values
>> ------------------------------------------------------------------------
>> "show_id"=>"1", "group_id"=>"33", "group_name"=>"Region 1", 
>> "group_note"=>"test1", "group_type"=>"Files", "group_add_by"=>"557651", 
>> "group_add_date"=>"2016-09-28 09:52:47.672398-05", "additional_info
>> "=>NULL, "group_description"=>""
>> (1 row)
>> 
>> So the idea is that I need 557651 which is the group_add_by old value 
>> recorded in user_name of the audit table
> 
> Don't you have the OLD values in  wherever you put hstore(OLD.*)?
> 
> You have 557651, so I am not sure what you mean by you need it?
> 
>> 
>> 1- is there any way I can look in the OLD record for such column something 
>

[GENERAL] audit function and old.column

2017-03-23 Thread Armand Pirvu (home)
Hello

I am using in a project the audit trigger from the wiki
One particular aspect is that not all the tables I am after have the same 
columns.
And there is one column in particular I am stumbling on, and it ends in 
_add_by. It can be group_add_by, car_add_by and so on. 

The old value of this column I need to record in the case of an update, for 
example. The bad thing is also that its position is not always the same, 
meaning it can be 4th, 2nd and so on

I am just trying to avoid having a function for each table and a bunch of hard 
coding, which would be less than ideal 



drop function func_global_audit();
create or replace function func_global_audit() returns trigger as $$
declare
audit_tx_id bigint;
begin
select nextval('seq_aud_hist_prev'::regclass) into audit_tx_id;
if tg_op = 'UPDATE' then
insert into cfg_global_audit
values (audit_tx_id, tg_table_name::text, current_user::text, 
current_timestamp, 'U', current_query(),'N', hstore(old.*), hstore(new.*), 
akeys(hstore(new.*) - hstore(old.*)));
return new;
end if;
end;
$$
language plpgsql security definer;

drop trigger trig_cfg_group on cfg_group;
create trigger trig_cfg_group after insert or update or delete on cfg_group for 
each row execute procedure func_global_audit();



levregdb=# select old_values from cfg_global_audit;

                                 old_values                                  
-----------------------------------------------------------------------------
 "show_id"=>"1", "group_id"=>"33", "group_name"=>"Region 1", 
"group_note"=>"test1", "group_type"=>"Files", "group_add_by"=>"557651", 
"group_add_date"=>"2016-09-28 09:52:47.672398-05", "additional_info
"=>NULL, "group_description"=>""
(1 row)

So the idea is that I need 557651 which is the group_add_by old value recorded 
in user_name of the audit table

1 - is there any way I can look in the OLD record for such a column, something 
like OLD.%add_by% ?
2 - I was also thinking of getting the column name, which is easy, prepending 
it with the string "old.", and then doing a select old.group_add_by into myvar. 
Apparently that does not work

I mean yes I can do
select 'old.'||column_name from information_schema.columns where table_name = 
'cfg_group' and column_name like '%_add_by%' ;

But 
create or replace function func_global_audit() returns trigger as $$
declare
audit_tx_id bigint;
cfg_schema text;
cfg_by_col text;
cfg_by_col1 text;
begin
  select current_schema into cfg_schema;
  select nextval('seq_aud_hist_prev'::regclass) into audit_tx_id;
 if tg_op = 'UPDATE' then
 execute 'select COLUMN_NAME::text FROM INFORMATION_SCHEMA.COLUMNS WHERE 
table_schema='||quote_literal(cfg_schema)||' and 
table_name='||quote_literal(tg_table_name)||' and column_name like 
'||quote_literal('%_add_by%')  into cfg_by_col;
 cfg_by_col1 := 'old.'||cfg_by_col;
 raise notice '%', cfg_by_col1;
 insert into cfg_global_audit values  (audit_tx_id, 
tg_table_name::text, cfg_by_col1::text, current_timestamp, 'U', 
current_query(),'N', hstore(old.*), hstore(new.*), akeys(hstore(new.*) - 
hstore(old.*)));
 return new;
 end if;
 end;
 $$
language plpgsql security definer;

drop trigger trig_cfg_group on cfg_group;
create trigger trig_cfg_group after insert or update or delete on cfg_group for 
each row execute procedure func_global_audit();


And I get old.group_add_by instead of 557651

3 - I tried


drop function func_global_audit();
create or replace function func_global_audit() returns trigger as $$
declare
audit_tx_id bigint;
cfg_schema text;
cfg_by_col text;
cfg_by_col1 text;
begin
 select current_schema into cfg_schema;
 select nextval('seq_aud_hist_prev'::regclass) into audit_tx_id;
if tg_op = 'UPDATE' then
execute 'select COLUMN_NAME::text FROM INFORMATION_SCHEMA.COLUMNS WHERE 
table_schema='||quote_literal(cfg_schema)||' and 
table_name='||quote_literal(tg_table_name)||' and column_name like 
'||quote_literal('%_add_by%')  into cfg_by_col;
cfg_by_col1 := 'old.'||cfg_by_col;
raise notice '%', cfg_by_col1;
execute 'insert into cfg_global_audit1 select $1'   using  cfg_by_col1;
return new;
end if;
end;
$$
language plpgsql security definer;



And I get the same thing.


Can someone point out what I am doing wrong?
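[Editorial note: since the audit code above already converts OLD to hstore, the old value can be read from that hstore by key, instead of trying to EXECUTE a string such as 'old.group_add_by' (which is just text, not a column reference). A sketch of the idea, not tested against the schema above:

```sql
-- Sketch only: assumes the hstore extension is installed, and reuses
-- the '%_add_by%' lookup from the question above.
create or replace function func_global_audit() returns trigger as $$
declare
    cfg_by_col text;
    cfg_by_val text;
begin
    if tg_op = 'UPDATE' then
        select column_name::text into cfg_by_col
        from information_schema.columns
        where table_schema = tg_table_schema
          and table_name   = tg_table_name
          and column_name like '%_add_by%';

        -- hstore(OLD) maps column names to their old values as text,
        -- so the value (e.g. 557651) can be read directly by key.
        cfg_by_val := hstore(old) -> cfg_by_col;
        raise notice '%', cfg_by_val;
        return new;
    end if;
    return new;
end;
$$ language plpgsql security definer;
```

The notice should then show the value rather than the string 'old.group_add_by'.]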


Thks




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] variable value in array_to_string

2016-11-21 Thread Armand Pirvu (home)
1 - I added some new notes.
2 - That code I know, and it works fine. I have used it in slightly different 
contexts in other projects.

Here is a case.

A table with two columns; the data types are irrelevant.


I have a need in which I may get a.col1,a.col2 and other variations.

The point is, yes, I could build any variation by re-running a slightly 
different version of the query against the catalogs.

But why hit them 3 or 4 times when I can manipulate an array as below? This 
code just came to mind (and works):


CREATE OR REPLACE FUNCTION test1 () RETURNS void AS $$
DECLARE
var1 text[];
var2 text;
var3 text;
var4 text;
begin
execute 
'select ARRAY( select column_name::text from information_schema.columns where 
table_name='
||
quote_literal('foo')
||')'  into var1;
raise notice '%', var1;
select array_to_string(array(select  'a.'||unnest(var1)),',') into var4;
raise notice '%', var4;
END;
$$ LANGUAGE plpgsql ;


levregdb1=# select test1();
NOTICE:  {col1,col2}
NOTICE:  a.col1,a.col2

Note the a. being prepended to each of the array’s elements


3 - I appreciate the help. Keep in mind I am not a developer and this is a new 
venture for me. By the same token, I am not ignorant either. What irritates me 
is the TRUE lack of explanations in the docs in this case. I know another DBMS 
that is just as bad in the documentation area (worse, in fact).


I turned to the community. And yes, I know how to make a case, since I worked 
in support.

Frankly, I am turning off, since I sure as heck don't like being patronized. I 
did not accept this (and never will) from my own mom, let alone somebody else.

My apologies for asking for help. I will try not to do it again.


Thanks and laters



On Nov 21, 2016, at 7:12 PM, David G. Johnston <david.g.johns...@gmail.com> 
wrote:

> Please don't top-post - it makes following the thread a lot harder.
> 
> On Mon, Nov 21, 2016 at 4:15 PM, Armand Pirvu (home) <armand.pi...@gmail.com> 
> wrote:
> Played with unnest  but not much luck
> 
> If you want help you will need to show your work - ideally with examples that 
> can execute with meaningful data on an empty database.
> 
> The follow gives the expected results:
> 
> DO $$
> DECLARE
> foo text[];
> foo1 text;
> delim text := ',';
> begin
> foo := ARRAY['one','two']::text[];
> foo1 := array_to_string(foo, delim);
> RAISE NOTICE '%', foo1;
> END;
> $$;
> 
> NOTICE: one,two
> 
> David J.
> 



Re: [GENERAL] variable value in array_to_string

2016-11-21 Thread Armand Pirvu (home)
Played with unnest  but not much luck


NOTICE:  {item_id,show_id}
NOTICE:  item_id


It takes only the first array element into consideration.


Ug

On Nov 21, 2016, at 5:02 PM, Armand Pirvu (home) <armand.pi...@gmail.com> wrote:

> My bad on the back tick. No idea why it turned that way
> 
> OK got that David. 
> 
> The idea is that I have the pk columns in an array which I would like to 
> manipulate from the array itself rather than running same query variations 
> multiple times
> 
> For example I get in foo
> 
> {item_id,show_id}
> 
> And from here on I was looking to get say
> 
> a.item_id , a.show_id 
> 
> and all sort of variations
> 
> Thanks for help
> 
> 
> 
> 
> On Nov 21, 2016, at 4:26 PM, David G. Johnston <david.g.johns...@gmail.com> 
> wrote:
> 
>> On Mon, Nov 21, 2016 at 3:09 PM, Armand Pirvu (home) 
>> <armand.pi...@gmail.com> wrote:
>> Hi
>> 
>> Is there anyway I can pass a variable in the array_to_string function ?
>> 
>> 
>> ​Yes, just like you can pass variables to any other function.​..
>> 
>> 
>> CREATE OR REPLACE FUNCTION test1 () RETURNS void AS $$
>> ​​DECLARE
>> foo text;
>> foo1 text;
>> begin
>> execute
>> 'select ARRAY( SELECT d.COLUMN_NAME::text  from 
>> information_schema.constraint_table_usage c, 
>> information_schema.key_column_usage d '
>> ||
>> 'where c.constraint_name=d.constraint_name and c.table_name=d.table_name and 
>> c.table_schema='
>> ||
>> quote_literal(‘myschema')
>> ||
>> ' and c.table_schema=d.table_schema and c.constraint_name like '
>> ||
>> quote_literal('%_pkey%')
>> ||
>> ') ' into foo;
>> raise notice '%', foo;
>> foo1 :=ARRAY_TO_STRING(foo, ', ');
>> raise notice '%', foo1;
>> END;
>> $$ LANGUAGE plpgsql ;
>> 
>> 
>> The idea is that I want to slice and dice and not run the query x amount of 
>> times
>> 
>> 
>> I would suggest you learn to use the "format()" function, EXECUTE USING,​ 
>> and dollar quoting.
>> 
>> 
>> So, I copy-pasted your example and placed it into a DO block
>> 
>> The character before "myschema" came out as a back-tick instead of a 
>> single-quote.
>> 
>> array_to_string has the signature (text[], text) but you've defined foo as 
>> being just text.  Changing that results in a query that executes - though 
>> given my system doesn't have conforming data I get no results.
>> 
>> David J.
> 
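[Editorial note: as a concrete illustration of the format()/dollar-quoting suggestion above, the catalog query could be written without string concatenation like this (a sketch, assuming a schema named myschema exists; %L handles the literal quoting that went wrong with the curly quote):

```sql
DO $do$
DECLARE
    foo text[];
BEGIN
    -- %L quote-literals each argument, so no manual quote_literal()
    -- concatenation (and no chance of a smart-quote typo) is needed.
    EXECUTE format(
        $q$SELECT ARRAY(
               SELECT d.column_name::text
               FROM information_schema.constraint_table_usage c
               JOIN information_schema.key_column_usage d
                 ON c.constraint_name = d.constraint_name
                AND c.table_schema    = d.table_schema
                AND c.table_name      = d.table_name
               WHERE c.table_schema = %L
                 AND c.constraint_name LIKE %L)$q$,
        'myschema', '%_pkey%')
    INTO foo;
    RAISE NOTICE '%', array_to_string(foo, ', ');
END;
$do$;
```
]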



Re: [GENERAL] variable value in array_to_string

2016-11-21 Thread Armand Pirvu (home)
My bad on the back tick. No idea why it turned that way

OK got that David. 

The idea is that I have the pk columns in an array which I would like to 
manipulate from the array itself rather than running same query variations 
multiple times

For example I get in foo

{item_id,show_id}

And from here on I was looking to get say

a.item_id , a.show_id 

and all sort of variations

Thanks for help




On Nov 21, 2016, at 4:26 PM, David G. Johnston <david.g.johns...@gmail.com> 
wrote:

> On Mon, Nov 21, 2016 at 3:09 PM, Armand Pirvu (home) <armand.pi...@gmail.com> 
> wrote:
> Hi
> 
> Is there anyway I can pass a variable in the array_to_string function ?
> 
> 
> ​Yes, just like you can pass variables to any other function.​..
> 
> 
> CREATE OR REPLACE FUNCTION test1 () RETURNS void AS $$
> ​​DECLARE
> foo text;
> foo1 text;
> begin
> execute
> 'select ARRAY( SELECT d.COLUMN_NAME::text  from 
> information_schema.constraint_table_usage c, 
> information_schema.key_column_usage d '
> ||
> 'where c.constraint_name=d.constraint_name and c.table_name=d.table_name and 
> c.table_schema='
> ||
> quote_literal(‘myschema')
> ||
> ' and c.table_schema=d.table_schema and c.constraint_name like '
> ||
> quote_literal('%_pkey%')
> ||
> ') ' into foo;
> raise notice '%', foo;
> foo1 :=ARRAY_TO_STRING(foo, ', ');
> raise notice '%', foo1;
> END;
> $$ LANGUAGE plpgsql ;
> 
> 
> The idea is that I want to slice and dice and not run the query x amount of 
> times
> 
> 
> I would suggest you learn to use the "format()" function, EXECUTE USING,​ and 
> dollar quoting.
> 
> 
> So, I copy-pasted your example and placed it into a DO block
> 
> The character before "myschema" came out as a back-tick instead of a 
> single-quote.
> 
> array_to_string has the signature (text[], text) but you've defined foo as 
> being just text.  Changing that results in a query that executes - though 
> given my system doesn't have conforming data I get no results.
> 
> David J.



[GENERAL] variable value in array_to_string

2016-11-21 Thread Armand Pirvu (home)
Hi

Is there any way I can pass a variable to the array_to_string function?




CREATE OR REPLACE FUNCTION test1 () RETURNS void AS $$
DECLARE
foo text;
foo1 text;
begin
execute 
'select ARRAY( SELECT d.COLUMN_NAME::text  from 
information_schema.constraint_table_usage c, 
information_schema.key_column_usage d '
||
'where c.constraint_name=d.constraint_name and c.table_name=d.table_name and 
c.table_schema='
||
quote_literal(‘myschema')
||
' and c.table_schema=d.table_schema and c.constraint_name like '
||
quote_literal('%_pkey%')
||
') ' into foo;
raise notice '%', foo;
foo1 :=ARRAY_TO_STRING(foo, ', ');
raise notice '%', foo1;
END;
$$ LANGUAGE plpgsql ;


The idea is that I want to slice and dice and not run the query x amount of 
times



Thank you

Armand

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] functions - triggers cross schemas

2016-10-13 Thread Armand Pirvu (home)
OK, that was my oversight in understanding that the search path does not 
change upon entering a function. I assumed that it does, simply picking it up 
from schema2.test1.

Thanks a bunch

Armand

On Oct 13, 2016, at 5:31 PM, David G. Johnston <david.g.johns...@gmail.com> 
wrote:

> On Thu, Oct 13, 2016 at 3:18 PM, Armand Pirvu (home) <armand.pi...@gmail.com> 
> wrote:
> 2 schemas , schema1 and schema2, with same tables , a base table and a 
> tracking one.
> 
> 
> Search path is set to schema1 by default.
> insert into schema2.test1 (col2 , col3) values ('foo1','foo2')
> I get an entry in schema1.test1_hist and not in schema2.test1_hist
> 
> ​See:
> 
> https://www.postgresql.org/docs/current/static/plpgsql-trigger.html#PLPGSQL-DML-TRIGGER
> 
> ​TG_TABLE_SCHEMA
> 
> Any object name not schema qualified will use search_path for resolution.  
> Search path doesn't change upon entering a function unless the function 
> defines its own - and yours does not.
> 
> David J.
> 



[GENERAL] functions - triggers cross schemas

2016-10-13 Thread Armand Pirvu (home)
2 schemas, schema1 and schema2, with the same tables: a base table and a 
tracking one.


The search path is set to schema1 by default.
insert into schema2.test1 (col2 , col3) values ('foo1','foo2')
I get an entry in schema1.test1_hist and not in schema2.test1_hist.
I understand that the trigger inherits the schema of its table. But does that 
mean that
a - the trigger will be created in the same schema
or
b - it will use the current schema and pass it to the function? It sure seems 
so; even if I drop the function and trigger from schema1 I still have the same 
behavior. Even if I do
"CREATE TRIGGER test1_audit_trig_prev
after insert or update or delete on schema2.test1
  FOR EACH ROW EXECUTE PROCEDURE tbl_audit_func_prev();
"
I still get the same behavior.

The only way I could make it work was to add, in each function, 
SET search_path = schema1 or SET search_path = schema2 respectively.

I was expecting, since I am doing an insert into schema2.test1 from schema1, 
to see the function from schema2 executed, which would then put data in 
schema2.test1_hist.

I did play with security definer/invoker but no luck


Any other suggestions, other than hardcoding set search_path within the 
function code, and aside from plproxy?


Thanks
Armand

-- Code
In both schemas I have


create table test1 (col1 bigserial, col2 char(10), col3 char(10), primary key 
(col1, col2));

create table test1_hist (
stmt_seq bigint not null , 
stmt_type char(6) not null,
stmt_subtype char(1) not null,
stmt_date timestamp not null,
like test1);

In schema1
create sequence seq_audit_func;

CREATE OR REPLACE FUNCTION tbl_audit_func () RETURNS TRIGGER AS $$
DECLARE
  temp_new RECORD;
  temp_old RECORD;
  tgopdet1 char(1);
  tgopdet2 char(1);
  vdate timestamp;
begin
  IF (TG_OP = 'INSERT') THEN 
  temp_new := NEW;
  tgopdet2='I';
  vdate = now();
  EXECUTE 'INSERT INTO  ' || TG_TABLE_NAME::regclass || '_hist  SELECT 
$1,$2,$3,$4, $5.* ' using  nextval('seq_audit_func'), TG_OP, tgopdet2, vdate, 
temp_new;
  END IF;
  IF (TG_OP = 'UPDATE') THEN
  temp_old := OLD;
  temp_new := NEW;
  tgopdet1='D';
  tgopdet2='I';
  vdate = now();
  EXECUTE 'INSERT INTO  ' || TG_TABLE_NAME::regclass || '_hist  SELECT 
$1,$2,$3,$4, $5.* ' using  nextval('seq_audit_func'), TG_OP, tgopdet1, vdate, 
temp_old;
  EXECUTE 'INSERT INTO  ' || TG_TABLE_NAME::regclass || '_hist  SELECT 
$1,$2,$3,$4, $5.* ' using  nextval('seq_audit_func'), TG_OP, tgopdet2, vdate, 
temp_new;
  END IF;
  IF (TG_OP = 'DELETE') THEN
  temp_old := OLD;
  tgopdet1='D';
  vdate = now();
  EXECUTE 'INSERT INTO  ' || TG_TABLE_NAME::regclass || '_hist  SELECT 
$1,$2,$3,$4, $5.* ' using  nextval('seq_audit_func'), TG_OP, tgopdet1, vdate, 
temp_old;
  END IF;
  RETURN NULL;
END;
$$ LANGUAGE plpgsql  ;


drop trigger test1_audit_trig on test1;
CREATE TRIGGER test1_audit_trig
after insert or update or delete on test1
  FOR EACH ROW EXECUTE PROCEDURE tbl_audit_func();


In schema2
create sequence seq_audit_func_prev;

CREATE OR REPLACE FUNCTION tbl_audit_func_prev () RETURNS TRIGGER AS $$
DECLARE
  temp_new RECORD;
  temp_old RECORD;
  tgopdet1 char(1);
  tgopdet2 char(1);
  vdate timestamp;
begin
  IF (TG_OP = 'INSERT') THEN 
  temp_new := NEW;
  tgopdet2='I';
  vdate = now();
  EXECUTE 'INSERT INTO  ' || TG_TABLE_NAME::regclass || '_hist  SELECT 
$1,$2,$3,$4, $5.* ' using  nextval('seq_audit_func_prev'), TG_OP, tgopdet2, 
vdate, temp_new;
  END IF;
  IF (TG_OP = 'UPDATE') THEN
  temp_old := OLD;
  temp_new := NEW;
  tgopdet1='D';
  tgopdet2='I';
  vdate = now();
  EXECUTE 'INSERT INTO  ' || TG_TABLE_NAME::regclass || '_hist  SELECT 
$1,$2,$3,$4, $5.* ' using  nextval('seq_audit_func_prev'), TG_OP, tgopdet1, 
vdate, temp_old;
  EXECUTE 'INSERT INTO  ' || TG_TABLE_NAME::regclass || '_hist  SELECT 
$1,$2,$3,$4, $5.* ' using  nextval('seq_audit_func_prev'), TG_OP, tgopdet2, 
vdate, temp_new;
  END IF;
  IF (TG_OP = 'DELETE') THEN
  temp_old := OLD;
  tgopdet1='D';
  vdate = now();
  EXECUTE 'INSERT INTO  ' || TG_TABLE_NAME::regclass || '_hist  SELECT 
$1,$2,$3,$4, $5.* ' using  nextval('seq_audit_func_prev'), TG_OP, tgopdet1, 
vdate, temp_old;
  END IF;
  RETURN NULL;
END;
$$ LANGUAGE plpgsql ;



drop trigger test1_audit_trig_prev on test1;
CREATE TRIGGER test1_audit_trig_prev
after insert or update or delete on test1
  FOR EACH ROW EXECUTE PROCEDURE tbl_audit_func_prev();







-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] executing os commands from a function

2016-09-30 Thread Armand Pirvu (home)

Thanks for replies.
Good deal of info

While plsh seems to be the easier approach, I am also inclined toward an 
ESQL/C-type app that listens for an event.

@Igor Neyman
It is just that maybe I have missed a sample of a C app in that area, similar 
to ESQL/C.

Cheers
Armand




On Sep 30, 2016, at 10:45 AM, David G. Johnston <david.g.johns...@gmail.com> 
wrote:

> On Thu, Sep 29, 2016 at 2:41 PM, Armand Pirvu (home) <armand.pi...@gmail.com> 
> wrote:
> I used plsh extension but I had to use two functions and a trigger, see code 
> below
> 
> ​I don't see any way to not use a trigger given your requirement.  And as 
> written I don't see that you need the second function​ - just write the 
> trigger function in plsh.  Admittedly its probably better to have two 
> functions from an architecture standpoint.
> 
> Listen/Notify doesn't seem to make sense as you want the server to ensure 
> that the file exists - not some other client that may or may not be connected.
> 
> ​David J.



[GENERAL] executing os commands from a function

2016-09-29 Thread Armand Pirvu (home)

All

I know this may sound like heresy since it involves executing an OS command 
from a function , but here goes

After an insert in a table, I want to touch a file

I.e

After insert into test values (100), I want a file named 100 to appear in a 
directory.

I used plsh extension but I had to use two functions and a trigger, see code 
below

CREATE or REPLACE FUNCTION func2 (var1 text) RETURNS text AS '
#!/bin/bash
 touch /home/postgres/$1;
' LANGUAGE plsh;
commit;

CREATE FUNCTION func1() RETURNS trigger AS '
BEGIN
perform   func2(NEW.col1);
RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER trigf1 BEFORE INSERT on test
FOR EACH ROW EXECUTE PROCEDURE func1();

testdb=# insert into test3 values (777);
INSERT 0 1
testdb=# select * from test3;
 col1 
--
  777

[postgres@edb1 ~]$ ls -ltr
-rw--- 1 postgres postgres 0 Sep 29 16:30 777


It works, but can it be simpler? Any other alternatives? In Ingres, for 
example, I can use dbevent and an ESQL/C app which listens.
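[Editorial note: the closest Postgres analogue to an Ingres dbevent plus a listening app is LISTEN/NOTIFY - the trigger only queues a notification, and an external client process does the filesystem work. A sketch; the channel name 'file_events' is arbitrary and chosen for this example:

```sql
-- Sketch: notify an external listener instead of touching the file
-- from inside the server process.
CREATE OR REPLACE FUNCTION notify_insert() RETURNS trigger AS $$
BEGIN
    PERFORM pg_notify('file_events', NEW.col1::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trig_notify AFTER INSERT ON test
    FOR EACH ROW EXECUTE PROCEDURE notify_insert();

-- A separate client session then runs:
--   LISTEN file_events;
-- and touches /home/postgres/<payload> whenever a notification arrives.
```
]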


Thank you 

Armand



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] jsonb search

2016-06-28 Thread Armand Pirvu (home)
Hi

In my quest for JSONB querying and searching without having to cast to text, I 
found JSQuery.

I admit my JSONB knowledge is limited; I am not a developer but a DBA. As 
such, some examples would be greatly appreciated, since I understand better 
from them.

I compiled and installed the extension

1 - Exact matching without knowing the hierarchy, just the key and element. I 
built a set like:

 col1 |                        col2
------+----------------------------------------------------
    1 | {"Home Email": {"EmailAddress": "1...@yahoo.com"}}
    2 | {"Home Email": {"EmailAddress": "2...@yahoo.com"}}
    3 | {"Home Email": {"EmailAddress": "3...@yahoo.com"}}


JSQuery is super:

SELECT * FROM test1 WHERE col2 @@ '*.EmailAddress = "1...@yahoo.com"';

Now I can do a performance boost using

CREATE INDEX idx1 ON test1 USING GIN (col2 jsonb_value_path_ops);

I see this yield go from

testdb=# explain analyze
SELECT * FROM test1 WHERE col2 @@ '*.EmailAddress = "1...@yahoo.com"';
Seq Scan on test1  (cost=0.00..12423.00 rows=500 width=68) (actual 
time=0.016..160.777 rows=1 loops=1)
  Filter: (col2 @@ '*."EmailAddress" = "1...@yahoo.com"'::jsquery)
  Rows Removed by Filter: 49
Planning time: 0.042 ms
Execution time: 160.799 ms
(5 rows)


to

testdb=# explain analyze
SELECT * FROM test1 WHERE col2 @@ '*.EmailAddress = "1...@yahoo.com"';
Bitmap Heap Scan on test1  (cost=31.88..1559.32 rows=500 width=68) (actual 
time=0.018..0.019 rows=1 loops=1)
  Recheck Cond: (col2 @@ '*."EmailAddress" = "1...@yahoo.com"'::jsquery)
  Heap Blocks: exact=1
  ->  Bitmap Index Scan on idx1  (cost=0.00..31.75 rows=500 width=0) (actual 
time=0.011..0.011 rows=1 loops=1)
Index Cond: (col2 @@ '*."EmailAddress" = "1...@yahoo.com"'::jsquery)
Planning time: 0.039 ms
Execution time: 0.038 ms
(7 rows)

A whopping 4000-times improvement.




But I also noticed a vodka index 


testdb=# CREATE INDEX idx2 ON
testdb-# test1 USING vodka (col2);
ERROR:  access method "vodka" does not exist

What am I missing ?

2 - Is there any way I can accomplish a pattern and/or case-insensitive search 
using JSQuery, similar to:


select * from test2 where upper((col2 -> 'Home Email') ->> 'EmailAddress') 
ilike '%3%YAH%';

select * from test2 where (col2 -> 'Home Email') ->> 'EmailAddress' like 
'%3%yah%';


If so, what indexing strategy can be used to get similar gains as above?
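[Editorial note: JSQuery does exact matching, but for substring and case-insensitive matching on an extracted text value, a trigram index on the extracted expression is one common approach. A sketch, assuming the pg_trgm extension is available; the gain depends on how selective the pattern is:

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- Index the extracted text value; a GIN index with gin_trgm_ops can
-- accelerate LIKE/ILIKE, including patterns with leading wildcards.
CREATE INDEX idx_email_trgm ON test2
    USING gin ( ((col2 -> 'Home Email') ->> 'EmailAddress') gin_trgm_ops );

-- The existing query can then use the index:
SELECT * FROM test2
WHERE (col2 -> 'Home Email') ->> 'EmailAddress' ILIKE '%3%yah%';
```
]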


Many thanks for any help


Armand



Re: [GENERAL] stored proc

2011-09-29 Thread Jacqui Caren-home

On 29/09/2011 02:34, J.V. wrote:

Is it possible to group a bunch of methods and functions into a single file 
(stored procedures & functions) and have a main method that can be called
to orchestrate the methods, pass in params, get back results, log to a file?

I know this can be done with Oracle PL/SQL but a simple google on this does not 
show any examples.


In Oracle this is called a package - FWICR there is no direct equivalent to it 
in PostgreSQL.


Also, where can I find a list of stored-procedure data structures (hash maps, 
arrays, lists of arrays) or anything else that would be useful?


If you use Windows, download pgAdmin3 and click the help button - you get a 
copy of, or a link to, the SQL documentation, including data types.

or go to http://www.postgresql.org/docs/current/static/index.html

http://www.postgresql.org/docs/current/static/datatype.html
may be what you are interested in; however, I would recommend reading the
*majority* of this document - Pg is not Ora!

If you are migrating from Oracle, one point people tend to emphasise is that, 
unlike Oracle, you cannot commit and rollback the current transaction within 
procedural SQL - this is not a bug or a missing feature, so please don't start 
asking for it to be added :-)

If you really, really want to compartmentalise, then you could create schemas 
representing packages and reference data and functions prefixed with the 
schema,

i.e. create function schemaname.functionname ...

but this has one major risk: if your function is called without the schema 
prefix and a matching function name exists in a schema on the search_path, 
that one will be called instead. Nasty!
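[Editorial note: the shadowing risk described above can be sketched with hypothetical schema and function names:

```sql
-- Two schemas each define total(int); which one an unqualified call
-- resolves to depends entirely on search_path order.
CREATE SCHEMA pkg_billing;
CREATE FUNCTION pkg_billing.total(int) RETURNS int
    LANGUAGE sql AS $$ SELECT $1 * 2 $$;

CREATE SCHEMA other;
CREATE FUNCTION other.total(int) RETURNS int
    LANGUAGE sql AS $$ SELECT $1 * 3 $$;

SET search_path = other, pkg_billing;
SELECT total(1);              -- resolves to other.total(), not the "package"
SELECT pkg_billing.total(1);  -- schema-qualified call is unambiguous
```
]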

My preference is to define a naming scheme such as

XXN_YYY_functionname
where XX is the project
N is P = procedure - no return values
     F = function - returns data
     T = trigger function
     ...
and YYY is the package name :-)

Jacqui

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Oracle to Postgres migration open source tool

2011-07-07 Thread Jacqui Caren-home

On 07/07/2011 15:21, salah jubeh wrote:

 Hello,

 I do not have any experience with Oracle; try to dump the Oracle database in 
plain format and then execute the DDL and DML statements. It might be 
cumbersome to fix all the errors you might get, but as an initial solution 
give it a shot.

I have migrated MySQL, Oracle, Ingres and SQL Server databases to Postgres, 
and in pretty much all cases around 30% of the activity has been 
application-specific code changes, or replacing Oracle or SQL Server-specific 
optimisations with Pg equivalents.

An example is the insert then update order issue on a unique keyed table.

The order does make a difference in processing time, and if it is within a 
stored procedure I often find the procedure has to be redone to ensure it 
performs as expected. Things like this tend to be missed by automated Ora-Pg 
toolsets.

I have never used the commercial Oracle portability layers but have heard good 
things about them. If you have no procedural code or triggers then migration 
is usually a few days' application-review work...

Jacqui

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Understanding Schema's

2010-12-15 Thread Jacqui Caren-home

On 15/12/2010 00:20, Carlos Mennens wrote:

Why would anyone in a random scenario want to have independent
schema's to cross query? I'm just trying to see how this would be
useful in any scenario.


One very real example :-)

When migrating from, say, MySQL to PgSQL, it is possible to populate a 
mygration schema with functions, domains etc. to allow (some) native MySQL SQL 
to run within PgSQL.

However, such code does not give the best performance, so the next step is to 
identify the MySQL code in use (via notifies?) and then replace it with new 
(hopefully more portable) SQL.

Once you stop getting notifies, you remove the mygration schema from the 
search path and run your test suite once more.

If all passes, you know you are now MySQL-free :-)

---
When developing databases we often end up with hundreds of tables, many of 
which are feature- or application-specific, with a core of common tables. 
Schemas are not just handy but essential, as by setting the search path we 
define what an application can see. Not perfect, and not a security feature, 
but handy, especially for developers.

One example is an a3c schema used for authentication, authorization and access 
control. My template schema provides a generic users, roles, rights model for 
use within apps.

Jacqui

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Re: Modfying source code to read tuples before and after UPDATE...how to?

2010-11-06 Thread Jacqui Caren-home

rmd22 wrote:

Do you think it's possible?


replied offlist suggesting he have a chat with his prof and find the reason 
behind this task...

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] create table as select VS create table; insert as select

2010-10-29 Thread Jacqui Caren-home

I have inherited an application that populates a number of temporary tables 
using create table ... as select ...

This is taking roughly five to ten minutes to run

As this process hammers the database, I can only run benchmarks at night, so I 
am asking here if anyone knows whether

create table ...; then insert into ... select ...; would be faster.

or if anyone can suggest an alternative I may have missed.

I am happy to move code server side if need be.

TIA

Jacqui

I know there is not a lot of detail in the above - the system is under NDA and 
I need to check with my employer before I give out any system details.

Before you ask: it is not a big customer - just a very paranoid one :-)



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] SPI high level abstraction?

2010-10-04 Thread Jacqui Caren-home

Are there any C libs out there that will abstract the SPI interface?

I have a number of functions that do input-process-output, and they are all 
almost identical (table name, fields, change).

I could abstract the code, but would prefer to re-use an existing abstraction 
layer, as the less code I write the less chance of errors creeping in :-)

Jacqui

P.S. I am looking at using C macros, but that solution does not smell right to 
me.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pgtune

2010-08-11 Thread Jacqui Caren-home

Greg Smith wrote:

Set -c 300 when you run pgtune and it will do the right thing here.


Regarding tuning a database: we had a rather neat tool for Oracle some years 
ago that would connect to a live database and monitor the QEP (query 
execution plan) cache for badly indexed queries etc.

It would use this information (with the schema metadata) to suggest creation 
and deletion of indices, or normalisation or denormalisation of tables.

At the time it was around 3K per seat, but because of the size of the schema 
and project it saved a lot more than it cost - the customer paid, BTW :-)

IIRC it was used over a dialup (dialback) PPP tunnel into a customer's systems,
so net/db overheads were pretty light.

In the month or two it was running it made very few wrong suggestions, but it 
was not a tool to be used by a non-DBA.

My question is: is there anything out there similar for Pg?

I was looking at using pgpool as something to build a query profile from, then 
rolling some Perl code to build QEPs for this cache, then semi-manual analysis 
of the output - which could end up as gigs of execution plans :-(

Jacqui

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] MySQL versus Postgres

2010-08-10 Thread Jacqui Caren-home

Randal L. Schwartz wrote:

Do not overemphasize the example mode at the cost of presenting concepts
or structure.  You need all three.


Hmm, I recently had to find out the various options for getting values from a 
tuple via SPI - the docs referred me to the contrib examples :-(
The docs did not help, and the examples only cover very basic cases.
I found the quickest way to find the list of available macros was to read the 
source code :-)

I suppose the source is the final layer of documentation!

Jacqui



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How Big is Too Big for Tables?

2010-07-29 Thread Jacqui Caren-home

P Kishor wrote:

On Wed, Jul 28, 2010 at 1:38 PM, Stephen Frost sfr...@snowman.net wrote:

* P Kishor (punk.k...@gmail.com) wrote:

Three. At least, in my case, the overhead is too much. My data are
single bytes, but the smallest data type in Pg is smallint (2 bytes).
That, plus the per row overhead adds to a fair amount of overhead.

My first reaction to this would be- have you considered aggregating the
data before putting it into the database in such a way that you put more
than 1 byte of data on each row..?  That could possibly reduce the
number of rows you have by quite a bit and also reduce the impact of the
per-tuple overhead in PG..

each row is half a dozen single byte values, so, it is actually 6
bytes per row (six columns).


Hmm, six chars - this would not perchance be bio (sequence) or geospatial 
data? If so, then there are specialist lists out there that can help.
Also, quite a few people use Pg for this data and there are some very neat Pg 
add-ons.

Jacqui

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Migrating from MySQL

2010-06-29 Thread Jacqui Caren-home

John R Pierce wrote:

On 06/26/10 11:22 PM, RP Khare wrote:
I downloaded PostgreSQL plus advanced server from EnterpriseDB 
website. Should I go with the original community PGSQL edition or 
EnterpriseDB edition?


That's entirely up to your preferences and needs. Personally, I think that 
standard PostgreSQL would more than suffice as a MySQL replacement. I thought 
most of the EnterpriseDB-specific features were oriented around Oracle 
database compatibility.



See 
http://okbob.blogspot.com/2009/08/mysql-functions-for-postgresql.html?showComment=1250968406041

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] subscribe

2010-05-28 Thread Jacqui Caren-home

subscribe

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] permissions on user-defined functions

2001-01-18 Thread Kevin T. Manley \(Home\)

I am new to PostgreSQL...

I've created a table (table1) and database user (dbuser).
dbuser has only SELECT permission on table1.
Next I created a function insert_table1 that inserts a new row into table1.

I want to give dbuser the ability to invoke the insert_table1 function
without granting dbuser INSERT permission on table1.

Does PostgreSQL make this possible? Can someone kindly point out how to do
it?
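[Editorial note: in modern PostgreSQL this is exactly what SECURITY DEFINER functions are for - the function runs with the privileges of its owner rather than the caller. A sketch with a hypothetical column type; the function owner must have INSERT on table1:

```sql
-- Runs with the owner's privileges, so dbuser needs only EXECUTE on
-- the function, not INSERT on table1.
CREATE FUNCTION insert_table1(val integer) RETURNS void AS $$
BEGIN
    INSERT INTO table1 VALUES (val);
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

REVOKE ALL ON FUNCTION insert_table1(integer) FROM PUBLIC;
GRANT EXECUTE ON FUNCTION insert_table1(integer) TO dbuser;
```
]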

Thanks!

Kevin T. Manley
[EMAIL PROTECTED]