Re: [GENERAL] json_populate_recordset and nested object, how to solve?

2014-04-16 Thread Raphael Bauduin
MatheusOl helped me solve this on IRC; I'm posting it here in case it helps
someone searching the mailing list archives.

Here is a test case

create table t(id SERIAL, event json);
insert into t(event) VALUES ('{"type":"show", "products": [ { "id" :
1, "name" : "p1"}] }'::json);
insert into t(event) VALUES ('{"type":"show", "products": [ { "id" :
1, "name" : "p1" , "stock" : [ {"XL" : 1}] }] }'::json);
create type product as (id int, name text );

select rs.* from (select * from t where id=1) e CROSS JOIN LATERAL
json_populate_recordset(null::product, e.event->'products') rs; -- works
select rs.* from (select * from t where id=2) e CROSS JOIN LATERAL
json_populate_recordset(null::product, e.event->'products') rs; -- fails with:
ERROR:  cannot call json_populate_recordset on a nested object


and the simple solution I was looking for:

 SELECT (p->>'id')::int AS id, p->>'name' AS name FROM (SELECT
json_array_elements(event->'products') AS p FROM t) t1;
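
For completeness, here is a sketch of the same idea applied to the two-row
test case above (using the table t and type product defined there); each
element of the products array is unpacked with json_array_elements, and only
the flat keys are read, so the nested stock array is never touched:

```sql
-- Works for both rows of t: json_array_elements yields one json value
-- per product, and only the scalar keys id and name are extracted,
-- so the nested "stock" array in row 2 is simply ignored.
SELECT e.id AS event_id,
       (p->>'id')::int AS id,
       p->>'name'      AS name
FROM t e
CROSS JOIN LATERAL json_array_elements(e.event->'products') AS p;
```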

Raph




On Fri, Apr 4, 2014 at 10:25 AM, Raphael Bauduin  wrote:

> Hi,
>
> here is an example of a problem I encounter with json_populate_recordset
> because it does not support nested objects. Actually, I would prefer that it
> ignore nested objects rather than raise an error, as it fails on a key I
> don't even want to use!
>
> Here's the query:
>
> select e.timestamp::date, e.user_id, rs.similarity from
>   (select * from events where type='suggestion' and timestamp<'2014-04-04'
> and timestamp>'2014-04-03') e
>CROSS JOIN LATERAL
>   json_populate_recordset(null::suggestion, event->'products') rs
> order by e.user_id;
>
>
> event->'products' is an array of json objects, one of these keys (stock)
> being an array of json objects. I can absolutely ignore that key in this
> query, but I don't see how. The suggestion type does not have a stock key,
> so it would be absent from the result anyway.
>
> So, how would you get event->'products' without the stock keys, just to be
> able to call json_populate_recordset?
>
> Thanks.
>
> Raph
>
> PS: this might be seen as a followup to a previous mail thread:
> http://www.postgresql.org/message-id/CAONrwUGMQthsut_F8X4CBGQDuKa5=a+atmxsxb2fdoxh5pd...@mail.gmail.com
> but I don't see how to apply that suggestion here.
>
>


-- 
Web database: http://www.myowndb.com
Free Software Developers Meeting: http://www.fosdem.org


[GENERAL] json_populate_recordset and nested object, how to solve?

2014-04-04 Thread Raphael Bauduin
Hi,

here is an example of a problem I encounter with json_populate_recordset
because it does not support nested objects. Actually, I would prefer that it
ignore nested objects rather than raise an error, as it fails on a key I
don't even want to use!

Here's the query:

select e.timestamp::date, e.user_id, rs.similarity from
  (select * from events where type='suggestion' and timestamp<'2014-04-04'
and timestamp>'2014-04-03') e
   CROSS JOIN LATERAL
  json_populate_recordset(null::suggestion, event->'products') rs
order by e.user_id;


event->'products' is an array of json objects, one of these keys (stock)
being an array of json objects. I can absolutely ignore that key in this
query, but I don't see how. The suggestion type does not have a stock key,
so it would be absent from the result anyway.

So, how would you get event->'products' without the stock keys, just to be
able to call json_populate_recordset?

Thanks.

Raph

PS: this might be seen as a followup to a previous mail thread:
http://www.postgresql.org/message-id/CAONrwUGMQthsut_F8X4CBGQDuKa5=a+atmxsxb2fdoxh5pd...@mail.gmail.com
but I don't see how to apply that suggestion here.


[GENERAL] need of a lateral join with record set returning function?

2014-02-03 Thread Raphael Bauduin
Hi,

I'm trying to understand what happens here:

I have  atype product defined:
=# \d product
Composite type "public.product"
 Column  |   Type   | Modifiers
-+--+---
 price_advantage | double precision |
 type| integer  |
 gender  | text |
 status  | integer  |
 brand   | integer  |
 price   | double precision |
 id  | integer  |
 algorithm   | text |


which I'm trying to use in this query calling json_populate_recordset

=# select q.* from (select json_populate_recordset(null::product,
event->'products') from events where timestamp>'2014-02-02' and
type='gallery' limit 1) q;
   json_populate_recordset
-
 (68,121,F,3,493,17,88753,)

This query illustrates what I want to achieve:

=# select f.* from
json_populate_recordset(null::product,'[{"id":80723,"type":41,"brand":41,"price":65.0,"status":3,"price_advantage":1.0,"gender":"M",
"algorithm":"v1"}]'::json) f;
 price_advantage | type | gender | status | brand | price |  id   |
algorithm
-+--+++---+---+---+---
   1 |   41 | M  |  3 |41 |65 | 80723 | v1

I see the difference between the queries (the second works directly on the
return value of the function), but in the first example, isn't the inner
query returning a set, from which the outer query can do a select *?
There is a difference with the second query which I've not identified.
Anyone care to enlighten me?

Thanks

Raph

PS: to get it working, I have to write the query as this:

=# select q.* from (select * from events where timestamp>'2014-02-02' and
type='gallery') q1 CROSS JOIN LATERAL
json_populate_recordset(null::product, event->'products') q limit 1;
 price_advantage | type | gender | status | brand | price |  id   |
algorithm
-+--+++---+---+---+---
  68 |  121 | F  |  3 |   493 |17 | 88753 |

What I'm interested in is an explanation of why this is needed.
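
An attempt at an explanation for the archives (my own reading, not
authoritative): when a set-returning function is called in the SELECT list,
its result is a single composite-typed column, so q.* in the outer query
expands to just that one column; putting the call in FROM exposes the record
type's columns individually. A sketch of an equivalent shorthand, assuming
the events table from this thread (for function calls in FROM, the lateral
reference to e is implicit in 9.3):

```sql
-- Equivalent to the CROSS JOIN LATERAL form: the function appears as a
-- FROM item, so rs carries the product type's columns rather than one
-- composite column.
SELECT rs.*
FROM (SELECT * FROM events
      WHERE timestamp > '2014-02-02' AND type = 'gallery') e,
     json_populate_recordset(null::product, e.event->'products') AS rs
LIMIT 1;
```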


Re: [GENERAL] returning json object with subset of keys

2014-01-08 Thread Raphael Bauduin
On Wed, Jan 8, 2014 at 4:05 PM, Merlin Moncure  wrote:

> On Wed, Jan 8, 2014 at 6:37 AM, Raphael Bauduin  wrote:
> > Hi
> >
> > I'm using the json functionalities of postgresql 9.3.
> > I have a query calling json_populate_recordset like this:
> >   json_populate_recordset(null::product, event->'products')
> > but it returns an error:
> > ERROR:  cannot call json_populate_recordset on a nested object
> >
> > There is indeed one key in event->'products' giving access to an array of
> > objects.
> >
> > Is there a way to specify which keys to keep from the object? I haven't
> > found it in the docs.
> >
> > Here is pseudo code of what I'd like to do:
> >   json_populate_recordset(null::product, event->'products' WITH ONLY KEYS
> > {'f1','f2'})
>
> unfortunately, not without manipulating the json.  this is basically a
> somewhat crippling limitation of the json_populate functions -- they
> can't handle anything but flat tuples.  so you have to do something
> highly circuitous.
>
> problem (one record):
> postgres=# create table foo(a text, b text);
> postgres=# select json_populate_record(null::foo, '{"a": "abc", "b":
> "def", "c": [1,2,3]}'::json);
> ERROR:  cannot call json_populate_record on a nested object
>
> nasty solution:
> postgres=# with data as (select '{"a": "abc", "b": "def", "c":
> [1,2,3]}'::json as j)
> select json_populate_record(null::foo, row_to_json(q)) from
> (
>   select j->'a' as a, j->'b' as b from data
> ) q;
>  json_populate_record
> --
>  (abc,def)
>
> with some extra manipulations you can do a record set. basically, you
> need to get the json 'right' first (or that can be done on the
> client).
>
> merlin
>


ok, thanks for your reply.
Is this considered to be added in the future to the json functions
available? I could use it frequently I think.

Cheers

raph
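
For the archives, Merlin's single-record trick extends to a record set along
these lines (a sketch using the same hypothetical foo type from his example,
untested at the time):

```sql
-- Unpack the json array, keep only the flat keys, rebuild each element
-- as json, then populate the record; the nested "c" key is dropped.
WITH data AS (
  SELECT '[{"a": "abc", "b": "def", "c": [1,2,3]},
           {"a": "ghi", "b": "jkl", "c": [4,5,6]}]'::json AS j
)
SELECT json_populate_record(null::foo, row_to_json(q))
FROM (
  SELECT elem->>'a' AS a, elem->>'b' AS b
  FROM data, json_array_elements(j) AS elem
) q;
```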


[GENERAL] returning json object with subset of keys

2014-01-08 Thread Raphael Bauduin
Hi

I'm using the json functionalities of postgresql 9.3.
I have a query calling json_populate_recordset like this:
  json_populate_recordset(null::product, event->'products')
but it returns an error:
ERROR:  cannot call json_populate_recordset on a nested object

There is indeed one key in event->'products' giving access to an array of
objects.

Is there a way to specify which keys to keep from the object? I haven't
found it in the docs.

Here is pseudo code of what I'd like to do:
  json_populate_recordset(null::product, event->'products' WITH ONLY KEYS
{'f1','f2'})

Thx




Re: [GENERAL] problem with partitioned table and indexed json field

2013-11-08 Thread Raphael Bauduin
On Thu, Nov 7, 2013 at 7:24 PM, Tom Lane  wrote:

> I wrote:
> > It looks like the problem is we're building a MergeAppend plan and not
> > getting the targetlist for the MergeAppend node right.
>
> Found it --- simple oversight in building optimized min/max plans.
> If you need a patch now, see
>
> http://git.postgresql.org/gitweb/?p=postgresql.git;a=patch;h=5d0731da521f090f80ea39529fe274ac6d6bffa1
>
>
Wow, the patch was available the same day I supplied the steps to reproduce
the bug! I don't think it's possible to be faster :-)
Thanks a lot!

Raph


regards, tom lane
>





Re: [GENERAL] problem with partitioned table and indexed json field

2013-11-07 Thread Raphael Bauduin
The query is also problematic here, because it returns the full json, and
not only the data I selected in the json.
Below, it should return only '_id', and not the whole json stored in event:

test3=> select max(event->>'_id') from events where event is not null;
  max

 {"_id":"5f93c3a044650105b5074c9a","type":"t2"}

Thanks

raph



On Thu, Nov 7, 2013 at 4:32 PM, Tom Lane  wrote:

> Raphael Bauduin  writes:
> > I have narrowed it a bit. It happens when I create said index on an empty
> > field. Here's the scenario to reproduce it:
>
> Thanks, I've reproduced the problem here.  The query still seems to run OK,
> it's just EXPLAIN that's falling over --- do you see the same?
>
> regards, tom lane
>





Re: [GENERAL] problem with partitioned table and indexed json field

2013-11-07 Thread Raphael Bauduin
Correction: It happens when I create said index on an empty *table*.

Raph


On Thu, Nov 7, 2013 at 11:10 AM, Raphael Bauduin  wrote:

> Hi,
>
> I have narrowed it a bit. It happens when I create said index on an empty
> field. Here's the scenario to reproduce it:
>
> Let me know if you need more info
>
> Cheers
>
> Raph
>
>
> create table events(id SERIAL,
> timestamp timestamp,
> event json);
>
>
> create table events_2012_01( CHECK (timestamp>='2012-01-01' and
> timestamp<'2012-2-01' )) inherits (events) ;
> create table events_2012_02( CHECK (timestamp>='2012-02-01' and
> timestamp<'2012-3-01' )) inherits (events) ;
> insert into events_2012_01 (timestamp, event) values ('2012-01-22
> 08:38:56', '{"_id":"4f93c3a044650105b5074c9a","type":"t1"}');
> insert into events_2012_02 (timestamp, event) values ('2012-02-22
> 08:38:56', '{"_id":"5f93c3a044650105b5074c9a","type":"t2"}');
>
>
> -- create empty table
> create table events_2012_03( CHECK (timestamp>='2012-03-01' and
> timestamp<'2012-4-01' )) inherits (events) ;
> explain select max(event->>'_id') from events where event is not null;
> --OK
>
> --create index
> create index events_2012_03_event_id_index on events_2012_03
> ((event->>'_id'));
> explain select max(event->>'_id') from events where event is not null;
> --BANG
>
> drop index events_2012_03_event_id_index;
> explain select max(event->>'_id') from events where event is not null;
> --OK
>
>
>
> On Mon, Nov 4, 2013 at 8:39 AM, Raphael Bauduin  wrote:
>
>> I'll look at providing such an example later this week.
>>
>> Raph
>>
>>
>> On Thu, Oct 31, 2013 at 3:23 PM, Tom Lane  wrote:
>>
>>> Raphael Bauduin  writes:
>>> > An explain returns an error:
>>> > => explain select max(event->>'_id') from events;
>>> > ERROR:  no tlist entry for key 2
>>>
>>> This is certainly a bug.  Can we see a self-contained example that
>>> triggers that?
>>>
>>> regards, tom lane
>>>
>>
>>
>>
>>
>
>
>
>





Re: [GENERAL] problem with partitioned table and indexed json field

2013-11-07 Thread Raphael Bauduin
Hi,

I have narrowed it a bit. It happens when I create said index on an empty
field. Here's the scenario to reproduce it:

Let me know if you need more info

Cheers

Raph


create table events(id SERIAL,
timestamp timestamp,
event json);


create table events_2012_01( CHECK (timestamp>='2012-01-01' and
timestamp<'2012-2-01' )) inherits (events) ;
create table events_2012_02( CHECK (timestamp>='2012-02-01' and
timestamp<'2012-3-01' )) inherits (events) ;
insert into events_2012_01 (timestamp, event) values ('2012-01-22
08:38:56', '{"_id":"4f93c3a044650105b5074c9a","type":"t1"}');
insert into events_2012_02 (timestamp, event) values ('2012-02-22
08:38:56', '{"_id":"5f93c3a044650105b5074c9a","type":"t2"}');


-- create empty table
create table events_2012_03( CHECK (timestamp>='2012-03-01' and
timestamp<'2012-4-01' )) inherits (events) ;
explain select max(event->>'_id') from events where event is not null;
--OK

--create index
create index events_2012_03_event_id_index on events_2012_03
((event->>'_id'));
explain select max(event->>'_id') from events where event is not null;
--BANG

drop index events_2012_03_event_id_index;
explain select max(event->>'_id') from events where event is not null;
--OK



On Mon, Nov 4, 2013 at 8:39 AM, Raphael Bauduin  wrote:

> I'll look at providing such an example later this week.
>
> Raph
>
>
> On Thu, Oct 31, 2013 at 3:23 PM, Tom Lane  wrote:
>
>> Raphael Bauduin  writes:
>> > An explain returns an error:
>> > => explain select max(event->>'_id') from events;
>> > ERROR:  no tlist entry for key 2
>>
>> This is certainly a bug.  Can we see a self-contained example that
>> triggers that?
>>
>> regards, tom lane
>>
>
>
>
>





Re: [GENERAL] problem with partitioned table and indexed json field

2013-11-03 Thread Raphael Bauduin
I'll look at providing such an example later this week.

Raph


On Thu, Oct 31, 2013 at 3:23 PM, Tom Lane  wrote:

> Raphael Bauduin  writes:
> > An explain returns an error:
> > => explain select max(event->>'_id') from events;
> > ERROR:  no tlist entry for key 2
>
> This is certainly a bug.  Can we see a self-contained example that
> triggers that?
>
> regards, tom lane
>





Re: [GENERAL] problem with partitioned table and indexed json field

2013-10-31 Thread Raphael Bauduin
It's postgresql 9.3, from the pgdg apt repository:
9.3.0-2.pgdg10.4+1

Raph


On Thu, Oct 31, 2013 at 1:48 PM, Merlin Moncure  wrote:

> On Thu, Oct 31, 2013 at 5:46 AM, Raphael Bauduin 
> wrote:
> >
> > Hi,
> >
> > I have a partitioned table events, with one partition for each month, eg
> > events_2013_03. The partition is done on the field timestamp, and
> > constraints are set, but insertion of data is done in the partition
> directly
> > (so not with a trigger on the events table)
> > The field event is of type json, and has a field '_id', which I can
> access:
> >
> > => select event->>'_id' from events limit 1;
> >  ?column?
> > --
> >  4f9a786f44650105b50aafc9
> >
> > I created an index on each partition of the table, but not on the events
> > table itself:
> > create index events_${y}_${m}_event_id_index on events_${y}_${m}
> > ((event->>'_id'));
> >
> > Querying the max event_id from a partition works fine:
> > => select max(event->>'_id') from events_2013_03;
> >max
> > --
> >  5158cdfe4465012cff522b74
> >
> >
> > However, querying the parent table returns the whole json field,
> > and not only the '_id':
> > => select max(event->>'_id') from events;
> > {"_id":"526eb3ad4465013e3e131a43","origin":. }
> >
> > An explain returns an error:
> > => explain select max(event->>'_id') from events;
> > ERROR:  no tlist entry for key 2
> >
> > This problem appeared when I created the indexes, and removing the index
> > makes the explain work fine, but the plan implies a sequential scan on the
> > tables, which is exactly what I wanted to avoid with the indexes.
> >
> > Does someone have an explanation, and possibly a way to solve this
> problem?
>
> wow, that looks like a bug.  Can you post the specific postgres version?
>
> merlin
>





[GENERAL] problem with partitioned table and indexed json field

2013-10-31 Thread Raphael Bauduin
Hi,

I have a partitioned table events, with one partition for each month, eg
events_2013_03. The partition is done on the field timestamp, and
constraints are set, but insertion of data is done in the partition
directly (so not with a trigger on the events table)
The field event is of type json, and has a field '_id', which I can access:

=> select event->>'_id' from events limit 1;
 ?column?
--
 4f9a786f44650105b50aafc9

I created an index on each partition of the table, but not on the events
table itself:
create index events_${y}_${m}_event_id_index on events_${y}_${m}
((event->>'_id'));

Querying the max event_id from a partition works fine:
=> select max(event->>'_id') from events_2013_03;
   max
--
 5158cdfe4465012cff522b74


However, querying the parent table returns the whole json field,
and not only the '_id':
=> select max(event->>'_id') from events;
{"_id":"526eb3ad4465013e3e131a43","origin":. }

An explain returns an error:
=> explain select max(event->>'_id') from events;
ERROR:  no tlist entry for key 2

This problem appeared when I created the indexes, and removing the index
makes the explain work fine, but the plan implies a sequential scan on the
tables, which is exactly what I wanted to avoid with the indexes.

Does someone have an explanation, and possibly a way to solve this problem?

thanks

Raph


Re: [GENERAL] passing multiple records to json_populate_recordset

2013-09-23 Thread Raphael Bauduin
On Mon, Sep 23, 2013 at 11:31 AM, Raphael Bauduin  wrote:

> Hi,
>
> I'm experimenting with the json data type and functions in 9.3.
> I'm storing json objects of this form in the event column:
> {type: 'event_type', products : [ {id:45, 'type': 3, 'gender':'F', ..}, ...,
> {} ] }
>
> I can issue this query, but notice the limit 1:
>
> select * from json_populate_recordset(null::product, (select
> event->'products' from events limit 1));
>
> The result is (edited for conciseness):
>
>  type | gender |  id
> --++---
>41 | F  | 40003
>41 | F  | 60043
>41 | F  | 27363
>41 | F  | 27373
>41 | F  | 28563
>
> But all these products come from one event.
> Is there a way to return the products from several events, e.g. with a
> limit 2 rather than limit 1?
>
>
Some more info, after searching further.

This query

  select
json_populate_record(null::product,row_to_json(json_populate_recordset(null::product,event->'products')))
from (select * from events limit 2) as foo ;

returns what I want but not in the format I want (why?):
   json_populate_record
--
 (33,61,M,3,51,12,54893)
 (20,61,M,3,1,15,59623)
 (17,61,M,3,453,12,59283)
 (30,61,M,3,51,19,55713)
 (26,61,M,3,51,19,54963)

I manage to get the results as json:

select
row_to_json(json_populate_recordset(null::product,event->'products')) from
(select * from events limit 2) as foo ;

row_to_json

 
{"price_advantage":33,"type":61,"gender":"M","status":3,"brand":51,"price":12,"id":54893}
 
{"price_advantage":20,"type":61,"gender":"M","status":3,"brand":1,"price":15,"id":59623}
 
{"price_advantage":17,"type":61,"gender":"M","status":3,"brand":453,"price":12,"id":59283}

but I can't get the results in tabular form like the first
json_populate_recordset query I listed (with limit 1). Any suggestions?

Thanks

Raph
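
(With hindsight from the later LATERAL threads in this archive: moving the
function call into the FROM clause gives the tabular form directly. A sketch,
assuming the same events table and product type as above:)

```sql
-- One row per product across the first two events; the function as a
-- FROM item exposes the product type's columns rather than a single
-- composite column.
SELECT rs.*
FROM (SELECT * FROM events LIMIT 2) e
CROSS JOIN LATERAL
  json_populate_recordset(null::product, e.event->'products') AS rs;
```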


[GENERAL] passing multiple records to json_populate_recordset

2013-09-23 Thread Raphael Bauduin
Hi,

I'm experimenting with the json data type and functions in 9.3.
I'm storing json objects of this form in the event column:
{type: 'event_type', products : [ {id:45, 'type': 3, 'gender':'F', ..}, ...,
{} ] }

I can issue this query, but notice the limit 1:

select * from json_populate_recordset(null::product, (select
event->'products' from events limit 1));

The result is:

 type | gender |  id
--++---
   41 | F  | 40003
   41 | F  | 60043
   41 | F  | 27363
   41 | F  | 27373
   41 | F  | 28563

But all these products come from one event.
Is there a way to return the products from several events, e.g. with a limit
2 rather than limit 1?

Thanks

Raph


[GENERAL] easy entry forms on a pg database

2009-05-15 Thread Raphael Bauduin
Hi,

Are there any tools/libs to easily and rapidly generate web-based entry
forms for tables in a postgresql database?

Thanks

Raphaƫl

PS: In my search I've found PHP DB Form creator
(http://sourceforge.net/projects/phpdbform/), but the last release is 5
years old.
I also know of ERW, which doesn't seem to fit when starting from an
existing db: http://erw.dsi.unimi.it/




Re: [GENERAL] ilike and utf-8

2006-04-14 Thread Raphael Bauduin
On 4/14/06, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Raphael Bauduin" <[EMAIL PROTECTED]> writes:
> > Does the ilike operator work fine with cyrillic text put in a UTF-8
> > encoded database?
>
> If you've initdb'd in an appropriate locale (probably named something
> like ru_RU.utf8) then it should work.  I wouldn't expect a random
> non-Russian locale to necessarily know about Cyrillic case conversions,
> however.

The problem is that the system serves content for different locales at the
same time, so I can't set the locale at the environment level.
Maybe I should add a user setting so each user can choose which locale to
use.

Thanks for the help!

Raph

>
> Martijn's nearby comment about OS dependency really boils down to the
> fact that different OSes may have different definitions for similarly
> named locales.  We need to know what locale you're using (try "SHOW
> LC_CTYPE") as well as the OS.
>
> regards, tom lane
>
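
(For readers hitting this in the archives on later releases: since
PostgreSQL 9.1, a collation can be attached per expression, which sidesteps
the single-cluster-locale problem for case folding. A sketch; the table and
column names are hypothetical, and it assumes the ru_RU.utf8 locale is
installed on the server:)

```sql
-- Case-insensitive search with an explicit collation, so the
-- database's default locale need not know Cyrillic case rules.
SELECT *
FROM documents
WHERE lower(title COLLATE "ru_RU.utf8")
      LIKE '%' || lower('Москва' COLLATE "ru_RU.utf8") || '%';
```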



Re: [GENERAL] ilike and utf-8

2006-04-14 Thread Raphael Bauduin
It's a Debian GNU/Linux, with a self-compiled 8.1.3 postgresql.

Raph

On 4/14/06, Martijn van Oosterhout  wrote:
> On Fri, Apr 14, 2006 at 03:16:01PM +0200, Raphael Bauduin wrote:
> > Hi,
> >
> > Does the ilike operator work fine with cyrillic text put in a UTF-8
> > encoded database?
> > I've had remarks from a user (of http://myowndb.com, a web database)
> > with text in Cyrillic that his searches are not case insensitive,
> > although I use the ilike operator in the code. And it works perfectly
> > for my data (which is not in Cyrillic).
>
> UTF-8 support for case-comparison is operatnig system dependant. What
> systems are we comparing here?
>
> Have a nice day,
> --
> Martijn van Oosterhout  http://svana.org/kleptog/
> > Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> > tool for doing 5% of the work and then sitting around waiting for someone
> > else to do the other 95% so you can sue them.
>
>
>
>
>



[GENERAL] ilike and utf-8

2006-04-14 Thread Raphael Bauduin
Hi,

Does the ilike operator work fine with cyrillic text put in a UTF-8
encoded database?
I've had remarks from a user (of http://myowndb.com, a web database)
with text in Cyrillic that his searches are not case insensitive,
although I use the ilike operator in the code. And it works perfectly
for my data (which is not in Cyrillic).

Thanks

Raph



Re: [GENERAL] history tables with only one function?

2004-08-12 Thread Raphael Bauduin
Andreas Haumer wrote:
Hi!
Raphael Bauduin wrote:
Raphael Bauduin wrote:

Hi,
I'm looking at the logging of a database we'll put in production soon.
I've seen some posts on this list about history tables, like mentioned
in http://www-106.ibm.com/developerworks/web/library/wa-dbdsgn2.html .
I think I'll go that way too, but I still have some questions on the
approach, and would appreciate any advice on it.
Here are some questions I have:
- is it possible to write only one function used for all logging
triggers? As illustrated in
http://www.varlena.com/varlena/GeneralBits/57.php , it is possible to use
one function for tracking last update times for all tables:
  CREATE OR REPLACE FUNCTION setmodtime() RETURNS TRIGGER AS '
  BEGIN
 NEW.modified_timestamp = now();
 RETURN NEW;
  END
  ' LANGUAGE 'plpgsql';
Is it possible to create only one function to insert rows in the
corresponding history table? The name of the history table can be
constructed from the original table.
and I guess all fields of the table can be retrieved from the db's
metadata. Would that be feasible, and more importantly, would it be
usable?
I found a solution to this one, thanks to a post of Tom Lane on the
postgres-novice mailing list:
CREATE FUNCTION "update_log"(text) RETURNS trigger AS '
BEGIN
  insert into $1_log select new.*,''UPDATE'';
  return new;
END;
' LANGUAGE 'plpgsql';
I can then create a trigger and pass the table name as argument (does
the function know
which table fired the trigger?):
CREATE TRIGGER "customers_update_log_t" after UPDATE on "customers" for
each row execute procedure "customers_update_log"('customers');
This creates entries in the customers_log table each time I update a
customer.
And this construct indeed works? I'm stunned!
Which PostgreSQL version is this?
As far as I know your function should have the following problems:
*) Trigger functions can not be declared with arguments in the
   CREATE FUNCTION statement. They can have arguments when they
   are used in the CREATE TRIGGER statement, but trigger functions
   have to read the values of their arguments from the array TG_ARGV[]
*) You can not use variables as a placeholder for table- or column-
   names in SQL statements. You would have to create the SQL statement
   dynamically and execute it in your function with EXECUTE
IMHO this is true at least for PostgreSQL 7.4
See the thread "Trigger functions with dynamic SQL" on pgsql-sql
around July 24th where I described my problems with dynamically
created SQL statements. Finally I got around all the hassels with
quotation marks and my trigger functions work as expected.
Could you please confirm that your function works as you described?

It works as expected:
log=# DROP TRIGGER "customers_update_log_t" on "customers";
DROP TRIGGER
log=# DROP FUNCTION "update_log"();
ERROR:  function update_log() does not exist
log=# CREATE FUNCTION "update_log"(text) RETURNS trigger AS '
log'#  BEGIN  
log'# insert into $1_log select new.*,''UPDATE'';
log'# return new;
log'#  END;
log'#  ' LANGUAGE 'plpgsql';
CREATE FUNCTION
log=# 
log=# CREATE TRIGGER "customers_update_log_t" after UPDATE on "customers" for each row execute procedure "customers_update_log"('customers');
CREATE TRIGGER
log=# select count(*) from customers_log;
count 
---
   18
(1 row)

log=# update customers set name='EDITED AND LOOGED GENERIC FUNCTION NEW'   where customer_id=20003;
UPDATE 1
log=# select count(*) from customers_log;
count 
---
   19
(1 row)

And the row added to customers_log is absolutely correct. version is 7.4.3 (debian 
package)
Raph
PS: I also have a problem of quoting in a trigger (as you seem to have had from the pgsql-sql thread you refered to). 
I want this to be executed:
EXECUTE ''insert into ''|| TG_RELNAME||''_log select ''||new.*||'',''||TG_OP;
but I get this output:
NOTICE:  table = customers
NOTICE:  operation = UPDATE
ERROR:  NEW used in query that is not in a rule
CONTEXT:  PL/pgSQL function "activity_log" line 4 at execute statement

I posted a message to pgsql-novice, but maybe you can help me forward also?
Thanks.
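
(For readers finding this thread in the archives: on later PostgreSQL
releases (8.4+ for EXECUTE ... USING, 9.1+ for format()), the dynamic-SQL
quoting problem above largely disappears, because NEW can be bound as a
parameter instead of being spliced into the statement text. A sketch of a
single generic log trigger, assuming the _log table naming convention from
this thread:)

```sql
-- One trigger function for every audited table: the history table name
-- comes from TG_TABLE_NAME, and NEW is passed as $1, so no value ever
-- needs to be quoted into the SQL string.
CREATE OR REPLACE FUNCTION generic_log() RETURNS trigger AS $$
BEGIN
  EXECUTE format('INSERT INTO %I SELECT ($1).*, %L',
                 TG_TABLE_NAME || '_log', TG_OP)
    USING NEW;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER customers_update_log_t
  AFTER UPDATE ON customers
  FOR EACH ROW EXECUTE PROCEDURE generic_log();
```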


I tried with similar functions and they failed with syntax errors,
so I had to use dynamically created SQL statements.
- - andreas
- --
Andreas Haumer | mailto:[EMAIL PROTECTED]
*x Software + Systeme  | http://www.xss.co.at/
Karmarschgasse 51/2/20 | Tel: +43-1-6060114-0
A-1100 Vienna, Austria | Fax: +43-1-6060114-71



[GENERAL] history tables with only one function?

2004-08-10 Thread Raphael Bauduin
Hi,
I'm looking at the logging of a database we'll put in production soon.
I've seen some posts on this list about history tables, like mentioned in 
http://www-106.ibm.com/developerworks/web/library/wa-dbdsgn2.html .
I think I'll go that way too, but I still have some questions on the approach, and 
would appreciate any advice on it.
Here are some questions I have:
- is it possible to write only one function used for all logging triggers? As 
illustrated in http://www.varlena.com/varlena/GeneralBits/57.php , it is possible to 
use
one function for tracking last update times for all tables:
   CREATE OR REPLACE FUNCTION setmodtime() RETURNS TRIGGER AS '
   BEGIN
  NEW.modified_timestamp = now();
  RETURN NEW;
   END
   ' LANGUAGE 'plpgsql';
Is it possible to create only one function to insert rows in the corresponding history 
table? The name of the history table can be constructed from the original table.
And I guess all fields of the table can be retrieved from the db's metadata. Would 
that be feasible, and more importantly, would it be usable?
- Another question I have is for those who use this approach: how often do you have 
to flush those history tables,
and when you flush the tables, where do you put the flushed data? In another database 
on another server or on tape?
- Would it be possible to use the replication of Slony-I and only log into the history tables in the slave database? Or is
something similar possible? 

Thanks.
Raph 
