[SQL] foreign keys with on delete cascade and triggers

2006-01-11 Thread Dirk Jagdmann
Hello,

I often create foreign keys with "on delete cascade" so I can
conveniently delete rows in multiple tables which are referenced by (a
chain of) foreign keys. Now I've run into an issue and I'd like to
hear some opinions on whether the current behaviour of PostgreSQL is
desired. I have made my tests with versions 8.0.4 and 8.1.1.

The idea behind the sample commands below is that the whole deletion
should be denied, because a trigger on a cascaded table blocked the
deletion. The trigger works as expected and prevents rows with a value
of "5" from being deleted from table "b". However, if the deletion is
triggered via the cascaded foreign key (trigger), the deletion in
table "a" is not rolled back, so the row with "5" in "a" is lost.
This of course leaves the database in an inconsistent state, because
the row in table "b" no longer references an existing row in "a".

Now I'd like to know whether this is a bug in the current form of
cascaded deletions; or whether this is desired behaviour and the
suppression of deletions via triggers is undefined in the cascaded
case; or whether this issue just hasn't been addressed yet; or
something completely different.

create table a ( i int primary key );
create table b ( f int references a on delete cascade on update cascade );
create or replace function f() returns trigger as $$
BEGIN
  IF OLD.f = 5 THEN
    RETURN NULL;
  END IF;
  RETURN OLD;
END;
$$ language plpgsql;
create trigger b_del before delete on b for each row execute procedure f();
insert into a values(5);
insert into b values(5);
delete from a where i=5;
select * from a; -- 0 rows
select * from b; -- 1 row containing '5'

--
---> Dirk Jagdmann
> http://cubic.org/~doj
-> http://llg.cubic.org



Re: [SQL] foreign keys with on delete cascade and triggers

2006-01-12 Thread Dirk Jagdmann
Hello Tom,

> If you want the whole transaction rolled back, raise an error instead
> of returning NULL.

You're right, that works.
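For reference, a minimal sketch of that variant, assuming the same
tables a and b (and the existing b_del trigger) from my previous mail:

create or replace function f() returns trigger as $$
BEGIN
  IF OLD.f = 5 THEN
    -- raising an exception aborts the whole statement, including the
    -- cascaded delete on a that fired this trigger
    RAISE EXCEPTION 'deletion of % from b is not allowed', OLD.f;
  END IF;
  RETURN OLD;
END;
$$ language plpgsql;

With this trigger the delete from a is rolled back as well, and both
tables stay consistent.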

But now I have a slightly different problem. I assume that the cascade
machinery first deletes the row in the referenced table and only then
deletes the dependent rows, i.e. the "foreign key tree" is deleted in
a top-down manner. This prevents delete triggers on the cascaded
tables from querying the referenced table, since the referenced row is
already gone. The following code shows what I mean:

create table a ( i serial primary key, name text );
create table b ( f int references a on delete cascade );
create or replace function f() returns trigger as $$
DECLARE
  n text;
BEGIN
  SELECT name INTO n from a where i=OLD.f;
  IF FOUND THEN
    RAISE NOTICE '% deleted me', n;
  END IF;
  RETURN OLD;
END;
$$ language plpgsql;
create trigger b_del before delete on b for each row execute procedure f();
insert into a(name) values('Dirk');
insert into b select currval('a_i_seq');
insert into a(name) values('Tom');
insert into b select currval('a_i_seq');
delete from b where f=1; -- will raise the notice
delete from a where i=2; -- won't raise anything

If the "foreign key tree" would be deleted in a bottom-up (or
depth-first) manner the second delete would be able to retrieve the
row in table a.

Now I'd like to know if the current order of deletions in PostgreSQL
is intended in the top-down way or if that could be changed?
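A workaround I can think of (an untested sketch, and only useful if
the information is needed for the notice itself) would be to put the
trigger on table a instead, where the row is still visible before the
cascade runs:

create or replace function a_del() returns trigger as $$
BEGIN
  -- OLD is still available here; the cascade to b happens afterwards
  RAISE NOTICE '% is about to be deleted (cascading to b)', OLD.name;
  RETURN OLD;
END;
$$ language plpgsql;
create trigger a_del before delete on a for each row execute procedure a_del();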

--
---> Dirk Jagdmann
> http://cubic.org/~doj
-> http://llg.cubic.org



Re: [SQL] table constraint + INSERT

2006-05-18 Thread Dirk Jagdmann

> I have a simple table with a constraint:
>
> CREATE TABLE "PART"
> (
>   "P_PARTKEY" int4 NOT NULL,
>   "P_RETAILPRICE" numeric,
>   CONSTRAINT "PART_PRIMARY" PRIMARY KEY ("P_PARTKEY"),
>   CONSTRAINT "PART_check" CHECK ("P_RETAILPRICE" = (90000 + "P_PARTKEY" / 10 + "P_PARTKEY" / 100))
> );
>
> And I try to insert a row:
> INSERT INTO "PART" ("P_PARTKEY","P_RETAILPRICE") VALUES(999,90109.89);
>
> but it fails: ERROR:  new row for relation "PART" violates check constraint
> "PART_check"
>
> When you check it with your head or a pocket calculator, this INSERT
> seems to be correct. Is it some floating point mystery? Is there some
> trick?


Postgres is likely doing integer arithmetic:

test=# select 90000+999/10+999/100;
 ?column?
----------
    90108
(1 row)
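For comparison (my own addition, not from the original mail), the same
expression with the operands cast to numeric yields the value you
inserted:

select 90000 + 999::numeric/10 + 999::numeric/100;  -- 90109.89 (plus trailing zeros)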

So you have to cast the operands in your check constraint to numeric:

CREATE TABLE PART
(
 P_PARTKEY int4 NOT NULL,
 P_RETAILPRICE numeric,
 CONSTRAINT PART_PRIMARY PRIMARY KEY (P_PARTKEY),
 CONSTRAINT PART_check CHECK (P_RETAILPRICE =
   (90000 + P_PARTKEY::numeric / 10 + P_PARTKEY::numeric / 100))
);

However, if this were your real SQL schema, I'd recommend dropping the
stored column and using a view to calculate the P_RETAILPRICE column:

CREATE TABLE PART
(
 P_PARTKEY int4 NOT NULL,
 CONSTRAINT PART_PRIMARY PRIMARY KEY (P_PARTKEY)
);
create view PARTV as
select P_PARTKEY,
       90000 + P_PARTKEY::numeric / 10 + P_PARTKEY::numeric / 100 as P_RETAILPRICE
from PART;

--
---> Dirk Jagdmann
> http://cubic.org/~doj
-> http://llg.cubic.org



Re: [SQL] table constraint + INSERT

2006-05-18 Thread Dirk Jagdmann

Too bad, some code got truncated...

CREATE TABLE PART
(
 P_PARTKEY int4 NOT NULL,
 P_RETAILPRICE numeric,
 CONSTRAINT PART_PRIMARY PRIMARY KEY (P_PARTKEY),
 CONSTRAINT PART_check CHECK (P_RETAILPRICE =
   (90000 + P_PARTKEY::numeric / 10 + P_PARTKEY::numeric / 100))
);

and the second code should read:

create view PARTV as
select P_PARTKEY,
       90000 + P_PARTKEY::numeric / 10 + P_PARTKEY::numeric / 100 as P_RETAILPRICE
from PART;

--
---> Dirk Jagdmann
> http://cubic.org/~doj
-> http://llg.cubic.org



Re: [SQL] keeping last 30 entries of a log table

2006-06-20 Thread Dirk Jagdmann

> Column id should indeed be indexed. Anyway, I'm not sure about any
> performance improvement from that last method, as the most expensive
> operation might be the DELETE, not the SELECT, when dealing with a
> huge volume of data.


Why worry? There is a maximum of 30 entries in this table anyway, so
even the most unoptimized select and delete combinations should be
very fast...
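For reference, the statement under discussion could be as simple as
this untested sketch (assuming the table is called log and id is its
serial primary key):

delete from log
 where id not in (select id from log order by id desc limit 30);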

--
---> Dirk Jagdmann
> http://cubic.org/~doj
-> http://llg.cubic.org



Re: [SQL] Best way to simulate Booleans

2009-07-07 Thread Dirk Jagdmann
> The most transportable method would be to use either a char(1) or an
> int with a check constraint.
>
> mybool char(1) check (mybool in ('t','f'))
> mybool int check (mybool >= 0 and mybool <= 1)

I would decide depending on the application requirements. If the
Oracle schema should look as similar to the PostgreSQL one as
possible, use the char(1). If you have lots of application code, the
int is probably better, since you can use the corresponding
programming-language variable (presumably an integer as well) directly
in your expressions (if, while).
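A minimal sketch of both variants as full DDL (my own illustration,
not from the quoted mail; it should work on both Oracle and
PostgreSQL):

create table t_char (
  mybool char(1) not null check (mybool in ('t','f'))
);
create table t_int (
  mybool int not null check (mybool between 0 and 1)
);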

-- 
---> Dirk Jagdmann
> http://cubic.org/~doj
-> http://llg.cubic.org



Re: [SQL] please help me on regular expression

2010-02-03 Thread Dirk Jagdmann
Be careful to use proper (double) escaping when working with
backslashes in regular expressions! In the first statement below the
single backslash in \. is consumed by the string literal, leaving a
bare '.' that matches any character, which is why '70a5' wrongly
matches:

# select '70a5' ~ e'\\d+\.\\d+';
 ?column?
----------
 t
(1 row)

# select '70a5' ~ e'\\d+\\.\\d+';
 ?column?
----------
 f
(1 row)

# select '70.5' ~ e'\\d+\\.\\d+';
 ?column?
----------
 t
(1 row)
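Side note (my own addition): with standard_conforming_strings turned
on (it is off by default in current releases), ordinary single-quoted
strings no longer treat backslashes as escape characters, so a single
level of escaping is enough:

set standard_conforming_strings = on;
select '70.5' ~ '\d+\.\d+';  -- t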

-- 
---> Dirk Jagdmann
> http://cubic.org/~doj
-> http://llg.cubic.org



[SQL] optimal insert

2006-10-08 Thread Dirk Jagdmann

Hello experts,

I have a database that contains three tables:

create table a (
 id serial primary key,
 ... -- some more fields not relevant for my question
);
create table b (
 id serial primary key,
 ... -- some more fields not relevant for my question
);
create table a_b (
 a int not null references a,
 b int not null references b
);

Tables a and b have already been filled with lots of rows. Now my
application needs to insert the relationships between a and b into
table a_b, and it currently does so with inserts like the following:

insert into a_b(a,b) values(1,100);
insert into a_b(a,b) values(1,200);
insert into a_b(a,b) values(1,54);
insert into a_b(a,b) values(1,4577);

So for a batch of inserts the value of a stays the same, while for b
arbitrary values are inserted. Now I wonder whether PostgreSQL offers
a smarter way to insert those values? A solution may involve some
plpgsql code.
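For illustration, the kind of plpgsql helper I have in mind would be
something like this (untested sketch, using the table names above),
though I don't know whether it buys anything over the individual
inserts:

create or replace function insert_a_b(a_id int, b_ids int[]) returns void as $$
BEGIN
  -- insert one a_b row per element of the b_ids array
  FOR i IN 1 .. coalesce(array_upper(b_ids, 1), 0) LOOP
    INSERT INTO a_b(a, b) VALUES (a_id, b_ids[i]);
  END LOOP;
END;
$$ language plpgsql;

-- called as: select insert_a_b(1, array[100,200,54,4577]);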

--
---> Dirk Jagdmann
> http://cubic.org/~doj
-> http://llg.cubic.org



Re: [SQL] optimal insert

2006-10-10 Thread Dirk Jagdmann

Hello Aaron,

thank you for your suggestion. I will have to think about whether
something similar would be of any benefit for my data.

--
---> Dirk Jagdmann
> http://cubic.org/~doj
-> http://llg.cubic.org



Re: [SQL] optimal insert

2006-10-10 Thread Dirk Jagdmann

Hello George,


> And don't forget that \COPY and especially COPY are usually much faster
> (and, IMHO, easier to compose/maintain) than gobs of INSERTs.


I did not forget, but my application uses embedded SQL (with the ecpg
preprocessor) and I don't think it can handle COPY :(

--
---> Dirk Jagdmann
> http://cubic.org/~doj
-> http://llg.cubic.org



Re: [SQL] storing access rights in a postgres database

2006-10-10 Thread Dirk Jagdmann

Hello tv,

I think the PostgreSQL array column type fits your case really well.
I would just add an array of type integer (or whatever the primary key
of your role table is) to your company, project, module, ... tables.
Then you can easily check whether a role has access to a project row
by testing whether the role's primary key is contained in the roles
array, and you can easily select over the entire project table
matching any value in the roles array.

Here are some (untested) SQL statements to clarify my suggestion:

create table role (
 id serial primary key,
 name text );
create table project (
 id serial primary key,
 name text,
 roles int[] );
create table company (
 id serial primary key,
 name text,
 roles int[] );
insert into role values(1,'you');
insert into role values(2,'me');
insert into project values(1,'a','{1,2}');
insert into project values(2,'b','{2}');
-- check if I can access a project
select id from project where name='a' and 2=ANY(roles); -- 2 is 'my' role id
-- get all companies I have access to
select id,name from company where 2=ANY(roles);

I think foreign key constraints on the array elements cannot be
enforced with the standard foreign key triggers/functions, so you
would have to write your own plpgsql trigger if this is a mandatory
requirement.
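A rough, untested sketch of such a trigger for the project table (the
role/roles names are from the example above):

create or replace function project_check_roles() returns trigger as $$
BEGIN
  IF NEW.roles IS NOT NULL THEN
    -- check every element of the roles array against the role table
    FOR i IN coalesce(array_lower(NEW.roles, 1), 1)
          .. coalesce(array_upper(NEW.roles, 1), 0) LOOP
      PERFORM 1 FROM role WHERE id = NEW.roles[i];
      IF NOT FOUND THEN
        RAISE EXCEPTION 'role % does not exist', NEW.roles[i];
      END IF;
    END LOOP;
  END IF;
  RETURN NEW;
END;
$$ language plpgsql;
create trigger project_check_roles before insert or update on project
  for each row execute procedure project_check_roles();

Note that this does not catch the reverse case (deleting a role that
is still referenced in some array), which would need a similar trigger
on the role table.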

--
---> Dirk Jagdmann
> http://cubic.org/~doj
-> http://llg.cubic.org



[SQL] MD5 sums of large objects

2007-04-08 Thread Dirk Jagdmann

Hello all,

I have a database containing lots of large objects. Now I'd like to
compare large objects in my database, and I thought of having a
function which computes a hash sum (MD5, SHA-1 or whatever) of a large
object, so I can use it in queries:

create function lo_md5(id oid) returns text...

Now I don't know whether something like this is already included in
the PostgreSQL distribution, can be found somewhere on pgFoundry, or,
failing that, how to write it myself. If I have to program it myself I
would go for a C-language function which uses the libpq large-object
functions to compute the hash sums.

--
---> Dirk Jagdmann
> http://cubic.org/~doj
-> http://llg.cubic.org



Re: [SQL] MD5 sums of large objects

2007-04-08 Thread Dirk Jagdmann

Hello Michael,

this works like a charm, although I did fix the arguments for lo_lseek:

CREATE OR REPLACE FUNCTION md5(id oid)
RETURNS text
as $$
DECLARE
 fd        integer;
 size  integer;
 hashval   text;
 INV_READ  constant integer := 262144; -- 0x40000 from libpq-fs.h
 SEEK_SET  constant integer := 0;
 SEEK_END  constant integer := 2;
BEGIN
 IF id is null THEN
   RETURN NULL;
 END IF;
 fd   := lo_open(id, INV_READ);
 size := lo_lseek(fd, 0, SEEK_END);
 PERFORM lo_lseek(fd, 0, SEEK_SET);
 hashval := md5(loread(fd, size));
 PERFORM lo_close(fd);
 RETURN hashval;
END;
$$
language plpgsql stable strict;
comment on FUNCTION md5(id oid) is 'Calculates the md5 sum of a large object.';
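Usage is then straightforward, e.g. to find duplicate large objects
(table and column names are made up for illustration):

select md5(doc_oid), count(*)
from documents
group by md5(doc_oid)
having count(*) > 1;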

I vote for this function being included somewhere in the contrib
directory, as you often don't need the full power of pgcrypto if md5
suffices for your hashing needs.

--
---> Dirk Jagdmann
> http://cubic.org/~doj
-> http://llg.cubic.org



Re: [SQL] MD5 sums of large objects

2007-04-09 Thread Dirk Jagdmann

Hello Michael,

thanks for the comments on my corrected function.


> You could make a proposal in pgsql-hackers but I think 8.3 is in
> feature freeze so don't expect to see it until 8.4, if it's accepted
> at all.  There's always PgFoundry :-)


I'll now see how this performs in my application, and if I'm satisfied
with the solution I'll post something on pgsql-hackers or pgfoundry.

--
---> Dirk Jagdmann
> http://cubic.org/~doj
-> http://llg.cubic.org



Re: [SQL] EXECUTE 'EXECUTE ...' or how to use dynamic prepared statements ?

2007-09-04 Thread Dirk Jagdmann
Hello Marc,

first I tried to solve the update of your tables. The example you gave
can be done with a single update statement like the following:

update test_table
   set mygroup = (select t.mygroup
                    from test_table as t
                   where t.family = test_table.family
                     and t.rang = test_table.rang + 1)
 where rang = 0;

If you have to write a function which receives the table name as an
argument, it would look like this:

CREATE OR REPLACE FUNCTION test_function(tablename text)
RETURNS integer AS $BODY$
BEGIN
  -- the correlated subquery has to use the dynamic table name as well
  EXECUTE 'update ' || tablename || '
              set mygroup = (select t.mygroup
                               from ' || tablename || ' as t
                              where t.family = ' || tablename || '.family
                                and t.rang = ' || tablename || '.rang + 1)
            where rang = 0';
  RETURN 0;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;
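It is then called like this (sketch):

select test_function('test_table');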

Generally you should avoid explicit for/loop constructs in your stored
procedures if the action can be expressed as a single SQL statement,
because the optimizer can then produce a better execution plan.

-- 
---> Dirk Jagdmann
> http://cubic.org/~doj
-> http://llg.cubic.org



Re: [SQL] Composite UNIQUE across two tables?

2008-03-11 Thread Dirk Jagdmann
Hello Jamie,

I vote against duplicating site_group_id in the users table and
against the proposed unique constraint with a function, because both
might break if you ever want to change the relationship between a site
and a site group.

My advice would be to have two triggers for insert/update on the sites
and users tables that check the uniqueness of the username within the
site group. I have made some tests with inserts and updates on
existing users and sites, and the two functions below seem to work.

One remark about your schema: if you use PostgreSQL, use the "text"
datatype for strings, so you don't limit yourself on string length.
For my tests I have modified your posted schema a bit, to unify all
column names to "name". If you haven't done so already, you should set
up an extensive test case which checks every combination of insert,
update and delete on the three tables and verifies whether each is
supposed to work or should be rejected by your uniqueness constraints.

CREATE TABLE site_groups (
   id serial primary key,
   name text not null
);

CREATE TABLE sites (
   id serial primary key,
   site_group_id integer not null references site_groups,
   name text not null
);

CREATE TABLE users (
   id serial,
   site_id integer not null references sites,
   name text not null
);

create or replace function user_check_unique_site_group()
returns trigger
as $$
declare
  c int;
  sg int;
begin
  -- get site_group id from site
  select into sg site_group_id from sites where id = NEW.site_id;

  -- check if we find any equal user names in the site group
  select into c count(*)
  from users, sites
  where users.site_id = sites.id
    and sites.site_group_id = sg
    and users.name = NEW.name
    -- exclude the row being inserted/updated itself, otherwise an
    -- update that keeps the name unchanged would always fail
    and users.id <> NEW.id;

  -- nothing found, this user name is ok
  if c = 0 then
    return NEW;
  end if;

  raise exception 'username is not unique with site group';
  return NULL;
end;
$$ language plpgsql;

create trigger user_check_unique_site_group
before update or insert
on users
for each row
execute procedure user_check_unique_site_group();

create or replace function sites_check_unique_username()
returns trigger
as $$
declare
  c int;
begin
  -- if the site group is unmodified we're safe
  if NEW.site_group_id = OLD.site_group_id then
    return NEW;
  end if;

  -- check if the same username is in the old and new site group
  select into c count(*)
  from users, sites
  where users.site_id = sites.id
    and sites.site_group_id = NEW.site_group_id
    and users.name in (
      select users.name
      from users, sites
      where users.site_id = sites.id
        and sites.site_group_id = OLD.site_group_id
    );

  -- nothing found, we're safe
  if c = 0 then
    return NEW;
  end if;

  raise exception 'username is not unique with site group';
  return NULL;
end;
$$ language plpgsql;

create trigger sites_check_unique_username
before update
on sites
for each row
execute procedure sites_check_unique_username();
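A quick smoke test (my own illustration, assuming freshly created
tables so the serial ids start at 1):

insert into site_groups(name) values ('g1');
insert into sites(site_group_id, name) values (1, 's1');
insert into sites(site_group_id, name) values (1, 's2');
insert into users(site_id, name) values (1, 'alice');  -- ok
insert into users(site_id, name) values (2, 'alice');  -- raises 'username is not unique with site group'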

-- 
---> Dirk Jagdmann
> http://cubic.org/~doj
-> http://llg.cubic.org



Re: [SQL] Composite UNIQUE across two tables?

2008-03-12 Thread Dirk Jagdmann
>  > My advice would be to have two triggers for insert/update on the sites
>  > and users tables that check the uniqueness of the username within the
>  > site group. I have made some tests with inserts and updates on the
>  > existing users and sites and these two functions seem to work.
>
>  I think this is the way that I'll go.  I'd hoped to somehow express this
>  solely in the design, if you know what I mean (e.g. without writing
>  SPs), but it looks like this is the best way to do it.

Well, I thought about that, but finally came to the conclusion that
standard SQL constraints cannot express this kind of inter-table
relationship. As I'm not a fan of (artificially) breaking up tables, I
just wrote those two pl/pgsql functions, because I'd rather have a
simple table design and some complicated constraint-checking functions
than the other way around.

>  Thank you for your taking the time to write this up, it's very much
>  appreciated.

Most people reading this list like to think about and learn from other
people's problems.

-- 
---> Dirk Jagdmann
> http://cubic.org/~doj
-> http://llg.cubic.org



Re: [SQL] Insert a space between each character

2008-10-08 Thread Dirk Jagdmann
> Use a regular expression, e.g.:
> select trim(regexp_replace('foobarbaz', '(.)', E'\\1 ', 'g'));

And if we only add a space after characters that are not the last
character in the string, we can get rid of the outer trim():

# select regexp_replace('foobarbaz', E'(.)(?!$)', E'\\1 ', 'g');
  regexp_replace
---
 f o o b a r b a z
(1 row)

-- 
---> Dirk Jagdmann
> http://cubic.org/~doj
-> http://llg.cubic.org



Re: [SQL] Add: Special sort question

2009-04-01 Thread Dirk Jagdmann
I don't think this can be done with the default PostgreSQL functions
and operators, because you must compare two different columns of two
rows depending on which row is on which side of the "<" comparison.

But I think you can do this with the following steps:

1) create a new type as a 4-tuple of start_lat, end_lat, start_lng, end_lng.
2) write a comparison function for this type
3) write an SQL function to convert 4 values into your new type (for
example: ToMyType(start_lat, end_lat, start_lng, end_lng) returns
MyType...)
4) use ToMyType in the ORDER BY clause of your select

If this works, I'm interested in seeing working example code :-)
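To illustrate what I mean by steps 1 and 3, a very rough and untested
sketch (step 2 would still need real comparison support, e.g. an
operator class, for the ORDER BY to work; table and column names are
assumptions):

create type mytype as (start_lat numeric, end_lat numeric,
                       start_lng numeric, end_lng numeric);
create or replace function ToMyType(numeric, numeric, numeric, numeric)
returns mytype as $$
  select row($1, $2, $3, $4)::mytype;
$$ language sql immutable;
-- step 4 would then be something like:
-- select * from your_table order by ToMyType(start_lat, end_lat, start_lng, end_lng);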

-- 
---> Dirk Jagdmann
> http://cubic.org/~doj
-> http://llg.cubic.org


Re: [SQL] ENUM vs DOMAIN vs FKyed loookup table

2009-04-12 Thread Dirk Jagdmann
> When you need to choose between enum types, domain types or lookup tables
> with foreign keys, what do you usualy choose?

When I have a column whose valid values I know when writing my tables
and that will *never* change, I use an enum. For example, a human
gender type (and remember that there are four values for human sex if
you want to model it completely).
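A minimal sketch of the enum case (the value list is just my
illustration):

create type human_gender as enum ('unknown', 'male', 'female', 'not applicable');
create table person (
  id serial primary key,
  gender human_gender not null default 'unknown'
);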

Otherwise I use a simple table with a primary key of type 'text' that
is used as a foreign key in the other table, so I can change or extend
the valid values later. No join needed! Remember that PK/FK columns do
not always have to be of type 'serial'.

The reason is that for a user of the SQL language there is hardly any
difference between using an ENUM and a text type, since both are
strings which must be enclosed in single quotes. Of course, under the
hood there is a difference for the PostgreSQL parser and executor, but
we can ignore that here.

To revisit your example I would do it this way:

CREATE TABLE code_type (
  t text not null primary key
);
insert into code_type values ('Unapproved'), ('ApprovedByEmail'),
('ApprovedByAdmin');

CREATE TABLE codes (
   code_id integer,
   code_value integer,
   code_type text not null references code_type
);
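The nice part is that adjusting the valid values later is plain DML
(my own illustration):

insert into code_type values ('ApprovedByPhone');      -- new value is legal immediately
insert into codes values (1, 42, 'ApprovedByPhone');   -- ok
insert into codes values (2, 43, 'SomethingElse');     -- fails with a foreign key violation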

-- 
---> Dirk Jagdmann
> http://cubic.org/~doj
-> http://llg.cubic.org
