Re: [SQL] Updating cidr column with network operator

2005-09-26 Thread Axel Rau


On 26.09.2005 at 02:05, Michael Fuhr wrote:


On Fri, Sep 23, 2005 at 09:19:25PM +0200, Axel Rau wrote:

On 23.09.2005 at 19:32, Michael Fuhr wrote:

On Fri, Sep 23, 2005 at 06:31:17PM +0200, Axel Rau wrote:

Networks change over time, being divided or aggregated, or you just
enter wrong data during insert.


Have you considered using a CHECK constraint and/or a trigger to
ensure that the network in the network column contains the address
in the id column?  If you have and rejected the idea, what were the
reasons?
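
For illustration, a minimal sketch of such a CHECK constraint (using the
column names described in the thread: the address in id, the network in
network):

ALTER TABLE address
    ADD CONSTRAINT address_within_network CHECK (id <<= network);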


I'm sure this would be the cleanest solution but remember networks
change.


Yes, which is why it's a good idea to automatically propagate those
changes to tables that maintain redundant data.

I would not call it redundant but normalized, because a network has some
attributes common to all addresses in the net, first of all the netmask.

  If that data isn't
reliable then there's little point in maintaining it.
Reliability is a big issue in my application, because it does a kind of
data mining of internet structures (networks, Autonomous Systems and
abuse addresses). Whois data is seldom correct, so I recently switched
to using the internet routing table for the most important network data.



This constraint would have to update all details (addresses) of a
10/8 being split into a 10/9 and a 10.128/9. If this can be done
with pg, it is above my current knowledge level. (But feel free to
send a suggestion.)


See the documentation for PL/pgSQL and triggers.  You could write
a trigger function to automatically update the address table whenever
the network table changes.  Or, since the foreign keys already
cascade on update, you could have a trigger on the address table
that checks whether the new network contains the IP address, and
if it doesn't then it looks up the correct network.
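
For illustration, a rough sketch of that second approach as a BEFORE
trigger (table and column names are assumptions based on the thread:
address.id holds the address, address.network the net, and network.net
is the network table's primary key):

CREATE FUNCTION address_fix_network() RETURNS trigger AS $$
BEGIN
    -- If the stored network no longer contains the address, look up
    -- the most specific network that does.
    IF NEW.network IS NULL OR NOT NEW.id <<= NEW.network THEN
        SELECT net INTO NEW.network
          FROM network
         WHERE NEW.id <<= net
         ORDER BY masklen(net) DESC
         LIMIT 1;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER address_fix_network
    BEFORE INSERT OR UPDATE ON address
    FOR EACH ROW EXECUTE PROCEDURE address_fix_network();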

I will try this, but rest assured I will come back with questions. (-;)



The other point is performance. Inserting new addresses is a realtime
job while correcting network changes is a daily maintenance job.


Triggers on update shouldn't affect insert performance, and since
you already have a foreign key constraint to slow inserts down,
adding a CHECK constraint should have negligible impact.

The need for regular corrections is a sign that perhaps the design
could be improved.  This is one reason to avoid maintaining redundant
data if possible: you have to take additional steps to ensure that
it remains consistent.

I agree.



This update also might not give the results you want if more than
one network matches.


This is not possible, because the pk of network is the net cidr.


Yes, it is possible, because the update's join condition isn't
equality but rather containment.  If the network table contains
10.1.0.0/16 and 10.1.0.0/24, then the IP address 10.1.0.1 would
match both.

You mean unique does not imply non-overlapping for the network data
type? Oh, I didn't know that.

Who is responsible for this functional spec? This is completely contrary
to real-world experience. Can this be reconsidered for a future release?

I do understand now the background of your arguments.
The first thing I have to do is fix that network table to forbid
overlaps.

I expect that I will already find overlaps in the table, because I have
not yet written the maintenance code for deleting/reorganizing nets (-;).

Deleting involves scanning a 300 MB flat file and finding which rows
in network no longer have an entry in the flat file. I did try this in
pg in the first place but could not keep up with updating 9 million rows
3 times in 4 hours on a 2x 900MHz 2GB Powermac G4. I currently have no
idea how to solve that (possibly worth another thread).


 If your application prevents 10.1.0.0/16 and 10.1.0.0/24
from both being in the network table then *that's* the reason
multiple matches aren't possible, but it's not because of the
primary key.

--
Michael Fuhr


Thank you for taking the time to explain this,
Axel

Axel Rau, Frankfurt, Germany   +49-69-951418-0




Re: [SQL] Possible to delete record from all tables at the same time?

2005-09-26 Thread Chris Browne
[EMAIL PROTECTED] writes:
 Is it possible to delete a record from all tables in the database at
 the same time, without having to execute a separate DELETE statement
 for each table?

 I have a situation where I need to delete user records from our system.
 The user account information is spread across a handful of tables. Each
 table has an id field that contains the user's id, so the tables do
 have one field in common. When purging a user account from the system,
 I'd like to be able to issue one command that will delete the user
 account information from all tables, rather than issue separate delete
 commands for each table, something along the lines of:

DELETE FROM ALL_TABLES WHERE userId = whatever;

 Is this possible?

Yes, it is, though not via that mechanism.

http://www.postgresql.org/docs/current/static/ddl-constraints.html

This would be handled via a set of foreign keys of the ON DELETE
CASCADE sort.

Thus, you'd have one central user table, with the id field.

A table associating users with privileges might look like the
following:

CREATE TABLE user_privileges (
    privilege_no integer REFERENCES privileges ON DELETE RESTRICT,
    user_id integer REFERENCES "user" (id) ON DELETE CASCADE,
    PRIMARY KEY (privilege_no, user_id)
);

Other tables would similarly reference user(id) ON DELETE CASCADE;
whenever you delete from table user, the corresponding entries in
those tables would automatically be deleted.
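
For illustration, a minimal sketch of the referenced parent table and the
cascading delete (the "user" table definition here is an assumption, not
from the original message; the name needs quoting because user is a
reserved word):

CREATE TABLE "user" (
    id integer PRIMARY KEY,
    name text NOT NULL
);

-- With user_privileges (and any other referencing table) declaring
-- ON DELETE CASCADE, this single statement also removes that user's
-- rows in those tables:
DELETE FROM "user" WHERE id = 42;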
-- 
(reverse (concatenate 'string moc.enworbbc @ enworbbc))
http://www.ntlug.org/~cbbrowne/lisp.html
Rules  of  the Evil  Overlord  #100.  Finally,  to keep  my  subjects
permanently locked in  a mindless trance, I will  provide each of them
with free unlimited Internet access. http://www.eviloverlord.com/



[SQL] add column if doesn't exist

2005-09-26 Thread Brandon Metcalf
Is there a way to check for the existence of a column in a table other
than, say, doing a SELECT on that column name and checking the output?

I'm basically looking to do an ALTER TABLE foo ADD COLUMN bar if bar
doesn't exist.

Thanks.

-- 
Brandon



Re: [SQL] Updating cidr column with network operator

2005-09-26 Thread Michael Fuhr
On Mon, Sep 26, 2005 at 12:34:59PM +0200, Axel Rau wrote:
 On 26.09.2005 at 02:05, Michael Fuhr wrote:
  On Fri, Sep 23, 2005 at 09:19:25PM +0200, Axel Rau wrote:
   I'm sure this would be the cleanest solution but remember networks
   change.
 
  Yes, which is why it's a good idea to automatically propagate those
  changes to tables that maintain redundant data.

 I would not call it redundant but normalized, because a network has some
 attributes common to all addresses in the net, first of all the netmask.

An attribute is redundant if it repeats a fact that can be learned
without it.  If one table contains IP addresses and another contains
networks, then you can associate IP addresses and networks with a
join of the two tables; indeed, this is how the update that fixes the
network column works.  Having a network column in the address table
simply repeats what could be learned through the join.
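
For illustration, a sketch of deriving the network for each address
through such a join instead of storing it (table and column names are
assumptions based on the thread):

SELECT a.id AS address, n.net AS network
  FROM address a
  JOIN network n ON a.id <<= n.net;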

    This update also might not give the results you want if more than
    one network matches.

   This is not possible, because the pk of network is the net cidr.
 
  Yes, it is possible, because the update's join condition isn't
  equality but rather containment.  If the network table contains
  10.1.0.0/16 and 10.1.0.0/24, then the IP address 10.1.0.1 would
  match both.

 You mean unique does not imply non-overlapping for the network data
 type? Oh, I didn't know that.

 Who is responsible for this functional spec? This is completely contrary
 to real-world experience. Can this be reconsidered for a future release?

This isn't completely contrary to real-world experience -- it's just
contrary to your particular use case.  The networks 10.1.0.0/16 and
10.1.0.0/24 are in fact different, and in some applications having
both in a table's primary key column would be perfectly legitimate.
For example, a table might store network administration information,
where the administrator for 10.1.0.0/16 as a whole is Group X, and
the administrator for 10.1.0.0/24 in particular is Group Y.
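
A small sketch of that case (invented table and names), showing why a
unique primary key does not rule out containment overlap:

CREATE TABLE net_admin (
    net   cidr PRIMARY KEY,
    admin text NOT NULL
);
INSERT INTO net_admin VALUES ('10.1.0.0/16', 'Group X');
INSERT INTO net_admin VALUES ('10.1.0.0/24', 'Group Y');

-- Both rows are distinct under the primary key, yet both contain 10.1.0.1:
SELECT net, admin FROM net_admin WHERE inet '10.1.0.1' << net;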

-- 
Michael Fuhr



Re: [SQL] add column if doesn't exist

2005-09-26 Thread Peter Eisentraut
Brandon Metcalf wrote:
 Is there a way to check for the existence of a column in a table
 other than, say, doing a SELECT on that column name and checking the
 output?

SELECT * FROM information_schema.columns;

Customize to taste.
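
For example, a minimal sketch of checking one column (table and column
names are the ones from the question):

SELECT 1
  FROM information_schema.columns
 WHERE table_name = 'foo'
   AND column_name = 'bar';

-- If this returns no row, the column is missing and you can run, e.g.:
-- ALTER TABLE foo ADD COLUMN bar integer;  -- column type just for illustration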

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/



[SQL] how to do 'deep queries'?

2005-09-26 Thread jeff sacksteder
Is there supported syntax to do 'deep' queries? That is where A relates to B relates to C, returning fields from each table?

This doesn't seem to work. Is there a google-able term for this sort of query?

select 
 foo.aaa,
 bar.bbb,
 baz.ccc

from
 foo,bar,baz

where 
 foo.bar_id = bar.id
and
 bar.baz_id = baz.id





[SQL] Why doesn't the SERIAL data type automatically have a UNIQUE CONSTRAINT

2005-09-26 Thread Ferindo Middleton Jr
Is there some reason why the SERIAL data type doesn't automatically have
a UNIQUE CONSTRAINT? It seems that the main reason for using it is so
that the value for this field keeps incrementing automatically and is
never null, so any one record can be identified using it. So why not
imply that it is always UNIQUE anyway? I mean, if you were to force
another value on a SERIAL field that already had that same value, that
would throw the sequence tracking the field's current value off anyway,
so it just makes sense to me to not let a serial field be duplicated.
Let's take a poll: is there anyone out there who actually uses the
SERIAL data type who would not want it to be UNIQUE?


Ferindo



Re: [SQL] how to do 'deep queries'?

2005-09-26 Thread Stewart Ben (RBAU/EQS4) *
 Is there supported syntax to do 'deep' queries? That is where 
 A relates to B relates to C, returning fields from each table?
 
 This doesn't seem to work. Is there a google-able term for 
 this sort of query?
 
 select 
foo.aaa,
bar.bbb,
baz.ccc
 
 from
foo,bar,baz
 
 where 
foo.bar_id = bar.id
 and
bar.baz_id = baz.id

This works for me:

SELECT table1.state, table2.coursename, table3.firstname
  FROM backend.enrolments table1, backend.courses table2, backend.users
table3
 WHERE table1.user = table3.employeeno
   AND table1.course = table2.courseid;

What errors are you getting?

Best regards,

Ben Stewart

--
Robert Bosch (Australia) Pty. Ltd.
Engineering Quality Services, Student Software Engineer (RBAU/EQS4)
Locked Bag 66 - Clayton South, VIC 3169 - AUSTRALIA
Tel: +61 3 9541-7002 Fax: +61 3 9541-7700
mailto:[EMAIL PROTECTED]
http://www.bosch.com.au/ 



Re: [SQL] how to do 'deep queries'?

2005-09-26 Thread Anthony Molinaro

That query is 100% correct.

It's just an equijoin (a type of inner join) between 3 tables.

The syntax you show is how queries should be written and is more
representative of what joins between relations really are:
Cartesian products with filters applied.

The ANSI syntax, the explicit JOIN ... ON stuff, is (imho) unnecessary,
useful only for outer joins, since all the vendors did it differently.
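
For comparison, a sketch of the same query written in the explicit
JOIN ... ON form (equivalent result, different notation):

select
 foo.aaa,
 bar.bbb,
 baz.ccc
from
 foo
 join bar on foo.bar_id = bar.id
 join baz on bar.baz_id = baz.id;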



What you have will work for PostgreSQL; I used the syntax you show in my
book for every single join recipe except for outer joins.

Are you seeing errors?

regards,

Anthony

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of jeff sacksteder
Sent: Monday, September 26, 2005 8:34 PM
To: pgsql-sql@postgresql.org
Subject: [SQL] how to do 'deep queries'?



Is there supported syntax to do 'deep' queries? That
is where A relates to B relates to C, returning fields from each table?

This doesn't seem to work. Is there a google-able term for this sort of query?

select 
 foo.aaa,
 bar.bbb,
 baz.ccc

from
 foo,bar,baz

where 
 foo.bar_id = bar.id
and
 bar.baz_id = baz.id


Re: [SQL] Why doesn't the SERIAL data type automatically have a UNIQUE CONSTRAINT

2005-09-26 Thread Tom Lane
Ferindo Middleton Jr [EMAIL PROTECTED] writes:
 Is there some reason why the SERIAL data type doesn't automatically have 
 a UNIQUE CONSTRAINT.

It used to, and then we decoupled it.  I don't think "I have no use for
one without the other" translates to an argument that no one has a use
for it ...

regards, tom lane



Re: [SQL] Why doesn't the SERIAL data type automatically have a UNIQUE

2005-09-26 Thread Ferindo Middleton Jr
You're right, Tom. I'm sure someone has a use for a serial field that
isn't unique. I just assumed that it was. I guess I didn't read the
documentation closely enough. At any rate, I had a table using a serial
field that I had to restore to a previous date, and I noticed that I had
forgotten to set the sequence to the most recent value... users continued
adding data to this table and it started causing some problems. It just
seems like most situations would want it unique... to ensure integrity.
But I guess you need to choose constraints for built-in data types that
follow more of a one-size-fits-all philosophy. And hey, how hard can it
be to add the word UNIQUE when I'm creating tables?
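
For example, a minimal sketch of that (invented table name), showing
where the keyword goes:

CREATE TABLE widget (
    id   serial UNIQUE,   -- or: serial PRIMARY KEY
    name text
);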


Ferindo

Tom Lane wrote:

Ferindo Middleton Jr [EMAIL PROTECTED] writes:
  
Is there some reason why the SERIAL data type doesn't automatically have 
a UNIQUE CONSTRAINT.



It used to, and then we decoupled it.  I don't think "I have no use for
one without the other" translates to an argument that no one has a use
for it ...

regards, tom lane

  




Re: [SQL] how to do 'deep queries'?

2005-09-26 Thread jeff sacksteder
Nevermind. It's late here and I'm not thinking clearly. Problem solved.