Re: [SQL] My Indices doesn't work

2000-08-15 Thread Stephan Szabo


First, make sure you ran vacuum analyze to update
the statistics for the table.

If a large portion of your table is going to be scanned,
a Seq Scan is often faster than an Index Scan because the
index scan can cause random seeks within the heap file (the
transaction commit state isn't stored in the index, so there
is still a read from the heap to check whether each row is
valid).  The optimizer seems to think
333 records match num>2.  Is this reasonable?
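
For what it's worth, a quick way to check both (a rough sketch using the
t1 table from your message below):

vacuum analyze t1;
explain select * from t1 where num > 2;
-- compare the estimated row count in the plan against the real one:
select count(*) from t1 where num > 2;

If the estimate is still way off after the vacuum analyze, that would be
worth following up on.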

Stephan Szabo
[EMAIL PROTECTED]

On Tue, 15 Aug 2000, Martin Dolog wrote:

> Hi *,
> 
> I have pgsql7.0.2 on Linux2.2.16 and table with following indices:
> 
> CREATE TABLE T1 (NUM INT NOT NULL, NAME VARCHAR(10) NOT NULL, POP
> VARCHAR(10) NOT NULL);
> CREATE INDEX T1_I_NUM ON T1(NUM);
> CREATE INDEX T1_I_NAME ON T1(NAME);
> CREATE INDEX T1_I_POP ON T1(POP);
> 
> ... and some data, but I really don't understand how indices work, look
> at that:
> 
> template1=# explain select * from t1 where num=2;
> NOTICE:  QUERY PLAN:
> Index Scan using t1_i_num on t1  (cost=0.00..8.14 rows=10 width=28)
> 
> what is ok, ***BUT***
> 
> template1=# explain select * from t1 where num>2;
> NOTICE:  QUERY PLAN:
> Seq Scan on t1  (cost=0.00..22.50 rows=333 width=28)
> 
> 
> WHY SEQ SCAN ?!!?
> 
> 
> thank you
> 
> -- 
> 
> #md
> 




Re: [SQL] copy from

2000-08-15 Thread Stephan Szabo


Sort of.  You can give the field a default value of
nextval() which means that if you do not specify
the column in an insert, it automatically gets the default
value which should be the next value in the sequence.
Note that omitting the column is different from inserting a
NULL into the field.

(Example:
sszabo=# create sequence xyzseq; 
CREATE
sszabo=# create table xyzseqtest ( a int default nextval('xyzseq'), b
int);
CREATE
sszabo=# insert into xyzseqtest (b) values (2);
INSERT 172188 1
sszabo=# insert into xyzseqtest (b) values (3);
INSERT 172189 1
sszabo=# select * from xyzseqtest;
 a | b 
---+---
 1 | 2
 2 | 3
(2 rows)
)

There are some corner cases involving rules and triggers, where another
row may be inserted or the default may be evaluated a second time when
you want to get back the value you just inserted, but in general it works.

On Tue, 15 Aug 2000, Adam Lang wrote:

> Hmmm... well, I don't think I have an "explicit" nextval.  I created the
> table and then I did a create sequence broker_id;
> 
> Are you implying that I can set the field to automatically create a nextval?




Re: [SQL] PL/PGSQL Function problem.

2000-08-15 Thread Stephan Szabo


Actually, you can't do much about it in PL/pgSQL,
because IIRC there isn't currently a way to build a query
against a table name passed in as a parameter.

I don't remember when PL/Tcl came into existence (I don't
know Tcl), but you'd be able to do it in that, and
you can also do it from SPI in a C function.

And finally, the obligatory upgrade message... Upgrade
to 7.0.2 if you can, it's nice and stays crunchy in 
milk. :)

Stephan Szabo
[EMAIL PROTECTED]

On Tue, 15 Aug 2000, Dirk Elmendorf wrote:

> RedHat 6.2 /Postgres 6.53
> I'm still very new at PL/PGSQL but I have looked thru all the 
> documentation I could find before sending this in.
> 
> 
> I have a function that I would like to be able to apply to multiple 
> tables without having to duplicate the code. I would like to be able 
> to pass in the table name I would like to apply the function to. I 
> cannot seem to get this to work.  Below is what I have so far.  I 
> would like to be able to have
> 
> computers_equal(comp_one,comp_two,comp_table_one,comp_table_two);
> 
> I cannot seem to find a way to pass in a text string to be used in 
> the select into statements.
> 
> Anyone got any ideas on how I fix this other that writing a function 
> for all the permutations (ordered_parts vs used_parts , ordered_parts 
> vs  new_parts, used_parts vs ordered_parts, new_parts vs 
> ordered_parts,new_parts vs used_parts, used_parts vs ordered_parts)
> 
> Any advice would be appreciated.




Re: [SQL] copy from

2000-08-15 Thread Stephan Szabo



On Tue, 15 Aug 2000, Adam Lang wrote:

> Gotcha. Now, if I do an insert and just don't specify that field at all, it
> will, use the default value, correct?  As long as I don't "touch" the field
> with anything it uses the default.

Right, as long as you don't specify the field in the column list.

There's a special case I should mention.  If you want to insert only
default values into all columns, the correct form is
"insert into table default values".  It usually doesn't come up,
but it could if you had a table whose only column was a sequence value.
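
For instance, a tiny sketch (the table name is made up):

create table onlyseq (a serial);
insert into onlyseq default values;
select * from onlyseq;   -- one row, with a = 1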




Re: [SQL] Continuous inserts...

2000-08-17 Thread Stephan Szabo


On Thu, 17 Aug 2000, Joerg Hessdoerfer wrote:

> Hi!
> 
> I have an application, where I have to insert data into a table at several
> rows per second, 24 hours a day, 365 days a year.
> 
> After some period (a week, maybe a month) the data will be reduced to some
> degree and deleted from the table.
> 
> As far as I understood, I would have to use VACUUM to really free the table
> from deleted rows - but VACUUM (esp. on a table with several million rows)
> takes some time and prevents me from inserting new data.
> 
> Now, I thought I could just rename the table, inserting into a temp table, and
> switch the tables back after VACUUMing. Ideally, this should work unnoticed
> (and thus without prog. effort) on the client (inserter) side.
> 
> Question: would it work to use a transaction to perform the rename?
> 
> i.e.: continuous insert into table 'main' from client.
> 
>  From somewhere else, execute:
> 
> begin;
> alter table main rename to vac_main;
> create table main (...);
> end;
> 
> would the inserter notice this? Read: would ALL inserts AT ANY TIME succeed?

Unfortunately -- no.  Also, bad things can happen if the transaction
errors since the rename happens immediately.  There's been talk on 
-hackers about this subject in the past.

However, you might be able to do something like this, though
I'm not sure it'll work and it's rather weird:

Have three tables you work with: a, b and c.

Set up a rule on a to redirect inserts into b.
Insert into a.
When you want to vacuum, change the rule to insert into c.
Vacuum b.
Change the rule back.
Move the rows from a and c into b.
Vacuum c.

[You will slowly lose space in a, but it should only
be an occasional row, since rows only land in a during
the window after you've dropped the insert rule to b but
haven't yet added the insert rule to c -- not too many
rows there.]
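
Very roughly, the rule swapping might look like this (an untested sketch;
the table, rule and column names are all made up):

create table a (val int4);
create table b (val int4);
create table c (val int4);

-- normal operation: clients insert into a, rows actually land in b
create rule a_to_b as on insert to a do instead
    insert into b values (new.val);

-- before vacuuming b, redirect inserts into c
drop rule a_to_b;
create rule a_to_c as on insert to a do instead
    insert into c values (new.val);
vacuum b;

-- switch back, then fold the strays from a and c into b
drop rule a_to_c;
create rule a_to_b as on insert to a do instead
    insert into b values (new.val);
insert into b select * from a;
delete from a;
insert into b select * from c;
delete from c;
vacuum c;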





Re: [SQL] Continuous inserts...

2000-08-18 Thread Stephan Szabo


On Fri, 18 Aug 2000, Joerg Hessdoerfer wrote:
> Good idea - I immediately tested it - rules rule! That seems to work perfectly,
> and the client doesn't even see it happen (except for 'selects', one would 
> have to setup
> a rule to return something meaningful then...).
> 
> I did:
> Two tables, a and b.
> Normally, insert into a.
> When Vacuuming starts, create rule on a to insert into b
> Vacuum a
> drop rule
> copy records from b to a
> vacuum b
> 
> Why did you suppose three tables? Did I overlook something?

I didn't try with vacuum, I just did a table lock and that
seemed to still hang the inserts with two tables, so I figured 
maximum safety was adding the third table.  If it works with two 
that's much cooler.  Was this with real data or just a small test 
set? 




Re: [SQL] update rule loops

2000-08-18 Thread Stephan Szabo


On Fri, 18 Aug 2000, Poul L. Christiansen wrote:

> Hi
> 
> I'm trying to make a field in my table (datechanged) to automatically be
> updated with the value 'now()' when an update on the table occurs.
> 
> plc=# create rule datechanged_radius AS ON update to radius do update
> radius set datechanged ='now()';
> CREATE 22025360 1
> plc=# update radius set destinationip = '212.055.059.001';
> ERROR:  query rewritten 10 times, may contain cycles
> 
> This means that it's going in a loop, because the rule triggers itself.
> 
> Is there another way to do this?

Two ways I can think of.  One is to use a "view": you do the work
on the view, while the underlying table is named something else, and
an INSTEAD rule on the view does the update on that table along with
the setting of datechanged.  (Not 100% sure of this,
but it should work.)

Second is to use triggers.  Write a PL/pgSQL BEFORE UPDATE trigger;
assigning to NEW.datechanged should work, I believe.
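
For the trigger route, something like this should do it (an untested
sketch; it assumes datechanged is a timestamp-ish column, and the function
and trigger names are made up):

create function radius_stamp() returns opaque as '
begin
    new.datechanged := now();
    return new;
end;
' language 'plpgsql';

create trigger radius_stamp_trig before update on radius
    for each row execute procedure radius_stamp();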






Re: [SQL] Beginner problems with functions (Was: Is this the wrong list?)

2000-08-17 Thread Stephan Szabo


On Thu, 17 Aug 2000, Andreas Tille wrote:

> On Wed, 16 Aug 2000, Stephan Szabo wrote on [EMAIL PROTECTED]:
> (sorry for the crossposting, just to tell the list that I now switched to
> the right one hopefully)
> 
> > I think the thing is that most people don't have basic examples, they
> Perhaps someone knows one nice doc.  I only found some hints for
> ma problems in the PGSQL-Part of the Bruce Momjian book.  But
> may be PGSQL is in fact the thing I want and so I may possibly stick to
> that.  Now here is the first question about that:
> 
> web=# create function atTest ( varchar )
> web-#   returns bool
> web-#   As ' BEGIN
> web'# Select * From Mitarbeiter Where FName = $1 ;
> web'# IF NOT FOUND THEN
> web'#   RETURN ''f'' ;
> web'# ELSE
> web'#   RETURN ''t'' ;
> web'# END IF ;
> web'#   END; '
> web-#   language 'plpgsql' ;
> CREATE
> web=# SELECT attest ( 'Tille' ) ;
> ERROR:  unexpected SELECT query in exec_stmt_execsql()
> web=# 
> 
> Could somebody enlighten me, what here goes wrong?

What you may need to do is declare a variable of type record
and do SELECT INTO <recordvar> * FROM ... rather than just
the SELECT.
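
In other words, something like this (a lightly reworked, untested version
of your function):

create function atTest(varchar) returns bool as '
declare
    rec record;
begin
    select into rec * from Mitarbeiter where FName = $1;
    if not found then
        return ''f'';
    else
        return ''t'';
    end if;
end;
' language 'plpgsql';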

> > CREATE FUNCTION hobbies(person)
> >RETURNS setof hobbies_r 
> >AS 'select * from hobbies_r where person = $1.name'
> >LANGUAGE 'sql';
> But it returns just did:
> 
> 
> web=# SELECT my_test ( ) ;
> 
>  ?column?  
> ---
>  136437368
>  136437368
>  136437368
>  ...
> 
> I had the hope to get the contents of the table like if I would
> do 'SELECT * FROM table;'

Yeah, SETOF a tuple type seems fairly weird.  SETOF a basetype, if
you do a SELECT of a single column FROM the table, seems to work though.
I sort of expected that the ones in the regression test would
either do something understandable or at least error if they
are testing for brokenness.




Re: [SQL] Speed or configuration

2000-08-20 Thread Stephan Szabo

(It won't really be forever, just probably
a really long time)

You can usually get around it by rewriting the
query to use EXISTS rather than IN.
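
For the query you show below, the rewrite would look something like this
(untested, but it should be equivalent here):

select xx1, xx2, xx3 from tableX
where exists
  (select 1 from tableY
   where tableY.field1 = tableX.field1
     and field2 = NNN and field3 = NNN2 and field4 = NNN4);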

Stephan Szabo
[EMAIL PROTECTED]

On Sun, 20 Aug 2000, Franz J Fortuny wrote:

> At our company we are presently using a commercial
> database that generates results from this query:
> 
> select xx1,xx2,xx3 from tableX
> where field1 in
> (select field1 from tableY where
> field2=NNN and field3=NNN2 and field4=NNN4)
> 
> tableX has 790,000 rows, and an index on field1
> tableY has about 175,000 rows and an index that includes
> field2,field3,field4 (and 2 other fields not being used
> here)
> 
> Of course, the order in the indexes is the logical one.
> 
> I have made copies of the tables from the commercial SQL
> server to PostgreSQL and PostgreSQL is consistently
> faster in things like count(*) and certain other
> queries.
> 
> But when it comes to the above mentioned query,
> PostgreSQL simply stays there, forever. The postgres
> backend must be killed in order to free the client
> program.




Re: [SQL] Re: Beginner problems with functions

2000-08-21 Thread Stephan Szabo


On Mon, 21 Aug 2000, Andreas Tille wrote:

> On Thu, 17 Aug 2000, Stephan Szabo wrote:
> 
> > What you may need to do is declare a variable of type record
> > and do SELECT INTO  * From ... rather than just
> > the SELECT.
> Thanks, that worked.
>  
> > Yeah, SETOF a tuple type seems fairly weird.  SETOF a basetype, if
> > you do a SELECT of a single column FROM the table, seems to work though.
> > I sort of expected that the ones in the regression test would
> > either do something understandable or at least error if they
> > are testing for brokenness.
> Is there any kind of documentation how to cope with that problem?
> 
> I try to explain my problem once more:
> 
> My servlets contain code like:
> 
> rs = stmt.executeQuery("stored_procedure arg1, arg2");
> while ( rs.next() )
> do_something(rs.getString("col1"), rs.getString("col2"),
> rs.getString("col3"), rs.getString("col4") );
> 

I haven't thought of an elegant way to do it, although you could
fake some of it with a table of the appropriate structure plus a sequence.
It's really ugly, but the PL/pgSQL (Tcl/Perl/etc.) function gets the
next value of the sequence, inserts the results into the table tagged with
that sequence number, and returns the number to you.

So, it'd be something like
select stored_procedure(arg1, arg2);
(get the value into variable)
select * from table_sp_ where intval=
while (...)
 do_something(...)
delete from table_sp_ where intval=

That might be safe insofar as the sequence number should stop
concurrent transactions from clobbering each other, but it requires
that you do the deletes manually, and that table will probably need to
be vacuumed fairly often.





Re: [SQL] Continuous inserts...

2000-08-22 Thread Stephan Szabo

Weird, I've not really seen that behavior, although I've never
done time-sensitive stuff.  It might be the time before the
shared cache updates?  Not sure, really.  If you do the rule change
inline with your inserts (rather than in a second transaction),
does it still wait?

Stephan Szabo
[EMAIL PROTECTED]

On Tue, 22 Aug 2000, Joerg Hessdoerfer wrote:

> Hi!
> 
> At 08:18 18.08.00 -0700, you wrote:
> [...]
> 
> >I didn't try with vacuum, I just did a table lock and that
> >seemed to still hang the inserts with two tables, so I figured
> >maximum safety was adding the third table.  If it works with two
> >that's much cooler.  Was this with real data or just a small test
> >set?
> 
> It was a test set ... ~2 records, *BUT* I found that postgres
> decides when it starts to use the rule - meaning, if you do continuous
> inserts on the table and create the rule, there's a varying time until
> the rule applies. In my first tests, I re-connected the DB very often,
> and the change seemed immediate.
> 
> Any ideas on how to 'promote' the rules faster?!?
> 




Re: [SQL] Question on string value expression wildcarding

2000-08-24 Thread Stephan Szabo

Do you have any odd locale settings or anything and what's
the table definition for the table in question?

It seems to do what I expect under my 7.0.2 system:

create table kp (name text);
insert into kp values ('kp.dhs.a');
insert into kp values ('kp.dhs.');
insert into kp values ('kp.dhs,d');
select * from kp where name like 'kp.dhs.%';
   name   
----------
 kp.dhs.a
 kp.dhs.
(2 rows)

select * from kp where name like 'kp.dhs%';
   name   
----------
 kp.dhs.a
 kp.dhs.
 kp.dhs,d
(3 rows)

Stephan Szabo
[EMAIL PROTECTED]

On Thu, 24 Aug 2000, Steve Wampler wrote:

> 
> I have LIKE expressions:
> 
>   (a) name LIKE 'kp.dhs.%'
> (b) name LIKE 'kp.dhs%'
> 
> where the name column contains strings prefixed with "kp.dhs.".
> 
> I'm using postgresql  7.0.2.
> 
> Expression (a) fails to match any names while (b) matches
> all strings prefixed with "kp.dhs", including (as expected)
> those prefixed with "kp.dhs.".
> 
> So I take it that ".%" has some special meaning in wildcarding,
> but my (limited) SQL references don't mention this case.
> Is this To Be Expected SQL behavior?  If so, what
> expression can be used to match only strings prefixed with
> "kp.dhs."?




Re: [SQL] Select subset of rows

2000-08-27 Thread Stephan Szabo


Of course immediately after sending the last message and logging off my
ISP I figured out the simpler way for the third one:

begin;
select salary into temp saltemp from employee order by salary desc 
 limit 5;
select name from employee where exists (select * from saltemp where
 saltemp.salary=employee.salary);
end;

Stephan Szabo
[EMAIL PROTECTED]

On Sun, 27 Aug 2000, Paulo Roberto Siqueira wrote:

>   Hi folks,
> 
>   I have this table
> 
> CREATE TABLE EMPLOYEE (ID_EMP INT4 PRIMARY KEY, NAME VARCHAR(35), SALARY
> NUMERIC(5,2));
> 
>   I want to select only the employees' names who have the 5 highest salaries.
> 
> 
> 
> 
> Paulo Siqueira
> 




Re: [SQL] Select subset of rows

2000-08-27 Thread Stephan Szabo


On Sun, 27 Aug 2000, Paulo Roberto Siqueira wrote:

>   Hi folks,
> 
>   I have this table
> 
> CREATE TABLE EMPLOYEE (ID_EMP INT4 PRIMARY KEY, NAME VARCHAR(35), SALARY
> NUMERIC(5,2));
> 
>   I want to select only the employees' names who have the 5 highest salaries.

Well, that depends on what you want to do in case of duplicate salaries.

If you don't care about duplicate salaries, and you don't mind the
possibility of non-deterministic responses in some cases:
select name from employee order by salary desc limit 5;

If you want to take the 5 highest salary values and find however many
people have those salaries, I think this works:
begin;
select distinct salary into temp saltemp from employee 
 order by salary desc limit 5;
select name from employee where exists (select * from saltemp where
 saltemp.salary=employee.salary);
end;

If you want to get at least 5 people but don't want to cut off at 5
if the 5th, 6th, etc people are tied, I'm sure there's a better way,
and I probably made at least one mistake in writing this out, but...
begin;
select distinct salary into temp saltemp from employee 
 order by salary desc limit 5;
select salary, count(*) into temp saltemp2 from employee
 where exists(select * from saltemp where saltemp.salary=
 employee.salary) group by salary;
select saltemp2.salary, sum(case when saltemp2_2.salary<=
 saltemp2.salary then 0 else saltemp2_2.count end) into 
 temp saltemp3 from saltemp2,saltemp2 saltemp2_2 group by
 saltemp2.salary;
select name from employee,saltemp3 where employee.salary=
 saltemp3.salary and saltemp3.sum<5 order by employee.salary
 desc;
end;




Re: [SQL] Select subset of rows

2000-08-27 Thread Stephan Szabo

On Sun, 27 Aug 2000, John McKown wrote:

> On Sun, 27 Aug 2000, Stephan Szabo wrote:
> 
> > 
> > Of course immediately after sending the last message and logging off my
> > ISP I figured out the simpler way for the third one:
> > 
> > begin;
> > select salary into temp saltemp from employee order by salary desc 
> >  limit 5;
> > select name from employee where exists (select * from saltemp where
> >  saltemp.salary=employee.salary);
> > end;
> > 
> > Stephan Szabo
> > [EMAIL PROTECTED]
> > 
> 
> I wonder if the following might be a bit faster? Or would it be slower?
> 
> select salary into temp saltemp from employee order by salary desc
> limit 5;
> 
> select min(salary) as minsal into test minsal from saltemp;
> 
> select name, salary from employee, minsal
>  where salary >= minsal;
> 
> I don't know the speed difference between my second select and your
> subselect within an EXISTS clause might be.

Not sure.  It probably depends on whether the subplan/seq scan for the
subquery in the one case is more or less expensive than the time spent on
the additional query and processing for the query with the min.

Actually, I guess you might be able to use OFFSET in there to get rid
of the second query too...
If you do something like
select salary into temp saltemp from employee order by salary desc
 limit 1 offset 4;
you can probably get the 5th one right out.




Re: [SQL] Problems with complex queries ...

2000-08-29 Thread Stephan Szabo


Without seeing the schema or anything, a similar
query to your first one appears to run on my 
Postgres 7.0.2 setup.  It's probably worth upgrading.

On Wed, 30 Aug 2000, J. Fernando Moyano wrote:
> I try this on my system: (Postgres 6.5.2, Linux)
> 
> "select n_lote from pedidos except select rp.n_lote from relpedidos rp,
> relfacturas rf where  rp.n_lote=rf.n_lote group by rp.n_lote having
> sum(rp.cantidad)=sum(rf.cantidad)"
> 
> and I get this result:
> 
> ERROR: rewrite: comparision of 2 aggregate
> columns not supported 
> 
> but if I try this one:
> 
> "select rp.n_lote from relpedidos rp, relfacturas rf where 
> rp.n_lote=rf.n_lote group by rp.n_lote having sum(rp.cantidad)=sum(rf.cantidad)"
> 
> It's OK !!
> 
> What's up???
> Do you think i found a bug  ???
> Do exists some limitation like this in subqueries??
> 
> (Perhaps Postgres don't accept using aggregates in subqueries ???)
> 
> I tried this too:
> 
> "select n_lote from pedidos where n_lote not in (select rp.n_lote from
> relpedidos rp, relfacturas rf where  rp.n_lote=rf.n_lote group by rp.n_lote
> having sum(rp.cantidad)=sum(rf.cantidad))"
> 
> but the result was the same !
> 
> And i get the same error message (or similar) when i try other variations.




Re: [SQL] Create Primary Key?

2000-08-29 Thread Stephan Szabo


We don't currently support the SQL syntax for adding
a PK to a table.  However, if you have the columns
as NOT NULL already, adding a unique index to the
columns in question has the same general effect.
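
For example (hypothetical table and column names):

create unique index t_pkey on t (col1, col2);

Just make sure the columns involved are already NOT NULL.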

Stephan Szabo
[EMAIL PROTECTED]

On Tue, 29 Aug 2000, Webb Sprague wrote:

> Apropos of my last question:
> 
> Is there syntax to create a primary key after the
> table has been defined and populated?  I think I could
> speed things up quite a bit by not having any indexes
> at all when I do my mass copies.




Re: [SQL] Optimizing huge inserts/copy's

2000-08-29 Thread Stephan Szabo


On Tue, 29 Aug 2000, Jie Liang wrote:

> Hi, there,
> 
> 1. use copy ... from '.';
> 2. write a PL/pgSQL function and pass multiple records as an array.
> 
> However, if your table have a foreign key constraint, it cannot be speed
> up,
> 
> I have same question as you, my table invloving 9-13 million rows, I
> don't
> know how can I add a foreign key them also?

I haven't tried it on really large tables, but does it turn out faster to
use ALTER TABLE ADD CONSTRAINT to add the foreign key constraint after the
data is loaded and the indexes are created?




Re: [SQL] Create Primary Key?

2000-08-30 Thread Stephan Szabo


On Wed, 30 Aug 2000, D'Arcy J.M. Cain wrote:

> Thus spake Stephan Szabo
> > We don't currently support the SQL syntax for adding
> > a PK to a table.  However, if you have the columns
> > as NOT NULL already, adding a unique index to the
> > columns in question has the same general effect.
> 
> Except for interfaces such as PyGreSQL that recognize the primary key
> and use it.

True.  You'd have to see what it was doing to determine the primary key
and try to do the same things.





Re: [SQL] Optimizing huge inserts/copy's

2000-08-30 Thread Stephan Szabo


On Wed, 30 Aug 2000, Jie Liang wrote:

> Hi,
> 
> I knew that if no constarint, it populate very quick, my question is:
> when two tables have been
> reloaded, then I want to add a foreign key constraint to it, say:
> tableA has primary key column (id)
> tableB has a column (id) references it, so I say:
> ALTER TABLE tableB ADD CONSTRAINT distfk FOREIGN KEY (id) REFERENCES
> tableA(id)  ON DELETE CASCADE ;

Yeah, the ALTER TABLE has to check that the constraint is valid.  There
might be a faster way than the current "scan through the table calling the
trigger function" mechanism, although most of the alternatives start pulling
the logic for obeying the constraint into multiple places.




Re: [SQL] Query-ing arrays

2000-09-01 Thread Stephan Szabo


I'd suggest checking the array utilities in contrib.  I believe
they include functions/operators for testing whether an element is in an array.

Stephan Szabo
[EMAIL PROTECTED]

On Fri, 1 Sep 2000, Jon Lapham wrote:

> Stupid FAQ probably:
> 
> Is it possible to query an array for an item WITHOUT KNOWING where in the
> array the item might be held?
> 
> For example, I have a table with the following character array:
> > category char(3)[],
> 
> I would like to find all records in which 'category' contains an array
> element of a certain type, like maybe 'xxx' for instance.  So, for a
> particular record, 'category' may be set to '{'gfe','qwe','xcs','xxx'}'
> 
> I cannot do:
> SELECT * FROM blah WHERE category[4]='xxx';
> because I do not know that the 'xxx' will always be in the 4th position.




Re: [SQL] Cascading Deletes

2000-09-06 Thread Stephan Szabo


I think a references constraint on ID
referencing _ID with ON DELETE CASCADE
should do what you want.
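
Something along these lines (an untested sketch using your column names;
the table name is made up, and ID would need to be the primary key or
otherwise unique):

create table tree (
    ID   int4 primary key,
    _ID  int4 references tree (ID) on delete cascade,
    Name text
);

delete from tree where ID = 0;  -- removes Base and, via the cascade,
                                -- everything underneath it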

Stephan Szabo
[EMAIL PROTECTED]

On Wed, 6 Sep 2000, Craig May wrote:

> Hi,
> 
> I have a tables having this structure:
> 
> ID (int) | _ID (int) | Name (String)
> 
> 
> _ID is the parent of ID.
> 
> I'm looking for the best method to perform a cascade delete.  For example, I
> delete _ID, it in turn deletes ID, and should ID also be an entry within _ID, it
> would continue through the chain.
> 
> For example:
> 
> 0 0 Base
> 1 0 Sib1
> 2 0 Sib2
> 3 0 Sib3
> 4 1 Sib1_1
> 5 1 Sib1_2
> 
> 
> Deleting Base would remove all the entries.  Deleting Sib1 would delete Sib1_1
> and Sib1_2.
> Can anyone help out here?
> 
> Regards,
> Craig May
> 
> Enth Dimension
> http://www.enthdimension.com.au
> 




Re: [SQL] Weighted Searching

2000-09-12 Thread Stephan Szabo


I'm not sure how fast it is, but something like this
would work, right?

WHERE
(CASE WHEN degree='MBA' THEN 10 ELSE 0 END +
 CASE WHEN years_experience='5' THEN 10 ELSE 0 END +
 CASE WHEN location_state='Arizona' THEN 10 ELSE 0 END)
 >= 20

Also, wouldn't you usually want to be searching where the
weight was given if you had the years_experience or greater?

Stephan Szabo
[EMAIL PROTECTED]

On Tue, 12 Sep 2000, Mitch Vincent wrote:

> I emailed the list a while back about doing some weighted searching, asking
> if anyone had implemented any kind of weighted search in PostgreSQL.. I'm
> still wondering the same thing and if anyone has, I would greatly appreciate
> a private email, I'd like to discuss it in detail.. I have several ideas but
> most of them are pretty dirty and slow..
> 
> What I need to do is allow the user to assign weights to fields and then
> specify a minimum weight which would dictate results..
> 
> Example :
> 
> A search on two fields, degree and years_experience, location_state.
> 
> The user assigns degree a weight of 10, years_experience a weight of 10 and
> location_state a weight of 10. Then specifies the minimum weight as 20,
> meaning that any results returned would have to have at least two of the
> fields an exact match (any two that where the sum of the weight equals 20).
> This could be carried out to many, many fields and extremely high weights..
> 
> The problem I'm having is figuring out a good way to assign the weights to
> individual fields and test to see if an individual field is exactly matched
> in the query (without running a single query for each field searched on.
> 
> Example:
> 
> The SQL query for the search above might be :
> 
> SELECT * FROM people WHERE degree='MBA' and years_experience='5' and
> location_state='Arizona'
> 
> I would want people that have an MBA and 5 years experience but they
> wouldn't necessarily have to be in Arizona (because our minimum weight is
> 20, only two would have to match)..
> 
> Hopefully I'm not over-explaining to the point of confusion.. If anyone
> would have any ideas, please drop me an email.. Thanks!!!
> 
> -Mitch
> 
> 
> 
> 
> 




Re: [SQL] work on some tables in the same time.

2000-09-13 Thread Stephan Szabo


I'm assuming you want "n" to be replaced by each of the numbers up to
the last one for which there is a table1_n and table2_n, right?

I'd suggest looking at PL/Tcl (or, if you can wait for 7.1, you
can probably do it in PL/pgSQL as well with the new EXECUTE support)
and writing a function that does the inserts for you.
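
With the 7.1 PL/pgSQL EXECUTE, a sketch might look like this (untested,
the function name is made up, and it won't work on 7.0):

create function copy_distinct(text) returns int4 as '
begin
    execute ''insert into table2_'' || $1
        || '' (f1, f2, f3) select distinct f1, f2, f3 from table1_'' || $1;
    return 1;
end;
' language 'plpgsql';

-- then: select copy_distinct('1'); select copy_distinct('2'); and so on.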

Stephan Szabo
[EMAIL PROTECTED]

On Wed, 13 Sep 2000, Jerome Raupach wrote:

> I want to execute this query on some table2 in the same time :
> 
> INSERT INTO table2_n(f1, f2, f3)
>   SELECT DISTINCT f1, f2, f3 FROM table1_n ;
> 
> --
> 
> CREATE TABLE table1_1( INT2 f1, INT2 f2, INT2 f3, INT2 f4 );
> CREATE TABLE table1_2( INT2 f1, INT2 f2, INT2 f3, INT2 f4 );
> ...
> 
> CREATE TABLE table2_1( INT2 f1, INT2 f2, INT2 f3 );
> CREATE TABLE table2_2( INT2 f1, INT2 f2, INT2 f3 );
> ...
> 
> --
> 
> Anyone can help me ?
> Thanks in advance.
> 
> Jerome.
> 





Re: [SQL] left and outer joins?

2000-09-14 Thread Stephan Szabo


If you were looking in just -sql, you'll have missed
out on the later information which was on -hackers...

I believe Tom Lane just committed changes recently
to put in support for them with a few caveats, 
so unless something comes up, you should see some 
support in 7.1.

On Thu, 14 Sep 2000, Michael Teter wrote:

> I see in the TODO that left and outer joins are
> supposedly a priority.
> 
> I do wonder what the status is, since the email
> correspondence shows the most recent email as being
> from something like march or july of 1999.
> 
> what is the status on left and outer joins?




Re: [SQL] [GENERAL] Foreign Keys Help Delete!

2000-09-20 Thread Stephan Szabo

On Wed, 20 Sep 2000, Josh Berkus wrote:

> Timothy, Tom:
> 
> > >1. a. Create new record with new key value in hosts table with the
> > >desired value
> > >   b. Update the routes record to reference the new value
> > >   c. Delete the old record in the hosts table
> > >
> > 
> > Yes, that's what I tried.
> > 
> > 1. foo.old.com exists in "hosts" table and "routes" table
> > 2. create foo.new.com in "hosts" table
> > 3. delete foo.old.com in "routes" table
> > 4. add foo.new.com into "routes" table
> > 5. try to delete foo.old.com and it complains!
> 
> Tom - not to interrupt your coding :-) this sounds like a bug.  Any
> thoughts?  

Probably doesn't need to go all the way to Tom... :)

Hmm, on my 7.0.2 box, 
sszabo=# create table hosts (fqdn varchar(30));
CREATE
sszabo=# create table routes (fqdn varchar(30),foreign key(fqdn)
references hosts(fqdn));
NOTICE:  CREATE TABLE will create implicit trigger(s) for FOREIGN KEY
check(s)
CREATE
sszabo=# insert into hosts values ('foo.old.com');
INSERT 181159 1
sszabo=# insert into routes values ('foo.old.com');
INSERT 181160 1
sszabo=# begin;
BEGIN
sszabo=# insert into hosts values ('foo.new.com');
INSERT 181161 1
sszabo=# delete from routes where fqdn='foo.old.com';
DELETE 1
sszabo=# insert into routes values ('foo.new.com');
INSERT 181162 1
sszabo=# delete from hosts where fqdn='foo.old.com';
DELETE 1
sszabo=# end;
COMMIT

-- 

To original complainant:
Since you won't be able to post the trigger information either probably,
can you check pg_trigger to make sure there are no dangling constraint
triggers?
You should have three rows that look like:

  181144 | RI_ConstraintTrigger_181153 | 1644 | 21 | t | t | | 181120 | f | f | 6 | | \000routes\000hosts\000UNSPECIFIED\000fqdn\000fqdn\000
  181120 | RI_ConstraintTrigger_181155 | 1654 |  9 | t | t | | 181144 | f | f | 6 | | \000routes\000hosts\000UNSPECIFIED\000fqdn\000fqdn\000
  181120 | RI_ConstraintTrigger_181157 | 1655 | 17 | t | t | | 181144 | f | f | 6 | | \000routes\000hosts\000UNSPECIFIED\000fqdn\000fqdn\000

Except that the relation oids are likely to be different (important ones
are the tgrelid and tgconstrrelid).  The function oids (1644, 1654, 1655)
should be the same I believe.

> > >2. a. Drop the Foriegn Key constraint
> > >   b. Update both the routes and hosts tables
> > >   c. Re-establish the foriegn key constraint
> > 
> > This is the part that I'm fuzzy on.  I've tried this before
> > with complete DB corruption resulting. I had to dump each table
> > one by one, edit my schema with vi, create new DB, import tables
> > one by onevery painful!
> 
> This also sounds like a problem.  One should be able to drop a
> constraint, the re-create the restraint and check existing records
> against it.  You can do this in MSSQL and Oracle.

Well, we don't have ALTER TABLE ... DROP CONSTRAINT right now.  Dropping
the constraint requires removing the triggers manually.  We can do an
ADD CONSTRAINT which will check the data, but not the corresponding DROP.




Re: [SQL] sql query not using indexes

2000-09-20 Thread Stephan Szabo

On Wed, 20 Sep 2000, User Lenzi wrote:

> if I start a query:
> 
> explain select * from teste where login = 'xxx'
> results:
> Index  Scan using  teste1 on teste (cost=0.00..97.88 rows=25 )
> 
> 
> however a query:
> explain select * from teste where login > 'AAA'
> results:
> Seq Scan on teste 
> 
> 
> On a machine running version 6.5 both queries results index scan.
> 
> this results that the version 6.5 is faster than version 7.0.2 on this
> kind of
> query.
> 
> 
> Any explanation???

Have you done a vacuum analyze on the table?  Also, what does the row
count for the second query look like?  It's probably deciding that
too many rows will match login > 'AAA' for an index scan
to be cost effective.  So, actually, what does
select count(*) from teste where login > 'AAA' give you on the 7.0.2 box?





Re: [SQL] How do I run a search on array

2000-09-20 Thread Stephan Szabo


On Thu, 21 Sep 2000, Indraneel Majumdar wrote:

> select col1 from table while array_col[1][1:4]='2';
> 
> how do I do this sort of thing? There seems to be no docs ;-(
> 
> my array is {{"1","2","4","2"},{"3","2","5"},{"6","3","7","9"}}

You'll want to check out the array utilities in the contrib directory.
They include an "element is a member of array" test and other such functions,
and will probably do what you need.
 
> I would also like to know that if I have an array as a large object, is it
> possible to do a search on it using rows and columns (or by any other
> way)?
You're putting array-style data into a large object with the lo_
functions?  Probably not in a meaningful way, no (although I'd guess that
TOAST might work for that kind of application when 7.1 comes out).





Re: [SQL] sql query not using indexes

2000-09-21 Thread Stephan Szabo

On Thu, 21 Sep 2000, Sergio de Almeida Lenzi wrote:

> > > On a machine running version 6.5 both queries results index scan.
> > > 
> > > this results that the version 6.5 is faster than version 7.0.2 on this
> > > kind of
> > > query.
> > > 
> > > 
> > > Any explanation???
> > 
> > Have you done a vacuum analyze on the table?  Also, what does the row
> > count for the second query look like?  It's probably deciding that
> > there are too many rows that will match login >'AAA' for index scan
> > to be cost effective.  So, actually, also, what does
> > select count(*) from teste where login>'AAA" give you on the 7.0.2 box.
> 
> Ok, I agree with you: on the real database there are 127,300 rows and there
> are certainly a great number of rows > 'AAA'. But suppose I make a query
> select * from table where code > 'AAA' limit 10. It will read the entire
> table only to give me the first 10, while release 6.5 will fetch the
> index for the first 10 in a very fast manner; indeed, the 6.5 release
> resolves it in 1 second while the 7.0 release resolves it in 10-20 sec.
Hmm, I believe Tom Lane was doing a lot of work on the optimizer and
LIMIT, but I don't remember if that was before or after the 7.0 release.
It might be worth trying on current sources to see if that goes back to
an index scan.  Or, if your data set is safe to give out, I could try
it on my current-sources machine.

> Is there a way to tell the optimizer to consider going on indixes??
Well, there is a SET you can do to turn off seqscans unless that's the
only way to go, but that's a broad instrument since it affects all
statements until you change it back.
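
If you do go that route, it's something like the following (from memory,
so double-check the exact variable name in the SET documentation for your
version):

set enable_seqscan to 'off';
-- run the query in question, then put things back:
set enable_seqscan to 'on';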




Re: [SQL] Multiple Index's

2000-09-21 Thread Stephan Szabo


On Thu, 21 Sep 2000, Brian C. Doyle wrote:

> Hello all,
> 
> How would I prevent a user from submitting information to a table once they 
> have already done so for that day.  I would need them to be able 
> information on future dates as well as have information in the table from 
> past dates from that user.
> 
> I am looking for something like insert user_id, date, info where user_id 
> and date are not the same... does that make sense?

If you want the first submission of the day to go through and later ones
to be rejected, maybe a unique index on (user_id, date) would work.
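
That is, something like (I'm guessing at your table name):

create unique index one_post_per_day on submissions (user_id, date);

The second insert for the same user_id/date pair will then fail with a
unique constraint violation, which your application can catch.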




Re: [SQL] sql query not using indexes

2000-09-22 Thread Stephan Szabo

On Fri, 22 Sep 2000, Tom Lane wrote:

> indexscans; the current code may have overcorrected a shade, but I think
> it's closer to reality than 6.5 was.
> 
> As Hiroshi already commented, the difference in results suggests that
> the desired data is very nonuniformly scattered in the table.  7.0
> computes cost estimates on the assumption that the target data is
> uniformly scattered.  For a sufficiently nonselective WHERE condition
> (ie, one that the planner thinks will match a large fraction of the
> table's rows) it looks better to do a seqscan and pick up the matching
> rows than to follow the index pointers.  Adding a LIMIT doesn't change
> this equation.
>
> I like Hiroshi's recommendation: add an ORDER BY to help favor the
> indexscan.

Yeah, I didn't notice the lack of the ORDER BY when I responded.  I
forget that ORDER BY isn't required to use LIMIT, since it's fairly
ugly not to use one ("What, you wanted to get an implementation-defined,
effectively random 10 rows?")





Re: [SQL] select

2000-09-23 Thread Stephan Szabo


I'd assume this would work:
select * from table where booleanfield is null;

Stephan Szabo
[EMAIL PROTECTED]

On Sat, 23 Sep 2000, Jeff MacDonald wrote:

> how would i select all rows where a boolean value is neither
> t nor f.. ?
> 
> ie if someone inserted without setting the boolean tag.





Re: [SQL] Select between two databases

2000-09-24 Thread Stephan Szabo

On Sun, 24 Sep 2000, Indraneel Majumdar wrote:

> Does any one know how I may select between two databases running on same
> machine or on different machines eg.
> 
> select colA1 from tableA1 where colA2 in (select colB1 from tableB1);
> 
> here tableA1 and tableB1 are in different databases. The two databases
> might be under the same postmaster or on different machines under
> different postmasters. How do I query both simultaneously? Is it possible?

Currently, postgres queries cannot span databases.




Re: [SQL] [GENERAL] Foreign Keys Help Delete!

2000-09-19 Thread Stephan Szabo

On Tue, 19 Sep 2000, Timothy Covell wrote:

> Schema:
> hosts table with fqhn column
> routes table with fqhn foreign key hosts(fqhn)
> 
> Problem: 
> 
> 1. When I try to change fqhn in hosts, it complains that
> I have now violated entry in "routes" table.
>
> 2. When I try to update "routes" table, it updates.
> 
> 3. Go back to "hosts" table and now try to rename/delete
> old fqhn and it complains about object missing with OID=x.

Can you give the table structure and statements you were
doing specifically?




[SQL] Re: [HACKERS] foreign key introduces unnecessary locking ?

2000-10-09 Thread Stephan Szabo

On Mon, 2 Oct 2000, Rini Dutta wrote:

> When two tables (table2 and table3) have foreign keys
> referring to a common table(table1), I am unable to
> have 2 concurrent transactions - one performing insert
> on table1 and the other on table2, when the records
> being inserted have the same foreign key. 
> 
> If I use JDBC, one of the transactions aborts.
> If I open 2 psql sessions and try the same, one just
> waits and does not show the prompt until the other
> transaction has been committed or aborted.
> 
> For example,
> create table tmp1(idx int4, data int4);
> create table tmp2(idx2 int4, col2 int4, constraint
> tmpcon2 foreign key(col2) references tmp1(idx));
> create table tmp3(idx3 int4, col3 int4, constraint
> tmpcon3 foreign key(col3) references tmp1(idx));
> insert into tmp1 values(1, 1);
> 
> Transaction 1 :
> begin work;
> insert into tmp2 values(2, 1);
> 
> Transaction2 :
> begin work;
> insert into tmp3 values(3,1);
> 
> Since such transactions are common for me, for the
> time-being I have dropped the foreign key constraint.
> Any ideas ?

Each is attempting to grab row locks on tmp1 to prevent 
the rows from going away while we're testing the 
references. The second transaction is waiting for the row 
lock to go away so that it can do its row lock.
I'm not sure why it's failing in JDBC, though.





Re: [SQL] SQL to retrieve foreign keys

2000-10-11 Thread Stephan Szabo


Actually, right now it isn't trivial with the current implementation.
You need to decode things out of the arguments to the triggers, which
are stored in a bytea, and there isn't a lot of reasonable SQL-level machinery
to decode them.  If you don't mind doing some of the work on the front end,
you should be able to do it.

If you do select tgconstrname, proname, tgargs, tgdeferrable,
tginitdeferred, tgnargs from pg_trigger,pg_proc where tgname like
'RI_ConstraintTrigger%' and tgfoid=pg_proc.oid;

That will get you three rows per foreign key constraint.
tgconstrname is the constraint name given to the constraint (currently, if
none is given, <unnamed> is used).
proname tells you what the constraint does.  One will be
 RI_FKey_check_ins, which just checks new values in the fk table.
 The other two are more interesting: one will be something like
 RI_FKey_<action>_del, and the other RI_FKey_<action>_upd.
 These tell you the defined referential actions (no action, cascade, etc.)
 for delete and update.
tgdeferrable says whether or not the constraint is DEFERRABLE
tginitdeferred says whether or not the constraint is INITIALLY DEFERRED
tgnargs holds the number of arguments to the trigger
tgargs holds the arguments in a bytea separated by \000
 The arguments are as follows:
  constraint name
  fk table
  pk table
  match type
  fk col1
  pk col1
  ...
  fk coln
  pk coln

Stephan Szabo
[EMAIL PROTECTED]

On Wed, 11 Oct 2000, Colleen Williams wrote:

> Hi,
> 
> I would like to write some SQL to extend the phpPgAdmin tool where it only 
> displays primary keys related to a table. I have looked at the programmers 
> doco and searched the discussions on foreign keys but was not able to 
> decipher how to write the SQL. It is probably really trivial for someone 
> who knows how the PostgreSQL catalog tables are related. Can someone please 
> help me. Many Thanks.
> 
> Colleen.
> 




Re: [SQL] Referential integrity: broken rule?

2000-10-11 Thread Stephan Szabo


Yes, I believe it is mentioned in the docs (well at least the
sgml source, I don't have a compiled doc set) as a bug
of the implementation somewhere in the section on references.  
7.1 should fail for this case, although it doesn't properly deal with
dropping the unique constraint later.

Stephan Szabo
[EMAIL PROTECTED]

On Wed, 11 Oct 2000, Franz J Fortuny wrote:

> This table:
> 
> create table things
> (
>   idthing integer not null,
>   isthis boolean not null
>   primary key(idthing,isthis)
> 
> 
> )
> 
> would be referenced by this one:
> 
> create table forthings
> (
>   fromthing integer not null references things(idthing),
> 
> 
> 
> )
> 
> The above SHOULD NOT be accepted, since table "things" did not declare
> idthing as UNIQUE. However, it IS accepted under PostgreSQL (7.0.2).




Re: [SQL]

2000-10-23 Thread Stephan Szabo

On Mon, 23 Oct 2000 [EMAIL PROTECTED] wrote:

> Hello,
>  I have the following problem with PostgreSQL 6.5.3; I don't have the possibility
> to check it on 7.0, and I want to know whether it is possible to run such a query:
> 
> SELECT k.pic, id_g, id_k, count(*)
> FROM kart k, pictures p
> WHERE k.pic = p.pic
> GROUP BY k.pic
> 
> PICTURES(pic,id_g,id_k)
> KART  (pic,email,mess,date)
> 
> I've got answer from Postgres
> "Illegal attributes or non-group column"
> 
> Is it error in query or in parser ?

AFAICS, the above construct isn't a legal
GROUP BY query.  All columns in the select
list must either be grouped columns or
appear inside some sort of aggregate function.

Assuming that pictures.pic is unique, you
can add p.id_g and p.id_k to the group
by clause.  Otherwise, you need to decide
which id_g and id_k you want (min or
max is often useful).
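
In other words, something like (untested, assuming pictures.pic is unique):

SELECT k.pic, p.id_g, p.id_k, count(*)
FROM kart k, pictures p
WHERE k.pic = p.pic
GROUP BY k.pic, p.id_g, p.id_k;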





Re: [SQL] Add Constraint

2000-10-26 Thread Stephan Szabo


On Wed, 25 Oct 2000, Sivagami . wrote:

> Hi all,
> 
> I am a newbie to Postgresql, but I am familiar with SQL. I am trying to add a 
> constraint to my table using the ALTER TABLE command.  The command goes like this:
> 
> ALTER TABLE USER_SIGNUP ADD CONSTRAINT
> P_USER_SIGNUP_USER_ID PRIMARY KEY(user_id);
> 
> But it is returning the error 
> ERROR:  ALTER TABLE / ADD CONSTRAINT is not implemented 
> 
> Can anyone guide me in the right direction???

We don't have full ADD CONSTRAINT support yet.
7.0 allows you to add the supported subset of foreign key constraints,
and current sources should allow check constraints as well.
To get the unique-constraint portion of a primary key, you can add
a unique index on user_id (it's what the system would
have done anyway).  Getting the non-null part is a little
harder: you need to find the row in pg_attribute, set
attnotnull to true, and make sure there aren't
any null values already in the data set.
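
Concretely, something like this (a sketch only; it pokes at the system
catalogs directly, so double-check it against your schema first):

CREATE UNIQUE INDEX user_signup_user_id_key ON user_signup (user_id);

UPDATE pg_attribute SET attnotnull = 't'
 WHERE attname = 'user_id'
   AND attrelid = (SELECT oid FROM pg_class WHERE relname = 'user_signup');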





Re: [SQL] Query Problem

2000-10-26 Thread Stephan Szabo


What is the explain output for the queries you've tried?

Stephan Szabo
[EMAIL PROTECTED]

On Wed, 25 Oct 2000, Josh Berkus wrote:

> 
> Folks:
> 
> Here's the problem, in abstract:  I need to select every record in table
> A that does not have a link in table B Join Table C where Table
> C.account = 11
> 
> The relevant fields:
> 
> Table_A
> CaseID
> Data
> 
> Table_B
> GroupID
> CaseID
> Amount
> 
> Table_C
> GroupID
> AccountID
> 
> Thus, I need to select:
> 
> SELECT Data FROM Table A 
> WHERE CaseID NOT IN (
>   SELECT CaseID FROM Table_B, Table_C
>   WHERE Table_B.GroupID = TableC.GroupID
>   AND TableC.AccountID = 11)
> 
> The problem is, since Table_B and Table_C are large (10,000 records +)
> this exclusion query takes several *minutes* to run.
> 
> I've fooled around with drectional joins, views, and temporary tables,
> but I can seem to find anything that works faster.  Suggestions?
> 
> -Josh Berkus
> 
> -- 
> __AGLIO DATABASE SOLUTIONS___
> Josh Berkus
>Complete information technology  [EMAIL PROTECTED]
> and data management solutions   (415) 436-9166
>for law firms, small businesses   fax  436-0137
> and non-profit organizations.   pager 338-4078
>   San Francisco
> 




Re: [SQL] PL/PGSQL beginning is hard....

2000-11-02 Thread Stephan Szabo


> i am a beginner at SQL and PL/pgsql and thus have some surely
> already known problems...
> 
> i have set up some tables, and wanted to play around with inbuild
> functions, and set up the following function:
> 
> CREATE FUNCTION balance (int4) RETURNS int4 AS '
> DECLARE
>compte ALIAS FOR $1;
>actplus accounts.num%TYPE;
>actminus accounts.num%TYPE;
>actres accounts.num%TYPE;
>BEGIN
>SELECT SUM(amount) INTO actplus FROM journal WHERE plus=compte;
>select sum(amount) INTO actminus from journal where minus=compte;
>actres := actplus - actminus;
>RETURN actres;
>  END;
>   ' LANGUAGE 'plpgsql';
> 
> 
> Now this works fine, until it hits one of the cases where either of the
> selects returns an empty result (meaning that no line conforming to the
> contraint could be found) in this case even if the other select returns
> a value, the whole function does return nothing
> 
> what did i wrong, or what do i have to change, to assume the value 0 if
> no hit was found to the select?

Probably this would do it:
 select coalesce(sum(amount),0) ...


> BTW i am wondering if the same thing could have been achieved with sole
> SQL, and if yes, how

You might be able to do this with subselects..
 (select coalesce(sum(amount), 0) from ... ) - (select coalesce...)

So, maybe something like this, if you were say going over a table which
had the compte values:
 select (select coalesce(sum(amount), 0) from journal where plus=compte)
   -(select coalesce(sum(amount), 0) from journal where minus=compte)
 from table_with_compte_values;




[SQL] Re: [GENERAL] Problem with coalesce..

2000-11-01 Thread Stephan Szabo


There are still some contexts in which subqueries in expressions are
weird.  The example below appears to work in current sources, however.

Stephan Szabo
[EMAIL PROTECTED]

On Tue, 31 Oct 2000, George Henry C. Daswani wrote:

> Hello,
> 
> Was wondering if such a call is unsupported, or a bug?
> 
> 7.0.2 (postgresql, linux redhat 7.0)
> 
> SELECT COALESCE ((SELECT NULL), 1)
> 
> returns a 1, correct..
> 
> however..
> 
> SELECT COALESCE ((SELECT CURVALUE FROM TABLEKEYS WHERE TABLENAME = 'BUYER'),1)
> 
> returns a "UNKNOWN expression type 501"
> 
> SELECT CURVALUE FROM TABLEKEYS WHERE TABLENAME = 'BUYER'; returns a '0'
> 
> Thanks..
> 
> George
> 
> 
> 




Re: [SQL] [sql]Joins

2000-11-09 Thread Stephan Szabo


On Fri, 10 Nov 2000, Najm Hashmi wrote:

> I am facing a dilemma at my work,   I am using postgres first time. For
> some reason, my co-workers think that in Postgres joins i.e.
> simple joins of two or three tables are so bad that their cost is
> exponential. They believe that postgres simply takes  Cartesian product
> of joining table in order to obtain desired tuples i.e. no optimization
> is done.
>I refused to believe  so. I need your help to convince
> them that it is okay to join tables --two or three tables :)-- so we can 
> eliminate redundancies from the database. I also want to know how
> postgres optimizes a join query.  Thank you very much your help.

Umm, I don't know where they got that idea.  Tom Lane can go into details 
as the optimizer guru, but as a start, if you use EXPLAIN on your queries,
the system will tell you what plan it would use if you were to run the
query.  That will get you some idea of what the system is doing.

I've really had no problems until about 8-12 joined tables, at which point
you might tickle a bug in some versions of postgres that causes bogus plans
to be generated.




Re: [SQL] how to continue a transaction after an error?

2000-11-13 Thread Stephan Szabo

On Mon, 13 Nov 2000, Cristi Petrescu-Prahova wrote:

> Hello,
> 
> I would like to insert a bunch of rows in a table in a transaction. Some of
> the insertions will fail due to constraints violation. When this happens,
> Postgres automatically ends the transaction and rolls back all the previous
> inserts. I would like to continue the transaction and issue the
> commit/rollback command myself.
> 
> How to do it?
> Is there any setting I am missing?
> Is it possible at all?

Currently, postgres treats all errors as critical ones that require a
complete rollback of the transaction (although I believe it does not
immediately do the rollback; it should go into an abort state).  There's been
on-and-off talk about changing this, but nothing has really been decided, I
believe.





Re: [SQL] how to continue a transaction after an error?

2000-11-13 Thread Stephan Szabo


> >When you start a transaction,
> >you're telling the backend "treat all of these statements as one, big,
> >all or nothing event." 
> 
> This is actually contrary to the standard. Statements are atomic, and a
> failed statement should not abort the TX:
> 
> The execution of all SQL-statements other than SQL-control 
> statements is atomic with respect to recovery. Such an 
> SQL-statement is called an atomic SQL-statement.
> 
> ...
> 
> An SQL-transaction cannot be explicitly terminated within an 
> atomic execution context. If the execution of an atomic 
> SQL-statement is unsuccessful, then the changes to SQL-data or schemas
> made by the SQL-statement are canceled.

This I agree with in general.  You can almost defend the current behavior
by saying all errors cause an "unrecoverable error" (since I don't see a
definition of unrecoverable errors), but we're doing that wrong too, since
that should initiate a rollback as opposed to our current behavior.
Admittedly, having an SQLSTATE-style error code would help once we had
one, so you could actually figure out what the error was.

> >If you want (need, if you're using large objects) transactions, you
> >really need to think about your transaction boundries. Don't just wrap
> >your whole frontend in one big, long lived transaction
> 
> Totally agree; transactions will keep locks. Release them as soon as the
> business rules and application design says that you can. Note that
> commit-time constraints may make the commit fail; in this case PG will
> force a rollback, but it *should* allow corrective action and another
> attempt at a commit.

This I disagree with for commit time constraints unless stuff was changed
between the draft I have and final wording:
"When a  is executed,
 all constraints are effectively checked and, if any constraint
 is not satisfied, then an exception condition is raised and the
 transaction is terminated by an implicit ."

Other places they are a little less explicit about failed commits, but it
certainly allows a cancelation of changes:
"If an SQL-transaction is
 terminated by a  or unsuccessful execution of
 a , then all changes made to SQL-data or schemas
 by that SQL-transaction are canceled. Committed changes cannot be
 canceled. If execution of a  is attempted, but
 certain exception conditions are raised, it is unknown whether or
 not the changes made to SQL-data or schemas by that
 SQL-transaction are canceled or made persistent.

And I think this makes sense.  If you're committing, then you're saying
you're done and that you want the transaction to go away.  If you just
want to check deferred constraints, there's the SET CONSTRAINTS mode.  I could
almost see certain recoverable internal-state things being worth not doing
a rollback for, but not constraints.




Re: [SQL] how to continue a transaction after an error?

2000-11-14 Thread Stephan Szabo


On Tue, 14 Nov 2000, Philip Warner wrote:

> >I could
> >almost see certain recoverable internal state things being worth not doing
> >a rollback for, but not constraints.
> 
> Not true, eg, for FK constraints. The solution may be simple and the
> application needs the option to fix it. Also, eg, the triggered data
> *could* be useful in reporting the error (or fixing it in code), so an
> implied rollback is less than ideal. Finally, custom 'CHECK' constraints
> could be designed for exactly this purpose (I have done this in DBs before).

I was actually talking about commit-time rollback there, not statement
time.  I could theoretically see a commit-time non-rollback in the case of a
presumed transient internal-state problem (though I can't think of one in
practice).

For a commit-time check, I still think preceding the commit with SET
CONSTRAINTS ALL IMMEDIATE is better if you actually want to see whether
you're safe to commit.





Re: [SQL] Using Array-Values in subselect

2000-11-14 Thread Stephan Szabo


If you look in contrib in the source, there is a set of
array operators (functions), including an element-in-array test.
That'll probably do what you want (you don't actually write an IN;
you use one of those operators instead).

On Tue, 14 Nov 2000, Alvar Freude wrote:

> Roberto Mello schrieb:
> > Looks like you want something similar to Oracle's CONNECT BY statement.
> > There are some solutions to that. At OpenACS we had to deal with that so
> > implemented something like what you described here.
> > However, the methods described by Joe Celko is his book "SQL For
> > Smarties" on chapters 28 and 29 (I think) are better and more robust. If
> > you search for "trees" and related topics at the openacs.org and
> > arsdigita.com's web/db web bulletin boards, you'll find several hits.
> 
> Thanks, the solutions discussed on these sites suggest the use of lookup
> tables with parent and child for each record. I have thought about the
> same, but thought the arrays in Postgres could provide a more elegant
> way of building a tree.
> 
> I want to migrate from MySQL to Postgres mainly because the array and
> subselect features. And finally the article at
> http://www.phpbuilder.com/columns/tim20001112.php3 convinced me it could
> run on my small machine. But it appears to me that the arrays always are
> handled like strings and are not really useful for anything advanced. I
> can't really believe it, please prove me wrong! I have set great hope in
> Postgres. :)





Re: [SQL] Bug or feature

2000-11-22 Thread Stephan Szabo


What is it actually giving you as an error
message in the failing case? Someone pointed
out a problem in deferred constraints recently
and I think this may be related. 

Stephan Szabo
[EMAIL PROTECTED]

On Mon, 20 Nov 2000, Kyle wrote:

> Here's an interesting test of referential integrity.  I'm not sure if
> this is working the way it should or if it is a bug.
> 
> I'm trying to update the primary key in records that are linked together
> from the two different tables.  My initial assumption was that because
> of the cascade, I could update the primary key only in the gl_hdr table
> and it would cascade to the gl_items table.  I have two separate updates
> of gl_items shown below.  One updates the key in gl_items explicitly,
> the other tries to wait and allow the cascade to do it.  Only the first
> one works (try commenting one in/out at a time).
> 
> Unless I update the glid explicitly in gl_items, I get an RI violation
> when it tries to update the gl_hdr record.
> 
> 
> --Test RI in the general ledger
> 
> drop table gl_hdr;
> drop table gl_items;
> 
> create table gl_hdr (
> glidint4,
> hstat   varchar(1),
> constraint gl_hdr_pk_glid primary key (glid)
> );
> 
> create table gl_items (
> glidint4,
> inumint4,
> istat   varchar(1),
> primary key (glid, inum),
> 
> constraint gl_items_fk_glid
> foreign key (glid) references gl_hdr
> on update cascade
> deferrable initially deferred
> );
> 
> insert into gl_hdr (glid,hstat) values (1,'w');
> insert into gl_items (glid,inum,istat) values (1,1,'w');
> insert into gl_items (glid,inum,istat) values (1,2,'w');
> 
> select * from gl_hdr h, gl_items i where h.glid = i.glid;
> 
> begin;
> 
> --This one works:
> --  update gl_items set glid = 1000, istat = 'c' where glid = 1;
> 
> --This one doesn't:
> update gl_items set istat = 'c' where glid = 1;
> 
> 
> 
> update gl_hdr   set glid = 1000, hstat = 'c' where glid = 1;
> end;
> 
> select * from gl_hdr h, gl_items i where h.glid = i.glid;
> 
> 
> 




Re: [SQL] Cache lookup failure

2000-11-29 Thread Stephan Szabo


Do you have any triggers, rules or check constraints defined on the
table?  If so, you may have fallen prey to the fact that you cannot
drop and re-create a function that's used in a trigger without also
recreating the trigger.

Stephan Szabo
[EMAIL PROTECTED]

On Wed, 29 Nov 2000, Najm Hashmi wrote:

> Hi All,
> I am trying to do a simple insert, and I am getting cache look failure
> error.
> Could someone explain what causing it? My query and error are given
> below:
> fliprdb=# insert into collection(name, artist_id) values('El Baile
> Aleman',2);
> ERROR:  fmgr_info: function 24011: cache lookup failed
> Thanks in advance for all your help
> Regards
> Najm
> 
> 




Re: [SQL] delete rows

2000-11-29 Thread Stephan Szabo


On Thu, 30 Nov 2000, Astrid Hexsel wrote:

>  After trying for two weeks to delete more than one row at the time from a form
> - checkbox input, I was told that the only way different rows would be inputed
> or changed with the information from the form would be with the INSERT command.
> 
> 
> This way the best alternative I could think of, would be 
> to still have my table cart:
> For eaxmple:
> 
> session_id|range_id|colour_id|
> --
> 122   | 4004-4 | 4002 
> 122   | 4004-4 | 4003
> 122   | 4004-4 | 4004
> 
> which contains all the items added to the cart.
> 
> Then when a deletion is required, the information from the form would be inserted
> into "cartchange table":
> For example:
> 
> session_id| colour_id|condition|
> -
> 122   | 4001   | yes
> 122   | 4002   | yes 
> 
> 
> Now what I am trying to do is to delete the rows
> from cart that appear in the cartchange table.
> 
> I have tried to join the tables , but I could not find the right query to delete
> similar rows.

Would a subselect work here?  Something like:
delete from cart where exists (select * from cartchange where
 cartchange.session_id=cart.session_id and
 cartchange.colour_id=cart.colour_id);




Re: [SQL] I get an error with Foreign Keys

2000-11-30 Thread Stephan Szabo


On Thu, 30 Nov 2000, Brian Powell wrote:

> I have around 40 tables defined, many relying on data in others so I have
> foreign key constraints.  However, whenever I try to delete from any table,
> I get:
> 
> ERROR:  SPI_execp() failed in RI_FKey_cascade_del()
> 
> What generates this?  I need assistance in knowing where to begin.  I
> checked again, and I could not find any circular foreign key constraints...

Hmm, that might mean that the statement being generated inside the
trigger is incorrect in some way that isn't being reported.  Can you 
send a schema dump of your tables and I'll see if I can reproduce it.

BTW: The code in question is probably in src/backend/utils/ri_*.c [I
can't remember the exact ending of the filename].  It should be building
a statement and attempting to execute it using the SPI interface and
it looks like the execute is failing I'd guess by the message.




Re: [SQL] FOREIGN KEY errors.

2000-12-08 Thread Stephan Szabo


There was a bug (which should be fixed for 7.1) that got the
arguments wrong for the ALTER TABLE-time check of the existing data.
I think I should be able to put a patch together to fix it once
I get a copy of the 7.0.3 source.

Can you send the table schema as well so I can test it out?

Stephan Szabo
[EMAIL PROTECTED]

On Thu, 7 Dec 2000, Joseph Shraibman wrote:

> When trying to alter a table and add a foreign key, I am getting this
> error if the table has any data in it:
> 
> playpen=# alter table message add FOREIGN KEY (pod,originator)
> REFERENCES usertable (podkey,userkey);
> NOTICE:  ALTER TABLE ... ADD CONSTRAINT will create implicit trigger(s)
> for FOREIGN KEY check(s)
> ERROR:  constraint : table usertable does not have an attribute
> originator
> 
> If I do the alter before I put any data in the table:
> 
> playpen=#  alter table message add FOREIGN KEY (pod,originator)
> REFERENCES usertable (podkey,userkey);
> NOTICE:  ALTER TABLE ... ADD CONSTRAINT will create implicit trigger(s)
> for FOREIGN KEY check(s)
> CREATE
> 
> playpen=# select version();
>version   
> -
>  PostgreSQL 7.0.3 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66
> (1 row)
> 
> Reversing the order creates a different message:
> playpen=# alter table message add FOREIGN KEY (originator,pod)
> REFERENCES usertable (userkey,podkey);
> NOTICE:  ALTER TABLE ... ADD CONSTRAINT will create implicit trigger(s)
> for FOREIGN KEY check(s)
> ERROR:  constraint : table usertable does not have an attribute
> pod
> 
> 
> Am I just misunderstanding how to use FOREIGN KEY?  Then why would it
> work one time and not the other?
> 
> http://www.postgresql.org/docs/aw_pgsql_book/node159.html doesn't have
> any column names after 'refrences '.
> 
> 
> -- 
> Joseph Shraibman
> [EMAIL PROTECTED]
> Increase signal to noise ratio.  http://www.targabot.com
> 




Re: [SQL] Null comparison

2000-12-13 Thread Stephan Szabo

On Wed, 13 Dec 2000, Al Lewis wrote:

> I am migrating to postgress from msql and am encountering numerous problems
> in the differences in NULL usage and comparison.
> 
> 1. Why are 2 fields not equal if they are both NULL?

Because that's what the SQL spec says.  If either value is NULL,
the result of the comparison is unknown (NULL isn't really a value;
the question becomes: is this unknown value equal to some other
unknown value?).

> 2. Is there a way to easily modify my sql select statements to account for
> either column being NULL and having them return true if they both are NULL?
>   select a.name, b.cost from a, b where a.type=b.type

where a.type=b.type or (a.type is null and b.type is null)
should do it.

> I'd like to make this as easy as possible so I can put it into a
> "translation" function.  Currently I have a regsub that handles <> and
> NULLs, since <> doesn't work on a NULL field.





Re: [Re: [SQL] postgres]

2000-12-15 Thread Stephan Szabo

On 14 Dec 2000, Marc Daoust wrote:

> Thank you very much Reberto,
> 
> It appears that your co-workers are not inerested in potential funding.
> For the rude onesmaybe/perhaps people like myself were givin the email
> address ever think of that.
> 
> A potential client that is having second thoughts.

PostgreSQL does have two companies that do work on it and offer support
(PgSQL, Inc. and Great Bridge; pgsql.com and greatbridge.com
respectively).  However, the project is an open-source one, and most of
us here on the mailing list do not belong to either organization.
pgsql-general would probably be a more appropriate list for these sorts
of questions, although there isn't a huge amount of segmentation between
them.

You can also follow the commercial support link on the website
(www.postgresql.org).




Re: [SQL] SQL query not working when GROUP BY / HAVING is used

2000-12-19 Thread Stephan Szabo

On Tue, 19 Dec 2000 [EMAIL PROTECTED] wrote:

> Hello there
> 
> I have a question regarding a SQL statement.
> 
> When I execute (and that's what I need)
> 
> SELECT Zylinder.Z_durch_soll, Zylinder.Z_id, Zylinder.Z_durch_ist,
> ((Zylinder.Z_durch_soll+0.12) - Zylinder.Z_durch_ist) AS durchmesserdelta,
> (Zylinder.Z_durch_soll+0.12) AS effektiv
> FROM Auftrag,Zylinder_Typen, Zylinder
> WHERE Auftrag.A_nr = '11'
> AND (  Auftrag.A_farbe_1  > '0' AND Zylinder_Typen.Z_farbe='1' AND
> Zylinder_Typen.Z_SW='0')
> AND Zylinder_Typen.Z_durch_soll = Zylinder.Z_durch_soll
> AND Auftrag.A_Ztyp=Zylinder.Z_typ
> AND Z_A_nr = NULL
> AND Z_status = 'zcu'
> GROUP BY Zylinder.Z_durch_soll
> HAVING durchmesserdelta >= 0.085
> ORDER BY Zylinder_Typen.Z_durch_soll desc

This query is not legal SQL.  All columns in the select list of
a group by query must either be grouped columns or set value
functions (pretty much anyway).  The general construct is legal
but there are syntax rules for GROUP BY that you are violating.

If Z_id and Z_durch_ist are unique for each value of Z_durch_soll,
you might try grouping on all of them.  If not, the query above
is indeterminate, since you are not specifying which Z_id and
which Z_durch_ist to use for a particular Z_durch_soll value.





Re: [SQL] substring ..

2000-12-19 Thread Stephan Szabo

On Tue, 19 Dec 2000, Jeff MacDonald wrote:

> hi folks..
> 
> i want to do this to a datetime field..
> 
> select foo from table where substr(datefoo,1,11) = '2000-12-14';
> 
> it returns no results yet..
> 
> select substr(datefoo,1,11) does return some values that say
> 2000-12-14
> 
> any clues ?

My guess is that it's a trailing space thing.  '2000-12-14' is
only 10 characters not 11...  What you're actually getting is
probably '2000-12-14 '
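
A couple of possible rewrites (a sketch, using the same names as the
query above):

select foo from table where substr(datefoo,1,10) = '2000-12-14';
-- or compare against the day's boundaries instead of a string:
select foo from table
 where datefoo >= '2000-12-14' and datefoo < '2000-12-15';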





Re: [SQL] how to alter/drop check contraint?

2000-12-20 Thread Stephan Szabo


There's no good way currently to drop the check constraint
really.  You probably can do it by removing the row for the
constraint from pg_relcheck and changing the pg_class row
for the table to have the correct number in relchecks.
In 7.1, you'd probably be able to add the check constraint
using ALTER TABLE ADD CONSTRAINT, but before that adding
the constraint would probably be difficult.

You're probably best off dumping the table, changing the
constraint and then restoring it.

[If you don't have any important data and you like the idea
of potentially causing yourself great deals of pain and suffering,
it might be possible to change the 10 to 20 by directly editing
the pg_relcheck row.  I have not attempted to do this though,
so I'm not sure it would work.]
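
If you do go the catalog route, an untested sketch for dropping the check
on the example table below (back up first; the table name 'a' and its
single constraint come from the question):

DELETE FROM pg_relcheck
 WHERE rcrelid = (SELECT oid FROM pg_class WHERE relname = 'a');
UPDATE pg_class SET relchecks = 0 WHERE relname = 'a';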

Stephan Szabo
[EMAIL PROTECTED]

On Wed, 20 Dec 2000, hubert depesz lubaczewski wrote:

> the subject should be self-explanatory, but:
> i have table:
> create table a (b text check (length(b)<10));
> and for some reason i want to drop this check or alter this to length(b)<20.
> how can i do so?
> 
> or maybe using trigger in plpgsql will be better?
> how to make trigger which will stop insert or update when something occurs?




Re: [SQL] Create table doesn't work in plpgsql

2000-12-23 Thread Stephan Szabo


I believe (although I haven't tried it) that pltcl will allow
you to do things such as this.

On Thu, 21 Dec 2000, Volker Paul wrote:

> Hi,
> 
> > I don't think you can use DDL(data definition language) in PL/SQL.
> > create table is not DML(data munipulation language) instead
> > it's a DDL.
> Thanks, but that leaves me with a problem.
> What I really want to do is something like
> select str from person where id=1234;
> where str is a string that contains an expression like 
> famname || ', ' || givname
> i.e. the final select is 
> select famname || ', ' || givname from person where id=1234;
> I know it's possible by building the select e.g. in bash
> and calling psql with it as an argument, but do you see a possibility
> that is closer to Postgres, e.g. in plpgsql?




Re: [SQL] Invoice number

2000-12-23 Thread Stephan Szabo


> I'm wondering how people creates guaranteed sequential numbers - in my case 
> for invoice numbers.
> 
> - Sequences are not rollback'able.
> - It seems overkill to have a table just for this.
> - What else?

You'll probably need a table (although you may be able to get away with
a single table for all the counters you need).  The big issue here is
locking: a second transaction looking to get a number needs to wait for
an earlier transaction that has already gotten a number to either commit
or roll back, so that the number can be reused if necessary.
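
A minimal sketch of the counter-table approach (table and column names
here are made up):

CREATE TABLE invoice_counter (last_invoice int4);
INSERT INTO invoice_counter VALUES (0);

BEGIN;
-- the UPDATE takes a row lock, so concurrent transactions queue up here
UPDATE invoice_counter SET last_invoice = last_invoice + 1;
SELECT last_invoice FROM invoice_counter;  -- the number for this invoice
-- ... create the invoice ...
COMMIT;  -- a ROLLBACK instead leaves the number free for the next transaction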




Re: [SQL] Optimization recommendations request

2000-12-29 Thread Stephan Szabo


What does explain show for your query?

On Sat, 23 Dec 2000, Joe Conway wrote:

> Hello,
> 
> I'm working on an application where I need to design for one table to grow
> to an extremely large size. I'm already planning to partition the data into
> multiple tables, and even possibly multiple servers, but even so each table
> may need to grow to the 10 - 15 million tuple range. This table will be used
> for a keyed lookup and it is very important that the query return in well
> under a second. I've done a small test using a dual ppro 200 server with 512
> MB RAM, running RedHat Linux 6.1, and a late November 7.1 pre-beta snapshot
> of PostgreSQL (PGOPTS="-D/opt/pgsql/data -i  -B 25000"). I used a test table
> with about 5 million tuples.
> 
> Details:
> 
> CREATE TABLE foo(
> guid varchar(20) not null,
> ks varchar(20) not null
> );
> 
> --> insert 5,000,000 rows, both guid and ks are 20 byte sha-1 hashes
> -- tried this first
> -- create index foo_idx1 on foo(guid);
> -- then tried
> create index foo_idx1 on foo using HASH (guid);
> 
> SELECT ks FROM foo WHERE guid = 'f9ee1930f4010ba34cc5ca6efd27d988eb4f921d';
> 
> The query currently takes in excess of 40 seconds. I would appreciate any
> suggestions for optimizing to bring this down substantially.




[ADMIN] Re: [SQL] Removing a constraint?

2001-01-01 Thread Stephan Szabo


It should work if you remove all three triggers for the constraint
using DROP TRIGGER.  Don't delete rows from pg_trigger directly unless
you also go through and manually fix the trigger count in the pg_class
row for the relation each trigger is on.
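
A sketch of that (the table names and trigger names here are placeholders;
use the actual names you find):

-- list the triggers on the referencing and referenced tables and pick
-- out the three RI triggers for the constraint:
SELECT c.relname, t.tgname
  FROM pg_trigger t, pg_class c
 WHERE t.tgrelid = c.oid
   AND c.relname IN ('fktable', 'pktable');
-- then drop each one on its own table:
DROP TRIGGER "<trigger name>" ON fktable;
DROP TRIGGER "<trigger name>" ON pktable;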

On Mon, 1 Jan 2001, Michael Davis wrote:

> Does anyone know how to completely and accurately remove or drop a 
> constraint, specifically a foreign key constraint?  I tried to remove a 
> constraint by deleting it's trigger from pg_triggers.  This caused some 
> undesirable side effects with other tables involved with the constraint.  I 
> have several tables that I need to change the column constraints and 
> foreign key constraints on.  Recreating (drop and create) the table every 
> time I need to change a column constraint is a pain because all the objects 
> that reference the table would also need to be recreated (i.e. views and 
> triggers).  How do production DBAs successfully make changes to their 
> tables?
> 
> FYI, I was able to alter table add the same constraint many times.  Is this 
> a problem?  This created a new trigger in pg_triggers every time.




Re: [SQL] Extracting user db tabel info from system tables???

2001-01-05 Thread Stephan Szabo


On Fri, 5 Jan 2001, Marc Cromme wrote:

> I have some problems on making the right joins on system tables to extract
> the 
> structure of some user defined tables/databases. I use PostgreSQL 7.0.2 on
> an 
> RedHat 7.0 box.
> 
> PROBLEM 1: I tried to make a Foreign key constraint from the primary key of 
> table 'pred' to the table 'prey'. The PRIMARY KEY ("yeartime", "pred",
> "pred_age") 
> of ' pred' should be a Foreign key in 'prey'. Hovever, when I make a dump I
> get this: 

Later on in the dump, there should be a line of the form:
CREATE CONSTRAINT TRIGGER ...
referencing the tables in question.  It'll probably be near the end.  We
currently dump the fk constraints as their internal representation
(constraint triggers) rather than as the original constraints.

> PROBLEM 2:
> I try to make some queries on POSTGRES system tables to determine the table
> definitions 
> dynamically in a PHP script- the idea is that I do not want to toutch the
> PHP code in case 
> that the database table structure changes. I can retrieve the structure of
> the 'prey' table
> primary keys by the following SQL query:  
> 
> baltic=> SELECT a.attname, ic.relname, i.indisunique, i.indisprimary 
> FROM pg_class bc, pg_class ic, pg_index i, pg_attribute a 
> WHERE bc.oid = i.indrelid and ic.oid = i.indexrelid and a.attrelid = bc.oid 
> and bc.relname = 'prey' and (i.indkey[0] = a.attnum or i.indkey[1] =
> a.attnum 
> or i.indkey[2] = a.attnum or i.indkey[3] = a.attnum or i.indkey[4] =
> a.attnum 
> or i.indkey[5] = a.attnum or i.indkey[6] = a.attnum or i.indkey[7] =
> a.attnum) 
> ORDER BY ic.relname, a.attname;
> 
>  attname  |  relname  | indisunique | indisprimary 
> --+---+-+--
>  pred | prey_pkey | t   | t
>  pred_age | prey_pkey | t   | t
>  prey | prey_pkey | t   | t
>  prey_age | prey_pkey | t   | t
>  yeartime | prey_pkey | t   | t
> (5 rows)
> 
> Question 2: How can I avoid the sequences of OR statements, which are
> errorprone (and unelegant) 
> in case that there are more than 7 fields in the primary key?

You could probably look at the array stuff in contrib for the
element-in-array functions/operators and use that.

> PROBLEM 3:
> I can get a nice description of all the 'prey' table fields by issuing the
> following SQL query:
> 
> baltic=> SELECT c.relname,  u.usename, c.relacl, a.attname, t.typname,
> a.attlen, a.attnotnull 
> FROM pg_class c, pg_attribute a, pg_type t , pg_user u 
> WHERE u.usesysid = c.relowner AND c.relname = 'prey' AND a.attnum > 0 AND
> a.attrelid = c.oid 
> AND a.atttypid = t.oid 
> ORDER BY a.attnum;
> 
>  relname | usename | relacl  | attname  | typname | attlen |
> attnotnull  
> -+-+-+--+-++
>  
>  prey| mac | {"=r","mac=rw"} | yeartime | float8  |  8 | t
>  prey| mac | {"=r","mac=rw"} | pred | varchar | -1 | t
>  prey| mac | {"=r","mac=rw"} | pred_age | int8|  8 | t
>  prey| mac | {"=r","mac=rw"} | prey | varchar | -1 | t
>  prey| mac | {"=r","mac=rw"} | prey_age | int8|  8 | t
>  prey| mac | {"=r","mac=rw"} | wstom| float8  |  8 | f
>  prey| mac | {"=r","mac=rw"} | stomcon  | float8  |  8 | f
> (7 rows)
> 
> 
> QUESTION 3: How do I merge the two above queries to get a table like this
> (Outer Join  Union???
> I know how to emulate outer joints by an Union and Where ... Not In
> (select..), but I can't find out
> how to join two queries, and not two tables..)

Your best bet is probably to make views for the two queries and then do
the outer join using those in which case they effectively look like 
tables.  You could do it without the views, but that'll be kind of 
long and hard to read.

> QUESTION 4: How do I extract also information on foreign keys from the
> system tables, 
> and add two columns to the above table like the following?
> 
> fkey   | ftable
> ---+-
> pred_pkey  | pred
> pred_pkey  | pred
> pred_pkey  | pred
> NULL   | NULL
> NULL   | NULL
> NULL   | NULL
> NULL   | NULL

The foreign key constraint information is stored in pg_trigger.  The
problem is that there's no good way to get the column information from
within sql right now (they're stored as arguments in tgargs).




Re: [SQL] Possible bug? WAS :Bad (null) varchar() externalrepresentation.

2001-01-10 Thread Stephan Szabo

On Thu, 11 Jan 2001, Justin Clift wrote:

> I haven't seen a mention of a maximum number of constraints of similar
> applying to a table.  If so, then could someone please point me to it...
> 
> The reason I mention this is because I've found what seems to be causing
> this problem I'm experiencing with Postgres 7.03 :
> 
> CREATE TABLE "staff_details" (
> "userid" character varying(24) NOT NULL,
> "password" character(13) NOT NULL,
> "name" character varying(96) NOT NULL,
> "role" int2 NOT NULL,
> "dob" date NOT NULL,
> "phone_one" character varying(14) NOT NULL,
> "phone_two" character varying(14),
> "phone_three" character varying(14),
> "address" character varying(280),
> "status" int2,
> "managers_notes" character varying(600),
> CONSTRAINT "staff_details_uesrid" CHECK ((length(userid) < 25)),
> CONSTRAINT "staff_details_password" CHECK ((length("password") <
> 14)),
> CONSTRAINT "staff_details_name" CHECK ((length(name) < 97)),
> CONSTRAINT "staff_details_dob" CHECK
> (date_ge(date(("timestamp"('2001-01-08'::date) - '18 years
> 00:00'::"interval")), dob)),
> CONSTRAINT "staff_details_phone_one" CHECK ((length(phone_one) <
> 17)),
> CONSTRAINT "staff_details_phone_two" CHECK ((length(phone_two) <
> 17)),
> CONSTRAINT "staff_details_phone_three" CHECK
> ((length(phone_three) < 17)),
> CONSTRAINT "staff_details_address" CHECK ((length(address) <
> 281)),
> CONSTRAINT "staff_details_managers_notes" CHECK
> ((length(managers_notes) < 601)),
> PRIMARY KEY ("userid")
> );
> 
> When I attempt to insert data into this table, I get the following error
> :
> 
> foobar=# insert into staff_details values ('',
> encrypt('foo'), 'Joshua', 1, '1970-07-01', '(03) 9867 5432', '(041) 309
> 2819', NULL, '1 Blankety-Blank Way\nBazzville', NULL, NULL);
> 

Current sources seem to insert this fine (I don't have a 7.0 system to
test on anymore).  Also, aren't a lot of these length checks meaningless?
I think the values are converted to the column's type first, so phone_one
can never be longer than 14 characters anyway (it's a varchar(14)), which
makes a check for length < 17 pointless.




Re: [SQL] Using a rule as a trigger.

2001-01-10 Thread Stephan Szabo


As someone else said a serial is probably easier, but 
a trigger is probably a better bet than a rule for this purpose.
Using a plpgsql before insert trigger will do it.
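
A minimal sketch of that trigger, using the table and sequence from the
question below (assuming the plpgsql language is installed):

CREATE FUNCTION topics_set_id() RETURNS opaque AS '
BEGIN
  IF NEW.id IS NULL THEN
    NEW.id := nextval(''nextid'');
  END IF;
  RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER topics_ins BEFORE INSERT ON topics
  FOR EACH ROW EXECUTE PROCEDURE topics_set_id();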

On Wed, 10 Jan 2001, Andrew Higgs wrote:

> Hi all,
> 
> I have looked at some previous posting and thought that I had found 
> exactly what I need. What I need is to insert an id (from a sequence) 
> when a new record is inserted. The following example almost does what I 
> need :
> 
> CREATE TABLE  topics (id int, topic varchar(50), descriotion text);
> CREATE SEQUENCE nextid start 1;
> CREATE RULE ins_topic AS ON INSERT TO topics WHERE id ISNULL DO UPDATE 
> topics SET id=nextval('nextid') WHERE  id ISNULL;
> 
> 
> This example updates the last insert. I need it to update the currnet 
> insert. How do I do this?




Re: [SQL] How to display a unixtimestamp from a timestamp record?

2001-01-15 Thread Stephan Szabo


>  select user_name, date_part( 'epoch' , timestamp 'acct_timestamp') from
> tbacct limit 2;
> 
> it said ERROR:  Bad timestamp external representation 'acct_timestamp'
> how should i represent date_part( 'epoch' , timestamp 'acct_timestamp')
> to work?

select user_name, date_part ('epoch', acct_timestamp) from tbacct limit 2;
should work... the single quotes are making a literal string value, so
your query is saying take the epoch of the timestamp represented by the
literal 'acct_timestamp' rather than the value of the field.




Re: [SQL] deferred constraints failing on commit

2001-01-18 Thread Stephan Szabo


Okay, yep, seems like a garden variety bug to me...

What's happening is that the update trigger is checking to make sure
that there are no rows referencing the one that was changed, but that's
possibly not sufficient for the deferred NO ACTION case.  The real test
has to be that no rows fail the constraint after the update (i.e. no
rows that reference the changed one without also referencing a row that
does exist in the table).  There is some confusion over some details of
the spec that we're still working out.

This has been mentioned on -hackers, but no one has been completely able
to determine what's supposed to happen for all of the combinations of
referential actions in these types of deferred cases.

On Tue, 16 Jan 2001, Michael Richards wrote:

> Here is a test case that illustrates the problem. I figured I was 
> doing it all wrong before and didn't bother to distill and include a 
> test case.
> 
> create table objects(
> revisionid int4,
> primary key (revisionid));
> 
> create table objcatalog(
> minrev int4,
> maxrev int4,
> foreign key (minrev) references objects(revisionid) INITIALLY 
> DEFERRED,
> foreign key (maxrev) references objects(revisionid) INITIALLY 
> DEFERRED);
> 
> insert into objects values (999);
> insert into objcatalog values (999,999);
> 
> begin;
> SET CONSTRAINTS ALL DEFERRED;
> update objects set revisionid=1;
> insert into objects values (999);
> 
> select * from objects;
> select * from objcatalog;
> commit;




Re: [SQL] deferred constraints failing on commit

2001-01-18 Thread Stephan Szabo


Can you send the full schema of the tables you are using for
this?  

On Tue, 16 Jan 2001, Michael Richards wrote:

> Hi.
> 
> I'm having trouble with committing a transaction. Intuitively it 
> should work but does not.
> 
> I've got a table with 2 foreign keys, minrev and maxrev. They refer 
> to a the revisionid value in another table. I need to update the 
> second table to reduce the revisionid, then insert a new row. At the 
> end of this all the keys match up yet the commit fails.
> 
> urdr=> begin;
> BEGIN
> urdr=> update objects set revisionid=2 where id=2 and 
> revisionid=;  
>
> UPDATE 1
> urdr=> insert into objects 
> (id,typeid,repositoryid,parentid,deleted,permissions,revisionid,name) 
> values (2,2,1,NULL,'f',NULL,,'test.sql'); 
> INSERT 246107 1
> urdr=> select id,revisionid from objects; 
>  id | revisionid 
> +
>   1 |   
>   2 |  1
>   2 |  2
>   2 |   
> (4 rows)
> urdr=> select * from objcatalog ;
>  objectid | repositoryid |  minrev  |  maxrev  |   key|  data
> --+--+--+--+--+--
> 2 |1 |  |  | mimetype |text/plain
> (1 row)
> 
> urdr=> commit;
> ERROR:   referential integrity violation - key in objects 
> still referenced from objcatalog
> 
> At commit all the keys check out properly. minrev and maxrev both 
> point to the same revisionid in the row we just inserted.
> 
> Is this a bug or me just misreading how things should work again?
> 
> -Michael
> _
>  http://fastmail.ca/ - Fast Free Web Email for Canadians
> 




Re: [SQL] problem to count (distinct number)

2001-01-18 Thread Stephan Szabo


What version are you using?  I believe this was added in 7.0.x, but
I could be wrong about that.  A query of this sort works on my 
7.1beta3 system.

On 17 Jan 2001, Mikael Hedin wrote:
> I have a table with a column (int4), and I want to know how many
> different numbers there are.  In an example book I saw the query
> 
> SELECT COUNT (DISTINCT data ) FROM media;
> 
> by then I get 
> 
> ERROR:  parser: parse error at or near "distinct"
> 
> Is the query in error or does PostgreSQL not support this?  How do I
> then get the number of distinct data from my table?




Re: [SQL] notice on transaction abort?

2001-01-18 Thread Stephan Szabo


On Thu, 18 Jan 2001, Kovacs Zoltan Sandor wrote:

> I realized that an error will abort a transaction all the time.
> Unfortunately I usually send millions of rows of INSERTs in a transaction
> and if there is some error in the middle of the code I got the message
> if I try to INSERT a new row:
> 
> NOTICE:  current transaction is aborted, queries ignored until end of
> transaction block
> *ABORT STATE*
> 
> OK, this is a nice feature. But I sometimes make mistakes at the end of
> the transaction and I got no such message. There is no easy way to find
> out if there was an aborted transaction or not. Can I consider this as a
> bug? Probably not, but I suggest dropping something similar immediately
> after aborting. Your opinion, please? :-)

Well, you should have gotten an error message from the statement that was
in error in any case, but maybe a message on the commit/end that says
that the transaction was aborted due to errors would be nice.




Re: [SQL] Making a foreign key chain - good idea or bad idea?

2001-01-24 Thread Stephan Szabo


On Wed, 24 Jan 2001, Frank Joerdens wrote:

> I just did something which seems to work alright and which makes sense
> to me now but which I have a funny feeling about. It may be good
> standard practice (and I just don't know about it) or dangerously
> foolish or just plain silly: I created a foreign key reference on a
> column that is also the primary key for this table as in
> 
> create table institute (
> idint4 references index ( id ) PRIMARY KEY,
>  . . .
> 
> and then used that column as a reference for a foreign key constraint in
> a couple of other tables:
> 
> create table boss (
> institute_id   int4 references institute ( id ),
>  . . . 
> 
> create table staff (
> institute_id  int4 references institute ( id ),
>  . . . 
> 
> I am not really sure what happens when I delete or modify the id column
> in the table at the top of this "chain". Except for this uncertainty I
> don't think this scheme would pose a problem, but I may be wrong. Can
> anyone enlighten me?

If you mean index(id): as long as there exists an institute(id) that
references it, you won't be allowed to delete it or update it to a
distinct value.  And you won't be able to delete or update
institute(id) as long as at least one boss or staff row references it.
You know there shouldn't be orphaned boss or staff rows, because those
can't exist without an institute(id) row of the correct value, and that
in turn requires the index(id) value.





Re: [SQL] Don't want blank data

2001-01-25 Thread Stephan Szabo


On Thu, 25 Jan 2001, David Olbersen wrote:

> Greetings,
>   Is there a way to have postgresql always return a value for each row
>   requested? To be more clear, if I were using a Perl SQL hybrid I would write
>   something like
> 
>   SELECT computer_ip or 'unset' FROM computers;
> 
>   So that if computers.computer_ip is NULL or '' I will get 'unset' back from
>   the database. I hope this makes sense and somebody can point me in a good
>   direction

Perhaps:
 select case when computer_ip is null or computer_ip='' then 'unset'::text
   else computer_ip end from computers;

(the ::text should probably be whatever type computer_ip is)...




Re: [SQL] SQL Help

2001-01-26 Thread Stephan Szabo


On Fri, 26 Jan 2001, Mark A. Summers wrote:

> I am having trouble with the following query taking forever:
> -
> SELECT * FROM ret108108_00, product
> WHERE ret108108_00."isbn" = product."Item1"
> 
> AND   product."SuperCategory" = '1'
> AND   product."PublisherCode" = 'ZON'
> ORDER BY ret108108_00.qty DESC LIMIT 100
> 
> The problem is the second AND -- if I take this out it runs fine -- is there
> any way to optimize it ?  I had a similar problem with just Category so I
> created a new field in the product file called FullCat which combined
> SuperCategory and Category and thus eliminating the 2nd AND clause -- and
> that fixed it -- I just didn't want to do the same thing with Publisher.
> Before I tried to index Category, SuperCategory ... nothing seemed to help

Have you run a VACUUM ANALYZE on the table?  And what does
EXPLAIN show for the query?




Re: [SQL] how to simulate UPdate ...?

2001-02-05 Thread Stephan Szabo


Isn't this equivalent to?
update table1 set na1=
 (select table2.na1 from table2 where
   table1.no=table2.no);

Which can also be invalid if the subselect can return multiple rows,
since there'd be no way to know which table2.na1 you'd want without
more information.

On Fri, 2 Feb 2001, guard wrote:

> I not run update in Pgsql
> 
> please tell me other method
> thanks
> 
> ==
> update table1 a set na1=
> (select table2.na1 from table2,table1
> where table1.no=table2.no
> and a.no=table1.no);
> 
> 
> 
> 




Re: [SQL] parse error in create index

2001-02-06 Thread Stephan Szabo


Functional indexes cannot currently take constant arguments to the
function, so it's complaining about the constant 'month'.  The current
workaround is probably to create a function that does the
date_part('month', column) for you and then use that function in the
index creation.
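
A sketch of that workaround for the geburtstage table from the message
below (untested; note that a function used in an index needs to be
marked cachable):

CREATE FUNCTION monat(date) RETURNS float8 AS
  'SELECT date_part(''month'', $1);'
  LANGUAGE 'sql' WITH (iscachable);

CREATE INDEX monat_tag ON geburtstage (monat(geburtstag));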

On Sat, 3 Feb 2001, Hubert Palme wrote:

> Hi,
> 
> could someone, please, explain me the following parse error?
> 
> adressen=> \d geburtstage
> Table= geburtstage
> +--+--+---+
> |  Field   |  Type|
> Length|
> +--+--+---+
> | lfd_nr   | int4
> | 4 |
> | geburtstag   | date
> | 4 |
> +--+--+---+
> adressen=> create index Monat_Tag on geburtstage (date_part('month',
> Geburtstag));
> ERROR:  parser: parse error at or near "'"
> adressen=>
> 
> Thanks in advance,
> 
> -- 
> Hubert Palme
> [EMAIL PROTECTED]
> 




Re: [SQL] Is this a bug, or is it just me?

2001-02-07 Thread Stephan Szabo


Technically you are not allowed to make an FK reference non-unique
values.  What you're closer to looking for is MATCH PARTIAL, which we
don't support (it's a real pain to implement - although the new memory
management stuff in 7.1 may make it less of one - since the fundamental
problem is storing values across iterations of the trigger for the
referential actions on update/delete).

7.1 won't let you define such a constraint with the CREATE TABLE or
ALTER TABLE syntaxes (theoretically, I guess, you could use CREATE
CONSTRAINT TRIGGER to bring the broken constraint over from an older
version).  Right now we don't support constraints that reference views,
because we don't have a mechanism in place to rewrite the constraint so
that it actually works.

On Tue, 6 Feb 2001, Josh Berkus wrote:

> Tom et al.
> 
> Discovered this quirk in foriegn keys:
> 
> In the preliminary version of a database, I added foriegn
> key constraints to a number of tables, linking them to a
> column in a shared reference table (status.status) that was
> only one-half of a composite primary key (and thus the
> values were not unique).  When I tried to delete a row
> containing a "2" in the status column from the status
> relation, I received a Foreign Key violation error event
> though there were other "2"'s in the table still present.
> 
> So ... is this a bug in forign key implementation, or just
> my fault for keying off a non-unique value?
> 
> And, if the latter, is there a way I can construct a foreign
> key constraint that keys onto a view or query?




Re: [SQL] Bug reports for 7.1 beta?

2001-02-07 Thread Stephan Szabo


On Wed, 7 Feb 2001, Josh Berkus wrote:

> Folks,
> 
>   Where do I send bug reports for 7.1 beta?  I;'ve looked on the web
> site, and don't see an address or bugtraq forum.

Probably the best is the pgsql-bugs mailing list at:
[EMAIL PROTECTED]




Re: [SQL] Query never returns ...

2001-02-08 Thread Stephan Szabo


After you load the data, you need to run VACUUM ANALYZE.  That'll
gather statistics on the current data in the table.  Of course, I'm
not sure that'll help in this case.

On Thu, 8 Feb 2001, Brice Ruth wrote:

> Stephan,
> 
> Here is what EXPLAIN shows:
> 
> NOTICE:  QUERY PLAN:
> 
> Sort  (cost=0.02..0.02 rows=1 width=64)
>   ->  Nested Loop  (cost=0.00..0.01 rows=1 width=64)
> ->  Seq Scan on tblmedcond  (cost=0.00..0.00 rows=1 width=36)
> ->  Seq Scan on tblsidedruglink  (cost=0.00..0.00 rows=1 width=28)
> 
> As for vacuum analyze - prior to running into these problems, I deleted
> all data from the database (using delete from ) and then ran
> vacuumdb -a, after which I loaded the data into the tables using 'copy
> ... from' - there have been no updates to the database since then -
> merely selects.




Re: [SQL] Query never returns ...

2001-02-08 Thread Stephan Szabo


What does explain show for the query and have you run
vacuum analyze recently on the tables?

On Thu, 8 Feb 2001, Brice Ruth wrote:

> The following query:
> 
> SELECT 
>   tblSIDEDrugLink.DrugID, 
>   tblSIDEDrugLink.MedCondID, 
>   tblMedCond.PatientName AS MedCondPatientName, 
>   tblMedCond.ProfessionalName AS MedCondProfessionalName, 
>   tblSIDEDrugLink.Frequency, 
>   tblSIDEDrugLink.SeverityLevel 
> FROM 
>   tblSIDEDrugLink, 
>   tblMedCond 
> WHERE 
>   (tblSIDEDrugLink.DrugID IN ('DG-18698')) AND 
>   (tblSIDEDrugLink.MedCondID = tblMedCond.MedCondID) 
> ORDER BY 
>   tblSIDEDrugLink.DrugID, 
>   tblSIDEDrugLink.Frequency, 
>   tblSIDEDrugLink.SeverityLevel, 
>   tblSIDEDrugLink.MedCondID;
> 
> seems to not be liked by PostgreSQL.  Table 'tblSIDEDrugLink' has the
> following structure:
> 
> CREATE TABLE TBLSIDEDRUGLINK
> (
> DRUGID  VARCHAR(10) NOT NULL,
> MEDCONDID   VARCHAR(10) NOT NULL,
> FREQUENCY   INT2,
> SEVERITYLEVEL   INT2,
> CONSTRAINT PK_TBLSIDEDRUGLINK PRIMARY KEY (DRUGID, MEDCONDID)
> );
> 
> with the following index:
> CREATE INDEX IX_TBLSIDEDRUGLINK1 ON TBLSIDEDRUGLINK (MEDCONDID);
> 
> This table has 153,288 rows.
> 
> Table 'tblMedCond' has the following structure:
> 
> CREATE TABLE TBLMEDCOND
> (
> MEDCONDID   VARCHAR(10) NOT NULL,
> PROFESSIONALNAMEVARCHAR(58),
> PATIENTNAME VARCHAR(58),
> CONSTRAINT PK_TBLMEDCOND PRIMARY KEY (MEDCONDID)
> );
> 
> This table has 1,730 rows.
> 
> The query above is made by a third-party API that I don't have the
> source for, so I can't modify the query in the API, though the
> third-party has been quite willing to help out - they may even ship me a
> 'special' version of the API if there's something in this query that
> PostgreSQL for some reason doesn't implement efficiently enough.
> 
> If it would help anyone to see the query plan or such - I can modify the
> logs to show that, just let me know.
> 
> Btw - I've let this query run for a while & I haven't seen it complete
> ... s ... I don't know if it would ever complete or not.
> 
> Any help at all is as always, appreciated.
> 
> Sincerest regards,
> -- 
> Brice Ruth
> WebProjkt, Inc.
> VP, Director of Internet Technology
> http://www.webprojkt.com/
> 





Re: [SQL] parse error in create index

2001-02-08 Thread Stephan Szabo


You can use two quote characters to get a single quote inside the
quoted string, so: ''month''
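
Applied to the function from the message below, that would be something
like (add WITH (iscachable) if you intend to use it in a functional
index):

CREATE FUNCTION geb_monat (date) RETURNS integer AS
  'SELECT date_part(''month'', $1)::integer;'
  LANGUAGE 'sql';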

On Thu, 8 Feb 2001, Hubert Palme wrote:

> Stephan Szabo wrote:
> > 
> > Functional indexes cannot currently take constant values to the function,
> > so it's complaining about the constant 'month'.  The current workaround is
> > probably to create a function that does the date_part('month', ) for
> > you and then use that function in the index creation.
> 
> Hmm... Perhaps, it's better I post to the novice group, because I'm new
> to SQL. 
> 
> Anyway -- That's my trial:
> 
> adressen=> CREATE FUNCTION geb_monat (date) RETURNS integer AS
> adressen-> 'SELECT date_part('month', $1)::integer;'
> adressen-> LANGUAGE 'sql';
> ERROR:  parser: parse error at or near "month"
> 
> The point are the nested strings, I guess. How can I render a "'" in an
> SQL string?
> 
> Thanks for your help!




Re: [SQL] String Concatnation

2001-02-09 Thread Stephan Szabo


I was able to do a function that took two arguments and did
that under 7.1beta3 with no trouble.
What message are you getting?
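
For reference, a sketch along the lines of what worked here (the function
name is made up); note that inside a plpgsql function body, which is
itself a quoted string, the '|' literal has to be written with doubled
quotes:

CREATE FUNCTION pipe_concat(text, text) RETURNS text AS '
DECLARE
  res text;
BEGIN
  res := $1 || ''|'' || $2;
  RETURN res;
END;
' LANGUAGE 'plpgsql';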

On Sat, 10 Feb 2001, Najm Hashmi wrote:

> Hi,
>  How can I concatnate   two varialbles, seperated by a |,  that are type text
> together?
> v, v1 text;
> some work
> then
> res:= v ||''|''|| v1;
> this syntex does not work in plpgsql??
> Any ideas how to do it ???
> Thanks.
> Najm
> 




Re: [SQL] Huh? Mysterious Function Error

2001-02-20 Thread Stephan Szabo


Does the function have any SELECTs that aren't
SELECT INTOs?
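
If so, that's the likely cause: a bare SELECT inside plpgsql produces
exactly that error.  A sketch of the fix (the variable, table and column
names here are made up):

-- instead of a bare statement like:
--   SELECT 1 FROM candidates WHERE candidate_id = cand_id;
-- select into a declared variable:
SELECT 1 INTO v_dummy FROM candidates WHERE candidate_id = cand_id;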

On Tue, 20 Feb 2001, Josh Berkus wrote:

> Tom, Stephan, Jan, Etc.
> 
>   Can you help me with this one?  I can't figure out what is meant by
> this error message:
> 
> I have a long, complicated PL/pgSQL function called
> fn_modify_candidates(INT4,INT4,INT4).
> 
> When I call:
> SELECT fn_modify_candidate(1004,5,278417036)
> 
> I get:
> ERROR:  unexpected SELECT query in exec_stmt_execsql()
> 
> Help?




[GENERAL] Re: [SQL] two tables - foreign keys referring to each other...

2001-02-20 Thread Stephan Szabo


You have to use ALTER TABLE to add the constraint to one of the tables.
Deferred refers to the checking of the constraint itself, not really
to the check to see if the table is there.
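
Concretely, using the tables from the message below: create institute_t
without the admin REFERENCES clause, create admin_t as written, and then
add the remaining foreign key with something roughly like:

ALTER TABLE institute_t
  ADD CONSTRAINT institute_admin_fk
  FOREIGN KEY (admin) REFERENCES admin_t (login)
  ON UPDATE CASCADE
  ON DELETE SET NULL
  DEFERRABLE INITIALLY DEFERRED;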

On Wed, 21 Feb 2001, Chris Czeyka wrote:

> Hey to all,
> 
> I got two tables, linked to each other. How can I tell the first CREATE TABLE
> (institute_t) to wait to check the foreign key for the second table??? just
> like "hold on a little bit... you'll receive your admin_t" :-) ? I thoght
> DEFERRABLE, DEFERRED and transaction with BEGIN/COMMIT take care of this.
> 
> ..or generally: how do you create two crosslinked foreign keyed tables?
> 
> hopefully an easy problem for the real professionals!
> 
> 
> -> here we go
> BEGIN; -- begin table transaction -- Only Postgresql
> CREATE TABLE institute_t (
> nameVARCHAR(48) PRIMARY KEY,
> street  VARCHAR(48) NOT NULL,
> zip VARCHAR(16),
> townVARCHAR(32) NOT NULL,   
> country CHAR(2) NOT NULL, /* country codes ISO-3166*/
> phone   VARCHAR(32) NOT NULL,
> fax VARCHAR(32),
> admin   VARCHAR(16) REFERENCES admin_t
> ON UPDATE CASCADE
> ON DELETE SET NULL
> DEFERRABLE
> INITIALLY DEFERRED
> );
> 
> CREATE TABLE admin_t (
> login   VARCHAR(16) PRIMARY KEY,
> passwordVARCHAR(16) NOT NULL,
> email   VARCHAR(32) NOT NULL,
> real_name   VARCHAR(32) NOT NULL,
> street  VARCHAR(48) NOT NULL,
> zip VARCHAR(16),
> townVARCHAR(32) NOT NULL,   
> country CHAR(2) NOT NULL, /* country codes -- refer to
> ISO-3166*/
> phone   VARCHAR(32) NOT NULL,
> fax VARCHAR(32),
> access  INTEGER NOT NULL,
> institute   VARCHAR(48) REFERENCES institute_t
> ON UPDATE CASCADE
> ON DELETE SET NULL
> DEFERRABLE
> INITIALLY DEFERRED
> );
> COMMIT;
> 
> 
> of course I get the ERROR, that admin_t doesn't exist. So? help the stupid!
> pls!
> 
> best greets,
> Chris
> 




Re: [SQL] Strange parse error??

2001-02-22 Thread Stephan Szabo

On Thu, 22 Feb 2001, [ISO-8859-1] Bjørn T Johansen wrote:

> I am trying to do a simple update (or at least I thought it was
> simple), but I just keep getting a parse error, saying:
> 
> Error executing query
> 
> Update "Config" Set "Wave" = 'F:\wav\BTJ.wav',"Answer" = 20,
> "Recordwav" ='F:\wav\',"CalledID" = '12345678' where "Recno" = 1
> 
> PostgreSQL error message:
> ERROR: parser: parse error at or near "12345678"

I believe it's because postgres treats \ as an escape character, so in
'F:\wav\' the closing \' was probably being treated as an escaped quote
inside the string.  You'll probably want to double the \ characters.
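
That is, roughly:

Update "Config" Set "Wave" = 'F:\\wav\\BTJ.wav', "Answer" = 20,
  "Recordwav" = 'F:\\wav\\', "CalledID" = '12345678' where "Recno" = 1;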





Re: [SQL] How can i escape a '+' or a '+' in a regexp ?

2001-02-23 Thread Stephan Szabo


I believe you'll need two \ characters to escape the + or *.
titulo ~ '\\+'

On Fri, 23 Feb 2001, Gabriel Fernandez wrote:

> Hi fellows,
> 
> I'm trying to the following query:
> 
> select * from areas where titulo ~ '+'  or titulo ~ '*'
> 
> and the answer is:
> 
> ERROR:  regcomp failed with error repetition-operator operand invalid
> 
> I have tried to escape the '+' and the '*'  with a backslash, as
> follows:
> 
> select * from areas where titulo ~ '\+'  or titulo ~ '\*'
>  but the answer is the same.
> 
> If I use the LIKE operator, then I have the problem with '%' and '_'
> :-)
> 
> As long as the values in the field can contain either '+' or '*' or '%'
> or '_'  I need to escape these characters. How can i do it ?




Re: [SQL] Controlling Reuslts with Limit

2001-02-23 Thread Stephan Szabo


It returns the first five rows it finds.  Running the same
query over again if there are no updates is safe, but if the
table is updated there is the possibility it would find a different
five rows.  If the query would do a seq scan and you updated
a row, the rows would be in a different order in the heap file and so
you'd get a different ordering of rows...

On Sat, 24 Feb 2001, Najm Hashmi wrote:

> Hi,
>  I was reading through Bruce's on line . I found follwing bit unclear...
> 
> "Notice that each query uses ORDER BY . Although this clause is not required,
> LIMIT without ORDER BY returns random rows from the query, which would be
> useless. "
> 
> When I run a query several time  I get the same results as given
> flipr=# select song_id from songs  limit 5;
>  song_id
> -
>  945
>  946
>  947
>  948
>  949
> (5 rows)
> ...





Re: [SQL] conversion

2001-02-26 Thread Stephan Szabo


It looks like you have some rows where pyear does not convert cleanly
into a number, such as '  ' (all spaces).  What do you want to happen
in those cases?
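
If simply skipping them is acceptable, a sketch (the pattern assumes the
good values are plain digits):

INSERT INTO pledge_classes (semester, year)
  SELECT pseason, to_number(pyear, '9999')
    FROM temp
   WHERE pyear ~ '^[0-9]+$';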

On Sun, 25 Feb 2001, Ken Kline wrote:

> follow up
> actually the destination column is defined
> as a numeric(4)
> 
> the following are the statements again with there error messages:
> 
> SELECT pseason, to_number(pyear,'') from temp;
> ERROR: Bad numeric input format  '  '
> 
>  SELECT pyear::int from temp;
> ERROR: Cannot cast type 'varchar' to 'int4'
> 
> 
> 
> 
> Ken Kline wrote:
> 
> > Hello,
> > another brain twister, at least for me...
> > i have a table of varchar and one of the values I want
> > to insert into another table, one of the columns is
> > defined as INTEGER in destination table, column...
> > and none of these statements seem to work
> >
> >  INSERT INTO pledge_classes (semester, year)
> >  SELECT pseason, to_number('pyear','') from temp;
> >
> >  INSERT INTO pledge_classes (semester, year)
> >  SELECT pseason, pyear::integer from temp;
> >
> >  INSERT INTO pledge_classes (semester, year)
> >  SELECT pseason, pyear::numeric(4) from temp;
> 




Re: [SQL] DLookup('field', 'table', ['condition'])

2001-02-26 Thread Stephan Szabo

On Tue, 27 Feb 2001, Herbert Ambos wrote:

> I'm trying to create Domain Aggregate function that mimic Access' Dlookup
> function, but without any luck, after digging (i think) all the docs
> don't have the solution yet.
> 
> Syntax:
> 
>   DLookup ('field', 'table|view', ['condition'])
> 
>   where:
>   field --   column name or calculation
> table|view --
>   condition (optional) -- SQL WHERE condition without the
>   WHERE keyword
>   
>   If the query returns multiple rows then it will only get the
>   topmost column.
>   If the query retuns 0 rows then NULL would be returned
> 
> 
> An example argument to the function would be
> 
>   DLookup ('id', 'student', 'name=\'Bill Gates\'') --> '2001-432'

Is this supposed to give effectively the same result as the subquery
(select id from student where name='Bill Gates' limit 1)?  I don't think
LIMIT is supported in a subquery in 7.0, but it will be in 7.1, along
with EXECUTE for plpgsql, which should let you build a query out of the
parts you give it.  You might be able to do this in 7.0 using pltcl, but
I don't know Tcl so I can't help there.
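
Under 7.1, a rough sketch of the DLookup described above (assuming
plpgsql's EXECUTE / FOR ... IN EXECUTE are available in your version;
handling of the optional condition is left out to keep it short):

CREATE FUNCTION dlookup(text, text, text) RETURNS text AS '
DECLARE
  rec record;
  result text;
BEGIN
  FOR rec IN EXECUTE ''SELECT '' || $1 || '' AS val FROM '' || $2
                     || '' WHERE '' || $3 || '' LIMIT 1'' LOOP
    result := rec.val;
  END LOOP;
  RETURN result;  -- NULL if no row matched
END;
' LANGUAGE 'plpgsql';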




Re: [SQL] Weird NOT IN effect with NULL values

2001-03-01 Thread Stephan Szabo


On Thu, 1 Mar 2001, Frank Joerdens wrote:

> When doing a subselect with NOT IN, as in
> 
> SELECT name
> FROM customer
> WHERE customer_id NOT IN (
> SELECT customer_id
> FROM salesorder
> );
> 
> (from Bruce Momjian's book)
> 
> I get no rows if the result column returned by the subselect
> contains NULL values. It works as expected if I remove the NULL values
> from the result set. Is this behaviour correct and if so, why?
> 
> I am using 7.1 beta 4.

I believe it may actually be correct.  If my reading of the spec is
right (which it possibly is not), customer_id NOT IN (subselect) is
effectively NOT ( customer_id = ANY (subselect) ), and then:

Using the rules for ANY:
If customer_id = value is true for at least one subselect row, IN
 returns true, so NOT IN returns false.
If customer_id = value is false for every subselect row, IN returns
 false, so NOT IN returns true.
Otherwise IN and NOT IN both return unknown.

Since customer_id = NULL is unknown, you're getting at least one unknown
in the ANY expression, so NOT IN doesn't return true; it returns
unknown, which is not sufficient for the WHERE clause to return the row.
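
The usual workaround is to keep the NULLs out of the subselect, e.g.:

SELECT name
FROM customer
WHERE customer_id NOT IN (
    SELECT customer_id
    FROM salesorder
    WHERE customer_id IS NOT NULL
);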




Re: [SQL] Help creating rules/triggers/functions

2001-03-02 Thread Stephan Szabo


If you're only doing a simple check for references, why not use
foreign keys?

In general, however, you probably want plpgsql to define the trigger.
Trigger functions don't take parameters in the normal sense: the
function should be created taking no arguments and returning opaque,
and any parameters you add in CREATE TRIGGER are passed in via TG_ARGV
(I believe).  You might want to look at the user and programmer guides
for more information on trigger functions.
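
A minimal plpgsql sketch of the check described below (table and column
names taken from the example; untested, and it assumes the plpgsql
language is installed):

CREATE FUNCTION a_del() RETURNS opaque AS '
DECLARE
  ref int4;
BEGIN
  SELECT id INTO ref FROM b WHERE a_id = OLD.id;
  IF ref IS NOT NULL THEN
    RAISE EXCEPTION ''not allowed!'';
  END IF;
  RETURN OLD;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER a_del_trig BEFORE DELETE ON a
  FOR EACH ROW EXECUTE PROCEDURE a_del();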

On Tue, 27 Feb 2001, Blaise Carrupt wrote:

> Hi all !
> 
> I use PostgreSQL 7.0.2 on a HP-UX system.
> 
> I would like to create a simple function and a simple trigger (or rule) that 
> deny a delete from a table if the row is referenced in another table.
> 
> I though it should look like this (from my Ingres experience... :) :
> 
> create function A_del(int4 i_id)
> BEGIN
>SELECT id
>  FROM b
>  where a_id = :i_id;
>  
>if rowcount > 0 then
>   RAISE EXCEPTION "not allowed !"
>end if;
> END
> 
> 
> create trigger before delete from A for each row execute procedure A_del(old.id)
> 
> 
> But it seems to be much more complicated with Postgres (create a C function 
> using CurrentTriggerData,...). May I have missed something or is it really much 
> more complicated ?






Re: [SQL] random

2001-03-05 Thread Stephan Szabo

On Mon, 5 Mar 2001, Bruce Momjian wrote:

> > Jelle Ouwerkerk <[EMAIL PROTECTED]> writes:
> > > Also, is there a way to randomize the order of a result set?
> > 
> > There's always
> > SELECT * FROM foo ORDER BY random();
> > 
> 
> How does that work?
> 
>   test=> select random();
> random   
>   ---
>0.896045367650709
>   (1 row)
> 
> However:
>   
>   test=> select * from pg_class order by random();
> 
> does return some output.  Is it random, and if so, how?

As a guess: if random() is not marked as cachable, it would be called
once for each output row (after any WHERE clauses are applied), so each
row gets a different random number, which is then used for the sorting.





Re: [SQL] cannot get CREATE TABLE AS to work

2001-03-09 Thread Stephan Szabo

On Fri, 9 Mar 2001, Josh Berkus wrote:

> Robert,
> 
> > I suspect that the INSERT INTO SELECT in this case will take longer than a
> > CREATE TABLE AS because of the referential integrity check needed on every
> > INSERT (per Tom Lane).
> 
> In that case, what about:
> 
> a) dropping the referential integrity check;

Unfortunately, if he adds it back in with ALTER TABLE, that's going to
be slow as well.  I implemented it in a fashion I felt was cleaner code,
but in practice the performance is poor enough that it might be worth
doing it the less clean way: running a single SELECT that looks for
failing rows on ALTER TABLE, when possible, rather than checking each
row.  (Less clean because it means keeping knowledge of what the fk
check is in multiple places. :( )

> 2) making the referential integrity check deferrable (there's a way to
> do this, it was discussed a couple weeks ago - ask Tom).

Well, you can always add DEFERRABLE INITIALLY IMMEDIATE to the constraint
and run SET CONSTRAINTS ALL DEFERRED, but I'm not sure that will actually
be much faster; it still does a check per row, I believe.

It's hacky, but I'd say, if you don't have other triggers you care about, 
twiddle pg_class.reltriggers for the class to 0, do the insert, set it
back to what it was before and then run selects to make sure the data is
valid (ie, would the constraint have failed).

[
assuming one column, something like:

select * from fktable where not exists
 (select * from pktable where pktable.pkcol=fktable.fkcol);
]
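
And a sketch of the reltriggers twiddle itself (untested; the table name
is a placeholder, and it assumes the triggers you disabled are the only
ones on the table):

UPDATE pg_class SET reltriggers = 0 WHERE relname = 'fktable';
-- ... do the bulk insert ...
UPDATE pg_class
   SET reltriggers = (SELECT count(*) FROM pg_trigger
                       WHERE pg_trigger.tgrelid = pg_class.oid)
 WHERE relname = 'fktable';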





RE: [SQL] cannot get CREATE TABLE AS to work

2001-03-09 Thread Stephan Szabo

On Fri, 9 Mar 2001, Creager, Robert S wrote:

> 
> Well, that explains why I wasn't seeing any appreciable speed increase with
> the INITIALLY DEFERRED.  I tried mucking in pg_class, and saw a 3 fold
> increase in insert speed on inserts into my table with 2 relational
> triggers.  SET CONSTRAINTS ALL DEFERRED does nothing to very little to
> increase the insertion speed.  15min 'INITIALLY DEFERRED' vs 13min 'ALL
> DEFERRED' vs 5min 'pg_class update'.  And that 15 vs 13 could be machine
> activity.

Yeah; theoretically, if it knew there were a lot of them and fell back
to doing a single big check rather than lots of little ones, we'd get a
performance increase, but I can't really think of a good way to do that
with what we have right now...





Re: [SQL] Optimization via explicit JOINs

2001-03-09 Thread Stephan Szabo


On Fri, 9 Mar 2001, David Olbersen wrote:

> Greetings,
>   I've been toying aroudn with postgres 7.1beta5's ability to control the
>   planner via explicitely JOINing tables. I then (just for giggles) compare the
>   difference in the EXPLAIN results.
> 
>   I'm no super-mondo-DBA or anything, but in my two attempts so far, the numbers
>   I get out of EXPLAIN have been about 1/2 as small.
> 
>   Below are two EXPLAIN results, am I correct in reading that one is indeed
>   "twice as fast" as the other? I say twice as fast because the top-most cost in
>   the first query is 58.62, but in the second one it's only 32.09. Am I reading
>   this correctly?

Not entirely.  Those are only estimates, so they don't always line up
with reality.  Also, I notice the first plan estimates 14 rows and the
second 1, which is probably why its cost estimate is higher.  In
practice the two probably won't be significantly different.





Re: [SQL] Optimization via explicit JOINs

2001-03-09 Thread Stephan Szabo


> On Fri, 9 Mar 2001, Stephan Szabo wrote:
> 
> ->Not entirely.  Those are only estimates, so they don't entirely line up
> ->with reality.  Also, I notice the first estimates 14 rows and the second
> ->1,  which is probably why the estimate is higher.  In practice it probably
> ->won't be significantly different.
> 
> So really I'm just getting back estimations of cost and rows returned?
> Incidentally, both queries returned the same data set, that's a Good Thing (tm).

Yeah, explain mostly shows what it's going to do and a little bit
of why it thinks that's a good idea.  Hmm, what were the two queries anyway?






Re: [SQL] Optimization via explicit JOINs

2001-03-09 Thread Stephan Szabo


On Fri, 9 Mar 2001, David Olbersen wrote:

> On Fri, 9 Mar 2001, Stephan Szabo wrote:
> 
> -> Hmm, what were the two queries anyway?
> 
> The "slower" query
> 
> SELECT
>   to_char( p.insertion_time, 'HH:MI AM MM/DD' ) as time_in,
>   s.nameas title,
>   a.nameas artist,
>   s.length  as length
> FROM
>   playlist p,
>   songss,
>   artists  a
> WHERE
>   p.waiting   = TRUE  AND
>   p.song_id   = s.song_id AND
>   s.artist_id = a.artist_id
> ORDER BY p.item_id
> 
> The "faster" query
> 
> SELECT
>   to_char( p.insertion_time, 'HH:MI AM MM/DD' ) as time_in,
>   s.nameas title,
>   s.length  as length,
>   a.nameas artist
> FROM
>   playlist p JOIN songs s USING (song_id),
>   artists  a
> WHERE
>   p.waiting   = TRUE  AND
>   p.song_id   = s.song_id AND
>   s.artist_id = a.artist_id
> ORDER BY p.item_id;
> 
> Notice how the only difference is in the FROM clause?

Yeah.  It's getting the same plan, just a slightly different number
of estimated rows (14 and 1) from the join of p to s.
As a question, how many rows does
select * from playlist p join songs s using (song_id) where
p.waiting=TRUE;
actually result in?





Re: [SQL] Optimization via explicit JOINs

2001-03-09 Thread Stephan Szabo


Darn.  Well, one of the queries estimated that 1 row would survive
the nested loop step and the other said 14; I was wondering which one
was closer to correct at the time.

On Fri, 9 Mar 2001, David Olbersen wrote:

> On Fri, 9 Mar 2001, Stephan Szabo wrote:
> 
> ->As a question, how many rows does
> ->select * from playlist p join songs s using (song_id) where
> ->p.waiting=TRUE;
> ->actually result in?
> 
> Well it depends. Most of the time that playlist table is "empty" (no rows where
> waiting = TRUE), however users can (in a round about way) insert into that
> table, so that there could be anywhere from 10, to 2,342, to more.
> 
> Why do you ask?
> 
> (The reason those plans chose 14 was because, at the time, there were 14 rows in
> playlist)





Re: [SQL] Optimization via explicit JOINs

2001-03-09 Thread Stephan Szabo


Actually, just thought of something else.  If you remove
the probably redundant p.song_id=s.song_id from the second 
query (since the join ... using should do that) does it
change the explain output?

On Fri, 9 Mar 2001, David Olbersen wrote:

> On Fri, 9 Mar 2001, Stephan Szabo wrote:
> 
> ->As a question, how many rows does
> ->select * from playlist p join songs s using (song_id) where
> ->p.waiting=TRUE;
> ->actually result in?
> 
> Well it depends. Most of the time that playlist table is "empty" (no rows where
> waiting = TRUE), however users can (in a round about way) insert into that
> table, so that there could be anywhere from 10, to 2,342, to more.
> 
> Why do you ask?
> 
> (The reason those plans chose 14 was because, at the time, there were 14 rows in
> playlist)





Re: [SQL] help

2001-03-12 Thread Stephan Szabo


What is the schema of the table in question, does it have any references
to other tables and what is an example insert statement?

On Wed, 7 Mar 2001, chard wrote:

> 
> help me pls.
> i got an error like this when i do an insert to a table, my table dont
> have bpchar type of field.
> 
> ERROR:  Unable to identify an operator '=' for types 'bpchar' and 'varchar'
> You will have to retype this query using an explicit cast
> 
> 
> 
> 





Re: [SQL] How does this query work.....?

2001-03-12 Thread Stephan Szabo


pg_class holds the relation information (tables, etc.).
pg_attribute holds attribute information (attname); it keeps the oid of
  the relation it belongs to in attrelid and the oid of its type in
  atttypid.
pg_type holds type information (typname).

The attnum > 0 condition limits the check to user attributes.  There
are additional system attributes (oid, xmin, etc.) defined on the
tables that have attnum < 0, and you usually don't care about those.

On Fri, 9 Mar 2001 [EMAIL PROTECTED] wrote:

> Hi,
> 
> I am using the following query to find the attributes of a given table, and 
> their datatypes:
> 
> select typname,attname   
> from pg_class c, pg_attribute a,pg_type t
> where relname = 'table_name' and
> attrelid = c.oid and
> atttypid = t.oid and
> attnum > 0;
> 
> Can anybody explain how this query actually works - I cannot figure it. 
> Thanks in advance.
> 
> Rob Burne.
> 





Re: [SQL] VACUUM kills Index Scans ?!

2001-03-15 Thread Stephan Szabo


On Thu, 15 Mar 2001, Gerald Gutierrez wrote:

> 1) When I create a empty table, and then immediate create an index on a 
> column, I can get /index scans/ when searching on that column. But when I 
> then run VACUUM, the same search becomes a /sequential scan/. Shouldn't it 
> still be an index scan? What's going on here?

> -
> 
> 2) If I already have some data in a table and I create an index on a 
> column, why doesn't subsequent searches then change from sequential scans 
> to index scans?

With a small number of rows, a sequential scan requires fewer
reads/seeks from the filesystem.  It's not always correct for the
optimizer to use an index even when one exists.

If you put in lots of rows with distinct values and run VACUUM ANALYZE
(you want that rather than plain VACUUM) and compare again, it should
use the index; with only a few rows, the seq scan is probably better.





