[SQL] finding (and recycling) holes in sequences

2000-06-26 Thread Kyle Bateman



 
If one has a unique-id-generating sequence that is sometimes bound to have
holes in it (i.e., a nextval(seq) can happen without a corresponding INSERT
into the table), how could one efficiently scan for these holes to recycle
them in subsequent INSERTs? I'm just looking for a "standard" way of doing
this, if such a thing exists.
Louis-David Mitterrand - [EMAIL PROTECTED] - http://www.apartia.fr
 
If you want to create a sequence of numbers with no holes, you might consider
NOT using sequences at all (they are really meant for something different).
Rather, each time you wish to do an insert, query the existing data to see
what the maximum value existing in the database is.  Add 1 to that and use
this as your new value.  This is guaranteed not to have any holes in the
sequence.  To make inserts easier, you can define an SQL function that
returns the max value.  Here's an example:
create function maxid () returns int4 as '
    declare
        mv int4;
    begin
        select max(id) into mv from mytab;
        if mv is null then
            return 0;
        end if;
        return mv;
    end;
' language 'plpgsql';
create table mytab (
    id int4 primary key default (maxid() + 1),
    data text
);
insert into mytab (data) values ('Hi there');
insert into mytab (data) values ('Howdy');
insert into mytab (data) values ('Guten Tag');
insert into mytab (data) values ('Terve');
select * from mytab;
 
 
BTW hackers, this is a common enough request that I wonder if there should
be a built-in feature to make it easier for people to put this into the
default value in the table definition, i.e.:
create table mytab (
    keycolumn int4 primary key default coalesce(max(keycolumn),0)+1,
    data text
);
Wouldn't this eliminate the need for rollback-able sequences?
 




[SQL] A subselect in an aggregate

2000-06-26 Thread Bryan White

This statement works:
select date, (select sum(qty * price) from orderdetail d where d.orderid =
orders.orderid) from orders

But when I try to do something like this:

select date, sum(select sum(qty * price) from orderdetail d where d.orderid
= orders.orderid) from orders group by date

I get ERROR: parser: parse error at or near "select"

Is there a way to apply an aggregate function to a subselect like this?





Re: [SQL] A subselect in an aggregate

2000-06-26 Thread Ed Loehr

Bryan White wrote:
> 
> This statement works:
> select date, (select sum(qty * price) from orderdetail d where d.orderid =
> orders.orderid) from orders
> 
> But when I try to do something like this:
> 
> select date, sum(select sum(qty * price) from orderdetail d where d.orderid
> = orders.orderid) from orders group by date
> 
> I get ERROR: parser: parse error at or near "select"
> 
> Is there a way to apply an aggregate function to a subselect like this?

Avoiding the question, I'm wondering if this simpler form wouldn't be
what you're after?

select o.date, sum(d.qty * d.price) 
from orderdetail d, orders o
where d.orderid = o.orderid
group by o.date

Regards,
Ed Loehr



[SQL] pg_dump problem

2000-06-26 Thread Graham Vickrage

I am trying to backup a production database running on v6.5 and restore it
on our test machine running v6.5.

The largest table has about 750,000 rows, the other 5 tables are very small
approx 100 rows.

When I try to restore the database using  "psql -e database < db.out" I get
the error message
"query buffer max length of 16384 exceeded" after each row.

Would somebody please tell me how to increase this buffer (assuming this is
what's required to solve the problem)? I have looked through the
documentation and I am still struggling :-(

Cheers

Graham





Re: [SQL] pg_dump problem

2000-06-26 Thread Ed Loehr

Graham Vickrage wrote:
> 
> I am trying to backup a production database running on v6.5 and restore it
> on our test machine running v6.5.
> 
> The largest table has about 750,000 rows, the other 5 tables are very small
> approx 100 rows.
> 
> When I try to restore the database using  "psql -e database < db.out" I get
> the error message
> "query buffer max length of 16384 exceeded" after each row.
> 
> Would somebody please tell me how to increase this buffer (assuming this is
> what's required to solve the problem)? I have looked through the
> documentation and I am still struggling :-(

You probably don't need to increase the buffer.  More likely, there is an
unterminated string constant somewhere in db.out.  Yes, that would
probably be a bug in pg_dump, but I do vaguely recall problems in 6.5*
with pg_dump.  You might search deja for your error message.

Regards,
Ed Loehr



Re: [SQL] A subselect in an aggregate

2000-06-26 Thread Bryan White



> Bryan White wrote:
> >
> > This statement works:
> > select date, (select sum(qty * price) from orderdetail d where
> > d.orderid = orders.orderid) from orders
> >
> > But when I try to do something like this:
> >
> > select date, sum(select sum(qty * price) from orderdetail d where
> > d.orderid = orders.orderid) from orders group by date
> >
> > I get ERROR: parser: parse error at or near "select"
> >
> > Is there a way to apply an aggregate function to a subselect like this?
>
> Avoiding the question, I'm wondering if this simpler form wouldn't be
> what you're after?
>
> select o.date, sum(d.qty * d.price)
> from orderdetail d, orders o
> where d.orderid = o.orderid
> group by o.date

This is very slow (actually I killed it after about 5 minutes):
select o.date,sum(d.qty * d.price) from orderdetail d,orders o where o.date
= '6/1/2000' group by o.date;
Explained:
Aggregate  (cost=0.00..41074641.24 rows=120300337 width=20)
  ->  Group  (cost=0.00..38067132.80 rows=1203003374 width=20)
        ->  Nested Loop  (cost=0.00..35059624.37 rows=1203003374 width=20)
              ->  Index Scan using iorddate3 on orders o
                    (cost=0.00..6917.34 rows=1808 width=4)
              ->  Seq Scan on orderdetail d
                    (cost=0.00..12733.78 rows=665378 width=16)

Doing a sequential scan of orderdetail inside the loop seems to be what
kills it.

This is quick (it takes a couple of seconds):
select o.date,(select sum(od.qty * od.price) from orderdetail od where
od.orderid = o.orderid) from orders o where o.date = '6/1/2000';
Explained:
Index Scan using iorddate3 on orders o  (cost=0.00..6917.34 rows=1808 width=8)
  SubPlan
    ->  Aggregate  (cost=45.24..45.24 rows=1 width=16)
          ->  Index Scan using iodid on orderdetail od
                (cost=0.00..45.21 rows=11 width=16)

I suppose I can do my own grouping on the result data but I would like to
avoid that if I could.  Which brings me back to the original question:  Is
there a way to apply an aggregate function to a subselect?






Re: [SQL] A subselect in an aggregate

2000-06-26 Thread Tom Lane

"Bryan White" <[EMAIL PROTECTED]> writes:
> This is very slow (actually I killed it after about 5 minutes):
> select o.date,sum(d.qty * d.price) from orderdetail d,orders o where o.date
> = '6/1/2000' group by o.date;

> This is quick (it takes a couple of seconds):
> select o.date,(select sum(od.qty * od.price) from orderdetail od where
> od.orderid = o.orderid) from orders o where o.date = '6/1/2000';

Well, they're not computing the same thing, are they?  Since there's no
constraint on d.orderid in the first example, you're asking for the sum
over ALL orderdetail records ... repeated over again for each order
record.  For equivalent constraints, I'd expect the first form to be
at least as fast as the second, probably faster.
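
For the record, the join form with the missing constraint added would look
something like this (a sketch against the schema as quoted above, untested):

```sql
select o.date, sum(d.qty * d.price)
from orderdetail d, orders o
where d.orderid = o.orderid        -- the constraint missing from the slow query
  and o.date = '6/1/2000'
group by o.date;
```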

> Which brings me back to the original question:  Is
> there a way to apply an aggregate function to a subselect?

Sure --- you just forgot that a sub-select expression requires its
very own parentheses, so you need two sets:
sum((select ))
But the other way is probably better...
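
Spelled out against the same query, that would read something like this
(a sketch, untested):

```sql
select o.date,
       sum((select sum(od.qty * od.price)     -- note the doubled parentheses
            from orderdetail od
            where od.orderid = o.orderid))
from orders o
where o.date = '6/1/2000'
group by o.date;
```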

regards, tom lane



Re: [SQL] Merging two columns into one

2000-06-26 Thread Gary MacMinn

All,

Many thanks for your thoughts on merging the columns. The tr method would
have killed all the delimiters, as a few people noted. The || method in SQL
was the winner, although running it against the final table (4 million
records) should keep the system busy for a while! I must also make sure I've
got enough disk space to do it all. Maybe doing it a section at a time would
be in order.
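
For anyone searching the archives later, the || approach boils down to
something like this (a sketch; the table and column names here are made up):

```sql
-- merge two text columns into one, keeping a delimiter between them
update mytab set merged = col_a || ',' || col_b;
```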

Thanks to all,
Gary MacMinn




[SQL] Sub_select problem

2000-06-26 Thread Bernie Huang

Good day, everyone,

I have 2 tables and 1 datafile as following:

vehicle_tb
---
v_id          int4 primary key
display_id    int4
year          text
make          text
model         text
...


vehicle_borrow_log
----
employee_id   int4 (foreign key to emp_tb)
v_id          int4 (foreign key to vehicle_tb)
borrow_time   timestamp
return_time   timestamp
...


datafile.gnu (a text file)
---
# Date        Display_id
2000-XX-XX    1
2000-xx-xx    1

2000-XX-XX    2
2000-xx-xx    2

2000-XX-XX    1
2000-xx-xx    1

...


What I want to do is to fetch the display_id in datafile using PHP (by
the way, this part is done).  And then use that display_id to get the
borrow_time and return time in vehicle_borrow_log.  So, I used this
sub-select SQL, but it doesn't seem to be working.

SELECT v_id, borrow_time, return_time
FROM vehicle_borrow_log vbl
WHERE (SELECT equip_id
   FROM vehicle_tb vtb
   WHERE vtb.v_id=vbl.v_id
   AND display_id=$id_from_datafile_let_say_it_is_2);

Could anyone please help?  Thanks!


- Bernie





[SQL] Re: unique sequences

2000-06-26 Thread K Parker

> Rather, each time you wish to do an insert,
> query the existing data to see what the
> maximum value is existing in the database.
> Add 1 to that and use this as your new value.
> This is guaranteed to not have any holes in
> the sequence.

True, but alas it _doesn't_ guarantee uniqueness, since there's nothing to
prohibit 2 or more backends getting the same max() at the same time...

Thinking off the top of my head, I'm not sure you can apply FOR UPDATE to an
aggregate function, so there's probably no solution there, either.
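
One workaround, sketched here on the assumption that fully serializing
inserts on the table is acceptable, is to take an explicit table lock before
computing max(), so no two backends can read the same maximum:

```sql
begin;
lock table mytab;  -- serialize concurrent inserters for the whole transaction
insert into mytab (data) values ('safe now');
commit;
```

with the maxid() default from earlier in the thread still doing the max()+1
part. The lock is released at commit, so the cost is contention, not
correctness.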



Join 18 million Eudora users by signing up for a free Eudora Web-Mail account at 
http://www.eudoramail.com