Re: [SQL] generic return for functions

2003-06-02 Thread Avi Schwartz
Exactly.  It is used with named parameters when the parameter is not 
provided.

Avi

On Sunday, Jun 1, 2003, at 07:21 America/Chicago, Rod Taylor wrote:

> > thing that causes me some minor grief is the fact that currently you
> > cannot have default values to function parameters, a feature we use a
> > lot.
>
> The default value is used when the parameter is NULL or unprovided?
>
> fn(integer, integer, integer default 32)
>
> select fn(integer, integer); <- Third argument would be '32'?
>
> When PostgreSQL gets named parameters the above probably makes sense to
> add.
>
> A TODO item?
--
Avi Schwartz
[EMAIL PROTECTED]
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [SQL] generic return for functions

2003-06-02 Thread Tom Lane
Rod Taylor <[EMAIL PROTECTED]> writes:
>> thing that causes me some minor grief is the fact that currently you
>> cannot have default values to function parameters, a feature we use a
>> lot.

> fn(integer, integer, integer default 32)
> select fn(integer, integer); <- Third argument would be '32'?
> When PostgreSQL gets named parameters the above probably makes sense to
> add.

> A TODO item?

That isn't ever going to happen.  We have enough difficulty resolving
overloaded functions as it is.  If we couldn't even be sure how many
arguments there were supposed to be, it'd become completely intractable.

You can however achieve similar effects at the user level by adding
auxiliary functions: declare fn(int,int) as a one-line SQL function
that calls fn($1, $2, 32).
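
Spelled out, that suggestion looks like this; the body of the
three-argument fn is a made-up placeholder, and only the wrapper pattern
is the point:

```sql
-- Three-argument function (hypothetical body, for illustration only)
CREATE FUNCTION fn(int, int, int) RETURNS int AS
    'SELECT $1 + $2 + $3;'
LANGUAGE sql;

-- One-line wrapper that supplies the "default" third argument
CREATE FUNCTION fn(int, int) RETURNS int AS
    'SELECT fn($1, $2, 32);'
LANGUAGE sql;

-- SELECT fn(1, 2) now behaves like SELECT fn(1, 2, 32)
```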

regards, tom lane



Re: [SQL] generic return for functions

2003-06-02 Thread Avi Schwartz
On Sunday, Jun 1, 2003, at 10:46 America/Chicago, Tom Lane wrote:

> Rod Taylor <[EMAIL PROTECTED]> writes:
> >> thing that causes me some minor grief is the fact that currently you
> >> cannot have default values to function parameters, a feature we use a
> >> lot.
>
> > fn(integer, integer, integer default 32)
> > select fn(integer, integer); <- Third argument would be '32'?
> > When PostgreSQL gets named parameters the above probably makes sense to
> > add.
>
> > A TODO item?
>
> That isn't ever going to happen.  We have enough difficulty resolving
> overloaded functions as it is.  If we couldn't even be sure how many
> arguments there were supposed to be, it'd become completely intractable.
>
> You can however achieve similar effects at the user level by adding
> auxiliary functions: declare fn(int,int) as a one-line SQL function
> that calls fn($1, $2, 32).
>
> 			regards, tom lane

I understand why it will not be implemented with overloaded functions.  
Is there a possibility to allow this only for functions that are not 
overloaded?  The SQL function solution is really not going to help in 
my case since the function builds a select statement dynamically based 
on which parameters have a non-null value.  The number of parameters is 
something like 12 or 13 and the control on which parameters are set is 
determined by a complex combination of program logic and user 
selections.  What I did to solve this problem was to force all 
variables to be initialized to null and then set the non-null ones 
before the call to the function.
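
The approach described above might look something like this in PL/pgSQL
(the table and column names are invented for the sketch; the real
function takes 12 or 13 parameters, but two are enough to show the
shape):

```sql
-- Hypothetical sketch: every parameter is always passed, NULL meaning
-- "not set", and the WHERE clause is assembled from the non-NULL ones.
CREATE FUNCTION find_rows(text, text) RETURNS SETOF record AS '
DECLARE
    p_name ALIAS FOR $1;
    p_city ALIAS FOR $2;
    q text := ''SELECT * FROM some_table WHERE true'';
    r record;
BEGIN
    IF p_name IS NOT NULL THEN
        q := q || '' AND name = '' || quote_literal(p_name);
    END IF;
    IF p_city IS NOT NULL THEN
        q := q || '' AND city = '' || quote_literal(p_city);
    END IF;
    FOR r IN EXECUTE q LOOP
        RETURN NEXT r;
    END LOOP;
    RETURN;
END;
' LANGUAGE plpgsql;
```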

On another note, somewhat related: when we started looking at a 
replacement for SQL Server 7, I looked at SAPDB, MySQL and now 
PostgreSQL.  MySQL lost immediately since the current version is 
missing a lot of functionality we were looking for.  I was not 
impressed by SAPDB's performance, their documentation is extremely hard 
to follow, and I found the database very hard to manage.  So far 
PostgreSQL seems to be the best choice for us.  I have yet to find a show 
stopper and the speed is fantastic.  I haven't done extensive comparisons 
yet and I don't have hard numbers, but from what I have seen so far, 
PostgreSQL 7.3.2 is at least as fast as SQL Server 7 in real-life 
situations (of course, count(*) is still much faster in SQL Server for 
very large tables; some of our tables are > 5M rows :-) ).  What makes 
it more impressive is the fact that SS runs on a 4 CPU machine with 2 
GB of memory, while PostgreSQL runs on a single CPU machine with 384M of 
memory running SuSE 8.2.  In the near future I will be moving the PostgreSQL 
database to a configuration similar to the SS machine.  It will be 
interesting to compare them then.

To PostgreSQL developers, thank you for a great product!

Avi
--
Avi Schwartz
[EMAIL PROTECTED]


Re: [SQL] PostgreSQL and industry

2003-06-02 Thread Josh Berkus
Walid,

> I am searching for big industrial project using PostgreSQL as a database
> system  manager. Can anyone give references or bibliograpghy other than
> the official postgres site?

advocacy.postgresql.org

We're working on a case study section, but it's not ready yet.  The biggest I 
can mention is, of course, that the .ORG web registry runs on PostgreSQL.

-- 
Josh Berkus
Aglio Database Solutions
San Francisco



Re: [SQL] generic return for functions

2003-06-02 Thread Josh Berkus
Avi,

> I understand why it will not be implemented with overloaded functions.
> Is there a possibility to allow this only for functions that are not
> overloaded?  

Unfortunately, no.  There's simply no way for the database to tell the 
difference between a function call relying on defaults, and one with the 
wrong parameters.  SQL Server's approach with defaults works *because* SQL 
Server does not support overloaded procedures.
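
To make the ambiguity concrete, imagine PostgreSQL did accept a DEFAULT
clause (hypothetical syntax, shown only for illustration):

```sql
-- Hypothetical: both declarations could legally coexist
CREATE FUNCTION f(int)                RETURNS int AS 'SELECT $1;' LANGUAGE sql;
CREATE FUNCTION f(int, int DEFAULT 0) RETURNS int AS 'SELECT $1 + $2;' LANGUAGE sql;

SELECT f(1);  -- ambiguous: f(int), or f(int, int) with the default filled in?
```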

> The SQL function solution is really not going to help in
> my case since the function builds a select statement dynamically based
> on which parameters have a non-null value.  The number of parameters is
> something like 12 or 13 and the control on which parameters are set is
> determined by a complex combination of program logic and user
> selections.  What I did to solve this problem was to force all
> variables to be initialized to null and then set the non-null ones
> before the call to the function.

This sounds like a good solution to me.

BTW, named parameters for PostgreSQL Functions are on the to-do list, but I 
don't think anyone is currently working on them.

> very large tables (some of our tables are > 5M rows) :-) .  What makes
> it more impressive is the fact that SS runs on a 4 CPU machine with 2
> GB of memory while PostgreSQL on a single CPU machine with 384M memory
> running SuSE 8.2.  In the near future I will be moving the PostgreSQL
> database to a similar configuration as SS.  It will be interested to
> compare them then.

That's a very nice testimonial!  Thanks.

BTW, you will probably wish to join the PGSQL-Performance mailing list to make 
sure that you can tune your PostgreSQL database properly.

-- 
Josh Berkus
Aglio Database Solutions
San Francisco



[SQL] SQL problem: bank account

2003-06-02 Thread Erik G. Burrows
It seems to me this is a simple problem, but the solution eludes me.

I have a table:

bank_account (
  transaction_id serial not null,
  customer_id int not null references customer(id),
  ts timestamp not null default now(),
  amount float not null,
  balance float not null,
  primary key(transaction_id)
)

I need to get the most recent transaction for each customer. I need only
the transaction ID, but the entire row would be best.



I have two solutions, both of which are too slow for use in my
interactive web-based interface:

Solution1: Outer left self join:
SELECT
  ba1.*
FROM
  bank_account ba1
  LEFT OUTER JOIN bank_account ba2
ON ba1.customer_id = ba2.customer_id AND ba1.ts < ba2.ts
WHERE
  ba2.ts IS NULL;

This query works great on tables of less than a few thousand rows. For
my 300k row table, it takes several hours.


Solution2: max-concat trick
SELECT
  split_part(max(
      extract(EPOCH from ts)::VARCHAR || ':' ||
      transaction_id::VARCHAR), ':', 2)::INT
FROM
  bank_account
GROUP BY
  customer_id

This is an ugly and obviously inefficient solution, but it does the job
in about 1/2 hour. Still too long though.



I've been working on this problem for days, and consulting friends. No
elegant, fast solution is presenting itself. As I said, I feel I'm not
seeing the obvious solution in front of my face.

In the mean-time I can use this query to do the job on a per-customer
basis:

select
  *
from
  bank_account
where
  customer_id = 
  and ts = (select max(ts) from bank_account ba2 where ba2.customer_id =
bank_account.customer_id);


However, doing this for all 40,000 customers is not workable as a manual
process. My last resort is to do it this way to pre-generate the report,
but I'd far rather do it real-time.
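
One more option, not raised in the thread so far: PostgreSQL's
non-standard DISTINCT ON clause is built for exactly this "latest row
per group" shape, and a two-column index lets it avoid the self-join
entirely (a sketch against the table above):

```sql
-- Index to support grouping by customer and ordering by timestamp
CREATE INDEX bank_account_cust_ts ON bank_account (customer_id, ts);

-- Keep the first row of each customer_id group, i.e. the one with the
-- largest ts once each group is sorted descending by ts
SELECT DISTINCT ON (customer_id) *
FROM bank_account
ORDER BY customer_id, ts DESC;
```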

Help! My brain hurts!

-- 
Erik G. Burrows - KG6HEA   www.erikburrows.com
PGP Key: http://www.erikburrows.com/[EMAIL PROTECTED]



Re: [SQL] SQL problem: bank account

2003-06-02 Thread Andrew J. Kopciuch

> I need to get the most recent transaction for each customer. I need only
> the transaction ID, but the entire row would be best.
>

Why don't you alter the customer table to hold the transaction ID of the most 
recent transaction?



Some questions though:

Do you vacuum the database regularly?
Do you have any indexes created?


Some things to think about.



Andy





Re: [SQL] SQL problem: bank account

2003-06-02 Thread Sean Chittenden
> It seems to me this is a simple problem, but the solution eludes me.
>
> I have a table:
> 
> bank_account (
>   transaction_id int not null serial,
>   customer_id int not null references customer(id),
>   ts timestamp not null default now(),
>   amount float not null,
>   balance float not null,
>   primary key(transaction_id)
> )
> 
> I need to get the most recent transaction for each customer. I need only
> the transaction ID, but the entire row would be best.

For the sake of being explicit, change your table definition (though
what you have above is a-okay and works):

CREATE SEQUENCE transaction_id_seq;
CREATE TABLE bank_account (
  transaction_id int not null DEFAULT NEXTVAL('transaction_id_seq'::TEXT),
  customer_id int not null references customer(id),
  ts timestamp not null default now(),
  amount float not null,
  balance float not null,
  primary key(transaction_id)
);

Once you insert a value into the bank_account table, SELECT
CURRVAL('transaction_id_seq') will be what you're looking for.  Read
up on CURRVAL() at:

http://www.postgresql.org/docs/view.php?version=7.3&idoc=1&file=functions-sequence.html
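
In use, the pattern looks like this (the values are illustrative):

```sql
INSERT INTO bank_account (customer_id, amount, balance)
VALUES (1, 100.0, 100.0);

-- CURRVAL() reports the value NEXTVAL() most recently returned in this
-- session, so it is safe even with concurrent inserts by other sessions
SELECT CURRVAL('transaction_id_seq');
```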

-sc

-- 
Sean Chittenden



Re: [SQL] CASE returning multiple values (was SQL Help)

2003-06-02 Thread Richard Huxton
On Friday 30 May 2003 7:01 pm, C F wrote:
> I was afraid someone was going to ask that :)
> Okay, I'll do my best at explaining where I'm coming from
[snip long but detailed description I asked for -thanks]

Right - I've done something similar to this before, and I ended up building my 
SQL on the fly with something like:

rule_where_cond:
 rs | rulename | tblname | colname | test | test_value
----+----------+---------+---------+------+------------
  1 | us-only  | city    | country | =    | USA

Note the "rs" is "rule-set" where all criteria for a specific query have the 
same rs value.

rule_sizing_cond:
 rs | rulename | tblname | colname    | minval | maxval | resultval
----+----------+---------+------------+--------+--------+---------------
  1 | cityicon | city    | population |      0 |    100 | small_dot.jpg
  1 | cityicon | city    | population |    101 |        | large_dot.jpg
  1 | showname | city    | population |      0 |    100 | false
  1 | showname | city    | population |    101 |        | true

This one gains you a lot because you just join to it rather than using CASE 
clauses.
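
For instance, the icon rule above could be applied with a plain join
instead of a CASE (a sketch; the city table's city_name and population
columns are assumed):

```sql
-- Pick the icon whose population band the city falls into;
-- a NULL maxval means "no upper bound"
SELECT c.city_name, r.resultval AS icon
FROM city c
JOIN rule_sizing_cond r
  ON  r.rulename = 'cityicon'
  AND r.tblname  = 'city'
  AND c.population >= r.minval
  AND (r.maxval IS NULL OR c.population <= r.maxval);
```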

rule_select_cols:
 rs | rulename | tblname | colname   | aliasname
----+----------+---------+-----------+-----------
  1 | showname | city    | city_name | label1

So your query will now return the city_name as "label1", but you can get a 
flag from the "rule_sizing_cond" table to say whether to show it or not.

Now, it depends how complicated your conditions can be as to how complicated 
the setup of these tables is and also how complicated the query-builder can 
be. However, I have used something similar to build queries myself and it 
does provide you with a lot of flexibility.

HTH

-- 
  Richard Huxton



Re: [SQL] SQL problem: bank account

2003-06-02 Thread listrec
How about:

select max(transaction_id) from bank_account group by customer_id

Should work...
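
That yields only the transaction IDs, and it assumes transaction_id
always increases along with ts; granting that, the full rows can be
pulled back with a join (a sketch):

```sql
-- Highest transaction_id per customer, joined back for the whole row
SELECT ba.*
FROM bank_account ba
JOIN (SELECT customer_id, MAX(transaction_id) AS transaction_id
      FROM bank_account
      GROUP BY customer_id) latest
  ON  ba.customer_id    = latest.customer_id
  AND ba.transaction_id = latest.transaction_id;
```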

Detlef

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Erik G. Burrows
Sent: Monday, June 2, 2003 07:30
To: [EMAIL PROTECTED]
Subject: [SQL] SQL problem: bank account

