[SQL] Using & - operator

2003-06-03 Thread Rado Petrik
Hi, 

I have a table users:

id  name  bin
--
 1   xx   9

dec(1) & dec(9) = dec(1)  
bin(0001) & bin(1001) = bin(0001) 

This query does not work:

SELECT name FROM users WHERE id_user=1 AND (bin & 1) 

This query returns:
"Warning: PostgreSQL query failed: ERROR: right-hand side of AND is
type 'integer', not 'boolean' in"


How do I write this query correctly?

Thanks. 
 

-- 
Rado Petrik <[EMAIL PROTECTED]>





Re: [SQL] SQL problem: bank account

2003-06-03 Thread Tom Lane
"Erik G. Burrows" <[EMAIL PROTECTED]> writes:
> I need to get the most recent transaction for each customer. I need only
> the transaction ID, but the entire row would be best.

If you don't mind a not-standard-SQL solution, the SELECT DISTINCT ON
construct is designed for this sort of thing.  See the "weather report"
example in the SELECT reference page.
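For the bank_account table posted upthread, an (untested) sketch of that
approach might look like:

SELECT DISTINCT ON (customer_id) *
FROM bank_account
ORDER BY customer_id, ts DESC;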

regards, tom lane



Re: [SQL] SQL problem: bank account

2003-06-03 Thread Dmitry Tkach
I am afraid this looks even uglier than your second solution, but it
should work, and be quicker...

-- You need this index to avoid rescanning the whole table for each
-- customer_id and re-sorting the results:
create index customer_txstamp_idx on bank_account (customer_id, ts);

select ba.* from bank_account ba
where transaction_id = (select transaction_id from bank_account
    where customer_id = ba.customer_id
    order by customer_id desc, ts desc limit 1);

Now, note that if you have lots of different customers in that table,
it will still take a while to fetch them all (although it should still
be a lot quicker than half an hour) - in that case, you may consider
either getting them one customer at a time (by adding ... and customer_id=?
to the above query) or using cursors...
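For example, with a hypothetical customer id of 42:

select ba.* from bank_account ba
where ba.customer_id = 42
and transaction_id = (select transaction_id from bank_account
    where customer_id = ba.customer_id
    order by customer_id desc, ts desc limit 1);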

I hope, it helps...

Dima

Erik G. Burrows wrote:

It seems to me this is a simple problem, but the solution eludes me.

I have a table:

bank_account (
 transaction_id serial not null,
 customer_id int not null references customer(id),
 ts timestamp not null default now(),
 amount float not null,
 balance float not null,
 primary key(transaction_id)
);

I need to get the most recent transaction for each customer. I need only
the transaction ID, but the entire row would be best.


I have two solutions, both of which are too slow for use in my
interactive web-based interface:
Solution 1: Left outer self-join:
SELECT
 ba1.*
FROM
 bank_account ba1
 LEFT OUTER JOIN bank_account ba2
   ON ba1.customer_id = ba2.customer_id AND ba1.ts < ba2.ts
WHERE
 ba2.ts IS NULL;
This query works great on tables of less than a few thousand rows. For
my 300k row table, it takes several hours.
Solution 2: max-concat trick
SELECT
 split_part(max(
 extract(EPOCH from ts)::VARCHAR || '' ||
 transaction_id::VARCHAR), '', 2)::INT
FROM
 bank_account
GROUP BY
 customer_id
This is an ugly and obviously inefficient solution, but it does the job
in about 1/2 hour. Still too long though.


I've been working on this problem for days, and consulting friends. No
elegant, fast solution is presenting itself. As I said, I feel I'm not
seeing the obvious solution in front of my face.
In the meantime I can use this query to do the job on a per-customer
basis:
select
 *
from
 bank_account
where
 id = 
 and ts = (select max(ts) from bank_account ba2 where ba2.customer_id =
bank_account.customer_id);
However, doing this for all 40,000 customers is not workable as a manual
process. My last resort is to do it this way to pre-generate the report,
but I'd far rather do it real-time.
Help! My brain hurts!

 





Re: [SQL] Using & - operator

2003-06-03 Thread Dmitry Tkach
Rado Petrik wrote:

Hi, 

I have a table users:

id  name  bin
--
1   xx   9

dec(1) & dec(9) = dec(1)  
bin(0001) & bin(1001) = bin(0001) 

This query does not work:

SELECT name FROM users WHERE id_user=1 AND (bin & 1) 

This query returns:
"Warning: PostgreSQL query failed: ERROR: right-hand side of AND is
type 'integer', not 'boolean' in"
 

I think you want:

... AND (bin & 1) = 1

bin & 1 by itself returns an *integer*, and you need a *boolean* expression
for your criteria.
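i.e. (keeping your original column names):

SELECT name FROM users WHERE id_user = 1 AND (bin & 1) = 1;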

I hope, it helps...

Dima

How do I write this query correctly?

Thanks. 

 





[SQL] Maintaining a counter up-to-date

2003-06-03 Thread Samuel Tardieu
I have an "ips" table with 10+ records, each record having a
"catid" field representing its category. "catid" references a row in a
table called "categories".

For statistics purposes (generating images showing the evolution of the
number of rows by category), I am trying to reduce the load on the
database.

The query I was running at the beginning was:

  SELECT catid, COUNT(*) FROM ips GROUP BY catid;

I then added a "nentries" field to the "categories" table with some
rules to maintain the counters up-to-date:

  CREATE RULE cat_ins AS
ON INSERT TO ips
DO
  UPDATE categories
SET nentries = (categories.nentries + 1)
WHERE (categories.catid = new.catid);

  CREATE RULE cat_del AS
ON DELETE TO ips
DO
  UPDATE categories
SET nentries = (categories.nentries - 1)
WHERE (categories.catid = old.catid);

  CREATE RULE cat_upd AS
ON UPDATE TO ips
WHERE old.catid <> new.catid
DO
 (UPDATE categories
SET nentries = (categories.nentries - 1)
WHERE (categories.catid = old.catid);
  UPDATE categories
SET nentries = (categories.nentries + 1)
WHERE (categories.catid = new.catid); );

This works fine when inserting, deleting or updating one row in the
"ips" table. However, when inserting, deleting or updating several rows
at a time with the same "catid", the counter only gets incremented or
decremented by one.

I have not found an easy way to maintain the counter up-to-date.
I have found a complex solution: I created a "counter" table with two
fields, "catid" and "value". The idea is to insert a row with value 1 for
every insertion (or for the new value on update), and -1 for every deletion
(or for the old value on update).

  CREATE RULE counter_ins AS
ON INSERT TO ips
DO
 (INSERT INTO counter (catid, value) VALUES (new.catid, 1);
  UPDATE categories
SET nentries = nentries +
(SELECT sum(value) FROM counter
 WHERE counter.catid = categories.catid)
WHERE (categories.catid = counter.catid);
  DELETE FROM counter; );

(I do not show the equivalent "ON DELETE" and "ON UPDATE" rules)

I have two questions:

  1) Is this way of doing things correct? Do I have the guarantee that
 all the commands in the "DO" part will be executed in a
 transaction even if the initial insertion into "ips" isn't?

  2) What is the simplest way of doing this? I guess keeping statistics in a
 database is a pretty common operation.

Thanks in advance.

  Sam

PS/ the real problem is more complex, as we need to do those
statistics on several fields, not only "catid"
-- 
Samuel Tardieu -- [EMAIL PROTECTED] -- http://www.rfc1149.net/sam




Re: [SQL] SQL problem: bank account

2003-06-03 Thread Jonathan Gardner
On Monday 02 June 2003 00:49, listrec wrote:
> How about:
>
> select max(transaction_id) from bank_account group by customer_id
>

And if you want the rest of the data in the rows:

SELECT b.*
FROM bank_account b
JOIN  (
SELECT max(transaction_id) AS transaction_id
FROM bank_account
GROUP BY customer_id
) AS subselect ON subselect.transaction_id=b.transaction_id
;

I use a similar query here at work to find the first time a guy has visited 
our site through an advertiser.
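Roughly like this (table and column names made up):

SELECT v.*
FROM visits v
JOIN (
    SELECT min(visit_id) AS visit_id
    FROM visits
    GROUP BY user_id, advertiser_id
) AS first_visit ON first_visit.visit_id = v.visit_id;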

-- 
Jonathan Gardner <[EMAIL PROTECTED]>
(was [EMAIL PROTECTED])
Live Free, Use Linux!




Re: [SQL] SQL problem: bank account

2003-06-03 Thread Erik G. Burrows

This is the solution I was looking for! 20 seconds to compile the whole
list.

Thanks everyone for your help, I very much appreciate it. Even though
this is kind of 'hacky', being non-standard SQL, it keeps the
database from having to do 40,000 selects, as it would have to with any
correlated sub-select statement.

Thanks!

> "Erik G. Burrows" <[EMAIL PROTECTED]> writes:
> > I need to get the most recent transaction for each customer. I need only
> > the transaction ID, but the entire row would be best.
> 
> If you don't mind a not-standard-SQL solution, the SELECT DISTINCT ON
> construct is designed for this sort of thing.  See the "weather report"
> example in the SELECT reference page.
> 
>   regards, tom lane
> 
-- 
Erik G. Burrows - KG6HEA   www.erikburrows.com
PGP Key: http://www.erikburrows.com/[EMAIL PROTECTED]



Re: [SQL] Maintaining a counter up-to-date

2003-06-03 Thread Richard Huxton
On Monday 02 Jun 2003 4:47 pm, Samuel Tardieu wrote:
> I have an "ips" table with 10+ records, each record having a
> "catid" field representing its category. "catid" references a row in a
> table called "categories".
>
> For statistics purposes (generating images showing the evolution of the
> number of rows by category), I am trying to reduce the load on the
> database.
>
> The query I was running at the beginning was:
>
>   SELECT catid, COUNT(*) FROM ips GROUP BY catid;
>
> I then added a "nentries" field to the "categories" table with some
> rules to maintain the counters up-to-date:
>
>   CREATE RULE cat_ins AS
[snip]
> This works fine when inserting, deleting or updating one row in the
> "ips" table. However, when inserting, deleting or updating several rows
> at a time with the same "catid", the counter only gets incremented or
> decremented by one.

You want to use triggers, not rules, here (see the "server programming" and 
"procedural language" manual sections). A trigger will be fired for each row 
inserted/deleted/updated. Of course this means it will be fired 5000 times 
for 5000 updated rows.
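
For what it's worth, an (untested) sketch of such a trigger, assuming the
ips/categories schema from your post and that plpgsql is installed in the
database:

CREATE FUNCTION ips_maintain_nentries() RETURNS trigger AS '
BEGIN
    -- keep categories.nentries in sync with the rows in ips
    IF TG_OP = ''INSERT'' THEN
        UPDATE categories SET nentries = nentries + 1
            WHERE catid = new.catid;
    ELSIF TG_OP = ''DELETE'' THEN
        UPDATE categories SET nentries = nentries - 1
            WHERE catid = old.catid;
    ELSIF TG_OP = ''UPDATE'' AND new.catid <> old.catid THEN
        UPDATE categories SET nentries = nentries - 1
            WHERE catid = old.catid;
        UPDATE categories SET nentries = nentries + 1
            WHERE catid = new.catid;
    END IF;
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER ips_nentries_trig
    AFTER INSERT OR UPDATE OR DELETE ON ips
    FOR EACH ROW EXECUTE PROCEDURE ips_maintain_nentries();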

-- 
  Richard Huxton



Re: [SQL] Maintaining a counter up-to-date

2003-06-03 Thread Samuel Tardieu
On  2/06, Richard Huxton wrote:

| You want to use triggers not rules here (see the "server programming" and 
| "procedural language" manual sections). A trigger will be fired for each row 
| inserted/deleted/updated. Of course this means it will be fired 5000 times 
| for 5000 updated rows.

Ouch. For now, I have reverted to the count(*) approach, which seems to
run much faster in 7.3.2 than in 7.2.x.

I'll look at the triggers, thanks.

  Sam

