[SQL] column alias and group by/having/order

2004-06-01 Thread Tomasz Myrta
Hi
I found some strange column alias behaviour:
select val1+val2 as val
from some_table
group by val;
result - OK
select val1+val2 as val
from some_table
order by val;
result - OK
select val1+val2 as val
from some_table
group by val having val1+val2>1;
result - OK
select val1+val2 as val
from some_table
group by val having val>1;
ERROR:  Attribute "val" not found
Is it a bug or a feature?
Regards,
Tomasz Myrta

 PostgreSQL 7.3.4 on i386-pc-linux-gnu, compiled by GCC i386-linux-gcc (GCC) 3.3.2 20031005 (Debian prerelease)
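
For reference, HAVING can generally see only grouping columns and aggregate
expressions, not output-list aliases, which is why the last form fails while
GROUP BY and ORDER BY accept the alias. One workaround sketch is to compute
the alias in a subquery, so the outer query can reference it by name:

select val
from (select val1+val2 as val from some_table) as t
group by val
having val > 1;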



[SQL] assistance on self join pls

2004-06-01 Thread email lists
Hi all,

I have the following firewall connection data. 

      datetime       | protocol | port  |   inside_ip    |   outside_ip   | outbound_count | outbound_bytes
---------------------+----------+-------+----------------+----------------+----------------+----------------
 2004-05-05 05:00:00 |        6 |    21 | 192.168.11.191 | 205.227.137.53 |              6 |           3881
 2004-05-05 05:00:00 |        6 | 22326 | 192.168.11.191 | 205.227.137.53 |              1 |           2592
 2004-05-05 05:00:00 |        6 | 38005 | 192.168.11.191 | 205.227.137.53 |              1 |          51286
 2004-05-05 05:00:00 |        6 | 51861 | 192.168.11.191 | 205.227.137.53 |              1 |          42460
 2004-05-05 05:00:00 |        6 | 52095 | 192.168.11.191 | 205.227.137.53 |              1 |           2558
 2004-05-05 05:00:00 |        6 | 59846 | 192.168.11.191 | 205.227.137.53 |              1 |            118
 2004-05-05 05:00:00 |        6 | 60243 | 192.168.11.191 | 205.227.137.53 |              1 |           2092
 2004-05-05 06:00:00 |        6 |    21 | 192.168.11.185 | 205.227.137.53 |              6 |           3814
 2004-05-05 06:00:00 |        6 | 29799 | 192.168.11.185 | 205.227.137.53 |              1 |            118
 2004-05-05 06:00:00 |        6 | 30138 | 192.168.11.185 | 205.227.137.53 |              1 |           2092
 2004-05-05 06:00:00 |        6 | 30215 | 192.168.11.185 | 205.227.137.53 |              1 |          42460
 2004-05-05 06:00:00 |        6 | 51279 | 192.168.11.185 | 205.227.137.53 |              1 |           1332
 2004-05-05 06:00:00 |        6 | 52243 | 192.168.11.185 | 205.227.137.53 |              1 |          51286
 2004-05-05 06:00:00 |        6 | 60079 | 192.168.11.185 | 205.227.137.53 |              1 |           2558

I want to aggregate/collapse these entries into something similar to:

      datetime       | protocol | port  |   inside_ip    |   outside_ip   | outbound_count | outbound_bytes
---------------------+----------+-------+----------------+----------------+----------------+----------------
 2004-05-05 05:00:00 |        6 |    21 | 192.168.11.191 | 205.227.137.53 |             12 |         104987
 2004-05-05 06:00:00 |        6 |    21 | 192.168.11.185 | 205.227.137.53 |             12 |         103660

I have not had much success so far; any assistance would be greatly appreciated.

Darren
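
Despite the subject line, a plain GROUP BY (no self join) appears sufficient
here. A grouped query along these lines should produce the collapsed form (a
sketch; the table name "connections" and the use of min(port) to pick the
representative port are assumptions, since neither appears above):

SELECT datetime, protocol, min(port) AS port,
       inside_ip, outside_ip,
       sum(outbound_count) AS outbound_count,
       sum(outbound_bytes) AS outbound_bytes
  FROM connections
 GROUP BY datetime, protocol, inside_ip, outside_ip
 ORDER BY datetime;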



Re: [SQL] sum ( time) problem

2004-06-01 Thread Bruno Wolff III
On Thu, May 27, 2004 at 11:14:58 +,
  Willem de Jong <[EMAIL PROTECTED]> wrote:
> >>
> >> If I do a sum(time), the result is like '1 day 18:00:00', but I'd
> >> like to get a result like '42:00:00'.
> >>
> >> How can I realise this with a query?

You can do something like the following: (not completely tested)
select trunc(extract(epoch from sum(time))/3600) || ':' ||
  extract(minutes from sum(time)) || ':' ||
  extract(seconds from sum(time));

I am not sure if the above will work right for negative intervals.
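
If zero-padded minutes and seconds are wanted, lpad can be layered on top; a
sketch under the same assumptions, with "some_table" standing in for the
real table name:

select trunc(extract(epoch from sum(time))/3600)::int || ':' ||
       lpad(extract(minutes from sum(time))::int::text, 2, '0') || ':' ||
       lpad(trunc(extract(seconds from sum(time)))::int::text, 2, '0')
from some_table;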



[SQL] Schemata & User-Defined-Type casting issues

2004-06-01 Thread Chris Gamache
PostgreSQL 7.4.2 ...

Background: I'm attempting to migrate tables which were created in the
pre-schema days to a sensible schema setup. I'm using the "uniqueidentifier"
column type in some of these tables. When I created the new schema, I created
an instance of "uniqueidentifier" and its supporting functions and casts
within the new schema. Now, when I try to "INSERT INTO myschema.mytable ...
SELECT ... FROM public.mytable;", the system has difficulty seeing that the
data types are compatible across the schemas. An explicit cast (without first
casting to a neutral data type) won't work for the same reason.

I'm torn: Should I create a "cast" to allow for casting of this data-type
across schemas, or should I have created the table referencing the user-defined
type in the public schema? 

I expect that this problem will come up now and again. I'd like to solve it
at this early phase with a proper design-based fix.

If it makes a difference, I would prefer not to include this schema in the
search path, and instead refer to it explicitly as myschema.mytable wherever
I need to reference it.

CG







Re: [SQL] Schemata & User-Defined-Type casting issues

2004-06-01 Thread Tom Lane
Chris Gamache <[EMAIL PROTECTED]> writes:
> I'm using the "uniqueidentifier" column in some of these tables. When
> I created the new schema, I created an instance of "uniqueidentifier"
> and its supporting functions and casts within the new schema. When I
> try to "INSERT INTO myschema.mytable ... SELECT ... FROM
> public.mytable;" It's having difficulty seeing that the data types are
> compatible across the schema.

Indeed, since as far as the system knows those two datatypes have
nothing to do with each other.  I'd go ahead and define an assignment
cast WITHOUT FUNCTION to let you do the conversion.
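
Something along these lines, say (a sketch; it assumes the two types are
binary-compatible, as two identically defined "uniqueidentifier" types
should be, and that you own both of them):

CREATE CAST (public.uniqueidentifier AS myschema.uniqueidentifier)
    WITHOUT FUNCTION AS ASSIGNMENT;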

regards, tom lane



[SQL] Query becoming slower on adding a primary key

2004-06-01 Thread mallah


Hi ,

After adding a primary key to one of the participating tables, the query
never finishes. The live table has a primary key, and it cannot be removed
there. I made a copy of the live table using CREATE TABLE t_a AS SELECT *
FROM tab; against that copy the query works fine, but as soon as I add the
same primary key the live table has, it stops working. Can anyone please
help me with this problem? The details are below.

Thanks in advance.

Regds
Mallah.



explain analyze select  email_id ,email ,contact from t_a a join
email_source f using(email_id) join email_subscriptions h using(email_id)
where 1=1 and f.source_id =1 and h.sub_id = 3  ;
                                                            QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
 Hash Join  (cost=134818.15..221449.12 rows=306921 width=40) (actual time=9457.000..17700.227 rows=283763 loops=1)
   Hash Cond: ("outer".email_id = "inner".email_id)
   ->  Seq Scan on email_subscriptions h  (cost=0.00..70323.77 rows=746257 width=4) (actual time=0.054..3434.639 rows=746721 loops=1)
         Filter: (sub_id = 3)
   ->  Hash  (cost=131485.92..131485.92 rows=308491 width=44) (actual time=9456.757..9456.757 rows=0 loops=1)
         ->  Hash Join  (cost=26878.00..131485.92 rows=308491 width=44) (actual time=2293.378..8978.407 rows=299873 loops=1)
               Hash Cond: ("outer".email_id = "inner".email_id)
               ->  Seq Scan on email_source f  (cost=0.00..26119.84 rows=308491 width=4) (actual time=0.123..1094.661 rows=317504 loops=1)
                     Filter: (source_id = 1)
               ->  Hash  (cost=18626.80..18626.80 rows=800080 width=40) (actual time=2275.979..2275.979 rows=0 loops=1)
                     ->  Seq Scan on t_a a  (cost=0.00..18626.80 rows=800080 width=40) (actual time=0.009..1297.728 rows=800080 loops=1)
 Total runtime: 17856.763 ms
(12 rows)

tradein_clients=# ALTER TABLE t_a add primary key (email_id);
NOTICE:  ALTER TABLE / ADD PRIMARY KEY will create implicit index
"t_a_pkey" for table "t_a"
ALTER TABLE
Time: 6322.116 ms
tradein_clients=# VACUUM analyze t_a;
VACUUM
Time: 809.840 ms



tradein_clients=# explain analyze select  email_id ,email ,contact from
t_a a join email_source f using(email_id) join email_subscriptions h
using(email_id) where 1=1 and f.source_id =1 and h.sub_id = 3  ;

Runs forever.





ROW COUNTS:

t_a:                   800,080
email_source:        1,545,056
email_subscriptions: 4,041,133


Structures:


tradein_clients=# \d email_source
   Table "public.email_source"
  Column   |  Type   | Modifiers
---+-+---
 email_id  | integer | not null
 source_id | integer | not null
Indexes:
"email_source_pkey" primary key, btree (email_id, source_id)
"email_source_sid" btree (source_id)
Foreign-key constraints:
"$1" FOREIGN KEY (source_id) REFERENCES sources(source_id) ON UPDATE
CASCADE ON DELETE CASCADE



\d t_a
Table "public.t_a"
  Column   |   Type   | Modifiers
---+--+---
 email_id  | integer  | not null
 userid| integer  |
 email | character varying(100)   |
 country   | character varying(100)   |
 city  | character varying(50)|
 contact   | character varying(100)   |
 last_sent | timestamp with time zone |
 pref  | character varying(1) |
 website   | character varying(255)   |
 address   | text |

\d email_subscriptions
Table "public.email_subscriptions"
  Column  |  Type   | Modifiers
--+-+---
 email_id | integer | not null
 sub_id   | integer | not null
Indexes:
"email_subscriptions_pkey" primary key, btree (email_id, sub_id)















---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [SQL] Query becoming slower on adding a primary key

2004-06-01 Thread Tom Lane
[EMAIL PROTECTED] writes:
> tradein_clients=# explain analyze select  email_id ,email ,contact from
> t_a a join email_source f using(email_id) join email_subscriptions h
> using(email_id) where 1=1 and f.source_id =1 and h.sub_id = 3  ;

> Runs for Ever.

So what does plain explain say about it?
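
That is, the same statement without analyze, which plans the query but does
not execute it:

explain select email_id, email, contact from t_a a
join email_source f using(email_id)
join email_subscriptions h using(email_id)
where 1=1 and f.source_id = 1 and h.sub_id = 3;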

regards, tom lane



[SQL] most efficient way to manage ordering

2004-06-01 Thread Matthew Nuzum
Sorry if this is confusing, it is somewhat difficult to explain.

I find myself frequently creating solutions to the same problem.  I'm not
yet happy with the way I've done any of them and I'd like to find a purely
SQL way of doing this if possible.

Here's what I have.  For a contrived illustration, let's say we have a
database of photo galleries, each having some number of images.  Our tables
would look like this:

galleries
-
galleryid   | int4 (pkey)
name| text


images
-
imageid | int4 (pkey)
galleryid   | int4 (fkey)
image   | text
dsply_order | int4 (index)


Now, the same database holds many different galleries.  Each gallery has
some number of images and the users want the images to show in a certain
order. This is done by inserting the images in the order you want them to
appear.  However, it may be necessary for the user to re-order them.  I
provide a MOVE UP, MOVE DOWN, MOVE TO TOP and MOVE TO BOTTOM option so that
they can change the order.  Also, people occasionally delete images.

If a person maintaining galleryid 1, which has 6 images, runs "SELECT * FROM
images WHERE galleryid = 1 ORDER BY dsply_order", it might show this:
 imageid | galleryid |  image   | dsply_order
---------+-----------+----------+-------------
       4 |         1 | 1/me.gif |           1
       7 |         1 | 1/aa.gif |           2
      12 |         1 | 1/bb.gif |           3
      11 |         1 | 1/cc.gif |           4
      15 |         1 | 1/dd.gif |           5
      18 |         1 | 1/ee.gif |           6

Now, when a person decides to re-order, it's no problem to do this:
To move imageid 12 to the top:
UPDATE images SET dsply_order = CASE WHEN imageid = 12 THEN 1 ELSE
dsply_order + 1 END WHERE galleryid = 1;

That however leaves a gap at dsply_order 4: 
 imageid | galleryid |  image   | dsply_order
---------+-----------+----------+-------------
      12 |         1 | 1/bb.gif |           1
       4 |         1 | 1/me.gif |           2
       7 |         1 | 1/aa.gif |           3
      11 |         1 | 1/cc.gif |           5
      15 |         1 | 1/dd.gif |           6
      18 |         1 | 1/ee.gif |           7

A similar problem occurs when someone deletes an item.

Ideally, I'd like to figure out a single SQL query that can be run
afterwards to clean up the dsply_order to make sure that each number occurs
only one time and that there are no gaps.

I know I can write a stored procedure for this, but the problem is that I do
this very task on lots of different tables, all with different formats and
different types of data. I'd rather not maintain many different procedures
if I can find an alternative.

Right now, I get the job done in code, but it feels inefficient.
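
On PostgreSQL 8.4 and later (well after this thread), window functions make
that cleanup a single statement; a sketch against the images table above,
with imageid as an assumed tie-breaker:

UPDATE images i
   SET dsply_order = sub.rn
  FROM (SELECT imageid,
               row_number() OVER (PARTITION BY galleryid
                                  ORDER BY dsply_order, imageid) AS rn
          FROM images) sub
 WHERE i.imageid = sub.imageid
   AND i.dsply_order <> sub.rn;

The final <> condition skips rows that already hold the right number, so
only rows affected by a gap or shuffle get rewritten.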

Matthew Nuzum
www.followers.net






Re: [SQL] most efficient way to manage ordering

2004-06-01 Thread Rod Taylor
> Ideally, I'd like to figure out a single SQL query that can be run
> afterwards to clean up the dsply_order to make sure that each number occurs
> only one time and that there are no gaps.

Well... by far the easiest way to approach this is not to clean up the
gaps. Removing gaps will only make things pretty, not easier or faster.

This is one of the many times it is best to differentiate between what
is displayed and what is used for functional purposes.

CREATE TEMPORARY SEQUENCE images_display_count;

SELECT nextval('images_display_count') AS display_order,
       *
  FROM images
 WHERE galleryid = 1
 ORDER BY real_order;

DROP SEQUENCE images_display_count;

There are ways of replacing the sequence that may be faster, but this
will address your concern. Do your updates, etc. via real_order and show
the user display_order.
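
On 8.4 and later, a window function can likewise replace the temporary
sequence entirely; a sketch using the dsply_order column from the original
schema as the functional order:

SELECT row_number() OVER (ORDER BY dsply_order) AS display_order, *
  FROM images
 WHERE galleryid = 1;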


