[SQL] Storing images from Delphi to postgresql

2005-10-13 Thread NSO
Hello,

 I am trying to store images (binary files) from Delphi (ADO components) in
PostgreSQL. I found the data type bytea for that, but I could not make it
work. Does anyone have a sample of Delphi code, or any other help with it?

thx
Lukas
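For context, a minimal server-side sketch (the table and column names here are made up for illustration; the Delphi/ADO side is not shown). The key points are a bytea column and a parameterized INSERT, so the driver binds the binary data rather than it being spliced into the SQL text:

```sql
-- Hypothetical table for storing uploaded binary files
CREATE TABLE images (
    id       serial PRIMARY KEY,
    filename text,
    data     bytea
);

-- Parameterized insert: the client library (ADO/psqlODBC) binds the
-- binary value, so no manual bytea escaping is needed
INSERT INTO images (filename, data) VALUES (?, ?);
```

On the Delphi side the parameter would typically be given a binary type (e.g. ftBlob / adLongVarBinary) and loaded from a TStream; hand-building the INSERT string from raw bytes is what usually fails.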





---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [SQL] SEVEN cross joins?!?!?

2005-10-13 Thread Daryl Richter

Frank Bax wrote:

At 09:00 AM 10/12/05, Daryl Richter wrote:
Richard Huxton wrote:
Frank Bax wrote:
[snip]

Richard, you've summed it up nicely.

Splitting locations into subsets (like 2,2,3) doesn't work because it is
possible that low values in one location can be offset by high values in
another location, and still result in an excellent combo.

The good news is these suggestions got me thinking outside the box.  I
think I can program a modified brute-force that bypasses large numbers
of combos early.  It might still be too large/slow, so I'd be interested
in finding more info about these "smarter algorithms" in option 2.
Where do I look?

If you're mathematically inclined, I would first look at using
Lagrangian Relaxation; it may be appropriate for your problem:

http://www.2112fx.com/lagrange.html


Greg: my son's the gamer; I'm just trying to help him out.





--
Daryl Richter
Director of Technology

(( Brandywine Asset Management  )
 ( "Expanding the Science of Global Investing"  )
 (  http://www.brandywine.com   ))





Re: [SQL] UPDATE Trigger on multiple tables

2005-10-13 Thread Muralidharan Ramakrishnan

Hi,

  CREATE TRIGGER associates a trigger function with a single table; it is
not possible to list more than one table separated by commas. You need one
CREATE TRIGGER per table, but they can all share the same trigger function:

  CREATE TRIGGER emp_cust_stamp BEFORE INSERT OR UPDATE ON customers
      FOR EACH ROW EXECUTE PROCEDURE last_updated_stamp();
  CREATE TRIGGER emp_cust_stamp BEFORE INSERT OR UPDATE ON employees
      FOR EACH ROW EXECUTE PROCEDURE last_updated_stamp();
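The original posts don't show the body of last_updated_stamp(); a minimal PL/pgSQL sketch (assuming every table using it has a last_updated column, per Ferindo's description) would be:

```sql
CREATE OR REPLACE FUNCTION last_updated_stamp() RETURNS trigger AS '
BEGIN
    -- NEW is the row about to be inserted/updated; stamp it and return it
    NEW.last_updated := now();
    RETURN NEW;
END;
' LANGUAGE plpgsql;
```

Because the function body refers only to NEW.last_updated, the same function can back a trigger on every table that has that column.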
 
Regards,
R.Muralidharan

Ferindo Middleton Jr <[EMAIL PROTECTED]> wrote:

Is it possible to have a single trigger on multiple tables
simultaneously? Example:

CREATE TRIGGER emp_cust_stamp BEFORE INSERT OR UPDATE ON employees, customers
FOR EACH ROW EXECUTE PROCEDURE last_updated_stamp();

I tried something like the above but get an error message at the comma. I
tried using the keyword AND as well. I couldn't find anything on this in
the docs. I have many different tables in my databases which all have a
"last_updated" field, and it seems less tedious to be able to enforce
updating this field database-wide using just one trigger. Is it possible?

Ferindo

Re: [SQL] pg, mysql comparison with "group by" clause

2005-10-13 Thread Andrew Sullivan
On Thu, Oct 13, 2005 at 12:24:55AM -0400, Greg Stark wrote:

> Well the "constants and the like" are precisely the point. There
> are plenty of cases where adding the column to the GROUP BY is
> unnecessary and since Postgres makes no attempt to prune them out,
> inefficient.

But inefficient pruning is an optimiser problem, not something that
should lead one to invent a whole syntax change that (a) violates the
standard and (b) is at least somewhat opaque in meaning.  The right
thing to do is surely to make the optimiser smarter, no?  (Think,
"What does DB2 have that we don't?")

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
This work was visionary and imaginative, and goes to show that visionary
and imaginative work need not end up well. 
--Dennis Ritchie



Re: [SQL] Update timestamp on update

2005-10-13 Thread Andrew Sullivan
On Wed, Oct 12, 2005 at 10:52:04PM -0400, Tom Lane wrote:
> the documentation of the "core" system shouldn't rely on them ... but
> that leaves us presenting C-code triggers as the only examples in
> chapter 35.  There is a paragraph in there suggesting you go look at
> the PL languages first, but obviously it's not getting the job done.
> 
> Anybody have a better idea?

It could just be made a little friendlier, I think.  At the beginning
of the trigger chapter is this:

--snip--
This chapter describes how to write trigger functions. Trigger
functions can be written in C or in some of the available procedural
languages. It is not currently possible to write a SQL-language
trigger function.
--snip--

We could just add a little note by way of modification.  Here's a
(somewhat verbose, I fear) suggestion:

--snip--
This chapter describes how to write trigger functions. Trigger
functions can be written in C or in some of the available procedural
languages. This chapter deals only with functions that are written in
C. If you are unfamiliar with C, you may want also to look at the
chapters on procedural languages, because there are some examples
there that may be easier for you to understand.  To use a procedural
language for a trigger, you will need to install that language; see
the relevant chapter for instructions on how to do so.  It is not
currently possible to write a SQL-language trigger function.
--snip--

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
The plural of anecdote is not data.
--Roger Brinner



Re: [SQL] pg, mysql comparison with "group by" clause

2005-10-13 Thread Scott Marlowe
On Wed, 2005-10-12 at 20:13, Greg Stark wrote:
> Scott Marlowe <[EMAIL PROTECTED]> writes:
> 
> > Hehe.  When I turn on my windshield wipers and my airbag deploys, is it
> > a documented "feature" if the dealership told me about this behaviour
> > ahead of time?  
> 
> Well it's more like my car where the dashboard dims when I turn on my
> headlights which annoys me to no end since I learned to always put my
> headlights on even in the day.

Sorry, but it's worse than that.  It is quite possible that two people
could run this query at the same time and get different data from the
same set and the same point in time.  That shouldn't happen accidentally
in SQL, you should know it's coming.

But it's not the same as the air bag deploying, it's like a different
random part of my car activates / deactivates each time.  The horn, a
turn indicator, the trunk opens.  

> > In much the same way, while this behaviour may be documented by MySQL, I
> > can't imagine it really being called a feature. But at least this
> > misbehaviour is documented. However, I think most people in the MySQL
> > universe just stumble onto it by accident when they try it and it works. I'd
> > at least prefer it to throw a warning or notice or something.
> 
> I don't see why you think people stumble on this by accident. I think it's
> actually an extremely common need. So common that Postgres has the same
> feature (though less general) and invented a whole syntax to handle it.

Because I answer a boatload of questions on phpbuilder, where there are
tons of MySQL and PostgreSQL beginners who learn by stumbling around in
their database of choice.

Most MySQL users think that "select a,b,c from table group by a" is a
valid query, and don't even realize that they can get theoretically
different results each time.  It's one of those many things they learn
wrong on MySQL and have to unlearn everywhere else.

They didn't go looking for this behaviour, and almost none of them
realized when they were doing it that they could get a different answer
each time.

> I think most MySQL users don't stumble on it, they learn it as the way to
> handle the common use case when you join a master table against a detail table
> and then want to aggregate all the detail records.

This isn't really common sense all the time though.  It's more about the
law of unintended consequences.  People write these queries, and never
realize that they are actually random responses coming back.  And if
they aren't random responses, then their data likely isn't normalized.

> In standard SQL you have to
> write GROUP BY ... and list every single column you need from the master
> table. Forcing the database to do a lot of redundant comparisons and sort on
> uselessly long keys where in fact you only really need it to sort and group by
> the primary key.

But again, you're getting whatever row the database feels like giving
you.  A use of a simple, stupid aggregate like an any() aggregate would
be fine here, and wouldn't require a lot of overhead, and would meet the
SQL spec.
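For what it's worth, such an aggregate can be defined in a few lines; a common sketch (the names are illustrative, not from this thread) returns the first non-null value it encounters:

```sql
-- State transition function: keep the current state unless it is still NULL
CREATE FUNCTION first_nonnull(anyelement, anyelement) RETURNS anyelement
    AS 'SELECT coalesce($1, $2)' LANGUAGE sql IMMUTABLE;

CREATE AGGREGATE first (
    basetype = anyelement,
    sfunc    = first_nonnull,
    stype    = anyelement
);

-- Usage: SELECT user_id, first(username), sum(money) ... GROUP BY user_id
```

Which row supplies the value is still implementation-dependent; the point is only that the query becomes legal SQL with explicit "don't care" semantics.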

The real reason this thing exists today and not an any() aggregate or
some equivalent in MySQL is because of all the legacy code using the
messed up group by syntax.  It's hard to change that kind of stuff when
you've got a lot of market share to hold on to.

> Remember, most MySQL users learn MySQL first, and only later learn what is
> standard SQL and what isn't. 

Hehe.  I'll never forget, remember, I answer questions on databases on
phpbuilder.  I'm always amazed at how many folks come there who are just
starting out and making the same mistakes we all made when beginning.

> > A Subselect would let you do such a thing as well, and while it's more
> > complicated to write, it is likely to be easier to tell just what it's
> > doing.
> 
> Subselects have their own problems here. Mainly Postgres's optimizer, as good
> as it is, doesn't treat them with the same code paths as joins and can't find
> all the same plans for them. But in any case you cannot always write a
> subselect that's equivalent to an arbitrary join.

Actually, for things like aggregates, I've often been able to improve
performance with sub selects in PostgreSQL.  Although, back in the 7.2
days it was still pretty pokey at that kind of stuff.  



Re: [SQL] Strange join...maybe some improvements???

2005-10-13 Thread Thomas F. O'Connell
What indexes do you have on these tables?

And have you ANALYZEd all three recently?

--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Open Source Solutions. Optimized Web Development.

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-469-5150
615-469-5151 (fax)

On Oct 12, 2005, at 8:52 AM, Andy wrote:

> I have 3 tables:
>
> CREATE TABLE orders (
>   id int4 SERIAL,
>   id_ag int4,
>   id_modell int4   ->> this is linked to modell.id
> )
>
> CREATE TABLE modell (
>   id int4 SERIAL,
>   id_hersteller int4
> )
>
> CREATE TABLE contactpartner (
>   id int4 SERIAL,
>   id_ag int4,   ->> this is linked to orders.id_ag or modell.id_hersteller
>   id_user int4
> )
>
> I get a list of id_ag from contactpartner which belongs to a user
> (AG_LIST). Then I have to select/count all the rows from the orders
> table that have orders.id_ag in the AG_LIST or which have
> modell.id_hersteller in the AG_LIST.
>
> I have this query:
>
> SELECT count(o.id) FROM orders o
> INNER JOIN modell m ON m.id = o.id_modell
> WHERE o.id_ag IN (SELECT id_ag FROM contactpartner cp WHERE id_user=15)
>    OR m.id_hersteller IN (SELECT id_ag FROM contactpartner cp WHERE id_user=15)
>
> Aggregate  (cost=7828.60..7828.60 rows=1 width=4) (actual time=1145.150..1145.151 rows=1 loops=1)
>   ->  Hash Join  (cost=1689.64..7706.32 rows=48913 width=4) (actual time=153.059..1136.457 rows=9395 loops=1)
>         Hash Cond: ("outer".id_modell = "inner".id)
>         Join Filter: ((hashed subplan) OR (hashed subplan))
>         ->  Seq Scan on orders o  (cost=0.00..3129.17 rows=65217 width=12) (actual time=0.031..94.444 rows=65217 loops=1)
>         ->  Hash  (cost=1218.07..1218.07 rows=66607 width=8) (actual time=151.211..151.211 rows=0 loops=1)
>               ->  Seq Scan on modell m  (cost=0.00..1218.07 rows=66607 width=8) (actual time=0.044..87.154 rows=66607 loops=1)
>         SubPlan
>           ->  Index Scan using contactpartner_id_user_idx on contactpartner cp  (cost=0.00..6.02 rows=2 width=4) (actual time=0.010..0.018 rows=4 loops=1)
>                 Index Cond: (id_user = 15)
>           ->  Index Scan using contactpartner_id_user_idx on contactpartner cp  (cost=0.00..6.02 rows=2 width=4) (actual time=0.092..0.116 rows=4 loops=1)
>                 Index Cond: (id_user = 15)
> Total runtime: 1145.689 ms
>
> I tried also this one:
>
> SELECT count(o.id) FROM orders o
> INNER JOIN modell m ON m.id = o.id_modell
> INNER JOIN contactpartner cp ON cp.id_user=15 AND (o.id_ag=cp.id_ag OR cp.id_ag=m.id_hersteller)
>
> Aggregate  (cost=11658.63..11658.63 rows=1 width=4) (actual time=1691.570..1691.570 rows=1 loops=1)
>   ->  Nested Loop  (cost=7752.40..11657.27 rows=542 width=4) (actual time=213.945..1683.515 rows=9416 loops=1)
>         Join Filter: (("inner".id_ag = "outer".id_ag) OR ("outer".id_ag = "inner".id_hersteller))
>         ->  Index Scan using contactpartner_id_user_idx on contactpartner cp  (cost=0.00..6.02 rows=2 width=4) (actual time=0.108..0.188 rows=4 loops=1)
>               Index Cond: (id_user = 15)
>         ->  Materialize  (cost=7752.40..8723.57 rows=65217 width=12) (actual time=37.586..352.620 rows=65217 loops=4)
>               ->  Hash Join  (cost=1677.59..7368.18 rows=65217 width=12) (actual time=150.220..1153.872 rows=65217 loops=1)
>                     Hash Cond: ("outer".id_modell = "inner".id)
>                     ->  Seq Scan on orders o  (cost=0.00..3129.17 rows=65217 width=12) (actual time=0.034..95.133 rows=65217 loops=1)
>                     ->  Hash  (cost=1218.07..1218.07 rows=66607 width=8) (actual time=149.961..149.961 rows=0 loops=1)
>                           ->  Seq Scan on modell m  (cost=0.00..1218.07 rows=66607 width=8) (actual time=0.032..86.378 rows=66607 loops=1)
> Total runtime: 1696.253 ms
>
> but this brings me some double information (the same o.id) in the
> situation in which o.id_ag and m.id_hersteller are different, but
> still both in the AG_LIST.
>
> Is there any way to speed up this query?
>
> Regards,
> Andy.

Re: [DOCS] [SQL] Update timestamp on update

2005-10-13 Thread Jim C. Nasby
On Wed, Oct 12, 2005 at 10:52:04PM -0400, Tom Lane wrote:
> Jeff Williams <[EMAIL PROTECTED]> writes:
> > Thanks. Triggers was my first thought, but chapter 35 on Triggers didn't
> > really indicate a way I could do this easily and scared me with a lot of
> > c code.
> 
> Yeah.  This is a documentation issue that's bothered me for awhile.
> The problem is that we treat the PL languages as add-ons and therefore
> the documentation of the "core" system shouldn't rely on them ... but
> that leaves us presenting C-code triggers as the only examples in
> chapter 35.  There is a paragraph in there suggesting you go look at
> the PL languages first, but obviously it's not getting the job done.

Chapter 35 is plpgsql.. do you mean chapter 32.4?

> Anybody have a better idea?

What about a See Also section ala man pages that links to trigger info
for other languages?
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software    http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



Re: [DOCS] [SQL] Update timestamp on update

2005-10-13 Thread Mike Diehl
Is a working example something that people would like to see?  Or is this 
considered a good use of research time?

On Thursday 13 October 2005 11:20 am, Jim C. Nasby wrote:
> On Wed, Oct 12, 2005 at 10:52:04PM -0400, Tom Lane wrote:
> > Jeff Williams <[EMAIL PROTECTED]> writes:
> > > Thanks. Triggers was my first thought, but chapter 35 on Triggers
> > > didn't really indicate a way I could do this easily and scared me with
> > > a lot of c code.
> >
> > Yeah.  This is a documentation issue that's bothered me for awhile.
> > The problem is that we treat the PL languages as add-ons and therefore
> > the documentation of the "core" system shouldn't rely on them ... but
> > that leaves us presenting C-code triggers as the only examples in
> > chapter 35.  There is a paragraph in there suggesting you go look at
> > the PL languages first, but obviously it's not getting the job done.
>
> Chapter 35 is plpgsql.. do you mean chapter 32.4?
>
> > Anybody have a better idea?
>
> What about a See Also section ala man pages that links to trigger info
> for other languages?

-- 
Mike Diehl,
Network Monitoring Tool Devl.
SAIC at Sandia National Laboratories.
(505) 284-3137



Re: [SQL] pg, mysql comparison with "group by" clause

2005-10-13 Thread Greg Stark

Scott Marlowe <[EMAIL PROTECTED]> writes:

> Sorry, but it's worse than that.  It is quite possible that two people
> could run this query at the same time and get different data from the
> same set and the same point in time.  That shouldn't happen accidentally
> in SQL, you should know it's coming.

I'm pretty unsympathetic to the "we should make a language less powerful and
more awkward because someone might use it wrong" argument.

> > In standard SQL you have to
> > write GROUP BY ... and list every single column you need from the master
> > table. Forcing the database to do a lot of redundant comparisons and sort on
> > uselessly long keys where in fact you only really need it to sort and group 
> > by
> > the primary key.
> 
> But again, you're getting whatever row the database feels like giving
> you.  A use of a simple, stupid aggregate like an any() aggregate would
> be fine here, and wouldn't require a lot of overhead, and would meet the
> SQL spec.

Great, so I have a user table with, oh, say, 40 columns. And I want to return
all those columns plus their current account balance in a single query.

The syntax under discussion would be:

select user.*, sum(money) from user join user_money using (user_id) group by 
user_id

You would prefer:

select user_id, 
   any(username) as username, any(firstname) as firstname, 
   any(lastname) as lastname, any(address) as address,
   any(city) as city, any(street) as street, any(phone) as phone,
   any(last_update) as last_update, any(last_login) as last_login,
   any(referrer_id) as referrer_id, any(register_date) as register_date,
   ...
   sum(money) as balance,
   count(money) as num_txns
  from user join user_money using (user_id) group by user_id


Having safeties is fine, but when I have to disengage the safety for every
single column it starts to get more than a little annoying. 

Note that you cannot write the above as a subquery since there are two
aggregates. You could write it as a join against a view but don't expect to
get the same plans from Postgres for that.
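The view-based variant mentioned above would look roughly like this (a sketch following the table names in the example):

```sql
-- Aggregate the detail table once, keyed by the join column
CREATE VIEW user_balance AS
    SELECT user_id,
           sum(money)   AS balance,
           count(money) AS num_txns
      FROM user_money
     GROUP BY user_id;

-- Join the wide master table against it; no GROUP BY needed
SELECT u.*, b.balance, b.num_txns
  FROM "user" u JOIN user_balance b USING (user_id);
```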


> Actually, for things like aggregates, I've often been able to improve
> performance with sub selects in PostgreSQL.  

If your experience is like mine it's a case of two wrongs cancelling each
other out. The optimizer underestimates the efficiency of nested loops which
is another problem. Since subqueries' only eligible plan is basically a nested
loop it often turns out to be faster than the more exotic plans a join can
reach.

In an ideal world subqueries would be transformed into the equivalent join (or
some more general join structure that can cover both sets of semantics) and
then planned through the same code path. In an ideal world the user should be
guaranteed that equivalent queries would always result in the same plan
regardless of how they're written.

-- 
greg




Re: [SQL] Strange join...maybe some improvements???

2005-10-13 Thread Andy



Indexes are on all join fields.  In the example shown I have indexes
on all fields.

Yes, I vacuum the database regularly.

Andy.
 
- Original Message -
From: Thomas F. O'Connell
To: Andy
Cc: pgsql-sql@postgresql.org
Sent: Thursday, October 13, 2005 7:58 PM
Subject: Re: [SQL] Strange join...maybe some improvements???

> What indexes do you have on these tables?
>
> And have you ANALYZEd all three recently?

[snip]


Re: [SQL] pg, mysql comparison with "group by" clause

2005-10-13 Thread Scott Marlowe
On Thu, 2005-10-13 at 13:26, Greg Stark wrote:
> Scott Marlowe <[EMAIL PROTECTED]> writes:
> 
> > Sorry, but it's worse than that.  It is quite possible that two people
> > could run this query at the same time and get different data from the
> > same set and the same point in time.  That shouldn't happen accidentally
> > in SQL, you should know it's coming.
> 
> I'm pretty unsympathetic to the "we should make a language less powerful and
> more awkward because someone might use it wrong" argument.

I'm in favor of getting the right answer.  If my database can't do that,
then it's not worth the bits to download it.  But I'm funny that way.

> > > In standard SQL you have to
> > > write GROUP BY ... and list every single column you need from the master
> > > table. Forcing the database to do a lot of redundant comparisons and sort 
> > > on
> > > uselessly long keys where in fact you only really need it to sort and 
> > > group by
> > > the primary key.
> > 
> > But again, you're getting whatever row the database feels like giving
> > you.  A use of a simple, stupid aggregate like an any() aggregate would
> > be fine here, and wouldn't require a lot of overhead, and would meet the
> > SQL spec.
> 
> Great, so I have a user table with, oh, say, 40 columns. And I want to return
> all those columns plus their current account balance in a single query.
> 
> The syntax under discussion would be:
> 
> select user.*, sum(money) from user join user_money using (user_id) group by 
> user_id

> You would prefer:
> 
> select user_id, 
>any(username) as username, any(firstname) as firstname, 
>any(lastname) as lastname, any(address) as address,
>any(city) as city, any(street) as street, any(phone) as phone,
>any(last_update) as last_update, any(last_login) as last_login,
>any(referrer_id) as referrer_id, any(register_date) as register_date,
>...
>sum(money) as balance,
>count(money) as num_txns
>   from user join user_money using (user_id) group by user_id

I'd select the SINGLE entries from a child table that matched the parent
id and add the sum(money) to it.  Then there'd be no need for aggregate
functions on those fields, or inaccurate / possibly random data.

But I'm funny that way.

> Having safeties is fine, but when I have to disengage the safety for every
> single column it starts to get more than a little annoying. 
> 
> Note that you cannot write the above as a subquery since there are two
> aggregates. You could write it as a join against a view but don't expect to
> get the same plans from Postgres for that.

I'd just write it as a big join.  Again, getting the right answer is
important to me.



Re: [SQL] pg, mysql comparison with "group by" clause

2005-10-13 Thread Andrew Sullivan
On Thu, Oct 13, 2005 at 02:26:58PM -0400, Greg Stark wrote:
> Scott Marlowe <[EMAIL PROTECTED]> writes:
> > could run this query at the same time and get different data from the
> > same set and the same point in time. 
> 
> I'm pretty unsympathetic to the "we should make a language less powerful and
> more awkward because someone might use it wrong" argument.

That's not what Scott's saying.  Scott is saying that the syntax
you're talking about is _formally wrong_.  That's surely not "more
powerful", except in the sense that stepping on a land mine is more
powerful than many other ways you could shoot yourself in the foot.  

> path. In an ideal world the user should be guaranteed that
> equivalent queries would always result in the same plan regardless
> of how they're written.

And again, I say it sounds like you're actually arguing for "the
optimiser needs to get better".  Special-purpose, formally wrong
syntax is surely not better than making the optimiser get the right
syntax right every time, is it?

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
When my information changes, I alter my conclusions.  What do you do sir?
--attr. John Maynard Keynes



Re: [SQL] pg, mysql comparison with "group by" clause

2005-10-13 Thread Anthony Molinaro
Greg,
 
"
You would prefer:

select user_id, 
   any(username) as username, any(firstname) as firstname, 
   any(lastname) as lastname, any(address) as address,
   any(city) as city, any(street) as street, any(phone) as phone,
   any(last_update) as last_update, any(last_login) as last_login,
   any(referrer_id) as referrer_id, any(register_date) as
register_date,
   ...
   sum(money) as balance,
   count(money) as num_txns
  from user join user_money using (user_id) group by user_id
"

yes, that's right!

Guess what? It's been that way for years. Why change it now?

You're arguing something that works perfectly 
and has been understood for years.
Changing the syntax cuz pg doesn't optimize it the way you like is
ridiculous.

Perhaps this change would make the newbies happy, but I can't imagine
an experienced developer asking for this, let alone arguing for it.

> I'm pretty unsympathetic to the "we should make a language less
powerful 
> and more awkward because someone might use it wrong" argument.

More awkward? What *you're* suggesting is more awkward. You realize
that, right? How can syntax that has been understood and accepted for
years be more awkward?

Again, you're asking for changes that no one but a newbie would ask
for.

I'm not at all suggesting you are/aren't a newbie
(so don't take offense to this :),
all I'm saying is that for experienced developers,
we'd hope that the source code developers for pg/oracle/db2 etc.
are focusing on more important things, not rewriting things that already
work because someone doesn't wanna type out column names...

regards,
  Anthony

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Greg Stark
Sent: Thursday, October 13, 2005 2:27 PM
To: Scott Marlowe
Cc: Greg Stark; Stephan Szabo; Rick Schumeyer; pgsql-sql@postgresql.org
Subject: Re: [SQL] pg, mysql comparison with "group by" clause


Scott Marlowe <[EMAIL PROTECTED]> writes:

> Sorry, but it's worse than that.  It is quite possible that two people
> could run this query at the same time and get different data from the
> same set and the same point in time.  That shouldn't happen
accidentally
> in SQL, you should know it's coming.

I'm pretty unsympathetic to the "we should make a language less powerful
and
more awkward because someone might use it wrong" argument.

> > In standard SQL you have to
> > write GROUP BY ... and list every single column you need from the
master
> > table. Forcing the database to do a lot of redundant comparisons and
sort on
> > uselessly long keys where in fact you only really need it to sort
and group by
> > the primary key.
> 
> But again, you're getting whatever row the database feels like giving
> you.  A use of a simple, stupid aggregate like an any() aggregate
would
> be fine here, and wouldn't require a lot of overhead, and would meet
the
> SQL spec.

Great, so I have a user table with, oh, say, 40 columns. And I want to
return
all those columns plus their current account balance in a single query.

The syntax under discussion would be:

select user.*, sum(money) from user join user_money using (user_id)
group by user_id

You would prefer:

select user_id, 
   any(username) as username, any(firstname) as firstname, 
   any(lastname) as lastname, any(address) as address,
   any(city) as city, any(street) as street, any(phone) as phone,
   any(last_update) as last_update, any(last_login) as last_login,
   any(referrer_id) as referrer_id, any(register_date) as
register_date,
   ...
   sum(money) as balance,
   count(money) as num_txns
  from user join user_money using (user_id) group by user_id


Having a safeties is fine but when I have to disengage the safety for
every
single column it starts to get more than a little annoying. 

Note that you cannot write the above as a subquery since there are two
aggregates. You could write it as a join against a view but don't expect
to get the same plans from Postgres for that.
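To make the discussion concrete, here is a runnable sketch of the any()-style
workaround. SQLite stands in as a throwaway engine, min() stands in for the
hypothetical any() aggregate (it satisfies the every-column-grouped-or-aggregated
rule, at the cost of a pointless comparison per column), and the cut-down
users/user_money schema is invented for illustration:

```python
import sqlite3

# Invented miniature stand-in for the 40-column user table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (user_id INTEGER PRIMARY KEY, username TEXT, city TEXT);
    CREATE TABLE user_money (user_id INTEGER, money INTEGER);
    INSERT INTO users VALUES (1, 'greg', 'Boston');
    INSERT INTO user_money VALUES (1, 100), (1, 250);
""")

# min() plays the role of the proposed any(): every selected column is
# either grouped or wrapped in an aggregate, as SQL92 demands.
rows = conn.execute("""
    SELECT user_id,
           min(username) AS username,   -- stand-in for any(username)
           min(city)     AS city,       -- stand-in for any(city)
           sum(money)    AS balance,
           count(money)  AS num_txns
      FROM users JOIN user_money USING (user_id)
     GROUP BY user_id
""").fetchall()
print(rows)  # [(1, 'greg', 'Boston', 350, 2)]
```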


> Actually, for things like aggregates, I've often been able to improve
> performance with sub selects in PostgreSQL.  

If your experience is like mine it's a case of two wrongs cancelling
each other out. The optimizer underestimates the efficiency of nested
loops, which is another problem. Since subqueries' only eligible plan is
basically a nested loop it often turns out to be faster than the more
exotic plans a join can reach.
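The subselect formulation being described can be sketched as follows (same
invented schema, SQLite as the test engine): the aggregate moves into a
correlated scalar subquery, which is evaluated once per outer row — exactly
the nested-loop shape mentioned above — and sidesteps GROUP BY entirely:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (user_id INTEGER PRIMARY KEY, username TEXT);
    CREATE TABLE user_money (user_id INTEGER, money INTEGER);
    INSERT INTO users VALUES (1, 'greg'), (2, 'tom');
    INSERT INTO user_money VALUES (1, 100), (1, 250);
""")

# One tiny aggregate per outer row; no GROUP BY at the outer level.
# coalesce() covers users with no money rows at all.
rows = conn.execute("""
    SELECT u.user_id, u.username,
           (SELECT coalesce(sum(m.money), 0)
              FROM user_money m
             WHERE m.user_id = u.user_id) AS balance
      FROM users u
     ORDER BY u.user_id
""").fetchall()
print(rows)  # [(1, 'greg', 350), (2, 'tom', 0)]
```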

In an ideal world subqueries would be transformed into the equivalent
join (or some more general join structure that can cover both sets of
semantics) and then planned through the same code path. In an ideal
world the user should be guaranteed that equivalent queries would always
result in the same plan regardless of how they're written.

-- 
greg


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq

-

Re: [SQL] pg, mysql comparison with "group by" clause

2005-10-13 Thread Tom Lane
> In standard SQL you have to
> write GROUP BY ... and list every single column you need from the master
> table.

This thread seems to have gone off on a tangent that depends on the
assumption that the above is a correct statement.  It's not.  It *was*
true, in SQL92, but SQL99 lets you omit unnecessary GROUP BY columns.

The gripe against mysql, I think, is that they don't enforce the
conditions that guarantee the query will give a unique result.

The gripe against postgres is that we haven't implemented the SQL99
semantics yet.
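In practice the SQL99 rule means grouping by a table's primary key is enough:
the other columns of that table are functionally dependent on the key, so they
may appear un-aggregated in the select list. (PostgreSQL did implement this in
a later release, 9.1.) A sketch on an invented schema — SQLite is used as the
engine here and happens to accept the form, though it is laxer than SQL99 and
would also accept bare columns with no key at all:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (user_id INTEGER PRIMARY KEY, username TEXT, city TEXT);
    CREATE TABLE user_money (user_id INTEGER, money INTEGER);
    INSERT INTO users VALUES (1, 'greg', 'Boston');
    INSERT INTO user_money VALUES (1, 100), (1, 250);
""")

# user_id is the primary key, so username and city are functionally
# dependent on it; SQL99 allows them in the select list un-aggregated.
rows = conn.execute("""
    SELECT user_id, username, city, sum(money) AS balance
      FROM users JOIN user_money USING (user_id)
     GROUP BY user_id
""").fetchall()
print(rows)  # [(1, 'greg', 'Boston', 350)]
```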

regards, tom lane



Re: [SQL] pg, mysql comparison with "group by" clause

2005-10-13 Thread Anthony Molinaro
Well...

An additional gripe is that this isn't a good feature (standard or not).
Oracle doesn't do it. Db2 doesn't do it. I strongly suggest you guys
don't 
do it.

If you wanna do the optimizations under the covers, cool, but I can't
imagine how this would be useful other than for saving some typing...

Seems more trouble than it's worth and changes a concept that's tried
and true for many years.

Regards,
  Anthony

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Tom Lane
Sent: Thursday, October 13, 2005 2:50 PM
To: Scott Marlowe
Cc: Greg Stark; Stephan Szabo; Rick Schumeyer; pgsql-sql@postgresql.org
Subject: Re: [SQL] pg, mysql comparison with "group by" clause 

> In standard SQL you have to
> write GROUP BY ... and list every single column you need from the master
> table.

This thread seems to have gone off on a tangent that depends on the
assumption that the above is a correct statement.  It's not.  It *was*
true, in SQL92, but SQL99 lets you omit unnecessary GROUP BY columns.

The gripe against mysql, I think, is that they don't enforce the
conditions that guarantee the query will give a unique result.

The gripe against postgres is that we haven't implemented the SQL99
semantics yet.

regards, tom lane



Re: [SQL] pg, mysql comparison with "group by" clause

2005-10-13 Thread Tom Lane
"Anthony Molinaro" <[EMAIL PROTECTED]> writes:
> An additional gripe is that this isn't a good feature (standard or not).
> Oracle doesn't do it. Db2 doesn't do it.

You sure about that?  It's hard to believe that the SQL committee would
put a feature into the spec that neither Oracle nor IBM intended to
implement.  Those two pretty much control the committee after all ...

regards, tom lane



Re: [SQL] pg, mysql comparison with "group by" clause

2005-10-13 Thread Anthony Molinaro
Tom,

  I'm sure there are all sorts of cool optimizations under the covers
  that perhaps do this short-circuiting,
  but as far as the SQL goes, yeah, I'm sure.

Here's an example on oracle 10g release 2 (copy paste from my screen so
you can see the error messages and all):



SQL> create table foo(id number primary key, name varchar2(10));

Table created.

SQL> insert into foo values (1,'sam');

1 row created.

SQL> insert into foo values (2,'sam');

1 row created.

SQL> commit;

Commit complete.

SQL> select id, count(*) from foo;
select id, count(*) from foo
   *
ERROR at line 1:
ORA-00937: not a single-group group function


SQL> select name, count(*) from foo;
select name, count(*) from foo
   *
ERROR at line 1:
ORA-00937: not a single-group group function


SQL> select name, count(*) from foo group by id;
select name, count(*) from foo group by id
   *
ERROR at line 1:
ORA-00979: not a GROUP BY expression


SQL> select name, count(*) from foo group by name;

NAME         COUNT(*)
---------- ----------
sam                 2

SQL> select name, count(*) from foo group by name,id;

NAME         COUNT(*)
---------- ----------
sam                 1
sam                 1

SQL>
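For contrast with the Oracle session above, here is the same experiment on a
permissive engine. SQLite (used as a convenient stand-in; MySQL behaves
similarly) accepts the queries Oracle rejects and silently picks an arbitrary
row for the bare column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE foo (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO foo VALUES (1, 'sam'), (2, 'sam');
""")

# Oracle: ORA-00979.  SQLite: one row per id, name taken from that group.
by_id = conn.execute("SELECT name, count(*) FROM foo GROUP BY id").fetchall()
print(by_id)      # [('sam', 1), ('sam', 1)]

# Oracle: ORA-00937.  SQLite: a single row, with name chosen arbitrarily.
no_group = conn.execute("SELECT name, count(*) FROM foo").fetchall()
print(no_group)   # [('sam', 2)]
```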


I can't imagine Oracle making a change such as the one we're discussing
at this point. Perhaps in 8.1.6, ~7 years ago, when *tons* of sql
changes were implemented (analytics, CASE, ordering in inline views,
CUBE, ROLLUP), 
but not now...

then again, oracle is 100% completely driven by money, so, if enough
customers ask for it, it will happen eventually. I just can't imagine
anyone asking for this feature when we're paying 40k per cpu just to
run oracle; there are much more important things for them to be working
on...


Regards,
  Anthony

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Thursday, October 13, 2005 3:17 PM
To: Anthony Molinaro
Cc: Scott Marlowe; Greg Stark; Stephan Szabo; Rick Schumeyer;
pgsql-sql@postgresql.org
Subject: Re: [SQL] pg, mysql comparison with "group by" clause 

"Anthony Molinaro" <[EMAIL PROTECTED]> writes:
> An additional gripe is that this isn't a good feature (standard or not).
> Oracle doesn't do it. Db2 doesn't do it.

You sure about that?  It's hard to believe that the SQL committee would
put a feature into the spec that neither Oracle nor IBM intended to
implement.  Those two pretty much control the committee after all ...

regards, tom lane



[SQL] copy tables

2005-10-13 Thread Shavonne Marietta Wijesinghe



Hello

I have 2 postgreSQL databases called "DATA1" and "DATA2" with several
tables inside them (table A, B, C). I modify the details in table "A"
in database "DATA1". How can I copy table "A" from database "DATA1" and
paste it in database "DATA2" using the same table name?

Thank you.