Re: [GENERAL] Problem with oids for table names getting out of sync?

2007-04-05 Thread Omar Eljumaily
Alvaro and Tom, thanks so much.  I was getting worried that I was going 
to have to ask my customers to dump and restore periodically, ugh.  I 
think I need to learn a bit more about postgresql internals to help me 
with my project.  Not thinking about selecting for oids is kind of 
embarrassing.


Thanks,

Omar

Alvaro Herrera wrote:

omar wrote:
  
SELECT relfilenode, relname FROM pg_class WHERE relname !~ '^(pg_|sql_)' 
AND relkind = 'r'
   


   Oid o = PQftable(_res, i);
 


Um ... are you laboring under some delusion about relfilenode being the
same as relation OID?

 
  
Apparently I am.  The libpq docs claim that "You can query the system table 
pg_class to determine exactly which table is referenced" for PQftable.  
I query pg_class and the only column that looks remotely like a unique 
OID is relfilenode.



select oid, relname from pg_class where ...

relfilenode is just the file name given to the table, which is the same
as the OID when the table is just created, but changes after certain
operations (CLUSTER, TRUNCATE, REINDEX for indexes, maybe others)
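
For illustration, a minimal sketch of the corrected lookup, assuming the same 
filter as the query quoted above:

SELECT oid, relname
FROM pg_class
WHERE relname !~ '^(pg_|sql_)'
  AND relkind = 'r';

-- pg_class.oid is the value PQftable() reports for a result column, so it
-- is the right key for a client-side oid-to-table-name map.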

  





[GENERAL] Problem with oids for table names getting out of sync?

2007-04-04 Thread Omar Eljumaily
This is with 8.1.8, but I don't see any mention of any bug fixes that 
cover this.


I've run into this sort of obscure problem.  I'm using libpq with a 
front end database api where I need to track column names and how 
they're returned in libpq queries.  What's happening is that I start out 
with a set of table names when I open my database with a query:


SELECT relfilenode, relname FROM pg_class WHERE relname !~ '^(pg_|sql_)' 
AND relkind = 'r'


But these don't agree with the oids when I subsequently fetch my rows 
and use the following:


   Oid o = PQftable(_res, i);
   std::string s1 = _con->_oidTableNames[o];

_con->_oidTableNames is my own array that I've created with the above 
select query.  The reason I need to get table names after queries is 
that queries often return multiple instances of a single column name, 
but with different table names.  I need to be able to decipher the 
table.column combination when people refer to a column in that way.


My problem is that the Oids returned by PQftable sometimes do not show up 
in the results of that select query.  For instance:


Oid returned from PQftable = 654989
select relname FROM pg_class WHERE relfilenode = 654989
returns empty.

The way I can solve this problem is by dumping and restoring the 
database.  That's the only way I can fix it.


I don't know how to reproduce the problem.  Has anybody heard of this?  
I'm going to update to the latest version this weekend.  Maybe that 
will fix it. 

Is there any other way to get table names from libpq queries without 
using the oid method I use above?


One other thing I just remembered.  The next time it happens I'm going 
to try a postgresql java driver with similar queries to see if the 
table.column combinations in ResultSet.getString() get messed up as 
well.  That will tell me something.  I suppose I can look and see how 
they get tablename information.


Thanks






Re: [GENERAL] Problem with oids for table names getting out of sync?

2007-04-04 Thread Omar Eljumaily
One other thing about this issue.  A workaround is that I can parse the sql 
statements myself to come up with the table names.  Yes, it's a bit of 
work, but not too bad.  My question is whether or not column/table 
combinations are guaranteed to be returned in the order that the table names 
appear in a query.  It seems like they're always returned this way.  For 
instance:


select * from projects join employees on projects.manager = employees.id;

Would projects.id always appear in the return list before employees.id?
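
For illustration, a minimal sketch of one way to sidestep the ordering question 
entirely, assuming the projects/employees schema above: alias the ambiguous 
columns explicitly so the client can match on names rather than on position:

SELECT projects.id AS projects_id, employees.id AS employees_id
FROM projects
JOIN employees ON projects.manager = employees.id;

-- PQfname() then returns distinct column names, so no oid lookup or
-- positional assumption is needed for these columns.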



Omar Eljumaily wrote:
This is with 8.1.8, but I don't see any mention of any bug fixes that 
cover this.


I've run into this sort of obscure problem.  I'm using libpq with a 
front end database api where I need to track column names and how 
they're returned in libpq queries.  What's happening is that I start 
out with a set of table names when I open my database with a query:


SELECT relfilenode, relname FROM pg_class WHERE relname !~ 
'^(pg_|sql_)' AND relkind = 'r'


But these don't agree with the oids when I subsequently fetch my rows 
and use the following:


   Oid o = PQftable(_res, i);
   std::string s1 = _con->_oidTableNames[o];

_con->_oidTableNames is my own array that I've created with the above 
select query.  The reason I need to get table names after queries is 
that queries often return multiple instances of a single column name, 
but with different table names.  I need to be able to decipher the 
table.column combination when people refer to a column in that way.


My problem is that the Oids returned by PQftable sometimes do not show up 
in the results of that select query.  For instance:


Oid returned from PQftable = 654989
select relname FROM pg_class WHERE relfilenode = 654989
returns empty.

The way I can solve this problem is by dumping and restoring the 
database.  That's the only way I can fix it.


I don't know how to reproduce the problem.  Has anybody heard of 
this?  I'm going to update to the latest version this weekend.  Maybe 
that will fix it.
Is there any other way to get table names from libpq queries without 
using the oid method I use above?


One other thing I just remembered.  The next time it happens I'm going 
to try a postgresql java driver with similar queries to see if the 
table.column combinations in ResultSet.getString() get messed up as 
well.  That will tell me something.  I suppose I can look and see how 
they get tablename information.


Thanks









Re: [GENERAL] Setting week starting day

2007-03-09 Thread Omar Eljumaily
I think you can coax the date_trunc function to give you a proper start 
day.  I think it's more than adding an integer to your date, though.  
You also have to do some mod work after the function returns, I think.  
I agree that the point isn't that you can't do it with some effort, 
however.  It's mainly that it's a bit linguistically unintuitive.  It 
would be nice to have a start date as an argument to the function.


Having said that, my own personal use of it will definitely be inside 
another wrapper function because I need database platform 
independence, so I need to abstract the function to look the same on all 
of my platforms.
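
For example, a minimal sketch of such a wrapper, assuming the add-then-subtract 
shift used later in this thread (an offset of 5 makes the week start on 
Wednesday):

CREATE OR REPLACE FUNCTION week_start(d date, offset_days integer)
RETURNS date AS $$
    -- shift so the desired start day lands on Monday, truncate, shift back
    SELECT (date_trunc('week', $1 + $2)::date - $2);
$$ LANGUAGE sql IMMUTABLE;

-- week_start('2007-03-07', 5) returns 2007-03-07, a Wednesday.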



Jorge Godoy wrote:

Bruno Wolff III [EMAIL PROTECTED] writes:

  

On Thu, Mar 08, 2007 at 20:32:22 -0300,
  Jorge Godoy [EMAIL PROTECTED] wrote:


Alvaro Herrera [EMAIL PROTECTED] writes:

As I said, it is easy with a function. :-)  I was just curious to see if we
had something like Oracle's NEXT_DAY function or something like what I
described (SET BOW=4; -- makes Thursday the first day of week):
  

If you are actually using date you can get the effect you want by adding
a constant integer to the date in the date_trunc function. That seems
pretty easy.




I couldn't see where to specify that integer.  Or, if it is to be added to the
date, something that calculates it automatically.

http://www.postgresql.org/docs/8.2/interactive/functions-datetime.html#FUNCTIONS-DATETIME-TRUNC

Adding an integer I'd still have to write the verifications (such as the one I
mention below for Oracle's NEXT_DATE()) to get the desired result.


Just to repeat my question:

(I don't want to write a function, I can do that pretty easily...  And I was
asking if there existed some feature on the database that...  It's just a
curiosity)

  Given a date X it would return me the first day of the week so that I can
  make this first day an arbitrary day, e.g. Friday or Wednesday.


Oracle's NEXT_DAY() gets closer to that, but would still require a few
operations (checking if the returned date is before the given date and, if it is
after, subtracting one week from this returned value, kind of a
PREVIOUS_DATE()...).


With a function I could make it easily, but then I'd have to wrap all
calculations with that...  It was just something to make life easier.  From
the answers I'm getting I see that there's no way to do that without a
function and that I'm not missing any feature on PG with regards to that ;-)

  





Re: [GENERAL] Setting week starting day

2007-03-09 Thread Omar Eljumaily
Ted, my reason for asking the question that I believe precipitated this 
thread was that I wanted a single sql statement that aggregated time 
data by week.  Yes, I could do the aggregation subsequently in my own 
client side code, but it's easier and less error prone to have it done 
by the server.



Ted Byers wrote:

It is not hard to calculate, as you can see... but it would be nice if
date_trunc('week', date) could do that directly.  Even if it became
date_trunc('week', date, 4) or date_trunc('week', date, 'Wednesday') it
would be nice...  :-)  And that is what I was trying to ask ;-)


Use date_trunc('week', current_day + 1) and date_trunc('dow', current_day + 1)
to have a one day offset from the standard first day of the week.


I believe there's more than that...  Probably the +1 should be outside the
date_trunc, anyway.  It might help, but I still see the need to do
calculations...  Especially if it was Tuesday today...


Out of curiosity, why does the database need to know this, or to be 
able to calculate it?  There are lots of things that would be useful 
to me, if the RDBMS I'm using at the time supported them (particularly 
certain statistical functions - ANOVA, MANOVA, nonlinear least squares 
regression, time series analysis, &c.), but given that I can readily 
obtain these from other software I use, and can if necessary put the 
requisite code in a middleware component, I would rather have the 
PostgreSQL developer's focus on issues central to having a good DB, 
such as ANSI standard compliance for SQL, or robust pooling, &c. and 
just leave me a mechanism for calling functions that are external to 
the database for the extra stuff I need.  I would prefer a suite of 
applications that each does one thing well than a single application 
that does a mediocre job on everything it allegedly supports. What 
would be 'nice' and what is practical are often very different things. 
I know what you're after is simple, but remember the good folk 
responsible for PostgreSQL have only finite time available to work on 
it, and thus, when they're making choices about priorities, I'd rather 
they ignore even simple ancillary stuff and focus on what really matters.


I just recently finished a project in which the data processing needed 
information similar to what you're after, but instead of doing it in 
the database, we opted to do it in the Perl script I wrote that fed 
data to the database.  In fact, it wasn't so much the day of the week 
that mattered to the processing algorithm but the resulting dates for 
the immediately preceding business day and the immediately following 
business day.  It was those dates we fed to the database rather than 
the weekday.  There are several Perl packages (see CPAN) supporting 
this kind of calculation.  These are generally outstanding (and would 
probably be useful if you want to create your own stored function 
implemented in Perl), but you may have to customize them by providing 
additional configuration information such as timezone and statutory 
and religious holidays if you need to determine business days in 
addition to just the day of the week.  The day of the week can be 
obtained in Perl with a single function call!


I just took a quick break to read about the date functions available 
within PostgreSQL, and while apparently nice, you have much greater 
flexibility, and many more functions, in these Perl packages I 
mentioned.  If you just want a function call, I'd suggest you create a 
function that just dispatches a call to the Perl function that best 
meets your needs.  In a sense, you are not really rolling your own.  
You're just dispatching the call to a function in a Perl package.


Cheers

Ted







Re: [GENERAL] Setting week starting day

2007-03-09 Thread Omar Eljumaily
But you're always returning Monday, right?  Your grouping will be 
correct, but to get the actual truncation date, you have to subtract back.


select (date_trunc('week', '2007-03-07'::date + 5)::date-5);
select (date_trunc('week', '2007-03-06'::date + 5)::date-5);
select (date_trunc('week', '2007-03-08'::date + 5)::date-5);
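
Applied to an aggregate, a minimal sketch, assuming the time_data table from 
the tabulation thread further down and a Wednesday week start:

SELECT (date_trunc('week', _date + 5)::date - 5) AS week_start,
       sum(amount)
FROM time_data
GROUP BY 1
ORDER BY 1;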


Bruno Wolff III wrote:

On Fri, Mar 09, 2007 at 23:07:26 -0300,
  Jorge Godoy [EMAIL PROTECTED] wrote:
  

But how to get the date if the first day of the week is a Wednesday?  This
example is like the ones I've sent with separate queries that needed being
combined -- in a function, probably -- to get the desired result. 



If you want to group on weeks that start on Wednesdays add 5.

postgres=# select date_trunc('week', '2007-03-07'::date + 5);
       date_trunc
------------------------
 2007-03-12 00:00:00-05
(1 row)

postgres=# select date_trunc('week', '2007-03-06'::date + 5);
       date_trunc
------------------------
 2007-03-05 00:00:00-06
(1 row)

postgres=# select date_trunc('week', '2007-03-08'::date + 5);
       date_trunc
------------------------
 2007-03-12 00:00:00-05
(1 row)

  





[GENERAL] Tabulate data incrementally

2007-03-08 Thread Omar Eljumaily
I want to tabulate time data on a weekly basis, but my data is entered 
on a daily basis.


create table time_data
(
   employee varchar(10),
   _date date,
   job varchar(10),
   amount numeric
);

So I want to tabulate with a single sql command.  Is that possible?

If I had a separate week_ends table:

create table week_ends
(
   end_date date
);

I could do something like:

select *, (select sum(amount) from time_data where _date > end_date - 7 
and _date <= end_date) from week_ends;


But the week_end table would be a pain to manage for a number of 
reasons.  Is it possible to do this without the week_end table?


Thanks.




Re: [GENERAL] Tabulate data incrementally

2007-03-08 Thread Omar Eljumaily
Thanks Tom and Richard for the tip on date_trunc.  Is it possible in an 
sql select statement to create an iterator? 


For instance

select myItFunc(1,10);

would give 1,2,3,4,5,6,7,8,9,10

I'm a bit embarrassed that I don't know how to do this.  My 
understanding of sql functions is that not being object oriented, they 
don't store state.


The reason I'm asking is that if I wanted to use date_trunc, I think 
I would need some sort of iterator to get multiple rows in one statement.


What I'm looking for is:

Employee  Week  Amount
John      1/1   100
Mary      1/1   0
Edward    1/2   100
etc

I'd also like to return zero or null values when the data doesn't 
exist.  Wouldn't I need an iterator to do that? 
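
For illustration, a minimal sketch of that kind of iterator using 
generate_series(), assuming a hypothetical ten-week range starting 2007-01-01 
and the time_data table from the earlier message:

SELECT weeks.week_start, coalesce(sum(t.amount), 0) AS amount
FROM (SELECT ('2007-01-01'::date + i * 7) AS week_start
      FROM generate_series(0, 9) AS g(i)) AS weeks
LEFT JOIN time_data t
       ON t._date >= weeks.week_start
      AND t._date <  weeks.week_start + 7
GROUP BY weeks.week_start
ORDER BY weeks.week_start;

-- weeks with no matching rows still come back, with an amount of 0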


Thanks,

Omar


Tom Lane wrote:

Omar Eljumaily [EMAIL PROTECTED] writes:
  
I want to tabulate time data on a weekly basis, but my data is entered 
on a daily basis.



Something involving GROUP BY date_trunc('week', _date) might work for
you, if your definition of week boundaries matches date_trunc's.
If not, you could probably make a custom function that breaks at the
boundaries you want.

regards, tom lane

  





Re: [GENERAL] Tabulate data incrementally

2007-03-08 Thread Omar Eljumaily
Thanks Alvaro.  That's good to know.  Actually I was spacing on the need 
for this.  The date_trunc function with group by actually works for me.


select sum(amount), date_trunc('week', period_end) as dt from time_data 
group by dt;



Alvaro Herrera wrote:

Omar Eljumaily wrote:
  
Thanks Tom and Richard for the tip on date_trunc.  Is it possible in an 
sql select statement to create an iterator? 



Yes, use the generate_series() function.
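
For example, a minimal sketch:

select i from generate_series(1, 10) as s(i);

-- returns the ten rows 1 through 10, much like the myItFunc(1,10) idea
-- in the question above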

  





Re: [GENERAL] group by and aggregate functions on regular expressions

2007-03-08 Thread Omar Eljumaily
select count(*), address from prem_info where address ~* 'magil' or 
address ~* 'whitewater' etc. group by address


would that work?


Rhys Stewart wrote:

Hi all,
I have a table with an address column.  I wanted to count the number of
rows with a given regex match, so I ended up with the following very
verbose query:


select
address ~* 'magil' as Magil ,
address ~* 'whitewater' as whitewater,
(address ~* 'inswood' or address ~* 'innswood') as innswood,
(address ~* 'eltham' AND address ~* 'view') as eltham_view,
(address ~* 'eltham' AND address ~* 'acre') as eltham_acres,
(address ~* 'eltham' AND address ~* 'vista') as eltham_vista,
count(prem)

from prem_info
where
address ~* 'magil'
or (address ~* 'eltham' AND address ~* 'view')
or (address ~* 'eltham' AND address ~* 'acre')
or (address ~* 'eltham' AND address ~* 'vista')
or address ~* 'whitewater'
or (address ~* 'inswood' or address ~* 'innswood')
and parish = 'SpanishTown'
group by  Magil, whitewater, innswood, eltham_view, 
eltham_acres,eltham_vista


and i got this:

magil  whitewater  innswood  eltham_view  eltham_acres  eltham_vista  count
f      t           f         f            f             f               650
t      f           f         f            f             f               361
f      f           f         f            f             t               181
f      f           f         f            t             f               462
f      f           f         t            f             f               542
f      f           t         f            f             f               686


useful but not in the format that would be nice.  So the question:
is there any way to rewrite this query or are there any existing
functions that would give me a tabular output like so:

community    count
magil          361
whitewater     650
inswood        686
eltham_view    542

etc..
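
For illustration, a minimal sketch of that shape of output, using a CASE 
expression over the same patterns and the prem_info table from the query above:

select case
         when address ~* 'magil' then 'magil'
         when address ~* 'whitewater' then 'whitewater'
         when address ~* 'inswood' or address ~* 'innswood' then 'innswood'
         when address ~* 'eltham' and address ~* 'view' then 'eltham_view'
         when address ~* 'eltham' and address ~* 'acre' then 'eltham_acres'
         when address ~* 'eltham' and address ~* 'vista' then 'eltham_vista'
       end as community,
       count(prem)
from prem_info
where parish = 'SpanishTown'
group by 1;

-- addresses matching none of the patterns come out with a NULL community;
-- repeat the OR conditions in the WHERE clause to drop them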






Re: [GENERAL] OT: Canadian Tax Database

2007-03-08 Thread Omar Eljumaily
Since this thread has already degraded, I'll offer my two cents.  The 
biggest screw ups in US history have been instigated by groups of 
privileged White men.  I know my name may sound otherwise, but I'm a 
White American male, so I'm not pointing the finger at another group.  
Let's see, Enron, Arthur Anderson, the entire Bush Administration and 
its fiascos in Iraq, Katrina, foreign policy in general, etc.  I've 
worked for large, major IT providers and I can tell you that 
incompetency shows no racial or ethnic boundaries.  It tends to exist in 
large, politically connected, no bid contractors, not low bid 
contractors or ones who benefited from affirmative action.



Ted Byers wrote:

 Richard Huxton wrote:
 http://www.thestar.com/News/article/189175

For instance, in some cases the field for the social insurance number
was instead filled in with a birth date.

Unbelievable. Sixty years of electronic computing, fifty years use in
business and the professionals who built the tax system for a wealthy
democratic country didn't use data types.

This is Unbelievable? This is commonplace.

And due at least in part to government (and other institutions operated by
damned fools) opting for the least expensive provider rather than paying for
someone who actually knows what they're doing.  Just as buying cheap junk
always comes back to get you, hiring incompetent fools that don't know their
ass from a hole in the ground will come back to get you too.


Come on, they don't hire incompetent fools. They hire the people


You CAN'T be serious!  Have you ever dealt with them or with the 
consequences of their incompetence?



they need to fill their quota regardless of how well trained
and experienced they are. I am not saying that non white males
are in any way less competent than white males, but by removing
them from the pool does not make things better. The biggest
problem with quotas is not hiring less qualified staff, it is
that less qualified staff know why they were hired and know that
they are very unlikely to be fired, so they have little incentive
to work hard or attempt to do their best, they can always fail
upwards.

What does this have to do with anything?  No one here, except you, has 
said anything about the profile of the people involved WRT race, 
gender, religion, &c.  Nor has anyone said anything about 
qualifications.  The only thing that has been said is that, based on 
what is seen in the work, the people responsible for that work must 
be incompetent.  It is an inference based on what is seen in what has 
been done and has nothing to do with any of the prohibited grounds for 
discrimination used as excuses for affirmative action.  And yes, I 
have seen cases where less qualified, even unqualified, people have 
been hired as a result of these affirmative action initiatives (and I 
have been told, by HR personnel in government, that certain favoured 
groups are deemed to be superior to white men, even if the favoured 
party has no education nor experience and the latter have earned 
doctorates and decades of experience), but no one has said anything 
about such people being employed on the projects to which I referred.  
But this is an aspect of our present society that is bound to 
degenerate into a flame war, launched by the politically correct, so 
we ought to say little, or even leave it alone.  Those in power tend 
to be vicious, especially when there are no effective checks on their 
conduct and no consequences for what they do.


Cheers

Ted







Re: [GENERAL] OT: Canadian Tax Database

2007-03-08 Thread Omar Eljumaily
Thank God the DOI is inefficient.  If they were good at what they do, 
which is generally malicious, we'd all be in trouble.



Your story reminded me of a dear friend who works for the department of
the interior here in the US who routinely was dressed down for writing
functional, reliable software quickly and with a minimum of bugs and
fuss.  He made all the other people in his office feel bad.

sigh.

  





Re: [GENERAL] Weird behaviour on a join with multiple keys

2007-03-08 Thread Omar Eljumaily

What happens if you do an outer join instead of an inner join?
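
For example, a minimal sketch of that check, keeping the column names from the 
query quoted below (which mixes name/vorname in the join condition with 
lastname/firstname in the select list):

SELECT table1.lastname, table1.firstname
FROM table1
LEFT OUTER JOIN table2
  ON table2.name = table1.name
 AND table2.vorname = table1.vorname
WHERE table2.name IS NULL;

-- lists the table1 rows that the inner join silently drops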

Charlie Clark wrote:

Hi,

I'm getting unexpected results on a query which involves joining two 
tables on two common variables (firstname and lastname).


This is the basic query:

SELECT table1.lastname, table1.firstname
FROM  table1
INNER JOIN table2 ON
(table2.name = table1.name
AND
table2.vorname = table1.vorname)

This is returning many rows fewer than I expect and is ignoring a lot 
where table1.firstname = table2.firstname AND table1.lastname = 
table2.lastname. In fact when I extend the query by a WHERE clause 
such as WHERE table1.lastname IS LIKE 'Sch%' I get results which are 
not returned by the original query.


I'm not very au fait with the inner workings of PostgreSQL but EXPLAIN 
does not seem, to me at least, to provide an explanation for the 
missing results.


Merge Join  (cost=1987.97..2121.24 rows=34 width=22)
  Merge Cond: (("outer".?column3? = "inner".?column3?) AND 
  ("outer".?column4? = "inner".?column4?))
  ->  Sort  (cost=364.97..375.99 rows=4409 width=22)
        Sort Key: (table1.lastname)::text, (table1.firstname)::text
        ->  Seq Scan on table1  (cost=0.00..98.09 rows=4409 width=22)
  ->  Sort  (cost=1623.00..1667.00 rows=17599 width=21)
        Sort Key: (table2.lastname)::text, (table2.firstname)::text
        ->  Seq Scan on table2  (cost=0.00..381.99 rows=17599 width=21)

Am I missing something big and obvious here?

Charlie
--
Charlie Clark
Helmholtzstr. 20
Düsseldorf
D- 40215
Tel: +49-211-938-5360
GSM: +49-178-782-6226










Re: [GENERAL] real multi-master replication?

2007-03-05 Thread Omar Eljumaily




I don't believe there is, or can be, any asynchronous multi-master
replication system for any database that will work with all possible
general purpose constructs.



I believe it's possible in theory if you have system-wide transaction 
locking, i.e. synchronous replication.  However, if you have to have 
system-wide transaction locking, what's the point?  If server X has to 
wait for a transaction to finish on server Y, why not make them 
the same server?


It would be nice to have some sort of paradigm for synchronizing 
databases that go on and offline with each other and even have diverse 
data structures.  I know there's a whole science of transaction 
processing which is simple in concept, but very often difficult to 
implement in practice.  It's a matter of matching transactions/records 
up at an atomic level and replicating them across different servers.


The sort of holy grail for me, and I believe a lot of other people, is to:

1. Have a server that can easily and transparently replicate itself 
in different environments for speed, security, and fault tolerance purposes.
2. Allow people to go offline with their datasets, for instance 
on a laptop on an airplane, and then go back online with relative ease.
3. Have a well defined and simple system for identifying and dealing 
with conflicts that arise from multiple copies of the same dataset.


Just ideas that I have on this topic.  I wonder if anybody's doing any 
work on the subject. 





Re: [GENERAL] M:M table conditional delete for parents

2007-03-05 Thread Omar Eljumaily
I think a foreign key constraint is basically a trigger that throws an 
exception (RAISE statement) when the constraint is violated.


Something like a trigger function:

    if relate-type refers to table1 then
        if the key is not in table1 then
            raise
    else if relate-type refers to table2 then
        if the key is not in table2 then
            raise
    end if

I think that should work, but I've never tried it.
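
For illustration, a minimal sketch of the delete-side check described in the 
quoted question below, assuming a parent table named oem with an integer key 
id and an M:M table named relationships with the key1/table1/key2/table2 
columns shown in the example data (both names are placeholders):

CREATE OR REPLACE FUNCTION oem_delete_check() RETURNS trigger AS $$
BEGIN
    -- refuse the delete while the M:M table still references this parent row
    IF EXISTS (SELECT 1 FROM relationships
                WHERE (key1 = OLD.id AND table1 = 'oem')
                   OR (key2 = OLD.id AND table2 = 'oem')) THEN
        RAISE EXCEPTION 'oem % is still referenced in relationships', OLD.id;
    END IF;
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER oem_delete_check
    BEFORE DELETE ON oem
    FOR EACH ROW EXECUTE PROCEDURE oem_delete_check();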


[EMAIL PROTECTED] wrote:


Postgresql 8.1.4 on Redhat 9

I have a table which stores M:M relationships. I can't put foreign 
keys to the parents of this table because the relationships being 
stored go to several tables. This was done so that only two fields 
have to be searched in order for all relationships to be found for an 
item. For an oem number there might be 50 to 100 relationships and 40 
different tables having to do with materials, locations, revisions, 
specifications, customer, etc. that might be referenced.


Is there some way I can make a mock foreign key constraint on the 
parents so the parent would search the M:M table for a matching value 
in key1 if the relate-key is 22, 23, 25 or 100 before it allows the 
row to be deleted?


relate-key  relate-type       key1  table1  key2  table2
22          product-material  23    oem     545   material
22          product-material  23    oem     546   material
23          product-engine    23    oem     15    engine
25          product-stage     23    oem     3     stage
100         product-revision  23    oem     2270  specifications



*** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** 
*** *** ***

Margaret Gillon, IS Dept., Chromalloy Los Angeles





[GENERAL] Why does group by need to match select fields?

2007-02-28 Thread Omar Eljumaily
Sorry if this isn't exactly postgresql specific.  I periodically run 
into this problem, and I'm running into it now.  I'm wondering if 
there's something about group by that I don't understand.  As an 
example, what I'd want to do is return the id value of the check with 
the highest amount for each payee.  It seems like there's no 
problem with ambiguity in logic, but postgresql + other sql servers balk 
at it.  The group by fields need to explicitly match the select fields 
with the exception of the aggregate function(s?).


create table checks
(
   id serial,
   payee text,
   amount double precision
);

select max(amount), payee, id from checks group by payee;

Why won't the above work?  Is there another way to get the id for the 
record with the highest amount for each payee?


Thanks.




Re: [GENERAL] Why does group by need to match select fields?

2007-02-28 Thread Omar Eljumaily
OK, I see what's going on.  I can have more than one max(amount) with 
the same amount and payee.  Thanks so much.  Like I said, it's sort of 
dogged me off and on many times.


Thanks.


Bill Moran wrote:

Omar Eljumaily [EMAIL PROTECTED] wrote:
  
Sorry if this isn't exactly postgresql specific.  I periodically run 
into this problem, and I'm running into it now.  I'm wondering if 
there's something about group by that I don't understand.  As an 
example, what I'd want to do is return the id value of the check with 
the highest amount for each payee.  It seems like there's no 
problem with ambiguity in logic, but postgresql + other sql servers balk 
at it.  The group by fields need to explicitly match the select fields 
with the exception of the aggregate function(s?).


create table checks
(
    id serial,
    payee text,
    amount double precision
);

select max(amount), payee, id from checks group by payee;

Why won't the above work?  Is there another way to get the id for the 
record with the highest amount for each payee?



Because it's ambiguous.  If you're grabbing max() for amount, which
id tuple do you want?

Perhaps the way you're storing your data, those answers aren't ambiguous,
but the database doesn't know that.  Take this query as an example:

select max(amount), max(checknumber), payee from checks group by payee;

In that case, the highest checknumber and the highest check amount
probably won't come from the same tuple.  If you were to throw in
there:

select max(amount), max(checknumber), payee, id from checks group by payee;

Which id does it give you?  The one that matches max(amount) or the one
that matches max(checknumber)?
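
For what it's worth, a minimal sketch of one way to get the id of the 
highest-amount check per payee without that ambiguity, using PostgreSQL's 
DISTINCT ON:

SELECT DISTINCT ON (payee) payee, id, amount
FROM checks
ORDER BY payee, amount DESC;

-- for each payee, keeps the first row in the ORDER BY, i.e. the largest amount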

  


