Re: [SQL] Am I crazy or is this SQL not possible

2006-06-01 Thread Aaron Bono
select my_sub.max_date, broadcast_history.status
from (
    SELECT MAX(date_sent) AS max_date, broadcast_id
    FROM broadcast_history
    GROUP BY broadcast_id
) my_sub
inner join broadcast_history on (broadcast_history.broadcast_id = my_sub.broadcast_id
    and broadcast_history.date_sent = my_sub.max_date);

This should work if the combination of broadcast_id and date_sent is unique.  If not, you will need to decide which record to pick in case of a tie.
On 6/1/06, Collin Peters <[EMAIL PROTECTED]> wrote:
I am having some serious mental block here.  Here is the abstract version of my problem.  I have a table like this:

unique_id (PK)   broadcast_id   date_sent    status
1                1              2005-04-04   30
2                1              2005-04-01   30
3                1              2005-05-20   10
4                2              2005-05-29   30

So it is a table that stores broadcasts, including the broadcast_id, the date sent, and the status of the broadcast.  What I would like to do is simply get the last date_sent and its status for every broadcast.  I can't do a GROUP BY because I can't put an aggregate on the status column:

SELECT MAX(date_sent), status
FROM broadcast_history
GROUP BY broadcast_id

How do I get the status for the most recent date_sent using GROUP BY?  DISTINCT also doesn't work:

SELECT DISTINCT ON (email_broadcast_id) *
FROM email_broadcast_history
ORDER BY date_sent

as you have to have the DISTINCT fields matching the ORDER BY fields.  I have to ORDER BY date_sent, but I can't do a DISTINCT ON date_sent.  I keep thinking I am missing something.  Does anybody have any ideas?
---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster

--
==
Aaron Bono
President
Aranya Software Technologies, Inc.
http://www.aranya.com  We take care of your technology needs.
Phone: (816) 695-6071
==


Re: [SQL] Am I crazy or is this SQL not possible

2006-06-01 Thread Aaron Bono

It is a hack, but when someone wants you to do something in a way
different from the norm, aren't they asking for a hack?

SQL Server does something like:
select top (1) from 

I am thinking this is NOT part of the SQL-99 standard.

-Aaron

On 6/1/06, Yasir Malik <[EMAIL PROTECTED]> wrote:

> select max(date_sent) from table;
> would equal
> select date_sent from broadcast_history order by date_sent DESC limit 1;
>
That sounds like a hack.  Is LIMIT part of the SQL-99 standard?  Is there
another way to do this?
Sorry to take over your topic, Collin.

Thanks,
Yasir




Re: [SQL] Am I crazy or is this SQL not possible

2006-06-01 Thread Aaron Bono
Is this SQL-99 compliant or a PostgreSQL-specific query?  I really like it and have never seen this before.

-Aaron

On 6/1/06, Tom Lane <[EMAIL PROTECTED]> wrote:
"Collin Peters" <[EMAIL PROTECTED]> writes:
> What I would like to do is simply get the last date_sent and its
> status for every broadcast.  I can't do a GROUP BY because I can't put
> an aggregate on the status column.

You missed the key idea about how to use DISTINCT ON.

SELECT DISTINCT ON (email_broadcast_id) *
FROM email_broadcast_history
ORDER BY email_broadcast_id, date_sent DESC

You order by the DISTINCT ON fields, then one or more additional fields
to select the representative row you want within each DISTINCT ON group.
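DISTINCT ON is a PostgreSQL extension, so it may help to see the same latest-row-per-group result expressed portably.  Below is a small, self-contained sketch (Python's sqlite3 is used purely for illustration; the table and column names are taken from the thread) that emulates it with a correlated subquery:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE broadcast_history (
    unique_id INTEGER PRIMARY KEY,
    broadcast_id INTEGER, date_sent TEXT, status INTEGER)""")
con.executemany(
    "INSERT INTO broadcast_history VALUES (?, ?, ?, ?)",
    [(1, 1, "2005-04-04", 30),
     (2, 1, "2005-04-01", 30),
     (3, 1, "2005-05-20", 10),
     (4, 2, "2005-05-29", 30)])

# SQLite has no DISTINCT ON; a correlated subquery picks the row with
# the latest date_sent for each broadcast_id instead.
rows = con.execute("""
    SELECT broadcast_id, date_sent, status
    FROM broadcast_history bh
    WHERE date_sent = (SELECT MAX(date_sent)
                       FROM broadcast_history
                       WHERE broadcast_id = bh.broadcast_id)
    ORDER BY broadcast_id""").fetchall()
print(rows)  # [(1, '2005-05-20', 10), (2, '2005-05-29', 30)]
```

As in Aaron's join-on-max answer, ties on (broadcast_id, date_sent) would return more than one row per group.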


Re: [SQL] Am I crazy or is this SQL not possible

2006-06-02 Thread Aaron Bono
Only if you assume that ordering by unique_id and by date_sent are equivalent.  That may be the case, but I personally hate making assumptions like that.  When someone goes into the database and updates records (cleaning up bad data, etc.), your perfectly running query can suddenly produce bad results.
-Aaron

On 6/2/06, Klay Martens <[EMAIL PROTECTED]> wrote:
Sorry to stick my nose in here... would not this work better?

SELECT broadcast_id, date_sent, status
FROM broadcast_history
WHERE unique_id IN (SELECT max(unique_id)
                    FROM broadcast_history
                    GROUP BY broadcast_id);

Seems like a simpler option.


Re: [SQL] Am I crazy or is this SQL not possible

2006-06-03 Thread Aaron Bono

This reminds me of an interview question: I was asked how to get the
maximum value of a column without using max.  How would you do that?




Select my_column
from my_table
order by my_column desc
limit 1



Re: [SQL] Am I crazy or is this SQL not possible

2006-06-03 Thread Aaron Bono

I think this approach will only work if each broadcast_id has the same
maximum date_sent value.  You really need to do the group by in a
sub-query to bring the broadcast_id together with the max date_sent.

On 6/1/06, Yasir Malik <[EMAIL PROTECTED]> wrote:

> What I would like to do is simply get the last date_sent and it's
> status for every broadcast.  I can't do a GROUP BY because I can't put
> an aggregate on the status column.
>
> SELECT MAX(date_sent), status
> FROM broadcast_history
> GROUP BY broadcast_id
>
You could try the following:
select status
   from broadcast_history bh
  where bh.date_sent =
(select max(bh2.date_sent)
   from broadcast_history bh2);






Re: [SQL] Advanced Query

2006-06-06 Thread Aaron Bono
Don't forget that support is a very important part of deciding whether or not to use a technology.  Having people who are happy to read and respond to any question is part of great support for the product.

And I am glad to see that most people on this list agree with me on the importance of even basic questions like this.

What was the original question again?  Heh.

-Aaron
On 6/6/06, codeWarrior <[EMAIL PROTECTED]> wrote:
I would hope that your choice to use PostgreSQL is because it is superior technology that scales well financially... not because you get a warm fuzzy from all your friends on the mailing lists...


Re: [SQL] How to get list of days between two dates?

2006-06-06 Thread Aaron Bono
Though there may be a more elegant way to do it: when we did things like this in the past, we created a function (or stored procedure) that got the min and max dates and then created a result set that iterated through the dates to build a virtual table of days.  Then you can inner join that list of days with your physical table.
I am interested in other approaches though.

-Aaron

On 6/6/06, Christine Desmuke <[EMAIL PROTECTED]> wrote:
Hello,

I'm trying to write a query and cannot figure out how to do it (or whether it can be done in SQL alone).  Given a table containing events with their starting and ending days (may be single- or multi-day events), I need a list of the events occurring each day:

CREATE TABLE test_events (
    event_id serial,
    event_name text,
    start_time date,
    end_time date,
    CONSTRAINT event_pkey PRIMARY KEY (event_id)
);

INSERT INTO test_events (event_name, start_time, end_time) VALUES
('First Event', '05/01/2006', '05/04/2006');
INSERT INTO test_events (event_name, start_time, end_time) VALUES
('Second Event', '05/02/2006', '05/02/2006');
INSERT INTO test_events (event_name, start_time, end_time) VALUES
('Third Event', '05/04/2006', '05/05/2006');
INSERT INTO test_events (event_name, start_time, end_time) VALUES
('Fourth Event', '05/07/2006', '05/07/2006');

The query results should look like:

5/1/2006    First Event
5/2/2006    First Event
5/2/2006    Second Event
5/3/2006    First Event
5/4/2006    First Event
5/4/2006    Third Event
5/5/2006    Third Event
5/7/2006    Fourth Event

I've been experimenting with set-returning functions, but I haven't stumbled on the answer.  Suggestions?
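One way to realize the "virtual table of days" idea without a stored procedure is a recursive CTE; in PostgreSQL generate_series() would do the same job.  The sketch below uses Python's sqlite3 just to stay self-contained and runnable, with the thread's test_events table and ISO-formatted dates (an assumption, so the text comparison in BETWEEN works):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE test_events (
    event_id INTEGER PRIMARY KEY,
    event_name TEXT, start_time TEXT, end_time TEXT)""")
con.executemany(
    "INSERT INTO test_events (event_name, start_time, end_time) VALUES (?, ?, ?)",
    [("First Event", "2006-05-01", "2006-05-04"),
     ("Second Event", "2006-05-02", "2006-05-02"),
     ("Third Event", "2006-05-04", "2006-05-05"),
     ("Fourth Event", "2006-05-07", "2006-05-07")])

# Build a "virtual table of days" spanning min..max, then join it
# against the events that cover each day.
rows = con.execute("""
    WITH RECURSIVE days(d) AS (
        SELECT (SELECT MIN(start_time) FROM test_events)
        UNION ALL
        SELECT date(d, '+1 day') FROM days
        WHERE d < (SELECT MAX(end_time) FROM test_events)
    )
    SELECT d, event_name
    FROM days JOIN test_events
      ON d BETWEEN start_time AND end_time
    ORDER BY d, event_id""").fetchall()
for d, name in rows:
    print(d, name)
```

This yields the eight day/event pairs Christine listed, including days covered by overlapping events.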


Re: [SQL] Concat two fields into one at runtime

2006-06-08 Thread Aaron Bono
select id, first || ' ' || last
from mytable;

On 6/8/06, George Handin <[EMAIL PROTECTED]> wrote:
Is there a way using built-in PostgreSQL functions to combine two data fields into a single field at runtime when querying data?

For example, the query now returns:

id    first     last
---   ---       --
1     Goerge    Handin
2     Joe       Rachin

I'd like it to return:

id    name
---   ---
1     George Handin
2     Joe Rachin
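One wrinkle worth noting with the || operator: if either column is NULL, the whole concatenation is NULL.  A small sketch (sqlite3 used for a runnable illustration; mytable and the sample rows are invented here) showing COALESCE as the usual guard:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY, first TEXT, last TEXT)")
con.executemany("INSERT INTO mytable VALUES (?, ?, ?)",
                [(1, "George", "Handin"), (2, "Joe", None)])

# Plain || propagates NULL: row 2 comes back as NULL.
plain = con.execute(
    "SELECT id, first || ' ' || last FROM mytable ORDER BY id").fetchall()

# COALESCE keeps the non-NULL part instead.
guarded = con.execute(
    "SELECT id, first || COALESCE(' ' || last, '') FROM mytable ORDER BY id").fetchall()
print(plain)    # [(1, 'George Handin'), (2, None)]
print(guarded)  # [(1, 'George Handin'), (2, 'Joe')]
```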


Re: [SQL] empty set

2006-06-08 Thread Aaron Bono
That will work if foo is never NULL.  If foo takes on a NULL value you will get those records where it is NULL.

Is there a reason you cannot do an if/else statement on the list size?  That is what I do in my queries.

-Aaron

On 6/8/06, Scott Marlowe <[EMAIL PROTECTED]> wrote:
On Thu, 2006-06-08 at 16:40, CG wrote:
> PostgreSQL 8.1
>
> I've been trying to write a SQL prepare routine.  One of the challenging
> elements I'm running into is an empty set ...
>
> "select foo from bar where foo in ? ;"
>
> What if "?" is a set with zero elements?  What is the proper value to use to
> replace "?" indicating an empty set?

NULL?
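Aaron's "if/else on the list size" approach can be sketched like this (Python with sqlite3 for illustration; the bar/foo names come from the quoted question, the helper function is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE bar (foo INTEGER)")
con.executemany("INSERT INTO bar VALUES (?)", [(1,), (2,), (3,)])

def select_in(con, values):
    # An empty IN-list matches nothing, so skip the query entirely
    # rather than trying to splice an "empty set" placeholder in.
    if not values:
        return []
    placeholders = ", ".join("?" for _ in values)
    return con.execute(
        f"SELECT foo FROM bar WHERE foo IN ({placeholders})", list(values)).fetchall()

print(select_in(con, [2, 3]))  # [(2,), (3,)]
print(select_in(con, []))      # []
```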


Fwd: [SQL] COPY to table with array columns (Longish)

2006-06-12 Thread Aaron Bono
Can you provide an example?

Thanks,
Aaron

On 6/11/06, Phillip Smith <[EMAIL PROTECTED]> wrote:
Hi All,

 

Hope someone can help me – our main company system
runs on Raining Data PICK/D3 (if anyone is familiar with it), which stores records
in its "tables" as variable-length items.  Every item has a
unique primary key (per table), then each item can have a variable number of
fields.  These fields are delimited by Char 254, then each field can have
sub-values delimited by Char 253, then sub-sub-values delimited by Char 252.

 

Anyway, we are trying to export everything to Postgres for
reporting and querying etc (not to actually run the system…. Yet), and it
hasn't been a problem so far – everything like stock, purchase
orders, sales orders etc can pretty easily be turned into a flat file with
a standard number of columns and consistent data.  We truncate every table each
night, then import the latest TSV export from D3 using a COPY command.

 

The problem arises with tables like our SYS table, which
stores generic system data, so one record could have 3 fields but the next
could have 300.  The only way I can work out how to export multi-valued data like
this to Postgres is to use an array column.  So the table has 2 columns –
the pkey and a data array.

 

How do I get this imported into the truncated table each
night?  At the moment I think my best option is to modify the export for the SYS
table to call psql and use standard SQL INSERT statements to insert it directly,
instead of exporting to a flat file and then importing into Postgres.

 

Thanks all,

-p

 

For those who are interested, or if it might help, here's
a rough comparison of the database structure of D3:

Windows           = PICK/D3   = Postgres
Drive             = Account   = Database
Directory         = File      = Table
File              = Item      = Row
Line in text file = Attribute = Field
(none)            = Value     = Array Element (?)
(none)            = Sub Value = (none?)

 

Phillip Smith

IT Coordinator

Weatherbeeta P/L

8 Moncrief Rd

Nunawading, VIC, 3131

AUSTRALIA

 

E. [EMAIL PROTECTED]


 




Re: [SQL] COPY to table with array columns (Longish)

2006-06-12 Thread Aaron Bono
I agree with Tom.  Personally I cannot think of a time I would use an array column over a child table.  Maybe someone can enlighten me on when an array column would be a good choice.  What language are you using to do the export, if I may ask?

-Aaron

On 6/12/06, Tom Lane <[EMAIL PROTECTED]> wrote:
"Phillip Smith" <[EMAIL PROTECTED]> writes:
> The whole sys file is variable length records like this - they range from 1
> to over 17,000 fields per record.

17000?  I think you really need to rethink your schema.  While you could theoretically drop 17000 elements into a PG array column, you wouldn't like the performance --- it'd be almost unsearchable for instance.

I'd think about two tables, one with a single row for each SYS record from the original, and one with one row for each detail item (the invoice numbers in this case).  With suitable indexes and a foreign key constraint, this will perform a lot better than an array-based translation.

And no, in neither case will you be able to import that file without massaging it first.

regards, tom lane



Re: [SQL] COPY to table with array columns (Longish)

2006-06-12 Thread Aaron Bono

I think two tables should suffice: ZKCOST and ZPRECMPL.

So you would have

ZKCOST
   zkcost_id,
   zkcost_value

and

ZPRECMPL
   zkcost_id,
   zprecmpl_id,
   zprecmpl_value

where zkcost_id is the primary key for ZKCOST and zkcost_id,
zprecmpl_id together are the primary key for ZPRECMPL and zkcost_id is
a foreign key from ZPRECMPL to ZKCOST.

That will work, won't it?

-Aaron

On 6/12/06, Phillip Smith <[EMAIL PROTECTED]> wrote:


So you're suggesting creating a child table for each SYS record? Ie, a table called 
"ZPRECMPL" etc?




Re: [SQL] COPY to table with array columns (Longish)

2006-06-13 Thread Aaron Bono

So how about creating a sys table too:

SYS
  sys_id

ZKCOST
  sys_id,
  zkcost_id,
  zkcost_value

and

ZPRECMPL
  sys_id,
  zprecmpl_id,
  zprecmpl_value

This gives you the flexibility to expand to as many "columns" for
ZPRECMPL as you want.  The bottom line is, I think it would be much
more efficient storage to determine a way to turn your variable number
of columns into rows of a value table.

For example, I have a web site for role playing games.  Since each
game has different attributes for the characters you play, I need a
flexible way to define the list of attributes and then allow people to
enter the values of those attributes.  Below is a simplified version
of my table structure:

attribute
  attribute_id (PK),
  attribute_name

character
  character_id (PK),
  character_name

character_attribute
  character_attribute_id (PK),
  character_id (FK),
  attribute_id (FK),
  value

It is a little different than your problem but demonstrates how a
variable number of columns (in this case a variable number of
attributes for a character) can be stored with one row representing
each column.

Because I don't understand the context of your problem as well as you
do, you will probably have to determine how to tweak this to meet your
needs.  But I think, from the information you have provided, that this
"pivoted" table approach will work for you.

-Aaron

On 6/13/06, Phillip Smith <[EMAIL PROTECTED]> wrote:

Not quite... ZKCOST and ZPRECMPL are two completely different things. They
have no relation except they're both stored in the SYS table in D3.

If we put it in a tree:
SYS
 |
 +- ZKCOST
 | \- 
 |
 +- ZPRECMPL
 | +- 
 | +- 
 | +- 
 | \- 

or table:
SYS
+---+-+-+-+-+
| ZKCOST|  | | | |
| ZPRECMPL  |  |  |  |  |
+---+-+-+-+-+




[SQL] Efficient Searching of Large Text Fields

2006-06-13 Thread Aaron Bono

In another post on a different topic, Rod Taylor said the following:

"A\tcat in   the\nhat" might be stored as ARRAY['A', 'cat', 'in', 'the', 'hat'].

This got me thinking.  I have a discussion forum for gamers and want
to provide searching capabilities so the user can type in a phrase
like "magical bow" and get all posts, sorted by relevance that contain
these words.

My questions are:
1. Will storing the posts in an ARRAY help improve performance of
these searches?  If so, by how much?
2. What functions or libraries are available to make such searching
easy to implement well?
3. What is the best way to sort by relevance?

Thanks,
Aaron Bono



Re: [SQL] Efficient Searching of Large Text Fields

2006-06-13 Thread Aaron Bono

I will look at tsearch.  It appears their 8.1.x version is
still in development and I use PostgreSQL 8.1.3, but it is worth trying
- I'm not in a hurry for that feature anyway.

I also looked at PHPBB a little - it appears their database stores
words but the code is so difficult to dig through I was not sure about
their implementation or even what they used it for.  Would it be worth
the work to save the text into a separate searchable table that kept
individual words and word counts or would that be more work and eat up
more space than it is worth?  You could actually index the words that
way and get much quicker searches.  Then again, as I read through
tsearch, it may make this approach unnecessary...

I have also seen what looks like people using search results tables
that, after a search is performed, save a list of the results.  For
example, if I were doing a search of a forum, I could save the search
in a table like this:

forum_topic
  forum_topic_id (PK)
  forum_topic_name
  etc...

forum_topic_search
  forum_topic_search_id (PK)
  forum_topic_search_dt
  forum_topic_search_desc

forum_topic_search_results
  forum_topic_search_results_id (PK)
  forum_topic_search_id (FK)
  sort_index (int to tell us the order the results are returned in)
  forum_topic_id (FK)

This way you can allow users to page through the results without
having to constantly re-search or cache the results somewhere in
memory.

Has anyone tried an approach like this?

When do you clean these search tables out?  They could get quite large
after a while.
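The cached-search idea above can be sketched minimally (sqlite3 for illustration; the schema is trimmed from the one described earlier in this message, the sample IDs are invented, and the age-based cleanup is just one possible retention policy):

```python
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE forum_topic_search (
    forum_topic_search_id INTEGER PRIMARY KEY,
    forum_topic_search_dt REAL);
CREATE TABLE forum_topic_search_results (
    forum_topic_search_id INTEGER REFERENCES forum_topic_search,
    sort_index INTEGER,
    forum_topic_id INTEGER);
""")

# Save one search and its ordered hits.
con.execute("INSERT INTO forum_topic_search VALUES (1, ?)", (time.time(),))
con.executemany("INSERT INTO forum_topic_search_results VALUES (1, ?, ?)",
                [(0, 17), (1, 42), (2, 99)])

# Page 2 with page size 2: read from the cache, no re-search needed.
page = con.execute("""
    SELECT forum_topic_id FROM forum_topic_search_results
    WHERE forum_topic_search_id = 1
    ORDER BY sort_index LIMIT 2 OFFSET 2""").fetchall()
print(page)  # [(99,)]

# One answer to "when do we clean these out": delete searches older
# than a day (the cutoff is an arbitrary choice).
con.execute("DELETE FROM forum_topic_search WHERE forum_topic_search_dt < ?",
            (time.time() - 86400,))
```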

Thanks!
Aaron

On 6/13/06, Rod Taylor <[EMAIL PROTECTED]> wrote:

It won't help at all. Fast partial matches against arrays is nearly
impossible. You might take a look at tsearch though.




Re: [SQL] COPY to table with array columns (Longish)

2006-06-13 Thread Aaron Bono

I guess I still don't understand...

If you take the approach operationsengineer1 and I suggested, you
should only need 3 or 4 tables regardless of the number of SYS file
records.

Good luck with your implementation.

-Aaron

On 6/13/06, Phillip Smith <[EMAIL PROTECTED]> wrote:

Thanks Aaron - There are currently 8175 records in my SYS file - I might
need to go with this approach but be selective about which items I export so
I don't end up with 8000 tables related to SYS! There's probably a lot of
 in there that doesn't actually need to be exported.

Thanks again,
-p




Re: [SQL] to_char with time

2006-06-13 Thread Aaron Bono

Try select to_char(now(),'HH24:MI');

-Aaron

On 6/13/06, chester c young <[EMAIL PROTECTED]> wrote:

this does not work: select to_char(current_time,'HH24:MI')

what am I missing?  is it possible to format a time column in a select?

thanks,
stumped, aka, chester




Re: [SQL] how to replace 0xe28093 char with another one?

2006-06-14 Thread Aaron Bono

That character is the EN Dash.  Are you by chance copying and pasting
from MS Word or some other program that does smart replace while you
type?  I don't see this character in your select.  Is there something
else that is running that may be causing this problem?

-Aaron

On 6/14/06, Sergey Levchenko <[EMAIL PROTECTED]> wrote:

hi.

When I execute "SELECT specification FROM armature WHERE id = 96;"
query I get WARNING:  ignoring unconvertible UTF-8 character 0xe28093.
How can I replace this (0xe28093) char with another one?




Re: [SQL] Repetitive code

2006-06-15 Thread Aaron Bono
Each of your queries has the filter xxx >= $dt where the xxx is the first column in each select.  You could simplify the query by turning the unioned selects into a sub-query and then putting the $dt filter in the outer query.
I don't know if this will cause performance problems though.  If PostgreSQL completes the inner query before filtering by your $dt, you may be better off leaving the $dt filters where they are.

I know Oracle has materialized views.  Does PostgreSQL also have materialized views?  If so, you could get great performance from your views AND simplify your SQL.
-Aaron Bono

On 6/15/06, Joe <[EMAIL PROTECTED]> wrote:
Hi,

This is prompted by the previous thread on "SQL Technique Question".  I have the following query, extracted from a PHP script, where $dt is a date provided to the script.

SELECT created, topic_id, 0, 0, 0, 0 FROM topic
 WHERE created >= $dt AND page_type IN (1, 2)
UNION
SELECT updated, topic_id, 1, 0, 0, 0 FROM topic
 WHERE date_trunc('day', updated) != created
   AND updated >= $dt AND page_type IN (1, 2)
UNION
SELECT e.created, subject_id, 0, 1, entry_id, subject_type
  FROM entry e, topic
 WHERE subject_id = topic_id AND e.created >= $dt
   AND page_type IN (1, 2)
UNION
SELECT e.created, actor_id, 0, 1, entry_id, actor_type
  FROM entry e, topic
 WHERE actor_id = topic_id AND e.created >= $dt
   AND page_type IN (1, 2)
UNION
SELECT e.updated, subject_id, 1, 1, entry_id, subject_type
  FROM entry e, topic
 WHERE date_trunc('day', e.updated) != e.created
   AND subject_id = topic_id AND e.updated >= $dt
   AND page_type IN (1, 2)
UNION
SELECT e.updated, actor_id, 1, 1, entry_id, actor_type
  FROM entry e, topic
 WHERE date_trunc('day', e.updated) != e.created
   AND actor_id = topic_id AND e.updated >= $dt
   AND page_type IN (1, 2)
UNION
SELECT e.created, e.topic_id, 0, 1, entry_id, rel_type
  FROM topic_entry e, topic t
 WHERE e.topic_id = t.topic_id AND e.created >= $dt
   AND page_type IN (1, 2)
UNION
SELECT e.updated, e.topic_id, 1, 1, entry_id, rel_type
  FROM topic_entry e, topic t
 WHERE e.topic_id = t.topic_id
   AND date_trunc('day', e.updated) != e.created
   AND e.updated >= $dt AND page_type IN (1, 2);

As you can see, there's quite a bit of repetitive code, so the previous thread got me to thinking about simplifying it, perhaps through a view, perhaps through the use of CASE statements, particularly since I'm about to add at least one other table to the mix.

As background, each table has a 'created' date column and an 'updated' timestamp column, and the purpose of the various selects is to find the rows that were created or updated since the given $dt date.  The third expression in each select list is an indicator of NEW (0) or CHANGED (1).  The fourth item is a code for row type (topic=0, entry=1, but a new code is coming).

I've been trying to figure out if simplifying into a view (one or more) is indeed possible.  One factoring out that I can see is the "topics of interest" restriction (i.e., the join of each secondary table back to topic to get only topics whose page_types are 1 or 2).  Another redundancy is the "date_trunc('day', updated) != created" which is there to avoid selecting "changed" records when they're actually new.  However, although creating these views may simplify the subqueries, it doesn't seem there is a way to avoid the eight-way UNION, or is there?

TIA
Joe


Re: [SQL] concurrency problem

2006-06-16 Thread Aaron Bono
I would use a BIGSERIAL for the ID.  It simplifies your inserts, you don't have to mess with any locking, and the sequence is maintained for you outside your transaction, so two transactions can do inserts without stepping on each other.  This is how I handle auto-generated numbers.

The only downside is that if an insert fails for some reason, a number will be skipped.  You would have to have some really restrictive requirements for this to matter, though.
-Aaron Bono

On 6/15/06, sathish kumar shanmugavelu <[EMAIL PROTECTED]> wrote:
Dear group,

Let me explain my issue.  We use:
  Database - postgresql-8.1
  JDBC Driver - postgresql-8.1-407.jdbc3.jar
  Java - jdk1.5
The default transaction isolation level is Read Committed and auto-commit is false.

In our application we use a single connection object.  We open the connection in the MDI form and close it only when the MDI closes, i.e. when the application closes.

I issue an insert statement like:

INSERT INTO rcp_patient_visit_monitor (
    entry_no, patient_id, visit_date, is_newpatient, visit_type, is_medical,
    is_review, is_labtest, is_scan, is_scopy, is_xray, weight, height)
VALUES ((SELECT coalesce(max(entry_no)+1, 1) FROM rcp_patient_visit_monitor),
        ?, current_timestamp, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)

The point to note here is the select statement, which gets the max entry_no, adds one to it, and saves the new value.  entry_no is the primary key of the above table.

Now if I run the same program (different instances) from two systems and save the form simultaneously, only one entry is saved; on the other system the error says "duplicate key violates".  If I use the transaction level Serializable, again only one entry is saved.  Only on closing this application (closing the connection) does the application running on the other system get saved.

If I lock the table and create a transaction by sending the commands:

  con.createStatement().executeUpdate("begin");
  con.createStatement().executeUpdate("lock table rcp_patient_visit_monitor");
  int rows = psSave.executeUpdate();
  con.createStatement().executeUpdate("commit");

the form on one system is saved, but on the other system an error says "Deadlock detected".

When I test the above commands in DbVisualizer from two different systems, it works, but here it does not.  Why?  How do I solve this concurrency problem?

Thanks in advance,
--
Sathish Kumar.S
SpireTEK




Re: [SQL] sessions and prepared statements

2006-06-16 Thread Aaron Bono
If you are using pooled connections, doesn't PostgreSQL manage the prepared statements for you?  I would expect that, once I prepare a statement, if I attempt to do it again, PostgreSQL would say, "righty then, already done it, here you go".  Then again, I don't know what PostgreSQL does under the covers.
This seems like a lot of work to squeak out a small amount of efficiency.  Would it really give you much value?

-Aaron Bono

On 6/16/06, Michael Fuhr <[EMAIL PROTECTED]> wrote:
On Fri, Jun 16, 2006 at 08:55:16AM -0400, Rod Taylor wrote:
> BEGIN;
> SAVEPOINT;
> SELECT * FROM temporary_prepared_statement;
> ROLLBACK TO SAVEPOINT <on failure>;
> CREATE TEMPORARY TABLE temporary_prepared_statement ...;
> COMMIT;
>
> Now you have a place to store and retrieve prepared connection state for
> the lifetime of the database backend provided PHP doesn't remove
> temporary tables on the connection.

This doesn't help today, but 8.2 will have a pg_prepared_statements view.

http://archives.postgresql.org/pgsql-committers/2006-01/msg00143.php
http://developer.postgresql.org/docs/postgres/view-pg-prepared-statements.html

test=> PREPARE stmt (integer) AS SELECT * FROM foo WHERE x = $1;
test=> \x
Expanded display is on.
test=> SELECT * FROM pg_prepared_statements;
-[ RECORD 1 ]---+--
name            | stmt
statement       | PREPARE stmt (integer) AS SELECT * FROM foo WHERE x = $1;
prepare_time    | 2006-06-16 07:07:41.682999-06
parameter_types | {integer}
from_sql        | t

--
Michael Fuhr


Re: [SQL] concurrency problem

2006-06-16 Thread Aaron Bono
I know this is a Java issue, but I would recommend something more like:

    Statement stmt = con.createStatement();
    try {
        stmt.execute("begin");
        stmt.execute("lock table rcp_patient_visit_monitor");
        psSave.executeUpdate(); // psSave is a prepared statement
        stmt.execute("commit");
        con.commit();
    } catch (Exception e) {
        con.rollback();
    } finally {
        stmt.close();
    }
On 6/16/06, sathish kumar shanmugavelu <[EMAIL PROTECTED]> wrote:
Dear all,

I tried the lock table option today.  Yes, it works fine when saving simultaneously, but after two or three times, even when all of us close our applications, the lock remains in the database.  When we run a select query from another db tool, it hangs.

Our code looks like:

    Statement stmt = con.createStatement();
    while (true) {
        try {
            stmt.execute("begin");
            stmt.execute("lock table rcp_patient_visit_monitor");
            break;
        } catch (SQLException e) {
            stmt.execute("commit");
        }
    }
    psSave.executeUpdate(); // psSave is a prepared statement
    stmt.execute("commit");
    if saved then con.commit(); is called
    if not saved then con.rollback(); is called


Re: [SQL] Repetitive code

2006-06-16 Thread Aaron Bono
I haven't stared at your query as long as you have, so I may have missed something, but it looks like in all the selects you are combining, the first column in the select is the column you filter on.  So the outer query doesn't have to know whether it is a new or changed row:
SELECT * FROM (     SELECT created as my_date, topic_id, 0, 0, 0, 0 FROM topic     WHERE page_type IN (1, 2)   UNION     SELECT updated as my_date, topic_id, 1, 0, 0, 0 FROM topic     WHERE date_trunc('day', updated) != created
       AND page_type IN (1, 2)   UNION     SELECT e.created as my_date, subject_id, 0, 1, entry_id, subject_type     FROM entry e, topic     WHERE subject_id = topic_id AND page_type IN (1, 2)   UNION
     SELECT e.created as my_date, actor_id, 0, 1, entry_id, actor_type     FROM entry e, topic     WHERE actor_id = topic_id  AND page_type IN (1, 2)   UNION     SELECT e.updated as my_date, subject_id, 1, 1, entry_id, subject_type
     FROM entry e, topic     WHERE date_trunc('day', e.updated) != e.created       AND subject_id = topic_id AND page_type IN (1, 2)   UNION     SELECT e.updated as my_date, actor_id, 1, 1, entry_id, actor_type
     FROM entry e, topic     WHERE date_trunc('day', e.updated) != e.created       AND actor_id = topic_id AND page_type IN (1, 2)   UNION     SELECT e.created as my_date, e.topic_id, 0, 1, entry_id, rel_type
     FROM topic_entry e, topic t     WHERE e.topic_id = t.topic_id AND page_type IN (1, 2)   UNION     SELECT e.updated as my_date, e.topic_id, 1, 1, entry_id, rel_type     FROM topic_entry e, topic t
     WHERE e.topic_id = t.topic_id       AND date_trunc('day', e.updated) != e.created       AND page_type IN (1, 2)) my_unionwhere my_union.my_date >= $dtI would almost be tempted to create a view for each small query and name them something meaningful and then another view that does the union.  It would make the queries easier to understand at least (self documented).
-AaronOn 6/16/06, Joe <[EMAIL PROTECTED]> wrote:
Aaron Bono wrote:> Each of your queries has the filter xxx >= $dt where the xxx is the> first column in each select.  You could simplify the query by turning> the unioned selects into a sub-query and then putting the $dt filter in
> the outer query.It would probably have to be two subqueries unless I can find a way tomerge the differences between new and changed rows.
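The view-based refactoring Aaron suggests could be sketched roughly like this (table and column names are taken from the query above; the view names themselves are my own invention, and only two of the eight union branches are shown):

```sql
-- One small, well-named view per branch of the union.
CREATE VIEW topic_created_v AS
    SELECT created AS my_date, topic_id, 0 AS is_update
    FROM topic
    WHERE page_type IN (1, 2);

CREATE VIEW topic_updated_v AS
    SELECT updated AS my_date, topic_id, 1 AS is_update
    FROM topic
    WHERE date_trunc('day', updated) != created
      AND page_type IN (1, 2);

-- A final view that unions the pieces; the outer query then
-- only has to filter on my_date.
CREATE VIEW topic_activity_v AS
    SELECT * FROM topic_created_v
    UNION
    SELECT * FROM topic_updated_v;

-- Usage:
-- SELECT * FROM topic_activity_v WHERE my_date >= $dt;
```

Each branch is then self-documenting, and the filter lives in exactly one place.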


Re: [SQL] concurrency problem

2006-06-17 Thread Aaron Bono
When in this situation I:

1. Wait until I have enough data to do a complete commit before even bothering to save any data to the database.  I want the life of my transactions to last no more than milliseconds if possible.
2. Use a BIGSERIAL for the primary keys so the IDs are assigned automatically through triggers and sequence IDs.
3. Do a "SELECT currval('my_sequence') AS seq_number;" to determine what ID was assigned so I can use it on child tables.

-Aaron Bono

On 6/16/06, sathish kumar shanmugavelu <[EMAIL PROTECTED]> wrote:

Dear group,

It is my mistake that I did not reveal the whole scenario.  Actually, within that begin and commit I insert into 10 tables.  The table above is the key table.  I fetch the consultation_no and add one to it; I need to know this consultation_no to save the other 10 tables, because I use this number as a foreign key in those tables.  Also, in my program the data for those 10 tables are collected in different Java classes, each with its own save code, and I initiate the save for all 10 forms from one main form.  So if any error occurs I have to roll back the whole transaction.

Is there any method to release the lock explicitly?  Where does Postgres store this locking information?  Are stmt.execute("commit") and con.commit() the same?  Should I call con.commit() after stmt.execute("commit")?  I am now also thinking of using a sequence, but please clear up the above doubts.
--
Sathish Kumar.S
SpireTEK

On 6/16/06, Ash Grove <[EMAIL PROTECTED]> wrote:

> INSERT INTO rcp_patient_visit_monitor (
>     entry_no, patient_id, visit_date, is_newpatient,
>     visit_type, is_medical, is_review, is_labtest,
>     is_scan, is_scopy, is_xray, weight, height)
> VALUES ((SELECT coalesce(max(entry_no)+1, 1)
>          FROM rcp_patient_visit_monitor),
>         ?, current_timestamp, ?,?,?,?,?,?,?,?,?,?)

You are only working on one table, so you shouldn't have to manage a transaction or deal with explicit locking.  Just let the database handle this for you with a sequence.  Your concurrency issues will disappear.

1) Create a sequence:

create sequence entry_no_sequence

2) Set the new sequence's value to your table's current entry_no value (n):

select setval('entry_no_sequence', n)

3) Recreate your table so that entry_no gets its value from calling nextval() on your new sequence:

entry_no integer not null default nextval('entry_no_sequence')

Thereafter, when an insert is made on your table, the entry_no field will get its value from the sequence and the sequence will be incremented.  You would then drop entry_no from your insert statement and it would become something like:

INSERT INTO rcp_patient_visit_monitor (
    patient_id, visit_date, is_newpatient, visit_type,
    is_medical, is_review, is_labtest, is_scan, is_scopy,
    is_xray, weight, height)
VALUES (?, current_timestamp, ?,?,?,?,?,?,?,?,?,?)
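Putting the sequence advice together with Aaron's currval() suggestion from earlier in the thread, a minimal sketch might look like this (column types beyond those named in the thread are my own assumptions):

```sql
-- Parent table: entry_no is assigned by the sequence automatically.
CREATE SEQUENCE entry_no_sequence;

CREATE TABLE rcp_patient_visit_monitor (
    entry_no   integer NOT NULL DEFAULT nextval('entry_no_sequence'),
    patient_id integer,
    visit_date timestamp
    -- ... remaining columns as described in the thread
);

-- Insert without specifying entry_no ...
INSERT INTO rcp_patient_visit_monitor (patient_id, visit_date)
VALUES (42, current_timestamp);

-- ... then, in the same session, fetch the ID that was just
-- assigned so it can be used as a foreign key in the child tables.
SELECT currval('entry_no_sequence') AS seq_number;
```

currval() is session-local, so concurrent clients each see only the value their own insert generated.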


Re: [SQL] concurrency problem

2006-06-18 Thread Aaron Bono
Looks good, but you really should put your stmt.close() and conn.close() in a finally block so that even if there is an error everything gets cleaned up properly.  That syntax is for Java, but the principle is the same for any programming language - always make sure you clean up your connections no matter what errors occur.

-Aaron Bono

On 6/17/06, Ash Grove <[EMAIL PROTECTED]> wrote:

Locks are released when the containing transaction commits.  There is no explicit "release."  Instead of calling "begin" and "commit" as statements, I do something more like below.  As Aaron mentioned, this is JDBC, not SQL.  Sorry people.

try {
    ...
    conn.setAutoCommit(false);

    // Do the insert on the table that generates the
    // primary key via a sequence.
    PreparedStatement pstmt =
        conn.prepareStatement("my prepared statement");
    pstmt.executeUpdate();

    // Your prepared statement above should do an insert on a
    // table that calls nextval().  Calling currval() below will
    // guarantee that you'll get the value created by the insert
    // statement.  Check out the documentation on sequence
    // functions.

    // Get the new primary key.
    String get_pkey = "{ ? = call currval('my_seq') }";
    CallableStatement cstmt = conn.prepareCall(get_pkey);
    cstmt.registerOutParameter(1, Types.BIGINT);
    cstmt.execute();
    long new_pkey = cstmt.getLong(1);

    // Do all of your updates/inserts on tables using new_pkey
    // as a foreign key.  I like to do this in batches.
    Statement stmt = conn.createStatement();
    stmt.addBatch("insert into... ");
    stmt.addBatch("update whatever set... ");
    stmt.executeBatch();

    conn.commit();
    stmt.close();
    conn.close();
} catch (SQLException e1) {
    // do something with error 1
    if (conn != null) {
        try {
            conn.rollback();
        } catch (SQLException e2) {
            // do something with error 2
        }
    }
}


Re: [SQL] Displaying first, last, count columns

2006-06-21 Thread Aaron Bono
I would suggest:

SELECT
    max(time_occurred) AS last_seen,
    min(time_occurred) AS first_seen,
    count(*),
    prog_data
FROM t
GROUP BY prog_data

I would also suggest you use inner joins rather than putting all your tables in the FROM clause and joining in the WHERE clause.  It is much easier to read and understand what you are trying to do.  The query you have is not exactly the same as what I put above, but I bet the performance is bad because you have inner queries with constraints based on the outer query.  I usually avoid this as much as possible.

-Aaron Bono

On 6/21/06, Worky Workerson <[EMAIL PROTECTED]> wrote:

I'm having a bit of a brain freeze and can't seem to come up with decent SQL for the following problem.  I have a table "t" of the form "time_occurred TIMESTAMP, prog_data VARCHAR" and would like to create a query that outputs something of the form "first_seen, last_seen, count, prog_data".

I have a current query which gets the first_seen and last_seen via subqueries, a la:

SELECT t1.time_occurred AS first_seen, t2.time_occurred AS last_seen,
       t3.count, t1.prog_data
FROM t AS t1, t AS t2
WHERE t1.prog_data = t2.prog_data
  AND t1.time_occurred IN (SELECT min(time_occurred) FROM t
                           WHERE prog_data = t1.prog_data)
  AND t2.time_occurred IN (SELECT max(time_occurred) FROM t
                           WHERE prog_data = t1.prog_data)

but I can't seem to work out how to get the count of all the matching records.  I figure that this is probably a relatively common idiom... can anyone suggest ways to go about doing this?  Also, the performance of this is pretty horrible, but I figure that creating an index on t.prog_data should speed things up noticeably, right?

Thanks!



Re: [SQL] Date ranges + DOW select question

2006-06-22 Thread Aaron Bono
I am a little confused.  Where are you casting dateStart and dateEnd?  I don't see either in your query.  I assume dayOfWeek is a number between 0 and 6, or maybe not?  A little more detail would help.

Thanks,
Aaron Bono

On 6/15/06, joseppi c <[EMAIL PROTECTED]> wrote:

Hi,

I have a table which contains starttime, endtime and DOW; i.e. a weekly list of times for when a process must be started and ended.

TABLE: cronTimes
FIELDS: starttime, endtime, dayOfWeek

I have another table which contains date ranges.

TABLE: dateRanges
FIELDS: dateStart, dateEnd

I need to get a list of cronTimes records for a record in dateRanges and push these to a temporary table, i.e. tell me which dates are affected by cronTimes.  I have got a little way on this but need some assistance.

SELECT * FROM cronTimes
WHERE starttime >= '00:00:00' AND endtime <= '23:59:59'
  AND dayOfWeek >= (EXTRACT(DOW FROM TIMESTAMP '2006-06-26'))
  AND dayOfWeek <= (EXTRACT(DOW FROM TIMESTAMP '2006-07-04'));

The problem with the above is that by casting dateStart and dateEnd they become numbers between 0 and 6, which in turn invalidates the < and >, as they are no longer working on dates, nor a sequence, as the numbers can be repeated.

Do I need to generate a sequence of dates somehow, so that each date in the range can be compared to the cronTimes table (so I can use the 'IN' condition)?  Am I in the realms of plpgsql?

Any advice on the above welcome.

Joseppic.
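One way to "generate a sequence of dates", as the poster wonders, is to expand each date range with generate_series() and match each day's day-of-week against cronTimes.  A rough, untested sketch against the table layout described above (the 366-day upper bound is my own assumption about the maximum range length):

```sql
-- Expand each dateRanges row into one row per calendar day,
-- then match each day's day-of-week against cronTimes.
SELECT d.day, ct.starttime, ct.endtime
FROM (
    SELECT dr.dateStart + s.i AS day
    FROM dateRanges dr,
         generate_series(0, 366) AS s(i)
    WHERE dr.dateStart + s.i <= dr.dateEnd
) d
INNER JOIN cronTimes ct
    ON ct.dayOfWeek = EXTRACT(DOW FROM d.day);
```

The result can then be pushed into a temporary table with SELECT ... INTO or INSERT ... SELECT, avoiding the comparison of day-of-week numbers across a range boundary entirely.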


Re: [SQL] join on next row

2006-06-22 Thread Aaron Bono
I would use a stored procedure or function for this.  You order your results first by employee, then event date, and finally event time.  Then you create a new result set from the first and return that.  That would probably be the most straightforward approach.

You could also try doing something like this (I have not tested it and so cannot vouch for its syntax, but it should lead you close to another solution):

SELECT
    eventjoin.employee,
    eventjoin.eventdate,
    eventjoin.eventtime,
    eventjoin.eventtype,
    eventjoin.nexteventtime,
    e3.eventtype
FROM (
    SELECT
        e1.employee,
        e1.eventdate,
        e1.eventtime,
        e1.eventtype,
        min(e2.eventtime) AS nexteventtime
    FROM events e1
    INNER JOIN events e2 ON (
        e1.employee = e2.employee
        AND e1.eventdate = e2.eventdate
        AND e2.eventtime > e1.eventtime
    )
    GROUP BY
        e1.employee,
        e1.eventdate,
        e1.eventtime,
        e1.eventtype
) eventjoin
INNER JOIN events e3 ON (
    e3.employee = eventjoin.employee
    AND e3.eventdate = eventjoin.eventdate
    AND e3.eventtime = eventjoin.nexteventtime
);

Who knows what the performance of this will be.  I would highly recommend you have employee in a separate table if you do not already.

-Aaron Bono

On 6/18/06, Sim Zacks <[EMAIL PROTECTED]> wrote:

I am having brain freeze right now and was hoping someone could help me with a (fairly) simple query.  I need to join on the next row in a similar table with specific criteria.

I have a table with events per employee.  I need a query that gives, per employee, each event and the event after it, if it happened on the same day.

The Events table structure is:
EventID
Employee
EventDate
EventTime
EventType

I want my query result set to be:

Employee, EventDate(1), EventTime(1), EventType(1), EventTime(2), EventType(2)

where Event(2) is the first event of the employee that took place after the other event.

Example:

EventID  Employee  EventDate   EventTime  EventType
1        John      6/15/2006   7:00       A
2        Frank     6/15/2006   7:15       B
3        Frank     6/15/2006   7:17       C
4        John      6/15/2006   7:20       C
5        Frank     6/15/2006   7:25       D
6        John      6/16/2006   7:00       A
7        John      6/16/2006   8:30       R

Expected Results:

John, 6/15/2006, 7:00, A, 7:20, C
Frank, 6/15/2006, 7:15, B, 7:17, C
Frank, 6/15/2006, 7:17, C, 7:25, D
John, 6/16/2006, 7:00, A, 8:30, R

To get this result set it would have to be an inner join on employee and date where the second event time is greater than the first.  But I don't want all of the records with a greater time, just the first event after.

Thank You
Sim
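An equivalent correlated-subquery formulation of the "next event" lookup, using the Events table described in the thread (untested sketch):

```sql
SELECT
    e1.Employee,
    e1.EventDate,
    e1.EventTime,
    e1.EventType,
    e2.EventTime AS NextEventTime,
    e2.EventType AS NextEventType
FROM Events e1
INNER JOIN Events e2 ON (
    e2.Employee = e1.Employee
    AND e2.EventDate = e1.EventDate
    -- pick the first event strictly after e1 on the same day
    AND e2.EventTime = (
        SELECT min(e3.EventTime)
        FROM Events e3
        WHERE e3.Employee = e1.Employee
          AND e3.EventDate = e1.EventDate
          AND e3.EventTime > e1.EventTime
    )
);
```

Events with no later event on the same day simply drop out of the inner join, which matches the expected results shown above.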


Re: [SQL] Start up question about triggers

2006-06-22 Thread Aaron Bono
Why not just create a history table and have the trigger copy the data out of the table into the history table with a timestamp of the change?  Then you don't need the query.  For example:

Table A
    a_id,
    a_value1,
    a_value2

Table A_hist
    a_id,
    a_dt,
    a_value1,
    a_value2

A_hist then has a PK of (a_id, a_dt).  This would also make it a lot easier to see WHAT changed and WHEN.  You can use NEW.col and OLD.col to see the new and old values during inserts and updates.

Of course, I don't know your need, so this may not be achieving your goal.

-Aaron Bono

On 6/22/06, Forums @ Existanze <[EMAIL PROTECTED]> wrote:





Sorry, this is the complete message:

Hello all,

I know that this question may be really simple, but I have decided to ask here due to the fact that I don't know how to search for this on Google or in the docs.

I created a trigger function which updates a specific row in some table A.  Is it possible to retain the query that was used to trigger the function?  For example:

Table A
    query_row_id
    query_row

TABLE B
    id
    name

If I create a trigger on table B that says to write the query into table A after the insert command, then if I do

insert into B values (1, 'Blah')

this will fire my trigger.  Is there any way to get the "insert into B values (1, 'Blah')"?  At the moment I can only see the type of query that it is (INSERT, UPDATE, DELETE).

Best regards,
Fotis
  
  
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Forums @ Existanze
Sent: 22 June 2006 12:19
To: pgsql-sql@postgresql.org
Subject: [SQL] Start up question about triggers




Fwd: [SQL] Start up question about triggers

2006-06-22 Thread Aaron Bono
I did some research and can't even find a way to get meta data in a trigger.

In a trigger, is there a way to inspect OLD and NEW to see what columns are there and see what has changed?  If so, you may not be able to grab the actual query, but you could create a generic trigger that reconstructs a possible update/insert/delete for any table in your database.

Does anyone know of a good place to get information about using meta data in a stored procedure or trigger?

Thanks,
Aaron
On 6/22/06, Forums @ Existanze <[EMAIL PROTECTED]> wrote:






Thank you for your answer,

We had thought about your solution; the problem is that we have around 80 tables at the moment, so your method would mean adding another 80.

I was wondering if it was possible to retrieve the query in the trigger function, because what we wanted to achieve was to trigger a query log when any insert, update, or delete operation was made on any of the 80 tables.  This way we would have something like a query log table, which would have the queries in the order that they were executed by n number of clients.

Say one client updates a row, and the next client deletes it; we want to know the queries that occurred in that particular order.

I hope this makes some sense :-)

I should also mention that what we are trying to achieve is some sort of partial backup operation.  Because of bad initial design, we didn't foresee this coming.  So now we have two options: changing all the tables, queries, and code to contain two timestamp columns representing the created and updated row, a flag for deleted rows, and some sort of maintenance operation that will clean all the deleted records and create insert/update statements for those records that have been updated, i.e. (time_of_update > time_of_creation).  This will give us a list of operations (INSERT or UPDATE statements) that can be written to a file and run from a file.

So if I had 10 partial backups and ran them sequentially, I would in theory have the data that I originally had.  At the moment we are doing full backups using pg_dump, but this is another type of requirement.

Any ideas greatly appreciated.

Best Regards,
Fotis




Re: Fwd: [SQL] Start up question about triggers

2006-06-23 Thread Aaron Bono
This is why I was searching for good meta data.

Here is a thought.  If your trigger has OLD and NEW, is there a way to get a list of fields from them?  If TG_RELNAME is the name of the table, could you just ask PostgreSQL what the columns are in that table, iterate through those columns, get the values for each of them out of OLD and NEW, and save the old/new values?

What I really cannot find is a way to _dynamically_ ask in the trigger what COLUMNS are in OLD and NEW.  If we had:

    the table affected (TG_RELNAME?)
    the columns that are in the table
    the old values for each of these columns
    the new values for each of these columns

then you could store this information in two tables:

modify_table
    modify_table_id
    modify_dt
    table_name

modify_value
    modify_value_id
    modify_table_id
    old_value
    new_value

I wish I had more experience with stored procedures - I know what I would try to do, just not whether it is possible or how to implement it.

Tom makes a very good point that having the actual query is not going to help in a general sense.  If someone does an insert or update which fires a trigger that does further updates and inserts, or even changes values on the fly, the inserts and updates you record will NOT reveal exactly what is going on.  Keeping the values from OLD and NEW at the very end would be much more useful.

-Aaron Bono

On 6/23/06, Tom Lane <[EMAIL PROTECTED]> wrote:
Andrew Sullivan <[EMAIL PROTECTED]> writes:
> On Fri, Jun 23, 2006 at 06:48:49PM +0300, Forums @ Existanze wrote:
>> Then there exist a TG_QUERY parameter that we could use to get the actual
>> query ran by a user, so if I ran the imaginary query

> Which "actual query"?  By the time the trigger fires, the query might
> already have been rewritten, I think.  No?  I _think_ that even
> BEFORE triggers happen after the rewriter stage is called, but
> someone who has more clue will be able to correct me if I'm wrong.

Even if you could get hold of the user query text, it'd be a serious mistake to imagine that it tells you everything you need to know about the update.  Aside from rule rewrites, previous BEFORE triggers could have changed fields that are mentioned nowhere in the query.  The only safe way to determine what's going on is to compare the OLD and NEW row values.

regards, tom lane
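A minimal sketch of the history-table trigger Aaron proposed earlier in the thread (the table A / A_hist layout is the one from his example; the column types and function name are my own assumptions, and the code is untested):

```sql
-- History table mirrors table a, plus a change timestamp.
CREATE TABLE a_hist (
    a_id     integer,
    a_dt     timestamp NOT NULL DEFAULT now(),
    a_value1 text,
    a_value2 text,
    PRIMARY KEY (a_id, a_dt)
);

CREATE OR REPLACE FUNCTION a_audit() RETURNS trigger AS $$
BEGIN
    -- On DELETE there is no NEW row; record the OLD values instead.
    IF TG_OP = 'DELETE' THEN
        INSERT INTO a_hist (a_id, a_value1, a_value2)
        VALUES (OLD.a_id, OLD.a_value1, OLD.a_value2);
        RETURN OLD;
    ELSE
        INSERT INTO a_hist (a_id, a_value1, a_value2)
        VALUES (NEW.a_id, NEW.a_value1, NEW.a_value2);
        RETURN NEW;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER a_audit_trg
    AFTER INSERT OR UPDATE OR DELETE ON a
    FOR EACH ROW EXECUTE PROCEDURE a_audit();
```

This records the final OLD/NEW values rather than the query text, which, per Tom's point above, is the only reliable record of what actually changed.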


Re: [SQL] avg(interval)

2006-06-26 Thread Aaron Bono
Right, the 23 is just less than 1 day, not 23 days.  The good news: your query is working!

-Aaron

On 6/26/06, Joe <[EMAIL PROTECTED]> wrote:

Tom Lane wrote:
> "Jeremiah Elliott" <[EMAIL PROTECTED]> writes:
>> however if i don't average them here is what i get:
>> "7 days 22:24:50.62311";"*2420"
>> "9 days 22:21:02.683393";"*2420"
>> "23:21:35.458459";"*2420"
>> "4 days 22:47:41.749756";"*2420"
>> "3 days 06:05:59.456947";"*2420"
>>
>> which should average to just over nine days -
>
> Uh ... how do you arrive at that conclusion?  I haven't done the math,
> but by eyeball an average of four-something days doesn't look out of
> line for those values.

It seems he's calculating (7 + 9 + 23 + 4 + 3) / 5 ...

Joe


Re: [SQL] generate_series with left join

2006-06-28 Thread Aaron Bono
How about one of these two:

SELECT
    year_list.years,
    count(one.*),
    count(two.*)
FROM (
    SELECT years
    FROM generate_series(2006,2009) AS years
) year_list
LEFT OUTER JOIN mytable AS one ON (
    date_part('year', one.date) = year_list.years
    AND one.cause = 1
)
LEFT OUTER JOIN mytable AS two ON (
    date_part('year', two.date) = year_list.years
    AND two.cause = 2
)
GROUP BY
    year_list.years;

SELECT
    year_list.years,
    mytable.cause,
    count(mytable.*)
FROM (
    SELECT years
    FROM generate_series(2006,2009) AS years
) year_list
LEFT OUTER JOIN mytable ON (
    date_part('year', mytable.date) = year_list.years
)
GROUP BY
    year_list.years,
    mytable.cause;

I think one of the problems many people have is the writing of their SQL in paragraph form.  It makes the SQL really hard to read and even harder to understand and debug.  Formatting your SQL like I did above may make it easier to see what is wrong.

-Aaron Bono

On 6/28/06, Pedro B. <[EMAIL PROTECTED]> wrote:

Greetings.

I'm having some difficulties with my first use of the generate_series function.  Situation:

cause | integer
date  | timestamp(2) without time zone

 cause | date
-------+-------------------------
     1 | 2006-03-23 15:07:53.63
     2 | 2006-02-02 12:13:23.11
     2 | 2006-11-12 16:43:11.45
     1 | 2005-03-13 18:34:44.13
     3 | 2006-01-23 11:24:41.31
(etc)

What I need to do is to count the 'cause' column for the values '1' and '2', and group them by year, using left joins in order to also have the serialized years with empty values in the output.  My needed output for a series of (2005,2007) would be:

 year | one  | two
------+------+-----
 2005 |    1 |   0
 2006 |    1 |   2
 2007 |    0 |   0

I have tried something like:

SELECT s,
       (SELECT count(cause) FROM mytable WHERE cause=1) AS one,
       count(cause) AS two
FROM generate_series(2006,2009) AS s(d)
LEFT JOIN mytable o ON (substr(o.date,1,4) = s.d AND cause=2)
GROUP BY s.d
ORDER BY 1;

which obviously is wrong, because of the results:

  s   | one  | two
------+------+------
 2006 | 3769 | 1658
 2007 | 3769 |    0
 2008 | 3769 |    0
 2009 | 3769 |    0

As far as the 'two' goes, the left join was successful; however, I can not find a way to join the 'one'.  The output value is correct, but the result shown should be only for the year 2006, not for all the values of the series.

Maybe I've looked at it TOO much, or maybe I'm completely failing to find a working logic.  Any suggestions?  Any and all help is humbly appreciated.

\\pb


Re: [SQL] generate_series with left join

2006-06-28 Thread Aaron Bono
Sorry, I think I see the mistake - it is getting all the rows for 1 and all the rows for 2 and joining them.  Try splitting the query into two inner queries like so:

SELECT
    one_list.years,
    one_list.one_count,
    two_list.two_count
FROM (
    SELECT
        year_list.years,
        count(one.*) AS one_count
    FROM (
        SELECT years
        FROM generate_series(2006,2009) AS years
    ) year_list
    LEFT OUTER JOIN mytable AS one ON (
        date_part('year', one.date) = year_list.years
        AND one.cause = 1
    )
    GROUP BY
        year_list.years
) one_list,
(
    SELECT
        year_list.years,
        count(two.*) AS two_count
    FROM (
        SELECT years
        FROM generate_series(2006,2009) AS years
    ) year_list
    LEFT OUTER JOIN mytable AS two ON (
        date_part('year', two.date) = year_list.years
        AND two.cause = 2
    )
    GROUP BY
        year_list.years
) two_list
WHERE one_list.years = two_list.years;

On 6/28/06, Pedro B. <[EMAIL PROTECTED]> wrote:

On Wed, 2006-06-28 at 13:34 -0500, Aaron Bono wrote:
> select
>     year_list.years,
>     count(one.*),
>     count(two.*)
> from (
>     select years
>     from generate_series(2006,2009) as years
> ) year_list
> left outer join mytable as one on (
>     date_part('year', one.date) = year_list.years
>     and one.cause = 1
> )
> left outer join mytable as two on (
>     date_part('year', two.date) = year_list.years
>     and two.cause = 2
> )
> group by
>     year_list.years;
>
> select
>     year_list.years,
>     mytable.cause,
>     count(mytable.*)
> from (
>     select years
>     from generate_series(2006,2009) as years
> ) year_list
> left outer join mytable on (
>     date_part('year', mytable.date) = year_list.years
> )
> group by
>     year_list.years,
>     mytable.cause;

Aaron,

Thank you so much for your reply.  However, the 2 examples you provided have "weird" outputs.

The first:

 years |  count  |  count
-------+---------+---------
  2009 |       0 |       0
  2008 |       0 |       0
  2007 |       0 |       0
  2006 | 7802080 | 7802080
(4 rows)
Time: 87110.753 ms  << yay.

The second:

 years | cause | count
-------+-------+-------
  2009 |       |     0
  2008 |       |     0
  2007 |       |     0
  2006 |     6 |     1
  2006 |     1 |  4030
  2006 |     2 |  1936
  2006 |     3 |  4078
  2006 |   100 |  3159
  2006 |    98 |  2659
  2006 |    99 |  2549

My need is really to only group the counts of where cause=1 and cause=2 for each year, none of the others.

> I think one of the problems many people have is the writing of their
> SQL in paragraph form.  It makes the SQL really hard to read and even
> harder to understand and debug.  Formatting your SQL like I did above
> may make it easier to see what is wrong.

Indeed. Note taken, I'll improve my formatting.

\\pb


Re: [SQL] generate_series with left join

2006-06-28 Thread Aaron Bono
This should work too:

SELECT
    year_list.years,
    one_list.one_count,
    two_list.two_count
FROM (
    SELECT years
    FROM generate_series(2006,2009) AS years
) year_list
LEFT OUTER JOIN (
    SELECT
        date_part('year', one.date) AS one_year,
        count(one.*) AS one_count
    FROM mytable AS one
    WHERE one.cause = 1
    GROUP BY
        date_part('year', one.date)
) one_list ON (year_list.years = one_year)
LEFT OUTER JOIN (
    SELECT
        date_part('year', two.date) AS two_year,
        count(two.*) AS two_count
    FROM mytable AS two
    WHERE two.cause = 2
    GROUP BY
        date_part('year', two.date)
) two_list ON (year_list.years = two_year);
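A single-pass alternative to the join approaches discussed in this thread is to let one LEFT JOIN bring in the rows for both causes and count them conditionally.  A hedged sketch against the same mytable / generate_series setup (untested):

```sql
-- Count cause=1 and cause=2 per year in one scan, keeping empty
-- years in the output via the left join.  count() ignores NULLs,
-- so each CASE counts only its own cause.
SELECT
    year_list.years,
    count(CASE WHEN m.cause = 1 THEN 1 END) AS one,
    count(CASE WHEN m.cause = 2 THEN 1 END) AS two
FROM generate_series(2006, 2009) AS year_list(years)
LEFT OUTER JOIN mytable m ON (
    date_part('year', m.date) = year_list.years
    AND m.cause IN (1, 2)
)
GROUP BY year_list.years
ORDER BY year_list.years;
```

This avoids scanning mytable twice and still produces zeros for years with no matching rows.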


Re: [SQL] SELECT Aggregate

2006-06-28 Thread Aaron Bono
I would recommend against using a function.  If you are selecting a large number of rows, the function will run for each row returned and will have to do a select for each row.  So if you get 1000 rows returned from your query, you will end up with 1001 select statements for your one query.

Assuming trans_no is your primary key (or at least unique), then a GROUP BY on all the columns in the select EXCEPT sell_price should do the trick:

SELECT  trans_no,
        customer,
        date_placed,
        date_complete,
        date_printed,
        ord_type,
        ord_status,
        SUM(soh_product.sell_price),
        customer_reference,
        salesman,
        parent_order,
        child_order,
        order_number
FROM    sales_orders,
        soh_product
WHERE   (   (trans_no LIKE '8%' AND order_number LIKE '8%')
         OR (trans_no LIKE '9%' AND order_number LIKE '8%')
         OR (trans_no LIKE '8%' AND order_number LIKE '9%')
         OR (trans_no LIKE '9%' AND order_number LIKE '9%'))
  AND   (warehouse = 'M')
  AND   (sales_orders.trans_no = soh_product.soh_num)
  AND   (date_placed > (current_date + ('12 months ago'::interval)))
GROUP BY trans_no,
        customer,
        date_placed,
        date_complete,
        date_printed,
        ord_type,
        ord_status,
        customer_reference,
        salesman,
        parent_order,
        child_order,
        order_number
ORDER BY trans_no DESC

On 6/28/06, Phillip Smith <[EMAIL PROTECTED]> wrote:















Hi all,

I have two tables which are storing all our sales orders / invoices as below.  sales_orders.trans_no and soh_product.soh_num are the common columns.  This is PostgreSQL 8.1.4 (ie, the latest release).

We have some issues that I've been able to identify using this SELECT:

SELECT  trans_no,
        customer,
        date_placed,
        date_complete,
        date_printed,
        ord_type,
        ord_status,
        customer_reference,
        salesman,
        parent_order,
        child_order,
        order_number
FROM    sales_orders
WHERE   (trans_no LIKE '8%' AND order_number LIKE '8%')
  OR    (trans_no LIKE '9%' AND order_number LIKE '8%')
  OR    (trans_no LIKE '8%' AND order_number LIKE '9%')
  OR    (trans_no LIKE '9%' AND order_number LIKE '9%')
  AND   (warehouse = 'M')
  AND   (date_placed > (current_date + ('12 months ago'::interval)))
ORDER BY trans_no DESC

But I want to add in a wholesale value of each order – SUM(soh_product.sell_price) – How would be best to do this?  Would it be easiest to create a function to accept the trans_no, then do a SELECT on soh_product and return that value?

Thanks,
-p

I've tried to do this but Postgres complains about having to include all the other columns in either an aggregate or the GROUP BY:

SELECT  trans_no,
        customer,
        date_placed,
        date_complete,
        date_printed,
        ord_type,
        ord_status,
        SUM(soh_product.sell_price),
        customer_reference,
        salesman,
        parent_order,
        child_order,
        order_number
FROM    sales_orders,
        soh_product
WHERE   (trans_no LIKE '8%' AND order_number LIKE '8%')
  OR    (trans_no LIKE '9%' AND order_number LIKE '8%')
  OR    (trans_no LIKE '8%' AND order_number LIKE '9%')
  OR    (trans_no LIKE '9%' AND order_number LIKE '9%')
  AND   (warehouse = 'M')
  AND   (sales_orders.trans_no = soh_product.soh_num)
  AND   (date_placed > (current_date + ('12 months ago'::interval)))
GROUP BY soh_product.soh_num
ORDER BY trans_no DESC

CREATE TABLE sales_orders
(
  trans_no varchar(6) NOT NULL,
  customer varchar(6),
  date_placed date,
  date_complete date,
  date_printed date,
  ord_type varchar(1),
  ord_status varchar(1),
  discount float8,
  customer_reference text,
  warehouse varchar(3),
  salesman varchar(3),
  username text,
  ordered_value float8 DEFAULT 0,
  supplied_value float8 DEFAULT 0,
  ordered_qty int8,
  supplied_qty int8 DEFAULT 0,
  frieght float8 DEFAULT 0,
  delivery_instructions text,
  parent_order varchar(6),
  child_order varchar(6),
  apply_to_order varchar(6),
  fo_release date,
  order_number varchar(6),
  orig_fo_number varchar(6),
  CONSTRAINT soh_pkey PRIMARY KEY (trans_no)
)

CREATE TABLE soh_product
(
  soh_num varchar(6) NOT NULL,
  prod_code varchar(6) NOT NULL,
  qty_ordered numeric(8),
  qty_supplied numeric(8),
  cost_price numeric(10,2),
  sell_price numeric(10,2),
  sales_tax numeric(10,2),
  discount numeric(10,2),
  cost_gl varchar(5),
  if_committed varchar(1)
)


Re: [SQL] SELECT Aggregate

2006-06-29 Thread Aaron Bono
I am not familiar enough with how postgres optimizes the queries but won't this end up with total number of queries run on DB = 1 query + 1 query/row in first queryWhat would be more efficient on a large database - a query like Richard submitted (subquery in the select) or one like I submitted (join the two tables and then do a group by)?  My guess is it depends on the % of records returned out of the sales_orders table, the smaller the % the better Richard's query would perform, the higher the % the better the join would run.
The database I am working with aren't big enough yet to warrant spending a lot of time researching this but if someone with more experience knows what is best I would love to hear about it.Thanks,Aaron Bono
On 6/29/06, Richard Broersma Jr <[EMAIL PROTECTED]> wrote:
> SELECT trans_no,
>        customer,
>        date_placed,
>        date_complete,
>        date_printed,
>        ord_type,
>        ord_status,
>        (SELECT SUM(sell_price)
>           FROM soh_product
>          WHERE sales_orders.trans_no = soh_product.soh_num) AS transact_sum,
>        customer_reference,
>        salesman,
>        parent_order,
>        child_order,
>        order_number
> FROM   sales_orders
> WHERE  (trans_no LIKE '8%' AND order_number LIKE '8%')
>     OR (trans_no LIKE '9%' AND order_number LIKE '8%')
>     OR (trans_no LIKE '8%' AND order_number LIKE '9%')
>     OR (trans_no LIKE '9%' AND order_number LIKE '9%')
>    AND (warehouse = 'M')
>    AND (date_placed > (current_date + ('12 months ago'::interval)))
> ORDER BY trans_no DESC
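For comparison, the join-then-aggregate form Aaron describes could look something like the sketch below, written against the sales_orders/soh_product schema quoted earlier in the digest. The trimmed column list and WHERE clause are simplifications, not the poster's exact query:

```sql
-- Sketch of the join + GROUP BY alternative: aggregate once over the join
-- instead of running a correlated subquery per output row.
SELECT so.trans_no,
       so.customer,
       SUM(sp.sell_price) AS transact_sum
FROM   sales_orders so
LEFT JOIN soh_product sp ON sp.soh_num = so.trans_no
WHERE  so.warehouse = 'M'
GROUP BY so.trans_no, so.customer
ORDER BY so.trans_no DESC;
```

The LEFT JOIN keeps orders with no soh_product rows in the result (their transact_sum comes back NULL), matching what the correlated subquery would do.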


Re: [SQL] can any one solve this problem

2006-06-29 Thread Aaron Bono
I suggest you give a first stab at it and show us what you are doing. That would help us see your table relationships better (please use inner/outer joins to make it clearer) and give us a better idea of what you are trying to do.

Also, providing data examples like some of the other posts really helps us help you get a good solution.

-Aaron

On 6/29/06, Penchalaiah P. <[EMAIL PROTECTED]> wrote:
emp_table (
  Cdacno varchar(7) (primary key),
  Personal_No varchar(10) (foreign key),
  Name varchar(40));

personal_table (
  Personal_No varchar(10) (primary key),
  Cdacno varchar(7),
  Date_Of_Birth date);

unit_master (
  Unit_id varchar(10) (primary key),
  Unit_Name varchar(25),
  Unit_Location varchar(25));

Unit_Details_table (
  Unit_id varchar (foreign key),
  CDA_No varchar(7) (foreign key));

rank_table (
  Rank_ID numeric(2) (primary key),
  Rank_Code numeric(2),
  Rank_Name varchar(25));

Rank_Date_table (
  Rank_Date__ID numeric NOT NULL,
  CDA_No varchar(7) (foreign key),
  Rank_ID numeric(2));

 

My query is: if I give cdacno, I have to get per_no from personal_table. With this I have to display rank_name from rank_table, name from emp_table, and unit_name from unit_master.

Likewise, if I give per_no, I have to get cdacno from emp_table, and with this display rank_name from rank_table, name from emp_table, and unit_name from unit_master.

Here unit_name depends on unit_details_table, and rank_name depends on rank_date_table.

Doing these things, it first has to check, when we are giving cdacno, whether per_no is null or not; likewise, if I give per_no, it has to check whether cdacno is null or not.

Let me know the solution. I have already written one function to get per_no if I give cdacno.
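A first stab along the lines Aaron asks for might look like the following sketch against the schema quoted above. The join conditions are assumptions inferred from the column names (Unit_Details_table.CDA_No and Rank_Date_table.CDA_No linking back to emp_table.Cdacno); the original post does not state them explicitly, and the sample cdacno value is made up:

```sql
-- Hypothetical lookup by cdacno; join conditions are guesses from column names.
SELECT e.Name,
       p.Personal_No,
       r.Rank_Name,
       u.Unit_Name
FROM   emp_table e
JOIN   personal_table p      ON p.Cdacno = e.Cdacno
JOIN   Unit_Details_table ud ON ud.CDA_No = e.Cdacno
JOIN   unit_master u         ON u.Unit_id = ud.Unit_id
JOIN   Rank_Date_table rd    ON rd.CDA_No = e.Cdacno
JOIN   rank_table r          ON r.Rank_ID = rd.Rank_ID
WHERE  e.Cdacno = 'ABC1234';
```

The lookup by per_no would be the same query with `WHERE p.Personal_No = ?` instead; posting an attempt like this, plus sample data, is what makes it possible for the list to refine it.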


Re: [SQL] Data Entry and Query forms

2006-06-29 Thread Aaron Bono
I agree, using ODBC is probably a good first step, especially for M$ Access.

For developers I recommend the EMS Manager tools. They are a commercial product, but I have been very pleased with them and we use the tools every day. They are not exactly like SQL Manager but they serve the same purpose. See http://www.sqlmanager.net/ .

Are there any web-based management tools for PostgreSQL (like phpMyAdmin, except for PostgreSQL)? I thought I saw a post some time back about one but don't remember the name.

-Aaron

On 6/29/06, Markus Schaber <[EMAIL PROTECTED]> wrote:
Hi, Anthony,

Anthony Kinyage wrote:
> Before continuing our plans, I need to know how I can, with PostgreSQL,
> have data entry and query forms on the client side (how can I design
> data entry and query forms?).

PostgreSQL itself is a database server, not a front-end form designer. However, using the PostgreSQL ODBC driver, you should be able to continue using your current front-ends (like Access). If you don't like this, maybe you can look at GNU Enterprise, the OpenOffice.org database module, Delphi/Kylix, or others. (I admit I've used none of those yet; we use PostgreSQL as a backend for "real" applications.)

HTH,
Markus

--
Markus Schaber | Logical Tracking&Tracing International AG
Dipl. Inf. | Software Development GIS

Fight against software patents in EU! www.ffii.org www.nosoftwarepatents.org



Re: [SQL] Alternative to Select in table check constraint

2006-07-01 Thread Aaron Bono
This is more of an implementation option, but when I worry about what is active/inactive I put start/end dates on the tables. Then you don't need active indicators: you just look for the record where now() >= start date and (now() <= end date or end date is null). You can even activate/deactivate a badge on a future date.

Of course, this complicates data integrity: you will need some kind of specialized trigger that checks the data and makes sure there are no date overlaps, to ensure you don't have two badges active at the same time. But it also gives you a history of badges and their activities.
-AaronOn 6/30/06, Richard Broersma Jr <[EMAIL PROTECTED]> wrote:
> > CHECK ( 1 = ALL ( SELECT COUNT(STATUS)
> >                   FROM BADGES
> >                   WHERE STATUS = 'A'
> >                   GROUP BY EMPNO))
> >
> > From the PostgreSQL docs
> > (http://www.postgresql.org/docs/8.1/interactive/sql-createtable.html)
>
> CREATE UNIQUE INDEX one_a_badge_per_employee_idx
> ON badges (empno)
> WHERE status = 'A';
>
> http://www.postgresql.org/docs/8.1/interactive/sql-createindex.html
> http://www.postgresql.org/docs/8.1/interactive/indexes-partial.html

Michael,

Partial indexes seem to be "what the doctor ordered!" And your suggestion is right on: the idea of the constraint is to allow only one active badge status at a time.

But now that I think about it, using the author's suggestion (if it actually worked), how would it be possible to change the active status from one badge to another?

Oh well, partial indexes are obviously the superior solution, since the entire table does not have to be scanned to determine whether the new badge can be set to active.

Once again, thanks for the insight.

Regards,
Richard Broersma Jr.
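To make the partial-index approach concrete, here is a minimal sketch following the table and column names quoted in the thread; the badge_id column and the 'I' (inactive) status value are assumptions for illustration:

```sql
-- One active ('A') badge per employee, enforced by a partial unique index.
CREATE UNIQUE INDEX one_a_badge_per_employee_idx
    ON badges (empno)
    WHERE status = 'A';

-- Swapping the active badge: deactivate the old one first, inside a
-- transaction, or the unique index will reject the second active row.
BEGIN;
UPDATE badges SET status = 'I' WHERE empno = 42 AND status = 'A';
UPDATE badges SET status = 'A' WHERE empno = 42 AND badge_id = 7;
COMMIT;
```

This answers Richard's "how would I change the active status from one badge to another" question: order the updates so the constraint is never violated mid-transaction.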


Re: [SQL] SQL (Venn diagram type of logic)

2006-07-04 Thread Aaron Bono
You can start by creating 3 views for your 3 categories:

CREATE OR REPLACE VIEW cat_a (
   account_id,
   sales_cat_a
) AS
SELECT
   account_id,
   sum(sale_price) AS sales_cat_a
FROM
   sales -- Assuming sales is the table with the data
WHERE
   product_id IN ('prod1', 'prod2')
GROUP BY
   account_id;

Then do the same with a cat_b and a cat_c view. This will give you subtotals for each category.

As for your Venn diagram, these views may or may not be helpful. To determine what to do, please provide a little more information:

1. What do you want your final result to look like -- what columns?
2. Will each row returned represent one account, one of the 7 sections of the diagram, or a combination of the two?
3. When you say total amount, do you mean the total amount spent in that section of the diagram or the total amount spent by that person?
On 6/29/06, Vinnie Ma <[EMAIL PROTECTED]> wrote:
Hello everyone.

I have a customer purchase table that keeps track of transaction details. I'm looking to sum up the total spent by each customer and classify each customer based on which products they bought.

Data looks like...

account_id, date, product_id, sale_price
----------------------------------------
cust1, 03/21/2005, prod1, 50
cust1, 03/22/2005, prod4, 35
cust1, 05/08/2005, prod2, 50
cust2, 04/21/2005, prod16, 20
cust3, 04/16/2005, prod1, 50
etc..

Setup: I'm picturing a Venn diagram in my head, but I need help with the decision logic to classify each customer.

Example:
Category A: Prod1, Prod2
Category B: Prod3, Prod4
Category C: All products not in Category A or Category B

- A customer who has bought Prod1, Prod2 would be in the A-only category.
- A customer who has bought Prod1, Prod3 would be in the A&B category.
- A customer who has bought Prod18 would be in the C category.
- A customer who has bought Prod4, Prod16 would be in the B&C category.
- A customer who has bought Prod1, Prod4, Prod15 would be in the A&B&C category.
- etc...

Then for each combination of categories (7 in total?), I will need the number of accounts in that category and the total spent by those accounts.

Any help or direction would be greatly appreciated. Thank you in advance.

-Vince
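One way to get all 7 sections in a single pass, rather than via three views, is to compute per-account category membership flags and then group by them. This is a sketch using the product lists from the example; the table name `sales` is the assumption Aaron made above:

```sql
-- Classify each account into one of the 7 Venn sections, then aggregate.
SELECT in_a, in_b, in_c,
       count(*)         AS num_accounts,
       sum(total_spent) AS total_spent
FROM (
    SELECT account_id,
           bool_or(product_id IN ('prod1', 'prod2')) AS in_a,
           bool_or(product_id IN ('prod3', 'prod4')) AS in_b,
           bool_or(product_id NOT IN ('prod1','prod2','prod3','prod4')) AS in_c,
           sum(sale_price) AS total_spent
    FROM   sales
    GROUP BY account_id
) per_account
GROUP BY in_a, in_b, in_c;
```

Each output row is one section of the diagram (a true/false combination of in_a/in_b/in_c), with the account count and total spent for that section; here "total spent" is the total each person spent anywhere, one of the two interpretations Aaron asks about.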


Re: [SQL] Problem with array subscripts in plpgsql trigger function

2006-07-05 Thread Aaron Bono
On 7/5/06, Erik Jones <[EMAIL PROTECTED]> wrote:
Ok, I have a trigger set up on the following (stripped down) table:

CREATE TABLE members (
    member_id          bigint,
    member_status_id   smallint,
    member_is_deleted  boolean
);

Here's a shortened version of the trigger function:

CREATE OR REPLACE FUNCTION update_member() RETURNS TRIGGER AS $um$
DECLARE
    status_deltas integer[];
BEGIN
    IF (NEW.member_status_id != OLD.member_status_id AND
        NEW.member_is_deleted IS NOT TRUE) THEN
        status_deltas[NEW.member_status_id] := 1;
        status_deltas[OLD.member_status_id] := -1;
    END IF;
    /* and after a couple more such conditional assignments I use the
       values in status_deltas to update another table holding status
       totals here */
END;
$um$ LANGUAGE plpgsql;

On the two lines that set array values I'm getting the following error:

ERROR:  invalid array subscripts

What gives?

What values are being used for member_status_id?



Re: [SQL] Problem with array subscripts in plpgsql trigger function

2006-07-05 Thread Aaron Bono
On 7/5/06, Erik Jones <[EMAIL PROTECTED]> wrote:
Aaron Bono wrote:
> On 7/5/06, Erik Jones <[EMAIL PROTECTED]> wrote:
>
>     Ok, I have a trigger set up on the following (stripped down) table:
>
>     CREATE TABLE members (
>         member_id          bigint,
>         member_status_id   smallint,
>         member_is_deleted  boolean
>     );
>
>     Here's a shortened version of the trigger function:
>
>     CREATE OR REPLACE FUNCTION update_member() RETURNS TRIGGER AS $um$
>     DECLARE
>         status_deltas integer[];
>     BEGIN
>         IF (NEW.member_status_id != OLD.member_status_id AND
>             NEW.member_is_deleted IS NOT TRUE) THEN
>             status_deltas[NEW.member_status_id] := 1;
>             status_deltas[OLD.member_status_id] := -1;
>         END IF;
>         /* and after a couple more such conditional assignments I use the
>            values in status_deltas to update another table holding status
>            totals here */
>     END;
>     $um$ LANGUAGE plpgsql;
>
>     On the two lines that set array values I'm getting the following error:
>
>     ERROR:  invalid array subscripts
>
>     What gives?
>
> What values are being used for member_status_id?

1, 2, and 3.

I did some digging through the documentation and cannot find any examples of using arrays like this. Do you have to initialize the array before you use it? Does anyone know where to look for information about using arrays in stored procedures?

-Aaron
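For what it's worth, a plausible culprit is that status_deltas is still NULL when the subscript assignments run; PostgreSQL releases of that era did not let you assign into an element of an uninitialized array. Initializing the array first sidesteps the problem. A hedged sketch of the fix (the '{0,0,0}' literal assumes member_status_id is always 1, 2, or 3, as Erik reported; the RETURN NEW was presumably elided from the stripped-down original):

```sql
CREATE OR REPLACE FUNCTION update_member() RETURNS TRIGGER AS $um$
DECLARE
    -- Initialize so every subscript we touch already exists.
    status_deltas integer[] := '{0,0,0}';
BEGIN
    IF (NEW.member_status_id != OLD.member_status_id AND
        NEW.member_is_deleted IS NOT TRUE) THEN
        status_deltas[NEW.member_status_id] := 1;
        status_deltas[OLD.member_status_id] := -1;
    END IF;
    RETURN NEW;  -- assumed; a row-level trigger function must return a row
END;
$um$ LANGUAGE plpgsql;
```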


Re: [SQL] week ending

2006-07-06 Thread Aaron Bono
On 7/5/06, Keith Worthington <[EMAIL PROTECTED]> wrote:
Hi All,

I just finished writing a query that groups data based on the week number.

SELECT EXTRACT(week FROM col_a) AS week_number,
       sum(col_b) AS col_b_total
  FROM foo
 WHERE foobar
 GROUP BY EXTRACT(week FROM col_a)
 ORDER BY EXTRACT(week FROM col_a);

I would like to generate the starting date or ending date based on this number. IOW, instead of telling the user a "week number", which they won't understand, I would like to provide either Friday's date for "week ending" or Monday's date for "week beginning".

SELECT <week ending date> AS week_ending,
       sum(col_b) AS col_b_total
  FROM foo
 WHERE foobar
 GROUP BY EXTRACT(week FROM col_a)
 ORDER BY EXTRACT(week FROM col_a);

Try this. It maps Saturday to the Friday before it and Sunday to the Friday after, so if you want Saturday or Sunday to land on different weeks you will need to do a little tweaking, but this should get you going.

SELECT
   date_trunc('day', col_a + (interval '1 day' * (5 - extract(dow from col_a)))) AS week_ending,
   sum(col_b) AS col_b_total
FROM foo
GROUP BY
   date_trunc('day', col_a + (interval '1 day' * (5 - extract(dow from col_a))))
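To sanity-check the Friday-mapping expression, one can evaluate it against a few fixed dates. This sketch uses a VALUES list in FROM, which requires PostgreSQL 8.2 or later (newer than the era of this thread); the dates themselves are arbitrary:

```sql
-- dow: Sun=0 .. Sat=6, so (5 - dow) shifts each date to a Friday.
SELECT d::date AS input_day,
       (d + interval '1 day' * (5 - extract(dow FROM d)))::date AS week_ending
FROM (VALUES (timestamp '2006-07-03'),  -- Monday   -> 2006-07-07 (that Friday)
             (timestamp '2006-07-08'),  -- Saturday -> 2006-07-07 (Friday before)
             (timestamp '2006-07-09')   -- Sunday   -> 2006-07-14 (Friday after)
     ) AS t(d);
```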
 


Re: [SQL] Alternative to serial primary key

2006-07-06 Thread Aaron Bono
On 7/6/06, David Clarke <[EMAIL PROTECTED]> wrote:
I posted a couple of weeks back a question regarding the use of a 100-char column as a primary key, and the responses uniformly advised the use of a serial column. My concern is that the key is effectively abstract and I want to use the column as a foreign key in other tables.

I have a simple question... why do you want to use the column as a foreign key in other tables? If you use the serial column then all you need is a simple join to get the 100-char column out in your query. If you need to make things simpler, just create a view that does the join for you.

Either there is some requirement here that I am not aware of, or it sounds like you may be trying to use a sledgehammer on a nail.

-Aaron


Re: [SQL] Foreign Key: what value?

2006-07-06 Thread Aaron Bono
On 7/6/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
i always make my foreign key column data type int4. i'm not sure if i read that somewhere. anyone, please feel free to chime in if this isn't good practice.

read up on currval, nextval and that whole section. you can begin by getting the nextval, assigning it to a variable, inserting it into your primary table and then inserting it into your related table as a foreign key. from what i understand, either way should be bulletproof. the way i described is more code, but some minds might not mind paying that price b/c they like the process better.

good luck.

Bigserials are simply bigints with a sequence that does the nextval part for you. Your approach works but takes more coding on your part. I would recommend using bigserial so you cut some of the work out for yourself.

-Aaron
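The bigserial shortcut described above might look like this sketch; the table and column names are hypothetical, and the sequence name follows PostgreSQL's default table_column_seq convention for serial columns:

```sql
-- With bigserial, the sequence work happens implicitly on insert.
CREATE TABLE parent (
    parent_id bigserial PRIMARY KEY,
    name      text
);
CREATE TABLE child (
    child_id  bigserial PRIMARY KEY,
    parent_id bigint REFERENCES parent (parent_id)
);

INSERT INTO parent (name) VALUES ('example');
-- currval() returns the id just generated in this session,
-- so the child row can reference it without an explicit nextval().
INSERT INTO child (parent_id) VALUES (currval('parent_parent_id_seq'));
```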


Re: [SQL] Alternative to serial primary key

2006-07-06 Thread Aaron Bono
On 7/6/06, David Clarke <[EMAIL PROTECTED]> wrote:
To recap: yes, there is only a single column, and yes, it is varchar. I need to do a lookup on the address column, which is unique, and use it as a foreign key in other tables. Using a serial id would obviously work and has been recommended. But having a hash function over the address column as the primary key means I can always regenerate my primary key from the data, which is impossible with a serial key. I believe the risk of collision using md5 is effectively zero on this data, and I can put a unique index over it.

So if you have:

addresses
    address_id  bigserial (pk),
    address

person
    person_id bigserial (pk),
    first_name,
    last_name,
    address_id

you can do something like

INSERT INTO person (
    first_name,
    last_name,
    address_id
)
SELECT
    'Joe',
    'Blow',
    address_id
FROM addresses
WHERE addresses.address = ?;

No regeneration of the PK is necessary. If you index addresses.address, the insert should run quickly, right?

-Aaron Bono


Re: [SQL] Alternative to serial primary key

2006-07-06 Thread Aaron Bono
On 7/7/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Plus I feel I would be remiss in not exploring an
> alternative to the serial key.

why? it is a tried and true method.

Actually, there are two reasons I think it is good for someone new to SQL to explore this option. First of all, it is very educational. Secondly, it is the person who stands up and says, "I know everyone does it this way, but what if we did it that way?" who has a chance of discovering something new.

For this particular topic, however, using the serial is very easy and using a hash is not only error prone but much more difficult.

It is good to see different philosophies about foreign keys though!

-Aaron Bono


Re: [SQL] Alternative to serial primary key

2006-07-07 Thread Aaron Bono
On 7/7/06, David Clarke <[EMAIL PROTECTED]> wrote:
The question remains regarding the use of a string value as a primary key for the table and as a foreign key in other tables. If I use the address column as a foreign key in a different table, will postgres physically duplicate the data or will it simply attach the appropriate index magically behind the scenes? I agree that the address column is fairly small, and I've heard of others using the likes of GUIDs as key values, which are not a lot smaller than I would expect my average address to be.

Theoretically, using the address as a foreign key WILL duplicate the data. I don't know if there is some fancy way PostgreSQL optimizes the use of space for indexed fields or foreign keys, though. You might need feedback from someone who has looked at the source code for that, and it may depend on the version you are using. Unless you are looking at a huge number of records, though, the size difference is of little concern; disk space is cheap.

As far as whether you should use varchar as a foreign key: as someone mentioned before, you may be treading on religious territory there. It can be done and is done frequently. I never do it, but I really cannot come up with a compelling argument why not, other than that it is just one of the standards I have adopted for my team.

-Aaron


Re: [SQL] Select Maths

2006-07-07 Thread Aaron Bono
On 7/7/06, Phillip Smith <[EMAIL PROTECTED]> wrote:
Same SELECT query as before, different area of it... I have a function that calculates the recommended purchase order quantity for a stock item based off various other values and functions:

pqty(stock.code) AS "pqty"

This needs to be rounded up / down to the nearest multiple of the purchase unit quantity for that product. It's Friday afternoon and my head has refused to help me work out the maths all afternoon!

Example:
    Pqty = 60
    Purchase Unit = 25
    Pqty needs to be rounded down to 50.

I guess I'm also asking if I should do this in the pqty function or in the SELECT query to optimize the result?

select 25 * round(cast(60 as double precision) / 25)
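Generalized from the literals to the columns in the thread, the idea might be sketched as follows. round() snaps to the nearest multiple (60 with unit 25 gives 50, since 2.4 rounds to 2); floor() is the variant for always rounding down:

```sql
-- Nearest multiple of purchase_unit, and an always-round-down variant.
SELECT stock.code,
       stock.purchase_unit
       * round(cast(pqty(stock.code) AS double precision) / stock.purchase_unit)
           AS pqty_nearest,
       stock.purchase_unit
       * floor(cast(pqty(stock.code) AS double precision) / stock.purchase_unit)
           AS pqty_down
FROM stock;
```

Note the example value 60 rounds to 50 either way; the two variants differ only when pqty lands in the upper half of an interval (e.g. 65 gives 75 nearest, 50 down).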


Re: [SQL] Select CASE Concatenation

2006-07-07 Thread Aaron Bono
On 7/7/06, Phillip Smith <[EMAIL PROTECTED]> wrote:
I have a SELECT statement, part of which is a "Flags" column which is a CASE function, but I need to be able to concatenate the results together. Example: in the below, I need to show both "@" and "K" if both of the CASE blocks are true... Possible?

CASE WHEN stkeoq(stock.code) = -1 THEN '@'
     WHEN stock.kit_pack = 'Y' THEN 'K'
END AS "flags",

Note: "stkeoq" is a function.

The actual CASE is going to end up with 7 individual tests and therefore 7 different flags, and I'll need to test and concatenate all the true ones...

With a single CASE you would need to provide all possible combinations. But perhaps there is a way to put the two separate CASE statements together with a || concatenation:

CASE WHEN stkeoq(stock.code) = -1 THEN '@'
     ELSE ''
END
||
CASE WHEN stock.kit_pack = 'Y' THEN 'K'
     ELSE ''
END
AS "flags"

Is this what you are looking for?

-Aaron Bono


Fwd: [SQL] Atomar SQL Statement

2006-07-07 Thread Aaron Bono
On 7/7/06, Scott Marlowe <[EMAIL PROTECTED]> wrote:
On Fri, 2006-07-07 at 13:07, Stephan Szabo wrote:
> On Fri, 7 Jul 2006, Michael Glaesemann wrote:
> > On Jul 7, 2006, at 7:55 , Weber, Johann (ISS Kassel) wrote:
> >
> > > My concern: in a multi threaded environment, can a second thread
> > > interrupt this statement and eventually insert the same email
> > > address in the table with a different id? Or is this statement atomic?
> >
> > You're safe. Take a look at the FAQ entries on SERIAL:
> > http://www.postgresql.org/docs/faqs.FAQ.html#item4.11.2
>
> I don't think he is, because I don't think the issue is the SERIAL
> behavior, but instead the NOT EXISTS behavior. Won't the NOT EXISTS in
> read committed potentially be true for both concurrent sessions if the
> second happens before the first commits, which then would mean that both
> sessions will go on to attempt the insert (with their own respective ids
> from the serial)? Without a unique constraint on email I think he can end
> up with the same email address with two different ids.

Yep, this is a possible race condition, if memory serves, and this is the reason for unique indexes. That way, should another transaction manage to sneak in between the two parts of this query, the unique index will still keep your data coherent.

I deal with these issues by first placing a unique constraint on the email field (as was suggested above) and then synchronizing the code that does the check and insert. I have also looked for instances where a unique-constraint error is thrown and had the system give the user a meaningful error or respond appropriately.

-Aaron
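One way to make the check-then-insert race-safe inside the database is to rely on the unique constraint and trap the violation in plpgsql. This is a sketch only; the table, column, and function names are assumptions, and RETURNING requires PostgreSQL 8.2 or later:

```sql
-- Assumes: CREATE TABLE users (id serial PRIMARY KEY, email text UNIQUE);
CREATE OR REPLACE FUNCTION get_or_create_user(p_email text) RETURNS integer AS $$
DECLARE
    v_id integer;
BEGIN
    SELECT id INTO v_id FROM users WHERE email = p_email;
    IF FOUND THEN
        RETURN v_id;
    END IF;
    BEGIN
        INSERT INTO users (email) VALUES (p_email)
            RETURNING id INTO v_id;
    EXCEPTION WHEN unique_violation THEN
        -- A concurrent session won the race; fetch the row it inserted.
        SELECT id INTO v_id FROM users WHERE email = p_email;
    END;
    RETURN v_id;
END;
$$ LANGUAGE plpgsql;
```

The unique constraint remains the real guarantee; the exception handler just turns the race loss into a lookup instead of an error surfaced to the user.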


Re: [SQL] SELECT substring with regex

2006-07-07 Thread Aaron Bono
On 7/7/06, Rodrigo De Leon <[EMAIL PROTECTED]> wrote:
On 7/7/06, T E Schmitz <[EMAIL PROTECTED]> wrote:
> But that takes me to the next problem:
>
> For the sake of the example I simplified the regular pattern.
> In reality, BASE_NAME might be:
>
> 28mm
> 28-70mm
>
> So the reg. expr. requires brackets:
>
> substring (NAME, '^(\\d+(-\\d+)?mm)' ) as BASE_NAME
>
> Actually, the pattern is more complex than that and I cannot see how I
> can express it without brackets.

Maybe:

select
  substring ('150mm LD AD Asp XR Macro', '^[\\d-]*mm' ) as BASE_NAME,
  substring ('150mm LD AD Asp XR Macro', '^[\\d-]*mm (.*)$') as SUFFIX;

select
  substring ('28-70mm LD AD Asp XR Macro', '^[\\d-]*mm' ) as BASE_NAME,
  substring ('28-70mm LD AD Asp XR Macro', '^[\\d-]*mm (.*)$') as SUFFIX;

etc...

Regards,
Rodrigo

Is there a reason this column wasn't separated into two different columns? Or perhaps into a child table, if there could be more than one XXXmm value in the field?

Just curious.

-Aaron


Re: [SQL] SELECT substring with regex

2006-07-07 Thread Aaron Bono
On 7/7/06, T E Schmitz <[EMAIL PROTECTED]> wrote:
I am trying to come up with a semi-automatic solution to tidy up some data. If it's got to be done manually via the GUI it would mean a lot of dummy work [for the customer].

I would recommend you alter the GUI to NOT allow any more bad data to get in: perhaps give some nice validation and a friendly error message about the correct format, or provide separate fields to be filled out so the user can easily enter the data and the GUI assembles the correct string. It won't fix the current bad data but it would stop the insanity from proliferating :)

First of all I did a 5-table join to select those NAMEs which don't follow the required pattern: the pattern describes a photographic lens (focal length followed by lens speed (aperture)) and nothing else. Unfortunately, there are a few hundred occurrences where a few attributes have been appended which should have been stored elsewhere.

Valid entries would be:

"28mm F2.8" (prime lens)
"30-70mm F4" (zoom lens)
"30-70mm F2.8" (zoom lens)
"30-100mm F4.5-5.6" (zoom lens with variable speed)

In the WHERE clause I have specified all those NAMEs which follow that pattern but have some gubbins appended:

WHERE NAME ~ '^((\\d+(-\\d+)?)mm F((\\d+(\.\\d+)?)+(\.(\\d+(\.\\d+)?)+)?))\\D+$'

which gives me a listing of those candidates that need to be amended, manually or otherwise. Next, I wanted to produce a result which splits NAME into what it should be (BASE) and attributes (SUFFIX). Maybe I can generate some SQL from that to tidy up the data.

You might also consider adding the base and suffix columns with a trigger that parses the name field, sets the values of base and suffix, and also does any integrity checks during inserts and updates. Finally, after the data is cleaned up and the client apps are changed to use base and suffix instead of name, get rid of the name column.

Then again, this may be exactly what you are already trying to do.

-Aaron
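Such a trigger might be sketched as follows. The regexes are simplified versions of the pattern discussed in the thread, and the table name `lens` plus the base/suffix columns are hypothetical; treat the whole thing as illustrative rather than a drop-in fix:

```sql
-- Assumes columns base and suffix have been added to a hypothetical lens table.
CREATE OR REPLACE FUNCTION split_lens_name() RETURNS trigger AS $$
BEGIN
    -- Keep the focal length + aperture part as BASE ...
    NEW.base   := substring(NEW.name FROM '^(\\d+(-\\d+)?mm F[\\d.-]+)');
    -- ... and everything after it as SUFFIX.
    NEW.suffix := substring(NEW.name FROM '^\\d+(-\\d+)?mm F[\\d.-]+ (.*)$');
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER lens_name_split
    BEFORE INSERT OR UPDATE ON lens
    FOR EACH ROW EXECUTE PROCEDURE split_lens_name();
```

One caveat: with a capture group, substring() returns the first parenthesized match, so the SUFFIX pattern above would need non-capturing groups or a rework in practice; it is shown only to convey the shape of the approach.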


Re: [SQL] Select Maths

2006-07-10 Thread Aaron Bono
On 7/10/06, Phillip Smith <[EMAIL PROTECTED]> wrote:

Beautiful -- works a treat. Thanks Aaron.

A follow-on problem now... I have the below column in the select, but I need to validate the value across all 3 rules -- I need to assign it to a variable!!

Example: my pqty function calculates a value less than the supplier's minimum order qty (and therefore fails the first CASE below), so I need to set the column to a new value (stock.purchase_unit) -- that's all OK. But I need to check this new value against the remaining 2 CASEs...

CASE WHEN pqty(stock.code) < stock.purchase_unit THEN stock.purchase_unit
     --^^^-- Check that our suggested purchase qty is greater than the
     --      supplier's minimum order qty
     WHEN MOD(pqty(stock.code), stock.box_qty) > 0 THEN
          stock.box_qty * ROUND(CAST(pqty(stock.code) AS DOUBLE PRECISION) / stock.box_qty)
     --^^^-- Check that our suggested purchase qty is a multiple of the box qty
     WHEN pqty(stock.code) < (urate(stock.code) * creditors.review_cycle) THEN
          urate(stock.code) * creditors.review_cycle
     --^^^-- Check that our suggested purchase qty is greater than our
     --      Usage Rate x Creditor Review Cycle
END AS "pqty",

Can you provide example values and show where it is and is not working? I am not quite sure what you are trying to do here.

-Aaron


Re: [SQL] MS-SQL<->Postgres sync

2006-07-10 Thread Aaron Bono
On 7/10/06, Kevin Bednar <[EMAIL PROTECTED]> wrote:

  
  
Looking to keep 2 databases in sync, at least semi-realtime if possible, although running a batch update every x mins wouldn't be out of the question. One db is postgres and the other is ms-sql. It's to keep inventory in sync between 2 separate locations, one being a brick-and-mortar store and the other an ecommerce system for a web site. Any and all help is appreciated since I can't find much of anything on syncing these 2 database systems!

Kevin Bednar

Can you provide more information about what data needs to be fed back and forth? Are both databases being updated, or can you set one as the system of record? Why can't you have the web site read directly from MS SQL Server?

With a little more information we can provide more useful advice.

Thanks,
Aaron


Re: [SQL] MS-SQL<->Postgres sync

2006-07-10 Thread Aaron Bono
Please reply to all when replying on the list...

On 7/10/06, Kevin Bednar <[EMAIL PROTECTED]> wrote:

Thanks Aaron. What I'm actually trying to do is this:

Postgres in the physical store, being used by the POS system as the back end. MS-SQL being used on the web server by the ecommerce system.

Table structures are different of course, but there are some common fields. What I want to do is, when an item is sold in the store, update the quantity field for that SKU number on the web site, and vice versa. Only 2 fields basically need to be updated on each side: the SKU number and quantity. This is to keep the product table in sync, to try to avoid selling product that isn't in stock, and to set a flag on the web system stating such. Thanks for your help.

For something this simple you are probably better off doing some custom coding.

If you have the ability to modify the databases, I would recommend putting a trigger on each database so when a product is sold, that sale is recorded in a temp table (which serves as a queue of data that needs to be synched). Then have a process read from these temp tables and feed the data back to the other database. Of course, I am assuming you have full control to change the databases; some vendors do not allow that.

You may be able to connect the databases directly: MS SQL Server will definitely allow you to connect via ODBC to another database and feed data back and forth. I think there are add-on modules for PostgreSQL, but I have not tried to have PostgreSQL talk to other databases before.

-Aaron
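On the Postgres side, the trigger-plus-queue idea might be sketched like this; every name here (sync_queue, product, sku, qty) is hypothetical, since the actual schemas were not posted:

```sql
-- Queue of quantity changes waiting to be pushed to the other system.
CREATE TABLE sync_queue (
    sync_id    serial PRIMARY KEY,
    sku        varchar(20) NOT NULL,
    qty        integer     NOT NULL,
    queued_at  timestamp   NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION queue_qty_change() RETURNS trigger AS $$
BEGIN
    IF NEW.qty IS DISTINCT FROM OLD.qty THEN
        INSERT INTO sync_queue (sku, qty) VALUES (NEW.sku, NEW.qty);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER product_qty_sync
    AFTER UPDATE ON product
    FOR EACH ROW EXECUTE PROCEDURE queue_qty_change();

-- A periodic job then drains sync_queue and applies the rows to MS-SQL,
-- deleting each queue row once the remote update is confirmed.
```

A mirror-image trigger and queue would live on the MS-SQL side; the drain job needs loop-prevention (e.g., skip re-queueing updates that came from the sync process itself).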


Re: [SQL] Can function results be used in WHERE?

2006-07-10 Thread Aaron Bono
On 7/10/06, Bryce Nesbitt <[EMAIL PROTECTED]> wrote:
I have a function, the results of which seem to apply to ORDER BY and HAVING, but not to WHERE. Is this expected?

-- Return distance in some mystery units (TODO: convert to miles or kilometers)
CREATE FUNCTION calculate_distance(double precision, double precision,
    double precision, double precision) RETURNS double precision
AS '
BEGIN
    RETURN (3963 * acos( sin($1/57.2958) * sin($3/57.2958) +
        cos($1/57.2958) * cos($3/57.2958) * cos($4/57.2958 - $2/57.2958) ));
END;
' LANGUAGE plpgsql;

demo=# select pod_code, lat, lon, calculate_distance(lat,lon,37.789629,-122.422082)
       from eg_pod where 4 > 5::double precision order by 4 limit 10;

 pod_code |    lat    |     lon     | calculate_distance
----------+-----------+-------------+--------------------
       44 |         0 |           0 |                  0
       45 |         0 |           0 |                  0
       69 | 37.789629 | -122.422082 |                  0
       51 | 37.788166 | -122.421488 |  0.106273303754946
       71 | 37.794228 | -122.421382 |  0.320393524437476
       73 | 37.787878 | -122.411644 |  0.583267064983836
       37 | 37.791736 | -122.411604 |  0.590977027054446
       46 | 37.784929 | -122.412782 |  0.603416307249032
       50 | 37.780329 | -122.418482 |  0.672685350683496
       30 | 37.780419 | -122.417764 |  0.679355355047995
(10 rows)

demo=# select pod_code, lat, lon, calculate_distance(lat,lon,37.789629,-122.422082)
       from eg_pod having calculate_distance(lat,lon,37.789629,-122.422082) > 5
       order by 4;

 pod_code |    lat    |     lon     | calculate_distance
----------+-----------+-------------+--------------------
       21 | 37.710581 | -122.468864 |   6.03655070159813
       77 | 37.805427 |  -122.29528 |   7.01595024232628
       29 | 37.802684 | -122.275976 |    8.0364304687727
       12 | 37.806133 | -122.273827 |   8.18282157050301
       23 | 37.797327 |  -122.26598 |   8.54878571904839
       57 | 37.829592 | -122.266347 |   8.94791199923289
       35 | 37.809327 |  -122.25448 |   9.26077996779577
       47 | 37.851957 | -122.270376 |   9.34292370436932

demo=# select version();
 PostgreSQL 7.4.12 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2.3
 20030502 (Red Hat Linux 3.2.3-20)
(1 row)

First, I recommend marking your function IMMUTABLE since, given the same arguments, it always gives the same result; this allows PostgreSQL to optimize the function call and cache results. Then, don't use "4"; use "calculate_distance(lat,lon,37.789629,-122.422082)". The positional form is very ambiguous and subject to breaking if you change the columns in your select. It may also be the reason you have a problem, though I don't use that syntax so cannot be sure.

The only difference between HAVING and WHERE is that WHERE is applied before a GROUP BY and HAVING after. Since you have no GROUP BY there should be no difference between the queries. The only other difference is the "4 > 5::double precision", so that is where I would start.

-Aaron
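Declaring the function IMMUTABLE is just a matter of adding the keyword at the end of the definition; a sketch of the same function with the volatility marked:

```sql
-- Same mystery-unit distance function, now marked IMMUTABLE so the
-- planner may pre-evaluate calls whose arguments are all constants.
CREATE OR REPLACE FUNCTION calculate_distance(double precision, double precision,
    double precision, double precision) RETURNS double precision
AS '
BEGIN
    RETURN (3963 * acos( sin($1/57.2958) * sin($3/57.2958) +
        cos($1/57.2958) * cos($3/57.2958) * cos($4/57.2958 - $2/57.2958) ));
END;
' LANGUAGE plpgsql IMMUTABLE;
```

IMMUTABLE is a promise to the planner, not a cache per se: it is only safe because this function depends on nothing but its arguments.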


Re: [SQL] Can function results be used in WHERE?

2006-07-10 Thread Aaron Bono
On 7/10/06, Bryce Nesbitt <[EMAIL PROTECTED]> wrote:
I think it is ugly also, but no other syntax seems to work:

stage=# select pod_code, lat, lon,
        calculate_distance(lat,lon,37.789629,-122.422082) as dist
        from eg_pod where dist < 1 order by dist desc limit 10;
ERROR:  column "dist" does not exist

SELECT
   pod_code,
   lat,
   lon,
   calculate_distance(lat,lon,37.789629,-122.422082) as dist
FROM eg_pod
WHERE calculate_distance(lat,lon,37.789629,-122.422082) < 1
ORDER BY calculate_distance(lat,lon,37.789629,-122.422082) desc limit 10;


Re: [SQL] Can function results be used in WHERE?

2006-07-10 Thread Aaron Bono
On 7/10/06, Bryce Nesbitt <[EMAIL PROTECTED]> wrote:
Aaron Bono wrote:
> On 7/10/06, Bryce Nesbitt <[EMAIL PROTECTED]> wrote:
>
>     I think it is ugly also, but no other syntax seems to work:
>
>     stage=# select pod_code, lat, lon,
>             calculate_distance(lat,lon,37.789629,-122.422082) as dist
>             from eg_pod where dist < 1 order by dist desc limit 10;
>     ERROR:  column "dist" does not exist
>
> SELECT
>    pod_code,
>    lat,
>    lon,
>    calculate_distance(lat,lon,37.789629,-122.422082) as dist
> FROM eg_pod
> WHERE calculate_distance(lat,lon,37.789629,-122.422082) < 1
> ORDER BY calculate_distance(lat,lon,37.789629,-122.422082) desc limit 10;

Yep, that works. I guess with IMMUTABLE it's even efficient. But I have to pass 6 arguments, not 2. Is there a way to make it look cleaner? I had expected using the column label (e.g. "dist") to work with WHERE, just as it does with ORDER BY.

You can also try:

SELECT
   pod_code,
   lat,
   lon,
   dist
FROM (
   SELECT
      pod_code,
      lat,
      lon,
      calculate_distance(lat,lon,37.789629,-122.422082) as dist
   FROM eg_pod
) eg_pod_dist
WHERE dist < 1
ORDER BY dist desc limit 10;

If the 37.789629 and -122.422082 are static values you can create a view for it. Otherwise you can create a function or stored procedure that takes 2 arguments and returns the results of the subquery.

Just some options; I'm not sure which you would prefer, and I am sure there are more ways to do it.

-Aaron Bono


Re: [SQL] Select Maths

2006-07-10 Thread Aaron Bono
On 7/10/06, Phillip Smith <[EMAIL PROTECTED]> wrote:
















Example:

Funcation 
pqty(stock.code) calculates a value of 0 for a particular product. This fails the
last CASE that makes sure the pqty()
 value is greater than
our Usage Rate *
Review Cycle – in this case is
3. But that is less than our Minimum Order Qty (First CASE) and not a multiple
of our Box Qty (Second CASE)

Another example could be
that pqty()
 calculates less than the Minimum Order Qty (fails first CASE) so
we raise it to the Minimum Order Qty, but that new value could fail either or
both of the second CASE's.

Minimum Order Qty
= stock.purchase_unit
Box Qty = stock.box_qty

I guess a better way to word it is that pqty() returns a calculated
value each time, and I can't take that value, assign it to a variable,
and then use that variable.  If I was writing VB or similar I'd want
something like:

intPurchaseQty = pqty(stock.code)

CASE WHEN intPurchaseQty < stock.purchase_unit THEN
        intPurchaseQty = stock.purchase_unit
     WHEN MOD(intPurchaseQty, stock.box_qty) > 0 THEN
        intPurchaseQty = stock.box_qty *
            ROUND(CAST(intPurchaseQty AS DOUBLE PRECISION) / stock.box_qty)
     WHEN intPurchaseQty < (urate(stock.code) * creditors.review_cycle) THEN
        intPurchaseQty = urate(stock.code) * creditors.review_cycle
END

COLUMN = intPurchaseQty AS "pqty",
I hope that makes it a lighter shade of mud!!

Why wouldn't you be able to do this in a function?  Pass in stock.code,
stock.purchase_unit, stock.box_qty and creditors.review_cycle.  You can
then use variables in the function, right?

-Aaron
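For what it's worth, a PL/pgSQL sketch of that idea (the wrapper name purchase_qty and its parameter list are assumptions; pqty() and urate() are the functions from the post, assumed to return numeric):

```sql
CREATE OR REPLACE FUNCTION purchase_qty(
    p_code          text,
    p_purchase_unit numeric,
    p_box_qty       numeric,
    p_review_cycle  numeric
) RETURNS numeric AS $$
DECLARE
    qty numeric;
BEGIN
    qty := pqty(p_code);   -- capture the calculated value once

    IF qty < p_purchase_unit THEN
        qty := p_purchase_unit;                     -- raise to Minimum Order Qty
    END IF;
    IF MOD(qty, p_box_qty) > 0 THEN
        qty := p_box_qty * ROUND(qty / p_box_qty);  -- snap to a Box Qty multiple
    END IF;
    IF qty < urate(p_code) * p_review_cycle THEN
        qty := urate(p_code) * p_review_cycle;      -- cover the review cycle
    END IF;

    RETURN qty;
END;
$$ LANGUAGE plpgsql;
```

The SELECT can then call purchase_qty(...) once per row instead of repeating the pqty() expression in every CASE branch.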


Re: [SQL] Can function results be used in WHERE?

2006-07-10 Thread Aaron Bono
On 7/10/06, Tom Lane <[EMAIL PROTECTED]> wrote:
But as far as the underlying misconception goes, you seem to think that
"4" in the WHERE clause might somehow be taken as referring to the
fourth SELECT result column (why you don't think that the "1" would
likewise refer to the first result column isn't clear).  This is not so.
"4" means the numeric value four.  There is a special case in ORDER BY
and GROUP BY that an argument consisting of a simple integer literal
constant will be taken as a reference to an output column.  This is an
ugly kluge IMHO, but it's somewhat defensible on the grounds that
neither ordering nor grouping by a simple constant has any possible
real-world use; so the special case doesn't break anything of interest.
This would certainly not be so if we were to randomly replace integer
constants in general WHERE conditions with non-constant values.

I agree wholeheartedly with Tom: using the number in the ORDER BY is
ugly and not recommended.  Using column names is much easier to read and
is much more maintainable by team members.  I have to admit the 4 < 1
did confuse me at first.
-Aaron Bono


Re: [SQL] Avoiding RI failure with INSERT ... SELECT FROM

2006-07-12 Thread Aaron Bono
On 7/12/06, Mark Stosberg <[EMAIL PROTECTED]> wrote:
Hello!

I got an unexpected SQL failure last night, and want to see how to
prevent it in the future.  This is with 8.1.3.  I have a couple of
insert statements that run in a transaction block, initializing rows
that will be used the next day:

INSERT into item_hit_log
SELECT item_id, CURRENT_DATE + CAST('1 day' AS interval), 0
FROM items where item_state = 'available';

INSERT into item_view_log
SELECT item_id, CURRENT_DATE + CAST('1 day' AS interval), 0
FROM items where item_state = 'available';

I would recommend you list your columns in your insert statements:

INSERT into item_hit_log (item_id_fk, )
SELECT 

That makes it less prone to problems in the future (like if the column
order changes) and makes it easier for others to understand and help you
with.

The "items" table has a few hundred thousand rows in it, so this takes a
bit to run.  The error we got last night was:

   ERROR:  insert or update on table "item_hit_log" violates foreign key
   constraint "item_id_fk"
   DETAIL:  Key (item_id)=(451226) is not present in table "items".

Re-running the transaction block a few minutes later worked.  What's an
alternate design that avoids this possible error?

Does the items table allow deletes?  If so, your insert may be
attempting to do an insert for an item_id that was deleted after the
select and before the insert.  I don't know if PostgreSQL will prevent
that with table locking or not.
==   Aaron Bono   Aranya Software Technologies, Inc.   http://www.aranya.com==


Re: [SQL] SQL (Venn diagram type of logic)

2006-07-12 Thread Aaron Bono
On 7/11/06, Vinnie Ma <[EMAIL PROTECTED]> wrote:
Hello Aaron,

Thank you for the help.  My apologies for the direct email but I
couldn't figure out how to reply to the thread on the postgresql site.

Just doing a reply-to-all should send it to the mailing list, but the
from address must match the email address you registered on the list.
Once I get the purchases separated by categories, I plan to track the
month-on-month status of each account.

For example, in May 2006, based on her previous purchases, customer1 was
in category A, but in June 2006 she bought something in category B.
This would make her in bucket 2.

bucket 1: A
bucket 2: A&B
bucket 3: A&C
bucket 4: B
bucket 5: B&C
bucket 6: C
bucket 7: A&B&C

(These are the combinations I was thinking about when I had the Venn
diagram in my head.)

To follow with the example above:

In May, she would be counted as part of bucket 1 and her total spending
(life to May) will be part of the running total for bucket 1.

In June, she would be counted as part of bucket 2 and her total spending
(life to June) will be part of the running total for bucket 2.

In the end, we will have count(customers), sum(all_purchases) for each
of the 7 buckets for each month.  From a business perspective, we hope
this exercise will show the general migration patterns of customers from
one bucket to another over time.

I figured out a way to do it but it is nowhere near efficient.  I added
three boolean fields to the account table to indicate catA, catB, and
catC.  I update the table 3 times, once for each category, turning on
the boolean fields where applicable.  Then I select the count and sum
fields from an inner join on the account and purchase tables for each of
the 7 buckets.  It works, but I would have to do it manually for each
month.

Any thoughts would be most appreciated.  Thanks and please have a great
day.

What I have done for situations like this is to first create a report
schema.  Then I have a daily process that updates the reporting tables
every night.  These reporting tables keep full history and we only
update the latest information.

The main reasons for doing it this way are to:

- Remove the need to worry about the performance of the query that
  builds the report - it is done once a day during low activity on the
  server.
- Segment permissions so people have rights to run reports but not dig
  through the main database (tech-savvy analysts tend to write bad
  queries and run them on production).
- Keep history without having to run the report on everything (my
  refreshes update this month and last month - this provides enough
  overlap that I don't have to worry about changing months, leap year,
  etc.).
Hope this helps!======   Aaron Bono   Aranya Software Technologies, Inc.   http://www.aranya.com
==


Re: [SQL] Can function results be used in WHERE?

2006-07-12 Thread Aaron Bono
On 7/11/06, Bryce Nesbitt <[EMAIL PROTECTED]> wrote:
Tom Lane wrote:> But as far as the underlying misconception goes, you seem to think that> "4" in the WHERE clause might somehow be taken as referring to the> fourth SELECT result column (why you don't think that the "1" would
> likewise refer to the first result column isn't clear).  This is not so.> "4" means the numeric value four.  There is a special case in ORDER BY> and GROUP BY that an argument consisting of a simple integer literal
> constant will be taken as a reference to an output column.  This is an> ugly kluge IMHO...Yeah, it was a longshot.  I only tried it because the column label didNOT work, and I had some gut reaction to repeating the same function twice:
As I mentioned before: the only difference between HAVING and WHERE is that WHERE occurs before a GROUP BY and HAVING occurs after. 
stage=# SELECT pod_code, lat, lon,
calculate_distance(lat,lon,37.789629,-122.422082) as dist FROM eg_pod
WHERE dist < 1 ORDER BY dist desc limit 10;
ERROR:  column "dist" does not exist

You cannot use an alias from the select column list in your WHERE clause
because the WHERE criteria are evaluated BEFORE your columns are pulled
out.  This is especially noticeable when doing a GROUP BY, since the
WHERE is done before the GROUP BY and the returned column values are
gathered after the GROUP BY.  If you want to use an alias, do a subquery
and then put your WHERE in the outer query.

I believe the ORDER BY is done last, but that may be dependent on the
database implementation.  It does make sense to think of ORDER BY as
being done last, though.  For that reason it can use the alias.
stage=# SELECT pod_code, lat, lon,
calculate_distance(lat,lon,37.789629,-122.422082) as dist FROM eg_pod
WHERE calculate_distance(lat,lon, 37.789629,-122.422082) < 1 ORDER BY
dist desc limit 5;
 pod_code |    lat    |     lon     |        dist
----------+-----------+-------------+--------------------
        5 | 37.792022 | -122.404247 |  0.988808031847045
       62 | 37.780166 | -122.409615 |  0.944907273102541
        4 | 37.798528 | -122.409582 |  0.919592583879426
       86 | 37.777529 | -122.417982 |  0.866416010967029
       68 | 37.789915 | -122.406926 |   0.82867104307647
(5 rows)

==================================================
   Aaron Bono
   Aranya Software Technologies, Inc.
   http://www.aranya.com
==================================================


[SQL] Logging in Stored Procedure

2006-07-12 Thread Aaron Bono
OK, maybe a stupid simple question, but how do you add logging to a
stored procedure?  My procedure is running, but the results I get are
not complete and I need to see better what is going wrong.  The tool I
use has a debugger, but the debugger is barfing at a particular line
that works with no errors when run without the debugger, so I know there
is something wrong with the debugger.
Thanks,Aaron-- ==   Aaron Bono   Aranya Software Technologies, Inc.   http://www.aranya.com
==


Re: [SQL] How to find entries missing in 2nd table?

2006-07-13 Thread Aaron Bono
On 7/13/06, Richard Broersma Jr <[EMAIL PROTECTED]> wrote:
> > SELECT controller_id FROM control
> > WHERE controller_id NOT IN
> > (SELECT DISTINCT controller_id FROM datapack);
>
> The DISTINCT is not necessary.  I have heard with Oracle that DISTINCT
> is a huge performance problem.  Is that true on PostgreSQL also?

From my experience, it does not perform as well as the standard group by
clause.  I noticed a ~20% increase in query run times.

So in that case this would be better:

SELECT controller_id FROM control
WHERE controller_id NOT IN
(SELECT controller_id FROM datapack);

or

SELECT controller_id FROM control
WHERE controller_id NOT IN
(SELECT controller_id FROM datapack GROUP BY controller_id);

Guess you need to do some explain plans to see which would be best.
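One hedged aside: if datapack.controller_id can ever be NULL, the NOT IN forms return no rows at all (the comparison against a NULL is unknown), so an anti-join is often both safer and faster.  A sketch against the same two tables:

```sql
-- NOT EXISTS form: correlated and null-safe
SELECT controller_id
FROM control
WHERE NOT EXISTS (
    SELECT 1 FROM datapack
    WHERE datapack.controller_id = control.controller_id
);

-- LEFT JOIN form: keep control rows with no matching datapack row
SELECT control.controller_id
FROM control
LEFT JOIN datapack ON datapack.controller_id = control.controller_id
WHERE datapack.controller_id IS NULL;
```

Running EXPLAIN on each variant will show which plan the optimizer prefers for your data.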
Good luck!
======   Aaron Bono   Aranya Software Technologies, Inc.   http://www.aranya.com==


Re: [SQL] How to find entries missing in 2nd table?

2006-07-13 Thread Aaron Bono
On 7/12/06, Exner, Peter <[EMAIL PROTECTED]> wrote:
Hi,

what about

SELECT controller_id FROM control
WHERE controller_id NOT IN
(SELECT DISTINCT controller_id FROM datapack);

The DISTINCT is not necessary.  I have heard with Oracle that DISTINCT
is a huge performance problem.  Is that true on PostgreSQL also?
======   Aaron Bono   Aranya Software Technologies, Inc.   http://www.aranya.com==


Re: [SQL] Trigger, record "old" is not assigned yet

2006-07-13 Thread Aaron Bono
On 7/13/06, Adrian Klaver <[EMAIL PROTECTED]> wrote:
For plpgsql use TG_OP.  See link below.
http://www.postgresql.org/docs/8.1/interactive/plpgsql-trigger.html

On Thursday 13 July 2006 03:50 pm, Daniel Caune wrote:
> Hi,
>
> I've created a trigger BEFORE INSERT OR UPDATE on a table and, indeed,
> when the trigger is raised before insertion the record "old" is not
> assigned.  Is there a way to distinguish in the trigger procedure
> between an insert statement and an update statement?
>
> Regards,
> Daniel CAUNE
> Ubisoft Online Technology
> (514) 490 2040 ext. 3613

--
Adrian Klaver
[EMAIL PROTECTED]

Or to be more specific:

IF (TG_OP = 'UPDATE') THEN

==================================================
   Aaron Bono
   Aranya Software Technologies, Inc.
   http://www.aranya.com
==================================================
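A minimal sketch of the TG_OP idiom (the table, column, and trigger names here are made up for illustration): the function branches on TG_OP so that OLD is only referenced on UPDATE, where it is actually assigned.

```sql
CREATE OR REPLACE FUNCTION audit_my_table() RETURNS trigger AS $$
BEGIN
    IF (TG_OP = 'UPDATE') THEN
        -- OLD is only valid here
        RAISE NOTICE 'updating row, old status was %', OLD.status;
    ELSIF (TG_OP = 'INSERT') THEN
        RAISE NOTICE 'inserting new row';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER audit_my_table_trg
    BEFORE INSERT OR UPDATE ON my_table
    FOR EACH ROW EXECUTE PROCEDURE audit_my_table();
```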


Re: [SQL] Can I do this smarter?

2006-07-13 Thread Aaron Bono
On 7/13/06, Joost Kraaijeveld <[EMAIL PROTECTED]> wrote:
I have three tables: customers, salesorders and invoices.  Customers
have salesorders and salesorders have invoices (child tables have
foreign key columns to their parents).  I want to get a list of all
invoices with their customers.  This is what I came up with:

select
invoices.objectid,
invoices.invoicenumber,
invoices.invoicedate,
(select customer from salesorders where objectid = invoices.salesorderobjectid),
(select customernumber from customers where objectid =
   (select customer from salesorders where objectid = invoices.salesorderobjectid)),
(select lastname from customers where objectid =
   (select customer from salesorders where objectid = invoices.salesorderobjectid))
from invoices

Can I do this smarter, as the three subselects select the same customer
three times and I would think that one time is enough?

SELECT
   invoices.objectid,
   invoices.invoicenumber,
   invoices.invoicedate,
   salesorders.customer,
   customers.customernumber,
   customers.lastname
FROM invoices
INNER JOIN salesorders ON (
   salesorders.objectid = invoices.salesorderobjectid
)
INNER JOIN customers ON (
   customers.objectid = salesorders.customer
)

You should use INNER and OUTER JOINs to connect the tables by their
foreign keys.
======   Aaron Bono   Aranya Software Technologies, Inc.   http://www.aranya.com==


[SQL] Regular Expression in SQL

2006-07-15 Thread Aaron Bono
I recall not long ago a discussion about regular expressions in a query that hit on this exact topic but don't think it was ever resolved so I am giving it a go again...Here is my query (keep in mind that I am just experimenting now so don't worry about the fact that I am using nested substring function calls):
SELECT
    referrer,
    substring(referrer FROM '^([^\\/]*\\/\\/[^\\/]*)(\\/)?'),
    substring(referrer FROM '^[^\\/]*\\/\\/www\\.google\\.[^\\/]*\\/[^\\?]*\\?(.*)$'),
    substring(substring(referrer FROM '^[^\\/]*\\/\\/www\\.google\\.[^\\/]*\\/[^\\?]*\\?(.*)$') FROM '((%&q=)|(q=))#"[^&]*#"((&%)|())' FOR '#')
FROM one_hour_air.web_page_view
WHERE referrer ~ '^[^\\/]*\\/\\/(www.google\\.[^\\/]*)\\/'

What I get is (referrer, substring, substring_1, substring_2):

http://www.google.ca/search?q=one+hour+heating&hl=en | http://www.google.ca | q=one+hour+heating&hl=en | q=
http://www.google.com/search?hl=en&q=One+hour+heating+and+Air | http://www.google.com | hl=en&q=One+hour+heating+and+Air | hl=en&q=

What I expected for substring_2 was (respectively):

one+hour+heating
One+hour+heating+and+Air

I thought by using the FOR '#' I could specify exactly what part of the
expression I would get, but it still grabs the first (...) of the
pattern.  At least that is what the documentation in section 9.7.2 at
http://www.postgresql.org/docs/8.1/static/functions-matching.html led me
to believe.  How can I get the part of the string I am really after
without using one nested substring after another?
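For what it's worth, the two-argument substring(text FROM pattern) form returns the text matched by the first parenthesized subexpression of a POSIX regex, so a narrower pattern can grab the q= parameter without nesting.  A sketch, assuming query strings shaped like the ones above:

```sql
-- '[?&]q=([^&]*)' anchors on the q= parameter and captures up to the
-- next '&'; substring() returns the captured group, not the whole match.
SELECT substring(referrer FROM '[?&]q=([^&]*)')
FROM one_hour_air.web_page_view
WHERE referrer ~ 'www\\.google\\.';
-- for 'http://www.google.ca/search?q=one+hour+heating&hl=en'
-- this yields: one+hour+heating
```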
Thanks,Aaron Bono==   Aaron Bono   Aranya Software Technologies, Inc.   http://www.aranya.com
==


Re: [SQL] Doubt about User-defined function.

2006-07-15 Thread Aaron Bono
I am really confused about what your question is.  Functions can take zero to many arguments and return one value.  The argument types are different from (or at least are independent of) the return value.  Arguments are not returned, they are passed into the function.  Your use of the terminology appears to be inconsistent with the definitions of these words.
-AaronOn 7/15/06, sathiya moorthy <[EMAIL PROTECTED]> wrote:
   I have a doubt about user-defined function return values:

* Why does a function return (accept) one attribute in its arguments, or
  otherwise return a whole record of the table?  A function can accept
  more than one argument.

* Why doesn't it return more than one argument (or attribute) of that
  table?

* It is much like a C-language function, because a C function returns
  one value (or a structure/array of characters).

* My question is: in psql, a user-defined function doesn't return more
  than one attribute.  Is it possible or not?

Some examples:

i)  create function com2(text,integer) returns text as
    'select city.city from city,weather
     where weather.city=$1 AND city.pop>$2' language sql;

    Res: The return type of the function is text, which is also the type
    of one of the function's arguments.  (At the time of returning, why
    doesn't it also accept integer?)

ii) create function usrs_tab(text,integer) returns city as
    'select city.city,city.pop,city.state
     from city,weather where weather.city=$1 AND city.pop>$2'
    language sql;

    Res: Returns one record from the city table (the table contains
    city, pop, state only).
======   Aaron Bono   Aranya Software Technologies, Inc.   http://www.aranya.com
==


Re: [SQL] Querying for name/value pairs in reverse

2006-07-15 Thread Aaron Bono
On 7/15/06, Stefan Arentz <[EMAIL PROTECTED]> wrote:
I'm no SQL expert by any means so I'm wondering if something like this
is possible.  I have two tables like this:

create table Errors (
  Id serial not null,
  CreateDate timestamp not null,
  primary key (Id)
);

create table ErrorValues (
  Id serial not null,
  ErrorId int not null,
  Name varchar(255) not null,
  Value text not null,
  primary key (Id),
  foreign key (ErrorId) references Errors (Id)
);

Now, selecting specific ErrorValues with a bunch of names that are
related to an Error is of course pretty simple.  But I want to go the
other way.  I want to query for: 'give me all Errors that have the
Code=1234 AND Address=1.2.3.4 AND Type=OSX Name/Value pairs'.

What is a good recipe to deal with this?  Is something like this
possible with standard sql?  Is there a nice PG way to do this?

Try this:

SELECT
   Errors.Id,
   Errors.CreateDate
FROM Errors
WHERE Errors.Id IN (
   SELECT ErrorValues.ErrorId
   FROM ErrorValues
   WHERE (ErrorValues.Name = 'Code' AND ErrorValues.Value = '1234')
   INTERSECT
   SELECT ErrorValues.ErrorId
   FROM ErrorValues
   WHERE (ErrorValues.Name = 'Address' AND ErrorValues.Value = '1.2.3.4')
   INTERSECT
   SELECT ErrorValues.ErrorId
   FROM ErrorValues
   WHERE (ErrorValues.Name = 'Type' AND ErrorValues.Value = 'OSX')
);

==================================================
   Aaron Bono
   Aranya Software Technologies, Inc.
   http://www.aranya.com
==================================================
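An equivalent that avoids INTERSECT — a sketch against the same two tables — is the GROUP BY/HAVING form of relational division: fetch the candidate pairs, then keep only the errors that matched all three:

```sql
-- An ErrorId qualifies only if it carries all three requested pairs;
-- COUNT(DISTINCT Name) = 3 enforces that.
SELECT Errors.Id, Errors.CreateDate
FROM Errors
JOIN ErrorValues ON ErrorValues.ErrorId = Errors.Id
WHERE (ErrorValues.Name = 'Code'    AND ErrorValues.Value = '1234')
   OR (ErrorValues.Name = 'Address' AND ErrorValues.Value = '1.2.3.4')
   OR (ErrorValues.Name = 'Type'    AND ErrorValues.Value = 'OSX')
GROUP BY Errors.Id, Errors.CreateDate
HAVING COUNT(DISTINCT ErrorValues.Name) = 3;
```

This scans ErrorValues once instead of three times; which version wins in practice depends on the indexes available.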


Re: [SQL] Querying for name/value pairs in reverse

2006-07-15 Thread Aaron Bono
On 7/15/06, Paul S <[EMAIL PROTECTED]> wrote:
This is definitely doable.  
 
one "Set" way that I could think of doing this would be to first compile a temp table with all of the Value/Pairs that your looking to search for and then just JOIN the ID's (in this case it would be the Value and Pair) to the ErrorValues table.  This should give you all of the ErrorID's in the ErrorValues table which would then allow you to JOIN up against the Errors table to get information like CreateDate. 
This works - the subselect I sent earlier kind of does this (the subselect can act as a temp table in memory so you don't have to create a physical one). 
Another way, would be to use XML instead of Value/Pair to turn it into a Node and Value type of thing.  You could then use XQuery to search inside of the XML attribute for what you were looking for.  SET theory would be a better alternative but this is just a different idea. 
I wonder if the use of XML in a database is a very good idea.  (am I treading on religious territory here?)  I can think of some examples where XML can be useful but the problem I see with it is that your data structure is embedded in a single field and your database schema does not describe your data structure very well anymore.  I always like to use the database schema as a way to document the data structure so if you have the DB diagrams, you can understand everything there.
One naming convention tip: I like to name my primary keys something more
descriptive than just ID.  You'll notice that your ErrorValues table had
to include a foreign key called ErrorId that actually relates to the
attribute Id in the Errors table.  When you're looking at miles and
miles of code or reviewing JOIN syntax like "a.ID = b.ErrorID", it's
sometimes easier to validate if it looks like "a.ErrorID = b.ErrorID".
Just my 2 cents...

I wholeheartedly agree.  If you don't adopt a good naming convention
like this, you will come to regret it as your application grows in size
and complexity.
======   Aaron Bono   Aranya Software Technologies, Inc.   http://www.aranya.com==


Re: [SQL] Storing encrypted data?

2006-07-17 Thread Aaron Bono
On 7/17/06, John Tregea <[EMAIL PROTECTED]> wrote:
Hi Michael,

Thanks for your advice.  I was looking at the bytea data type in the
PostgreSQL book I bought (Korry and Susan Douglas, second edition).  I
was concerned that if I have to escape certain characters like the
single quote or the backslash, how do I guarantee that the escaped
version does not already appear in the encrypted string?  Should I use
the octal value to escape the single quote (\047) and backslash (\\134)?

Those character sequences are extremely unlikely to occur in an
encrypted string.

Is that the right approach?

Also... I note that I cannot change the data type of my field from text
to bytea (I am using PGADMIN III).  Do you know why?

When I have data like this, I do a Base64 encoding.  The string ends up
longer but I no longer have to worry about special characters mucking
things up.

==================================================
   Aaron Bono
   Aranya Software Technologies, Inc.
   http://www.aranya.com
==================================================
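As an aside, if the encoding can happen in the database rather than the client, PostgreSQL's built-in encode()/decode() pair does Base64 directly (a sketch; the values here are just illustrations):

```sql
-- encode() renders bytea as text in the named format ('base64', 'hex',
-- or 'escape'); decode() reverses it.
SELECT encode('hello'::bytea, 'base64');   -- aGVsbG8=
SELECT decode('aGVsbG8=', 'base64');       -- back to the bytea 'hello'
```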


Re: [SQL] hi let me know the solution to this question

2006-07-18 Thread Aaron Bono
On 7/18/06, Michael Fuhr <[EMAIL PROTECTED]> wrote:
On Tue, Jul 18, 2006 at 10:46:29AM +0530, Penchalaiah P. wrote:
> Can u let me know if at all any system defined variable exists to check
> whether insert, update, and delete is successfully executed in a
> stored procedure
>
> So that I can perform some steps based on the result of execution if
> possible please send me the sample code...

Depending on what you mean by "successfully executed," see "Obtaining
the Result Status" or "Trapping Errors" in the PL/pgSQL documentation
(error trapping is available in 8.0 and later).

http://www.postgresql.org/docs/8.1/interactive/plpgsql-statements.html#PLPGSQL-STATEMENTS-DIAGNOSTICS
http://www.postgresql.org/docs/8.1/interactive/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING

OK, this question got me wondering: is there a way to determine, in a
function/stored procedure, the number of rows that were
inserted/updated/deleted?  For example, the following does NOT work but
shows what I am trying to achieve:

CREATE OR REPLACE FUNCTION "public"."test_fn" () RETURNS VOID AS
'
DECLARE
    mycount INTEGER;
BEGIN
    -- This assignment will not work but I want something like it
    -- so I can count the number of records updated.
    mycount := EXECUTE
        ''update mytable set '' ||
        ''mycolumn = 1 '' ||
        ''WHERE '' ||
        ''    mycolumn = 2 ''
    ;

    RAISE NOTICE ''count = %'', mycount;

    RETURN;
END;
'
LANGUAGE 'plpgsql' VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;

I could not find any documentation about getting the count of updates,
but when you run the update by itself, PostgreSQL reports the number of
records updated, so I have to believe the information is available
somewhere.
Thanks,Aaron==   Aaron Bono   Aranya Software Technologies, Inc.   http://www.aranya.com
==


Re: [SQL] hi let me know the solution to this question

2006-07-18 Thread Aaron Bono
On 7/18/06, Bricklen Anderson <[EMAIL PROTECTED]> wrote:
Aaron Bono wrote:
> On 7/18/06, Michael Fuhr wrote:
> http://www.postgresql.org/docs/8.1/interactive/plpgsql-statements.html#PLPGSQL-STATEMENTS-DIAGNOSTICS
>
> OK, this question got me wondering: is there a way to determine, in a
> function/stored procedure, the number of rows that were
> inserted/updated/deleted?
>
> I could not find any documentation about getting the count of updates
> but when you run the update by itself, PostgreSQL reports the number of
> records updated so I have to believe the information is available
> somewhere.
>
> Thanks,
> Aaron

Look for the section entitled "36.6.6. Obtaining the Result Status" on
the link that Michael Fuhr supplied (above).  Is that what you are
looking for?

Ah yes, I missed the part on ROW_COUNT.  That should do it.

==================================================
   Aaron Bono
   Aranya Software Technologies, Inc.
   http://www.aranya.com
==================================================
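For completeness, a sketch of the earlier non-working example rewritten with GET DIAGNOSTICS (mytable and mycolumn are placeholder names, as before):

```sql
CREATE OR REPLACE FUNCTION test_fn() RETURNS VOID AS $$
DECLARE
    mycount INTEGER;
BEGIN
    UPDATE mytable SET mycolumn = 1 WHERE mycolumn = 2;

    -- ROW_COUNT holds the number of rows processed by the last
    -- SQL command executed in the function.
    GET DIAGNOSTICS mycount = ROW_COUNT;
    RAISE NOTICE 'count = %', mycount;

    RETURN;
END;
$$ LANGUAGE plpgsql;
```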


Re: [SQL] Storing encrypted data?

2006-07-19 Thread Aaron Bono
On 7/19/06, John Tregea <[EMAIL PROTECTED]> wrote:
Hi Aaron,

I found that your suggestion worked well.  For some reason the IDE I use
(Revolution) put a return character every 73rd character when it did the
base64 encode, but I strip those out and there are no further problems.
I don't even have to put them back later to decode it.

I usually leave the return characters where they are.  I am curious, why
did you decide to remove the return characters?
======   Aaron Bono   Aranya Software Technologies, Inc.   http://www.aranya.com==


Re: [SQL] User Permission

2006-07-19 Thread Aaron Bono
On 7/19/06, sathish kumar shanmugavelu <[EMAIL PROTECTED]> wrote:
Dear group,

   I created a user named 'dataviewer' and granted only select
permission to that user, but now the user is able to create tables.  How
do I restrict this?  I want to give permission to create views and do
selects on tables and views.  How do I do it?  Please help.

Have you checked:
http://www.postgresql.org/docs/8.1/interactive/sql-grant.html
http://www.postgresql.org/docs/8.1/interactive/sql-revoke.html

I would start by creating a role:
http://www.postgresql.org/docs/8.1/interactive/user-manag.html

And revoke all on it.  Then add only the permissions it needs and assign
the role to the user.

==================================================
   Aaron Bono
   Aranya Software Technologies, Inc.
   http://www.aranya.com
==================================================
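A sketch of what that could look like (the role and object names are assumptions).  Note that the reason 'dataviewer' can create tables at all is that PUBLIC holds CREATE on the public schema by default:

```sql
-- Stop everyone from creating objects in the public schema by default:
REVOKE CREATE ON SCHEMA public FROM PUBLIC;

-- A read-only role; grant back only what it needs:
CREATE ROLE viewer_role NOLOGIN;
GRANT USAGE ON SCHEMA public TO viewer_role;
GRANT SELECT ON some_table, some_view TO viewer_role;

-- To let the user create views, give CREATE on a dedicated schema, e.g.:
-- GRANT CREATE ON SCHEMA reports TO viewer_role;

-- Attach the role to the existing user:
GRANT viewer_role TO dataviewer;
```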


Re: [SQL] Multi-table insert using RULE - how to handle id?

2006-07-19 Thread Aaron Bono
On 7/19/06, Collin Peters <[EMAIL PROTECTED]> wrote:
I am learning how to use rules to handle a multi-table insert.  Right
now I have a user_activity table which tracks history and a
user_activity_users table which tracks what users are associated with a
row in user_activity (one-to-many relationship).  I created a rule (and
a view called user_activity_single) to simplify the case of inserting a
row in user_activity for which there is only one user in
user_activity_users.

CREATE OR REPLACE RULE user_activity_single_insert AS
ON INSERT TO user_activity_single
DO INSTEAD (
    INSERT INTO user_activity (
        user_activity_id,
        description,
        ...
    )
    VALUES (
        NEW.user_activity_id,
        NEW.description,
        ...
    );
    INSERT INTO user_activity_users (
        user_activity_id,
        user_id
    )
    VALUES (
        NEW.user_activity_id,
        NEW.user_id
    );
);

This works well by itself, but the problem is that I have to manually
pass in the user_activity_id, which is the primary key.  I do this by
calling nextval to get the next ID in the sequence.  Is there any way to
have the rule handle the primary key so I don't have to pass it in?  It
seems you can't use pgsql inside the rule at all.  What I'm looking for
is something like:

CREATE OR REPLACE RULE user_activity_single_insert AS
ON INSERT TO user_activity_single
DO INSTEAD (
    SELECT nextval('user_activity_user_activity_id_seq') INTO next_id;
    INSERT INTO user_activity (
        user_activity_id,
        description,
        ...
    )
    VALUES (
        next_id,
        NEW.description,
        ...
    );
    INSERT INTO user_activity_users (
        user_activity_id,
        user_id
    )
    VALUES (
        next_id,
        NEW.user_id
    );
);

Note the sequence value stored in next_id.  This doesn't work, as it
complains about next_id in the INSERT statements.  Is there any way to
do something like this?  I suppose I could create a function and then
have the rule call the function, but this seems like overkill.
Since I have not tried something like this before I may be off base, but
have you tried:

CREATE OR REPLACE RULE user_activity_single_insert AS
ON INSERT TO user_activity_single
DO INSTEAD (
        INSERT INTO user_activity (
                description,
                ...
        )
        VALUES (
                NEW.description,
                ...
        );
        INSERT INTO user_activity_users (
                user_activity_id,
                user_id
        )
        VALUES (
                currval('user_activity_user_activity_id_seq'),
                NEW.user_id
        );
);

I am assuming user_activity.user_activity_id is a BIGSERIAL or SERIAL.
http://www.aranya.com==


Re: [SQL] System catalog table privileges

2006-07-21 Thread Aaron Bono
On 7/21/06, Hilary Forbes <[EMAIL PROTECTED]> wrote:

Dear All
Next question on privileges!  Can I safely remove all privileges
from the system catalog tables for a user and still enable them to select
from the public schema?  I guess the real question is what access
rights does an ordinary user have to have to the system catalog tables in
order for postgres to work properly given I only ever want the user to be
able to SELECT from views.
This is all brought about by a user who wants to use MS Access Query for
adhoc queries to a (small) database via ODBC.  (the database itself
drives a major web application.) I can't find an easy way of preventing
them seeing that tables exist but I don't want them trying to manually
update any tables of mine or postgres's thank you very much!  (Don't
shoot the messenger - there's no accounting for users' tastes!)

This doesn't address the permissions issue, but here is a suggestion
regarding your approach of granting access to an untrusted user for
reporting purposes...
Whenever I have a user that needs to do reporting from any production
database, I set up a separate reporting database.  If possible, this is
placed on a completely different machine and the data is fed from
production to the reporting server nightly.  Tech-savvy business users
(the ones who typically need this kind of access) are notorious for
writing bad queries and causing performance problems.  If you isolate
their activity, you will eliminate a lot of headaches.  If they cause a
problem on the reporting server, you don't have to drop everything to
get it fixed like you would if they caused problems on the live
database.
An argument that the users who run the reports often make is that they need the most current data.  Most of the time this is not the case.  My recommendation is to let the users create the queries they need to run for realtime data on the reporting database, then pass them by an expert for review before putting them into an IT controlled reporting application.
Bottom line, be careful about giving non-experts too much access to your live production data.======   Aaron Bono   Aranya Software Technologies, Inc.
   http://www.aranya.com==


Re: [SQL] System catalog table privileges

2006-07-24 Thread Aaron Bono
On 7/21/06, Hilary Forbes <[EMAIL PROTECTED]> wrote:

Aaron
Thanks for this one - I had actually wondered about doing that but the
trouble is that they say that they need up to the minute reports not
"as of last night".  Indeed, I do have another app where I
do just that because I find that reports indexes/requirements are very
different to transactional type requirements.  However, you have
made me make up my mind to see if I can persuade them to work on data
that is a day old. I have heard "I need up to the minute data" a lot but have NEVER seen it to be true.  I guess if you are trading stocks on the stock market and need to buy and sell immediately as the prices change then you would have a reason but almost always business users think they need things now when they don't.
You could also look at the cost/benefit: if they bring the database down, how much would it cost the business?  If they are working on day old data, how much would it cost?  Get the user to write down and justify their numbers.  This will show to you and the user whether it is really necessary to report off of the live data.
Good luck!======   Aaron Bono   Aranya Software Technologies, Inc.   http://www.aranya.com
==


Re: [SQL] Referential integrity (foreign keys) across multiple tables

2006-07-24 Thread Aaron Bono
On 7/22/06, Richard Jones <[EMAIL PROTECTED]> wrote:
Simplified schema:

  create table hosts (
    id serial primary key,
    hostname text not null
  );

  create table pages (
    id serial primary key,
    hostid int not null references hosts (id),
    url text not null,
    unique (hostid, url)
  );

  create table page_contents (
    pageid int not null references pages (id),
    section text not null
  );

(There are many hosts, many pages per host, and many page_contents sections per page.)

Now I want to add a column to page_contents, say called link_name, which is going to reference the pages.url column for the particular host that this page belongs to. Something like:

  alter table page_contents add link_name text;
  alter table page_contents
    add constraint foo foreign key (p.hostid, link_name)
    references pages (hostid, url)
    where p.id = pageid;

The easiest, though not necessarily elegant, way to accomplish this is to create linkid rather than link_name and make it a foreign key to pages.id. Then add a trigger that checks that the page you link to from page_contents belongs to the same host; if not, raise an exception.
Another option is to do this:

  create table page_contents (
    hostid int not null,
    url text not null,
    linkhostid int,
    linkurl text,
    section text not null,
    foreign key (hostid, url) references pages (hostid, url),
    foreign key (linkhostid, linkurl) references pages (hostid, url)
  );

Or if you really want to restructure things:

  create table hosts (
    id serial primary key,
    hostname text not null
  );

  create table pages (
    id serial primary key,
    url text not null,
    unique (url)
  );

  create table page_contents (
    pageid int not null references pages (id),
    hostsid int not null references hosts (id),
    linkpageid int references pages (id),
    section text not null
  );

That should give you some options to play with.

As a side comment, you should also name your primary key columns more meaningfully. Use hosts.hostsid and pages.pagesid, not hosts.id and pages.id. When you begin writing large queries, the use of the column name id all over the place will make your queries more prone to error, harder to read, and harder to write.
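The trigger approach suggested above might be sketched roughly as follows. This is illustrative only: the function and trigger names, and the linkid column on page_contents, are assumptions, not part of the original schema.

```sql
-- Reject a page_contents row whose linked page belongs to a different host
-- than the page the row is attached to.
CREATE OR REPLACE FUNCTION page_contents_link_fn() RETURNS trigger AS '
DECLARE
    page_host int;
    link_host int;
BEGIN
    IF NEW.linkid IS NOT NULL THEN
        SELECT hostid INTO page_host FROM pages WHERE id = NEW.pageid;
        SELECT hostid INTO link_host FROM pages WHERE id = NEW.linkid;
        IF page_host IS DISTINCT FROM link_host THEN
            RAISE EXCEPTION ''link must point to a page on the same host'';
        END IF;
    END IF;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER page_contents_link_tr BEFORE INSERT OR UPDATE
    ON page_contents FOR EACH ROW
    EXECUTE PROCEDURE page_contents_link_fn();
```

Note that unlike a real foreign key, a trigger like this does not re-check existing page_contents rows if a page is later moved to another host.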
==================================================
   Aaron Bono
   Aranya Software Technologies, Inc.
   http://www.aranya.com
==================================================


Re: [SQL] About Div

2006-07-25 Thread Aaron Bono
On 7/25/06, Otniel Michael <[EMAIL PROTECTED]> wrote:
Dear All,

I have a problem with this case: I have 10 candies for 7 children (A, B, C, D, E, F, G).

Table X:

  code   value
  ----   -----
  A      0
  B      0
  C      0
  D      0
  E      0
  F      0
  G      0

And I want to divide it with this algorithm:

  A = 10 / 7 = 1
  B = (10-1) / (7-1) = 9 / 6 = 1
  C = (10-2) / (7-2) = 8 / 5 = 1
  D = (10-3) / (7-3) = 7 / 4 = 1
  E = (10-4) / (7-4) = 6 / 3 = 2
  F = (10-6) / (7-5) = 4 / 2 = 2
  G = (10-8) / (7-6) = 2 / 1 = 2

In my first solution I use a loop over each record in my function, but it is too slow with a lot of data. Does PostgreSQL have a function for my case?

No loop necessary. This is a simple math problem:

  dividedamount := candy / childcount;
  extra := candy % childcount;
So the first (childcount - extra) children get (dividedamount) pieces of candy and the last (extra) children get (dividedamount + 1) pieces.
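The whole distribution can be computed in a single statement, no loop needed. A sketch against the table from the question, with the totals (10 candies, 7 children) hard-coded for clarity; in practice they would come from subqueries:

```sql
-- Children ranked in the first (childcount - extra) positions get the base
-- share; the last (extra) children get one piece more.
SELECT x.code,
       CASE WHEN (SELECT count(*) FROM x AS prior WHERE prior.code <= x.code)
                 <= 7 - (10 % 7)
            THEN 10 / 7
            ELSE 10 / 7 + 1
       END AS share
FROM x
ORDER BY x.code;
-- A..D receive 1 piece each, E..G receive 2 each, totalling 10.
```

The correlated count(*) ranks each child by code; on 8.4 or later a window function such as row_number() would do the same job more cheaply.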
==================================================
   Aaron Bono
   Aranya Software Technologies, Inc.
   http://www.aranya.com
==================================================


[SQL] Storage of Binary Data

2006-07-25 Thread Aaron Bono
I have a somewhat philosophical question about best practices when storing binary data.

For my web projects, we have been storing binary data (like images, PDFs, etc.) on the hard drive of the web server instead of the database. Within the database, we keep information, like whether an image is present, in a separate column. For example:

CREATE TABLE site_user (
   site_user_id BIGSERIAL NOT NULL,
   has_profile_image BOOLEAN DEFAULT false NOT NULL,
   CONSTRAINT site_user_pkey PRIMARY KEY (site_user_id)
);

And then store the profile image at:

   [web root]/images/siteuserprofile/profile-[site_user_id here].jpg

We also have a back-end process that checks these files into CVS to help maintain version control.

My questions are:
* What other options are people using to store data like this?
* What are the advantages/disadvantages of these storage methods?

I have been very reluctant to store the binary data in the database for the following reasons:
* Performance: you cannot just save the data to the file system, you must interact with the database to get/save contents
* It increases the complexity of the application
* I have had problems saving binary data using JDBC before (that was with Oracle 7 though)
* I am concerned about how well the data backups will work - several people on this list have had problems backing up and restoring binary data

I am trying to develop some best practices, so any input you guys have is welcome!

Thanks

==================================================
   Aaron Bono
   Aranya Software Technologies, Inc.
   http://www.aranya.com
==================================================


Re: [SQL] About Div

2006-07-26 Thread Aaron Bono
On 7/25/06, Otniel Michael <[EMAIL PROTECTED]> wrote:
Mr. Aaron, I am sorry, your solution didn't match my case. Example for your solution:

  A = 1
  B = 1
  C = 1
  D = 1
  E = 1
  F = 1
  G = 4

G has 4 candies. That is too much for G. In my case, the solution is:

  A = 1
  B = 1
  C = 1
  D = 1
  E = 2
  F = 2
  G = 2

The extra candy is given to three children. Do you have another solution? I need a function in PostgreSQL for my case, because my loop is too slow. Btw, thanks for your solution.

I think you misunderstood how to use the MOD value: 10 % 7 = 3, so the last 3 children get 1 extra, NOT the last child getting the extra 3. But it looks like Ross got you the code to fix the problem - check what he provided.

==================================================
   Aaron Bono
   Aranya Software Technologies, Inc.
   http://www.aranya.com
==================================================


Re: [SQL] Storing an ordered list

2006-07-26 Thread Aaron Bono
On 7/25/06, Michael Artz <[EMAIL PROTECTED]> wrote:
What is the best way to store an ordered list that can be updated OLTP-style? A simplified problem is that I have an event, and the event has an ordered list of predicates, and I need to preserve the order of the predicates. All of the data is entered via a web application, and I would like to support the new flashy ajax drag-droppy thingies, meaning that there could be a significant amount of updates if the user is dragging things all over the place.

I figure that one choice is to explicitly code the order as an integer column in the predicate table, which has the advantage of being very easy and fast to query/order but *very* slow to reorder as all of the predicates need to be updated. This would seem to be a postgres/MVCC weak spot as well. Example:

create table event (event_id integer);
create table predicate (event_id integer not null references event(event_id), name varchar, order integer);
insert into event (event_id) values (1);
insert into predicate (1, 'first event', 1);
insert into predicate (1, 'second predicate', 2);
select * from predicate p where p.event_id = 1 order by p.order;

I'm also thinking about a linked list, i.e.

create table event (event_id integer);
create table predicate (predicate_id integer, event_id integer not null references event(event_id), name varchar, next_predicate integer references predicate (predicate_id));
insert into predicate (101, 1, 'second predicate', NULL);
insert into predicate (102, 1, 'first predicate', 101);

The downside is that I'm not quite sure how to efficiently query the linked list. Any suggestions? Are there any known best practices for storing ordered lists in relational databases? Are there any tricks that I can use with postgres?

Even the linked list will require a lot of updates if there is a lot of reshuffling - perhaps fewer in certain circumstances, especially if the list is large and there is very little reshuffling.
If you use the linked list, remember this: to reduce the updates you are going to need more code in the application as it will have to keep track of what to update and what to not update.  It will also be more difficult to order the items using SQL so your application may have to take on that burden.  As a result, your application will become more complicated and writing reports that use the ordering will become difficult.
Another thing to think about with a linked list:  What if two people are reordering the items at the same time - they load the items at the same time, then reorder at the same time (with their own separate cache of the data) and finally save.  If you update everything, the last man to save wins but if you only update only what they change, you could end up with a mess:
Example:

    Start with:
    1 -> 2 -> 3 -> 4 -> 5

    Person 1 reorders to:
    1 -> 5 -> 2 -> 3 -> 4 (only update 1 -> 5, 5 -> 2 and 4 -> null)

    Person 2 reorders to:
    1 -> 2 -> 5 -> 3 -> 4 (only update 2 -> 5, 5 -> 3 and 4 -> null)

    If they then both save (assume person 1 saves and then person 2 saves) you get:
    1 -> 5
    2 -> 5

    This is going to be a big problem.

When I need something like this I go with your first approach, a simple order field. Unless the user is reordering a small number of items in a very large list and doing it frequently, is there really a need to worry about the number of updates? Are you worrying about a performance problem you will never have?
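With the simple order-field approach, a drag-and-drop move is just two UPDATEs in one transaction. A hedged sketch, using the question's tables but with the order column renamed sort_order (order is a reserved word) and a made-up row id:

```sql
-- Move the predicate currently at position 5 up to position 2 within event 1:
-- shift positions 2..4 down by one, then drop the dragged row into the gap.
-- (Briefly two rows share position 5, so sort_order must not be UNIQUE.)
BEGIN;

UPDATE predicate
   SET sort_order = sort_order + 1
 WHERE event_id = 1
   AND sort_order BETWEEN 2 AND 4;

UPDATE predicate
   SET sort_order = 2
 WHERE event_id = 1
   AND predicate_id = 105;  -- hypothetical id of the dragged row

COMMIT;
```

Wrapping both statements in one transaction also means a concurrent reader never sees the list in a half-shifted state.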
==================================================
   Aaron Bono
   Aranya Software Technologies, Inc.
   http://www.aranya.com
==================================================


Re: [SQL] SQL generator

2006-07-26 Thread Aaron Bono
On 7/25/06, Chris Browne <[EMAIL PROTECTED]> wrote:
"Bit Byter" <[EMAIL PROTECTED]> writes:> I would like to know what the underlying SQL statement will look> like, so that I can use this knowlege to build a generic parser that
> creates SQL statements from the "English like" text, using the> syntax I described above.I seem to recall that Lotus tried doing this once upon a time withtrying to build an "english-like" interface for doing "business
intelligence-like" queries against spreadsheets.Nobody likely remembers HAL; it was underwhelmingly useful.In effect, anyone that was skilful enough at composing "English-like"queries to get HAL to work was likely to find the "native" methods
more convenient. There was a company near where I live who recently tried to do natural language parsing for search engines - like Ask Jeeves.  It was an utter failure but before they got far enough to see that, they did a study of users to see how they search.  They found that people adapted to learn how to "talk in the search engine's language" fairly easily and it has actually become unnatural for people to talk to search engines with a natural language.
I have had so many problems with users who know a little SQL causing database crashes and slowdowns. I don't think I would recommend giving database access to someone who can't even do SQL.

If you still think it is a good idea to do this, good luck and I hope you prove us wrong. But just keep in mind you are going to be attempting something that has been tried before with little success. Maybe you will figure out what your predecessors were missing...
Bottom line though: just how difficult is SQL to read and write anyway? It is a hell of a lot easier to read and write than any other computer language I have seen.

==================================================
   Aaron Bono
   Aranya Software Technologies, Inc.
   http://www.aranya.com
==================================================


Re: [SQL] Storing an ordered list

2006-07-28 Thread Aaron Bono
On 7/26/06, Michael Artz <[EMAIL PROTECTED]> wrote:
On 7/26/06, Bruno Wolff III <[EMAIL PROTECTED]> wrote:> If you use numeric instead of int, then it is easy to insert new values.Hmm, hadn't thought about that.  How would you normally implement it?
I'm thinking that, if I wanted to insert between A and B, I could take (A.order + B.order)/2, which would be pretty simple. Is there a better way?

This is a good idea. You can then add a scheduled process that reads through these values and renumbers them back to integer values on a regular basis (a sort of reindexing), to keep the gaps from becoming so small that you start experiencing round-off problems. Perhaps you could add a trigger that renumbers everything whenever a value entered into the order field goes out to too many decimal places. Or better yet, add a stored procedure you call to reorder the elements, so you can easily rewrite the implementation without having to change the application.
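The midpoint trick can be done in a single INSERT. A sketch only - the table and column names (predicate, sort_order as a NUMERIC column) are assumptions carried over from earlier in the thread:

```sql
-- Insert a new predicate between the neighbors at positions 2 and 3
-- without touching any existing row.
INSERT INTO predicate (event_id, name, sort_order)
SELECT 1, 'between second and third',
       (a.sort_order + b.sort_order) / 2
  FROM predicate a, predicate b
 WHERE a.event_id = 1 AND a.sort_order = 2
   AND b.event_id = 1 AND b.sort_order = 3;
-- The new row lands at 2.5; the periodic "reindexing" job later
-- renumbers the list back to whole numbers.
```

Each insert between the same pair halves the remaining gap, which is why the periodic renumbering pass matters.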
Just some ideas...

==================================================
   Aaron Bono
   Aranya Software Technologies, Inc.
   http://www.aranya.com
==================================================


Re: [SQL] primary keys as TEXT

2006-07-28 Thread Aaron Bono
On 7/28/06, Manlio Perillo <[EMAIL PROTECTED]> wrote:
Hi. Can there be performance problems in having primary keys of type TEXT? What about having a primary key of 3 columns (all of type TEXT)?

If you are really worried about it, why not just use surrogate keys? They are very easy to use, and then your problem is solved.
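As an illustration only (all table and column names here are invented), a surrogate key keeps the natural three-column TEXT key for uniqueness while giving foreign keys a compact integer to reference:

```sql
CREATE TABLE item (
    item_id BIGSERIAL NOT NULL,   -- surrogate key: small and fast to join on
    part_a  TEXT NOT NULL,
    part_b  TEXT NOT NULL,
    part_c  TEXT NOT NULL,
    CONSTRAINT item_pkey PRIMARY KEY (item_id),
    CONSTRAINT item_natural_uq UNIQUE (part_a, part_b, part_c)  -- natural key still enforced
);

-- Child tables reference one integer column instead of three TEXT columns:
CREATE TABLE item_detail (
    item_id BIGINT NOT NULL REFERENCES item (item_id),
    note    TEXT
);
```

The UNIQUE constraint preserves the business rule the natural key expressed; only the join plumbing changes.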
==================================================
   Aaron Bono
   Aranya Software Technologies, Inc.
   http://www.aranya.com
==================================================


Re: [SQL] primary keys as TEXT

2006-07-30 Thread Aaron Bono
On 7/28/06, Manlio Perillo <[EMAIL PROTECTED]> wrote:
Michael Glaesemann ha scritto:
>> On Jul 28, 2006, at 17:37, Manlio Perillo wrote:
>>> Can there be performance problems in having primary keys of type TEXT?
>>> What about having a primary key of 3 columns (all of type TEXT)?
>>
>> What defines a problem in terms of performance is heavily dependent on
>> your particular needs and requirements. What are your requirements? What
>> profiling have you done to see where your performance bottlenecks may be?
>
> I haven't done any profiling yet. Simply, in the first version of my schema I used serial keys, but the result is ugly and it forces me to do a lot of joins.

Ugly? Not sure what you mean by that.
I do understand the problem with so many joins. I use views so that the joins are only dealt with once (in the database) and then all my applications run off the views. That way, the applications use very simple queries. The views also allow me to change the table structure (column names, more table normalization, etc.) without having to make changes to the application.

I am even getting ready to start using updatable views so my applications never touch the tables directly - it adds another layer of abstraction between the tables and the application. But that may be more complicated than you are ready for - I have yet to decide if it will be worth the effort, and I won't know until I experiment with it more.
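A toy example of the idea (every name here is invented for illustration): the view absorbs the join once, in the database, so applications issue trivial queries and survive later schema changes:

```sql
-- The join and the surrogate keys live only here.
CREATE VIEW customer_order_v AS
SELECT o.order_id,
       c.customer_name,
       o.order_dt,
       o.total_amount
  FROM customer_order o
  JOIN customer c ON c.customer_id = o.customer_id;

-- Applications never see the join:
--   SELECT customer_name, total_amount
--     FROM customer_order_v
--    WHERE order_id = 1;
```

If customer is later split or columns are renamed, only the view definition changes; the application queries stay as they are.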
==================================================
   Aaron Bono
   Aranya Software Technologies, Inc.
   http://www.aranya.com
==================================================


Re: [SQL] Triggers using PL/pgSQL

2006-07-30 Thread Aaron Bono
On 7/30/06, Thusitha Kodikara <[EMAIL PROTECTED]> wrote:
Hello,I am interested in developing some triggers to keep track of records that are changed (possibly the changes of one or more specific columns). In addition to keeping the new values, I also need to keep the old values (may be on a separate table).  
Though I  have done similar things in other RDBMs using SQL, I find doing this in Postgres, a little bit complicated - may be because it needs to be done through a separate procedural language and through a separate function. The Postgres documentation also didn't provide much help ( the examples in C). I was mainly looking for example showing how to refer 'OLD' and 'NEW' rows using PL/pgSQL.
Can someone please direct me to some such examples?

How about this:

CREATE TABLE my_table (
   my_table_id BIGSERIAL NOT NULL,
   my_value VARCHAR(100) NOT NULL,
   CONSTRAINT my_table_pkey PRIMARY KEY (my_table_id)
);

CREATE TABLE my_table_history (
   my_table_id BIGINT NOT NULL,
   my_value VARCHAR(100) NOT NULL,
   create_dt TIMESTAMP NOT NULL,
   CONSTRAINT my_table_history_pkey PRIMARY KEY (my_table_id, create_dt)
);

CREATE OR REPLACE FUNCTION my_table_history_fn () RETURNS SETOF opaque AS
'
BEGIN
    -- if a trigger insert or update operation occurs
    IF TG_OP = ''INSERT'' OR TG_OP = ''UPDATE'' THEN
    execute
    ''INSERT INTO my_table_history ( '' ||
    ''    my_table_id, '' ||
    ''    my_value, '' ||
    ''    create_dt '' ||
    '') VALUES ( '' ||
    ''    '''''' || NEW.my_table_id || '''''', '' ||
    ''    '''''' || NEW.my_value || '''''', '' ||
    ''    now() '' ||
    '');''
    ;
    RETURN NEW;
    END IF;
END;
'
LANGUAGE 'plpgsql' VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;

CREATE TRIGGER my_table_history_tr BEFORE UPDATE OR INSERT
    ON my_table FOR EACH ROW
    EXECUTE PROCEDURE my_table_history_fn();

I tried it out and it works in version 8.1.

==================================================
   Aaron Bono
   Aranya Software Technologies, Inc.
   http://www.aranya.com
==================================================


Re: [SQL] Triggers using PL/pgSQL

2006-07-30 Thread Aaron Bono
No problem.  I have been meaning to put the same code together for
myself but have been putting it off.  It gave me an excuse to stop
procrastinating.

On 7/31/06, Thusitha Kodikara <[EMAIL PROTECTED]> wrote:

Hello, thanks a lot Aaron for the very quick and simple example. I just checked it on 7.4.5 also and it worked. I'll be able to continue with my development using the syntax of that example.
==================================================
   Aaron Bono
   Aranya Software Technologies, Inc.
   http://www.aranya.com
==================================================


Re: [SQL] Joining a result set from four (4) tables

2006-07-31 Thread Aaron Bono
On 7/31/06, John Tregea <[EMAIL PROTECTED]> wrote:
Hi,

Can anyone help me with the following? I am setting up a series of permissions of my own making in pgSQL 8.1.4. I have the following tables:

resource    -- a list of available resources
actions     -- the actions available to the user
policies    -- the actions that are allowed to be performed on individual resources
permissions -- matches users with granted actions on any resource
users       -- no surprises here

I have read the docs about joins but cannot get my head around the correct syntax. The following SQL returns all actions for every resource:

SELECT
   permission.serial_id,
   resource.name,
   actions.name,
   actions.classification,
   actions.display_group
FROM
   permission, policies, resource, actions
WHERE
   permission.user_id = '11' AND
   permission.related_id = policies.serial_id AND
   policies.status = 'Active' AND
   permission.status = 'Active' AND
   actions.status = 'Active' AND
   resource.status = 'Active'

I need a list of permissions back for each resource that a user is authorised to access (when they login to their GUI). I also need to check (at user login) if every record in the chain (e.g. resource, action, policy and permission) is "Active" before the permission record is considered valid.

The list for a resource called 'Scenarios' would look something like:

11900;"Scenarios";"Publish";"Action";"B"
11900;"Scenarios";"Authorise";"Action";"B"
11900;"Scenarios";"Create";"Action";"C"
11900;"Scenarios";"Update";"Action";"C"

I am guessing it should be an inner join, but my reference book does not show joins on this many tables. Thanks in advance for any help.

Can you include the table create statements with primary and foreign keys? That would help a lot.
==================================================
   Aaron Bono
   Aranya Software Technologies, Inc.
   http://www.aranya.com
==================================================


Re: [SQL] Triggers using PL/pgSQL

2006-07-31 Thread Aaron Bono
On 7/31/06, John DeSoi <[EMAIL PROTECTED]> wrote:
Is it really necessary to build a SQL string and use execute? It seems you could just issue the INSERT statement.

I don't think so, but there was some discussion a week or two ago about mixing variables and using execute. I am curious - does anyone know what the "best" approach is?

Also, I did not address deletions. If you still need to delete from the table, you will need to get rid of the foreign key on the history table. You will also need to decide how the history table will reflect the recording of those deletions.

I usually don't allow deletes on tables (unless absolutely necessary) and instead add start/end dates to the tables so rows can be marked as removed. Then I add a view that filters out the inactive rows - all applications use the views; they do not query the tables directly. This also allows you to "delete" rows at some time in the future or make them appear in the future too.
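For comparison, here is what the same history trigger might look like without EXECUTE - a plain static INSERT that references the NEW record directly, using the my_table/my_table_history schema from earlier in the thread. An untested sketch rather than a drop-in replacement:

```sql
CREATE OR REPLACE FUNCTION my_table_history_fn() RETURNS trigger AS '
BEGIN
    IF TG_OP = ''INSERT'' OR TG_OP = ''UPDATE'' THEN
        -- no string building needed: NEW can be used in a static statement
        INSERT INTO my_table_history (my_table_id, my_value, create_dt)
        VALUES (NEW.my_table_id, NEW.my_value, now());
    END IF;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';
```

Beyond being shorter, the static form avoids the quoting bugs and SQL-injection risk that come with concatenating NEW values into a string.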
On Jul 31, 2006, at 12:52 AM, Aaron Bono wrote:

> CREATE OR REPLACE FUNCTION my_table_history_fn () RETURNS SETOF opaque AS
> '
> BEGIN
>     -- if a trigger insert or update operation occurs
>     IF TG_OP = ''INSERT'' OR TG_OP = ''UPDATE'' THEN
>     execute
>     ''INSERT INTO my_table_history ( '' ||
>     ''    my_table_id, '' ||
>     ''    my_value, '' ||
>     ''    create_dt '' ||
>     '') VALUES ( '' ||
>     ''    '''''' || NEW.my_table_id || '''''', '' ||
>     ''    '''''' || NEW.my_value || '''''', '' ||
>     ''    now() '' ||
>     '');''
>     ;
>     RETURN NEW;
>     END IF;
> END;
> '
> LANGUAGE 'plpgsql' VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;

John DeSoi, Ph.D.
http://pgedit.com/
Power Tools for PostgreSQL
--
==================================================
   Aaron Bono
   Aranya Software Technologies, Inc.
   http://www.aranya.com
==================================================


Re: [SQL] primary keys as TEXT

2006-07-31 Thread Aaron Bono
On 7/31/06, Manlio Perillo <[EMAIL PROTECTED]> wrote:
Because serial ids are only surrogate keys. My tables have well-defined primary keys; the only problem is that they are of type TEXT (and span up to 3 columns). My concern is: how bad can performance be?

My guess is that the performance difference is not going to be noticeable unless you are dealing with huge amounts of data, and even then it may account for such a small hit that other issues will be more pressing - like writing better queries, creating a data warehouse, or adding proper indexes and keeping the indexes well maintained.

Anyone care to disagree?

==================================================
   Aaron Bono
   Aranya Software Technologies, Inc.
   http://www.aranya.com
==================================================


Re: [SQL] Joining a result set from four (4) tables

2006-08-01 Thread Aaron Bono
On 7/31/06, John Tregea <[EMAIL PROTECTED]> wrote:
Hi Aaron,

Here are the 'create table' statements. I have indicated the primary and foreign keys with trailing comments.

Thanks
John

Aaron Bono wrote:
> Can you include the table create statements with primary and foreign
> keys? That would help a lot.

CREATE TABLE resources(
  serial_id numeric NOT NULL, -- << Primary Key
  related_id numeric, -- << Foreign Key
  host_id int4,
  created timestamptz DEFAULT now(),
  modified timestamptz,
  valid_from timestamp,
  valid_to timestamp,
  schema_name varchar(32),
  grid_loc varchar(32),
  name varchar(32),
  status varchar(16),
  description text,
  comments text,
  sort_order int2,
  user_id int4 DEFAULT 0,
  located text,
  classification varchar(32),
  sequence_id int4
)

CREATE TABLE actions(
  serial_id numeric NOT NULL, -- primary key
  related_id numeric, -- foreign key on resources.serial_id
  host_id int4,
  created timestamptz DEFAULT now(),
  modified timestamptz,
  valid_from timestamp,
  valid_to timestamp,
  name varchar(32),
  status varchar(16) DEFAULT 'Active'::character varying,
  description text,
  comments text,
  sort_order int2 DEFAULT 0,
  user_id int4 DEFAULT 0, -- User_ID of the creator
  located text,
  classification varchar(32),
  sequence_id int4,
  in_box varchar(32),
  display_group varchar(2)
)

CREATE TABLE policies(
  serial_id numeric NOT NULL, -- primary key
  related_id numeric, -- foreign key on actions.serial_id
  resource_id numeric, -- foreign key on resources.serial_id
  owner_id numeric,
  authority_id int4,
  created timestamptz DEFAULT now(),
  modified timestamptz,
  valid_from timestamp,
  valid_to timestamp,
  status varchar(16) DEFAULT 'Active'::character varying,
  description text,
  comments text,
  classification varchar(32),
  user_id int4,
  sequence_id int4,
  inheritance text
)

CREATE TABLE permissions(
  serial_id numeric NOT NULL, -- primary key
  related_id numeric, -- foreign key on policies.serial_id
  user_id int4, -- foreign key on users.serial_id
  owner_id int4,
  authority_id int4,
  resource_id int4,
  created timestamptz DEFAULT now(),
  modified timestamptz,
  valid_from timestamp,
  valid_to timestamp,
  name varchar(32),
  acronym varchar(6),
  status varchar(16) DEFAULT 'Active'::character varying,
  inheritance text,
  description text,
  comments text,
  sort_order int2,
  located text,
  classification varchar(32),
  sequence_id int4
)

CREATE TABLE users(
  serial_id numeric NOT NULL, -- primary key
  created timestamptz DEFAULT now(),
  modified timestamptz,
  valid_from timestamp,
  valid_to timestamp,
  name varchar(64) NOT NULL,
  acronym varchar(6),
  status varchar(16),
  inheritance text,
  description text NOT NULL,
  comments text NOT NULL,
  sort_order int2 NOT NULL,
  clearance varchar(32) NOT NULL,
  administrator bool DEFAULT false,
  user_id int4 DEFAULT 0,
  next_serial_id int4 DEFAULT 1,
  classification varchar(32)
)

First of all, I must say it is very unfortunate that all the tables have the same name for their primary key column. It made things confusing at first when reading through the tables (the foreign key names don't match the primary keys they map to). It also makes it more likely you will do a join improperly.
Worse, your foreign key names are very ambiguous. The name related_id says nothing about what table it maps to. That means you need documentation or the foreign key definitions (are you using foreign key constraints?) to tell what is going on.
Anyway... on to solving your problem:

SELECT
   permission.serial_id,
   resource.name,
   actions.name,
   actions.classification,
   actions.display_group
FROM permission
INNER JOIN policies ON (
  policies.serial_id = permission.related_id
)
INNER JOIN actions ON (
  actions.serial_id = policies.related_id
)
INNER JOIN resource ON (
  -- This is tricky as policies maps to resources AND actions maps to resources
  -- so the real question is which one do you use?  I did both.
  policies.resource_id = resource.serial_id
  AND
  actions.related_id = resource.serial_id
)
WHERE
   permission.user_id = '11' AND
   policies.status = 'Active' AND
   permission.status = 'Active' AND
   actions.status = 'Active' AND
   resource.status = 'Active';

I always discourage listing more than one table in the FROM clause. Use INNER and OUTER JOINs - it is much easier to debug and somewhat self-documenting. That way, when you or another developer look at this in the future, you understand right away how the tables are being put together.
==================================================
   Aaron Bono
   Aranya Software Technologies, Inc.
   http://www.aranya.com
==================================================


Re: [SQL] Joining a result set from four (4) tables

2006-08-02 Thread Aaron Bono
On 8/1/06, John Tregea <[EMAIL PROTECTED]> wrote:
Because the logic structure of this software is in the front-end application rather than the database, there is a strong need to keep the naming of fields generic rather than specific; I am not using pre-defined foreign keys at all. If I was building the database with a purpose-specific goal I would be doing what you say. I have found, though, that when I label elements at different levels of the back end for one purpose, they are less transportable in the long run. In this case the naming conventions are actually stored in another table and applied as aliases when needed. That way I can change the names and labels (for a new client or industry) while the underlying structure remains the same. I hope to increase interoperability in this way as well.

We all find ourselves in different situations, and because of that what works for one person doesn't work for another - so I understand. Good luck with the application and future queries. Maybe you can use the existing structure of your application to build a query builder that writes a lot of your joins for you. That should help avoid simple typos or mix-ups when hand-writing your queries.

I am glad the queries worked for you.

-Aaron

==================================================
   Aaron Bono
   Aranya Software Technologies, Inc.
   http://www.aranya.com
==================================================

