Re: [SQL] Slow Query problem

2008-01-28 Thread Andreas Joseph Krogh
On Monday 28 January 2008 08:18:24 Premsun Choltanwanich wrote:
> Dear All,
>
> I am currently using PostgreSQL database version 8.0.13.  My problem
> relates to a slow result when a query using a defined view joins to another
> table for a result.
>
> Background:  I have 7 tables of invoice transactions.  The tables are
> slightly different in that they record different data (some different
> columns in each table).  There are about 250,000 records when a union view
> is created.  A simple query on this union performs satisfactorily.
>
> The invoice table union view is then joined with a table of receipts (which
> have a total of about 150,000 records).
>
> It takes around 3.5 seconds for "select * from view_transaction where
> member_id = 999  and receipt_no is null" (which returns unpaid invoices).
>
> By hard coding I created a single table from the 7 invoice tables (instead
> of creating a union) and then joined it with the receipt table.  The time
> for the same query improved to 1.8 seconds.
>
> To further improve things I tried to code the selection rather than to use
> a view, and so "select * from temp_transaction where member_id = 999 and
> receipt_no is null" provided the result in 0.5 seconds. (2 records returned
> containing the details of receipt_no, transaction_no, transaction_type,
> transaction_amount, member_id).
>
> I would prefer to be able to have completed the above by using unions and
> views.  Is it possible to do this, or am I better creating a permanent
> table of invoices and writing the query as I did above?
>
> Any comments on this and suggestions would be appreciated.  If there is
> documentation where I can read up please let me have a link.

It is very hard to help without you providing the schema for the tables/views 
involved. It sounds like you don't have any indexes, given the performance 
problems on queries like "select * from view_transaction where member_id = 999 
and receipt_no is null". But again, without the definitions of the view and 
underlying tables, it's very hard to help.
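For a filter like that, the usual first step is an index on the filter columns of each underlying table. A minimal sketch, with assumed table names (the original post gives none), and assuming receipt_no is a column of the invoice tables; if it instead comes from the join to receipts, index the join keys:

```sql
-- Hypothetical sketch: "invoice_a" stands in for each of the 7 invoice tables.
CREATE INDEX invoice_a_member_idx ON invoice_a (member_id);

-- A partial index can target the "unpaid" case directly:
CREATE INDEX invoice_a_unpaid_idx ON invoice_a (member_id)
    WHERE receipt_no IS NULL;
```

Repeat per underlying table so the planner can use an index scan on each arm of the union.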

-- 
Andreas Joseph Krogh <[EMAIL PROTECTED]>
Senior Software Developer / Manager
+-------------------------+---------------------------------------------+
OfficeNet AS              | The most difficult thing in the world is to |
Karenslyst Allé 11        | know how to do a thing and to watch         |
PO. Box 529 Skøyen        | somebody else doing it wrong, without       |
0214 Oslo                 | comment.                                    |
NORWAY                    |                                             |
Tlf:    +47 24 15 38 90   |                                             |
Fax:    +47 24 15 38 91   |                                             |
Mobile: +47 909 56 963    |                                             |
+-------------------------+---------------------------------------------+

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [SQL] Slow Query problem

2008-01-28 Thread Tom Lane
Andreas Joseph Krogh <[EMAIL PROTECTED]> writes:
> On Monday 28 January 2008 08:18:24 Premsun Choltanwanich wrote:
>> I am currently using PostgreSQL database version 8.0.13.  My problem
>> relates to a slow result when a query using a defined view joins to another
>> table for a result.

> It is very hard to help without you providing the schema for the tables/views
> involved.

One suggestion is that 8.2.x is significantly smarter about joins to
unions than previous releases were.  Whether this would help in your
particular case is impossible to say, though, without more detail.
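Whichever route is taken, the place to start diagnosing is the plan itself. A sketch, using the query text from the original post:

```sql
-- Show the chosen plan plus actual row counts and timings:
EXPLAIN ANALYZE
SELECT *
FROM view_transaction
WHERE member_id = 999
  AND receipt_no IS NULL;
```

Posting the EXPLAIN ANALYZE output along with the view and table definitions makes it much easier for the list to spot the slow step.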

regards, tom lane



[SQL] Proposed archival read only trigger on rows - prevent history modification

2008-01-28 Thread Bryce Nesbitt
I've got a largish database which once a month churns out some
invoices.  Once those invoices are created, there is zero business logic
reason to ever modify the underlying data.  A few hundred thousand
database rows go into the building of each month's invoices.  New data
is added all the time, and used at the end of each month.

I'm considering building a protective mechanism, and am seeking feedback
on the idea.  The approach would be to add a new column named "ro" to
each table at invoice level and below.  Then have a trigger deny any write
where 'ro' is true, and probably raise a huge stink.  As invoices are mailed
each month, all the supporting data would be set to "ro" true.

The idea is to protect years and years of archival data from an
inadvertent write (such as from an underspecified WHERE clause, or a
software bug).  Ideally the mechanism would never be triggered.  To
corrupt data would require two acts -- changing the "ro" column, then
issuing an update.
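A minimal PL/pgSQL sketch of the mechanism described above (only the "ro" column is from the post; "invoice_line" and the rest are assumed names). It permits the single act of flipping "ro" back off, so corrupting data still takes two separate statements:

```sql
-- Hypothetical sketch: table name "invoice_line" is assumed.
ALTER TABLE invoice_line ADD COLUMN ro boolean NOT NULL DEFAULT false;

CREATE OR REPLACE FUNCTION deny_ro_write() RETURNS trigger AS $$
BEGIN
    -- Block deletes of archived rows outright.
    IF TG_OP = 'DELETE' THEN
        IF OLD.ro THEN
            RAISE EXCEPTION 'row is archived read-only; refusing DELETE';
        END IF;
        RETURN OLD;
    END IF;
    -- Block updates unless the statement is clearing the ro flag
    -- (note: such a statement could change other columns too -- a
    -- deliberate simplification in this sketch).
    IF OLD.ro AND NEW.ro THEN
        RAISE EXCEPTION 'row is archived read-only; refusing UPDATE';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER invoice_line_ro_guard
    BEFORE UPDATE OR DELETE ON invoice_line
    FOR EACH ROW EXECUTE PROCEDURE deny_ro_write();
```

At month end, freezing the supporting rows is then a single UPDATE setting ro = true for the rows behind the mailed invoices.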

I'm seeking feedback on the need, the approach, performance issues, and
any instances of core database support for such a concept.  I do see an
Oracle feature that seems somewhat on target.  I am using postgres, in a
mostly database-independent manner.

  Bryce Nesbitt
  http://www.citycarshare.org/




Re: [SQL] Proposed archival read only trigger on rows - prevent history modification

2008-01-28 Thread chester c young

> I'm considering building a protective mechanism, and am seeking
> feedback
> on the idea.  The approach would be to add a new column named "ro" to
> each table at invoice level and below.  Then have a trigger on
> 'ro'==true deny the write, and probably raise a huge stink.  As invoices
> are mailed each month, all the supporting data would be set to "ro" true.

instead of triggers I use update-able views and permissions.

1. all dml goes through the view
2. use rules on the view to do dml to the table
3. in the rules, prevent updating any/all columns under whatever condition you need
4. grant dml on the view to your pgconnect user
5. revoke dml on the table from your pgconnect user

imho another instance where rules rule.  for example, you can easily
fit logging into the same view.
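A minimal sketch of steps 1-5 above, with assumed object names (invoice, invoice_v, app_user) and the "ro" column from the earlier post; the guard condition lives in the rule's rewritten UPDATE:

```sql
-- Hypothetical sketch: all object names are assumed.
CREATE VIEW invoice_v AS SELECT * FROM invoice;

-- Route updates through the view; archived (ro) rows are simply
-- untouched by the rewritten statement.
CREATE RULE invoice_v_upd AS ON UPDATE TO invoice_v
    DO INSTEAD
    UPDATE invoice
       SET amount = new.amount
     WHERE invoice.invoice_id = old.invoice_id
       AND NOT invoice.ro;

-- The application role can touch only the view, never the table:
GRANT SELECT, UPDATE ON invoice_v TO app_user;
REVOKE ALL ON invoice FROM app_user;
```

One design note: rules rewrite the statement silently, so a blocked write on an ro row reports "0 rows updated" rather than raising an error, unlike the trigger approach.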



  

