[SQL] dinamic sql

2006-08-29 Thread Manso Gomez, Ramon




Hi,

I need to write a Postgres function which executes a cursor. The query for
this cursor, however, is built from parameters passed by the user:

    if (user_Input = 'a')
        l_query := l_query || a_from_clause || 'where ename in ' || user_Input
    else if (user_Input = 'b')
        l_query := l_query || b_from_clause || 'where ename in ' || user_Input

    OPEN csr FOR l_query;

How do we do this in Postgres? I know how to pass parameters like
"ename = parameter", but if the SQL sentence is "ename in (parameters)",
it does not work.

Thanks in advance.


Re: [SQL] Trigger on Insert to Update only newly inserted fields?

2006-08-29 Thread Michael Fuhr
On Mon, Aug 28, 2006 at 11:53:36AM -0400, Henry Ortega wrote:
> CREATE FUNCTION updated_end_date() RETURNS trigger AS '
> BEGIN
>update table set end_date=(select effective-1 from table t2 where
> t2.employee=table.employee and t2.effective>table.effective order by
> t2.effective limit 1);
>RETURN NEW;
> END;
> ' LANGUAGE 'plpgsql';
> 
> That updates ALL of the records in the table which takes so long.
> Should I be doing things like this? Or is the update query on my trigger
> function so wrong?

You're updating the same table that has the trigger?  Beware of
endless trigger recursion.

You're not restricting the UPDATE with a WHERE clause, which explains
why it updates the entire table.  Maybe you meant this:

  update table set end_date = (...) where employee = new.employee;

The subselect for each row also slows down the update, although you
might not be able to avoid that if requirements demand a potentially
distinct end_date for each row.
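
A minimal sketch of that restriction, assuming a hypothetical table name
emp_history in place of the original's "table", with the trailing WHERE
limiting the update to the employee from the newly inserted row (and, per
the recursion warning, assuming the trigger fires only on INSERT):

  CREATE OR REPLACE FUNCTION updated_end_date() RETURNS trigger AS '
  BEGIN
      -- Only touch rows for the employee on the newly inserted row.
      UPDATE emp_history
         SET end_date = (SELECT t2.effective - 1
                           FROM emp_history t2
                          WHERE t2.employee = emp_history.employee
                            AND t2.effective > emp_history.effective
                          ORDER BY t2.effective
                          LIMIT 1)
       WHERE employee = NEW.employee;
      RETURN NEW;
  END;
  ' LANGUAGE plpgsql;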

-- 
Michael Fuhr



Re: [SQL] dinamic sql

2006-08-29 Thread Jaime Casanova

On 8/29/06, Manso Gomez, Ramon <[EMAIL PROTECTED]> wrote:


if(user_Input = 'a')
l_query := l_query||a_from_clause ||'where ename in '||user_Input
else
if(user_Input = 'b')
l_query := l_query||b_from_clause||'where ename in '||user_Input

OPEN csr FOR l_query;



l_query := l_query || a_from_clause
        || 'where ename in (' || quote_literal(user_input) || ')';
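
Putting it together, a minimal sketch of the whole function under assumed
names (emp_a and emp_b stand in for the real from-clauses); the two key
points are quoting the user input and opening the cursor with EXECUTE,
because the query is built as a string:

  CREATE OR REPLACE FUNCTION open_ename_cursor(user_input text)
  RETURNS refcursor AS $$
  DECLARE
      csr           refcursor;
      a_from_clause text := 'SELECT * FROM emp_a ';  -- hypothetical
      b_from_clause text := 'SELECT * FROM emp_b ';  -- hypothetical
      l_query       text;
  BEGIN
      IF user_input = 'a' THEN
          l_query := a_from_clause || 'WHERE ename IN ('
                     || quote_literal(user_input) || ')';
      ELSIF user_input = 'b' THEN
          l_query := b_from_clause || 'WHERE ename IN ('
                     || quote_literal(user_input) || ')';
      END IF;

      -- A query assembled as a string must be opened with EXECUTE.
      OPEN csr FOR EXECUTE l_query;
      RETURN csr;
  END;
  $$ LANGUAGE plpgsql;

For a genuine list of values, each element would need its own
quote_literal() call, joined with commas.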

--
regards,
Jaime Casanova

"Programming today is a race between software engineers striving to
build bigger and better idiot-proof programs and the universe trying
to produce bigger and better idiots.
So far, the universe is winning."
  Richard Cook



Re: [SQL] Performance Problem with sub-select using array

2006-08-29 Thread Aaron Bono
Make sure you do a reply to all so you include the list.

On 8/28/06, Travis Whitton <[EMAIL PROTECTED]> wrote:

  Ok, I actually got down to business with EXPLAIN ANALYZE. My performance
  was actually suffering from the DISTINCT in my SQL query and not the
  subquery, which I guess isn't run repeatedly since it's not constrained
  and can be cached by the optimizer? Bottom line is, by replacing DISTINCT
  with DISTINCT ON, all my index conditions show up in the EXPLAIN output.
  Best of all: Total runtime: 353.588 ms. Thanks for the help.

  Travis

  On 8/28/06, Travis Whitton <[EMAIL PROTECTED]> wrote:

    I'm pretty sure you're right, which leads me to my next question. Is it
    possible to pass a column from an outer query to a subquery? For
    example, is there a way to do something like

      SELECT owners.id AS owner_id,
             array(SELECT dogs.name WHERE owners.id = owner_id) ...

    I would just do a normal inner join, but then I get a row for each item
    that would otherwise come back nicely packaged in the array. The
    overhead of rearranging the data takes even more time than the subquery
    approach.

    On 8/28/06, Aaron Bono <[EMAIL PROTECTED]> wrote:

      On 8/24/06, Travis Whitton <[EMAIL PROTECTED]> wrote:

        Hello all, I'm running the following query on about 6,000 records
        worth of data, and it takes about 8 seconds to complete. Can anyone
        provide any suggestions to improve performance? I have an index on
        two columns in the transacts table (program_id, customer_id). If I
        specify a number for customer.id in the sub-select, query time is
        reduced to about 2 seconds, which still seems like a million years
        for only 6,000 records, but I'm guessing that the sub-select can't
        resolve the id since it's done before the outer query, so it scans
        the entire recordset for every row? Transacts is a many-to-many
        table for customers and programs. I know this query doesn't even
        reference any columns from programs; however, I dynamically insert
        where clauses to constrain the result set.

          SELECT distinct customers.id, first_name, last_name, address1,
                 contact_city, contact_state, primary_phone, email,
                 array(select programs.program_name
                         from transacts, programs
                        where customer_id = customers.id
                          and programs.id = transacts.program_id
                          and submit_status = 'success') AS partners
            from customers, transacts, programs
           where transacts.customer_id = customers.id
             and transacts.program_id = programs.id

      My guess is that your problem is that you may be getting 6000 rows,
      but the array(select ...) is having to run once for each record
      returned (so it is running 6000 times).

      Try an explain analyze:
      http://www.postgresql.org/docs/7.4/interactive/sql-explain.html
      - that will reveal more of where the performance problem is.

==
Aaron Bono
Aranya Software Technologies, Inc.
http://www.aranya.com
http://codeelixir.com
==
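
For reference, a sketch of the DISTINCT ON rewrite Travis describes, using
the columns from the query above; DISTINCT ON (customers.id) keeps one row
per customer and needs a matching leading ORDER BY expression:

  SELECT DISTINCT ON (customers.id)
         customers.id, first_name, last_name, address1, contact_city,
         contact_state, primary_phone, email,
         array(SELECT programs.program_name
                 FROM transacts, programs
                WHERE customer_id = customers.id
                  AND programs.id = transacts.program_id
                  AND submit_status = 'success') AS partners
    FROM customers, transacts, programs
   WHERE transacts.customer_id = customers.id
     AND transacts.program_id = programs.id
   ORDER BY customers.id;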


Re: [SQL] Performance Problem with sub-select using array

2006-08-29 Thread Aaron Bono
On 8/28/06, Travis Whitton <[EMAIL PROTECTED]> wrote:

  I'm pretty sure you're right, which leads me to my next question. Is it
  possible to pass a column from an outer query to a subquery? For example,
  is there a way to do something like

    SELECT owners.id AS owner_id,
           array(SELECT dogs.name WHERE owners.id = owner_id) ...

  I would just do a normal inner join, but then I get a row for each item
  that would otherwise come back nicely packaged in the array. The overhead
  of rearranging the data takes even more time than the subquery approach.

I don't think you can do that but I may be wrong.  I usually try to stay
away from correlated sub-queries because of performance concerns and query
complexity.  I find simple subqueries with well formed inner/outer joins
work much better.

Does anyone know where documentation about the array function can be found?
I did a search but cannot find it on the postgresql web site.

On 8/28/06, Aaron Bono <[EMAIL PROTECTED]> wrote:

  On 8/24/06, Travis Whitton <[EMAIL PROTECTED]> wrote:

    Hello all, I'm running the following query on about 6,000 records worth
    of data, and it takes about 8 seconds to complete. Can anyone provide
    any suggestions to improve performance? I have an index on two columns
    in the transacts table (program_id, customer_id). If I specify a number
    for customer.id in the sub-select, query time is reduced to about 2
    seconds, which still seems like a million years for only 6,000 records,
    but I'm guessing that the sub-select can't resolve the id since it's
    done before the outer query, so it scans the entire recordset for every
    row? Transacts is a many-to-many table for customers and programs. I
    know this query doesn't even reference any columns from programs;
    however, I dynamically insert where clauses to constrain the result set.

      SELECT distinct customers.id, first_name, last_name, address1,
             contact_city, contact_state, primary_phone, email,
             array(select programs.program_name
                     from transacts, programs
                    where customer_id = customers.id
                      and programs.id = transacts.program_id
                      and submit_status = 'success') AS partners
        from customers, transacts, programs
       where transacts.customer_id = customers.id
         and transacts.program_id = programs.id

  My guess is that your problem is that you may be getting 6000 rows, but
  the array(select ...) is having to run once for each record returned (so
  it is running 6000 times).

  Try an explain analyze:
  http://www.postgresql.org/docs/7.4/interactive/sql-explain.html
  - that will reveal more of where the performance problem is.

==
Aaron Bono
Aranya Software Technologies, Inc.
http://www.aranya.com
http://codeelixir.com
==
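
For what it's worth, a minimal sketch of the array(subquery) form under
discussion, with hypothetical owners and dogs tables; the subquery
references a column of the outer query (a correlated subquery), which is
what packs each owner's dog names into a single array:

  SELECT o.id AS owner_id,
         array(SELECT d.name FROM dogs d WHERE d.owner_id = o.id) AS dog_names
    FROM owners o;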


[SQL]

2006-08-29 Thread Sumeet
Hi All,

I'm having very strange problems with indexing. I have a large number of
partial indexes; when I try to run some queries, sometimes they run very
fast and sometimes they are 100 times slower than the previous run. I tried
running VACUUM ANALYZE a couple of times and it didn't help me at all. I
also tried EXPLAIN ANALYZE, and what I found was that the query execution
plan keeps changing. Is there any better way to gain control over this? I
tried turning seqscan off and messing with some other parameters, but that
didn't really help. Also, can somebody point me to links which explain more
about query execution plans, i.e. each of the plan nodes like bitmap heap
scan, index scan, etc.?

--
Thanks,
Sumeet


Re: [SQL] to get DD-MM-YYYY format of data

2006-08-29 Thread codeWarrior







17.10.2. Locale and Formatting

DateStyle (string)

    Sets the display format for date and time values, as well as the rules
    for interpreting ambiguous date input values. For historical reasons,
    this variable contains two independent components: the output format
    specification (ISO, Postgres, SQL, or German) and the input/output
    specification for year/month/day ordering (DMY, MDY, or YMD). These can
    be set separately or together. The keywords Euro and European are
    synonyms for DMY; the keywords US, NonEuro, and NonEuropean are synonyms
    for MDY. See Section 8.5, “Date/Time Types” for more information. The
    default is ISO, MDY.
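
A short sketch of the two usual approaches, with hypothetical table and
column names: to_char() for an exact output format, or DateStyle for
session-wide day/month ordering:

  -- Explicit output formatting, independent of DateStyle:
  SELECT to_char(hire_date, 'DD-MM-YYYY') FROM employees;

  -- Or change the session's date style (affects output and how
  -- ambiguous input like 05-06-2006 is interpreted):
  SET datestyle TO 'SQL, DMY';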
 

  ""Penchalaiah P."" <[EMAIL PROTECTED]> wrote in 
  message news:[EMAIL PROTECTED]...
  
  Hi,

  I am using the date data type for one of the fields in my table, but
  whenever I pass values to that field it takes the YYYY-MM-DD format.
  I don't want it like that; I need it like this: DD-MM-YYYY. What do I
  have to do for this?
  Thanks & Regards,
  Penchal reddy | Software Engineer
  Infinite Computer Solutions
   
  




Re: [SQL]

2006-08-29 Thread Andrew Sullivan
On Tue, Aug 29, 2006 at 12:44:28PM -0400, Sumeet wrote:
> me at all. I also tried EXPLAIN ANALYZE, and what I found was that the query
> execution plan keeps changing. Is there any better way to gain control

My bet is that you don't have enough sample data to support a
consistent query plan.  SET STATISTICS might be what you need.
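
A minimal sketch of that, with hypothetical table and column names; a
larger per-column statistics target gives the planner a bigger sample to
estimate from, and ANALYZE rebuilds the statistics afterwards:

  ALTER TABLE orders ALTER COLUMN status SET STATISTICS 200;
  ANALYZE orders;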

> parameters, but that didn't really help. Also, can somebody point me to
> links which explain more about query execution plans, i.e. each of the
> plan nodes like bitmap heap scan, index scan, etc.?

On [mumble techdocs I think] there's a tutorial called EXPLAIN
explained.  Google will help you -- just use that title, with maybe
"postgresql" tacked onto your search string.

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
The plural of anecdote is not data.
--Roger Brinner



[SQL] UTF8 Vs Latin9 and valid encoding.

2006-08-29 Thread Marcelo .

Hello,
I am using Postgres 8.1 for Windows and have the
following dilemma.

I have 2 databases, one encoded in UTF-8 and one in
Latin9.
In both DBs I can create the following test table:

CREATE TABLE "holaniño"
(
  nombre varchar(5),
  "titúlo" varchar(10)
) 
WITHOUT OIDS;

Then in both DBs I can execute the following SQL:

select * from holaniño where titúlo = 'compania'

HOWEVER, when I change the SQL to:
select * from holaniño where titúlo = "compañia"

(changed the N to a Ñ in the filter)
I receive the error msg:
ERROR:  invalid UTF-8 byte sequence detected near byte
0xf1

Whereas in the LATIN9 db, there is no problem.

Isn't UTF-8 the same as UNICODE where it encapsulates
all other coding schemes including latin9?

Any insight would be greatly appreciated.
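
A hedged guess, not confirmed in this thread: if the client is actually
sending Latin-1/Latin-9 bytes (0xf1 is "ñ" in those encodings) to the
UTF-8 database, declaring the client encoding lets the server convert them
instead of rejecting the byte sequence. A sketch:

  -- tell the server what encoding the client sends
  SET client_encoding TO 'LATIN9';
  -- string literals take single quotes; double quotes denote identifiers
  SELECT * FROM "holaniño" WHERE "titúlo" = 'compañia';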





