Re: [SQL] PERSISTANT PREPARE (another point of view)

2008-07-28 Thread Pavel Stehule
Hello

2008/7/28 Milan Oparnica [EMAIL PROTECTED]:
 Pavel Stehule wrote:

 Hello

 this is close to my implementation of stored procedures - it's not far from
 your view on prepared statements. The result of any unbound select
 is forwarded directly to the client - the syntax is a little bit
 different, but the idea is the same


 create procedure foo(params)
 as $$
 begin
  select a, b, c from tab1 -- unbound select
  select 
 end;

 and you can call it with the CALL statement. The current implementation of SRF
 functions in plpgsql isn't efficient for big results - it needs a
 materialised table in memory. Oracle's pipelined functions are better,
 but we are limited by the one-thread architecture.

 regards
 Pavel Stehule


 Yeah, this would be pretty much the same.

 I just didn't understand: is this already implemented in PG?


No - I have a working prototype now, and I am searching for sponsors to
finish this work.

http://okbob.blogspot.com/2007/11/stacked-recordset-multirecordset.html
http://okbob.blogspot.com/2007/11/first-real-procedures-on-postgresql.html

But it's really similar to what you want, and it could be interesting
for you together with plpgpsm, because plpgpsm (SQL/PSM)
allows one-statement procedures, like:

create procedure foo(...) as $$
select * from tab;
$$ language plpgpsm;

so this is the standard syntax for a persistent prepared statement :)

regards
Pavel Stehule



 I've tried to create a PROCEDURE in the Query Tool of pgAdmin III and I get
 the following result:

 ERROR:  syntax error at or near PROCEDURE
 LINE 1: CREATE PROCEDURE aatestubnd(refcursor, insklid integer, inda...
   ^

 ** Error **

 ERROR: syntax error at or near PROCEDURE
 SQL state: 42601
 Character: 8


 If I try with CREATE FUNCTION I get this result (as expected):

 ERROR:  function result type must be specified

 ** Error **

 ERROR: function result type must be specified
 SQL state: 42P13


 Is this because of the Query Tool's limitations, or is this feature not yet
 implemented in Postgres?

 Though CREATE PROCEDURE should be a valid SQL-92 standard statement.

 Best regards,

 Milan Oparnica

 --
 Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-sql




Re: [SQL] PERSISTANT PREPARE (another point of view)

2008-07-28 Thread Milan Oparnica

chester c young wrote:


   Session #1 has search_path=schema1
   Session #2 has search_path=schema2
if session #1 attempts to exec stmt, it prepares and uses schema1
if session #2 then attempts to use stmt, prepared globally - disaster



I'm sorry, I wasn't precise enough. When I said global I meant global in 
schema scope, just like standard stored procedures are.


Did you see the post from Mr. Pavel Stehule about PROCEDURES?
Such an implementation would have the same effect as a global prepare,
and it could even support nesting (calling a procedure from within another 
procedure).


Let's hope he does his implementation in 8.4.

Regards,

Milan Oparnica



Re: [SQL] PERSISTANT PREPARE (another point of view)

2008-07-27 Thread Milan Oparnica

chester c young wrote:

1. I like the idea, because right now I'm having to track which prepared statement 
(sets) are in which session.  I'm using xcache to track this, but it's a little bit 
dangerous.  I could query the database first, but the whole idea is to take load 
off the db.



Thanks for the support. The whole idea is to have the DBMS support as much 
SQL-related functionality as possible. This way you wouldn't have to wonder 
whether the client has already prepared the statement or not. If you stored 
it in the public schema, then it MUST be there.



2. perhaps global could mean simply that the definition is global - if called 
in a session and it does not exist in the session, then the session prepares it 
first from the global def. There would need to be a back reference in case the 
global def was changed or dropped.



Yes, this seems to be a good idea. Something like this would be easier 
to implement than having the whole process run in some shared memory 
space. The implementation could be like:


1. App calls SQL like EXEC statement_name
2. PG engine looks first in the local prepared statements, as it does now
3. If it doesn't find it there, it looks in the public space
4. If it's found there, copy both the SQL and the execution plan stored 
in global space to the local process

5. Execute the statement as if it had been prepared in the local process.

Greatly simplified, the new implementation would only add steps 3 and 4 to 
the current implementation of PREPARED statements.
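The five steps above can be sketched as follows. This is a minimal illustrative model in Python, assuming hypothetical names (GLOBAL_STATEMENTS, Session, a stand-in "plan" object) - not PostgreSQL internals:

```python
# Sketch of the proposed two-level lookup: a per-session cache of
# prepared statements backed by a shared "public" definition space.
# All names here are illustrative assumptions, not PostgreSQL code.

GLOBAL_STATEMENTS = {}  # public space: name -> (sql_text, plan)

def prepare_global(name, sql_text):
    # "Planning" is simulated by wrapping the SQL text.
    GLOBAL_STATEMENTS[name] = (sql_text, ("PLAN", sql_text))

class Session:
    def __init__(self):
        self.local = {}  # per-session prepared statements

    def execute(self, name, *params):
        entry = self.local.get(name)             # step 2: local first
        if entry is None:
            entry = GLOBAL_STATEMENTS.get(name)  # step 3: public space
            if entry is None:
                raise KeyError("prepared statement %r does not exist" % name)
            self.local[name] = entry             # step 4: copy SQL + plan
        sql_text, plan = entry
        return plan, params                      # step 5: run "locally"
```

After the first EXEC, the statement behaves exactly like a locally prepared one, since the session holds its own copy of the plan.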



3. I don't understand your statement about how prepared statements can be used as 
subqueries - do you mean something like

  select .. from tab1 t1 where t1.col1 in( exec prep1(..) )

or what exactly?



Well, it's best explained with an example. Note that such a construction 
would require extra coding from the PG developers.


##1 Let's pretend we have a prepared statement:

PREPARE PS_Sector (InSector INTEGER) AS
SELECT SectorID,SectorName,SectorType
FROM CompanySectors
WHERE SectorID = InSector;


##2 Now we can build another statement over the first one like this:

PREPARE PS_SectorExpences (InDateFrom DATETIME, InDateTo DATETIME, 
InSector INTEGER) AS

SELECT S.SectorID,S.SectorName,S.SectorType,E.ExpType,E.Ammount
FROM PS_Sector AS S INNER JOIN CompanyExpences AS E
ON S.SectorID=E.SectorID
WHERE E.Date >= InDateFrom AND E.Date <= InDateTo;


Let me explain. Statement ##2 uses PS_Sector in a direct join with the 
expenses table. By automatically passing the parameter InSector by name 
to the underlying prepared statement, it first returns that statement's 
results, then joins them to the table CompanyExpences on the field SectorID.


This may look complicated to implement, but it's just a simplified 
version of implementing SUB-QUERIES. The same effect is gained if you 
simply replace the PS_Sector reference in ##2 with the actual SQL statement 
it holds.


PREPARE PS_SectorExpences (InDateFrom DATETIME, InDateTo DATETIME, 
InSector INTEGER) AS

SELECT S.SectorID,S.SectorName,S.SectorType,E.ExpType,E.Ammount
FROM (SELECT SectorID,SectorName,SectorType
FROM CompanySectors
WHERE SectorID = InSector) AS S INNER JOIN CompanyExpences AS E
ON S.SectorID=E.SectorID
WHERE E.Date >= InDateFrom AND E.Date <= InDateTo;


Only this way, the subquery can come with the execution plan from the ##1 
prepared statement.
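The manual rewrite above - substituting the stored statement's SQL text for its reference - can be sketched as a simple text transformation. The inline_statement helper below is a hypothetical illustration, not a PG feature:

```python
import re

def inline_statement(outer_sql, name, inner_sql):
    """Replace a reference to a stored statement (e.g. 'PS_Sector')
    with its SQL text wrapped in parentheses, exactly like the manual
    subquery rewrite shown in the message."""
    return re.sub(r"\b%s\b" % re.escape(name),
                  "(%s)" % inner_sql, outer_sql)
```

A real implementation would of course substitute at the plan level rather than the text level, so the inner statement's execution plan could be reused directly.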



Where could this be used ?

Since this is an obvious performance downgrade, you must have a good reason 
for using such an approach. Mostly, it's reporting!


A good report relies on the data it gets. Sometimes the statements needed for 
reporting include very complex data selection, and it usually takes 
a long time until everything is covered by your SELECT query. At that 
point you can implement the core logic in a base prepared statement and 
then use prepared statements over it to get various aggregations or 
different points of view on the data.


But the best thing is: WHEN YOU ADD SOMETHING TO THE BASIC LOGIC OF YOUR 
SYSTEM, YOU ONLY CHANGE THE BASE PREPARED STATEMENT. If you wrote it 
cleverly, then all statements built upon it WILL IMMEDIATELY SEE THE NEW 
IMPLEMENTATION. This is a very powerful and efficient way of introducing 
improvements into the system without having to worry that the majority of 
your reports will show false data until you find time to recompile them.


NOTE: You don't want to manipulate huge database systems this way, but 
it's very usable for 99% of small and medium business apps (databases up 
to a few gigabytes). It greatly depends on your internal database 
representation, though.


Best regards,

Milan Oparnica



Re: [SQL] PERSISTANT PREPARE (another point of view)

2008-07-27 Thread Milan Oparnica

Pavel Stehule wrote:

Hello

this is close to my implementation of stored procedures - it's not far from
your view on prepared statements. The result of any unbound select
is forwarded directly to the client - the syntax is a little bit
different, but the idea is the same


create procedure foo(params)
as $$
begin
  select a, b, c from tab1 -- unbound select
  select 
end;

and you can call it with the CALL statement. The current implementation of SRF
functions in plpgsql isn't efficient for big results - it needs a
materialised table in memory. Oracle's pipelined functions are better,
but we are limited by the one-thread architecture.

regards
Pavel Stehule



Yeah, this would be pretty much the same.

I just didn't understand: is this already implemented in PG?

I've tried to create a PROCEDURE in the Query Tool of pgAdmin III and I get 
the following result:


ERROR:  syntax error at or near PROCEDURE
LINE 1: CREATE PROCEDURE aatestubnd(refcursor, insklid integer, inda...
   ^

** Error **

ERROR: syntax error at or near PROCEDURE
SQL state: 42601
Character: 8


If I try with CREATE FUNCTION I get this result (as expected):

ERROR:  function result type must be specified

** Error **

ERROR: function result type must be specified
SQL state: 42P13


Is this because of the Query Tool's limitations, or is this feature not yet 
implemented in Postgres?


Though CREATE PROCEDURE should be a valid SQL-92 standard statement.

Best regards,

Milan Oparnica



Re: [SQL] PERSISTANT PREPARE (another point of view)

2008-07-27 Thread chester c young

  2. perhaps global could mean simply that
 the definition is global - if called for session and not
 exist in session, then session prepares it first from the
 global def.  there would need to be a back reference in
 case the global def was changed or dropped.
  
 
 Yes, this seems to be a good idea. Something like this
 would be easier 
 to implement then having the whole process run in some
 shared memory 
 space. The implementation could be like:
 
 1. App calls SQL like EXEC
 statement_name
 2. PG Engine looks first in local prepared statements as it
 does now
 3. If it doesn't find it there it looks in public space
 4. If it's found there copy both the SQL and the
 execution plan stored 
 in global space to local process
 5. Execute the statement as if it was prepared in local
 process.
 
 Simplified greatly, new implementation could
 only add steps 3 and 4 to 
 current implementation of PREPARED statements.

I think it only makes sense if the prepared statement is in the session.

For example, say the Globally submitted statement is
   stmt: select * from tab1
assume
   Session #1 has search_path=schema1
   Session #2 has search_path=schema2
if session #1 attempts to exec stmt, it prepares and uses schema1
if session #2 then attempts to use stmt, prepared globally - disaster

If submitted globally, it is not prepared at all until used; but, if ever used, 
it is prepared in the current session's environment.  It's really more like a rule 
whose preparation is remembered in the session.
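That idea - a global definition that each session prepares lazily in its own environment, with a back reference so a changed global def forces re-preparation - can be sketched as follows (all names are hypothetical illustrations):

```python
# Global store keeps only the SQL text plus a version; each session
# prepares lazily in its own environment (modelled here as just its
# search_path) and remembers the version it prepared against.

GLOBAL_DEFS = {}  # name -> (sql_text, version)

def submit_global(name, sql_text):
    _, version = GLOBAL_DEFS.get(name, (None, 0))
    GLOBAL_DEFS[name] = (sql_text, version + 1)

class Session:
    def __init__(self, search_path):
        self.search_path = search_path
        self.prepared = {}  # name -> (plan, version): the back reference

    def execute(self, name):
        sql_text, version = GLOBAL_DEFS[name]
        cached = self.prepared.get(name)
        if cached is None or cached[1] != version:
            # Prepare in THIS session's environment, so sessions with
            # different search_paths never share a plan.
            plan = ("PLAN", sql_text, self.search_path)
            self.prepared[name] = (plan, version)
        return self.prepared[name][0]
```

With this scheme the schema1/schema2 disaster cannot happen: each session plans against its own search_path, and a changed or dropped global def is detected through the version back reference.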




  



Re: [SQL] PERSISTANT PREPARE (another point of view)

2008-07-25 Thread Milan Oparnica

Craig Ringer wrote:

Milan Oparnica wrote:
I found this link from IBM DB2 developers showing why PERSISTENT 
PREPARE is a good idea and how it could be implemented.


[snip]

NONE OF THE POPULAR SQL DBMSs (Oracle, MS SQL, MySQL, Postgres, InterBase, 
Firebird) HAS THIS FEATURE.


WHY ?


I suspect that people tend to use SQL or PL/PgSQL stored procedures 
instead. I'm not 100% sure SQL functions cache their query plans, but I 
know PL/PgSQL does.


Exactly what is gained by the use of persistent prepare over the use of 
a stored procedure?




It's easier to write and call prepared statements than SPs.
When writing, you just create the parameters and the SQL body of the statement.
When calling from the application or report engine, all you have to do 
is EXEC statement_name (parameters).


In the case of SPs written as SET OF CustomDataType it's pretty easy to 
call (SELECT * FROM proc(params)), but it's hard to maintain the 
CustomDataType when changing table structures.


In the case of SPs written using CURSORS, the calling convention is not simple.

BEGIN;
SELECT proc('cursor_name', input parameter);
FETCH ALL FROM cursor_name;
END;

This usually requires more complicated ways of defining source 
recordsets in report designers. I'm also not sure how cursors use 
the tables' indexes. I've run some tests on several million records 
with and without indexes and found no significant difference in 
performance. Plus, for now, CURSORS remain READ-ONLY in PG.


There is one thing more. Prepared statements could be used as a table-type 
(or view-type) data source. That means we could run prepared statements 
over other prepared statements that accept the same parameters. This makes 
it possible to create complex low-level BASE queries and dozens of 
reporting queries that rely on them. Something like subquery structures:


SELECT A.Fld1,B.Fld2
FROM
(SELECT Fld1 FROM Table1 WHERE xxx) A INNER JOIN Table2 B ON A.Fld1=B.Fld2
WHERE yyy



What would the interface to the feature be through database access 
drivers like JDBC? Explicit PREPARE GLOBAL or similar, then invocation 
with EXECUTE ?




They could remain the same. If possible, when calling EXECUTE xxx, PG 
could try the local-then-global (or reverse) PREPARED STATEMENT pools, and 
run the statement found first. This is greatly simplified; it 
would require much more detailed planning if this option is to be built into PG.



How would users using increasingly common layers like Hibernate/JPA use it?



I don't know, I never used Hibernate/JPA; somebody else could speak to that. 
Making a PERSISTENT PREPARE statement available in PG doesn't mean that 
everybody must use it.


I'm also curious about how you'd address the possible need for periodic 
re-planning as the data changes, though AFAIK SQL functions suffer from 
the same limitation there.




Perhaps there could be several ways. It could be implemented within the 
VACUUM procedure, which could re-plan all global prepared statements 
according to the new circumstances. There should also be a manual 
re-planning command for a specific statement, maybe even with the option of 
passing parameters to it so the optimization could be even more efficient.
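A minimal model of such a manual re-planning command, under the assumption of a simple name -> (SQL, plan) catalog (purely illustrative; no such command exists in PostgreSQL):

```python
# Hypothetical catalog of globally stored statements and their plans.
GLOBAL_PLANS = {}  # name -> {"sql": text, "plan": object}

def store(name, sql_text):
    GLOBAL_PLANS[name] = {"sql": sql_text, "plan": ("PLAN", sql_text, ())}

def replan(name, sample_params=()):
    # Rebuild the plan from the stored SQL, optionally specializing it
    # to sample parameter values so the optimizer could choose a plan
    # suited to typical inputs.
    entry = GLOBAL_PLANS[name]
    entry["plan"] = ("PLAN", entry["sql"], tuple(sample_params))
    return entry["plan"]
```

A background job (the VACUUM-driven variant suggested above) would simply call replan() for every stored name.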


I guess I personally just don't understand what the point of the 
persistent prepare feature you describe is. However, this post that you 
linked to:


http://archives.postgresql.org/pgsql-hackers/2008-04/msg00867.php

actually describes a query plan cache, rather than persistent prepare. 
The post assumes the app will explicitly manage the cache, which I'm not 
sure is a good idea, but I can see the point of a plan cache. There 
might be some heuristics Pg could use to decide what to cache and to 
evict (planner time cost vs memory use, frequency of use, etc) so the 
app doesn't have to know or care about the plan cache.  However, I'm not 
too sure how you'd match an incoming query to a cached plan, and 
determine that the plan was still valid, with enough speed to really 
benefit from the plan cache. Then again, I don't know much about Pg's 
innards, so that doesn't mean much.


Tom Lane responded to that post to point out some of the complexities:

http://archives.postgresql.org/pgsql-hackers/2008-04/msg00868.php

--
Craig Ringer



I think persistent prepare has something to do with globally caching 
plans. Preparing a statement actually prepares a plan for its execution, 
giving us the possibility to execute it. With persistent prepared 
statements, these plans somehow have to go from local to global. I 
agree they shouldn't be managed by the app itself; the app should only create 
requests for the creation, modification, execution and removal of such objects.

This post is about a point of view. I know we can use stored procedures 
and views and other good features of PG to manipulate all data we need.


What I'm aiming at is: it can be done in a simpler and more efficient way with 
a PERSISTENT PREPARE implementation. Let me point out the key benefits:


- huge number of database developers could use this technique without 
getting 

Re: [SQL] PERSISTANT PREPARE (another point of view)

2008-07-25 Thread Milan Oparnica

Jeff Williams wrote:


You can do this with cursors, but I'm not sure if you still get the
query caching?



I can do it with cursors; I don't get the query cache, but for now 
that is not an issue.


The issue is performance.

Although using cursors shows better performance than using SET OF 
user-defined data types, it still swallows a huge amount of memory.


Here are the results of a query (I post the query at the end just as 
an example of its complexity) running on a database of 1.6 GB of data and 
around 5 million records:


                    Time     Approx RAM usage    Swapping
PREPARE statement   5 min.   300 MB              None
SP with SET OF      16 min.  800 MB              Heavy
SP with cursor      9 min.   800 MB              Medium

Environment:

Run in the Query Tool of pgAdmin III. It does no data modification.
Number of clients: 1
PostgreSQL version: 8.3
Processor: P4 2.66 GHz
RAM: 1 GB

There were no other active programs or demanding services running in 
the background.



I presume adding more RAM would increase performance, but we usually have 
10 to 20 clients working at a time. I guess that would add some load.


Besides, fetching records from a cursor-type SP requires a transactional 
calling convention (BEGIN ... statement ... END). At this moment 
I don't know whether this can be done in our main report designer tool - 
Business Objects (formerly Crystal Reports).


Best regards,

Milan Oparnica


Here is the text of SQL statement:

Note: All fields in the joins are indexed, and every WHERE field is also 
indexed except the boolean fields.


CREATE FUNCTION aaTestCurs (refcursor, insklid integer, indatdo 
timestamp without time zone) RETURNS refcursor AS $$

BEGIN
OPEN $1 FOR
SELECT 10 AS 
Poredak,U.UlID,U.DatDoc,DT.FKomis,DT.FSCObrac,DT.FNivelTot,DT.FNivelParc,
  U.DocTip,SU.SasUlID,SU.ElID,ROUND(CAST(SU.Kol AS NUMERIC),8) AS 
Kol,SU.NCena,ROUND(CAST(SU.TKol AS NUMERIC),8) AS TKol,

  SU.SCena,SU.SklID,0 AS Obradjen,0 AS IzmSC,0 AS ImaIzmena,
  U.Link,SU.FCena,SU.Popust,SU.Marza,SU.MCena,SU.MPorez,SU.KKol,
  SU.PorezU,SUN.NNKol,SUN.NPCena,SUN.NTotal,SUN.NTotPP,SUN.NTotPU,0 
AS NivelEdited,

  SUN.NNKol AS NivelOldKol,0 AS LastTKol,0 AS LastSCena,SK.RacTipSKL,
  U.DTStamp AS Tabela,U.DopSklID,DT.FPrenos,SK.FRemote
  FROM(((defDocTip AS DT INNER JOIN Ulaz AS U ON DT.DTID = 
U.DocTip) INNER JOIN SasUlaz AS SU ON U.UlID = SU.UlID) INNER JOIN 
Skladista AS SK ON U.SklID = SK.SklID) LEFT JOIN SasUlazNivel AS SUN ON 
SU.SasUlID = SUN.SasUlID
  WHERE DT.FSCObrac AND NOT(SK.FSkipSCObr <> false) AND 
NOT(SK.FSkipNivel <> false) AND U.DatDoc <= InDatDo

  UNION ALL
  SELECT 20 AS 
Poredak,U.UlID,U.DatDoc,DT.FKomis,DT.FSCObrac,DT.FNivelTot,DT.FNivelParc,
  U.DocTip,SU.SasUlID,SU.ElID,ROUND(CAST(SU.Kol AS 
NUMERIC),8),SU.NCena,ROUND(CAST(SU.TKol AS NUMERIC),8),

  SU.SCena,SU.SklID,0 AS Obradjen,0 AS IzmSC,0 AS ImaIzmena,U.Link,

SU.FCena,SU.Popust,SU.Marza,SU.MCena,SU.MPorez,SU.KKol,SU.PorezU,SUN.NNKol,
  SUN.NPCena,SUN.NTotal,SUN.NTotPP,SUN.NTotPU,0 AS NivelEdited,
  SUN.NNKol AS NivelOldKol,0 AS LastTKol,0 AS 
LastSCena,SK.RacTipSKL,(U.DTStamp -(20*case when DT.Rank <> 5 
then -1 else 0

end)) AS Tabela,
  U.DopSklID,DT.FPrenos,SK.FRemote
  FROM(((defDocTip AS DT INNER JOIN Ulaz AS U ON   DT.DTID = 
U.DocTip) INNER JOIN SasUlaz AS SU ON U.UlID = SU.UlID) INNER JOIN 
Skladista AS SK ON U.SklID = SK.SklID)  LEFT JOIN SasUlazNivel AS SUN ON 
SU.SasUlID = SUN.SasUlID
  WHERE NOT DT.FSCObrac AND NOT(SK.FSkipSCObr <> false) AND 
NOT(SK.FSkipNivel <> false) AND U.DocTip <> 31 AND U.DatDoc <= InDatDo

  UNION ALL
  SELECT 28 AS Poredak,U.UlID,U.DatDoc,False AS 
FKomis,DT.FSCObrac,DT.FNivelTot,
  DT.FNivelParc,U.DocTip,SU.SasUlID,SU.ElID,ROUND(CAST(SU.Kol AS 
NUMERIC),8),SU.NCena,ROUND(CAST(SU.TKol AS NUMERIC),8),

  SU.SCena,SU.SklID,0 AS Obradjen,0 AS IzmSC,0 AS ImaIzmena,

U.Link,SU.FCena,SU.Popust,SU.Marza,SU.MCena,SU.MPorez,SU.KKol,SU.PorezU,
  SUN.NNKol,SU.RCena,SUN.NTotal,SUN.NTotPP,SUN.NTotPU,0 AS 
NivelEdited,

  SUN.NNKol AS NivelOldKol,0 AS LastTKol,0 AS LastSCena,SK.RacTipSKL,
  U.DTStamp AS Tabela,U.DopSklID,DT.FPrenos,SK.FRemote
  FROM(((defDocTip AS DT INNER JOIN Ulaz AS U ON DT.DTID = 
U.DocTip) INNER JOIN SasUlaz AS SU ON U.UlID = SU.UlID) INNER JOIN 
Skladista AS SK ON U.SklID = SK.SklID)  LEFT JOIN SasUlazNivel AS SUN ON 
SU.SasUlID = SUN.SasUlID
  WHERE U.DatDoc <= InDatDo AND NOT(SK.FSkipSCObr <> false) AND 
NOT(SK.FSkipNivel <> false) AND U.DocTip = 31 AND((SK.RacTipSKL = 3 
AND(DT.FMP <> false)) OR(SK.RacTipSKL <> 3 AND(DT.FVP <> false)))
  UNION ALL SELECT 30 AS Poredak,0 AS UlID,O.DatDoc,False AS 
FKomis,False,False,False,2 AS DocTip,

  0 AS SasUlID,SO.ElID,-SUM(SO.Kol) AS Kol,0 AS NCena,0 AS TKol,
  0 AS SCena,SK.SklID,0 AS Obradjen,0 AS IzmSC,0 AS ImaIzmena,0 AS 
Link,
  0 AS FCena,0 AS Popust,0 AS Marza,0 AS MCena,0 AS MPorez,0 AS 
KKol,0 AS PorezU,
 

Re: [SQL] PERSISTANT PREPARE (another point of view)

2008-07-25 Thread Pavel Stehule
Hello

this is close to my implementation of stored procedures - it's not far from
your view on prepared statements. The result of any unbound select
is forwarded directly to the client - the syntax is a little bit
different, but the idea is the same


create procedure foo(params)
as $$
begin
  select a, b, c from tab1 -- unbound select
  select 
end;

and you can call it with the CALL statement. The current implementation of SRF
functions in plpgsql isn't efficient for big results - it needs a
materialised table in memory. Oracle's pipelined functions are better,
but we are limited by the one-thread architecture.

regards
Pavel Stehule

2008/7/25 Milan Oparnica [EMAIL PROTECTED]:
 Jeff Williams wrote:

 You can do this with cursors, but I'm not sure if you still get the
 query caching?


 I can do it with cursors, I don't get query cache but that is for now not an
 issue.

 Issue is performance.

 Although using cursors show better performance over using SET OF user
 defined data types, it still swallows huge amount of memory.

 Here are the results of a query (I post the query at the end just for sake
 of complexity example) running on database of 1.6 GB of data and around 5
 million of records:

                     Time     Approx RAM usage    Swapping
 PREPARE statement   5 min.   300 MB              None
 SP with SET OF      16 min.  800 MB              Heavy
 SP with cursor      9 min.   800 MB              Medium

 Environment:

 Run on Query Tool of pgAdmin III. It does no data modification.
 Number of clients: 1
 PostgreSQL version 8.3
 Processor P4 2.66 GHZ
 RAM memory  1 GB.

 There where no other active programs or demanding services running in the
 background.


 I presume adding more RAM would increase performance but we usually have 10
 to 20 clients working at a time. I guess that would put some load.

 Besides, fetching records from cursor type SP requires transactional
 approach in calling convention (BEGINstatment...END). At this moment I
 don't know if this can be done in our main report designer tool - Business
 Objects (former Crystal Reports).

 Best regards,

 Milan Oparnica


 Here is the text of SQL statement:

 Note: All fields in join are indexed and every where field is also indexed
 except boolean fields.

 CREATE FUNCTION aaTestCurs (refcursor, insklid integer, indatdo timestamp
 without time zone) RETURNS refcursor AS $$
 BEGIN
OPEN $1 FOR
SELECT 10 AS
 Poredak,U.UlID,U.DatDoc,DT.FKomis,DT.FSCObrac,DT.FNivelTot,DT.FNivelParc,
  U.DocTip,SU.SasUlID,SU.ElID,ROUND(CAST(SU.Kol AS NUMERIC),8) AS
 Kol,SU.NCena,ROUND(CAST(SU.TKol AS NUMERIC),8) AS TKol,
  SU.SCena,SU.SklID,0 AS Obradjen,0 AS IzmSC,0 AS ImaIzmena,
  U.Link,SU.FCena,SU.Popust,SU.Marza,SU.MCena,SU.MPorez,SU.KKol,
  SU.PorezU,SUN.NNKol,SUN.NPCena,SUN.NTotal,SUN.NTotPP,SUN.NTotPU,0 AS
 NivelEdited,
  SUN.NNKol AS NivelOldKol,0 AS LastTKol,0 AS LastSCena,SK.RacTipSKL,
  U.DTStamp AS Tabela,U.DopSklID,DT.FPrenos,SK.FRemote
  FROM(((defDocTip AS DT INNER JOIN Ulaz AS U ON DT.DTID = U.DocTip)
 INNER JOIN SasUlaz AS SU ON U.UlID = SU.UlID) INNER JOIN Skladista AS SK ON
 U.SklID = SK.SklID) LEFT JOIN SasUlazNivel AS SUN ON SU.SasUlID =
 SUN.SasUlID
  WHERE DT.FSCObrac AND NOT(SK.FSkipSCObr <> false) AND NOT(SK.FSkipNivel
 <> false) AND U.DatDoc <= InDatDo
  UNION ALL
  SELECT 20 AS
 Poredak,U.UlID,U.DatDoc,DT.FKomis,DT.FSCObrac,DT.FNivelTot,DT.FNivelParc,
  U.DocTip,SU.SasUlID,SU.ElID,ROUND(CAST(SU.Kol AS
 NUMERIC),8),SU.NCena,ROUND(CAST(SU.TKol AS NUMERIC),8),
  SU.SCena,SU.SklID,0 AS Obradjen,0 AS IzmSC,0 AS ImaIzmena,U.Link,

 SU.FCena,SU.Popust,SU.Marza,SU.MCena,SU.MPorez,SU.KKol,SU.PorezU,SUN.NNKol,
  SUN.NPCena,SUN.NTotal,SUN.NTotPP,SUN.NTotPU,0 AS NivelEdited,
  SUN.NNKol AS NivelOldKol,0 AS LastTKol,0 AS
 LastSCena,SK.RacTipSKL,(U.DTStamp -(20*case when DT.Rank <> 5 then -1
 else 0
 end)) AS Tabela,
  U.DopSklID,DT.FPrenos,SK.FRemote
  FROM(((defDocTip AS DT INNER JOIN Ulaz AS U ON   DT.DTID = U.DocTip)
 INNER JOIN SasUlaz AS SU ON U.UlID = SU.UlID) INNER JOIN Skladista AS SK ON
 U.SklID = SK.SklID)  LEFT JOIN SasUlazNivel AS SUN ON SU.SasUlID =
 SUN.SasUlID
  WHERE NOT DT.FSCObrac AND NOT(SK.FSkipSCObr <> false) AND
 NOT(SK.FSkipNivel <> false) AND U.DocTip <> 31 AND U.DatDoc <= InDatDo
  UNION ALL
  SELECT 28 AS Poredak,U.UlID,U.DatDoc,False AS
 FKomis,DT.FSCObrac,DT.FNivelTot,
  DT.FNivelParc,U.DocTip,SU.SasUlID,SU.ElID,ROUND(CAST(SU.Kol AS
 NUMERIC),8),SU.NCena,ROUND(CAST(SU.TKol AS NUMERIC),8),
  SU.SCena,SU.SklID,0 AS Obradjen,0 AS IzmSC,0 AS ImaIzmena,

 U.Link,SU.FCena,SU.Popust,SU.Marza,SU.MCena,SU.MPorez,SU.KKol,SU.PorezU,
  SUN.NNKol,SU.RCena,SUN.NTotal,SUN.NTotPP,SUN.NTotPU,0 AS NivelEdited,
  SUN.NNKol AS NivelOldKol,0 AS LastTKol,0 AS LastSCena,SK.RacTipSKL,
  U.DTStamp AS Tabela,U.DopSklID,DT.FPrenos,SK.FRemote
  FROM(((defDocTip AS DT INNER JOIN Ulaz AS U ON DT.DTID = U.DocTip)
 INNER JOIN SasUlaz AS SU 

Re: [SQL] PERSISTANT PREPARE (another point of view)

2008-07-25 Thread chester c young
1. I like the idea, because right now I'm having to track which prepared statement 
(sets) are in which session.  I'm using xcache to track this, but it's a little bit 
dangerous.  I could query the database first, but the whole idea is to take load 
off the db.

2. perhaps global could mean simply that the definition is global - if called 
in a session and it does not exist in the session, then the session prepares it 
first from the global def.  There would need to be a back reference in case the 
global def was changed or dropped.

3. I don't understand your statement about how prepared statements can be used as 
subqueries - do you mean something like

  select .. from tab1 t1 where t1.col1 in( exec prep1(..) )

or what exactly?



  



Re: [SQL] PERSISTANT PREPARE (another point of view)

2008-07-22 Thread Pavel Stehule
2008/7/20 Milan Oparnica [EMAIL PROTECTED]:
 Pavel wrote:


 try to write prototype and show advantages...

 Prototype of what - the implementation in Postgres, or just the efficiency of
 the PERSISTENT PREPARE idea?

really a prototype



 ...but I see some disadvantages
 too. Mainly you have to manage some shared memory space for stored
 plans. It's not an easy task - the MySQL developers can tell you. Implementation
 in PostgreSQL is a little bit difficult - a lot of structures that live in
 process memory would have to be moved to shared memory.


 Is it solved in MySQL or they've just tried ?

http://www.mysqlperformanceblog.com/2006/08/02/mysql-prepared-statements/

Do not forget to close prepared statements - Many memory leaks
reported in MySQL Server turned out to be prepare statements or
cursors which were forgotten to be closed. Watch Com_stmt_prepare and
Com_stmt_close to see if you're closing all prepared statements. In
newer versions you can also use prepared_stmt_count variable to track
number of open statements diretly. You can also adjust
max_prepared_stmt_count variable which limits how many statements can
be open at the same time to avoid overload.
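The close-what-you-prepare advice quoted above can be enforced mechanically on the client side. Here is a sketch using a Python context manager over a hypothetical connection API (the prepare/deallocate methods are stand-ins for sending PREPARE and DEALLOCATE):

```python
from contextlib import contextmanager

@contextmanager
def prepared(conn, name, sql_text):
    # Pair every PREPARE with a DEALLOCATE, even when the code using
    # the statement raises; this is what keeps counters such as
    # Com_stmt_prepare and Com_stmt_close balanced.
    conn.prepare(name, sql_text)
    try:
        yield name
    finally:
        conn.deallocate(name)
```

Usage: `with prepared(conn, "s1", "SELECT ...") as name: ...` guarantees the statement is released when the block exits.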


 We could have only the PREP STATEMENT definition stored in shared memory
 (probably something like stored procedures), and it could be run in local
 process memory. We could even assume that only fetching data would be used
 through PREP STATEMENTS at the start, and introduce data modification later. Is
 there some simplified PG algorithm we could use to understand the amount of
 work needed to introduce such a feature into PG?


there are some complications with portability - shared memory is slow
on Windows :( - but there probably isn't a problem saving a plan into shared
memory. The main difficulty is memory maintenance.

 This feature is nice, but the question is - who will write it?

 With a little help from the PG developers and good documentation, perhaps I could
 put some programmers from my team on this job. They are mostly C++
 programmers, but we have Delphi and Java if needed.

PostgreSQL is pure C code
http://wiki.postgresql.org/wiki/Development_information


 Actually this problem is solved from outside - with pooling.


 I'm very interested to learn more about this solution. Can you please send
 me details or some links where I could research it?

pgpool??



 Thank you for your reply Pavel.

regards
Pavel





Re: [SQL] PERSISTANT PREPARE (another point of view)

2008-07-22 Thread Scott Marlowe
On Tue, Jul 22, 2008 at 12:43 AM, Pavel Stehule [EMAIL PROTECTED] wrote:
 2008/7/20 Milan Oparnica [EMAIL PROTECTED]:
 Is it solved in MySQL or they've just tried ?

 http://www.mysqlperformanceblog.com/2006/08/02/mysql-prepared-statements/

Wow, the discussion at the bottom of that page made me really think.
In MySQL you rely on the statement cache to provide data really fast
without worrying too much about transactional semantics.  In
PostgreSQL you set up a set of 1 or more slony machines to act as
cache / increase parallel performance.  Or just throw more CPU and
memory at it along with memcached.  Or both.



Re: [SQL] PERSISTANT PREPARE (another point of view)

2008-07-22 Thread Jeff Williams
On 20/07/08 22:16, Milan Oparnica wrote:
 Try to write following simple scenario:

 a. Data is retrieved from two tables in INNER JOIN
 b. I don't need all fields, but just some of them from both tables

 Lets call tables Customers and Orders.

 Definition of tables are:
 Customers (CustomID INTEGER, Name TEXT(50), Adress TEXT(100))
 Orders (OrderID INTEGER, CustomID INTEGER, OrderNum TEXT(10))

 Now I need a list of order numbers for some customer:

 SELECT C.CustomID, C.Name, O.OrderNum
 FROM Customers C INNER JOIN Orders  O ON C.CustomID=O.CustomID
 WHERE C.Name LIKE 'some input parameter'

You can do this with cursors, but I'm not sure if you still get the
query caching?

CREATE FUNCTION test(refcursor, input varchar) RETURNS refcursor AS $$
BEGIN
OPEN $1 FOR SELECT C.CustomID, C.Name, O.OrderNum
 FROM Customers C INNER JOIN Orders  O ON C.CustomID=O.CustomID
 WHERE C.Name LIKE '%' || input || '%';
RETURN $1;
END
$$ LANGUAGE plpgsql;

Then to use:

BEGIN;
SELECT test('curs', 'some input parameter');
FETCH ALL FROM curs;
END;

Jeff



Re: [SQL] PERSISTANT PREPARE (another point of view)

2008-07-21 Thread Milan Oparnica
I found this link from IBM DB2 developers showing why PERSISTENT PREPARE 
is a good idea and how it could be implemented.


http://www.hoadb2ug.org/Docs/Favero20606.pdf

It seems that the main benefit (besides efficiency) is memory:
having a number of connections, each with dozens of PREPARED statements,
consumes a lot of memory.


If we put these statements in a global space (perhaps not even in RAM 
until needed) we could reuse optimization plans as well as the 
physical space needed to store them.


I also found articles:

http://archives.postgresql.org/pgsql-hackers/2008-04/msg00867.php

and

http://archives.postgresql.org/pgsql-performance/2008-03/msg00480.php

all about users trying to explain the need for PERSISTENT PREPARE (some 
refer to it as global prepare).



There are also some guys who actually made some code for PERSISTENT PREPARE:

http://archives.postgresql.org/pgsql-hackers/2008-03/msg01228.php
and
http://archives.postgresql.org/pgsql-hackers/2008-03/msg01219.php


PEOPLE NEED THIS FEATURE !!! It is not reinventing the wheel, but it will 
simplify DB programming and even gain some performance.


NONE OF THE POPULAR SQL DBMSs (Oracle, MS SQL, MySQL, Postgre, INTERBASE, 
FIREBIRD) HAS THIS FEATURE.


WHY ?



Re: [SQL] PERSISTANT PREPARE (another point of view)

2008-07-21 Thread Craig Ringer

Milan Oparnica wrote:
I found this link from IBM DB2 developers showing why PERSISTENT PREPARE 
is a good idea and how it could be implemented.


[snip]

NONE OF THE POPULAR SQL DBMSs (Oracle, MS SQL, MySQL, Postgre, INTERBASE, 
FIREBIRD) HAS THIS FEATURE.


WHY ?


I suspect that people tend to use SQL or PL/PgSQL stored procedures 
instead. I'm not 100% sure SQL functions cache their query plans, but I 
know PL/PgSQL does.


Exactly what is gained by the use of persistent prepare over the use of 
a stored procedure?


What would the interface to the feature be through database access 
drivers like JDBC? Explicit PREPARE GLOBAL or similar, then invocation 
with EXECUTE ?
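[Editor's note: purely as an illustration of the question above, such an interface might look like the sketch below. None of this syntax exists in PostgreSQL; PREPARE GLOBAL and the statement name are hypothetical.]

```sql
-- Hypothetical syntax: a statement prepared once, visible to all sessions.
PREPARE GLOBAL orders_for_customer(text) AS
  SELECT C.CustomID, C.Name, O.OrderNum
  FROM Customers C JOIN Orders O ON C.CustomID = O.CustomID
  WHERE C.Name LIKE $1;

-- Any session could then invoke it without preparing it first:
EXECUTE orders_for_customer('Smith%');
```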


How would users using increasingly common layers like Hibernate/JPA use it?

I'm also curious about how you'd address the possible need for periodic 
re-planning as the data changes, though AFAIK SQL functions suffer from 
the same limitation there.


I guess I personally just don't understand what the point of the 
persistent prepare feature you describe is. However, this post that you 
linked to:


http://archives.postgresql.org/pgsql-hackers/2008-04/msg00867.php

actually describes a query plan cache, rather than persistent prepare. 
The post assumes the app will explicitly manage the cache, which I'm not 
sure is a good idea, but I can see the point of a plan cache. There 
might be some heuristics Pg could use to decide what to cache and to 
evict (planner time cost vs memory use, frequency of use, etc) so the 
app doesn't have to know or care about the plan cache.  However, I'm not 
too sure how you'd match an incoming query to a cached plan, and 
determine that the plan was still valid, with enough speed to really 
benefit from the plan cache. Then again, I don't know much about Pg's 
innards, so that doesn't mean much.


Tom Lane responded to that post to point out some of the complexities:

http://archives.postgresql.org/pgsql-hackers/2008-04/msg00868.php

--
Craig Ringer



Re: [SQL] PERSISTANT PREPARE (another point of view)

2008-07-21 Thread Craig Ringer

Earlier, I wrote:

Exactly what is gained by the use of persistent prepare over the use of 
a stored procedure?


Er, ENOCOFFEE. Sorry.

The benefit is obvious with the use of global prepared statements at the 
wire protocol level rather than via SQL EXECUTE. It's a lot more 
efficient than EXECUTE or SELECT function(params).


It's pretty clear that there are some tricky aspects though, what with 
schema search paths, role privileges, etc.


--
Craig Ringer



Re: [SQL] PERSISTANT PREPARE (another point of view)

2008-07-20 Thread Milan Oparnica

Pavel wrote:



try to write a prototype and show the advantages... 


Prototype of what: an implementation in Postgre, or just the efficiency of 
the PERSISTENT PREPARE idea?



...but I see some disadvantages
too. Mainly you have to manage some shared memory space for stored
plans. It's not an easy task - MySQL developers can tell you. Implementation
on PostgreSQL is a little bit difficult - lots of structures that live in
process memory would have to be moved to shared memory.



Is it solved in MySQL, or have they just tried?

We could have only the PREP STATEMENT definition stored in shared memory 
(probably something like stored procedures), and it could be run in 
local process memory. We could even assume that only data fetching would be 
done through PREP STATEMENTS at first, and introduce data 
modification later. Is there some simplified PG algorithm we could use to 
understand the amount of work needed to introduce such a feature to PG?


This feature is nice, but the question is - who will write it? 


With a little help from PG developers and good documentation, perhaps I 
could put some programmers from my team on this job. They are mostly C++ 
programmers, but we have Delphi and Java if needed.



Actually this problem is solved from outside - with pooling.



I'm very interested to learn more about this solution. Can you please 
send me details or some links where I could research it?



Thank you for your reply Pavel.



Re: [SQL] PERSISTANT PREPARE (another point of view)

2008-07-20 Thread Milan Oparnica

Richard Huxton wrote:

Milan Oparnica wrote:

It's simply too complicated to return recordsets through 
server-side stored procedures. They are obviously designed to do 
complex data manipulation ...


 Richard wrote:
I'm not convinced it's always a win one way or another.
You still haven't said what's too complicated about defining a 
function:


CREATE FUNCTION users_at_dotcom(text) RETURNS SETOF users AS $$
  SELECT * FROM users WHERE email LIKE '%@' || $1 || '.com';
$$ LANGUAGE SQL;
  Richard Huxton
  Archonet Ltd
---
Hi Richard,

It sounds like you suggest using neither PREPARED statements nor stored 
procedures to fetch data. What do you think is the best way?


The example you posted is the only situation where it's simple to use 
stored procedures to fetch data.



Try to write the following simple scenario:

a. Data is retrieved from two tables in an INNER JOIN
b. I don't need all fields, just some of them from both tables

Let's call the tables Customers and Orders.

The table definitions are:
Customers (CustomID INTEGER, Name TEXT(50), Adress TEXT(100))
Orders (OrderID INTEGER, CustomID INTEGER, OrderNum TEXT(10))

Now I need a list of order numbers for some customer:

SELECT C.CustomID, C.Name, O.OrderNum
FROM Customers C INNER JOIN Orders  O ON C.CustomID=O.CustomID
WHERE C.Name LIKE some input parameter

Can you write this without defining a SETOF custom data type?
--
NOTE! THIS IS VERY SIMPLIFIED REPRESENTATION OF REAL-LIFE STRATEGY.
--
We sometimes have JOINS up to 10 tables.
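[Editor's note: for reference, one hedged sketch of how this particular join could be returned without declaring a named composite type is OUT parameters (PostgreSQL 8.1+). The function name is invented, and the output column list still has to be spelled out by hand, which is arguably the very overhead being complained about.]

```sql
CREATE FUNCTION customer_orders(pattern text,
                                OUT CustomID integer,
                                OUT Name text,
                                OUT OrderNum text)
RETURNS SETOF record AS $$
  -- $1 is the 'pattern' input parameter; the SELECT list matches the OUT columns.
  SELECT C.CustomID, C.Name, O.OrderNum
  FROM Customers C INNER JOIN Orders O ON C.CustomID = O.CustomID
  WHERE C.Name LIKE $1;
$$ LANGUAGE sql;

SELECT * FROM customer_orders('%Smith%');
```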

Besides, using report engines (like Crystal Reports) forces you to avoid 
queries where the column order of the recordset can change. If you built a 
report on a query having CustomID,Name,OrderNum columns, adding a column 
(CustomID,Name,Adress,OrderNum) will require recompiling the report if 
you want it to give correct results.


That's one of the reasons we avoid SELECT * statements. Another is 
that some user roles do not have permission to examine table 
structures; in such cases SELECT * returns an error.


I hope I managed to present what I meant by too complicated when using 
stored procedures to fetch data.


PREPARED statements do not suffer from such overhead. They simply return 
records as if the statement was prepared in the client.


I will repeat: it took 5 minutes for the prepared statement to return the 
results of the same SQL that took the stored procedure 16 minutes. The SP 
was written to return a SETOF user type. If you want, I'll send 
you the exact SQL and the database. Later we tested other queries, and 
prepared statements always performed better than stored 
procedures with SETOF user-defined types.


Best regards,

Milan Oparnica




Re: [SQL] PERSISTANT PREPARE (another point of view)

2008-07-17 Thread Richard Huxton

Milan Oparnica wrote:

Milan Oparnica wrote:


It's simply too complicated to return recordsets through server-side 
stored procedures. They are obviously designed to do complex data 
manipulation, returning a few output variables informing the caller 
about final results. Returning records through sets of 
user-defined types is a memory and performance waste (please see my 
previous post, in reply to Steve, for more details).


I'm not convinced it's always a win one way or another.

 Plus it's hard to
maintain and improve such a system. I hate to see 800 
user types, one made for every query we turned into a stored procedure.


Is this topic completely out of scope in Postgre ?
If I'm missing something too obvious or too important, please let me 
know what.


You still haven't said what's too complicated about defining a function:

CREATE FUNCTION users_at_dotcom(text) RETURNS SETOF users AS $$
  SELECT * FROM users WHERE email LIKE '%@' || $1 || '.com';
$$ LANGUAGE SQL;

SELECT * FROM users_at_dotcom('archonet');
 uid |  uname  |       email
-----+---------+---------------------
   1 | richard | [EMAIL PROTECTED]


--
  Richard Huxton
  Archonet Ltd



Re: [SQL] PERSISTANT PREPARE (another point of view)

2008-07-16 Thread Milan Oparnica

Milan Oparnica wrote:


It's simply too complicated to return recordsets through server-side 
stored procedures. They are obviously designed to do complex data 
manipulation, returning a few output variables informing the caller about 
final results. Returning records through sets of user-defined types is a 
memory and performance waste (please see my previous post, in reply to 
Steve, for more details). Plus it's hard to maintain and improve such a 
system. I hate to see 800 user types, one made for every query we turned 
into a stored procedure.


Is this topic completely out of scope in Postgre ?
If I'm missing something too obvious or too important, please let me 
know what.


I've searched the internet and the Postgre documentation over and over and 
still found nothing.


Is there a better place to communicate with Postgre developers?

Sincerely,

Milan Oparnica



Re: [SQL] PERSISTANT PREPARE (another point of view)

2008-07-13 Thread Milan Oparnica

[snip]
 What's wrong with using complex views, stored procedures, functions and
 maybe even custom data types to accomplish what you want here?

Hi Steve,

Correct me if I'm wrong, but views cannot accept parameters, and stored 
procedures require defining sets of custom data types or some other 
complicated mechanism simply to accept some parameters and return 
some recordsets useful to the calling application.


In 90% of cases we simply call the same SQL statements with slightly 
different parameters and then pass the results to the business logic of 
the program. SPs are fine; they allow complex looping, nesting, variable 
declarations, etc., but 90% of the time we just need a SQL statement with 
a few parameters and a resulting recordset.


Just try to return SELECT * FROM table WHERE field LIKE condition 
using an SP!
You need to define a SETOF table type and then populate the result 
into this set, consuming both time and memory (the Postgre 
documentation says so).

You can't write it as a view because it has a condition as a parameter.
What's the best solution, bearing in mind that this query will be called 
hundreds of times by all clients connected to the DB?
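[Editor's note: for comparison, the prepared-statement version of that example needs no type declarations at all. Table and column names below are placeholders from the text.]

```sql
-- Prepared once per session today; the proposal would make the definition global.
PREPARE find_rows(text) AS
  SELECT * FROM some_table WHERE some_field LIKE $1;

EXECUTE find_rows('%abc%');
DEALLOCATE find_rows;
```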


Moreover, I've run exactly the same SQL statement as a PREPARED statement 
and as a stored procedure returning SETOF CUSTOM TYPE. It contained 
several union queries over a database of 6 million records, returning 
a recordset of 1.5 million records. It took 5 minutes for the prepared 
statement to return the results vs. 16 minutes for the SP. 
Memory consumption for the prepared statement was around 300 MB, while 
the SP took over 800 MB.


It could be that there is a more efficient way to return recordsets from 
SPs that I do not know of, but I couldn't find it in the Postgre documentation.


Besides, table definitions tend to change over time. How often do you 
add columns to a table? In that case the custom data types must be upgraded, 
as well as EVERY stored procedure using those types. With prepared 
statements, you upgrade only the ones you need to. This is extremely 
useful for building and improving reports.


[*** not important, just a presentation of something we found useful in 
other engines ***]


There is an interesting concept in the JET engine (with all the deficiencies 
of the engine itself): it allows building PREPARED statements on top of other 
PREPARED statements, automatically passing parameters by name to the 
underlying PREPARED statement. This concept allows a 
base SQL statement that prepares data for a profit/loss report, which can 
then be used in another SQL statement by adding joins to some specific tables 
(sectors and employees, for instance). That way, within minutes, you can 
produce a profit/loss by sector, by employee, or whatever.


And here is the magic part: if we introduce a new type of document to our 
program that influences profit/loss, we simply change the BASE 
prepared statement, and ALL reports based on it get upgraded 
automatically.


Best regards,
Milan Oparnica




[SQL] PERSISTANT PREPARE (another point of view)

2008-07-11 Thread Milan Oparnica

Hi,

We are new to Postgre; actually, we are migrating from MICROSOFT DBMS 
technologies to... hopefully Postgre.
Our company is involved in ERP business software in Serbia and the region, 
currently counting over 200 clients. Some of them have DBs over 4 GB in size.

The reason for posting is the implementation of PREPARE statements.
I've read the thread PREPARE and stuff (PFC, pgsql-performance, 2007-06) 
and I do agree that it would not in itself bring performance gains.

What we could gain by introducing a kind of global prepared statement area is 
SIMPLICITY of DB DEVELOPMENT AND MAINTENANCE.

Here is our point of view:

We have an application layer running over a DB layer. The application layer 
consists of classes and interfaces, and the DB layer contains data and various 
data manipulation structures.
The application layer calls SQL statements expecting some datasets as results 
(an inventory list, for instance). What it doesn't care about is HOW the query 
is built (sorting, conditions, etc.) as long as it returns the EXPECTED columns. 
The application simply calls EXECUTE statement (param1, param2, ...). Developers 
working on the application layer do not interfere with developers working on the 
DB and queries. Plus MOST queries can be written to be reusable in various 
situations!!!

The idea is: LET'S SEPARATE SQL STATEMENTS FROM APPLICATION CODE.

This way, we can fine-tune for each of our clients without having to 
recompile our application. We can also improve query performance and 
complexity without recompiling the application layer.

Since one company has one set of rules, the same PREPARED statements apply to 
every client connected to that database.

Now, instead of preparing statements on each connection request (and we use 
around 900 prepared statements), why couldn't we simply prepare these 
statements ONCE and keep them in some global storage for everyday future use?

We use this approach for forms & reports creation, where the Crystal Reports 
engine creates an outlined report based on a static prepared statement.

This approach is probably not suitable for large DB systems with tons of data, 
but it is very efficient in 90% of small and medium-sized business databases.

Please consider this issue when planning your WISH LIST or, hopefully, 
to-do task list.

I'll be glad to hear comments on this topic as well.

Milan Oparnica
MELANY SOFTWARE TEAM


Re: [SQL] PERSISTANT PREPARE (another point of view)

2008-07-11 Thread Tom Lane
Milan Oparnica [EMAIL PROTECTED] writes:
 [ much snipped ]
 What we could gain by introducing a kind of global prepared statement area 
 is SIMPLICITY of DB DEVELOPMENT AND MAINTENANCE.
 The idea is: LET'S SEPARATE SQL STATEMENTS FROM APPLICATION CODE.

Most people around this project think that the best way to do that is to
push as much logic as you can into server-side stored procedures.  That
gives you every advantage that a persistent-prepared-statement feature
would offer, and more besides:

* you can push procedural logic, as well as SQL, out of the application

* you can improve performance by reducing the number of network round
trips needed to accomplish a multi-SQL-statement task
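[Editor's note: a rough sketch of that second point, assuming hypothetical tables: a single PL/pgSQL call bundles several statements into one round trip.]

```sql
-- Table names (Orders, PendingOrders) are invented for illustration.
CREATE FUNCTION add_order(cust integer, ordnum text) RETURNS void AS $$
BEGIN
  -- Two statements, but one network round trip from the client's view.
  INSERT INTO Orders (CustomID, OrderNum) VALUES (cust, ordnum);
  DELETE FROM PendingOrders WHERE CustomID = cust AND OrderNum = ordnum;
END;
$$ LANGUAGE plpgsql;
```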

regards, tom lane



Re: [SQL] PERSISTANT PREPARE (another point of view)

2008-07-11 Thread Steve Midgley

At 04:31 PM 7/11/2008, [EMAIL PROTECTED] wrote:

Date: Fri, 11 Jul 2008 23:31:03 +
From: Milan Oparnica [EMAIL PROTECTED]
To: pgsql-sql@postgresql.org
Subject: PERSISTANT PREPARE (another point of view)
Message-ID: [EMAIL PROTECTED]
[snip]
What we could gain by introducing a kind of global prepared statement 
area is SIMPLICITY of DB DEVELOPMENT AND MAINTENANCE.


Here is our point of view:
[snip]
Now, instead of preparing statements on each connection request (and 
we use around 900 prepared statements), why couldn't we simply prepare 
these statements ONCE and keep them in some global storage for future 
everyday usage.


Hi,

What's wrong with using complex views, stored procedures, functions and 
maybe even custom data types to accomplish what you want here? It seems 
like you could build a lot of prepared statements using these tools, 
providing your application layer developers with a consistent set of 
interfaces to obtain data that are not tied to the data tables 
themselves. And allowing them to insert/update/manage tables via 
structured interfaces as well.


Am I missing something?

Best,

Steve

