Re: RE: Too many db calls

2002-11-18 Thread dgoulet
Cary,

This is one topic on which I'll disagree with you.  Assume an application that
uses the database but runs on a machine outside the db server.  Having a number
of calls that each return one or two rows will have a negative network impact,
a result of SQL*Net and its inefficiencies.  It is better in this case to
encapsulate all of the database interaction into a package where bind variables
are used to return the desired results.  Using DBMS_SQL is a really BAD
thing to do for stuff like that.  Oh, I do think that using DBMS_SQL is a
whole lot easier, for some things at least, than Pro*C's prepare, declare, open,
fetch, and close, especially if you have to use that unwieldy SQLDA.  Lastly, I
am not a proponent of having the application merge result sets.  Most times the
merged result is smaller in size than the sum of the source sets, so shipping
the unmerged sets gives your network one heck of a headache.

BTW: I don't evaluate applications by their BCHR, but by their response
time.  Hit the return key: if I get an answer back in 10 seconds from the
original and 5 seconds from the revised, something was done right.

Dick Goulet

Reply Separator
Author: Cary Millsap [EMAIL PROTECTED]
Date:   11/16/2002 1:49 AM

Greg,

That's one case. PL/SQL is a really poor language in which to write an
application. The language tricks you into believing that writing a
scalable application can be accomplished in just a few lines of 4GL
code, but it's really not true. To write scalable PL/SQL, you need to
use DBMS_SQL. The resulting code is even more cumbersome than the same
function written in Pro*C.

Any language can be abused, though. We see a lot of Java, Visual Basic,
and Powerbuilder applications that do stuff like...

1. Parse inside loops, using literals instead of bind variables.
2. Parse *twice* for each execute by doing describe+parse+execute.
3. Manipulate one row at a time instead of using array processing
capabilities on fetches or inserts (this one, ironically, raises a
system's BCHR while it kills response time).
4. Join result sets in the application instead of in the database.
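Patterns 1 and 3 can be shown side by side in a few lines. This is an
illustrative sketch using Python's stdlib sqlite3 rather than Oracle, with a
made-up table; the idea carries over to OCI, JDBC, or Pro*C: prepare one
statement with bind placeholders, then execute it with an array of binds.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER PRIMARY KEY, ename TEXT)")

rows = [(i, "name%d" % i) for i in range(1000)]

# Anti-pattern: a fresh literal statement per row forces a parse per execute.
for empno, ename in rows[:3]:
    conn.execute("INSERT INTO emp VALUES (%d, '%s')" % (empno, ename))

conn.execute("DELETE FROM emp")

# Better: one parameterized statement, executed with an array of binds.
conn.executemany("INSERT INTO emp VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM emp").fetchone()[0]
print(count)  # → 1000
```

The literal-per-row loop also invites SQL injection; binds fix both problems
at once.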


Cary Millsap
Hotsos Enterprises, Ltd.
http://www.hotsos.com

Upcoming events:
- Hotsos Clinic, Dec 9-11 Honolulu
- 2003 Hotsos Symposium on Oracle® System Performance, Feb 9-12 Dallas
- Jonathan Lewis' Optimising Oracle, Nov 19-21 Dallas


-Original Message-
Sent: Saturday, November 16, 2002 2:38 AM
To: Multiple recipients of list ORACLE-L

Cary,

Thank you.

Could you elaborate on the issue of excessive database calls, which show up
as excessive network traffic?

I can picture a PL/SQL loop, which executes an SQL statement over and over
again.  This would produce many database calls, and it might be possible to
remove the loop altogether, replacing it with a single SQL statement.  This
would reduce the database calls.
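The rewrite described here — collapsing a row-at-a-time loop into one
set-based statement — can be sketched like this (an illustrative sketch
against Python's stdlib sqlite3, not Oracle; the orders table and statuses
are invented for the example):

```python
import sqlite3

def make_db():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)",
                     [(i, "OPEN" if i % 2 else "CLOSED") for i in range(100)])
    return conn

# Row-at-a-time: one UPDATE call per qualifying row (many database calls).
conn = make_db()
ids = [r[0] for r in conn.execute("SELECT id FROM orders WHERE status='OPEN'")]
for oid in ids:
    conn.execute("UPDATE orders SET status='SHIPPED' WHERE id=?", (oid,))

# Set-based: the same result in a single database call.
conn2 = make_db()
conn2.execute("UPDATE orders SET status='SHIPPED' WHERE status='OPEN'")

n1 = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE status='SHIPPED'").fetchone()[0]
n2 = conn2.execute(
    "SELECT COUNT(*) FROM orders WHERE status='SHIPPED'").fetchone()[0]
print(n1, n2)  # → 50 50
```

Both versions ship the same rows; the loop just pays one call (and, across a
network, potentially one round trip) per row to do it.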

Is this the classic type of situation that produces too many db calls?  Or
are there other situations I'm missing that are more likely to be the source
of this problem?

Thanks again.



- Original Message -
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Sent: Friday, November 15, 2002 4:13 PM


 Greg,

 I believe that the cultural root cause of the excessive LIO problem is
 the conception that physical I/O is what makes databases slow. Disk I/O
 certainly *can* make a system slow, but in about 598 of 600 cases we've
 seen in the past three years, it hasn't. [Why you should focus on LIOs
 instead of PIOs at www.hotsos.com/catalog]

 The fixation on PIO of course focuses people's attention on the database
 buffer cache hit ratio (BCHR) metric for evaluating efficiency. The
 problem is that the BCHR is a metric of INSTANCE efficiency, not SQL
 efficiency. However, many people mistakenly apply it as a metric of SQL
 efficiency anyway.

 Of course, if one's radar equates SQL efficiency with the BCHR's
 proximity to 100%, then a lot of really bad SQL is going to show up on
 your radar wrongly identified as really good SQL. [Why a 99% buffer
 cache hit ratio is not okay at www.hotsos.com/catalog]

 One classic result is that people go on search and destroy missions
 for all full-table scans. They end up producing more execution plans
 that look like this than they should have:

   NESTED LOOPS
 TABLE ACCESS BY INDEX ROWID
   INDEX RANGE SCAN
 TABLE ACCESS BY INDEX ROWID
   INDEX RANGE SCAN

 This kind of plan produces great hit ratios because it tends to revisit
 the same small set of blocks over and over again. This kind of plan is
 of course appropriate in many cases. But sometimes it is actually less
 work in the database to use full-table scans. [When to use an index at
 www.hotsos.com/catalog.]
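The same trade-off can be seen in miniature with any cost-based optimizer.
Here is a sketch using SQLite's EXPLAIN QUERY PLAN via Python's stdlib,
purely as an illustration (Oracle's plan output and costing differ; table
and index names are made up): a selective predicate gets the index search,
while an unselective query is answered with a full scan.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, payload TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(i, "x" * 20) for i in range(10000)])
conn.execute("CREATE INDEX t_id ON t(id)")
conn.execute("ANALYZE")  # gather optimizer statistics

def plan(sql):
    # EXPLAIN QUERY PLAN rows describe each step in their last column.
    return " ".join(r[-1] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

print(plan("SELECT * FROM t WHERE id = 42"))  # index search on t_id
print(plan("SELECT * FROM t"))                # full-table scan
```

Killing the full scan here would be pointless: when every row qualifies, the
index adds work instead of removing it.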


 Cary Millsap
 Hotsos Enterprises, Ltd.
 http://www.hotsos.com

 Upcoming events:
 - Hotsos Clinic, Dec 9-11 Honolulu
 - 2003 Hotsos Symposium on Oracle® System Performance, Feb 9-12 Dallas
 - Jonathan Lewis' Optimising Oracle, Nov 19-21 Dallas


 

Re: RE: Too many db calls

2002-11-18 Thread Anjo Kolk

One thing to remember is that not every user call becomes a network
interaction. It is actually far from that. For example:

open, parse, bind, define, execute - that is five user calls but one SQL*Net
round trip (AKA bundled or deferred calls).

Anjo.


On Monday 18 November 2002 03:33, you wrote:
 [snip]