Oooo Oooo Ooooo!  EXCELLENT info, Cary!  And I just got a message that a
table had reached max extents -- 249 because we had 8K blocks.  :)

As a side note, while we don't strictly regulate max extents, it's nice
to cap them somewhere, so that when a rogue user tries to dump a whole
year's worth of data into a table (instead of the month's worth per the
design), there's a little extra safeguard that the TS won't fill up --
unless you're using autoextend.  But that thread's been covered
already...  :)

Thanks, Cary!
Rich Jesse                           System/Database Administrator
[EMAIL PROTECTED]              Quad/Tech International, Sussex, WI USA


> -----Original Message-----
> From: Cary Millsap [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, September 10, 2002 5:28 PM
> To: Multiple recipients of list ORACLE-L
> Subject: RE: OT: Misinformation Ranting
> 
> 
> Just for fun, a little historical perspective: Does anyone remember
> exactly why the number of extents *did* matter at one point 
> in history?
> Did it ever really?
> 
> * * *
> 
> The answer is that yes, it did matter for a while, but not for the
> reasons that most people believed, and not at all for most types of
> applications. Inserting, updating, deleting, and querying has *never*
> been appreciably slower for multi-extent tables or indexes than for
> single-extent ones. But for DROP statements, dictionary-managed
> response time is proportional to the square of the number of extents
> (minutes for a few thousand extents, even on fast hardware). For
> locally managed tablespaces, response time is *much* better,
> proportional only to the number of extents (less than a second for
> tens of thousands of extents, even on slow hardware).
> 
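[Editor's note: the shape of those two curves can be sketched with a toy
cost model. The constants below are invented purely for illustration;
only the O(n^2)-vs-O(n) growth reflects what Cary describes.]

```python
# Toy cost model (not Oracle internals): dictionary-managed DROPs do
# work that grows with the square of the extent count, while locally
# managed tablespaces do roughly constant work per extent (a bitmap
# update). The per-unit costs are made-up illustrative numbers.

def dmt_drop_cost(extents, per_pair_us=1):
    # O(n^2): total work grows with extents * extents
    return extents * extents * per_pair_us / 1_000_000  # seconds

def lmt_drop_cost(extents, per_extent_us=10):
    # O(n): one fixed unit of work per extent
    return extents * per_extent_us / 1_000_000  # seconds

for n in (100, 1_000, 10_000):
    print(n, dmt_drop_cost(n), lmt_drop_cost(n))
```

With these made-up constants, 10,000 extents costs 100 seconds under
the quadratic model but a tenth of a second under the linear one --
"minutes on fast hardware" versus "less than a second on slow
hardware", as described above.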
> Another problem was a bug in how Oracle reused data blocks in
> clusters. "Clusters?! We don't use clusters!" Sure you do. Oracle
> stores FET$ and TS$ in a cluster called C_TS#.
> 
> If you insert more than about 70 FET$ rows in a 2KB C_TS# cluster
> block, then the cluster will chain (allocate a new block, and link
> to it). That's no problem. The problem is that, once upon a time,
> there was an Oracle bug that prevented good reuse of these blocks if
> you deleted rows and then reinserted. For example, if you inserted
> 700 rows with TS#=7 into FET$, then you'd drive the allocation of
> about 10 blocks to C_TS#. Now, if you delete all 700 of those rows
> and insert a new row, guess how many LIOs it would take to query
> that new row? Nope, not 1. Yes, 10.
> 
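[Editor's note: the arithmetic in that example can be sketched as a toy
model. The ~70-rows-per-2KB-block figure comes from the text above; the
helper names are the editor's, not Oracle's.]

```python
# Toy model of the pre-6.0.36 C_TS# reuse behavior described above:
# FET$ rows for one tablespace chain across cluster blocks at roughly
# 70 rows per 2KB block, and after a mass delete the now-empty chained
# blocks were still walked on a lookup.

ROWS_PER_BLOCK = 70  # approximate FET$ rows that fit in a 2KB block

def blocks_needed(rows, rows_per_block=ROWS_PER_BLOCK):
    # Ceiling division: how many cluster blocks the chain grows to
    return -(-rows // rows_per_block)

# 700 FET$ rows with TS#=7 drive ~10 blocks onto the cluster chain.
chain_len = blocks_needed(700)

# Delete all 700 rows and insert 1 new row: because the empty blocks
# stayed on the chain, a query still read every block -- 10 LIOs, not 1.
lios_for_one_row = chain_len
print(chain_len, lios_for_one_row)
```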
> The symptom? If you ever let a table get thousands of extents in it
> and then tried to drop and recreate it, both the drop and the
> recreate would be really sloooow. The DROP would be slow because
> dictionary-managed DROPs are O(n^2). The recreate would be slow
> because querying FET$ for freespace information during the CREATE
> statement was doing far more work than it should have needed to.
> This bug was fixed in Oracle 6.0.36. But the myth lives on through
> the magic of authors who either (a) assume that it's safe to
> generalize from the results of one observation, or (b) believe that
> the benefits of sounding authoritative exceed the costs of
> propagating incorrect information to thousands of buying believers.
> 
> "Any widely held myth can outlast a collection of mere facts."
>       --John H. White, Jr.
> 
> 
> Cary Millsap
> Hotsos Enterprises, Ltd.
> http://www.hotsos.com
-- 
Please see the official ORACLE-L FAQ: http://www.orafaq.com
-- 
Author: Jesse, Rich
  INET: [EMAIL PROTECTED]
