Michael, thanks for responding to my question.
I understand that the possible solution would be to go with partitioning.
Already started looking into that :)
Thanks again,
-Jibo
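Since the thread settles on partitioning as the way forward, here is a minimal sketch of what a range-partitioned version of such a table could look like. This is an illustration only: the table name (search_data), the column names, and the choice of a date partition key are all hypothetical, not taken from the thread.

```sql
-- Hypothetical sketch: range-partition the growing table by date so
-- searches and maintenance can be confined to recent partitions.
-- All names here are illustrative.
CREATE TABLE search_data (
  id           NUMBER,
  created_date DATE,
  title        VARCHAR2(200),
  body         VARCHAR2(4000)
)
PARTITION BY RANGE (created_date) (
  PARTITION p2002 VALUES LESS THAN (TO_DATE('01-01-2003','DD-MM-YYYY')),
  PARTITION p2003 VALUES LESS THAN (TO_DATE('01-01-2004','DD-MM-YYYY')),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);
```

Queries that filter on the partition key can then prune to a single partition instead of scanning the whole table.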
Sounds like you're hitting the scalability barrier for context searches,
whatever that may be. I am assuming you've
At 11:26 AM 12/18/2002 -0800, Jibo John wrote:
Hello DBAs,
I am currently involved in improving the search performance for a tool
which queries a table having a million records (and the table is growing
at a rate of 3000 records per day).
So it sounds like, in the next year, it will be at least 2 million records.
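Michael's one-year figure checks out against the growth rate Jibo quotes; as arithmetic only:

```sql
-- 1,000,000 rows today plus 3,000 rows/day for a year (figures from
-- the thread): 1,000,000 + 3,000 * 365 = 2,095,000 rows.
SELECT 1000000 + 3000 * 365 AS rows_after_one_year FROM dual;
```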
Hello DBAs,
I am currently involved in improving the search performance for a tool
which queries a table having a million records (and the table is growing at
a rate of 3000 records per day).
Thought of introducing Intermedia search for 4 columns in the search table.
Created CONTEXT indexes to
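For reference, creating a CONTEXT index and querying it with CONTAINS looks roughly like this; the table and column names are placeholders, since the post does not name them.

```sql
-- Hypothetical sketch: an Oracle Text (interMedia) CONTEXT index on
-- one of the four searched columns, queried via CONTAINS.
CREATE INDEX search_title_ctx ON search_data (title)
  INDEXTYPE IS CTXSYS.CONTEXT;

SELECT id
  FROM search_data
 WHERE CONTAINS(title, 'oracle AND tuning', 1) > 0;
```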
You're reading 98506 blocks (770 MB if db_block_size=8K, or 385 MB if
db_block_size=4K) for a result set of 0 rows.
Happily all your needed data is in memory.
You're sorting a lot but all sorts are done in memory.
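The 98506 figure is the sum of the posted db block gets and consistent gets, and the megabyte estimates follow from the block size. As arithmetic only, using the numbers from Seema's statistics:

```sql
-- 4 db block gets + 98502 consistent gets = 98506 logical reads.
-- 98506 * 8192 bytes / 1048576 ~= 770 MB; with 4K blocks ~= 385 MB.
SELECT 4 + 98502                           AS logical_reads,
       ROUND((4 + 98502) * 8192 / 1048576) AS mb_if_8k_blocks,
       ROUND((4 + 98502) * 4096 / 1048576) AS mb_if_4k_blocks
  FROM dual;
```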
--- Seema Singh <[EMAIL PROTECTED]> wrote:
> Hi Gurus
> When I run one complex query I get the following statistics.
Hello,
I think there are 3 factors in performance tuning:
- Time
- Amount
- Speed
The most important factor in performance tuning is time. Of course the
others are important too, but they are indirect indicators. For example:
- 1 block read in 1000 ms
- 1000 blocks read in 1 ms
As we see above, the second one reads 1000 times more blocks yet takes
far less time.
Hi Gurus
When I run one complex query I get the following statistics.
Statistics
--
832 recursive calls
4 db block gets
98502 consistent gets
0 physical reads
0 redo size
995 bytes sent via SQL*Net to client