> You know, this might sound strange, but does the performance drop off at
> all if you lose the indices? A table scan of rows 8 bytes wide is going
> to be pretty damn quick. Plus there's a lot less maintenance to do
> without indices and no risk of them getting corrupted.
A full table scan is
On 8 Feb 2004, at 20:28, Mark Hazen wrote:
My tables are just 2 INT columns. I have unique indexes on them going
both ways.
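[For concreteness, the table Mark describes might look like the sketch below — the table and column names are invented, and the `TYPE=` syntax is 2004-era MyISAM (later versions spell it `ENGINE=`):]

```sql
-- A sketch of the table described above (names are hypothetical):
-- two INT columns, with a unique index in each direction.
CREATE TABLE pairs (
  a INT NOT NULL,
  b INT NOT NULL,
  UNIQUE KEY ab (a, b),  -- serves lookups by (a) or (a, b)
  UNIQUE KEY ba (b, a)   -- serves lookups by (b) or (b, a)
) TYPE=MyISAM;
```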
Sounds like you're sorted.
You know, this might sound strange, but does the performance drop off at
all if you lose the indices? A table scan of rows 8 bytes wide is going
to be pretty damn quick. Plus there's a lot less maintenance to do
without indices and no risk of them getting corrupted.
> What's the nature of your query?
>
> If it's using an integer index and that's what you're searching on,
> then having it physically sorted is a Good Thing. If you're
> table-scanning your main table, you're toast anyway. Finding ways of
> making that faster is the way to go, maybe partitioning
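[MySQL of this era has no built-in partitioning, but the same effect can be approximated by hand with the MERGE storage engine over identical MyISAM tables. A rough sketch, with invented table names — note that a MERGE table cannot enforce UNIQUE across its underlying tables, so uniqueness must be maintained per-chunk:]

```sql
-- Two identical MyISAM chunks (names hypothetical)
CREATE TABLE pairs_old (
  a INT NOT NULL,
  b INT NOT NULL,
  UNIQUE KEY ab (a, b),
  UNIQUE KEY ba (b, a)
) TYPE=MyISAM;

CREATE TABLE pairs_new (
  a INT NOT NULL,
  b INT NOT NULL,
  UNIQUE KEY ab (a, b),
  UNIQUE KEY ba (b, a)
) TYPE=MyISAM;

-- A MERGE table spanning both; inserts go to the last chunk.
-- Keys here are plain KEYs: MERGE can't enforce cross-chunk UNIQUE.
CREATE TABLE pairs (
  a INT NOT NULL,
  b INT NOT NULL,
  KEY ab (a, b),
  KEY ba (b, a)
) TYPE=MERGE UNION=(pairs_old, pairs_new) INSERT_METHOD=LAST;
```

[The practical win is that a nightly bulk delete can often become a cheap `DROP TABLE` of an expired chunk plus an `ALTER TABLE ... UNION=(...)`.]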
On 8 Feb 2004, at 19:37, Mark Hazen wrote:
*snip*
Here's my problem: I've got a bunch of tables with hundreds of millions
of rows in them. Every night, I delete about a couple million rows and
then run millions of searches on these tables.
What should I worry about more? A sorted index or a da
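[A minimal sketch of that nightly cycle, with hypothetical names and a stand-in delete criterion: after a bulk delete on MyISAM, `OPTIMIZE TABLE` defragments the data file, re-sorts the indexes, and refreshes the key-distribution statistics the optimizer uses.]

```sql
-- Nightly maintenance sketch (names and WHERE clause are invented)
DELETE FROM pairs WHERE expired = 1;  -- the real criterion goes here

OPTIMIZE TABLE pairs;  -- reclaim space from deleted rows, sort indexes,
                       -- and update index statistics in one pass

ANALYZE TABLE pairs;   -- redundant right after OPTIMIZE on MyISAM,
                       -- but cheap if run on its own
```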
In reference to my earlier message, I think I've figured out that the
equivalent command for OPTIMIZE TABLE is:
myisamchk -r --sort-index --analyze
That isn't documented anywhere... and in fact, the French-language
version says something conflicting (I don't speak French, but a Google
search brought