I do, but I don't see any way around that with the data I have.

Dave G.

> Good Afternoon David
>
> It sounds as if you have a number of non-unique indices (possibly even
> full-text indices!) slowing down your queries. This should help you find
> the slower indices to concentrate on:
> mysql>
> select TABLE_NAME,COLUMN_NAME,INDEX_NAME from
> INFORMATION_SCHEMA.STATISTICS
> where NON_UNIQUE=1;
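>
> A variant of the same query that also pulls CARDINALITY (a per-index
> estimate maintained in the standard INFORMATION_SCHEMA.STATISTICS table)
> makes the low-selectivity indices easier to spot:
>
> mysql>
> select TABLE_NAME,INDEX_NAME,COLUMN_NAME,CARDINALITY from
> INFORMATION_SCHEMA.STATISTICS
> where NON_UNIQUE=1
> order by CARDINALITY;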
>
> Anyone else?
> Martin
>
> ----- Original Message -----
> From: "Dave G" <[EMAIL PROTECTED]>
> To: <mysql@lists.mysql.com>
> Sent: Wednesday, June 27, 2007 11:32 AM
> Subject: optimization help
>
>
>>I have a table in my database (currently) that grows to be huge (and I
>> need to keep the data).  I'm in a redesign phase and I'm trying to do it
>> right.  So here are the relevant details:
>>
>> The table has several keys involved:
>>
>> mysql> desc data__ProcessedDataFrames;
>> +------------------------+------------------+------+-----+---------+----------------+
>> | Field                  | Type             | Null | Key | Default | Extra          |
>> +------------------------+------------------+------+-----+---------+----------------+
>> | processed_id           | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
>> | top_level_product_name | varchar(255)     | YES  | MUL | NULL    |                |
>> | test_id                | int(10) unsigned | YES  | MUL | NULL    |                |
>> | p_time                 | double           | YES  | MUL | NULL    |                |
>> | processed_data         | mediumblob       | YES  |     | NULL    |                |
>> +------------------------+------------------+------+-----+---------+----------------+
>> 6 rows in set (0.00 sec)
>>
>> This is the table that currently contains the data I'm interested in.
>> Queries on this table when it gets large are slow as molasses.  I'm
>> thinking about making a new table for each distinct test_id .... any
>> opinions as to whether this is good or bad?
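>>
>> (If I do split it up per test_id and the tables are MyISAM, I gather a
>> MERGE table would let the pieces be queried as one; the base tables
>> must be identically defined, and all the names below are made up:)
>>
>> CREATE TABLE data_frames_all (
>>   processed_id INT(10) UNSIGNED NOT NULL,
>>   top_level_product_name VARCHAR(255),
>>   test_id INT(10) UNSIGNED,
>>   p_time DOUBLE,
>>   processed_data MEDIUMBLOB,
>>   KEY (test_id), KEY (p_time)
>> ) ENGINE=MERGE UNION=(data_test_1, data_test_2) INSERT_METHOD=LAST;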
>>
>> Before you make fun of me for my questions, I'm a bit new to database
>> programming.
>>
>> If it is better design to break it into smaller tables (for speed,
>> anyway), then I would need to know how to query over multiple tables as
>> though they were one table.  A join will do this, but that takes
>> forever (unless of course I'm doing it wrong), so that's not a good
>> option.  I need to be able to query over multiple test_ids, which would
>> be multiple tables, for a specific top_level_product_name, within some
>> time range (using p_time).
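>>
>> (For reference, one single-table alternative would be a composite
>> index matching that WHERE clause; the index name and the values in the
>> sample query are made up:)
>>
>> ALTER TABLE data__ProcessedDataFrames
>>   ADD INDEX idx_test_product_time
>>       (test_id, top_level_product_name, p_time);
>>
>> SELECT processed_id, p_time
>>   FROM data__ProcessedDataFrames
>>  WHERE test_id IN (101, 102)
>>    AND top_level_product_name = 'some_product'
>>    AND p_time BETWEEN 1182920000 AND 1182930000;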
>>
>> Any help would be appreciated.  I will happily give more information if
>> you need to offer an educated opinion.
>>
>> Thanks
>>
>> David Godsey
>>
>>
>> --
>> MySQL General Mailing List
>> For list archives: http://lists.mysql.com/mysql
>> To unsubscribe:
>> http://lists.mysql.com/[EMAIL PROTECTED]
>>
>>
>



