Hello,

One more idea: use something like this (I hope it's correct :) )

SELECT DISTINCT
        content.row_id AS row_id,
        content.app_id AS app_id,
        CASE s1.field_id
          WHEN 69  THEN "niche"
          WHEN 70  THEN "type"
          WHEN 71  THEN "title"
          WHEN 72  THEN "description"
          WHEN 73  THEN "image"
          WHEN 74  THEN "last_update"
          WHEN 76  THEN "content_link"
          WHEN 84  THEN "unique_id"
          WHEN 129 THEN "date_added"
          WHEN 116 THEN "content_provider"
          WHEN 118 THEN "user_hits"
          WHEN 120 THEN "vote_total"
          WHEN 121 THEN "vote_user_total"
          WHEN 125 THEN "channel"
        END AS fieldname,
        s1.data
FROM
        content JOIN
        content s1 ON s1.row_id = content.row_id JOIN
        content s2 ON s2.field_id = 74 AND s2.row_id = content.row_id
WHERE
        content.app_id = 11 AND
        unix_timestamp(s2.data) - unix_timestamp('2003-10-23 23:59:59') < 0 AND
        s1.field_id IN (69,70,71,72,73,74,76,84,129,116,118,120,121,125)
ORDER BY
        s2.data DESC

I think it can be fast enough, but your application has to collect the different
fields itself...
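Collecting the fields could look something like this (a rough Python sketch; the
sample rows and the `collect_rows` helper are invented for illustration, and the
tuple order follows the SELECT column order above):

```python
def collect_rows(result_rows):
    """Pivot (row_id, app_id, fieldname, data) result tuples into one
    dict per row_id, so each record ends up with all its fields."""
    records = {}
    for row_id, app_id, fieldname, data in result_rows:
        # Start a new record the first time we see this row_id,
        # then merge each fieldname/data pair into it.
        rec = records.setdefault(row_id, {"row_id": row_id, "app_id": app_id})
        rec[fieldname] = data
    return list(records.values())

# Invented sample of what the query might return:
sample = [
    (1, 11, "title", "First article"),
    (1, 11, "last_update", "2003-10-20 12:00:00"),
    (2, 11, "title", "Second article"),
]

for rec in collect_rows(sample):
    print(rec)
```

So the database sends one narrow row per field, and the application folds them
back into one record per row_id.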

burci

Thursday, October 23, 2003, 11:50:08 PM, you wrote:

MB> Thanks everyone for your input, I'll try the ramdisk idea, I read about
MB> someone else who tried that and had some success. Beyond, that I'm gonna
MB> take the long route and redesign the database to be a bit more
MB> conventional.

MB> Thanks!
MB> Matt

MB> On Thu, 2003-10-23 at 20:28, Peter Buri wrote:
>> Hello,
>> 
>> as I see it, you use one table to store all the data, but the cohesive data are
>> split into 15 (!) different rows.
>> 
>> I think to get the best performance you should redesign your table.
>> Use at least first normal form [1NF]; if the app_id is unique, it can be the
>> primary key [which will speed up the query].
>> 
>> Data which have the same row_id should be in one row.
>> 
>> Your table definition should look like this:
>> 
>> create table content (
>>         app_id           MEDIUMINT NOT NULL AUTO_INCREMENT,
>>         niche            ??? , -- maybe int
>>         type             int,
>>         title            varchar(200),
>>         description      text,
>>         image            ???, -- maybe varchar
>>         last_update      datetime,
>>         content_link     varchar(200),
>>         unique_id        int,
>>         date_added       datetime,
>>         content_provider int,
>>         user_hits        int,
>>         vote_total       int,
>>         vote_user_total  int,
>>         channel          int,
>>         --...
>> 
>>         primary key ( app_id )
>> 
>> );
>> 
>> As I see you have at least 129 different field types [s9.field_id=129]; you can
>> split the data into different tables. [Use a higher normal form.]
>> 
>> But if you don't want to redesign the tables and all the different fields exist,
>> then use JOIN instead of LEFT JOIN, and the "group by" is needless.
>> 
>> burci
>> 
>> Thursday, October 23, 2003, 9:05:26 PM, you wrote:
>> 
>> MB> Hey All-
>> 
>> MB> I am trying to improve the speed of a website and was wondering if
>> MB> anyone had any ways I can rewrite these queries so that they actually
>> MB> run with some decent speed.
>> 
>> MB> It's a really nasty query and I'm not sure where to start. I'd like to
>> MB> not have to redo the tables, and I already put some indexes on it which
>> MB> improved speed a lot, but these queries are still very slow. You can most
>> MB> certainly laugh to yourselves on this one... just trying to get some
>> MB> opinions on what I should do with this.
>> 
>> MB> Thanks-
>> MB> Matt
>> 
>> MB> SELECT content.row_id AS row_id, content.app_id AS app_id, s1.data AS
>> MB> niche, s2.data AS type, s3.data AS title, s4.data AS description,
>> MB> s5.data AS image, s6.data AS last_update, s7.data AS content_link,
>> MB> s8.data AS unique_id, s9.data AS date_added, s10.data AS
>> MB> content_provider, s11.data AS user_hits, s12.data AS vote_total,
>> MB> s13.data AS vote_user_total, s14.data AS channel FROM content LEFT JOIN
>> MB> content s1 ON s1.field_id=69 AND s1.row_id = content.row_id LEFT JOIN
>> MB> content s2 ON s2.field_id=70 AND s2.row_id = content.row_id LEFT JOIN
>> MB> content s3 ON s3.field_id=71 AND s3.row_id = content.row_id LEFT JOIN
>> MB> content s4 ON s4.field_id=72 AND s4.row_id = content.row_id LEFT JOIN
>> MB> content s5 ON s5.field_id=73 AND s5.row_id = content.row_id LEFT JOIN
>> MB> content s6 ON s6.field_id=74 AND s6.row_id = content.row_id LEFT JOIN
>> MB> content s7 ON s7.field_id=76 AND s7.row_id = content.row_id LEFT JOIN
>> MB> content s8 ON s8.field_id=84 AND s8.row_id = content.row_id LEFT JOIN
>> MB> content s9 ON s9.field_id=129 AND s9.row_id = content.row_id LEFT JOIN
>> MB> content s10 ON s10.field_id=116 AND s10.row_id = content.row_id LEFT
>> MB> JOIN content s11 ON s11.field_id=118 AND s11.row_id = content.row_id
>> MB> LEFT JOIN content s12 ON s12.field_id=120 AND s12.row_id =
>> MB> content.row_id LEFT JOIN content s13 ON s13.field_id=121 AND s13.row_id
>> MB> = content.row_id LEFT JOIN content s14 ON s14.field_id=125 AND
>> MB> s14.row_id = content.row_id WHERE content.app_id = 11 AND
>> MB> unix_timestamp(s6.data)-unix_timestamp('2003-10-23 23:59:59') < 0 GROUP
>> MB> BY row_id ORDER BY last_update desc LIMIT -1
>> 
>> 
>> 
>> 
>> -- 
>> [nick]:burci [hp]:http://peter.buri.hu [mailto]:[EMAIL PROTECTED]
>> [motto]:"Music makes life easier to survive!"
>> 






-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]
