Hi all,

Do you guys know how to deal with large tables?

Here's my problem:

I have two web servers (running Nginx), two DB servers (running MySQL 5.1.35), 
and a server for load balancing.

I'm maintaining a game data tracking system. There's a game_log table that 
records detailed info from many games.

Here's the structure:

CREATE TABLE `game_log` (
  `game_log_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `game_id` int(10) unsigned NOT NULL,
  `event_id` int(10) unsigned NOT NULL,
  `player_id` int(10) unsigned NOT NULL,
  `session_id` varchar(128) NOT NULL COMMENT 'flash session id',
  `score` int(10) unsigned DEFAULT NULL,
  `handle_statu` int(1) unsigned NOT NULL DEFAULT '1' COMMENT '1: not handled, 2: handled',
  `game_end` bigint(20) DEFAULT NULL,
  `game_start` bigint(20) unsigned NOT NULL DEFAULT '0',
  `event_time` float DEFAULT '0',
  PRIMARY KEY (`game_log_id`),
  KEY `game_id` (`game_id`),
  KEY `event_id` (`event_id`),
  KEY `player_id` (`player_id`)
);

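To give an idea of the access pattern, here's a representative query I made up 
for this post (my real queries filter on these indexed columns):

  -- made-up example values, not a real query from my app
  SELECT game_log_id, player_id, score, event_time
  FROM game_log
  WHERE game_id = 1234
    AND event_id = 56
  ORDER BY game_log_id DESC
  LIMIT 100;
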
It currently has about 12,200,000 records (two or three of the other tables 
have around a million records each). Now even simple queries against this 
single table are very slow, and most of the time they just fail.
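
For what it's worth, running EXPLAIN on that kind of query (again, made-up 
values) should show which index MySQL picks:

  -- made-up example, to check index usage
  EXPLAIN SELECT game_log_id, player_id, score, event_time
  FROM game_log
  WHERE game_id = 1234
    AND event_id = 56;

Is that the right place to start looking?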

Do you guys know what the problem is, or how I can make this faster and more 
efficient?
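
One idea I had but haven't tried yet: a composite index covering the columns 
that get filtered together, something like:

  -- hypothetical index, just a guess on my part
  ALTER TABLE game_log ADD KEY game_event (game_id, event_id);

Would that help at this table size, or should I be looking at partitioning or 
archiving old rows instead?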

Thanks in advance,

CK
                                          