Re: scanning 2 rows slow index fast 26GB MyISAM

2004-12-23 Thread Bryan Heitman
Hi Donny,
Remember that the Date field is also indexed:
KEY `myKey` (`AccountID`,`WordID`,`Position`,`date`);
Whether it is a SELECT COUNT(*) or a SELECT locationid, the Date field still
must be evaluated; in both cases the query should, and according to EXPLAIN
does, use the key myKey without requiring a table scan.

The select you suggest, without the date condition, runs at the same speed but
returns thousands of results, so it would make sense for that query to be slow
to retrieve its data.  However, my previous query returns only 2 results and
should be fast, not slow.

You are right that SELECT COUNT(*) is a different query.  My point is that the
only difference is that SELECT locationid must fetch the rows from the data
file, since locationid is not in the index.  It should not, however, take 5
minutes to return 2 rows from the table.
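
As an illustration only (untested, and the index name myKey3 is made up), one
way to avoid the MYD lookup entirely would be to append LocationID to the
composite key so the locationid query becomes index-only:

ALTER TABLE matrix
  ADD KEY `myKey3` (`AccountID`,`WordID`,`Position`,`date`,`LocationID`);

With such an index, EXPLAIN should show "Using index" for the SELECT locationid
query, at the cost of an even larger MYI file.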

Bryan
- Original Message - 
From: "Donny Simonton" <[EMAIL PROTECTED]>
To: "'Bryan Heitman'" <[EMAIL PROTECTED]>; "'Sergio Salvi'" 
<[EMAIL PROTECTED]>
Cc: 
Sent: Thursday, December 23, 2004 2:49 PM
Subject: RE: scanning 2 rows slow index fast 26GB MyISAM


Bryan,
Select count(*) is basically a different query than select locationid or any
of your fields.  I have tables with well over a billion rows of information,
some in InnoDB and some in MyISAM, and neither of them, even when heavily
loaded, takes as long as yours is taking.

I recommend that you try this:
Run and Explain:
select locationid from matrix where accountid = 11 and wordid = 72
and position = 'Body';
How many results do you get?
Then run and explain:
select locationid from matrix where accountid = 11 and wordid = 72
and position = 'Body' and date > now() - interval 10 day;
How many results do you get on this query?
I am betting the problem is that you only have results in the past 10 days and
nothing before that with accountid = 11, wordid = 72 and position = 'Body',
which would then make it scan on the date, since it doesn't have anything
before that.  Just a theory.

Donny


-Original Message-
From: Bryan Heitman [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 23, 2004 2:02 PM
To: Sergio Salvi
Cc: mysql@lists.mysql.com
Subject: Re: scanning 2 rows slow index fast 26GB MyISAM
Hi Sergio,
All of your suggestions deal with key optimization, and I do not believe I
have a key issue here.  Remember that select count(*), an index-only query,
returns in .06 seconds, which is very quick.  The real question is why it
takes 5 mins to retrieve the row data for these 2 rows that the index found
so quickly.  Why the delay, and why the heavy read activity on the MYD file?
That to me does not make a lot of sense given the time it takes.  Does MyISAM
not handle large MYD files with a billion rows that well, such that I should
split my data across many tables instead?  I have certainly not run across
this issue before, but this is the first time I have a table with a billion
rows.
mysql> show index from matrix;
+--------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table  | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+--------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| matrix |          1 | myKey    |            1 | AccountID   | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
| matrix |          1 | myKey    |            2 | WordID      | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
| matrix |          1 | myKey    |            3 | Position    | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
| matrix |          1 | myKey    |            4 | date        | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
| matrix |          1 | myKey2   |            1 | LocationID  | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
+--------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
5 rows in set (0.00 sec)
- Original Message -
From: "Sergio Salvi" <[EMAIL PROTECTED]>
To: "Bryan Heitman" <[EMAIL PROTECTED]>
Cc: 
Sent: Thursday, December 23, 2004 12:01 PM
Subject: Re: scanning 2 rows slow index fast 26GB MyISAM
>
> On Thu, 23 Dec 2004, Bryan Heitman wrote:
>
>> My mistake!  Here you go:
>
> Ok, no prob :)
>
>>
>> CREATE TABLE `matrix` (
>>   `WordID` int(11) unsigned NOT NULL default '0',
>>   `LocationID` int(11) unsigned NOT NULL default '0',
>>   `Position` enum('Body','From','Subject','To','Mailbox','File') NOT NULL
>> default 'Body',
>>   `times` int(11) unsigne

Re: scanning 2 rows slow index fast 26GB MyISAM

2004-12-23 Thread Bryan Heitman
Hi Sergio,
All of your suggestions deal with key optimization, and I do not believe I
have a key issue here.  Remember that select count(*), an index-only query,
returns in .06 seconds, which is very quick.  The real question is why it
takes 5 mins to retrieve the row data for these 2 rows that the index found
so quickly.  Why the delay, and why the heavy read activity on the MYD file?

That to me does not make a lot of sense given the time it takes.  Does MyISAM
not handle large MYD files with a billion rows that well, such that I should
split my data across many tables instead?  I have certainly not run across
this issue before, but this is the first time I have a table with a billion
rows.

mysql> show index from matrix;
+--------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table  | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+--------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| matrix |          1 | myKey    |            1 | AccountID   | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
| matrix |          1 | myKey    |            2 | WordID      | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
| matrix |          1 | myKey    |            3 | Position    | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
| matrix |          1 | myKey    |            4 | date        | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
| matrix |          1 | myKey2   |            1 | LocationID  | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
+--------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
5 rows in set (0.00 sec)
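
One thing that stands out in this output is that Cardinality is NULL for every
key part, which suggests the index statistics for this table have never been
collected.  As a hedged suggestion (not something that was actually run here):

ANALYZE TABLE matrix;

ANALYZE TABLE stores the key distribution for a MyISAM table; afterwards SHOW
INDEX should report real cardinality figures, which also helps the optimizer
decide how to use myKey.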

- Original Message - 
From: "Sergio Salvi" <[EMAIL PROTECTED]>
To: "Bryan Heitman" <[EMAIL PROTECTED]>
Cc: 
Sent: Thursday, December 23, 2004 12:01 PM
Subject: Re: scanning 2 rows slow index fast 26GB MyISAM


On Thu, 23 Dec 2004, Bryan Heitman wrote:
My mistake!  Here you go:
Ok, no prob :)
CREATE TABLE `matrix` (
  `WordID` int(11) unsigned NOT NULL default '0',
  `LocationID` int(11) unsigned NOT NULL default '0',
  `Position` enum('Body','From','Subject','To','Mailbox','File') NOT NULL
default 'Body',
  `times` int(11) unsigned NOT NULL default '0',
  `MyOrder` int(11) unsigned NOT NULL default '0',
  `AccountID` int(11) unsigned NOT NULL default '0',
  `date` timestamp(19) NOT NULL,
  KEY `myKey` (`AccountID`,`WordID`,`Position`,`date`),
  KEY `myKey2` (`LocationID`)
) TYPE=MyISAM MAX_ROWS=10 AVG_ROW_LENGTH=300 DATA
DIRECTORY='/home/imap/fuse3.disk2/SQL/search/'
Oops, I forgot to ask you to send the output of "show index from matrix".
But your index "myKey" looks good; you could try changing the order of
the fields in your key. Try creating an index with your fields ordered by
the "Cardinality" value from the "show index from matrix" output
(in ascending order).
Also, what happens if you don't specify the "date" value in your query?
Check the time it takes and the explain output.
Another thing I would suggest is to create (or replace) your index,
trying all (or almost all) of the possible combinations regarding the
order of the keys in your index. It helped me in some situations, and
sometimes it's better for me to keep two indices with the same keys but
different order, because of my different selects.
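
As a rough sketch of what that could look like here (the column order and the
name myKey_alt are purely illustrative, not derived from real cardinality
figures):

ALTER TABLE matrix
  ADD KEY `myKey_alt` (`WordID`,`AccountID`,`Position`,`date`);

Then compare EXPLAIN output and timings for the same SELECT against both
indexes before dropping whichever one loses.
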
Hope that helps!
[]s,
Sergio


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: scanning 2 rows slow index fast 26GB MyISAM

2004-12-23 Thread Bryan Heitman
My mistake!  Here you go:
CREATE TABLE `matrix` (
 `WordID` int(11) unsigned NOT NULL default '0',
 `LocationID` int(11) unsigned NOT NULL default '0',
 `Position` enum('Body','From','Subject','To','Mailbox','File') NOT NULL 
default 'Body',
 `times` int(11) unsigned NOT NULL default '0',
 `MyOrder` int(11) unsigned NOT NULL default '0',
 `AccountID` int(11) unsigned NOT NULL default '0',
 `date` timestamp(19) NOT NULL,
 KEY `myKey` (`AccountID`,`WordID`,`Position`,`date`),
 KEY `myKey2` (`LocationID`)
) TYPE=MyISAM MAX_ROWS=10 AVG_ROW_LENGTH=300 DATA 
DIRECTORY='/home/imap/fuse3.disk2/SQL/search/'

- Original Message - 
From: "Sergio Salvi" <[EMAIL PROTECTED]>
To: "Bryan Heitman" <[EMAIL PROTECTED]>
Cc: 
Sent: Thursday, December 23, 2004 10:08 AM
Subject: Re: scanning 2 rows slow index fast 26GB MyISAM


Bryan,
Can you send the output of "show create table matrix"? You've just sent
the "location" table output, but your select command refers to a table
called "matrix".
[]s,
Sergio.
On Wed, 22 Dec 2004, Bryan Heitman wrote:
I am experiencing extreme slowness performing a query in which 2 rows are
returned hanging in the "sending data" status.
Performing an index-only query such as SELECT COUNT(*) is extremely quick, so
I know the only extra step is retrieving the data from the MYD.

I am looking for thoughts on why this is slow and what can be done to speed
it up.  I find it unusual that it would take this long to simply grab 2 rows
from the MYD.  vmstat reports high reads and strace confirms pread()'s on
the MYD file.

The only abnormality is my table size: the MYD is 26 gig and the MYI is about
30 gig.

Test system details, tests were performed with no load.
System: Redhat Linux 2.4.28
Mysql: tested on versions 4.0.22 and latest 4.1 tree
IDE (WD 2500 JB 8 mb buff) disk1 used for MYD (written contiguously at
beginning of disk)
IDE (WD 2500 JB 8 mb buff) disk2 used for MYI
dual xeon 2.4ghz
1gb ddr266 mem
Here are query tests & detail below:
mysql> select count(*) from matrix where accountid = 11 and wordid = 72
and position = 'Body' and date > now() - interval 10 day;
+----------+
| count(*) |
+----------+
|        2 |
+----------+
1 row in set (0.06 sec)

mysql> select locationid from matrix where accountid = 11 and wordid = 72
and position = 'Body' and date > now() - interval 10 day;
+------------+
| locationid |
+------------+
|      47932 |
|      29571 |
+------------+
2 rows in set (5 min 35.93 sec)

mysql> explain select locationid from matrix where accountid = 11 and wordid
= 71 and position = 'Body' and date > now() - interval 10 day;
+--------+------+---------------+-------+---------+-------------------+-------+-------------+
| table  | type | possible_keys | key   | key_len | ref               | rows  | Extra       |
+--------+------+---------------+-------+---------+-------------------+-------+-------------+
| matrix | ref  | myKey         | myKey |       9 | const,const,const | 56909 | Using where |
+--------+------+---------------+-------+---------+-------------------+-------+-------------+

CREATE TABLE `location` (
  `LocationID` int(11) unsigned NOT NULL auto_increment,
  `ImapUID` int(11) unsigned NOT NULL default '0',
  `AccountID` int(11) unsigned NOT NULL default '0',
  `Date` timestamp(19) NOT NULL,
  `FromWho` tinyblob,
  `Subject` tinyblob,
  `SentTo` tinyblob,
  `mailbox` varchar(255) default NULL,
  `body` longblob,
  PRIMARY KEY  (`LocationID`),
  KEY `myKey` (`LocationID`,`AccountID`,`Date`)
) TYPE=MyISAM MAX_ROWS=10 AVG_ROW_LENGTH=300 DATA
DIRECTORY='/home/SQL/' INDEX DIRECTORY='/home/SQL/'







--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Re: scanning 2 rows slow index fast 26GB MyISAM

2004-12-23 Thread Bryan Heitman
" Because it should scan through all the table to get all records,so it 
takes so a long time,i think."

Leo, see below in the Extra column, it is not doing a table scan according 
to explain.

mysql> explain select locationid from matrix where accountid = 11 and wordid
= 71 and position = 'Body' and date > now() - interval 10 day;
+--------+------+---------------+-------+---------+-------------------+-------+-------------+
| table  | type | possible_keys | key   | key_len | ref               | rows  | Extra       |
+--------+------+---------------+-------+---------+-------------------+-------+-------------+
| matrix | ref  | myKey         | myKey |       9 | const,const,const | 56909 | Using where |
+--------+------+---------------+-------+---------+-------------------+-------+-------------+
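
For comparison, the index-only version can be checked the same way; since every
column the WHERE clause touches is part of myKey, the Extra column for this one
would be expected to include "Using index" (illustrative query, not captured
output):

explain select count(*) from matrix where accountid = 11 and wordid = 72
and position = 'Body' and date > now() - interval 10 day;
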
- Original Message - 
From: "leo" <[EMAIL PROTECTED]>
To: "Bryan Heitman" <[EMAIL PROTECTED]>
Sent: Thursday, December 23, 2004 9:00 AM
Subject: Re: Re: scanning 2 rows slow index fast 26GB MyISAM

Bryan Heitman, hello!
Because it has to scan through the whole table to get all the records, so it
takes such a long time, I think.

=== 2004-12-23 09:37:00, you wrote: ===
Thanks Leo,
However, remember that the key returns quickly on index-only queries, so 
the
conditions in the WHERE are not the problem.  I want to know why it takes 5
mins to scan 2 rows from the MYD

Bryan
- Original Message - 
From: "leo" <[EMAIL PROTECTED]>
To: "Bryan Heitman" <[EMAIL PROTECTED]>
Sent: Thursday, December 23, 2004 2:00 AM
Subject: Re: scanning 2 rows slow index fast 26GB MyISAM

Hi, Bryan Heitman,
You may change the column `date` to type int, fill it in with a
UNIX_TIMESTAMP format value, and alter the key `myKey` on
(`AccountID`,`wordid`,`position`,`Date`) if you often use statements like
"where accountid = xx and wordid = xx and position = 'xx' and date >
now() - interval 10 day".

=== 2004-12-22 22:17:00, you wrote: ===
I am experiencing extreme slowness performing a query in which 2 rows are
returned hanging in the "sending data" status.
Performing an index-only query such as SELECT COUNT(*) is extremely quick, so
I know the only extra step is retrieving the data from the MYD.

I am looking for thoughts on why this is slow and what can be done to speed
it up.  I find it unusual that it would take this long to simply grab 2 rows
from the MYD.  vmstat reports high reads and strace confirms pread()'s on
the MYD file.

The only abnormality is my table size: the MYD is 26 gig and the MYI is about
30 gig.

Test system details, tests were performed with no load.
System: Redhat Linux 2.4.28
Mysql: tested on versions 4.0.22 and latest 4.1 tree
IDE (WD 2500 JB 8 mb buff) disk1 used for MYD (written contiguously at
beginning of disk)
IDE (WD 2500 JB 8 mb buff) disk2 used for MYI
dual xeon 2.4ghz
1gb ddr266 mem
Here are query tests & detail below:
mysql> select count(*) from matrix where accountid = 11 and wordid = 72
and position = 'Body' and date > now() - interval 10 day;
+----------+
| count(*) |
+----------+
|        2 |
+----------+
1 row in set (0.06 sec)

mysql> select locationid from matrix where accountid = 11 and wordid = 72
and position = 'Body' and date > now() - interval 10 day;
+------------+
| locationid |
+------------+
|      47932 |
|      29571 |
+------------+
2 rows in set (5 min 35.93 sec)

mysql> explain select locationid from matrix where accountid = 11 and wordid
= 71 and position = 'Body' and date > now() - interval 10 day;
+--------+------+---------------+-------+---------+-------------------+-------+-------------+
| table  | type | possible_keys | key   | key_len | ref               | rows  | Extra       |
+--------+------+---------------+-------+---------+-------------------+-------+-------------+
| matrix | ref  | myKey         | myKey |       9 | const,const,const | 56909 | Using where |
+--------+------+---------------+-------+---------+-------------------+-------+-------------+
CREATE TABLE `location` (
 `LocationID` int(11) unsigned NOT NULL auto_increment,
 `ImapUID` int(11) unsigned NOT NULL default '0',
 `AccountID` int(11) unsigned NOT NULL default '0',
 `Date` timestamp(19) NOT NULL,
 `FromWho` tinyblob,
 `Subject` tinyblob,
 `SentTo` tinyblob,
 `mailbox` varchar(255) default NULL,
 `body` longblob,
 PRIMARY KEY  (`LocationID`),
 KEY `myKey` (`LocationID`,`AccountID`,`Date`)
) TYPE=MyISAM MAX_ROWS=10 AVG_ROW_LENGTH=300 DATA
DIRECTORY='/home/SQL/' INDEX DIRECTORY='/home/SQL/'

Best regards,
Bryan Heitman
FuseMail Team
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
.
= = = = = = = = = = = = = = = = = = = =
Wishing you more blessings than you can fend off!
With regards,
leo
[EMAIL PROTECTED]
  2004-12-23

Re: scanning 2 rows slow index fast 26GB MyISAM

2004-12-23 Thread Bryan Heitman
Thanks Leo,
However, remember that the key returns quickly on index-only queries, so the
conditions in the WHERE clause are not the source of the delay here.  I want to
know why it takes 5 mins to scan 2 rows from the MYD.

Bryan
- Original Message - 
From: "Bryan Heitman" <[EMAIL PROTECTED]>
To: "leo" <[EMAIL PROTECTED]>
Sent: Thursday, December 23, 2004 9:37 AM
Subject: Re: scanning 2 rows slow index fast 26GB MyISAM


Thanks Leo,
However, remember that the key returns quickly on index-only queries, so 
the conditions in the WHERE are not the problem.  I want to know why it 
takes 5 mins to scan 2 rows from the MYD

Bryan
- Original Message - 
From: "leo" <[EMAIL PROTECTED]>
To: "Bryan Heitman" <[EMAIL PROTECTED]>
Sent: Thursday, December 23, 2004 2:00 AM
Subject: Re: scanning 2 rows slow index fast 26GB MyISAM

Hi, Bryan Heitman,
You may change the column `date` to type int, fill it in with a
UNIX_TIMESTAMP format value, and alter the key `myKey` on
(`AccountID`,`wordid`,`position`,`Date`) if you often use statements like
"where accountid = xx and wordid = xx and position = 'xx' and date >
now() - interval 10 day".
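
If anyone wants to try that, a rough sketch of the conversion might look like
the following (the column name date_ts and key name myKey_ts are made up, and
on a table this size each step is a long, disk-heavy operation):

ALTER TABLE matrix ADD COLUMN `date_ts` INT UNSIGNED NOT NULL DEFAULT 0;
UPDATE matrix SET date_ts = UNIX_TIMESTAMP(`date`);
ALTER TABLE matrix
  DROP KEY `myKey`,
  ADD KEY `myKey_ts` (`AccountID`,`WordID`,`Position`,`date_ts`);

The range condition then becomes date_ts > UNIX_TIMESTAMP() - 10*86400.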

=== 2004-12-22 22:17:00, you wrote: ===
I am experiencing extreme slowness performing a query in which 2 rows are
returned hanging in the "sending data" status.
Performing an index-only query such as SELECT COUNT(*) is extremely quick, so
I know the only extra step is retrieving the data from the MYD.

I am looking for thoughts on why this is slow and what can be done to speed
it up.  I find it unusual that it would take this long to simply grab 2 rows
from the MYD.  vmstat reports high reads and strace confirms pread()'s on
the MYD file.

The only abnormality is my table size: the MYD is 26 gig and the MYI is about
30 gig.

Test system details, tests were performed with no load.
System: Redhat Linux 2.4.28
Mysql: tested on versions 4.0.22 and latest 4.1 tree
IDE (WD 2500 JB 8 mb buff) disk1 used for MYD (written contiguously at
beginning of disk)
IDE (WD 2500 JB 8 mb buff) disk2 used for MYI
dual xeon 2.4ghz
1gb ddr266 mem
Here are query tests & detail below:
mysql> select count(*) from matrix where accountid = 11 and wordid = 72
and position = 'Body' and date > now() - interval 10 day;
+----------+
| count(*) |
+----------+
|        2 |
+----------+
1 row in set (0.06 sec)

mysql> select locationid from matrix where accountid = 11 and wordid = 72
and position = 'Body' and date > now() - interval 10 day;
+------------+
| locationid |
+------------+
|      47932 |
|      29571 |
+------------+
2 rows in set (5 min 35.93 sec)

mysql> explain select locationid from matrix where accountid = 11 and wordid
= 71 and position = 'Body' and date > now() - interval 10 day;
+--------+------+---------------+-------+---------+-------------------+-------+-------------+
| table  | type | possible_keys | key   | key_len | ref               | rows  | Extra       |
+--------+------+---------------+-------+---------+-------------------+-------+-------------+
| matrix | ref  | myKey         | myKey |       9 | const,const,const | 56909 | Using where |
+--------+------+---------------+-------+---------+-------------------+-------+-------------+

CREATE TABLE `location` (
 `LocationID` int(11) unsigned NOT NULL auto_increment,
 `ImapUID` int(11) unsigned NOT NULL default '0',
 `AccountID` int(11) unsigned NOT NULL default '0',
 `Date` timestamp(19) NOT NULL,
 `FromWho` tinyblob,
 `Subject` tinyblob,
 `SentTo` tinyblob,
 `mailbox` varchar(255) default NULL,
 `body` longblob,
 PRIMARY KEY  (`LocationID`),
 KEY `myKey` (`LocationID`,`AccountID`,`Date`)
) TYPE=MyISAM MAX_ROWS=10 AVG_ROW_LENGTH=300 DATA
DIRECTORY='/home/SQL/' INDEX DIRECTORY='/home/SQL/'

.
= = = = = = = = = = = = = = = = = = = =
Wishing you more blessings than you can fend off!
With regards,
leo
[EMAIL PROTECTED]
  2004-12-23




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


scanning 2 rows slow index fast 26GB MyISAM

2004-12-22 Thread Bryan Heitman
I am experiencing extreme slowness performing a query in which 2 rows are 
returned hanging in the "sending data" status.

Performing an index only query such as SELECT COUNT(*) is extremely quick so 
I know the only extra step is retrieving the data from the MYD.

I am looking for thoughts on why this is slow and what can be done to speed
it up.  I find it unusual that it would take this long to simply grab 2 rows
from the MYD.  vmstat reports high reads and strace confirms pread()'s on
the MYD file.

The only abnormality is my table size: the MYD is 26 gig and the MYI is about
30 gig.
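
For reference, the data and index sizes, row format, and average row length can
also be pulled from the server itself; the Data_length and Index_length columns
below should roughly match the MYD and MYI file sizes above:

SHOW TABLE STATUS LIKE 'matrix';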

Test system details, tests were performed with no load.
System: Redhat Linux 2.4.28
Mysql: tested on versions 4.0.22 and latest 4.1 tree
IDE (WD 2500 JB 8 mb buff) disk1 used for MYD (written contiguously at 
beginning of disk)
IDE (WD 2500 JB 8 mb buff) disk2 used for MYI
dual xeon 2.4ghz
1gb ddr266 mem

Here are query tests & detail below:
mysql> select count(*) from matrix where accountid = 11 and wordid = 72 and 
position = 'Body' and date > now() - interval 10 day;
+----------+
| count(*) |
+----------+
|        2 |
+----------+
1 row in set (0.06 sec)

mysql> select locationid from matrix where accountid = 11 and wordid = 72 
and position = 'Body' and date > now() - interval 10 day;
+------------+
| locationid |
+------------+
|      47932 |
|      29571 |
+------------+
2 rows in set (5 min 35.93 sec)

mysql> explain select locationid from matrix where accountid = 11 and wordid 
= 71 and position = 'Body' and date > now() - interval 10 day;
+--------+------+---------------+-------+---------+-------------------+-------+-------------+
| table  | type | possible_keys | key   | key_len | ref               | rows  | Extra       |
+--------+------+---------------+-------+---------+-------------------+-------+-------------+
| matrix | ref  | myKey         | myKey |       9 | const,const,const | 56909 | Using where |
+--------+------+---------------+-------+---------+-------------------+-------+-------------+

CREATE TABLE `location` (
 `LocationID` int(11) unsigned NOT NULL auto_increment,
 `ImapUID` int(11) unsigned NOT NULL default '0',
 `AccountID` int(11) unsigned NOT NULL default '0',
 `Date` timestamp(19) NOT NULL,
 `FromWho` tinyblob,
 `Subject` tinyblob,
 `SentTo` tinyblob,
 `mailbox` varchar(255) default NULL,
 `body` longblob,
 PRIMARY KEY  (`LocationID`),
 KEY `myKey` (`LocationID`,`AccountID`,`Date`)
) TYPE=MyISAM MAX_ROWS=10 AVG_ROW_LENGTH=300 DATA 
DIRECTORY='/home/SQL/' INDEX DIRECTORY='/home/SQL/'


Best regards,
Bryan Heitman
FuseMail Team 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Mysql and phpBB

2004-04-05 Thread Bryan Heitman
Mark,

You may have a corrupt table.  I would recommend printing the $query just
before mysql_query() is run.  This will give you the query that is failing
and the table that it is querying.  You may want to try a "check
table" on this table.


Bryan
- Original Message - 
From: "Mark Susol | Ultimate Creative Media"
<[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, April 05, 2004 8:21 PM
Subject: Mysql and phpBB


I'm working with a script that is made for phpBB to rebuild the search
engine. The script does work. I've been able to use it with limited success.
The problem is I haven't been able to use it with complete success.

The problem I'm having is that when it attempts to run its query, which
selects X post texts from a table

$sql = "SELECT post_id, post_subject, post_text FROM ". POSTS_TEXT_TABLE ."
LIMIT $start, ". $HTTP_GET_VARS['post_limit'];

$result = $db->sql_query ($sql);

I will eventually get

Warning: mysql_query(): Unable to save result set

Which comes from the connection script line:

$this->query_result = mysql_query($query, $this->db_connect_id);


Someone suggested I use mysql_unbuffered_query() on another site.

I've never had this kind of error before with this site. It's a new server
I've deployed. Are there my.cnf settings I need to have in order to run a
script that will work on 300,000 records, even doing 10 at a time? Even if I
try this with 1 or 2 post texts I get this error.

Mark Súsol
---
u l t i m a t e - CreativeMedia
Web | Print | CD Media | eCommerce
www.ultimatecreativemedia.com
Ph: 301-668-0588





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: InnoDB Deadlock cannot find free slot for undo

2004-04-01 Thread Bryan Heitman
No, we did not run out of disk space; we had 102GB free on a reiserfs
partition.
We are running all the defaults for transactions, so we have REPEATABLE-READ
as our tx_isolation. We should have allowed enough time for this transaction
to finish; this is in a production environment that has run fine for over a
year now with InnoDB without this problem. We do not commit anything; it is
all autocommit. Something must have put an extreme burden on the box to
cause this undo log to fill up, but what must we increase so that we have a
bigger log?

I'm unsure if the problem really was related to our "undo log" needing to be
bigger, though it may have been. What is really weird is that we had ~500 MySQL
processes that were sitting in the Sleep status in MySQL's SHOW PROCESSLIST
for over 2000 seconds, but in SHOW INNODB STATUS they were sitting in the
ACTIVE status. While these were like this, InnoDB would not move its last
checkpoint time. We had to manually kill each one of the MySQL processes
sitting in the Sleep/ACTIVE status; once we accomplished this, InnoDB
continued to clean up its group log files, and the Last Checkpoint caught
up with the sequence number/flushed-up-to numbers.
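
For anyone hitting the same thing, the manual cleanup was essentially the
following (thread id 12345 is just a placeholder):

SHOW PROCESSLIST;
-- find the connections stuck in Sleep with Time in the thousands, then:
KILL 12345;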

Any help or a pointer in the right direction in this situation would be
appreciated.

Bryan Heitman
FuseMail
- Original Message - 
From: "Dathan Vance Pattishall" <[EMAIL PROTECTED]>
To: "'Bryan Heitman'" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Thursday, April 01, 2004 12:15 AM
Subject: RE: InnoDB Deadlock cannot find free slot for undo


> Did you run out of disk space? If not, what transaction model are you
> using? Is it repeatable read? If so, are you allowing the transaction to
> finish with a commit? If not, you're filling your transaction log and that's
> how you're getting this message. Call commit on the session OR SET
> AUTOCOMMIT=1 on the connection. Additionally, set your
> innodb_lock_wait_timeout variable.
>
>
> --
> DVP
>
>
> > -Original Message-
> > From: Bryan Heitman [mailto:[EMAIL PROTECTED]
> > Sent: Wednesday, March 31, 2004 2:53 PM
> > To: [EMAIL PROTECTED]
> > Subject: InnoDB Deadlock cannot find free slot for undo
> >
> > Looking for some help on tuning my InnoDB settings.  I received an InnoDB
> > deadlock.
> >
> > From analyzing show innodb status -->
> > * There were a ton of processes in the active status not doing anything.
> > * The log flushed up to was moving but the last checkpoint at was
> > extremely
> > old.
> >
> > Error in mysql error log scrolling repeatedly:
> > 16:21:37InnoDB: Warning: cannot find a free slot for an undo log. Do you
> > have too many active transactions running concurrently?
> >
> > version | 4.0.17-standard-log
> >
> > current innodb settings
> > set-variable = innodb_buffer_pool_size=768M
> > set-variable = innodb_additional_mem_pool_size=20M
> > set-variable=   innodb_log_file_size=256M
> > set-variable=   innodb_log_files_in_group=3
> >
> > Any help would be appreciated.
> >
> > Best,
> >
> >
> > Bryan Heitman
> > [EMAIL PROTECTED]
> >
> >


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



InnoDB Deadlock cannot find free slot for undo

2004-03-31 Thread Bryan Heitman
Looking for some help on tuning my InnoDB settings.  I received an InnoDB
deadlock.

From analyzing SHOW INNODB STATUS -->
* There were a ton of processes in the ACTIVE status not doing anything.
* The "Log flushed up to" value was moving, but the "Last checkpoint at" value
was extremely old.

Error in mysql error log scrolling repeatedly:
16:21:37InnoDB: Warning: cannot find a free slot for an undo log. Do you
have too many active transactions running concurrently?

version | 4.0.17-standard-log

current innodb settings
set-variable = innodb_buffer_pool_size=768M
set-variable = innodb_additional_mem_pool_size=20M
set-variable=   innodb_log_file_size=256M
set-variable=   innodb_log_files_in_group=3

Any help would be appreciated.

Best,


Bryan Heitman
[EMAIL PROTECTED]


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]