Re: InnoDB - CREATE INDEX - Locks table for too long

2009-03-16 Thread Claudio Nanni
Hi,

I am using your procedure on MyISAM tables now and it works, but RENAME does
not work with locked tables

(anyway, it is already an atomic operation)

Baron wrote:

Try something like this:

create table new_table like old_table;
alter table new_table add key(.);
insert into new_table select * from old_table where primary_key
between ? and ?;

Repeat the last statement until nearly finished, then:

lock tables new_table write, old_table write;
insert into new_table select * from old_table where primary_key > ?;
rename table old_table to backup, new_table to old_table;
unlock tables;



so the above procedure fails unless I swap like this:

-

...

...

unlock tables;

rename table old_table to backup, new_table to old_table;

-

The problem, of course, is that it is no longer atomic.

I also thought of SELECT ... FOR UPDATE, which locks the records but does not
prevent new ones from being inserted,

any suggestions?

Thanks

Claudio


Re: InnoDB - CREATE INDEX - Locks table for too long

2009-02-21 Thread Claudio Nanni

Hi Baron!

I am going to try your solution on preprod on Monday.

In the meantime, using your great slow-query-log analyzer, the strategy 
I thought of was similar to yours,
but using only one SELECT that puts just a READ lock on the records.
While the table is heavily read at night as well, I have only one insert
and two updates in one week, so the idea is to do exactly what you said
but with a single insert under a READ lock.


So what I have tried on preprod is:

create table new_table like old_table;
alter table new_table add key(.);
insert into new_table select * from old_table FOR UPDATE

The strange thing is that while I used (by mistake) 'FOR UPDATE' instead of
'LOCK IN SHARE MODE',
with AUTOCOMMIT=1 it behaved like 'LOCK IN SHARE MODE'. Is that possible?
In fact, during the 1-hour insert I could read records from old_table and
insert new records into old_table, but updates were queued.


In my case, if I do this between 2:00am and 4:00am, I can first try the
single READ-locked insert,
and if I have problems I will switch to your solution. What do you think?


Thanks!

Claudio








Baron Schwartz wrote:

Hi!

On Thu, Feb 19, 2009 at 2:14 PM, Claudio Nanni claudio.na...@gmail.com wrote:
  

I need to add an index on a table on a production server.
It is one 7Gb InnoDB table with single .ibd file (one_file_per_table),
the index creation on preprod server took 40 minutes but table was smaller.
I tried to add the index but was locking all applications on production and
had to kill it.
I have requested a maintenance window but it will take long time.
Since this application is scanning like crazy I'd like to do it a.s.a.p.

Do you have any hint for a non locking solution?



Try something like this:

create table new_table like old_table;
alter table new_table add key(.);
insert into new_table select * from old_table where primary_key
between ? and ?;

Repeat the last statement until nearly finished, then:

lock tables new_table write, old_table write;
insert into new_table select * from old_table where primary_key > ?;
rename table old_table to backup, new_table to old_table;
unlock tables;

Baron

  



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: InnoDB - CREATE INDEX - Locks table for too long

2009-02-20 Thread Baron Schwartz
Hi!

On Thu, Feb 19, 2009 at 2:14 PM, Claudio Nanni claudio.na...@gmail.com wrote:
 I need to add an index on a table on a production server.
 It is one 7Gb InnoDB table with single .ibd file (one_file_per_table),
 the index creation on preprod server took 40 minutes but table was smaller.
 I tried to add the index but was locking all applications on production and
 had to kill it.
 I have requested a maintenance window but it will take long time.
 Since this application is scanning like crazy I'd like to do it a.s.a.p.

 Do you have any hint for a non locking solution?

Try something like this:

create table new_table like old_table;
alter table new_table add key(.);
insert into new_table select * from old_table where primary_key
between ? and ?;

Repeat the last statement until nearly finished, then:

lock tables new_table write, old_table write;
insert into new_table select * from old_table where primary_key > ?;
rename table old_table to backup, new_table to old_table;
unlock tables;
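The procedure above can be sketched end to end; what follows is a minimal,
hypothetical illustration using Python's sqlite3 in place of MySQL (the
schema, chunk size, and names are all invented for the demo), copying rows
in primary-key ranges so that no single statement runs long enough to block
other sessions badly:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Hypothetical schema standing in for old_table / new_table.
cur.execute("CREATE TABLE old_table (id INTEGER PRIMARY KEY, val TEXT)")
cur.executemany("INSERT INTO old_table VALUES (?, ?)",
                [(i, f"row{i}") for i in range(1, 101)])
cur.execute("CREATE TABLE new_table (id INTEGER PRIMARY KEY, val TEXT)")
cur.execute("CREATE INDEX val_idx ON new_table (val)")  # build index while empty

CHUNK = 25
lo = 0
while True:
    # One primary-key range per statement, like the BETWEEN ? AND ? step.
    cur.execute("INSERT INTO new_table SELECT * FROM old_table "
                "WHERE id > ? AND id <= ?", (lo, lo + CHUNK))
    if cur.rowcount == 0:   # no rows left to copy
        break
    lo += CHUNK
con.commit()

count = cur.execute("SELECT COUNT(*) FROM new_table").fetchone()[0]
print(count)  # 100
```

Against a real MySQL server, the final chunk would be copied under LOCK
TABLES and the swap done with RENAME TABLE, as shown above.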

Baron




InnoDB - CREATE INDEX - Locks table for too long

2009-02-19 Thread Claudio Nanni

I need to add an index on a table on a production server.
It is one 7Gb InnoDB table with single .ibd file (one_file_per_table),
the index creation on preprod server took 40 minutes but table was smaller.
I tried to add the index but was locking all applications on production 
and had to kill it.

I have requested a maintenance window but it will take long time.
Since this application is scanning like crazy I'd like to do it a.s.a.p.

Do you have any hint for a non locking solution?

I have created smaller indexes and brought the average rows retrieved 
via full table scan from 2 Million to 400.000 per second, now I just 
need this last index!
Thanks to Baron for the slow-query-log analyzer tool, it is simple but 
perfect!

And I used it without patching the server.
I have easily found the worst queries just watching three values: 
elapsed time, number of occurences, and rows retrieved.
And I discovered all the tables which needed indexing, so simple, so 
good! Thanks Baron!


Cheers

Claudio






Re: How to create INDEX for this query?

2008-11-11 Thread Kelvin Wu
No advice? Anyway, after monitoring slow queries for a few days, I found that
most likely the following queries caused the bad performance and locked the
table for a long time:

 LIMIT 16780, 20

A big offset! Even though the index is properly used.

After restricting the offset value within the software, e.g. returning an
error directly when the offset is more than 4000, this issue seems to have
disappeared.
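Capping the offset works; the other common fix is keyset (seek) pagination,
which replaces the OFFSET with a WHERE clause that seeks past the last row of
the previous page instead of counting and discarding rows. A hypothetical
sketch with Python's sqlite3 (schema and values invented; the row-value
comparison needs SQLite 3.15 or later):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE article (id INTEGER PRIMARY KEY, ts INTEGER)")
cur.executemany("INSERT INTO article VALUES (?, ?)",
                [(i, 1000 + i) for i in range(1, 101)])

# OFFSET pagination: the engine walks and throws away the first 80 rows.
page_offset = cur.execute(
    "SELECT id FROM article ORDER BY ts DESC, id DESC LIMIT 20 OFFSET 80"
).fetchall()

# Keyset pagination: remember the last (ts, id) seen and seek past it.
last_ts, last_id = 1021, 21   # last row of the previous page
page_keyset = cur.execute(
    "SELECT id FROM article WHERE (ts, id) < (?, ?) "
    "ORDER BY ts DESC, id DESC LIMIT 20", (last_ts, last_id)
).fetchall()
print(page_offset == page_keyset)  # True
```

The keyset variant stays fast at any page depth because the index seek lands
directly on the first row of the page.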



On 2008-11-05, Kelvin Wu [EMAIL PROTECTED] wrote:

 Hi All,

 I need advice for creating proper INDEX for a query.

 The table is like:

 mysql> desc article;
 ++--+--+-+-++
 | Field  | Type | Null | Key | Default | Extra  |
 ++--+--+-+-++
 | id | int(11)  | NO   | PRI | NULL| auto_increment |
 | title  | varchar(200) | NO   | | ||
 | timestamp  | int(11)  | NO   | | 0   ||
 | active | tinyint(4)   | NO   | MUL | 1   ||
 | body   | mediumtext   | NO   | | ||
 | source | int(11)  | NO   | | 1   ||
 | category   | int(11)  | NO   | | 0   ||
 ++--+--+-+-++
 mysql> select count(*) from article;
 +--+
 | count(*) |
 +--+
 |   536023 |
 +--+

 I have created few indexes for this table:

 mysql> show index from article;

 +-+++--+-+---+-+--++--++-+
 | Table   | Non_unique | Key_name   | Seq_in_index | Column_name |
 Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |

 +-+++--+-+---+-+--++--++-+
 | article |  0 | PRIMARY|1 | id  |
 A |NULL | NULL | NULL   |  | BTREE  | |
 | article |  1 | index1 |1 | active  |
 A |NULL | NULL | NULL   |  | BTREE  | |
 | article |  1 | index1 |2 | timestamp   |
 A |NULL | NULL | NULL   |  | BTREE  | |
 | article |  1 | index1 |3 | id  |
 A |NULL | NULL | NULL   |  | BTREE  | |
 | article |  1 | index2 |1 | active  |
 A |NULL | NULL | NULL   |  | BTREE  | |
 | article |  1 | index2 |2 | source  |
 A |NULL | NULL | NULL   |  | BTREE  | |
 | article |  1 | index2 |3 | timestamp   |
 A |NULL | NULL | NULL   |  | BTREE  | |
 | article |  1 | index2 |4 | id  |
 A |NULL | NULL | NULL   |  | BTREE  | |
 | article |  1 | index3 |1 | active  |
 A |NULL | NULL | NULL   |  | BTREE  | |
 | article |  1 | index3 |2 | source  |
 A |NULL | NULL | NULL   |  | BTREE  | |
 | article |  1 | index3 |3 | category|
 A |NULL | NULL | NULL   |  | BTREE  | |
 | article |  1 | index3 |4 | timestamp   |
 A |NULL | NULL | NULL   |  | BTREE  | |
 | article |  1 | index3 |5 | id  |
 A |NULL | NULL | NULL   |  | BTREE  | |

 +-+++--+-+---+-+--++--++-+
 The query is like:

 mysql> explain SELECT id, title, timestamp FROM article USE INDEX(index3)
 WHERE active = 1 AND source = 5 AND (category = 411 OR category = 547 OR
 category = 559) AND timestamp > 0 ORDER BY timestamp DESC, id DESC LIMIT
 0, 20;

 ++-+-+---+---++-+--+--+-+
 | id | select_type | table   | type  | possible_keys | key| key_len
 | ref  | rows | Extra   |

 ++-+-+---+---++-+--+--+-+
 |  1 | SIMPLE  | article | range | index3| index3 | 13
 | NULL | 1483 | Using where; Using filesort |

 ++-+-+---+---++-+--+--+-+
 Question: does the query use correct index? I am asking because I noticed
 such queries were recorded in mysql long query 

How to create INDEX for this query?

2008-11-04 Thread Kelvin Wu
Hi All,

I need advice for creating proper INDEX for a query.

The table is like:

mysql> desc article;
++--+--+-+-++
| Field  | Type | Null | Key | Default | Extra  |
++--+--+-+-++
| id | int(11)  | NO   | PRI | NULL| auto_increment |
| title  | varchar(200) | NO   | | ||
| timestamp  | int(11)  | NO   | | 0   ||
| active | tinyint(4)   | NO   | MUL | 1   ||
| body   | mediumtext   | NO   | | ||
| source | int(11)  | NO   | | 1   ||
| category   | int(11)  | NO   | | 0   ||
++--+--+-+-++
mysql> select count(*) from article;
+--+
| count(*) |
+--+
|   536023 |
+--+

I have created few indexes for this table:

mysql> show index from article;
+-+++--+-+---+-+--++--++-+
| Table   | Non_unique | Key_name   | Seq_in_index | Column_name | Collation
| Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+-+++--+-+---+-+--++--++-+
| article |  0 | PRIMARY|1 | id  | A
|NULL | NULL | NULL   |  | BTREE  | |
| article |  1 | index1 |1 | active  | A
|NULL | NULL | NULL   |  | BTREE  | |
| article |  1 | index1 |2 | timestamp   | A
|NULL | NULL | NULL   |  | BTREE  | |
| article |  1 | index1 |3 | id  | A
|NULL | NULL | NULL   |  | BTREE  | |
| article |  1 | index2 |1 | active  | A
|NULL | NULL | NULL   |  | BTREE  | |
| article |  1 | index2 |2 | source  | A
|NULL | NULL | NULL   |  | BTREE  | |
| article |  1 | index2 |3 | timestamp   | A
|NULL | NULL | NULL   |  | BTREE  | |
| article |  1 | index2 |4 | id  | A
|NULL | NULL | NULL   |  | BTREE  | |
| article |  1 | index3 |1 | active  | A
|NULL | NULL | NULL   |  | BTREE  | |
| article |  1 | index3 |2 | source  | A
|NULL | NULL | NULL   |  | BTREE  | |
| article |  1 | index3 |3 | category| A
|NULL | NULL | NULL   |  | BTREE  | |
| article |  1 | index3 |4 | timestamp   | A
|NULL | NULL | NULL   |  | BTREE  | |
| article |  1 | index3 |5 | id  | A
|NULL | NULL | NULL   |  | BTREE  | |
+-+++--+-+---+-+--++--++-+
The query is like:

mysql> explain SELECT id, title, timestamp FROM article USE INDEX(index3)
WHERE active = 1 AND source = 5 AND (category = 411 OR category = 547 OR
category = 559) AND timestamp > 0 ORDER BY timestamp DESC, id DESC LIMIT 0,
20;
++-+-+---+---++-+--+--+-+
| id | select_type | table   | type  | possible_keys | key| key_len
| ref  | rows | Extra   |
++-+-+---+---++-+--+--+-+
|  1 | SIMPLE  | article | range | index3| index3 | 13
| NULL | 1483 | Using where; Using filesort |
++-+-+---+---++-+--+--+-+
Question: does the query use the correct index? I am asking because I noticed
such queries were recorded in the MySQL slow query log; one somehow took about
14 seconds (# Query_time: 14  Lock_time: 0  Rows_sent: 20  Rows_examined: 1483)
while I expect no query to take more than 2 seconds.
I also noticed that when the LIMIT offset is a big value, the query is recorded
too even though the index is correct, for example.

SELECT id, source, title, timestamp FROM article USE INDEX (index2) WHERE
active = 1 AND source = 7 ORDER BY timestamp DESC, id DESC LIMIT 0, 20;

usually only takes 1 second, but

SELECT id, source, title, timestamp FROM article USE INDEX (index2) WHERE
active = 1 AND source = 7 ORDER BY timestamp DESC, id DESC 

re[2]: (Windows) drop / create index and lock tables

2006-10-04 Thread Rob Desbois
Dan,
I do need to prevent writes from occurring between the DROP and CREATE.
The primary purpose of this is to prevent the CREATE from failing due to 
duplicate entries being found during key creation.

As CREATE INDEX has no IGNORE option, I had thought I would have to do this a 
nasty way, but it turns out that even though CREATE INDEX is mapped to the 
equivalent ALTER TABLE statement, not all of the options available in ALTER 
TABLE are mapped. So, if I change my queries to the following:

DROP INDEX `keyX` ON `foo`;
ALTER IGNORE TABLE `foo` ADD UNIQUE KEY `keyX`(`column1`, `column2`);

Then, if any rows are duplicated between the two, they will be silently dropped 
during the key creation.
Thanks for your help!

--Rob
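The IGNORE semantics above can be illustrated outside MySQL as well; here is
a small sketch with Python's sqlite3, which has no ALTER IGNORE, so the same
effect is emulated by copying through a unique index with INSERT OR IGNORE
(column names follow the example above, the data is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE foo (column1 TEXT, column2 TEXT)")
cur.executemany("INSERT INTO foo VALUES (?, ?)",
                [("a", "x"), ("a", "x"), ("b", "y")])  # one duplicate pair

# Emulate ALTER IGNORE ... ADD UNIQUE KEY: rebuild the table through a
# unique index, silently dropping rows that would violate it.
cur.execute("CREATE TABLE foo_new (column1 TEXT, column2 TEXT)")
cur.execute("CREATE UNIQUE INDEX keyX ON foo_new (column1, column2)")
cur.execute("INSERT OR IGNORE INTO foo_new SELECT * FROM foo")
rows = cur.execute("SELECT * FROM foo_new ORDER BY column1").fetchall()
print(rows)  # [('a', 'x'), ('b', 'y')]
```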


 Rob, seems like you want to ensure that no writes occur in between the
drop index and create index statements, yes?

It's not pretty, but you could stop the mysql service and start it
back up with --skip-networking, then access it from localhost to
perform your changes.  If you have processes updating from localhost
this won't be effective, though.

Another thought might be to create your new index first, then drop the
old, as in:

   CREATE UNIQUE INDEX `keyXX` ON `foo` (`column1`,`column2`);
   DROP INDEX `keyX` ON `foo`;

The index having a different name should only be a problem if you've
used optimizer hints in your SQL - some people do, some don't.

Someone else may have a better thought.

Dan

On 10/3/06, Rob Desbois [EMAIL PROTECTED] wrote:
 Hi all,

 Say I have the following MyISAM table (example-ified) in a Windows-hosted DB:
 CREATE TABLE foo ( column1 CHAR(1), column2 CHAR(1), UNIQUE KEY 
 `keyX`(`column1`));

 I have to perform an update of the key to extend it to both columns (it's an 
 example, ignore the content of the key), and want to ensure data integrity 
 while I recreate it.

 The following is what I thought I had to do:
 LOCK TABLES foo WRITE;
DROP INDEX `keyX` ON `foo`;
CREATE UNIQUE INDEX `keyX` ON `foo` (`column1`,`column2`);
 UNLOCK TABLES;

 After much head-scratching due to Error Code : 1100  Table 'foo' was not 
 locked with LOCK TABLES, I discovered that CREATE / DROP INDEX statements 
 are mapped to equivalent ALTER TABLE statements. This, due to the way ALTER 
 TABLE statements 'work' on windows, renders this code unusable as the DROP 
 INDEX statement unlocks the table. Before the CREATE INDEX statement will 
 work I then have to run UNLOCK TABLES, which also makes sense (I thought it 
 was unlocked??).

 So - I can't lock the table while I drop then recreate the index, so what's
 the best way to do this?
 TIA,
 --Rob


 __
 This email has been scanned by the MessageLabs Email Security System.
 For more information please visit http://www.messagelabs.com/email
 __








-- Original Message --

FROM:  Dan Buettner [EMAIL PROTECTED]
TO:Rob Desbois [EMAIL PROTECTED]
DATE:  Tue, 3 Oct 2006 11:03:58 -0500

SUBJECT:   Re: (Windows) drop / create index and lock tables

Rob, seems like you want to ensure that no writes occur in between the
drop index and create index statements, yes?

It's not pretty, but you could stop the mysql service and start it
back up with --skip-networking, then access it from localhost to
perform your changes.  If you have processes updating from localhost
this won't be effective, though.

Another thought might be to create your new index first, then drop the
old, as in:

   CREATE UNIQUE INDEX `keyXX` ON `foo` (`column1`,`column2`);
   DROP INDEX `keyX` ON `foo`;

The index having a different name should only be a problem if you've
used optimizer hints in your SQL - some people do, some don't.

Someone else may have a better thought.

Dan

On 10/3/06, Rob Desbois [EMAIL PROTECTED] wrote:
 Hi all,

 Say I have the following MyISAM table (example-ified) in a Windows-hosted DB:
 CREATE TABLE foo ( column1 CHAR(1), column2 CHAR(1), UNIQUE KEY 
 `keyX`(`column1`));

 I have to perform an update of the key to extend it to both columns (it's an 
 example, ignore the content of the key), and want to ensure data integrity 
 while I recreate it.

 The following is what I thought I had to do:
 LOCK TABLES foo WRITE;
DROP INDEX `keyX` ON `foo`;
CREATE UNIQUE INDEX `keyX` ON `foo` (`column1`,`column2`);
 UNLOCK TABLES;

 After much head-scratching due to Error Code : 1100  Table 'foo' was not 
 locked with LOCK TABLES, I discovered that CREATE

(Windows) drop / create index and lock tables

2006-10-03 Thread Rob Desbois
Hi all,

Say I have the following MyISAM table (example-ified) in a Windows-hosted DB:
CREATE TABLE foo ( column1 CHAR(1), column2 CHAR(1), UNIQUE KEY 
`keyX`(`column1`));

I have to perform an update of the key to extend it to both columns (it's an 
example, ignore the content of the key), and want to ensure data integrity 
while I recreate it.

The following is what I thought I had to do:
LOCK TABLES foo WRITE;
   DROP INDEX `keyX` ON `foo`;
   CREATE UNIQUE INDEX `keyX` ON `foo` (`column1`,`column2`);
UNLOCK TABLES;

After much head-scratching due to Error Code : 1100  Table 'foo' was not 
locked with LOCK TABLES, I discovered that CREATE / DROP INDEX statements are 
mapped to equivalent ALTER TABLE statements. This, due to the way ALTER TABLE 
statements 'work' on windows, renders this code unusable as the DROP INDEX 
statement unlocks the table. Before the CREATE INDEX statement will work I then 
have to run UNLOCK TABLES, which also makes sense (I thought it was unlocked??).

So - I can't lock the table while I drop then recreate the index, so what's the 
best way to do this?
TIA,
--Rob






Re: (Windows) drop / create index and lock tables

2006-10-03 Thread Dan Buettner

Rob, seems like you want to ensure that no writes occur in between the
drop index and create index statements, yes?

It's not pretty, but you could stop the mysql service and start it
back up with --skip-networking, then access it from localhost to
perform your changes.  If you have processes updating from localhost
this won't be effective, though.

Another thought might be to create your new index first, then drop the
old, as in:

   CREATE UNIQUE INDEX `keyXX` ON `foo` (`column1`,`column2`);
   DROP INDEX `keyX` ON `foo`;

The index having a different name should only be a problem if you've
used optimizer hints in your SQL - some people do, some don't.

Someone else may have a better thought.

Dan

On 10/3/06, Rob Desbois [EMAIL PROTECTED] wrote:

Hi all,

Say I have the following MyISAM table (example-ified) in a Windows-hosted DB:
CREATE TABLE foo ( column1 CHAR(1), column2 CHAR(1), UNIQUE KEY 
`keyX`(`column1`));

I have to perform an update of the key to extend it to both columns (it's an 
example, ignore the content of the key), and want to ensure data integrity 
while I recreate it.

The following is what I thought I had to do:
LOCK TABLES foo WRITE;
   DROP INDEX `keyX` ON `foo`;
   CREATE UNIQUE INDEX `keyX` ON `foo` (`column1`,`column2`);
UNLOCK TABLES;

After much head-scratching due to Error Code : 1100  Table 'foo' was not locked 
with LOCK TABLES, I discovered that CREATE / DROP INDEX statements are mapped to 
equivalent ALTER TABLE statements. This, due to the way ALTER TABLE statements 'work' on 
windows, renders this code unusable as the DROP INDEX statement unlocks the table. Before 
the CREATE INDEX statement will work I then have to run UNLOCK TABLES, which also makes 
sense (I thought it was unlocked??).

So - I can't lock the table while I drop then recreate the index, so what's the 
best way to do this?
TIA,
--Rob










Re: Create Index with DESC not working on v5

2005-08-10 Thread Michael Stassen

asdas wrote:

 It could be a known issue, but I thought it's a well-known and well-used
 feature, so I'm asking you all.

 I am trying to create an index with the following command -

 create index an_idx on atable (acol desc, bcol, ccol)

 Then when I do a -

 show index from atable

 It shows for all indexed columns the 'Collation' is 'A'. And also when we
 do a query without any ORDER BY, the result is not sorted in descending
 order on acol. For example a query like

 select pk, acol, bcol, ccol from atable

 doesn't show the result in descending order of acol, instead it's in
 ascending order of acol.

 Is there a simple trick I am missing ?

 I am using MySQL version 5.0.3.

 Thanks in advance.


Hassan Schroeder wrote:


Note the last line of this excerpt from The Fine Manual :-)
  http://dev.mysql.com/doc/mysql/en/create-index.html

An index_col_name specification can end with ASC or DESC. These keywords 
are allowed for future extensions for specifying ascending or descending 
index value storage. Currently they are parsed but ignored; index values 
are always stored in ascending order.


HTH!


Also, you cannot expect your results to be ordered without an explicit ORDER BY 
clause.  Without ORDER BY, mysql is free to return rows in any order it 
chooses (generally in the order they are found).  For example:


  CREATE TABLE atable (pk INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
   acol CHAR(3), bcol INT, ccol CHAR(3),
   INDEX an_idx (acol, bcol, ccol)
  );

  INSERT INTO atable (acol, bcol, ccol)
  VALUES ('aaa',1,'aaa'), ('bbb',1,'aaa'), ('aaa',3,'aaa'),
 ('aaa',2,'abc'), ('aaa',1,'bbb'), ('bbb',3,'abc'),
 ('aaa',2,'aaa'), ('aaa',1,'abc'), ('abc',3,'def'),
 ('bbb',2,'aaa');

  DELETE FROM atable WHERE bcol = 3;

  INSERT INTO atable (acol, bcol, ccol)
  VALUES ('ccc',1,'aaa'), ('ccc',2,'aaa'), ('ccc',1,'bbb');

  SELECT pk, acol, bcol, ccol FROM atable;
  ++--+--+--+
  | pk | acol | bcol | ccol |
  ++--+--+--+
  |  1 | aaa  |1 | aaa  |
  |  2 | bbb  |1 | aaa  |
  | 13 | ccc  |1 | bbb  |
  |  4 | aaa  |2 | abc  |
  |  5 | aaa  |1 | bbb  |
  | 12 | ccc  |2 | aaa  |
  |  7 | aaa  |2 | aaa  |
  |  8 | aaa  |1 | abc  |
  | 11 | ccc  |1 | aaa  |
  | 10 | bbb  |2 | aaa  |
  ++--+--+--+
  10 rows in set (0.00 sec)

You see?  Without ORDER BY, the rows are returned as they are found.  Note 
that the rows from the second INSERT landed where the deleted rows were.


An exception may occur if you only ask for columns which are covered by a 
single index.  In that case, mysql may choose to read the data from the 
covering index, rather than from the table.  Then you'd get results in index 
order:


  mysql> SELECT pk FROM atable;
  ++
  | pk |
  ++
  |  1 |
  |  2 |
  |  4 |
  |  5 |
  |  7 |
  |  8 |
  | 10 |
  | 11 |
  | 12 |
  | 13 |
  ++
  10 rows in set (0.00 sec)

  mysql> SELECT acol, bcol, ccol FROM atable;
  +--+--+--+
  | acol | bcol | ccol |
  +--+--+--+
  | aaa  |1 | aaa  |
  | aaa  |1 | abc  |
  | aaa  |1 | bbb  |
  | aaa  |2 | aaa  |
  | aaa  |2 | abc  |
  | bbb  |1 | aaa  |
  | bbb  |2 | aaa  |
  | ccc  |1 | aaa  |
  | ccc  |1 | bbb  |
  | ccc  |2 | aaa  |
  +--+--+--+
  10 rows in set (0.00 sec)

I wouldn't count on this, though.  If you need ordered results, you need
ORDER BY.

Michael





Create Index with DESC not working on v5

2005-08-10 Thread asdas
Thanks a lot Michael, it sure helps.









Create Index with DESC not working on v5

2005-08-09 Thread asdas
It could be a known issue, but I thought it's a well-known and well-used
feature, so I'm asking you all.

I am trying to create an index with the following command -

create index an_idx on atable
(acol desc, bcol, ccol)
/

Then when I do a -

show index from atable

It shows for all indexed columns the 'Collation' is 'A'. And also when we do a 
query without any ORDER BY, the result is not sorted in descending order on 
acol. For example a query like

select pk, acol, bcol, ccol from atable

doesn't show the result in descending order of acol, instead it's in ascending 
order of acol.

Is there a simple trick I am missing ?

I am using MySQL version 5.0.3.

Thanks in advance.










Re: Create Index with DESC not working on v5

2005-08-09 Thread Hassan Schroeder

asdas wrote:


I am trying to create an index with the following command -

create index an_idx on atable
(acol desc, bcol, ccol)

/

select pk, acol, bcol, ccol from atable

doesn't show the result in descending order of acol, instead it's in ascending 
order of acol.

Is there a simple trick I am missing ?


Note the last line of this excerpt from The Fine Manual :-)
  http://dev.mysql.com/doc/mysql/en/create-index.html

An index_col_name specification can end with ASC or DESC. These keywords 
are allowed for future extensions for specifying ascending or descending 
index value storage. Currently they are parsed but ignored; index values 
are always stored in ascending order.


HTH!
--
Hassan Schroeder - [EMAIL PROTECTED]
Webtuitive Design ===  (+1) 408-938-0567   === http://webtuitive.com

  dream.  code.






RE: help needed to create index

2005-05-25 Thread Peter Normann
Asha  wrote:

 Is there a physical
 limitation in the InnoDb table structure as to why it can't
 support  FullText indexes?

http://dev.mysql.com/doc/mysql/en/fulltext-restrictions.html

http://dev.mysql.com/doc/mysql/en/innodb-restrictions.html


Peter Normann





help needed to create index

2005-05-24 Thread Asha

Creating an index is necessary for fulltext search,

and when I say

create fulltext index search_index on sometable(column1, column2)

I get an error message like

Error: The used table type doesn't support FULLTEXT indexes



 Why doesn't InnoDB support FULLTEXT indexes?

Is there a physical
limitation in the InnoDB table structure as to why it can't
support FULLTEXT indexes?

Can anyone help me find a solution?

Tia,





Create index if not exists????

2005-02-01 Thread Michael J. Pawlowsky
I'm writing an install script and need to create an index if it does not 
exist.
I've tried SQL statements but none of them work.

Basically I'm looking to do something like
$sql[] = ALTER TABLE `exp_members` ADD INDEX IF NOT EXIST 
`idx_members_username` ( `username` );

Is this possible?
Thanks,
Mike
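For reference, the check-then-create pattern the install script needs can be
sketched like this; a hypothetical illustration with Python's sqlite3, whose
CREATE INDEX does accept IF NOT EXISTS, together with the kind of catalogue
lookup you would run against MySQL's information_schema instead:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE exp_members (username TEXT)")

# SQLite supports IF NOT EXISTS directly; running it twice is harmless.
cur.execute("CREATE INDEX IF NOT EXISTS idx_members_username "
            "ON exp_members (username)")
cur.execute("CREATE INDEX IF NOT EXISTS idx_members_username "
            "ON exp_members (username)")

# The manual existence check you would emulate against information_schema:
exists = cur.execute(
    "SELECT COUNT(*) FROM sqlite_master "
    "WHERE type = 'index' AND name = 'idx_members_username'"
).fetchone()[0]
print(exists)  # 1
```

In MySQL of that era, the install script would run the equivalent SELECT
against information_schema and issue the ALTER TABLE only when the count is
zero.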



Re: Mysql growing pains, 4 days to create index on one table!

2004-07-22 Thread matt ryan

Split the myisam table into seperate tables.  We will
call each table a bucket.
Create a MERGE table of all of them. For selecting the
data.
When inserting, use a hash function on your primary
key values to determine which bucket to insert into. 
If you almost always select by primary key, then you
can optimize your select queries to only look at the
correct bucket as well, and to only use the merge
table when you aren't selecting by primary key.

This will speed up your inserts because instead of 258M
rows to search through for each insert there are only
8M if you use 32 buckets.  The biggest benefit is that
you could also insert in parallel using multiple mysql
connections, because you could calculate the bucket as
an additional column in your temporary table, then do
the inserts for all the buckets at the same time.
 

I hit duplicate hashes for unique records, not sure why; I think I used 
aes_encrypt. How do you recommend creating a hash column via SQL?

I already split the data into separate tables and use a merge; it 
really didn't speed things up that much
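The bucket-routing step quoted above can be sketched as a small helper; a
hypothetical Python illustration (bucket count, key values, and table-name
scheme are all invented) using CRC32 so the hash is cheap and deterministic:

```python
import zlib

N_BUCKETS = 32

def bucket_for(pk: str) -> int:
    # Stable, cheap hash: the same key always maps to the same bucket.
    # Collisions between different keys are fine here; a bucket only
    # narrows where to look, it does not have to be unique per key.
    return zlib.crc32(pk.encode()) % N_BUCKETS

def table_for(pk: str) -> str:
    # Hypothetical naming scheme for the 32 bucket tables.
    return f"big_table_{bucket_for(pk):02d}"

# Routing is deterministic, so inserts and point lookups agree on the bucket:
assert table_for("ABC-0001") == table_for("ABC-0001")
print(table_for("ABC-0001"))
```

Inserts would then go straight to table_for(pk), while queries not keyed on
the primary key fall back to the MERGE table over all buckets.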



Re: Mysql growing pains, 4 days to create index on one table!

2004-07-21 Thread matt ryan
I load all the data into a table with no keys
then I insert this data into a table with 225 million records, this 
large table has the primary key, this is what takes a LONG time

Matt


Re: Mysql growing pains, 4 days to create index on one table!

2004-07-21 Thread matt ryan
Lachlan Mulcahy wrote:
MySQL Version: 4.0.18
Server OS: windows 2000, or 2003
Memory 2 gig
CPU(s) dual 2.6-3ghz  xeon 500-2mb cache (cpu load is low)
Disks (RAIDs and Independent disk speed/types) 8x72 gig 15,000 rpm scsi 
II u320 raid 5 dell perc setup




Re: Mysql growing pains, 4 days to create index on one table!

2004-07-21 Thread matt ryan
Lopez David E-r9374c wrote:
Since you have a temp table created (no keys I assume), use the command
 mysqldump -v -e -n -t dbname tablename > filename.sql
 

This creates a file that inserts the records back into the same table
it also does not do an insert ignore
I need the records to go into the historical table, with an insert ignore


Re: Mysql growing pains, 4 days to create index on one table!

2004-07-21 Thread gerald_clark
mysql -i < filename.sql
matt ryan wrote:
Lopez David E-r9374c wrote:
Since you have a temp table created (no keys I assume), use the command
 mysqldump -v -e -n -t dbname tablename > filename.sql
 

This creates a file that inserts the records back into the same table
it also does not do an insert ignore
I need the records to go into the historical table, with an insert ignore



Re: Mysql growing pains, 4 days to create index on one table!

2004-07-21 Thread Justin Swanhart
I don't see how using a multi value insert would be
any faster than the insert between the tables.  It
would certainly be faster than one insert statement
per row, but I don't think it would be faster than
insert ... select ...

The only reason I suggested an extended syntax insert
earlier was because I wasn't aware that a temporary
table was being loaded first.

Do you ever delete from this table?

Can you post the results from
show variables for us?

Have you removed the unnecessary duplicate key on the
first column of your primary key?

Thanks,

Justin



--- gerald_clark [EMAIL PROTECTED]
wrote:
 mysql -i < filename.sql
 
 matt ryan wrote:
 
  Lopez David E-r9374c wrote:
 
  Since you have a temp table created (no keys I
 assume), use the command
 
   mysqldump -v -e -n -t dbname tablename > filename.sql
 
 
   
 
  This creates a file that inserts the records back
 into the same table
 
  it also does not do an insert ignore
 
  I need the records to go into the historical
 table, with an insert ignore
 
 
 
 



Re: Mysql growing pains, 4 days to create index on one table!

2004-07-21 Thread matt ryan
Do you ever delete from this table?
The temp table is truncated before the EBCDIC file is loaded
Have you removed the unnecessary duplicate key on the first column of your primary key?
Have not touched the DIC index yet. I need a backup server to change
indexes; it would take the main server down for too long, and it won't be
that big an increase since it's only a 3-character index. I also do joins on
that field to other tables, so I was hesitant to remove that index.
Can you post the results from show variables for us?
Nope, the list won't let me send an email that big.
I did put it on the web though, here's the results from show variables
http://www.geekopolis.com/Query_Result.txt


Re: Mysql growing pains, 4 days to create index on one table!

2004-07-21 Thread Justin Swanhart
--- matt ryan [EMAIL PROTECTED] wrote:
 Do you ever delete from this table?
 
 Temp table is truncated before the EBCDIC file is
 loaded
I meant the history table :)

 Have you removed the unnecessary duplicate key on
 the first column of your primary key?
 
 Have not touched the DIC index yet, I need a backup
 server to change
 indexes, it would take the main server down for too
 long, and it won't be
 that big an increase, it's only a 3 character index,
 I also do joins on
 that field to other tables, so I was hesitant on
 removing that index.

Even if it is only a 3 character index, you still need
to do disk reads/writes to update the values in the
index.  With a huge table like yours one index can
make a big difference.

It will still work fine with joins.  Since it is the
leading column of another index, it will function just
like a normal index.  Having indexes on a and (a,b) is
redundant for searching/joining only a.  If you need
to join/search on b, then a separate index is required
for b if you have indexed (a,b).
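Justin's point about the redundant leading-column index can be checked empirically. A minimal sketch, using Python's bundled SQLite as a stand-in for MySQL's planner (table and index names are made up): with only a composite index on (a, b), the planner still uses it for a query filtering on a alone.

```python
import sqlite3

# Build a small table with ONLY a composite index on (a, b) -- no
# separate index on a -- and check which index a filter on a uses.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER, c TEXT)")
conn.execute("CREATE INDEX idx_ab ON t (a, b)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [(i % 10, i, "x") for i in range(1000)])

# Query filtering only on the leading column a.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE a = 3").fetchall()
# The plan's detail text references idx_ab even though the filter
# touches only a, so a separate single-column index on a is redundant.
```

SQLite's EXPLAIN QUERY PLAN is only an analogue of MySQL's EXPLAIN, but the leftmost-prefix rule being demonstrated is the same in both engines.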


The last option I can think of requires a lot more
work on your part as far as inserting data, but it may
be your only option at this point.  This is why I
asked if you delete data from the history table, as it
makes deletes/updates more problematic as well.

Split the myisam table into separate tables.  We will
call each table a bucket.

Create a MERGE table over all of them, for selecting the
data.

When inserting, use a hash function on your primary
key values to determine which bucket to insert into. 
If you almost always select by primary key, then you
can optimize your select queries to only look at the
correct bucket as well, and to only use the merge
table when you aren't selecting by primary key.

This will speed your inserts because instead of 258M
rows to search through for each insert there are only
8M if you use 32 buckets.  The biggest benefit is that
you could also insert in parallel using multiple mysql
connections, because you could calculate the bucket as
an additional column in your temporary table, then do
the inserts for all the buckets at the same time.
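A minimal sketch of the bucket-routing step described above, assuming 32 buckets as in the 258M/8M example (the key format and table-naming scheme here are illustrative, not from the original post). The hash must be stable across processes so the loader and readers always agree:

```python
import hashlib

NUM_BUCKETS = 32  # assumption: matches the 32-bucket example above

def bucket_for(primary_key: str) -> int:
    """Map a primary-key string to a bucket number.

    md5 rather than Python's built-in hash(), because hash() is salted
    per-process and loader and readers must compute the same bucket.
    """
    digest = hashlib.md5(primary_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_BUCKETS

# The loader would INSERT into e.g. history_07 for bucket 7; a query
# filtering on the primary key can be routed to that one small table,
# falling back to the MERGE table for everything else.
b = bucket_for("A12|3456789|XYZ")  # hypothetical composite-key value
table = f"history_{b:02d}"
```

The bucket number could also be precomputed as an extra column in the temp table, which is what makes the parallel-insert idea above possible.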







Re: Mysql growing pains, 4 days to create index on one table!

2004-07-16 Thread matt ryan
Donny Simonton wrote:
Matt,
I've been reading this thread for a while and at this point, I would say
that you would need to provide the table structures and queries that you are
running.
For example, we have one table that has 8 billion rows in it and it's close to
100 gigs and we can hammer it all day long without any problems.  It really
depends on how you are doing things.
But as far as you mentioning about mysql not using multiple indexes, it
does.  You just have to create an index on multiple fields at one time.
I've got tables with 10 fields in one index, now the trick with mysql is
that you must use all top 10 fields in your where clause for mysql to really
take advantage of the index.
But I would definitely send the list your table structure with your indexes
and some of your selects and inserts.  You can always change the names of
things if you don't want people to know the names of everything.
Just my 2 cents.
 

Original email has the table structure, query speed is not an issue (it 
is, but I will deal with that later)

the issue is insert speed, I get 150k-1M records daily, of these, only 
5-10 % are new records, I load the EBCDIC file into a temp table, and 
then do insert ignore into historytable select * from temp table

and I can't create multiple-field indexes; I would need 100 indexes on a 
table. The index size already exceeds the table size. I tried to add an 
index on date, but gave up because it ran for 2 days and was not done yet.




Re: Mysql growing pains, 4 days to create index on one table!

2004-07-16 Thread Udi . S . Karni
Here's another option to load without requiring a primary key (requires a 
LOT of extra disk space and fast CPU, and a batch window to run).

Load the new daily data into the table without checking for dupes.

Then create a new version of the table with distinct values.

Something like this (assuming your table has 2 columns which are char (3) 
and char (5) for simplicity's sake) in pseudocode:
(remember to pad all columns to the maximum width to make them uniform, 
right spaces on char, left zeroes on numerics)


CREATE TABLE new_table AS
SELECT SUBSTR(combined, 1, 3) AS col1,
       SUBSTR(combined, 4, 5) AS col2
  FROM (SELECT DISTINCT RPAD(col1, 3) || RPAD(col2, 5) AS combined
          FROM old_table) t
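The pad/concatenate/deduplicate/split idea in the pseudocode above can be sketched in plain Python (column widths 3 and 5 follow the example; the sample rows are invented):

```python
# Raw rows with duplicates, columns char(3) and char(5) as in the example.
rows = [("AB", "123"), ("AB", "123"), ("C", "99")]

# Pad every column to its fixed width so concatenation is reversible
# (right-pad char columns with spaces, as the post suggests).
combined = {f"{c1:<3}{c2:<5}" for c1, c2 in rows}  # a set acts as DISTINCT

# Split each fixed-width string back into its columns.
deduped = sorted((s[0:3].rstrip(), s[3:8].rstrip()) for s in combined)
```

The fixed widths are what make the round trip lossless; numeric columns would need left zero-padding instead, per the note above.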









matt ryan [EMAIL PROTECTED]
07/16/2004 07:43 AM

 
To: 
cc: [EMAIL PROTECTED]
Subject:Re: Mysql growing pains, 4 days to create index on one table!


Donny Simonton wrote:

Matt,
I've been reading this thread for a while and at this point, I would say
that you would need to provide the table structures and queries that you 
are
running.

For example, we have one table that has 8 billion rows in it and it's close to
100 gigs and we can hammer it all day long without any problems.  It
really
depends on how you are doing things.

But as far as you mentioning about mysql not using multiple indexes, it
does.  You just have to create an index on multiple fields at one time.
I've got tables with 10 fields in one index, now the trick with mysql is
that you must use all top 10 fields in your where clause for mysql to 
really
take advantage of the index.

But I would definitely send the list your table structure with your 
indexes
and some of your selects and inserts.  You can always change the names of
things if you don't want people to know the names of everything.

Just my 2 cents.
 


Original email has the table structure, query speed is not an issue (it 
is, but I will deal with that later)

the issue is insert speed, I get 150k-1M records daily, of these, only 
5-10 % are new records, I load the EBCDIC file into a temp table, and 
then do insert ignore into historytable select * from temp table

and I can't create multiple-field indexes; I would need 100 indexes on a 
table. The index size already exceeds the table size. I tried to add an 
index on date, but gave up because it ran for 2 days and was not done yet.








Re: Mysql growing pains, 4 days to create index on one table!

2004-07-16 Thread matt ryan
Since you have a temp table created (no keys I assume), use the command
 mysqldump -v -e -n -t dbname tablename > filename.sql
This should create insert statements with many values in a single
insert. Then use the client program to insert them to your db.
 mysql -u matt -p dbname < filename.sql
This is a very fast way to insert rows. Speeds up insertion by at
least 10x on my large tables (11 million rows). I noticed someone
with 1000x more rows w/o problems.
BTW, by forcing the table to have packed keys, the docs say it
will slow your insertion. Maybe not that much, i don't know.
David
 

Isn't the text file it creates going to insert the records back into the 
temp table when I load it back in?

Does this do insert ignore or insert replace?  I need to control that, 
on some tables I do insert ignore, on others i do insert replace.

Almost all of the speed issue is read related, the disk writes are 
nearly 0, the reads are as fast as the drive can run, reading to see if 
the record violates the primary key I assume

about 3 gig seems to be the magic number, less than that is lightning 
fast, more than that is extremely slow






RE: Mysql growing pains, 4 days to create index on one table!

2004-07-16 Thread Lopez David E-r9374c
matt

 
 the issue is insert speed, I get 150k-1M records daily, of 
 these, only 
 5-10 % are new records, I load the EBCDIC file into a temp table, and 
 then do insert ignore into historytable select * from temp table
 
Since you have a temp table created (no keys I assume), use the command

  mysqldump -v -e -n -t dbname tablename > filename.sql

This should create insert statements with many values in a single
insert. Then use the client program to insert them to your db.

  mysql -u matt -p dbname < filename.sql

This is a very fast way to insert rows. Speeds up insertion by at
least 10x on my large tables (11 million rows). I noticed someone
with 1000x more rows w/o problems.

BTW, by forcing the table to have packed keys, the docs say it
will slow your insertion. Maybe not that much, i don't know.

David
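The speedup David describes comes from batching many rows per statement instead of issuing one INSERT per row. A rough illustration using Python's bundled SQLite as a stand-in for the MySQL client (schema and data are invented):

```python
import sqlite3

# Batching many rows into one executemany() call plays the role of
# mysqldump -e's extended INSERT syntax: it avoids per-statement parse
# and round-trip overhead, which dominates row-at-a-time loading.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE temp_load (id INTEGER PRIMARY KEY, val TEXT)")

rows = [(i, f"rec{i}") for i in range(10_000)]

# One batched call inside one transaction, instead of 10,000 separate
# INSERT statements each paying their own overhead.
with conn:
    conn.executemany("INSERT INTO temp_load VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM temp_load").fetchone()[0]
```

The exact multiplier depends on the server and table, so treat the "at least 10x" above as David's measurement, not a guarantee.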




Re: Mysql growing pains, 4 days to create index on one table!

2004-07-15 Thread matt ryan
Justin Swanhart wrote:
Indexes can generate vast amounts of random i/o. 
Because of rotational latency, random i/o can really
slow you down, especially if you are using IDE or SATA
disks because they can't do tagged queueing like SCSI
disks can.

If you have the budget for it, I would consider
getting some solid state disks.  Because they have
extremely low latency you will be able to get full i/o
bandwidth on your reads.  If you can't afford those,
consider adding more disks to your RAID array so that
you can spread the reads over more spindles, which
will help performance.
 

Using 8x72 gig 15,000 rpm U320 scsi drives in raid 5 now, that should be 
a fast read raid config

no more will fit in the server, and solid state disks are $70,000; that's out of 
our budget

I optimize the tables every weekend.
Any other suggestions?
Would it help to defrag?  The only way I can do it is to back up every 
file, wipe out the server, and then restore the files; there's not 
enough free space to do a proper defrag

Matt


Re: Mysql growing pains, 4 days to create index on one table!

2004-07-15 Thread Tim Brody
I know it's naff but I've found it quicker to use myisamchk with row-sort
than it is to get the MySQL daemon to regenerate keys (and if you know
you're not changing the data file you can tell myisamchk not to copy the
data), unless I've missed something in the MySQL config ...

The only way I know to do this for PRIMARY KEYs is a process described by
Kyle J. Munn (but watch out for free space :-):
http://lists.mysql.com/mysql/158737

This boils down to moving the MYD file, truncating the table to create an
empty MYD, modify your key(s), move the MYD back, and repair using myisamchk
to re-create the keys.

Otherwise you can use ALTER TABLE table DISABLE KEYS/ENABLE KEYS to
disable keys while you make your inserts, then enable keys to update
non-Primary keys in one go.

I've never tried PACK_KEYS so can't suggest anything on that, other than it
sounds like it adds more data to your index which is probably a bad thing
with such short values.

You may find that the 'dic' KEY isn't necessary, as it's the first part of
your PRIMARY KEY.

Depending on your data you may be better off storing integer IDs and the
char values in a look-up table.

I've found better performance for multi-column keys by putting the columns
in order of least variance first, e.g. for a list of dates:
1979-04-23
1979-07-15
1980-02-04
1980-06-04
You want a key on (YEAR-MONTH-DAY)

If you can you could put the index/data on different disks - not sure how
you would do that in Windows (INDEX DIRECTORY option for CREATE TABLE?).

You should definitely put the binary log file on another disk, but again not
something I've used.

I've found MySQL to be a royal pain working with multi-GB tables (my biggest
is 12GB+13GB index). I've learnt that MySQL is a bit like a confused puppy -
it doesn't know why it wet the floor, but it expects you to know that pained
expression means you need to move it somewhere else ...

All the best,
Tim.

- Original Message - 
From: matt ryan [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, June 30, 2004 7:50 PM
Subject: Mysql growing pains, 4 days to create index on one table!


 Rebuilding index takes 3 1/2 days!!!  Growing pains with mysql..

 I've got 2 dedicated servers, each with a slave, all run 32gig 15k rpm
 raid 5 on u320 perc raid cards, dell 2600/4600's with single channel
 backplanes (new ones will have dual channel)

 All have 2 gig of ram, but I've never seen mysql use more than 600mb
 of ram.

 The servers handle huge loads, each day there are 30 1-2 gig files
 loaded into large tables, total mysql data size is 96 gig, the large
 tables are 2-6 gig.

 Inserts are done on dupe key ignore, this takes hours on the large
 files, it barely keeps up with input files.

 At the bottom of this post I've got the mysql.ini config lines, any
 suggestions are welcome, I'm already beyond the mysql huge sample
 they used to include in the program.

 Sample table that I load is as follows.

 each day I get 40 % new records on the text file, the input file is
 normally 20mb, once a week I get one that's 1-2 gig, these take all
 day to load.

 I need more multiple column indexes, as some querys return millions of
 rows that must be scanned, but the index size already exceeds the
 table size, and the combinations I need would result in an myi that's
 5x larger than the data itself.

 Here's an example of the speed problem, the index was corrupt so I
 dropped all and recreated, rather than a myisam repair. I think 3 days
 is a little excessive for a table that's only 3.428 gig, index is
 2.729 gig.  I can't remove the primary key, as it keeps duplicates out
 of the system, the input files are from old databases, we use mysql
 to store the data for the web frontend, mostly done in ASP, most
 queries take less than a second, unfortunately we have big queries that
 take way more than the IIS timeout setting all the time, but no way
 around it, I can't add more indexes without making it even slower :(



 I cant tell if it's mysql that's the problem, or the hardware, Here's a
 screenshot of the disk IO, if I copy a file while mysql is doing the
 build index, the io shoots way up, which tells me, mysql is NOT maxing
 out the drives, and it's also not maxing out the memory.

 Unless it's doing lots and lots of seeks on the drive, which is harder
 to test using perfmon, are there any mysql test setups that would help
 identify where the bottleneck is?


 screenshot of disk io usage

 http://www.geekopolis.com/pics/diskio.jpg

 I'm all out of ideas, other than switching to another db, or splitting
 the table and indexes across drives, maybe a 2 channel setup, 4 drives per
 channel, each 4 a separate raid 5 setup, one holds data one holds
 indexes; can't do this with mysql though

 mysql> alter table hood_stat add primary key
 (dic,niin,fr_ric,don,suf,dte_txn,sta) , add index `don` (`don`), add
 index `niin` (`niin`), add index `stor` (`stor`), add index `dic`
 (`dic`), add index `ctasc` (`ctasc`);

 Query OK, 45449534 rows

Re: Mysql growing pains, 4 days to create index on one table!

2004-07-15 Thread matt ryan
Tim Brody wrote:
You may find that the 'dic' KEY isn't necessary, as it's the first part of
your PRIMARY KEY.
I've found better performance for multi-column keys by putting the columns
in order of least variance first, e.g. for a list of dates:
1979-04-23
1979-07-15
1980-02-04
1980-06-04
You want a key on (YEAR-MONTH-DAY)
If you can you could put the index/data on different disks - not sure how
you would do that in Windows (INDEX DIRECTORY option for CREATE TABLE?).
You should definitely put the binary log file on another disk, but again not
something I've used.
I've found MySQL to be a royal pain working with multi-GB tables (my biggest
is 12GB+13GB index). I've learnt that MySQL is a bit like a confused puppy -
it doesn't know why it wet the floor, but it expects you to know that pained
expression means you need to move it somewhere else ...
 

I need the DIC in the key to keep the record unique, I have thousands 
with everything identical except the DIC.

I was confused on the multi key index issue, I thought it would seek 
faster if I put the most unique field up front, which I do on most 
tables, I did not on this one though.   I have one large raid array now, 
so I can't split the data, or put the binary log on another disk.

I found mysql was great up to about 3 gig, then everything hit the 
brakes and got really really really slow

I'm scared of joins; every time I do a join in mysql on indexed fields, 
the performance is horrible, because the where clause is not on a 
field that's in the join; performance is poopy

I wish mysql could use multiple indexes like oracle, to narrow down the 
results, I've got some simple queries that take hours due to single 
index use, but every query field is indexed.




Re[2]: Mysql growing pains, 4 days to create index on one table!

2004-07-15 Thread Pete McNeil
On Thursday, July 15, 2004, 9:10:43 AM, matt wrote:

mr Justin Swanhart wrote:

Indexes can generate vast amounts of random i/o. 
Because of rotational latency, random i/o can really
slow you down, especially if you are using IDE or SATA
disks because they can't do tagged queueing like SCSI
disks can.

If you have the budget for it, I would consider
getting some solid state disks.  Because they have
extremely low latency you will be able to get full i/o
bandwidth on your reads.  If you can't afford those,
consider adding more disks to your RAID array so that
you can spread the reads over more spindles, which
will help performance.

mr Using 8x72 gig 15,000 rpm U320 scsi drives in raid 5 now, that should be
mr a fast read raid config

mr no more will fit in the server, and solid state are 70,000 $ it's out of
mr our budget

mr I optimize the tables every weekend

mr any other suggestions?

Consider moving to a raid 10 configuration + adding additional drives
externally on alternate scsi chains.

Consider additional ram.

Consider replicating to some slave servers and dividing reads among them.

Hope this helps,

_M







Re: Mysql growing pains, 4 days to create index on one table!

2004-07-15 Thread matt ryan
Consider replicating to some slave servers and dividing reads among them.
I already replicate to slaves, and sites will do read only queries off 
these slaves

99.9 % of the tables are read only anyway, the only tables we update or 
insert into, are very very small and fast.

These big tables are daily extracts from IBM DB2 sites, in ebcdic 
format, we archive the data and users then query our site which is 
faster, unless they start doing multiple query options, then things get 
slow.

If you query only one field it's FAST, but if you query two fields, it's 
slow, very slow; we need multiple-key-per-query support in mysql.



Re[2]: Mysql growing pains, 4 days to create index on one table!

2004-07-15 Thread adburne
matt on Thursday, July 15, 2004, 11:58:31 AM, wrote:

 Consider replicating to some slave servers and dividing reads among them.

mr I already replicate to slaves, and sites will do read only queries off
mr these slaves

mr 99.9 % of the tables are read only anyway, the only tables we update or
mr insert into, are very very small and fast.

mr These big tables are daily extracts from IBM DB2 sites, in ebcdic 
mr format, we archive the data and users then query our site which is
mr faster, unless they start doing multiple query options, then things get
mr slow.

mr If you query only one field it's FAST, but if you query two fields, it's
mr slow, very slow; we need multiple-key-per-query support in mysql.


One thing that can help (at least for me) is working with temporary
tables, splitting one complex query into many little ones.

A simple example:
2 tables: header and details

CREATE TEMPORARY TABLE tmp TYPE=HEAP
SELECT hdr_code FROM header WHERE ...;

ALTER TABLE tmp ADD PRIMARY KEY(hdr_code);

SELECT d.* FROM details d INNER JOIN tmp
ON d.hdr_code=tmp.hdr_code WHERE ...;

DROP TABLE tmp;
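The same pattern can be exercised end-to-end with Python's bundled SQLite (schema and data are invented; SQLite stands in for MySQL, so TYPE=HEAP has no equivalent here):

```python
import sqlite3

# Alejandro's pattern: filter the header table into a small keyed temp
# table first, then join details against that narrow set, instead of
# running one large combined query.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE header  (hdr_code INTEGER, region TEXT);
    CREATE TABLE details (hdr_code INTEGER, amount  INTEGER);
    INSERT INTO header  VALUES (1,'north'),(2,'south'),(3,'north');
    INSERT INTO details VALUES (1,10),(1,20),(2,30),(3,40);
""")

# Step 1: small temp table holding just the header keys of interest,
# with a unique index so the join can do keyed lookups.
conn.executescript("""
    CREATE TEMPORARY TABLE tmp AS
        SELECT hdr_code FROM header WHERE region = 'north';
    CREATE UNIQUE INDEX tmp_pk ON tmp (hdr_code);
""")

# Step 2: join details against the narrow temp table.
result = conn.execute("""
    SELECT d.hdr_code, d.amount
      FROM details d JOIN tmp USING (hdr_code)
     ORDER BY d.hdr_code, d.amount
""").fetchall()

conn.execute("DROP TABLE tmp")
```

The win comes when the first SELECT is very selective: the expensive join then touches only the matching keys rather than the whole header table.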

-- 
Alejandro D. Burne
Departamento de Sistemas
Asociación Médica Rosario
España 401 - S2000SBI
Rosario - Santa Fe - Argentina
+54-0341-4252313 Interno 145






Re[3]: Mysql growing pains, 4 days to create index on one table!

2004-07-15 Thread Alejandro D. Burne
adburne el Thursday, July 15, 2004, 12:45:05 PM, escribió:

aaca matt on Thursday, July 15, 2004, 11:58:31 AM, wrote:

 Consider replicating to some slave servers and dividing reads among them.

mr I already replicate to slaves, and sites will do read only queries off
mr these slaves

mr 99.9 % of the tables are read only anyway, the only tables we update or
mr insert into, are very very small and fast.

mr These big tables are daily extracts from IBM DB2 sites, in ebcdic
mr format, we archive the data and users then query our site which is
mr faster, unless they start doing multiple query options, then things get
mr slow.

mr If you query only one feild its FAST, but if you query two feilds, its
mr slow, very slow, need multiple key per query support in mysql.


aaca One thing that can help (at least for me) is working with temporary
aaca tables, spliting one complex query in many littles.

aaca A simple example:
aaca 2 tables: header and details

aaca CREATE TEMPORARY TABLE tmp TYPE=HEAP
aaca SELECT hdr_code FROM header WHERE ...;

aaca ALTER TABLE tmp ADD PRIMARY KEY(hdr_code);

aaca SELECT d.* FROM details d INNER JOIN tmp
aaca ON d.hdr_code=tmp.hdr_code WHERE ...;

aaca DROP TABLE TMP;

aaca -- 
aaca Alejandro D. Burne
aaca Departamento de Sistemas
aaca Asociación Médica Rosario
aaca España 401 - S2000SBI
aaca Rosario - Santa Fe - Argentina
aaca +54-0341-4252313 Interno 145

I'm sorry, this was an answer to Patrick Drouin.

-- 
Alejandro D. Burne
Departamento de Sistemas
Asociación Médica Rosario
España 401 - S2000SBI
Rosario - Santa Fe - Argentina
+54-0341-4252313 Interno 145






Re: Mysql growing pains, 4 days to create index on one table!

2004-07-15 Thread Tim Brody
- Original Message - 
From: matt ryan [EMAIL PROTECTED]

 I need the DIC in the key to keep the record unique, I have thousands
 with everything identical except the DIC.

In your schema you had DIC in the PRIMARY KEY and an additional
(unnecessary?) KEY on DIC.

 I was confused on the multi key index issue, I thought it would seek
 faster if I put the most unique field up front, which I do on most
 tables, I did not on this one though.   I have one large raid array now,
 so I cant split the data, or put the binary log on another disk.

Having performed a couple of tests on a multi-key index I can't find any
significant difference between different orders on the columns (logical I
guess), although there may be certain uses that might work better in either
situation ...

Perhaps a better point to make would be to say change the order of your
PRIMARY KEY to reflect the queries that you're performing. e.g. as I said
above MySQL will efficiently use the first part of any key. Certainly, if
you're performing a WHERE or ORDER BY on any of the columns in the primary
key put those at the front.

 I found mysql was great up to about 3 gig, then everything hit the
 brakes and got really really really slow

 I'm scared of joines, every time I do a join in mysql on indexed fields
 in mysql, the performance is horrible, because the where clause is not a
 field that's in the join, performance is poopy

Hmm, how about (sorry if I'm pointing out the obvious) e.g.:
CREATE TABLE _left  (col1 INT, col2 INT, col3 INT, UNIQUE (col3, col1, col2));
CREATE TABLE _right (col1 INT, col2 INT, col3 INT, UNIQUE (col1, col2, col3));

SELECT * FROM _left INNER JOIN _right USING (col1, col2)
WHERE _left.col3=42 AND _right.col3=35;

The analyzer (explain) shows a const at the end of the ref columns for
_right and const for _left.

All the best,
Tim.





Re: Mysql growing pains, 4 days to create index on one table!

2004-07-15 Thread matt ryan

You might be out of luck with MySQL ... sorry.
You may need to switch to a database that has a parallel query 
facility. Then - every query becomes a massive table scan but gets 
divided into multiple concurrent subqueries - and overall the job 
finishes in a reasonable amount of time. The epitome of brute force. 
It's hard to rationalize initially but after a while you see it's the 
only way to go. Remember -  indexes are no longer required.

We have a billion row 100GB table the users search any and every way. 
Response time is less than a minute.

We are anxiously waiting to see this technology added to MySQL. Maybe 
one day we'll have some money to contribute to the effort. Parallel 
query is not trivial. That's why these databases are expensive.

I can send you more details privately if you are interested.

I've used it, with Oracle, but Oracle's index searches are better: hit 
the best one first, then 2nd best, then 3rd. But I really don't want to 
go to Oracle; it's too complicated for my techs

vs mysql, hit the best one first, and use no other
Query time is a non issue at this point, it's load time, load daily file 
into temp table, then insert ignore into main table, on key violation 
the violating record is ignored

load time is the issue, the server loads files 15 hours a day, that big 
primary key makes loading any table over 2-3 gig VERY slow

I thought it was a bug, everything was great until you get up to about 
3 or 4 gig, then it gets SLOW



Re: Mysql growing pains, 4 days to create index on one table!

2004-07-15 Thread Udi . S . Karni
That's the whole point. Eliminate your indexes and your load problems are 
solved. Especially given the fact that you insert ignore and don't use the 
primary key to validate uniqueness.






matt ryan [EMAIL PROTECTED]
07/15/2004 11:38 AM

 
To: 
cc: [EMAIL PROTECTED]
Subject:Re: Mysql growing pains, 4 days to create index on one table!




 You might be out of luck with MySQL ... sorry.

 You may need to switch to a database that has a parallel query 
 facility. Then - every query becomes a massive table scan but gets 
 divided into multiple concurrent subqueries - and overall the job 
 finishes in a reasonable amount of time. The epitome of brute force. 
 It's hard to rationalize initially but after a while you see it's the 
 only way to go. Remember -  indexes are no longer required.

 We have a billion row 100GB table the users search any and every way. 
 Response time is less than a minute.

 We are anxiously waiting to see this technology added to MySQL. Maybe 
 one day we'll have some money to contribute to the effort. Parallel 
 query is not trivial. That's why these databases are expensive.

 I can send you more details privately if you are interested.


I've used it, with oracle, but oracles index searches are better, hit 
the best one first, then 2nd best, then 3rd, but I really dont want to 
go to oracle, it's too complicated for my tech's

vs mysql, hit the best one first, and use no other

Query time is a non issue at this point, it's load time, load daily file 
into temp table, then insert ignore into main table, on key violation 
the violating record is ignored

load time is the issue, the server loads files 15 hours a day, that big 
primary key makes loading any table over 2-3 gig VERY slow

I thought it was a bug, everything was great until you get up to about 
3 or 4 gig, then it gets SLOW






Re: Mysql growing pains, 4 days to create index on one table!

2004-07-15 Thread Udi . S . Karni
Reordering the primary key wouldn't necessarily speed up the key check. 
The reason for selecting a particular sequence within a primary key is to 
put the columns that are available the most often - upfront - so that the 
index will have at least something to bite on.

Can you partition further? You want to shoot for reading no more than a 1% 
slice for a given query. If possible.

Can you partition by ranges of a column? Values 1 to 100 in 
partition 1, 101 to 200 in partition 2, etc? Anything that's 
specified consistently in every query?
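The range-routing arithmetic behind that suggestion can be sketched as follows (the partition width and 1-based numbering are illustrative, not from the post):

```python
# Width of each range partition; the real choice depends on the data.
PARTITION_WIDTH = 100

def partition_for(value: int) -> int:
    """Return the 1-based partition number for a column value >= 1.

    Values 1..100 map to partition 1, 101..200 to partition 2, etc.
    """
    return (value - 1) // PARTITION_WIDTH + 1

# A query that always filters on this column only needs to touch one
# member table of the MERGE instead of scanning every partition.
```

The caveat in the surrounding paragraphs still applies: this only pays off if the partition column appears consistently in every query.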

Then again -
if your table is huge...
and your queries are all over the place...
and the user can specify any search criteria...
and there's no predictable pattern you can partition on...
and you frequently add and/or remove lots of data from your table...
and your queries pull lots of rows...

You might be out of luck with MySQL ... sorry.

You may need to switch to a database that has a parallel query facility. 
Then - every query becomes a massive table scan but gets divided into 
multiple concurrent subqueries - and overall the job finishes in a 
reasonable amount of time. The epitome of brute force. It's hard to 
rationalize initially but after a while you see it's the only way to go. 
Remember - indexes are no longer required.

We have a billion row 100GB table the users search any and every way. 
Response time is less than a minute.

We are anxiously waiting to see this technology added to MySQL. Maybe one 
day we'll have some money to contribute to the effort. Parallel query is 
not trivial. That's why these databases are expensive.

I can send you more details privately if you are interested.






matt ryan [EMAIL PROTECTED]
07/14/2004 12:27 PM

 
To: [EMAIL PROTECTED]
cc: 
Subject:Re: Mysql growing pains, 4 days to create index on one table!


[EMAIL PROTECTED] wrote:

You may want more indexes but you might be getting killed because you 
already have too many.

To test - try loading into a table without indexes and see if it makes a 
difference.

At the very least - check to see if the primary index which starts with 
'dic' can make your special 'dic' index superfluous.

If write speed is a bottleneck you might consider Raid-1 instead of 
Raid-5.

Reading lots of rows via index is a killer. Depending on your hardware it 
may be cheaper to table scan 50 rows than to read 1 via index. However, 
this requires partitioning of the data based on some column which appears 
in every query and acts as an initial filter. If you are lucky enough to 
be in that situation - consider a MERGE table.


 


These tables are merged. The total table size is huge: this 
particular table alone has 45,449,534 rows, and all the merge tables 
combined hold 258,840,305 records.

Perhaps I should reorder the primary key, putting the longest, most unique 
column up front and the least unique at the end - would that speed up 
the key check? I can tell that almost everything is read IO, very 
little write IO.







Re: Mysql growing pains, 4 days to create index on one table!

2004-07-15 Thread Justin Swanhart
a few suggestions...

Your slow inserts could be a concurrency issue.  If
lots of users are doing selects, they could be
interfering with your inserts, especially if they use
an ODBC/JDBC app that locks the table for the entire
read. JDBC reads do that when you tell it to stream
the contents of the query, because they wouldn't all
fit in memory.

--

Does your import script do an INSERT for each line, or
does it combine lines into multi-value inserts?

doing an insert into ... values (...),(...),(...) 
will be much faster than doing one insert for each
row.

since your max packet size is pretty big, you should
be able to consolidate a fairly large number of rows
into one insert.
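
A multi-row INSERT of the kind suggested above looks like this (column names are borrowed from the hood_stat schema posted later in the thread; the values are made up):

```sql
-- One statement, three rows: far fewer parse/network round trips and
-- index-update passes than three single-row INSERTs.
INSERT INTO hood_stat (dic, niin, qty)
VALUES ('A01', '00000000001', '5'),
       ('A02', '00000000002', '3'),
       ('A03', '00000000003', '7');
```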

--

What settings are you using on the Percs?  What stripe
size?  What write cache are you using (write-back or write-through)?
Are you using read-ahead caching (that can hurt index
performance)?

--

Defragging the filesystem probably won't have any
major impact on the speed of your application.

---

Is there another way you can approach the duplicate
problem?  For instance, if duplicate data can only be
generated in the last few days worth of imports, you
might not have to rely on your primary key on the main
table for importing.  This only works if you can drop
the primary key because it is only used for duplicate
checking and isn't used to speed queries.  You could
instead create another table that you do all your
importing to, taking care of  dupes with a primary key
on that table, then insert from that table into the
main one.  Keep a timestamp in that table and purge
the older records periodically.
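
A sketch of that staging-table approach (all names and the 7-day dedup window are hypothetical; the real key columns in this thread are dic, niin, etc.):

```sql
-- Staging table: its primary key rejects duplicates at import time,
-- so the big table needs no uniqueness check during the bulk insert.
CREATE TABLE import_stage (
  dic       CHAR(3)   NOT NULL,
  niin      CHAR(11)  NOT NULL,
  don       CHAR(14)  NOT NULL,
  loaded_at TIMESTAMP NOT NULL,
  PRIMARY KEY (dic, niin, don)
) TYPE=MyISAM;

-- Load the daily file with INSERT IGNORE (or LOAD DATA ... IGNORE):
-- rows already seen within the retained window are silently skipped.

-- Copy only today's newly accepted rows into the main table:
INSERT INTO hood_stat (dic, niin, don)
SELECT dic, niin, don FROM import_stage
 WHERE loaded_at >= CURDATE();

-- Periodically purge rows older than the window in which dupes appear.
DELETE FROM import_stage
 WHERE loaded_at < DATE_SUB(NOW(), INTERVAL 7 DAY);
```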


the last thing i could think of would be a dupe
checker table.  Create an innodb table that consists
of only the columns from your big table and make all
the columns the primary key.  Essentially you have
just created an index only table.  Insert your new
data into a temporary heap table, then delete from the
heap table where the key is in your dupe table. 
Then insert everything from the heap table into the
big table.  Once again, this only works if you don't
need the primary key on the big table.  This will use
more CPU/memory but it may get around your read
problems.
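
A rough sketch of the dupe-checker variant (hypothetical names; the multi-table DELETE syntax requires MySQL 4.0+):

```sql
-- Index-only InnoDB table holding just the key columns.
CREATE TABLE dupe_check (
  dic CHAR(3) NOT NULL, niin CHAR(11) NOT NULL, don CHAR(14) NOT NULL,
  PRIMARY KEY (dic, niin, don)
) TYPE=InnoDB;

-- In-memory staging table for each day's file.
CREATE TABLE new_rows (
  dic CHAR(3) NOT NULL, niin CHAR(11) NOT NULL, don CHAR(14) NOT NULL
) TYPE=HEAP;

-- After loading the daily file into new_rows, drop rows already seen:
DELETE new_rows FROM new_rows, dupe_check
 WHERE new_rows.dic  = dupe_check.dic
   AND new_rows.niin = dupe_check.niin
   AND new_rows.don  = dupe_check.don;

-- What's left is genuinely new; record it in both tables.
INSERT INTO hood_stat (dic, niin, don) SELECT * FROM new_rows;
INSERT INTO dupe_check SELECT * FROM new_rows;
```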




Justin

--- matt ryan [EMAIL PROTECTED] wrote:
 
 
  You might be out of luck with MySQL ... sorry.
 
  You may need to switch to a database that has a
 parallel query 
  facility. Then - every query becomes a massive
 table scan but gets 
  divided into multiple concurrent subqueries - and
 overall the job 
  finishes in a reasonable amount of time. The
 epitome of brute force. 
  It's hard to rationalize initially but after a
 while you see it's the 
  only way to go. Remember -  indexes are no longer
 required.
 
  We have a billion row 100GB table the users search
 any and every way. 
  Response time is less than a minute.
 
  We are anxiously waiting to see this technology
 added to MySQL. Maybe 
  one day we'll have some money to contribute to the
 effort. Parallel 
  query is not trivial. That's why these databases
 are expensive.
 
  I can send you more details privately if you are
 interested.
 
 
 I've used it, with oracle, but oracles index
 searches are better, hit 
 the best one first, then 2nd best, then 3rd, but I
 really dont want to 
 go to oracle, it's too complicated for my tech's
 
 vs mysql, hit the best one first, and use no other
 
 Query time is a non issue at this point, it's load
 time, load daily file 
 into temp table, then insert ignore into main table,
 on key violation 
 the violating record is ignored
 
 load time is the issue, the server loads files 15
 hours a day, that big 
 primary key makes loading any table over 2-3 gig
 VERY slow
 
 I thought it was a bug, everything was great until
 you get up to about 
 3 or 4 gig, then it gets SLOW
 
 
 





Re: Mysql growing pains, 4 days to create index on one table!

2004-07-15 Thread Justin Swanhart
INSERT IGNORE doesn't insert a record if it is a
duplicate; it simply skips the row. Without
the IGNORE clause, the query would generate an error
instead of silently ignoring the insert.

--- [EMAIL PROTECTED] wrote:
 That's the whole point. Eliminate your indexes and
 your load problems are 
 solved. Especially given the fact that you insert
 ignore and don't use the 
 primary key to validate uniqueness.
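
That behavior is easy to demonstrate (illustrative table, not from the thread):

```sql
CREATE TABLE t (id INT NOT NULL PRIMARY KEY);

INSERT INTO t VALUES (1);
INSERT INTO t VALUES (1);              -- error: duplicate key
INSERT IGNORE INTO t VALUES (1), (2);  -- (1) silently skipped, (2) inserted
```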





RE: Mysql growing pains, 4 days to create index on one table!

2004-07-15 Thread Donny Simonton
Matt,
I've been reading this thread for a while and at this point, I would say
that you would need to provide the table structures and queries that you are
running.

For example, we have one table that has 8 billion rows in it and is close to
100 gigs, and we can hammer it all day long without any problems.  It really
depends on how you are doing things.

But as far as your point about MySQL not using multiple indexes: it
does.  You just have to create an index on multiple fields at one time.
I've got tables with 10 fields in one index; now, the trick with MySQL is
that you must use all ten fields in your WHERE clause for MySQL to really
take advantage of the index.
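
As a general illustration of composite indexes (hypothetical table; note that MySQL can in fact use any leftmost prefix of a composite index, not only all of its columns):

```sql
CREATE TABLE orders (
  dic  CHAR(3)  NOT NULL,
  niin CHAR(11) NOT NULL,
  don  CHAR(14) NOT NULL,
  KEY idx_combo (dic, niin, don)
);

-- Can use idx_combo: filters on the leftmost prefix (dic, niin).
SELECT * FROM orders WHERE dic = 'A01' AND niin = '00000000001';

-- Cannot use idx_combo: skips the leading column dic.
SELECT * FROM orders WHERE niin = '00000000001';
```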

But I would definitely send the list your table structure with your indexes
and some of your selects and inserts.  You can always change the names of
things if you don't want people to know the names of everything.

Just my 2 cents.

Donny

 -Original Message-
 From: matt ryan [mailto:[EMAIL PROTECTED]
 Sent: Thursday, July 15, 2004 9:32 AM
 To: [EMAIL PROTECTED]
 Subject: Re: Mysql growing pains, 4 days to create index on one table!
 
 Tim Brody wrote:
 
 You may find that the 'dic' KEY isn't necessary, as it's the first part
 of
 your PRIMARY KEY.
 
 
 I've found better performance for multi-column keys by putting the
 columns
 in order of least variance first, e.g. for a list of dates:
 1979-04-23
 1979-07-15
 1980-02-04
 1980-06-04
 You want a key on (YEAR-MONTH-DAY)
 
 If you can you could put the index/data on different disks - not sure how
 you would do that in Windows (INDEX DIRECTORY option for CREATE TABLE?).
 
 You should definitely put the binary log file on another disk, but again
 not
 something I've used.
 
 I've found MySQL to be a royal pain working with multi-GB tables (my
 biggest
 is 12GB+13GB index). I've learnt that MySQL is a bit like a confused
 puppy -
 it doesn't know why it wet the floor, but it expects you to know that
 pained
 expression means you need to move it somewhere else ...
 
 
 
 I need the DIC in the key to keep the record unique, I have thousands
 with everything identical except the DIC.
 
 I was confused on the multi-key index issue; I thought it would seek
 faster if I put the most unique field up front, which I do on most
 tables - I did not on this one though.  I have one large raid array now,
 so I can't split the data or put the binary log on another disk.
 
 I found mysql was great up to about 3 gig, then everything hit the
 brakes and got really really really slow
 
 I'm scared of joins: every time I do a join in mysql on indexed fields,
 the performance is horrible; when the where clause is on a
 field that's not in the join, performance is poor.
 
 I wish mysql could use multiple indexes like oracle, to narrow down the
 results, I've got some simple queries that take hours due to single
 index use, but every query field is indexed.
 
 
 
 






Re: Mysql growing pains, 4 days to create index on one table!

2004-07-15 Thread Justin Swanhart
 I've used it, with oracle, but oracles index
 searches are better, hit 
 the best one first, then 2nd best, then 3rd, but I
 really dont want to 
 go to oracle, it's too complicated for my tech's
Oracle rarely performs index merges, but it does have
the ability to do so, which MySQL lacks.

 Query time is a non issue at this point, it's load
 time, load daily file 
 into temp table, then insert ignore into main table,
 on key violation 
 the violating record is ignored
I know you don't want to go with Oracle, but I will
just add that it could help you here too, because it
can do parallel DML.  This is especially useful if you
have access to the partitioning option, because you
could then partition your data by hash and get a
number of bonuses.  #1 your index updates will be much
faster because there are a lot less rows to look
through.  #2 parallel DML can insert into multiple
partitions at once.  
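
Hash partitioning of the sort described might look like this (a sketch in Oracle DDL rather than MySQL, since the paragraph is about Oracle; the names and partition count are hypothetical):

```sql
-- Rows are spread across 8 partitions by hash of the key column;
-- each partition carries its own, smaller local index.
CREATE TABLE hood_stat (
  dic  CHAR(3)  NOT NULL,
  niin CHAR(11) NOT NULL,
  don  CHAR(14) NOT NULL
)
PARTITION BY HASH (niin) PARTITIONS 8;

CREATE INDEX idx_niin ON hood_stat (niin) LOCAL;

-- Parallel DML lets inserts proceed into several partitions at once.
ALTER SESSION ENABLE PARALLEL DML;
INSERT /*+ PARALLEL(hood_stat, 4) */ INTO hood_stat
SELECT dic, niin, don FROM stage_table;
```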

I am unsure if MaxDB supports any of those features,
though it may.  You may want to look into it to see if
they are supported.

I don't want to sound like I'm pushing Oracle.  I'm
not an Oracle sales rep, or anything like that.  I am
a professional Oracle DBA that happens to also use
mySQL a lot and I like both databases.  Sometimes one
is better than the other for solving a problem.




RE: Mysql growing pains, 4 days to create index on one table!

2004-07-15 Thread Lachlan Mulcahy
Matt,

I think I might have missed the start of this thread, as I can't seem to
find it.

Could you please post the following info (I don't mind if you just mail it
directly to me to save the list):

MySQL Version:
Server OS:
Server Hardware configuration:
 - Memory
 - CPU(s)
 - Disks (RAIDs and Independent disk speed/types)

Database Structure dump and some pointers on where you are having the
issues.

Lachlan






Re: Mysql growing pains, 4 days to create index on one table!

2004-07-14 Thread matt ryan
Lopez David E-r9374c wrote:
matt
1) inserts using this format is much faster:
INSERT INTO table (col1, col2) VALUES (val1,val2), (val3,val4)
   is much faster than a single-row insert. My experience is
   2.5 hrs vs. 36 hrs.
2) The PACK_KEYS=1 may be hurting you. I've never used it.
3) There may be a cache somewhere that's too small. You'll
  have to do some digging in this area.
4) dup key ignore - what does that mean exactly?
5) what is your OS  rev, mysql rev.
Please post any suggestions that you find valuable so we 
can all learn..

david
 

If I understand it, pack_keys should help if you're disk-IO limited vs CPU 
limited; the CPU is usually near idle.

I increased the cache sizes, helped a little, but not much
delay_key_write=ALL
key_buffer_size=1000M
read_buffer_size=512M
record_buffer=512M
What would the syntax for that type of insert be?
I have a table with 30,000 records that I need to insert into the main 
table with millions of records; I thought that INSERT INTO table SELECT 
* FROM 2nd_table would be the fastest way.

INSERT IGNORE will not insert a record if it violates the primary key; 
I do this to keep duplicate records out of the system.

windows 2003 and 2000 servers, mysql-nt 4.0.16. I tried the newer 
versions but found bugs in all of them; I submitted them to the bug system.

I believe my bottleneck is reading the data to ensure the primary key is 
not violated; I see lots of read IO but little write IO.

Matt


Re: Mysql growing pains, 4 days to create index on one table!

2004-07-14 Thread matt ryan
Egor Egorov wrote:
Are you running this under Microsoft Windows? 
 

Yes, windows 2k and 2003, mysql-nt 4.0.16


Re: Mysql growing pains, 4 days to create index on one table!

2004-07-14 Thread matt ryan
[EMAIL PROTECTED] wrote:
You may want more indexes but you might be getting killed because you already have too 
many.
To test - try loading into a table without indexes and see if it makes a difference.
At the very least - check to see if the primary index which starts with 'dic' can make 
your special 'dic' index superfluous.
If write speed is a bottleneck you might consider Raid-1 instead of Raid-5.
Reading lots of rows via index is a killer. Depending on your hardware it may be 
cheaper to table scan 50 rows than to read 1 via index. However, this requires 
partitioning of the data based on some column which appears in every query and acts as 
an initial filter. If you are lucky enough to be in that situation - consider a MERGE 
table.
 

These tables are merged. The total table size is huge: this 
particular table alone has 45,449,534 rows, and all the merge tables 
combined hold 258,840,305 records.

Perhaps I should reorder the primary key, putting the longest, most unique 
column up front and the least unique at the end - would that speed up 
the key check? I can tell that almost everything is read IO, very 
little write IO.



Re: Mysql growing pains, 4 days to create index on one table!

2004-07-14 Thread Justin Swanhart
Indexes can generate vast amounts of random i/o. 
Because of rotational latency, random i/o can really
slow you down, especially if you are using IDE or SATA
disks because they can't do tagged queueing like SCSI
disks can.

If you have the budget for it, I would consider
getting some solid state disks.  Because they have
extremely low latency you will be able to get full i/o
bandwidth on your reads.  If you can't afford those,
consider adding more disks to your RAID array so that
you can spread the reads over more spindles, which
will help performance.


--- matt ryan [EMAIL PROTECTED] wrote:
 [EMAIL PROTECTED] wrote:
 
 You may want more indexes but you might be getting
 killed because you already have too many.
 
 To test - try loading into a table without indexes
 and see if it makes a difference.
 
 At the very least - check to see if the primary
 index which starts with 'dic' can make your special
 'dic' index superfluous.
 
 If write speed is a bottleneck you might consider
 Raid-1 instead of Raid-5.
 
 Reading lots of rows via index is a killer.
 Depending on your hardware it may be cheaper to
 table scan 50 rows than to read 1 via index.
 However, this requires partitioning of the data
 based on some column which appears in every query
 and acts as an initial filter. If you are lucky
 enough to be in that situation - consider a MERGE
 table.
 
 
   
 
 
 These tables are merged, the total table size is
 huge, on this 
 particular table, it's 45,449,534 rows, however,
 all the merge tables 
 combined are 258,840,305 records
 
 perhaps I should reorder the primary key, putting the
 longest most unique 
 record up front, and the least unique at the end,
 would that speed up 
 the key check?   I can tell that almost everything
 is read IO, very 
 little write IO
 
 
 





Re: Mysql growing pains, 4 days to create index on one table!

2004-07-01 Thread Egor Egorov
Are you running this under Microsoft Windows? 












Mysql growing pains, 4 days to create index on one table!

2004-06-30 Thread matt ryan
Rebuilding index takes 3 1/2 days!!!  Growing pains with mysql..
I've got 2 dedicated servers, each with a slave, all run 32gig 15k rpm
raid 5 on u320 perc raid cards, dell 2600/4600's with single channel
backplanes (new ones will have dual channel)
All have 2 gig of ram, but I've never seen mysql use more than 600mb
of ram.
The servers handle huge loads, each day there are 30 1-2 gig files
loaded into large tables, total mysql data size is 96 gig, the large
tables are 2-6 gig.
Inserts are done on dupe key ignore, this takes hours on the large
files, it barely keeps up with input files.
At the bottom of this post I've got the mysql.ini config lines, any
suggestions are welcome, I'm already beyond the mysql huge sample
they used to include in the program.
Sample table that I load is as follows.
each day I get 40 % new records on the text file, the input file is
normally 20mb, once a week I get one that's 1-2 gig, these take all
day to load.
I need more multiple-column indexes, as some queries return millions of
rows that must be scanned, but the index size already exceeds the
table size, and the combinations I need would result in an MYI file that's
5x larger than the data itself.
Here's an example of the speed problem: the index was corrupt, so I
dropped all the indexes and recreated them rather than doing a myisam
repair. I think 3 days is a little excessive for a table that's only
3.428 gig with a 2.729 gig index. I can't remove the primary key, as it
keeps duplicates out of the system; the input files are from old
databases. We use mysql to store the data for the web frontend, mostly
done in ASP. Most queries take less than a second, but unfortunately we
have big queries that take way more than the IIS timeout setting all the
time, and there's no way around it - I can't add more indexes without
making it even slower :(

I can't tell if it's mysql that's the problem, or the hardware. Here's a 
screenshot of the disk IO: if I copy a file while mysql is doing the 
index build, the IO shoots way up, which tells me mysql is NOT maxing 
out the drives, and it's also not maxing out the memory.

Unless it's doing lots and lots of seeks on the drive, which is harder 
to test using perfmon, are there any mysql test setups that would help 
identify where the bottleneck is?

screenshot of disk io usage
http://www.geekopolis.com/pics/diskio.jpg
I'm all out of ideas, other than switching to another db, or splitting 
the tables and indexes across drives: maybe a 2-channel setup, 4 drives per 
channel, each set of 4 a separate raid 5 array, one holding data and one 
holding indexes - can't do that with mysql though.

mysql alter table hood_stat add primary key
(dic,niin,fr_ric,don,suf,dte_txn,sta) , add index `don` (`don`), add
index `niin` (`niin`), add index `stor` (`stor`), add index `dic`
(`dic`), add index `ctasc` (`ctasc`);
Query OK, 45449534 rows affected (3 days 19 hours 6 min 34.94 seconds)
Records: 45449534 Duplicates: 0 Warnings: 0
CREATE TABLE `hood_stat` (
`dic` char(3) NOT NULL default '',
`fr_ric` char(3) NOT NULL default '',
`niin` char(11) NOT NULL default '',
`ui` char(2) NOT NULL default '',
`qty` char(5) NOT NULL default '',
`don` char(14) NOT NULL default '',
`suf` char(1) NOT NULL default '',
`dte_txn` char(5) NOT NULL default '',
`ship_to` char(3) NOT NULL default '',
`sta` char(2) NOT NULL default '',
`lst_sos` char(3) NOT NULL default '',
`esd` char(4) NOT NULL default '',
`stor` char(3) NOT NULL default '',
`d_t` char(4) NOT NULL default '',
`ctasc` char(10) NOT NULL default '',
PRIMARY KEY (`dic`,`niin`,`fr_ric`,`don`,`suf`,`dte_txn`,`sta` ),
KEY `don` (`don`),
KEY `niin` (`niin`),
KEY `stor` (`stor`),
KEY `dic` (`dic`),
KEY `ctasc` (`ctasc`)
) TYPE=MyISAM MAX_ROWS=10 PACK_KEYS=1
skip-locking
set-variable=delay_key_write=ALL
set-variable= key_buffer_size=1500M
set-variable=join_buffer=512M
set-variable= max_allowed_packet=256M
set-variable= table_cache=512
set-variable= sort_buffer=256M
set-variable=tmp_table_size=400M
set-variable= record_buffer=512M
set-variable= thread_cache=8
set-variable=myisam_sort_buffer_size=256M
myisam-recover=BACKUP,FORCE
set-variable=read_buffer_size=512M
set-variable=interactive_timeout=7200
set-variable=wait_timeout=7200
log-bin
server-id=1
replicate-do-db=finlog
set-variable=open-files-limit=500
set-variable=table-cache=400


RE: Mysql growing pains, 4 days to create index on one table!

2004-06-30 Thread Lopez David E-r9374c
matt

1) inserts using this format is much faster:
 INSERT INTO table (col1, col2) VALUES (val1,val2), (val3,val4)
is much faster than a single-row insert. My experience is
2.5 hrs vs. 36 hrs.

2) The PACK_KEYS=1 may be hurting you. I've never used it.

3) There may be a cache somewhere that's too small. You'll
   have to do some digging in this area.

4) dup key ignore - what does that mean exactly?

5) what is your OS  rev, mysql rev.

Please post any suggestions that you find valuable so we 
can all learn..

david
  

 -Original Message-
 From: matt ryan [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, June 30, 2004 11:51 AM
 To: [EMAIL PROTECTED]
 Subject: Mysql growing pains, 4 days to create index on one table!
 
 
 Rebuilding index takes 3 1/2 days!!!  Growing pains with mysql..
 
 I've got 2 dedicated servers, each with a slave, all run 32gig 15k rpm
 raid 5 on u320 perc raid cards, dell 2600/4600's with single channel
 backplanes (new ones will have dual channel)
 
 All have 2 gig of ram, but I've never seen mysql use more than 600mb
 of ram.
 
 The servers handle huge loads, each day there are 30 1-2 gig files
 loaded into large tables, total mysql data size is 96 gig, the large
 tables are 2-6 gig.
 
 Inserts are done on dupe key ignore, this takes hours on the large
 files, it barely keeps up with input files.
 
 At the bottom of this post I've got the mysql.ini config lines, any
 suggestions are welcome, I'm already beyond the mysql huge sample
 they used to include in the program.
 
 Sample table that I load is as follows.
 
 each day I get 40 % new records on the text file, the input file is
 normally 20mb, once a week I get one that's 1-2 gig, these take all
 day to load.
 
 I need more multiple column indexes, as some queries return millions of
 rows that must be scanned, but the index size already exceeds the
 table size, and the combinations I need would result in an myi that's
 5x larger than the data itself.
 
 Here's an example of the speed problem, the index was corrupt so I
 dropped all and recreated, rather than a myisam repair. I think 3 days
 is a little excessive for a table that's only 3.428 gig, index is
 2.729 gig.  I cant remove the primary key, as it keeps duplicates out
 of the system, the input files are from old database's, we use mysql
 to store the data for the web frontend, mostly done in ASP, most
 queries take less than a second, unfortunately we have big queries that
 take way more than the IIS timeout setting all the time, but no way
 around it, I cant add more indexes without making it even slower :(
 
 
 
 I cant tell if it's mysql that's the problem, or the 
 hardware, Here's a 
 screenshot of the disk IO, if I copy a file while mysql is doing the 
 build index, the io shoots way up, which tells me, mysql is 
 NOT maxing 
 out the drives, and it's also not maxing out the memory.
 
 Unless it's doing lots and lots of seeks on the drive, which 
 is harder 
 to test using perfmon, are there any mysql test setups that 
 would help 
 identify where the bottleneck is?
 
 
 screenshot of disk io usage
 
 http://www.geekopolis.com/pics/diskio.jpg
 
 I'm all out of ideas, other than switching to another db, and 
 the table 
  indexes split across drives, maybe a 2 channel setup, 4 drives per 
 channel, each 4 is a separate raid 5 setup, one holds data one holds 
 indexes, cant do this with mysql though
 
 mysql alter table hood_stat add primary key
 (dic,niin,fr_ric,don,suf,dte_txn,sta) , add index `don` (`don`), add
 index `niin` (`niin`), add index `stor` (`stor`), add index `dic`
 (`dic`), add index `ctasc` (`ctasc`);
 
 Query OK, 45449534 rows affected (3 days 19 hours 6 min 34.94 seconds)
 Records: 45449534 Duplicates: 0 Warnings: 0
 
 
 CREATE TABLE `hood_stat` (
 `dic` char(3) NOT NULL default '',
 `fr_ric` char(3) NOT NULL default '',
 `niin` char(11) NOT NULL default '',
 `ui` char(2) NOT NULL default '',
 `qty` char(5) NOT NULL default '',
 `don` char(14) NOT NULL default '',
 `suf` char(1) NOT NULL default '',
 `dte_txn` char(5) NOT NULL default '',
 `ship_to` char(3) NOT NULL default '',
 `sta` char(2) NOT NULL default '',
 `lst_sos` char(3) NOT NULL default '',
 `esd` char(4) NOT NULL default '',
 `stor` char(3) NOT NULL default '',
 `d_t` char(4) NOT NULL default '',
 `ctasc` char(10) NOT NULL default '',
 PRIMARY KEY (`dic`,`niin`,`fr_ric`,`don`,`suf`,`dte_txn`,`sta` ),
 KEY `don` (`don`),
 KEY `niin` (`niin`),
 KEY `stor` (`stor`),
 KEY `dic` (`dic`),
 KEY `ctasc` (`ctasc`)
 ) TYPE=MyISAM MAX_ROWS=10 PACK_KEYS=1
 
 skip-locking
 set-variable=delay_key_write=ALL
 set-variable= key_buffer_size=1500M
 set-variable=join_buffer=512M
 set-variable= max_allowed_packet=256M
 set-variable= table_cache=512
 set-variable= sort_buffer=256M
 set-variable=tmp_table_size=400M
 set-variable= record_buffer=512M
 set-variable= thread_cache=8
 set-variable=myisam_sort_buffer_size=256M
 myisam-recover=BACKUP,FORCE
 set-variable=read_buffer_size=512M
 set

Re: Mysql growing pains, 4 days to create index on one table!

2004-06-30 Thread Udikarni
You may want more indexes but you might be getting killed because you already have too 
many.

To test - try loading into a table without indexes and see if it makes a difference.

At the very least - check to see if the primary index which starts with 'dic' can make 
your special 'dic' index superfluous.

If write speed is a bottleneck you might consider Raid-1 instead of Raid-5.

Reading lots of rows via index is a killer. Depending on your hardware it may be 
cheaper to table scan 50 rows than to read 1 via index. However, this requires 
partitioning of the data based on some column which appears in every query and acts as 
an initial filter. If you are lucky enough to be in that situation - consider a MERGE 
table.




Re: Innodb: CREATE INDEX

2003-09-25 Thread aguia

Yes, I did that.
It gave me something like:

si 200/300
so 300/500

It's a lot, and it's making my system go down. But I think the problem is that I'm
reserving too much memory for MySQL...

Or could there be another reason?

Thx,
Alexis



Quoting Per Andreas Buer [EMAIL PROTECTED]:

 [EMAIL PROTECTED] writes:
 
  I'm creating indexes in a table with 16 million rows (it's a fact
  table), and it takes a lot of time (2/3/4 hours), because my system is
  always swapping in/out (i think). 
 
 If you run vmstat 2 while the index is created you will see if swap is
 being used or not (see the si and so columns and man vmstat for
 further information).
 
 
 -- 
 Per Andreas Buer
 






Re: Innodb: CREATE INDEX

2003-09-25 Thread Jeremy Zawodny
On Thu, Sep 25, 2003 at 03:55:26PM +0100, [EMAIL PROTECTED] wrote:
 
 Yes, i did that.
 It's given me something like
 
 si 200/300
 so 300/500
 
 It's a lot, doing my system going down. But i think that the problem is that i'm
 reserving too much memory for mysql...

That's probably true and very easy to test: try giving MySQL
less memory.

Jeremy
-- 
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
[EMAIL PROTECTED]  |  http://jeremy.zawodny.com/

MySQL 4.0.15-Yahoo-SMP: up 11 days, processed 423,773,355 queries (419/sec. avg)




Innodb: CREATE INDEX

2003-09-24 Thread aguia


Hi,

I'm working with MySQL InnoDB tables on Linux (Red Hat 9). 
I'm creating indexes on a table with 16 million rows (it's a fact table), and it
takes a lot of time (2-4 hours), because my system is always swapping in/out
(I think). 
At the start of the build it's fast (because my buffer pool is free), but
when the buffer pool fills up, the system slows down and the swapping
begins :/

I have 512MB of RAM. 
My configuration is:
buffer pool size 256MB
innodb_log_file_size 128MB
innodb_log_buffer_size 8MB

I tried innodb_flush_log_at_trx_commit with 1/0, innodb_flush_method with
fdatasync/O_SYNC, innodb_log_buffer_size to 32MB, but nothing happens!

I don't know if I have to rebuild my tables... but I think that would take a
lot of time too (16 million rows...).

What can I do?

Thanks
Alexis

P.S.: I have only InnoDB tables in my DB.





Re: Innodb: CREATE INDEX

2003-09-24 Thread Jeremy Zawodny
On Wed, Sep 24, 2003 at 06:15:19PM +0100, [EMAIL PROTECTED] wrote:
 
 Hi,
 
 I'm working with MySQL InnoDB tables on Linux (Red Hat 9).
 I'm creating indexes on a table with 16 million rows (it's a fact table), and it
 takes a lot of time (2-4 hours), because my system is always swapping in/out
 (I think).

You should first verify if swap really is a problem or not.  That could
have a big impact on how you solve this problem...
-- 
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
[EMAIL PROTECTED]  |  http://jeremy.zawodny.com/

MySQL 4.0.15-Yahoo-SMP: up 10 days, processed 389,349,141 queries (418/sec. avg)




create index

2003-07-22 Thread PaT!
Hi,

I have a table:

+-----------+--------------------------+------+-----+---------+----------------+
| Field     | Type                     | Null | Key | Default | Extra          |
+-----------+--------------------------+------+-----+---------+----------------+
| id        | int(11)                  |      | PRI | NULL    | auto_increment |
| NUMFATT   | int(5) unsigned zerofill |      |     | 0       |                |
| dataemiss | date                     | YES  |     | NULL    |                |
+-----------+--------------------------+------+-----+---------+----------------+
(the table has more fields, but I removed them to simplify the view)

where I want to create a UNIQUE index over two columns,
'dataemiss' and 'NUMFATT'.
'dataemiss' is a date, and in the index I want to consider just
the year part of it.
I cannot change the field type to a string (e.g. char, varchar,
etc.).
Any hint on how to do it?

The goal of this index is to avoid inserting the same 'NUMFATT' twice
or more in the same year.

I'm using MySQL 4.0.13 on Win 2K server

Thanks
Patrizio







Re: create index

2003-07-22 Thread Paul DuBois
At 19:50 +0200 7/22/03, PaT! wrote:
Hi,

I have a table:

+-----------+--------------------------+------+-----+---------+----------------+
| Field     | Type                     | Null | Key | Default | Extra          |
+-----------+--------------------------+------+-----+---------+----------------+
| id        | int(11)                  |      | PRI | NULL    | auto_increment |
| NUMFATT   | int(5) unsigned zerofill |      |     | 0       |                |
| dataemiss | date                     | YES  |     | NULL    |                |
+-----------+--------------------------+------+-----+---------+----------------+
(the table has more fields, but I removed them to simplify the view)
where I want to create a UNIQUE index over two columns,
'dataemiss' and 'NUMFATT'.
'dataemiss' is a date, and in the index I want to consider just
the year part of it.
You cannot do that for DATE values.  If you store your dates as separate
year, month, and day columns, then you could create a unique index on
the year column.
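
A sketch of that idea — table name, 'anno' column name, and the sample values are assumptions, not taken from the thread; the application fills the year column at insert time:

```sql
-- Hypothetical sketch of the separate-year-column approach.
CREATE TABLE fatture (
  id        INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  NUMFATT   INT(5) UNSIGNED ZEROFILL NOT NULL DEFAULT 0,
  anno      YEAR NOT NULL,   -- year extracted from dataemiss by the application
  dataemiss DATE,
  UNIQUE KEY uq_numfatt_anno (NUMFATT, anno)
);

INSERT INTO fatture (NUMFATT, anno, dataemiss)
VALUES (123, YEAR('2003-07-22'), '2003-07-22');
-- inserting NUMFATT 123 again with any date in 2003 now fails
-- with a duplicate-key error, which is exactly the goal stated above.
```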
I cannot change the field type to a string (e.g. char, varchar,
etc.).
Any hint on how to do it?
The goal of this index is to avoid inserting the same 'NUMFATT' twice
or more in the same year.
I'm using MySQL 4.0.13 on Win 2K server

Thanks
Patrizio


--
Paul DuBois, Senior Technical Writer
Madison, Wisconsin, USA
MySQL AB, www.mysql.com
Are you MySQL certified?  http://www.mysql.com/certification/



The difference between 1.9G and 2.2G table when create Index

2003-07-05 Thread James Yang
Hello MySQL,

I have a MyISAM table, which is about 2.2G with 19,000,000 records. When I
create indexes for it, it hadn't finished after 4 hours. But when I reduce the
records to 16,000,000, about 1.9G, the same index can be finished within 35
minutes. Can anyone explain it? Thanks in advance.

I am using a PIII 550, RedHat 8.0, with 1G memory and an IDE hard disk. The
database is stored on an ext3 file system. When I use SHOW PROCESSLIST during
the 2.2G index build, it tells me "Copying to tmp table".

James

[EMAIL PROTECTED]




Re: The difference between 1.9G and 2.2G table when create Index

2003-07-05 Thread Fred van Engen
On Sat, Jul 05, 2003 at 01:03:17PM +, James Yang wrote:
 I have a MyISAM table, which is about 2.2G with 19,000,000 records. When I 
 create indexes for it, it hadn't finished after 4 hours.

You mean it was still running? Was there still disk or CPU activity
after 4 hours?

But when I reduce 
 the records to 16,000,000, about 1.9G, the same index can be finished 
 within 35 minutes. Can anyone explain it? Thanks in advance.
 

That's much shorter than 4 hours!

Maybe your disk was full when creating the 2.2G table's index? MySQL
waits until disk space becomes available again. You wouldn't see much
disk activity while it is waiting.

Keep in mind that MySQL makes a copy of your table while it is adding
the index. You'll need twice the space of your initial table, plus
space for the additional index.


Regards,

Fred.

-- 
Fred van Engen  XB Networks B.V.
email: [EMAIL PROTECTED]Televisieweg 2
tel: +31 36 5462400 1322 AC  Almere
fax: +31 36 5462424 The Netherlands




Re: The difference between 1.9G and 2.2G table when create Index

2003-07-05 Thread Fred van Engen
On Sat, Jul 05, 2003 at 03:16:11PM +0200, Fred van Engen wrote:
 On Sat, Jul 05, 2003 at 01:03:17PM +, James Yang wrote:
  I have a MYISAM table, which is about 2.2G and 19,000,000 records. When I 
  create indexs for it, I couldn't finish after 4 hours.
 
 You mean it was still running? Was there still disk or CPU activity
 after 4 hours?
 
 But when I reduce 
  the records to 16,000,000, about 1.9G, the same index can be finished 
  within 35 minutes. Can anyone explain it? Thanks in advance.
  
 
 That's much shorter than 4 hours!
 
 Maybe your disk was full when creating the 2.2G table's index? MySQL
 waits until disk space becomes available again. You wouldn't see much
 disk activity while it is waiting.
 
 Keep in mind that MySQL makes a copy of your table while it is adding
 the index. You'll need twice the space of your initial table, plus
 space for the additional index.
 

Check this URL for details:

http://www.mysql.com/doc/en/Full_disk.html

There seems to be an exception to this rule when an index is added, so I
may be wrong on this. Adding an index happens in two phases, and
the docs are not clear to me about the first (copying) phase.
Your earlier comment indicates it is still in the first phase.

  I am using PIII 550, RedHat 8.0, with 1G Memory and IDE Hard Disk.The
  database is stored at ext3 file system. When I use show processlist for
  the 2.2G indexs, it tell me Copying to tmp table.


Regards,

Fred.

-- 
Fred van Engen  XB Networks B.V.
email: [EMAIL PROTECTED]Televisieweg 2
tel: +31 36 5462400 1322 AC  Almere
fax: +31 36 5462424 The Netherlands




Re: The difference between 1.9G and 2.2G table when create Index

2003-07-05 Thread James Yang
Thanks guys,

Actually my hard disk is 10G, just for this one table, so there is enough disk
space. I noticed that when I created the index for the 1.9G table, in the first
phase MySQL copies the table but doesn't write to the tmp index file; I mean the
file size of the tmp index didn't increase. When I create the index for the 2.2G
table, the tmp index file grows during the first phase. Any reason?

James

[EMAIL PROTECTED]


  I have a MYISAM table, which is about 2.2G and 19,000,000 records. 
When I
  create indexs for it, I couldn't finish after 4 hours.

 You mean it was still running? Was there still disk or CPU activity
 after 4 hours?

 But when I 
reduce
  the records to 16,000,000, about 1.9G, the same index can be finished
  within 35 minutes. Can anyone explain it? Thanks in advance.
 

 That's much shorter than 4 hours!

 Maybe your disk was full when creating the 2.2G table's index? MySQL
 waits until disk space becomes available again. You wouldn't see much
 disk activity while it is waiting.

 Keep in mind that MySQL makes a copy of your table while it is adding
 the index. You'll need twice the space of your initial table, plus
 space for the additional index.


Check this URL for details:

http://www.mysql.com/doc/en/Full_disk.html

There seems to be an exception to this rule when an index is added, so I
may be wrong on this. The adding of an index happens in two phases and
the docs are not clear to me about the first (copying) phase however.
Your earlier comment indicates it is in the first phase still.
  I am using PIII 550, RedHat 8.0, with 1G Memory and IDE Hard Disk.The
  database is stored at ext3 file system. When I use show 
processlist for
  the 2.2G indexs, it tell me Copying to tmp table.

Regards,

Fred.

--
Fred van Engen  XB Networks B.V.
email: [EMAIL PROTECTED]Televisieweg 2
tel: +31 36 5462400 1322 AC  Almere
fax: +31 36 5462424 The Netherlands




CREATE INDEX problems on sparse column of huge table

2003-03-23 Thread Steve Allaway
I am trying to create indexes on several columns of a huge table in
MySQL 4.0.12 running on Windows 2000 Server.  The table contains 42
million rows, and occupies about 7G of disk space.  The server is a
1.5GHz P4 with 1G RAM, and performs fairly well - the load data infile
takes from 22 - 28 minutes with only a primary key index on an int with
an empty table.  All of the columns are NOT NULL, there is plenty of
disk space, and I have tuned my.cnf to match the system.  Here are the
problems:
 
1.  Even with a write lock, MySQL creates a copy of the data and
index files before creating the new index.  Is there any way to prevent
this behavior?  The copy routines used by MySQL in Windows appear to be
inefficient, and the copy process takes appx 400%-600% longer than a
simple file copy.  Is MySQL performing some sort of integrity check
during this process?  And why the copy at all, if an exclusive write
lock is obtained? Although this is not a huge issue, the additional 40+
minutes seems poorly spent (and gets worse the more indexes you have, as
each existing index is somehow reprocessed).
 
2.  I have a varchar(32) column that is 50% populated with strings
8-13 chars in length.  When I try to create an index on this column,
MySQL seems takes 15-20 TIMES longer than creating other indexes.  I
have created indexes on an int with sequential data (58 minutes), on the
first 10 chars of a varchar(30) with random data (64 minutes), a
varchar(2) + first 5 chars of a varchar(30) (84 minutes), and a datetime
randomly spanning 90 days (72 minutes).  All of these times also include
the above mentioned file copy.  During these indexes my CPU is averaging
about 95%.  When I create an index on the sparse (50% populated)
varchar, the CPU won't even rise above 3%, EVEN DURING THE INITIAL FILE
COPY, so it seems that MySQL cannot effectively index sparse varchars on
huge tables.  Any suggestions?  I am trying to deal a blow to the evil
empire by proving MySQL as scalable as MS SQL Server, but the apparent
poor implementation on win32 hurts.  (If I do switch to Linux, which
distribution has the best performance and supports huge files?)
 
Thanks in advance!
Steve Allaway
 
 
 
 


Re: CREATE INDEX is sooo slow! any ideas?

2003-02-27 Thread Sergei Golubchik
Hi!

On Feb 27, Sebastian Stan wrote:
 OK. but what about this :
 (AND i don't have a fulltext field !)
 
 Here's my table:
 
 (lac CHAR (3),
  ano  CHAR (7),
  bno  CHAR (18),
  cty  CHAR (4),
  dat  CHAR (8),
  tim  CHAR (6),
  dur  CHAR (8),
  ccu  CHAR (8),
  loc  CHAR (20),
  ccl  CHAR (12),
  isdn CHAR (1),
  ddi  char(4));
 
 ..which have 5-6mil records.
 
 When I do the following it takes 5-6 hours. After the index is done, the
 process list shows mysqld-nt.exe (I use Win2k Server) with a lot of Mem.
 Usage. Usually it uses 3-4,000k. When I create the index, it goes up to
 11,000k, and after it's done it takes about a DAY!!! to go down to 3-4,000.
 You can imagine how frustrated the users are and how my phone catches fire
 :)
 
 
 CREATE INDEX ANO ON FACTDET20028 (ANO) ;
 CREATE INDEX BNO ON FACTDET20028 (BNO) ;
 CREATE INDEX CTY ON FACTDET20028 (CTY) ;
 CREATE INDEX DAT ON FACTDET20028 (DAT) ;
 CREATE INDEX ANOCTYDAT ON FACTDET20028 (ANO, CTY, DAT) ;

It's wrong in two ways. First, each time you add an index, MySQL has to
rebuild the index file - and all existing indexes as well!
So index ANO gets rebuilt 5 times, index BNO - 4 times, etc !
Second - index ANO is absolutely not necessary as it's the prefix of
index ANOCTYDAT. It's only wasting space and time.

To build indexes use

ALTER TABLE FACTDET20028 ADD INDEX BNO (BNO), ADD INDEX CTY (CTY),
  ADD INDEX DAT (DAT), ADD INDEX ANOCTYDAT (ANO, CTY, DAT);

 I've tried to create the indexes two ways:
 1. before loading the data into the table (LOAD DATA LOCAL INFILE ...)
 2. after that.
 
 Both ways it's the same thing.

Still, that ALTER TABLE shouldn't be any better than creating the indexes
on the empty table before LOAD DATA.

What does SHOW PROCESSLIST say?  It should be repair-by-sorting.
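
Repair-by-sorting builds each index by sorting the keys, and its speed depends heavily on the sort buffer. A hedged my.cnf sketch — the values are illustrative for a machine with RAM to spare, not recommendations from this thread:

```ini
[mysqld]
# Buffer used by repair-by-sorting when rebuilding MyISAM indexes;
# a larger value lets more of the key sort happen in memory.
myisam_sort_buffer_size = 64M
# Index block cache for MyISAM reads and key updates.
key_buffer_size         = 128M
```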
 
Regards,
Sergei

-- 
MySQL Development Team
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /   Sergei Golubchik [EMAIL PROTECTED]
 / /|_/ / // /\ \/ /_/ / /__  MySQL AB, http://www.mysql.com/
/_/  /_/\_, /___/\___\_\___/  Osnabrueck, Germany
   ___/

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php



Re: CREATE INDEX is sooo slow! any ideas?

2003-02-26 Thread Sebastian Stan
OK. but what about this :
(AND i don't have a fulltext field !)

Here's my table:

(lac CHAR (3),
 ano  CHAR (7),
 bno  CHAR (18),
 cty  CHAR (4),
 dat  CHAR (8),
 tim  CHAR (6),
 dur  CHAR (8),
 ccu  CHAR (8),
 loc  CHAR (20),
 ccl  CHAR (12),
 isdn CHAR (1),
 ddi  char(4));

..which have 5-6mil records.

When I do the following it takes 5-6 hours. After the index is done, the
process list shows mysqld-nt.exe (I use Win2k Server) with a lot of Mem.
Usage. Usually it uses 3-4,000k. When I create the index, it goes up to
11,000k, and after it's done it takes about a DAY!!! to go down to 3-4,000.
You can imagine how frustrated the users are and how my phone catches fire
:)


CREATE INDEX ANO ON FACTDET20028 (ANO) ;
CREATE INDEX BNO ON FACTDET20028 (BNO) ;
CREATE INDEX CTY ON FACTDET20028 (CTY) ;
CREATE INDEX DAT ON FACTDET20028 (DAT) ;
CREATE INDEX ANOCTYDAT ON FACTDET20028 (ANO, CTY, DAT) ;


I've tried to create the indexes two ways :
1.before loading the date  into table (LOAD DATA local INFILE ... )
2. after that.


Both ways it's the same thing.


--
  Sebastian Stan
 [EMAIL PROTECTED]





Re: CREATE INDEX is sooo slow! any ideas?

2003-02-17 Thread Sergei Golubchik
Hi!

On Feb 16, Dan Nelson wrote:
 In the last episode (Feb 16), Michael T. Babcock said:
   At 06:27 PM 2/16/2003 -0500, Peter Grigor wrote:
Well, MySql stores all its index information in one index file,
so when you add another index it has to rebuild the WHOLE file.
:)
  
  Anyone on the MySQL team feel like explaining that design decision,
  besides historical reasons?  I doubt its any more efficient except in
  file descriptor usage (although I've expressed the same doubts about
  InnoDB's avoidance of the filesystem too).
 
 Which decision, putting all the indexes in one file, or rebuilding all
 indexes whenever you ALTER TABLE or add an index?  If the latter, I
 agree with you.  Modifying unrelated indexes or columns should not
 force a rebuild of every index.

Of course not.
And it won't eventually - it's in the todo.
 
Regards,
Sergei

-- 
MySQL Development Team
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /   Sergei Golubchik [EMAIL PROTECTED]
 / /|_/ / // /\ \/ /_/ / /__  MySQL AB, http://www.mysql.com/
/_/  /_/\_, /___/\___\_\___/  Osnabrueck, Germany
   ___/





Re: CREATE INDEX is sooo slow! any ideas?

2003-02-17 Thread Sergei Golubchik
Hi!

On Feb 16, david wrote:
 
 Hi
 
 I have one table with 12 million records.   I'm trying to create an index 
 on one of the fields and it's taking hours ( over 6 so far)!  Am I doing 
 something wrong?
 
 The command I issued was
 create index title on article ( title);
 
  The table files are
 -rwxrwxrwx1 mysqlmysql8646 Feb 14 17:32 article.frm
 -rwxrwxrwx1 mysqlmysql   1406663720 Feb 14 23:27 article.MYD
 -rwxrwxrwx1 mysqlmysql   69661696 Feb 14 23:27 article.MYI
 
 As you can see the main file is 1.4Gb
 
 The table definition is
 mysql show columns from article;
 +---+--+--+-+-+---+
 | Field | Type | Null | Key | Default | Extra |
 +---+--+--+-+-+---+
 | ID| bigint(20)   | YES  | | NULL|   |
 | id2   | bigint(20)   | YES  | | NULL|   |
 | title | varchar(250) | YES  | | NULL|   |
 | body  | text | YES  | MUL | NULL|   |
 +---+--+--+-+-+---+
 4 rows in set (0.00 sec)
 
 I'm running this on linux redhat 7.3,  P500Mhz 512Mb ram  -- MySQL version 
 3.23.53a

Use MySQL-4.x

In 3.23 creating FULLTEXT index can easily take months, not hours.
In 4.0 this process is ~80-100 times faster.
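
One common way to avoid paying the rebuild cost repeatedly — a hedged sketch; the index name 'body' and the file path are assumptions, not details from the thread:

```sql
-- Drop the FULLTEXT index before the bulk load, reload the data,
-- then add all indexes back in a single ALTER so the table (and its
-- index file) is rebuilt only once rather than once per index.
ALTER TABLE article DROP INDEX body;              -- assumed index name
LOAD DATA LOCAL INFILE '/tmp/articles.txt' INTO TABLE article;
ALTER TABLE article
  ADD INDEX title (title),
  ADD FULLTEXT body (body);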
 
Regards,
Sergei

-- 
MySQL Development Team
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /   Sergei Golubchik [EMAIL PROTECTED]
 / /|_/ / // /\ \/ /_/ / /__  MySQL AB, http://www.mysql.com/
/_/  /_/\_, /___/\___\_\___/  Osnabrueck, Germany
   ___/





Re: CREATE INDEX is sooo slow! any ideas?

2003-02-17 Thread Michael T. Babcock
Sergei Golubchik wrote:


Which decision, putting all the indexes in one file, or rebuilding all
indexes whenever you ALTER TABLE or add an index?  If the latter, I
agree with you.  Modifying unrelated indexes or columns should not
force a rebuild of every index.
   


Of course not.
And it won't eventually - it's in the todo.




Wow, I love this list (and the MySQL team).  Go to bed with a question, 
wake up with an answer.

Well, in EST5EDT at least.  Thanks.

--
Michael T. Babcock
C.T.O., FibreSpeed Ltd.
http://www.fibrespeed.net/~mbabcock






CREATE INDEX is sooo slow! any ideas?

2003-02-16 Thread david

Hi

I have one table with 12 million records.   I'm trying to create an index 
on one of the fields and it's taking hours ( over 6 so far)!  Am I doing 
something wrong?

The command I issued was
create index title on article ( title);

 The table files are
-rwxrwxrwx1 mysqlmysql8646 Feb 14 17:32 article.frm
-rwxrwxrwx1 mysqlmysql   1406663720 Feb 14 23:27 article.MYD
-rwxrwxrwx1 mysqlmysql   69661696 Feb 14 23:27 article.MYI

As you can see the main file is 1.4Gb

The table definition is
mysql> show columns from article;
+---+--+--+-+-+---+
| Field | Type | Null | Key | Default | Extra |
+---+--+--+-+-+---+
| ID| bigint(20)   | YES  | | NULL|   |
| id2   | bigint(20)   | YES  | | NULL|   |
| title | varchar(250) | YES  | | NULL|   |
| body  | text | YES  | MUL | NULL|   |
+---+--+--+-+-+---+
4 rows in set (0.00 sec)

I'm running this on linux redhat 7.3,  P500Mhz 512Mb ram  -- MySQL version 
3.23.53a

Thanks
David






Re: CREATE INDEX is sooo slow! any ideas?

2003-02-16 Thread Peter Grigor
That body column wouldn't happen to be a fulltext index, would it? :)

If so, 'ave another coffee.

Peter
^_^
-
Peter Grigor
Hoobly Free Classifieds
http://www.hoobly.com


- Original Message -
From: david [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Sunday, February 16, 2003 5:47 PM
Subject: CREATE INDEX is sooo slow! any ideas?



 Hi

 I have one table with 12 million records.   I'm trying to create an index
 on one of the fields and it's taking hours ( over 6 so far)!  Am I doing
 something wrong?

 The command I issued was
 create index title on article ( title);

   The table files are
 -rwxrwxrwx1 mysqlmysql8646 Feb 14 17:32 article.frm
 -rwxrwxrwx1 mysqlmysql   1406663720 Feb 14 23:27 article.MYD
 -rwxrwxrwx1 mysqlmysql   69661696 Feb 14 23:27 article.MYI

 As you can see the main file is 1.4Gb

 The table definition is
 mysql show columns from article;
 +---+--+--+-+-+---+
 | Field | Type | Null | Key | Default | Extra |
 +---+--+--+-+-+---+
 | ID| bigint(20)   | YES  | | NULL|   |
 | id2   | bigint(20)   | YES  | | NULL|   |
 | title | varchar(250) | YES  | | NULL|   |
 | body  | text | YES  | MUL | NULL|   |
 +---+--+--+-+-+---+
 4 rows in set (0.00 sec)

 I'm running this on linux redhat 7.3,  P500Mhz 512Mb ram  -- MySQL version
 3.23.53a

 Thanks
 David



 -
 Before posting, please check:
http://www.mysql.com/manual.php   (the manual)
http://lists.mysql.com/   (the list archive)

 To request this thread, e-mail [EMAIL PROTECTED]
 To unsubscribe, e-mail
[EMAIL PROTECTED]
 Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php








Re: CREATE INDEX is sooo slow! any ideas?

2003-02-16 Thread Jerry
Another coffee ?

I'd buy Starbucks .

Jerry

- Original Message -
From: Peter Grigor [EMAIL PROTECTED]
To: [EMAIL PROTECTED]; david [EMAIL PROTECTED]
Sent: Sunday, February 16, 2003 10:58 PM
Subject: Re: CREATE INDEX is sooo slow! any ideas?


 That body column wouldn't happen to be a fulltext index, would it? :)

 If so, 'ave another coffee.

 Peter
 ^_^
 -
 Peter Grigor
 Hoobly Free Classifieds
 http://www.hoobly.com


 - Original Message -
 From: david [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Sunday, February 16, 2003 5:47 PM
 Subject: CREATE INDEX is sooo slow! any ideas?


 
  Hi
 
  I have one table with 12 million records.   I'm trying to create an
index
  on one of the fields and it's taking hours ( over 6 so far)!  Am I doing
  something wrong?
 
  The command I issued was
  create index title on article ( title);
 
The table files are
  -rwxrwxrwx1 mysqlmysql8646 Feb 14 17:32 article.frm
  -rwxrwxrwx1 mysqlmysql   1406663720 Feb 14 23:27 article.MYD
  -rwxrwxrwx1 mysqlmysql   69661696 Feb 14 23:27 article.MYI
 
  As you can see the main file is 1.4Gb
 
  The table definition is
  mysql show columns from article;
  +---+--+--+-+-+---+
  | Field | Type | Null | Key | Default | Extra |
  +---+--+--+-+-+---+
  | ID| bigint(20)   | YES  | | NULL|   |
  | id2   | bigint(20)   | YES  | | NULL|   |
  | title | varchar(250) | YES  | | NULL|   |
  | body  | text | YES  | MUL | NULL|   |
  +---+--+--+-+-+---+
  4 rows in set (0.00 sec)
 
  I'm running this on linux redhat 7.3,  P500Mhz 512Mb ram  -- MySQL
version
  3.23.53a
 
  Thanks
  David
 
 
 
  -
  Before posting, please check:
 http://www.mysql.com/manual.php   (the manual)
 http://lists.mysql.com/   (the list archive)
 
  To request this thread, e-mail [EMAIL PROTECTED]
  To unsubscribe, e-mail
 [EMAIL PROTECTED]
  Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
 
 


 -
 Before posting, please check:
http://www.mysql.com/manual.php   (the manual)
http://lists.mysql.com/   (the list archive)

 To request this thread, e-mail [EMAIL PROTECTED]
 To unsubscribe, e-mail
[EMAIL PROTECTED]
 Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php







Re: CREATE INDEX is sooo slow! any ideas?

2003-02-16 Thread Peter Grigor
Yes, and better go to the one across town to get it :)

Peter
^_^
-
Peter Grigor
Hoobly Free Classifieds
http://www.hoobly.com


- Original Message -
From: Jerry [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Sunday, February 16, 2003 6:00 PM
Subject: Re: CREATE INDEX is sooo slow! any ideas?


 Another coffee ?

 I'd buy Starbucks .

 Jerry

 - Original Message -
 From: Peter Grigor [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]; david [EMAIL PROTECTED]
 Sent: Sunday, February 16, 2003 10:58 PM
 Subject: Re: CREATE INDEX is sooo slow! any ideas?


  That body column wouldn't happen to be a fulltext index, would it? :)
 
  If so, 'ave another coffee.
 
  Peter
  ^_^
  -
  Peter Grigor
  Hoobly Free Classifieds
  http://www.hoobly.com
 
 
  - Original Message -
  From: david [EMAIL PROTECTED]
  To: [EMAIL PROTECTED]
  Sent: Sunday, February 16, 2003 5:47 PM
  Subject: CREATE INDEX is sooo slow! any ideas?
 
 
  
   Hi
  
   I have one table with 12 million records.   I'm trying to create an
 index
   on one of the fields and it's taking hours ( over 6 so far)!  Am I
doing
   something wrong?
  
   The command I issued was
   create index title on article ( title);
  
 The table files are
   -rwxrwxrwx1 mysqlmysql8646 Feb 14 17:32 article.frm
   -rwxrwxrwx1 mysqlmysql   1406663720 Feb 14 23:27 article.MYD
   -rwxrwxrwx1 mysqlmysql   69661696 Feb 14 23:27 article.MYI
  
   As you can see the main file is 1.4Gb
  
   The table definition is
   mysql show columns from article;
   +---+--+--+-+-+---+
   | Field | Type | Null | Key | Default | Extra |
   +---+--+--+-+-+---+
   | ID| bigint(20)   | YES  | | NULL|   |
   | id2   | bigint(20)   | YES  | | NULL|   |
   | title | varchar(250) | YES  | | NULL|   |
   | body  | text | YES  | MUL | NULL|   |
   +---+--+--+-+-+---+
   4 rows in set (0.00 sec)
  
   I'm running this on linux redhat 7.3,  P500Mhz 512Mb ram  -- MySQL
 version
   3.23.53a
  
   Thanks
   David
  
  
  
   -
   Before posting, please check:
  http://www.mysql.com/manual.php   (the manual)
  http://lists.mysql.com/   (the list archive)
  
   To request this thread, e-mail [EMAIL PROTECTED]
   To unsubscribe, e-mail
  [EMAIL PROTECTED]
   Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
  
  
 
 
  -
  Before posting, please check:
 http://www.mysql.com/manual.php   (the manual)
 http://lists.mysql.com/   (the list archive)
 
  To request this thread, e-mail [EMAIL PROTECTED]
  To unsubscribe, e-mail
 [EMAIL PROTECTED]
  Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
 


 -
 Before posting, please check:
http://www.mysql.com/manual.php   (the manual)
http://lists.mysql.com/   (the list archive)

 To request this thread, e-mail [EMAIL PROTECTED]
 To unsubscribe, e-mail
[EMAIL PROTECTED]
 Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php








Re: CREATE INDEX is sooo slow! any ideas?

2003-02-16 Thread Jerry
I meant buy the whole chain, it's going to take a few cups ...

Jerry


- Original Message -
From: Peter Grigor [EMAIL PROTECTED]
To: Jerry [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Sunday, February 16, 2003 11:11 PM
Subject: Re: CREATE INDEX is sooo slow! any ideas?


 Yes, and better go to the one across town to get it :)

 Peter
 ^_^
 -
 Peter Grigor
 Hoobly Free Classifieds
 http://www.hoobly.com


 - Original Message -
 From: Jerry [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Sunday, February 16, 2003 6:00 PM
 Subject: Re: CREATE INDEX is sooo slow! any ideas?


  Another coffee ?
 
  I'd buy Starbucks .
 
  Jerry
 
  - Original Message -
  From: Peter Grigor [EMAIL PROTECTED]
  To: [EMAIL PROTECTED]; david [EMAIL PROTECTED]
  Sent: Sunday, February 16, 2003 10:58 PM
  Subject: Re: CREATE INDEX is sooo slow! any ideas?
 
 
   That body column wouldn't happen to be a fulltext index, would it? :)
  
   If so, 'ave another coffee.
  
   Peter
   ^_^
   -
   Peter Grigor
   Hoobly Free Classifieds
   http://www.hoobly.com
  
  
   - Original Message -
   From: david [EMAIL PROTECTED]
   To: [EMAIL PROTECTED]
   Sent: Sunday, February 16, 2003 5:47 PM
   Subject: CREATE INDEX is sooo slow! any ideas?
  
  
   
Hi
   
I have one table with 12 million records.   I'm trying to create an
  index
on one of the fields and it's taking hours ( over 6 so far)!  Am I
 doing
something wrong?
   
The command I issued was
create index title on article ( title);
   
  The table files are
-rwxrwxrwx1 mysqlmysql8646 Feb 14 17:32 article.frm
-rwxrwxrwx1 mysqlmysql   1406663720 Feb 14 23:27 article.MYD
-rwxrwxrwx1 mysqlmysql   69661696 Feb 14 23:27 article.MYI
   
As you can see the main file is 1.4Gb
   
The table definition is
mysql show columns from article;
+---+--+--+-+-+---+
| Field | Type | Null | Key | Default | Extra |
+---+--+--+-+-+---+
| ID| bigint(20)   | YES  | | NULL|   |
| id2   | bigint(20)   | YES  | | NULL|   |
| title | varchar(250) | YES  | | NULL|   |
| body  | text | YES  | MUL | NULL|   |
+---+--+--+-+-+---+
4 rows in set (0.00 sec)
   
I'm running this on linux redhat 7.3,  P500Mhz 512Mb ram  -- MySQL
  version
3.23.53a
   
Thanks
David
   
   
   
  
 -
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)
   
To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail
   [EMAIL PROTECTED]
Trouble unsubscribing? Try:
http://lists.mysql.com/php/unsubscribe.php
   
   
  
  
   -
   Before posting, please check:
  http://www.mysql.com/manual.php   (the manual)
  http://lists.mysql.com/   (the list archive)
  
   To request this thread, e-mail [EMAIL PROTECTED]
   To unsubscribe, e-mail
  [EMAIL PROTECTED]
   Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
  
 
 
  -
  Before posting, please check:
 http://www.mysql.com/manual.php   (the manual)
 http://lists.mysql.com/   (the list archive)
 
  To request this thread, e-mail [EMAIL PROTECTED]
  To unsubscribe, e-mail
 [EMAIL PROTECTED]
  Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
 
 







Re: CREATE INDEX is sooo slow! any ideas?

2003-02-16 Thread david
Yes it is, but why would that slow down index creation on the other field?
The fulltext index is just for the body field.

Thanks

At 05:58 PM 2/16/2003 -0500, Peter Grigor wrote:

That body column wouldn't happen to be a fulltext index, would it? :)

If so, 'ave another coffee.

Peter
^_^
-
Peter Grigor
Hoobly Free Classifieds
http://www.hoobly.com


- Original Message -
From: david [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Sunday, February 16, 2003 5:47 PM
Subject: CREATE INDEX is sooo slow! any ideas?



 Hi

 I have one table with 12 million records.   I'm trying to create an index
 on one of the fields and it's taking hours ( over 6 so far)!  Am I doing
 something wrong?

 The command I issued was
 create index title on article ( title);

   The table files are
 -rwxrwxrwx1 mysqlmysql8646 Feb 14 17:32 article.frm
 -rwxrwxrwx1 mysqlmysql   1406663720 Feb 14 23:27 article.MYD
 -rwxrwxrwx1 mysqlmysql   69661696 Feb 14 23:27 article.MYI

 As you can see the main file is 1.4Gb

 The table definition is
 mysql show columns from article;
 +---+--+--+-+-+---+
 | Field | Type | Null | Key | Default | Extra |
 +---+--+--+-+-+---+
 | ID| bigint(20)   | YES  | | NULL|   |
 | id2   | bigint(20)   | YES  | | NULL|   |
 | title | varchar(250) | YES  | | NULL|   |
 | body  | text | YES  | MUL | NULL|   |
 +---+--+--+-+-+---+
 4 rows in set (0.00 sec)

 I'm running this on linux redhat 7.3,  P500Mhz 512Mb ram  -- MySQL version
 3.23.53a

 Thanks
 David







Re: CREATE INDEX is sooo slow! any ideas?

2003-02-16 Thread Peter Grigor
Well, MySQL stores all of a table's index information in one index file, so when you
add another index it has to rebuild the WHOLE file. :)
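
One hedged workaround for that rebuild cost, if it applies here, is to batch every index change into a single ALTER TABLE so the shared index file is rebuilt only once (a sketch; the index names are illustrative):

```sql
-- Add both the title index and the fulltext index in one statement,
-- so MyISAM rebuilds the shared .MYI file a single time rather than
-- once per CREATE INDEX.
ALTER TABLE article
    ADD INDEX title_idx (title),
    ADD FULLTEXT body_idx (body);
```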

Peter
^_^
-
Peter Grigor
Hoobly Free Classifieds
http://www.hoobly.com


- Original Message -
From: david [EMAIL PROTECTED]
To: Peter Grigor [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Sunday, February 16, 2003 6:13 PM
Subject: Re: CREATE INDEX is sooo slow! any ideas?


 Yes it is but why would that slow down index creation on the other field.
 The full text index is just for the body field.

 Thanks

 At 05:58 PM 2/16/2003 -0500, Peter Grigor wrote:
 That body column wouldn't happen to be a fulltext index, would it? :)
 
 If so, 'ave another coffee.
 
 Peter
 ^_^
 -
 Peter Grigor
 Hoobly Free Classifieds
 http://www.hoobly.com
 
 
 - Original Message -
 From: david [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Sunday, February 16, 2003 5:47 PM
 Subject: CREATE INDEX is sooo slow! any ideas?
 
 
  
   Hi
  
   I have one table with 12 million records.   I'm trying to create an
index
   on one of the fields and it's taking hours ( over 6 so far)!  Am I
doing
   something wrong?
  
   The command I issued was
   create index title on article ( title);
  
 The table files are
   -rwxrwxrwx1 mysqlmysql8646 Feb 14 17:32 article.frm
   -rwxrwxrwx1 mysqlmysql   1406663720 Feb 14 23:27 article.MYD
   -rwxrwxrwx1 mysqlmysql   69661696 Feb 14 23:27 article.MYI
  
   As you can see the main file is 1.4Gb
  
   The table definition is
   mysql show columns from article;
   +---+--+--+-+-+---+
   | Field | Type | Null | Key | Default | Extra |
   +---+--+--+-+-+---+
   | ID| bigint(20)   | YES  | | NULL|   |
   | id2   | bigint(20)   | YES  | | NULL|   |
   | title | varchar(250) | YES  | | NULL|   |
   | body  | text | YES  | MUL | NULL|   |
   +---+--+--+-+-+---+
   4 rows in set (0.00 sec)
  
   I'm running this on linux redhat 7.3,  P500Mhz 512Mb ram  -- MySQL
version
   3.23.53a
  
   Thanks
   David
  
  
  




Re: CREATE INDEX is sooo slow! any ideas?

2003-02-16 Thread david
Thanks.  So, after I've travelled to Seattle to get my coffee, do you think 
that creating this index will speed up the select queries?

Would it be faster if I had created separate tables and indexed them 
individually?

What are your thoughts on searching text fields for text?  Is it best to
store the text in a text field, or is it best to tokenize out the terms and
create a word table with an index?
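
The tokenized-word alternative asked about above might be sketched like this (a hypothetical schema; table and column names are illustrative, not from the thread):

```sql
-- One row per (word, article) pair; a plain index on `word`
-- then serves simple term lookups instead of a FULLTEXT index.
CREATE TABLE word_index (
    word       VARCHAR(64) NOT NULL,
    article_id BIGINT      NOT NULL,
    INDEX (word)
);

-- A term search becomes an indexed join:
SELECT a.*
FROM article a
JOIN word_index w ON w.article_id = a.ID
WHERE w.word = 'mysql';
```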

Thanks
David


At 06:27 PM 2/16/2003 -0500, Peter Grigor wrote:
Well, MySql stores all its index information in one index file, so when you
add another index it has to rebuild the WHOLE file. :)

Peter
^_^
-
Peter Grigor
Hoobly Free Classifieds
http://www.hoobly.com


- Original Message -
From: david [EMAIL PROTECTED]
To: Peter Grigor [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Sunday, February 16, 2003 6:13 PM
Subject: Re: CREATE INDEX is sooo slow! any ideas?


 Yes it is but why would that slow down index creation on the other field.
 The full text index is just for the body field.

 Thanks

 At 05:58 PM 2/16/2003 -0500, Peter Grigor wrote:
 That body column wouldn't happen to be a fulltext index, would it? :)
 
 If so, 'ave another coffee.
 
 Peter
 ^_^
 -
 Peter Grigor
 Hoobly Free Classifieds
 http://www.hoobly.com
 
 
 - Original Message -
 From: david [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Sunday, February 16, 2003 5:47 PM
 Subject: CREATE INDEX is sooo slow! any ideas?
 
 
  
   Hi
  
   I have one table with 12 million records.   I'm trying to create an
index
   on one of the fields and it's taking hours ( over 6 so far)!  Am I
doing
   something wrong?
  
   The command I issued was
   create index title on article ( title);
  
 The table files are
   -rwxrwxrwx1 mysqlmysql8646 Feb 14 17:32 article.frm
   -rwxrwxrwx1 mysqlmysql   1406663720 Feb 14 23:27 article.MYD
   -rwxrwxrwx1 mysqlmysql   69661696 Feb 14 23:27 article.MYI
  
   As you can see the main file is 1.4Gb
  
   The table definition is
   mysql show columns from article;
   +---+--+--+-+-+---+
   | Field | Type | Null | Key | Default | Extra |
   +---+--+--+-+-+---+
   | ID| bigint(20)   | YES  | | NULL|   |
   | id2   | bigint(20)   | YES  | | NULL|   |
   | title | varchar(250) | YES  | | NULL|   |
   | body  | text | YES  | MUL | NULL|   |
   +---+--+--+-+-+---+
   4 rows in set (0.00 sec)
  
   I'm running this on linux redhat 7.3,  P500Mhz 512Mb ram  -- MySQL
version
   3.23.53a
  
   Thanks
   David
  
  
  




Re: CREATE INDEX is sooo slow! any ideas?

2003-02-16 Thread Michael T. Babcock
At 06:27 PM 2/16/2003 -0500, Peter Grigor wrote:


Well, MySql stores all its index information in one index file, so 
when you
add another index it has to rebuild the WHOLE file. :)



Anyone on the MySQL team feel like explaining that design decision, 
besides historical reasons?  I doubt it's any more efficient except in 
file descriptor usage (although I've expressed the same doubts about 
InnoDB's avoidance of the filesystem too).

SQL and all that ...

--
Michael T. Babcock
C.T.O., FibreSpeed Ltd.
http://www.fibrespeed.net/~mbabcock






Re: CREATE INDEX is sooo slow! any ideas?

2003-02-16 Thread Dan Nelson
In the last episode (Feb 16), Michael T. Babcock said:
  At 06:27 PM 2/16/2003 -0500, Peter Grigor wrote:
   Well, MySql stores all its index information in one index file,
   so when you add another index it has to rebuild the WHOLE file.
   :)
 
 Anyone on the MySQL team feel like explaining that design decision,
 besides historical reasons?  I doubt its any more efficient except in
 file descriptor usage (although I've expressed the same doubts about
 InnoDB's avoidance of the filesystem too).

Which decision, putting all the indexes in one file, or rebuilding all
indexes whenever you ALTER TABLE or add an index?  If the latter, I
agree with you.  Modifying unrelated indexes or columns should not
force a rebuild of every index.

But on the first point, consider the fd usage in the case of, say, a
10-table database with 5 indices per table, with 20 concurrent users. 
That'd be a max of 10*5*20=1000 fds open, instead of 10*2*20=400 with
Mysql's current method, and those are pretty conservative values (small
table, not many indexes, not many users). 

Also there's not much benefit in separating the indexes from each
other.  With different files for index and data, you can move the
indexes to another disk and get a nice speed boost (since you're not
ping-ponging the disk head between index and data).  But since MySQL
can only use one index per table per query anyway, you don't gain
anything by splitting the indexes.
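
The one-index-per-table-per-query behavior described above can be observed with EXPLAIN (a sketch; the table and columns are the ones from this thread, the indexes are assumed):

```sql
-- Even if both title and id2 are indexed, the 'key' column of the
-- EXPLAIN output names the single index chosen for this table.
EXPLAIN SELECT * FROM article WHERE title = 'foo' AND id2 = 42;
```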

-- 
Dan Nelson
[EMAIL PROTECTED]





Re: Error 1034: 136 when fixing table from Create Index on long table

2002-10-22 Thread Chris Stoughton
Jocelyn asked whether the file system supports files over 2 GB, and 
whether there was space left on disk.

The file system /export/data/dp20.a supports files larger than 2 GB. 
For example, the targetTsObj files are larger:

bash-2.04$ ls -lh targetTsObj*
-rw-rw----  1 mysql  mysql  134G Oct 20 05:37 targetTsObj.MYD
-rw-rw----  1 mysql  mysql  5.0k Oct 21 11:38 targetTsObj.MYI
-rw-rw----  1 mysql  mysql   32k Oct 18 21:05 targetTsObj.frm

And, there is still room on the disk that I *think* I am writing the 
index to.
bash-2.04$ df -h .
FilesystemSize  Used Avail Use% Mounted on
/export/data/dp20.a   1.0T  302G  761G  29% /data/dp20.a
bash-2.04$

I have pointed tmpdir to this file system as well, and smaller databases 
have successfully built indices since these changes.

== So, is there a chance that the index file is being built somewhere 
else, and it runs out of space?  Any clues to help me track this down?

== Should I be using innodb tables instead of myisam tables for this 
application?


Thanks again.


Jocelyn Fournier wrote:

According to show variables, your tmp dir is /export/data/dp20.a/tmp/.
Are you sure you have a file system which allows creating files greater than
2 GB, for example?
- Original Message -
From: [EMAIL PROTECTED]
To: Jocelyn Fournier [EMAIL PROTECTED]
Sent: Tuesday, October 22, 2002 1:33 AM
Subject: Re: Error 1034: 136 when fixing table from Create Index on long
table


 

Well, I have datadir, bdb_tmpdir, and tmpdir all pointing to
/export/data/dp20.a, which is 28% full.  It is over one tera-byte.

I am worried that it is trying to write a temporary file, or an index
file, to some other location.

How can I see where the index file is written, to be sure?
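
One way to confirm the directories in play, as a sketch, is to ask the server itself:

```sql
-- The temporary sort file used while building an index goes under
-- tmpdir; the finished .MYI lives with the table under datadir.
SHOW VARIABLES LIKE 'tmpdir';
SHOW VARIABLES LIKE 'datadir';
```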

Thanks!

- Original Message -
From: Jocelyn Fournier [EMAIL PROTECTED]
Date: Monday, October 21, 2002 5:37 pm
Subject: Re: Error 1034: 136 when fixing table from Create Index on long
table

   

Hi,

[root@forum] /usr/local/mysql/var perror 136
Error code 136:  Unknown error 136
136 = No more room in index file

Are you sure your file system can handle the size of your index
file ?

Regards,
Jocelyn
- Original Message -
From: Chris Stoughton [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, October 22, 2002 12:26 AM
Subject: Error 1034: 136 when fixing table from Create Index on
long table


 

I am running 3.23.52-Max under Linux.

I now have a table with 54M rows:

mysql select count(*) from targetTsObj;
+--+
| count(*) |
+--+
| 54549046 |
+--+
1 row in set (0.05 sec)

Creating an index on this takes 1 hour 10 minutes, with this error:

mysql  create index targetTsObjobjId on targetTsObj (objId);
ERROR 1034: 136 when fixing table

The describe command shows that no index has been built.

Previously, it had trouble building this same index, and complained
about not being able to open a file in /tmp
I suspect that /tmp was not large enough, so we changed the tmpdir
variable in my.cnf to point to a file system with *plenty* of room and
restarted the server.

myisamchk seems to have no complaints about this table:

bash-2.04$ myisamchk targetTsObj
Checking MyISAM file: targetTsObj
Data records: 54549046   Deleted blocks:   0
- check file-size
- check key delete-chain
- check record delete-chain
- check index reference
bash-2.04$ myisamchk -d targetTsObj

MyISAM file: targetTsObj
Record format:   Fixed length
Character set:   latin1 (8)
Data records: 54549046  Deleted blocks: 0
Recordlength: 2643

table description:
Key Start Len Index   Type
bash-2.04$

Please let me know what this error means, how to get around it, or what
additional information you need.  Thanks!

Here is the output of mysqladmin variable

   

+---------------------+-----------------------------------------------------------+
| Variable_name       | Value                                                     |
+---------------------+-----------------------------------------------------------+
| back_log            | 50                                                        |
| basedir             | /                                                         |
| bdb_cache_size      | 8388600                                                   |
| bdb_log_buffer_size | 262144                                                    |
| bdb_home            | /export/data/dp20.a/data/mysql/                           |
| bdb_max_lock        | 1                                                         |
| bdb_logdir          |                                                           |
| bdb_shared_data     | OFF                                                       |
| bdb_tmpdir          | /export/data/dp20.a/tmp/                                  |
| bdb_version         | Sleepycat Software: Berkeley DB 3.2.9a: (August 14, 2002) |
| binlog_cache_size   | 32768                                                     |
| character_set       | (output truncated in the archive)                         |

Error 1034: 136 when fixing table from Create Index on long table

2002-10-21 Thread Chris Stoughton
I am running 3.23.52-Max under Linux.

I now have a table with 54M rows:

mysql select count(*) from targetTsObj;
+--+
| count(*) |
+--+
| 54549046 |
+--+
1 row in set (0.05 sec)

Creating an index on this takes 1 hour 10 minutes, with this error:

mysql  create index targetTsObjobjId on targetTsObj (objId);
ERROR 1034: 136 when fixing table

The describe command shows that no index has been built.

Previously, it had trouble building this same index, and complained 
about not being able to open a file in /tmp
I suspect that /tmp was not large enough, so we changed the tmpdir 
variable in my.cnf to point to a file system with *plenty* of room and 
restarted the server.

myisamchk seems to have no complaints about this table:

bash-2.04$ myisamchk targetTsObj
Checking MyISAM file: targetTsObj
Data records: 54549046   Deleted blocks:   0
- check file-size
- check key delete-chain
- check record delete-chain
- check index reference
bash-2.04$ myisamchk -d targetTsObj

MyISAM file: targetTsObj
Record format:   Fixed length
Character set:   latin1 (8)
Data records: 54549046  Deleted blocks: 0
Recordlength: 2643

table description:
Key Start Len Index   Type
bash-2.04$

Please let me know what this error means, how to get around it, or what 
additional information you need.  Thanks!

Here is the output of mysqladmin variable
+---------------------+-----------------------------------------------------------+
| Variable_name       | Value                                                     |
+---------------------+-----------------------------------------------------------+
| back_log            | 50                                                        |
| basedir             | /                                                         |
| bdb_cache_size      | 8388600                                                   |
| bdb_log_buffer_size | 262144                                                    |
| bdb_home            | /export/data/dp20.a/data/mysql/                           |
| bdb_max_lock        | 1                                                         |
| bdb_logdir          |                                                           |
| bdb_shared_data     | OFF                                                       |
| bdb_tmpdir          | /export/data/dp20.a/tmp/                                  |
| bdb_version         | Sleepycat Software: Berkeley DB 3.2.9a: (August 14, 2002) |
| binlog_cache_size   | 32768                                                     |
| character_set       | latin1                                                    |
(remainder of output truncated in the archive)

Re: Error 1034: 136 when fixing table from Create Index on long table

2002-10-21 Thread Jocelyn Fournier
Hi,

[root@forum] /usr/local/mysql/var perror 136
Error code 136:  Unknown error 136
136 = No more room in index file

Are you sure your file system can handle the size of your index file ?
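
If the filesystem is not the limit, another possibility (an assumption on my part, not something verified in this thread) is that the MyISAM index file hit its internal pointer-size ceiling; that ceiling can be raised by declaring larger table options:

```sql
-- MAX_ROWS / AVG_ROW_LENGTH make MyISAM allocate wider internal
-- pointers, lifting the "no more room in index file" limit (error 136).
-- The values here are illustrative, sized from the 54M-row table.
ALTER TABLE targetTsObj MAX_ROWS = 100000000 AVG_ROW_LENGTH = 2643;
```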

Regards,
  Jocelyn
- Original Message -
From: Chris Stoughton [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, October 22, 2002 12:26 AM
Subject: Error 1034: 136 when fixing table from Create Index on long table


 I am running 3.23.52-Max under Linux.

 I now have a table with 54M rows:

 mysql select count(*) from targetTsObj;
 +--+
 | count(*) |
 +--+
 | 54549046 |
 +--+
 1 row in set (0.05 sec)

 Creating an index on this takes 1 hour 10 minutes, with this error:

 mysql  create index targetTsObjobjId on targetTsObj (objId);
 ERROR 1034: 136 when fixing table

 The describe command shows that no index has been built.

 Previously, it had trouble building this same index, and complained
 about not being able to open a file in /tmp
 I suspect that /tmp was not large enough, so we changed the tmpdir
 variable in my.cnf to point to a file system with *plenty* of roomw and
 restarted the server.

 myisamchk seems to have no complaints about this table:

 bash-2.04$ myisamchk targetTsObj
 Checking MyISAM file: targetTsObj
 Data records: 54549046   Deleted blocks:   0
 - check file-size
 - check key delete-chain
 - check record delete-chain
 - check index reference
 bash-2.04$ myisamchk -d targetTsObj

 MyISAM file: targetTsObj
 Record format:   Fixed length
 Character set:   latin1 (8)
 Data records: 54549046  Deleted blocks: 0
 Recordlength: 2643

 table description:
 Key Start Len Index   Type
 bash-2.04$

 Please let me know what this error means, how to get around it, or what
 additional information you need.  Thanks!

 Here is the output of mysqladmin variable

 +---------------------------------+-----------------------------------------------------------+
 | Variable_name                   | Value                                                     |
 +---------------------------------+-----------------------------------------------------------+
 | back_log                        | 50                                                        |
 | basedir                         | /                                                         |
 | bdb_cache_size                  | 8388600                                                   |
 | bdb_log_buffer_size             | 262144                                                    |
 | bdb_home                        | /export/data/dp20.a/data/mysql/                           |
 | bdb_max_lock                    | 1                                                         |
 | bdb_logdir                      |                                                           |
 | bdb_shared_data                 | OFF                                                       |
 | bdb_tmpdir                      | /export/data/dp20.a/tmp/                                  |
 | bdb_version                     | Sleepycat Software: Berkeley DB 3.2.9a: (August 14, 2002) |
 | binlog_cache_size               | 32768                                                     |
 | character_set                   | latin1                                                    |
 | character_sets                  | latin1 big5 czech euc_kr gb2312 gbk sjis tis620 ujis dec8 dos german1 hp8 koi8_ru latin2 swe7 usa7 cp1251 danish hebrew win1251 estonia hungarian koi8_ukr win1251ukr greek win1250 croat cp1257 latin5 |
 | concurrent_insert               | ON                                                        |
 | connect_timeout                 | 5                                                         |
 | datadir                         | /export/data/dp20.a/data/mysql/                           |
 | delay_key_write                 | ON                                                        |
 | delayed_insert_limit            | 100                                                       |
 | delayed_insert_timeout          | 300                                                       |
 | delayed_queue_size              | 1000                                                      |
 | flush                           | OFF                                                       |
 | flush_time                      | 0                                                         |
 | have_bdb                        | YES                                                       |
 | have_gemini                     | NO                                                        |
 | have_innodb                     | DISABLED                                                  |
 | have_isam                       | YES                                                       |
 | have_raid                       | NO                                                        |
 | have_openssl                    | NO                                                        |
 | init_file                       |                                                           |
 | innodb_additional_mem_pool_size | 1048576                                                   |
 | innodb_buffer_pool_size         | 8388608                                                   |
 | innodb_data_file_path           |                                                           |
 | innodb_data_home_dir            |                                                           |
 | innodb_file_io_threads          | 4                                                         |
 | innodb_force_recovery           | 0                                                         |
 | innodb_thread_concurrency       | 8                                                         |
 | innodb_flush_log_at_trx_commit  | 16777216                                                  |
 | innodb_fast_shutdown            | ON                                                        |
 | innodb_flush_method             |                                                           |
 | innodb_lock_wait_timeout        | 50                                                        |
 | innodb_log_arch_dir             |                                                           |
 | innodb_log_archive              | OFF                                                       |
 | innodb_log_buffer_size          | 1048576                                                   |
 | innodb_log_file_size            | 5242880                                                   |
 | innodb_log_files_in_group       | 2                                                         |
 | innodb_log_group_home_dir       |                                                           |
 | innodb_mirrored_log_groups      | 1                                                         |
 | interactive_timeout             | 28800                                                     |
 | join_buffer_size                | 131072                                                    |
 | key_buffer_size                 | 536866816                                                 |
 | language                        | /usr/share/mysql/english/                                 |
 | large_files_support             | (output truncated in the archive)                         |

I have a error message in MySQL using Create Index

2002-07-13 Thread Carlos Rovetto

Hi.  I have this error message in MySQL using Create Index

create index indice on products(productid(7))

Error on rename of '.\video\products.MYI' to '.\video\#sql2-fffa66cd-11.MYI' 
(Errcode: 13)

The ProductID field is of text type

Carlos


From: David Kramer [EMAIL PROTECTED]
To: 'Carlos Rovetto' [EMAIL PROTECTED], [EMAIL PROTECTED]
Subject: RE: Using Inner Join in MySQL
Date: Fri, 12 Jul 2002 12:38:12 -0700

It depends on the size of the tables that you are joining together.  Can 
you
index any of the columns that you are joining?  If you post the query that
you are trying to execute I might be able to help you more.

DK

-Original Message-
From: Carlos Rovetto [mailto:[EMAIL PROTECTED]]
Sent: Friday, July 12, 2002 12:32 PM
To: [EMAIL PROTECTED]
Subject: Using Inner Join in MySQL


Hi.  I want to know why the response time is slow when I make a join
between two tables using an SQL query in MySQL.

I don't have much experience with MySQL

Carlos
















Create Index - memory leak?

2001-09-20 Thread Will French

I have noticed some odd behavior in my mysql server (running build 41 on
Win2k).

I have a large table (4 million rows, 118 columns, about 1.7Gb on disk).  As
its primary key, this table has an auto increment int field called 'runid'.
The machine it runs on has 733Mhz cpu/1.25Gb RAM and a 60Gb ide drive (on
its own controller) dedicated to mysql.  While I have not set any parameters
to prevent others from logging in, I am the only one logged in and running
mysql client on that machine.  The data in this table is completely static
and I am building a number of indexes on it to speed some heavy-duty
analysis I intend to perform in the near future.

Enough of the background, here is the problem:
When I build two indexes consecutively using mysql command line, the memory
used by the first index build does not seem to be re-used by the second
index build.  I have all my memory set-variable quite high to take advantage
of the large amount of memory on my box.  If I bring up the task manager and
watch the memory-utilization for my-sql, I can watch the mu creep up, bit by
bit, during the first index build.  Then when I start the second index
build, this mu creep begins immediately as if all the memory allocated by
the first index build is still spoken for.

Is this behavior correct?  Is there some flush command or something I can
give it to avoid this?  The second build also takes substantially longer
(both in cpu and elapsed time) even when the specifics of the indexes are
very similar (same # and type of fields, both using the primary key (int)
field as the last column of the index).  I believe the discrepancy may be
because there is less memory available for the second index build to use for
its sort.

One last thing: I notice that when the index is being built, a temporary
file is created in the data directory.  What surprises me is that this temp
file grows to about the same size as the actual table.  This is probably by
design, but it seems inefficient in that the actual data size of the columns
being indexed is obviously much smaller than that of the whole table.  Why
doesn't it just extract the columns it needs along with something like a
unique record or page pointer and sort that?

Any light that can be shed will be much appreciated - even if it just points
me to some reference material that is relevant.
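
On the flush question above, one thing worth trying (an untested assumption for this build) is closing the tables between index builds, which releases the per-table resources the server is holding:

```sql
-- Closes all open tables and frees the associated buffers before
-- the next index build starts.
FLUSH TABLES;
```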

Thanks,

Will French






Re : Create Index Speed

2001-09-03 Thread Mark kirkwood

Thanks to everyone who replied - much appreciated.

Ok here is how it went :

1   Creating the index first and then loading the table   : saved 1 minute overall
2   Increasing key_buffer to 100M   : no difference
3   Increasing myisam_max_sort_file_size to 1000M and
    myisam_sort_buffer_size to 100M   : no difference

Note that the index concerned is about 80M (table about 300M )

so it looks like that is as fast as it goes (and pre-creating the index helps a 
little bit)
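
For reference, the variables varied in the test above would normally be set in the server's option file; a sketch of such a fragment, using the values quoted in the message:

```ini
[mysqld]
key_buffer=100M
myisam_sort_buffer_size=100M
myisam_max_sort_file_size=1000M
```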

regards

Mark





Index names chosen for CREATE INDEX etc. matter to MySQL ?

2001-07-25 Thread S A


Do the names of indexes have any significance to MySQL internals or is the name merely 
a convenience for the DB operator ?

For now I try to match the index names to the column names used but I'm wondering if 
the name matters at all.

thanks,

- Sam.





RE: Index names chosen for CREATE INDEX etc. matter to MySQL ?

2001-07-25 Thread Carsten H. Pedersen

 Do the names of indexes have any significance to MySQL internals 
 or is the name merely a convenience for the DB operator ?

Well, PRIMARY KEY does carry some significance... ;-)

Other than that, no. But you need them if you should ever
want to do an ALTER TABLE to change the index definitions.
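
For example, the chosen name is what a later statement refers to (a sketch; the names are illustrative):

```sql
-- The index is created under an explicit name...
CREATE INDEX title_idx ON article (title);
-- ...and that name is how it is addressed when restructuring later.
ALTER TABLE article DROP INDEX title_idx;
```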

 For now I try to match the index names to the column names used 
 but I'm wondering if the name matters at all.

They only matter as far as using logical names will help you keep 
your sanity when you need to restructure your DB three years hence.

/ Carsten
--
Carsten H. Pedersen
keeper and maintainer of the bitbybit.dk MySQL FAQ
http://www.bitbybit.dk/mysqlfaq

