RE: optimization strategies based on file-level storage

2011-06-17 Thread Wm Mussatto


On Fri, June 17, 2011 07:11, Jerry Schwartz wrote:
>>-Original Message-
>
>>What I was really trying to figure out was why it takes me 4 hours to
>>add a new column to my 22-million-row table, and whether a different
>>table design can avoid that problem.  That reply in the forum says,
>>"ALTER TABLE ... ADD COLUMN will always copy the entire table over,
>>and rebuild all the indexes. (And, effectively, do an OPTIMIZE.) Do
>>_not_ leave space for extra columns, it won't help."  I'm about to
>>reply and point out the trick that you suggested to me: create dummy
>>columns early and then just rename them later :)
>>
>> -Bennett
>>
> [JS] They will be too small, or the wrong type, or there won't be enough
> of them. Based upon 30+ years of database design, I'd bet money on it. ;-)
>
> Regards,
>
> Jerry Schwartz
> Global Information Incorporated
The only "alternative design" would be to create another table with the
added columns and a common key field, then lock the primary table and
populate the new table with the keys from the original table, and I'm not
convinced that would be any faster or less disruptive.
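
For illustration, a minimal sketch of that side-table approach (hypothetical
table and column names):

CREATE TABLE mytable_extra (
  `id` int(10) unsigned NOT NULL,    -- same key as the primary table
  `new_col` datetime default NULL,   -- the column being "added"
  PRIMARY KEY (`id`)
) ENGINE=MyISAM;

LOCK TABLES mytable READ, mytable_extra WRITE;
INSERT INTO mytable_extra (id) SELECT id FROM mytable;
UNLOCK TABLES;

-- Queries then pick up the new column with a join:
SELECT t.*, x.new_col
FROM mytable t JOIN mytable_extra x ON x.id = t.id;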
--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154


RE: optimization strategies based on file-level storage

2011-06-17 Thread Jerry Schwartz
>-Original Message-


>What I was really trying to figure out was why it takes me 4 hours to
>add a new column to my 22-million-row table, and whether a different
>table design can avoid that problem.  That reply in the forum says,
>"ALTER TABLE ... ADD COLUMN will always copy the entire table over,
>and rebuild all the indexes. (And, effectively, do an OPTIMIZE.) Do
>_not_ leave space for extra columns, it won't help."  I'm about to
>reply and point out the trick that you suggested to me: create dummy
>columns early and then just rename them later :)
>
> -Bennett
>
[JS] They will be too small, or the wrong type, or there won't be enough of 
them. Based upon 30+ years of database design, I'd bet money on it. ;-)

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: optimization strategies based on file-level storage

2011-06-17 Thread Bennett Haselton

At 10:47 PM 6/16/2011, Johan De Meersman wrote:



- Original Message -
> From: "Bennett Haselton" 
>
> Do you happen to know the answer to my other problem -- if I have
> TEXT and BLOB columns but all my other columns are fixed-length, can
> I still get the benefit of faster lookups resulting from fixed-length
> rows, if each row just contains a fixed-length reference to the
> actual TEXT or BLOB data which is stored elsewhere?

entirely unsure, but given the single-datafile structure of MyISAM 
tables, I strongly suspect BLOBS get stored inline.


I can't find a source that says for sure.
http://dev.mysql.com/doc/refman/5.0/en/storage-requirements.html
does say: "The maximum size of a row in a MyISAM table is 65,535 
bytes. This figure excludes BLOB or TEXT columns, which contribute 
only 9 to 12 bytes toward this size... For BLOB and TEXT data, the 
information is stored internally in a different area of memory than 
the row buffer."  But that's talking about memory, not disk.  When 
people talk about performance improvements from using fixed-length 
rows, are they talking primarily about memory or hard disk?


Hold up though, I just got this reply from posting the question in a forum:
http://forums.mysql.com/read.php?21,423433,423846
which says "Almost always the discussions recommending Fixed length 
records in MyISAM are myths. The apparent performance improvement is 
swamped by the performance loss of shoveling around the wasted 
padding bytes" and goes on to give reasons.


Actually, that does make sense that it's a myth.  I was surprised to 
hear so many sources claiming that there was a big performance 
increase from being able to find row N by jumping to position 
N*rowlength.  Because even with variable-length rows, you can just 
store a table associating row numbers with the position of the row in 
the file, can't you -- which would mean it would only take one 
near-instantaneous lookup to be able to jump to the row you're looking for.


What I was really trying to figure out was why it takes me 4 hours to 
add a new column to my 22-million-row table, and whether a different 
table design can avoid that problem.  That reply in the forum says, 
"ALTER TABLE ... ADD COLUMN will always copy the entire table over, 
and rebuild all the indexes. (And, effectively, do an OPTIMIZE.) Do 
_not_ leave space for extra columns, it won't help."  I'm about to 
reply and point out the trick that you suggested to me: create dummy 
columns early and then just rename them later :)


-Bennett 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: optimization strategies based on file-level storage

2011-06-16 Thread Johan De Meersman


- Original Message -
> From: "Bennett Haselton" 
> 
> Do you happen to know the answer to my other problem -- if I have
> TEXT and BLOB columns but all my other columns are fixed-length, can
> I still get the benefit of faster lookups resulting from fixed-length
> rows, if each row just contains a fixed-length reference to the
> actual TEXT or BLOB data which is stored elsewhere?

entirely unsure, but given the single-datafile structure of MyISAM tables, I 
strongly suspect BLOBS get stored inline.

For InnoDB, the answer appears to be "it varies": 
  If a row is less than half a page long, all of it is stored locally 
  within the page. If it exceeds half a page, variable-length columns 
  are chosen for external off-page storage until the row fits within 
  half a page. For a column chosen for off-page storage, InnoDB stores 
  the first 768 bytes locally in the row, and the rest externally into 
  overflow pages.

See http://dev.mysql.com/doc/refman/5.0/en/innodb-file-space.html for more on 
that.

Also, I *think* the concept of "fixed-length" rows is only applicable to
MyISAM; InnoDB has index-organised tables - that is to say, it stores all row
data in the leaves of the primary index. The consequence, of course, is that no
additional pointer lookup gets done for primary-key selects; the tradeoff is
that all non-primary-key lookups get detoured through the primary key.
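
For a rough illustration of the difference (hypothetical table and column
names, with a secondary index assumed on email):

-- Index-organized layout: the primary-key descent lands on the row itself.
SELECT * FROM t WHERE id = 42;

-- A secondary-index lookup finds the matching primary-key value in the
-- secondary index, then makes a second descent through the primary key:
SELECT * FROM t WHERE email = 'someone@example.com';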


The online documentation is really pretty good; but for the really low-level 
things, I guess the best documentation may be the source.


-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: optimization strategies based on file-level storage

2011-06-15 Thread Bennett Haselton

At 11:45 AM 6/14/2011, Johan De Meersman wrote:



- Original Message -
> From: "Bennett Haselton" 
>
> modifications.  (For example, the question I asked earlier about
> whether you can declare extra space at the end of each row that is
> "reserved for future columns".)

That question I can answer: you can't "reserve" space, but if you 
know what kind of rows you'll want to add later you can pre-add them 
(and incur the accompanying storage cost), and simply rename them 
appropriately later.


Thanks.  It would be more flexible if I could declare, say, 50 bytes, 
and decide later if I wanted to use them for a datetime, a char(n), 
or an int, but this is still helpful :)


Do you happen to know the answer to my other problem -- if I have 
TEXT and BLOB columns but all my other columns are fixed-length, can 
I still get the benefit of faster lookups resulting from fixed-length 
rows, if each row just contains a fixed-length reference to the 
actual TEXT or BLOB data which is stored elsewhere?


-Bennett 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: optimization strategies based on file-level storage

2011-06-14 Thread Johan De Meersman


- Original Message -
> From: "Bennett Haselton" 
> 
> modifications.  (For example, the question I asked earlier about
> whether you can declare extra space at the end of each row that is
> "reserved for future columns".)

That question I can answer: you can't "reserve" space, but if you know what 
kind of rows you'll want to add later you can pre-add them (and incur the 
accompanying storage cost), and simply rename them appropriately later.

ALTER TABLE will do a full re-write of your table when that is necessary (like 
adding or deleting columns), but will (likely) not do so when you're just 
renaming an existing column. Look hard enough on the web and you can find info 
on how to hack the .frm files, too, so some operations that would do a full 
re-write don't - but that's fishy business, and always at your own risk.
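
A minimal sketch of that pre-add-and-rename trick (hypothetical table and
column names):

-- Pay the full-rewrite cost once, up front, with spare columns:
ALTER TABLE mytable
  ADD COLUMN spare_int1 INT NULL,
  ADD COLUMN spare_dt1 DATETIME NULL;

-- Later, claiming a spare column is just a rename, which should not
-- trigger a full table rewrite:
ALTER TABLE mytable CHANGE COLUMN spare_int1 view_count INT NULL;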


-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: optimization strategies based on file-level storage

2011-06-14 Thread Bennett Haselton

At 05:46 AM 6/14/2011, Carlos Eduardo Caldi wrote:

Hello Bennett


The MySQL developer site has great documentation; try the links below.


http://dev.mysql.com/doc/refman/5.0/en/optimizing-database-structure.html
http://dev.mysql.com/doc/refman/5.0/en/data-size.html


Thanks, this gets me a little closer to the answer but doesn't really 
provide the level of detail that I'm looking for.  For example, it 
says: "For MyISAM tables, if you do not have any variable-length 
columns (VARCHAR, TEXT, or BLOB columns), a fixed-size row format is 
used. This is faster..."  I still don't understand: If TEXT and BLOB 
columns are stored not by putting the data in the row but by having 
the row store a reference to the TEXT/BLOB data stored somewhere 
else, then can't a row with TEXT and BLOB data types *still* be a 
fixed-size row, with the resulting increased speed?


My main motivation is that I have a table with 22 million records and 
it takes a few hours for me to add a new column to it.  I suspect 
this would be faster if I designed the table correctly from the 
beginning, and I want to change it to a smarter design, but I only 
want to do it once.  So I want to understand really thoroughly *why* 
a different design would make it faster to complete the table 
modifications.  (For example, the question I asked earlier about 
whether you can declare extra space at the end of each row that is 
"reserved for future columns".)




Regards,
Carlos,

> Date: Tue, 14 Jun 2011 01:44:47 -0700
> To: mysql@lists.mysql.com
> From: benn...@peacefire.org
> Subject: optimization strategies based on file-level storage
>
> I'm looking for some tips & tricks documentation that explains how
> different data types in rows are stored at the file level (in MyISAM
> tables, at least), and how to optimize tables for faster queries,
> updates, table definition modification, etc. based on this knowledge.
>
> For example, I've heard that if all of your columns are fixed-length,
> that makes it faster to find individual rows since row N is located
> at position N*rowlength in the file. (And, presumably, having just
> one variable-length column will slow things down considerably.) But
> I've also read that having TEXT and BLOB columns will also slow down
> row-finding operations. This seems to make no sense because I
> thought TEXT was not actually stored in the row, but the row just
> stored a constant-length reference to the TEXT whose actual data was
> stored somewhere else. Is this correct? Then is it incorrect to say
> that a TEXT column will slow down the locate-row-N operation,
> assuming all other columns are fixed-length? This is the kind of
> thing I'm looking for a document to explain.
>
> Another example: It sometimes takes me an extremely long time to add
> new columns to a table. What if I had a table with all fixed-length
> columns, and I "reserved" some space at the end of each row to be
> used for columns to be added in the future. Would it then be
> possible to add new columns much more quickly? You wouldn't have to
> move around the existing row data to make room for the new column
> (although presumably you would still have to *write* to the place in
> each row where the new column had just been defined, to fill it in
> with its default value).
>
> In particular, I'm not looking for a list of optimization tricks, so
> much as a document that explains how the rows are stored at the file
> level, and thereby explains how the optimization tricks *follow
> logically from* this information. The reason is that if I just have
> a grab-bag of optimization hints (of which I've found many on the
> Web), some of them will be not applicable to my situation, or just
> plain wrong, and I'll have no way of knowing which ones. But if you
> know *why* something works, you can more easily figure out if it
> applies to your situation.
>
> -Bennett
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe: http://lists.mysql.com/mysql?unsub=ce_ca...@hotmail.com
>



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: optimization

2010-01-26 Thread Kyong Kim
On Tue, Jan 26, 2010 at 11:23 AM, Keith Murphy  wrote:
> You absolutely *should not* convert the mysql database to InnoDB.
>
> Read the above sentence again :)
>
> All others, unless you had a specific reason not to do so, yes, I would
> convert them.
>
> keith
>
> On Tue, Jan 26, 2010 at 2:18 PM, John G. Heim  wrote:
>
>> Just to be clear, you're suggesting I convert all of the spamassassin,
>> drupal, and mediawiki tables to innodb too? Or just my own database? What
>> about the mysql database itself? I wouldn't convert those tables, would I?
>>
>> - Original Message - From: "Keith Murphy" 
>> To: 
>> Sent: Tuesday, January 26, 2010 11:06 AM
>> Subject: Re: optimization
>>
>>
>>
>> ♫
>> I would recommend the same to you about reading High Perf. MySQL as Baron,
>> et al wrote a great book about performance on MySQL. That being said, it
>> has
>> been my experience that in 99% of client cases they don't really need to
>> run
>> two different types of tables. If I were you, I would use InnoDB
>> exclusively
>> unless there is legitimate reason to do otherwise. In an environment that
>> is
>> running 25% writes and a decent query rate you are bound to have contention
>> issues with MyISAM. While there are always going to be edge cases for
>> MyISAM, your default should be innodb and your config should reflect this.
>>
>> Changing your tables to InnoDB is a simple ALTER TABLE which you can script
>> if there are a number of tables to convert.  Allocate as much of your
>> available RAM as possible to the innodb_buffer_pool (typically 50 - 80% of
>> total RAM) and I bet you would see a dramatic difference. That is
>> simplifying things somewhat, but should give an idea.
>>
>>
>> keith
>>
>>
>> On Tue, Jan 26, 2010 at 11:53 AM, mos  wrote:
>>
>>  Get yourself a copy of the book High Performance MySQL 2nd Edition
>>> Publisher: O'Reilly Media; 2 edition (June 18, 2008) Language: English
>>> ISBN-10: 0596101716 ISBN-13: 978-0596101718
>>>
>>> Here is a brief preview of the first edition:
>>>
>>> http://books.google.ca/books?id=iaCCQ13_zMIC&printsec=frontcover&dq=high+performance+mysql&cd=1#v=onepage&q=&f=false
>>>
>>> Mike
>>>
>>> At 10:19 AM 1/26/2010, John G. Heim wrote:
>>>
>>>  From: "Jaime Crespo Rincón" 
>>>>
>>>> Sent: Monday, January 25, 2010 5:30 PM
>>>>
>>>>
>>>> 2010/1/25 John G. Heim :
>>>>
>>>>  I have a server with 16Gb of RAM and a dual-core 2Ghz processor. It is
>>>>> running the latest mysql-server from debian lenny (5.0.1). I have
>>>>> databases
>>>>> for drupal, moodle, spamassassin, horde3, and a small database for
>>>>> departmental stuff.
>>>>>
>>>>> The problem is that inserts/updates are sometimes very slow, on the
>>>>> order
>>>>> of
>>>>> a minute. I am hoping somebody can spot something wrong in my config.
>>>>> Here's the optimization settings section (for your convenience). The
>>>>> whole
>>>>> my.cnf is reproduced below that:
>>>>>
>>>>>
>>>> Are your databases using MyISAM or InnoDB?
>>>>
>>>> Both. Maybe that's the problem? I started creating database tables for my
>>>> own web apps with the default mysql configuration. I believe the default
>>>> database engine is MyISAM. But then I wanted to use foreign keys and I
>>>> saw
>>>> that it required me to use  InnoDB. So I converted some tables to InnoDB
>>>> but
>>>> not all. Maybe it was a mistake not to convert all of them.
>>>>
>>>> After that, I installed drupal, moodle, and mediawiki. I haven't looked
>>>> at
>>>> what kind of tables those packages create. They may not specify it and
>>>> the
>>>> default is still whatever it is when you install mysql, MyISAM I think.
>>>>
>>>>  * If MyISAM, you could be suffering contention problems on writes
>>>>
>>>>>
>>>>>  because of full table locks. No easy solution but engine change or
>>>> database sharding. Also key_buffer (and the other buffers) could be
>>>> too small for 16GB of RAM. Are you really using more than 10% of it?
>>>> You could also disable other engi

Re: optimization

2010-01-26 Thread Keith Murphy
You absolutely *should not* convert the mysql database to InnoDB.

Read the above sentence again :)

All others, unless you had a specific reason not to do so, yes, I would
convert them.

keith

On Tue, Jan 26, 2010 at 2:18 PM, John G. Heim  wrote:

> Just to be clear, you're suggesting I convert all of the spamassassin,
> drupal, and mediawiki tables to innodb too? Or just my own database? What
> about the mysql database itself? I wouldn't convert those tables, would I?
>
> - Original Message - From: "Keith Murphy" 
> To: 
> Sent: Tuesday, January 26, 2010 11:06 AM
> Subject: Re: optimization
>
>
>
> ♫
> I would recommend the same to you about reading High Perf. MySQL as Baron,
> et al wrote a great book about performance on MySQL. That being said, it
> has
> been my experience that in 99% of client cases they don't really need to
> run
> two different types of tables. If I were you, I would use InnoDB
> exclusively
> unless there is legitimate reason to do otherwise. In an environment that
> is
> running 25% writes and a decent query rate you are bound to have contention
> issues with MyISAM. While there are always going to be edge cases for
> MyISAM, your default should be innodb and your config should reflect this.
>
> Changing your tables to InnoDB is a simple ALTER TABLE which you can script
> if there are a number of tables to convert.  Allocate as much of your
> available RAM as possible to the innodb_buffer_pool (typically 50 - 80% of
> total RAM) and I bet you would see a dramatic difference. That is
> simplifying things somewhat, but should give an idea.
>
>
> keith
>
>
> On Tue, Jan 26, 2010 at 11:53 AM, mos  wrote:
>
>  Get yourself a copy of the book High Performance MySQL 2nd Edition
>> Publisher: O'Reilly Media; 2 edition (June 18, 2008) Language: English
>> ISBN-10: 0596101716 ISBN-13: 978-0596101718
>>
>> Here is a brief preview of the first edition:
>>
>> http://books.google.ca/books?id=iaCCQ13_zMIC&printsec=frontcover&dq=high+performance+mysql&cd=1#v=onepage&q=&f=false
>>
>> Mike
>>
>> At 10:19 AM 1/26/2010, John G. Heim wrote:
>>
>>  From: "Jaime Crespo Rincón" 
>>>
>>> Sent: Monday, January 25, 2010 5:30 PM
>>>
>>>
>>> 2010/1/25 John G. Heim :
>>>
>>>  I have a server with 16Gb of RAM and a dual-core 2Ghz processor. It is
>>>> running the latest mysql-server from debian lenny (5.0.1). I have
>>>> databases
>>>> for drupal, moodle, spamassassin, horde3, and a small database for
>>>> departmental stuff.
>>>>
>>>> The problem is that inserts/updates are sometimes very slow, on the
>>>> order
>>>> of
>>>> a minute. I am hoping somebody can spot something wrong in my config.
>>>> Here's the optimization settings section (for your convenience). The
>>>> whole
>>>> my.cnf is reproduced below that:
>>>>
>>>>
>>> Are your databases using MyISAM or InnoDB?
>>>
>>> Both. Maybe that's the problem? I started creating database tables for my
>>> own web apps with the default mysql configuration. I believe the default
>>> database engine is MyISAM. But then I wanted to use foreign keys and I
>>> saw
>>> that it required me to use  InnoDB. So I converted some tables to InnoDB
>>> but
>>> not all. Maybe it was a mistake not to convert all of them.
>>>
>>> After that, I installed drupal, moodle, and mediawiki. I haven't looked
>>> at
>>> what kind of tables those packages create. They may not specify it and
>>> the
>>> default is still whatever it is when you install mysql, MyISAM I think.
>>>
>>>  * If MyISAM, you could be suffering contention problems on writes
>>>
>>>>
>>>>  because of full table locks. No easy solution but engine change or
>>> database sharding. Also key_buffer (and the other buffers) could be
>>> too small for 16GB of RAM. Are you really using more than 10% of it?
>>> You could also disable other engines if unused.
>>> * If InnoDB, you have not set innodb_buffer_pool_size nor log size.
>>> You could increase the pool to >50% of ram available.
>>>
>>> Those are very general suggestions. It depends a lot on your hardware
>>> (slow storage?), other apps installed on the same machine or the load
>>> of the server, among others.
>>>
>>>
>>> Well, it could be that the disks aren't real fas

Re: optimization

2010-01-26 Thread John G. Heim
Just to be clear, you're suggesting I convert all of the spamassassin, 
drupal, and mediawiki tables to innodb too? Or just my own database? What 
about the mysql database itself? I wouldn't convert those tables, would I?


- Original Message - 
From: "Keith Murphy" 

To: 
Sent: Tuesday, January 26, 2010 11:06 AM
Subject: Re: optimization


♫
I would recommend the same to you about reading High Perf. MySQL as Baron,
et al wrote a great book about performance on MySQL. That being said, it has
been my experience that in 99% of client cases they don't really need to run
two different types of tables. If I were you, I would use InnoDB exclusively
unless there is legitimate reason to do otherwise. In an environment that is
running 25% writes and a decent query rate you are bound to have contention
issues with MyISAM. While there are always going to be edge cases for
MyISAM, your default should be innodb and your config should reflect this.

Changing your tables to InnoDB is a simple ALTER TABLE which you can script
if there are a number of tables to convert.  Allocate as much of your
available RAM as possible to the innodb_buffer_pool (typically 50 - 80% of
total RAM) and I bet you would see a dramatic difference. That is
simplifying things somewhat, but should give an idea.


keith


On Tue, Jan 26, 2010 at 11:53 AM, mos  wrote:


Get yourself a copy of the book High Performance MySQL 2nd Edition
Publisher: O'Reilly Media; 2 edition (June 18, 2008) Language: English
ISBN-10: 0596101716 ISBN-13: 978-0596101718

Here is a brief preview of the first edition:
http://books.google.ca/books?id=iaCCQ13_zMIC&printsec=frontcover&dq=high+performance+mysql&cd=1#v=onepage&q=&f=false

Mike

At 10:19 AM 1/26/2010, John G. Heim wrote:


From: "Jaime Crespo Rincón" 

Sent: Monday, January 25, 2010 5:30 PM


2010/1/25 John G. Heim :


I have a server with 16Gb of RAM and a dual-core 2Ghz processor. It is
running the latest mysql-server from debian lenny (5.0.1). I have databases
for drupal, moodle, spamassassin, horde3, and a small database for
departmental stuff.

The problem is that inserts/updates are sometimes very slow, on the order
of a minute. I am hoping somebody can spot something wrong in my config.
Here's the optimization settings section (for your convenience). The whole
my.cnf is reproduced below that:



Are your databases using MyISAM or InnoDB?

Both. Maybe that's the problem? I started creating database tables for my
own web apps with the default mysql configuration. I believe the default
database engine is MyISAM. But then I wanted to use foreign keys and I saw
that it required me to use InnoDB. So I converted some tables to InnoDB but
not all. Maybe it was a mistake not to convert all of them.

After that, I installed drupal, moodle, and mediawiki. I haven't looked at
what kind of tables those packages create. They may not specify it and the
default is still whatever it is when you install mysql, MyISAM I think.

* If MyISAM, you could be suffering contention problems on writes
because of full table locks. No easy solution but engine change or
database sharding. Also key_buffer (and the other buffers) could be
too small for 16GB of RAM. Are you really using more than 10% of it?
You could also disable other engines if unused.
* If InnoDB, you have not set innodb_buffer_pool_size nor log size.
You could increase the pool to >50% of ram available.

Those are very general suggestions. It depends a lot on your hardware
(slow storage?), other apps installed on the same machine or the load
of the server, among others.


Well, it could be that the disks aren't real fast. The server is also
running a print server (lprng). I don't think that's very CPU intensive but
it might be slowing down writes.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=mo...@fastmail.fm




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/mysql?unsub=bmur...@paragon-cs.com





--
Chief Training Officer
Paragon Consulting Services
850-637-3877


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: optimization

2010-01-26 Thread Keith Murphy
♫
I would recommend the same to you about reading High Perf. MySQL as Baron,
et al wrote a great book about performance on MySQL. That being said, it has
been my experience that in 99% of client cases they don't really need to run
two different types of tables. If I were you, I would use InnoDB exclusively
unless there is legitimate reason to do otherwise. In an environment that is
running 25% writes and a decent query rate you are bound to have contention
issues with MyISAM. While there are always going to be edge cases for
MyISAM, your default should be innodb and your config should reflect this.

Changing your tables to InnoDB is a simple ALTER TABLE which you can script
if there are a number of tables to convert.  Allocate as much of your
available RAM as possible to the innodb_buffer_pool (typically 50 - 80% of
total RAM) and I bet you would see a dramatic difference. That is
simplifying things somewhat, but should give an idea.
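
As a sketch, the conversion statements can be generated from
INFORMATION_SCHEMA (excluding the mysql schema, per the warning earlier in
the thread; the schema names listed are placeholders):

SELECT CONCAT('ALTER TABLE `', TABLE_SCHEMA, '`.`', TABLE_NAME,
              '` ENGINE=InnoDB;')
FROM INFORMATION_SCHEMA.TABLES
WHERE ENGINE = 'MyISAM'
  AND TABLE_SCHEMA NOT IN ('mysql', 'information_schema');

Running that SELECT produces one ALTER statement per table, which can then
be reviewed and executed.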


keith


On Tue, Jan 26, 2010 at 11:53 AM, mos  wrote:

> Get yourself a copy of the book High Performance MySQL 2nd Edition
> Publisher: O'Reilly Media; 2 edition (June 18, 2008) Language: English
> ISBN-10: 0596101716 ISBN-13: 978-0596101718
>
> Here is a brief preview of the first edition:
> http://books.google.ca/books?id=iaCCQ13_zMIC&printsec=frontcover&dq=high+performance+mysql&cd=1#v=onepage&q=&f=false
>
> Mike
>
> At 10:19 AM 1/26/2010, John G. Heim wrote:
>
>> From: "Jaime Crespo Rincón" 
>>
>> Sent: Monday, January 25, 2010 5:30 PM
>>
>>
>> 2010/1/25 John G. Heim :
>>
>>> I have a server with 16Gb of RAM and a dual-core 2Ghz processor. It is
>>> running the latest mysql-server from debian lenny (5.0.1). I have
>>> databases
>>> for drupal, moodle, spamassassin, horde3, and a small database for
>>> departmental stuff.
>>>
>>> The problem is that inserts/updates are sometimes very slow, on the order
>>> of
>>> a minute. I am hoping somebody can spot something wrong in my config.
>>> Here's the optimization settings section (for your convenience). The
>>> whole
>>> my.cnf is reproduced below that:
>>>
>>
>> Are your databases using MyISAM or InnoDB?
>>
>> Both. Maybe that's the problem? I started creating database tables for my
>> own web apps with the default mysql configuration. I believe the default
>> database engine is MyISAM. But then I wanted to use foreign keys and I saw
>> that it required me to use  InnoDB. So I converted some tables to InnoDB but
>> not all. Maybe it was a mistake not to convert all of them.
>>
>> After that, I installed drupal, moodle, and mediawiki. I haven't looked at
>> what kind of tables those packages create. They may not specify it and the
>> default is still whatever it is when you install mysql, MyISAM I think.
>>
>>  * If MyISAM, you could be suffering contention problems on writes
>>>
>> because of full table locks. No easy solution but engine change or
>> database sharding. Also key_buffer (and the other buffers) could be
>> too small for 16GB of RAM. Are you really using more than 10% of it?
>> You could also disable other engines if unused.
>> * If InnoDB, you have not set innodb_buffer_pool_size nor log size.
>> You could increase the pool to >50% of ram available.
>>
>> Those are very general suggestions. It depends a lot on your hardware
>> (slow storage?), other apps installed on the same machine or the load
>> of the server, among others.
>>
>>
>> Well, it could be that the disks aren't real fast. The server is also
>> running a print server (lprng). I don't think that's very CPU intensive but
>> it might be slowing down writes.
>>
>> --
>> MySQL General Mailing List
>> For list archives: http://lists.mysql.com/mysql
>> To unsubscribe:http://lists.mysql.com/mysql?unsub=mo...@fastmail.fm
>>
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/mysql?unsub=bmur...@paragon-cs.com
>
>


-- 
Chief Training Officer
Paragon Consulting Services
850-637-3877


Re: optimization

2010-01-26 Thread mos

Get yourself a copy of the book High Performance MySQL 2nd Edition
Publisher: O'Reilly Media; 2 edition (June 18, 2008) Language: English 
ISBN-10: 0596101716 ISBN-13: 978-0596101718


Here is a brief preview of the first edition: 
http://books.google.ca/books?id=iaCCQ13_zMIC&printsec=frontcover&dq=high+performance+mysql&cd=1#v=onepage&q=&f=false


Mike

At 10:19 AM 1/26/2010, John G. Heim wrote:

From: "Jaime Crespo Rincón" 
Sent: Monday, January 25, 2010 5:30 PM


2010/1/25 John G. Heim :

I have a server with 16Gb of RAM and a dual-core 2Ghz processor. It is
running the latest mysql-server from debian lenny (5.0.1). I have databases
for drupal, moodle, spamassassin, horde3, and a small database for
departmental stuff.

The problem is that inserts/updates are sometimes very slow, on the order of
a minute. I am hoping somebody can spot something wrong in my config.
Here's the optimization settings section (for your convenience). The whole
my.cnf is reproduced below that:


Are your databases using MyISAM or InnoDB?

Both. Maybe that's the problem? I started creating database tables for my 
own web apps with the default mysql configuration. I believe the default 
database engine is MyISAM. But then I wanted to use foreign keys and I saw 
that it required me to use  InnoDB. So I converted some tables to InnoDB 
but not all. Maybe it was a mistake not to convert all of them.


After that, I installed drupal, moodle, and mediawiki. I haven't looked at 
what kind of tables those packages create. They may not specify it and the 
default is still whatever it is when you install mysql, MyISAM I think.



* If MyISAM, you could be suffering contention problems on writes

because of full table locks. No easy solution but engine change or
database sharding. Also key_buffer (and the other buffers) could be
too small for 16GB of RAM. Are you really using more than 10% of it?
You could also disable other engines if unused.
* If InnoDB, you have not set innodb_buffer_pool_size nor log size.
You could increase the pool to >50% of ram available.

Those are very general suggestions. It depends a lot on your hardware
(slow storage?), other apps installed on the same machine or the load
of the server, among others.


Well, it could be that the disks aren't real fast. The server is also 
running a print server (lprng). I don't think that's very CPU intensive 
but it might be slowing down writes.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=mo...@fastmail.fm



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: optimization

2010-01-26 Thread John G. Heim

From: "Jaime Crespo Rincón" 
Sent: Monday, January 25, 2010 5:30 PM


2010/1/25 John G. Heim :

I have a server with 16Gb of RAM and a dual-core 2Ghz processor. It is
running the latest mysql-server from debian lenny (5.0.1). I have databases
for drupal, moodle, spamassassin, horde3, and a small database for
departmental stuff.

The problem is that inserts/updates are sometimes very slow, on the order
of a minute. I am hoping somebody can spot something wrong in my config.
Here's the optimization settings section (for your convenience). The whole
my.cnf is reproduced below that:


Are your databases using MyISAM or InnoDB?

Both. Maybe that's the problem? I started creating database tables for my 
own web apps with the default mysql configuration. I believe the default 
database engine is MyISAM. But then I wanted to use foreign keys and I saw 
that it required me to use  InnoDB. So I converted some tables to InnoDB but 
not all. Maybe it was a mistake not to convert all of them.


After that, I installed drupal, moodle, and mediawiki. I haven't looked at 
what kind of tables those packages create. They may not specify it and the 
default is still whatever it is when you install mysql, MyISAM I think.



* If MyISAM, you could be suffering contention problems on writes

because of full table locks. No easy solution but engine change or
database sharding. Also key_buffer (and the other buffers) could be
too small for 16GB of RAM. Are you really using more than 10% of it?
You could also disable other engines if unused.
* If InnoDB, you have not set innodb_buffer_pool_size nor log size.
You could increase the pool to >50% of ram available.

Those are very general suggestions. It depends a lot on your hardware
(slow storage?), other apps installed on the same machine or the load
of the server, among others.


Well, it could be that the disks aren't real fast. The server is also 
running a print server (lprng). I don't think that's very CPU intensive but 
it might be slowing down writes. 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: optimization

2010-01-25 Thread Jaime Crespo Rincón
2010/1/25 John G. Heim :
> I have a server with 16Gb of RAM and a dual-core  2Ghz processor.  It is
> running the latest mysql-server from debian lenny (5.0.1). I have databases
> for drupal, moodle, spamassassin, horde3, and a small database for
> departmental stuff.
>
> The problem is that inserts/updates are sometimes very slow, on the order of
> a minute. I am hoping somebody can spot something wrong in my config.
> Here's the optimization settings section (for your convenience). The whole
> my.cnf is reproduced below that:

Are your databases using MyISAM or InnoDB?

* If MyISAM, you could be suffering contention problems on writes
because of full table locks. No easy solution but engine change or
database sharding. Also key_buffer (and the other buffers) could be
too small for 16GB of RAM. Are you really using more than 10% of it?
You could also disable other engines if unused.
* If InnoDB, you have not set innodb_buffer_pool_size nor log size.
You could increase the pool to >50% of ram available.

Those are very general suggestions. It depends a lot on your hardware
(slow storage?), other apps installed on the same machine or the load
of the server, among others.
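
For illustration, a sketch of the settings in question (the 8G figure is a
placeholder for roughly half the RAM of a 16GB box; in 5.0 the buffer pool
size is not changeable at runtime, so it goes in my.cnf and takes a restart):

-- Check the current values first:
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW VARIABLES LIKE 'key_buffer_size';

-- Then in my.cnf, under [mysqld]:
--   innodb_buffer_pool_size = 8G
--   innodb_log_file_size    = 256M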

There is also a tool that serves as an introduction to tuning MySQL server
variables: Tuning Primer - https://launchpad.net/mysql-tuning-primer
It is a very general tool, but it can be helpful as a starting point.

Regards,
-- 
Jaime Crespo
MySQL & Java Instructor
Warp Networks


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Optimization suggestions

2009-12-14 Thread sudhir543-nimavat
The tables have varchar(36) primary keys, because I use UUIDs and not
auto-increment columns.
I have verified all the queries that are going to run on the tables using
EXPLAIN, and it shows the correct index being used.
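
For example (hypothetical literal values), that verification on the merge
table might look like this, with the key column expected to name
`inboxfolderindex`:

EXPLAIN SELECT id, subject
FROM messages
WHERE folderid = 'some-folder-uuid' AND toid = 42 AND totype = 'user';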

Thanks
SN

 
  

Sudhir Nimavat
Senior software engineer.
Quick start global PVT LTD.
Baroda - 390007
Gujarat, India

Personally I'm always ready to learn, although I do not always like being taught
  





From: Gavin Towey 
To: Sudhir N ; Mysql 
Sent: Tue, 15 December, 2009 2:16:53 AM
Subject: RE: Optimization suggestions

Id should probably be an auto_incrementing INT; if you still need a unique text
identifier, then I would make a separate field.  Though my opinion isn't the 
only way; there is much debate on natural vs. surrogate keys.

I would normalize "folderid" into a lookup in another table, and make folderid 
an INT value.

Threadid is another field that would probably be better as an INT.

As for your indexes, they depend completely on what type of queries you're 
going to be running.  Once you know that, then you can test them using sample 
data and EXPLAIN.

http://dev.mysql.com/doc/refman/5.0/en/explain.html 
http://dev.mysql.com/doc/refman/5.0/en/mysql-indexes.html 

About normalization:
http://dev.mysql.com/tech-resources/articles/intro-to-normalization.html 


Regards,
Gavin Towey


-Original Message-
From: Sudhir N [mailto:sudhir_nima...@yahoo.com]
Sent: Monday, December 14, 2009 10:31 AM
To: Mysql
Subject: Optimization suggestions

I have the following table structure; I have to use the merge storage engine.
Please have a look, and provide feedback if there's something wrong or if
there's room for optimization.


/*Table structure for table `messages2009` */

CREATE TABLE `messages2009` (
  `id` varchar(36) NOT NULL default '',
  `folderid` varchar(36) NOT NULL default '',
  `fromid` int(11) NOT NULL default '0',
  `fromtype` varchar(10) NOT NULL default '',
  `toid` int(11) NOT NULL default '0',
  `totype` varchar(10) NOT NULL default '',
  `subject` varchar(255) default NULL,
  `body` text,
  `readbyrecipient` tinyint(1) NOT NULL default '0',
  `deletedbyauthor` tinyint(1) NOT NULL default '0',
  `deletedbyrecipient` tinyint(1) NOT NULL default '0',
  `threadid` varchar(36) NOT NULL default '',
  `senttime` timestamp NOT NULL default CURRENT_TIMESTAMP,
  PRIMARY KEY  (`id`),
  KEY `folderid` (`folderid`),
  KEY `threadid` (`threadid`),
  KEY `inboxfolderindex` (`folderid`,`toid`,`totype`),
  KEY `sentitemsindex` (`fromid`,`fromtype`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

/*Table structure for table `messages` */
/*Merge table definition that covers all message tables*/

CREATE TABLE `messages` (
  `id` varchar(36) NOT NULL default '',
  `folderid` varchar(36) NOT NULL default '',
  `fromid` int(11) NOT NULL default '0',
  `fromtype` varchar(10) NOT NULL default '',
  `toid` int(11) NOT NULL default '0',
  `totype` varchar(10) NOT NULL default '',
  `subject` varchar(255) default NULL,
  `body` text,
  `readbyrecipient` tinyint(1) NOT NULL default '0',
  `deletedbyauthor` tinyint(1) NOT NULL default '0',
  `deletedbyrecipient` tinyint(1) NOT NULL default '0',
  `threadid` varchar(36) NOT NULL default '',
  `senttime` timestamp NOT NULL default CURRENT_TIMESTAMP,
  PRIMARY KEY  (`id`),
  KEY `folderid` (`folderid`),
  KEY `threadid` (`threadid`),
  KEY `inboxfolderindex` (`folderid`,`toid`,`totype`),
  KEY `sentitemsindex` (`fromid`,`fromtype`)
) ENGINE=MRG_MyISAM DEFAULT CHARSET=latin1 INSERT_METHOD=LAST 
UNION=(`messages2009`);






Sudhir Nimavat
Senior software engineer.
Quick start global PVT LTD.
Baroda - 390007
Gujarat, India

Personally I'm always ready to learn, although I do not always like being taught







RE: Optimization suggestions

2009-12-14 Thread Gavin Towey
Id should probably be an auto_incrementing INT; if you still need a unique text
identifier, then I would make a separate field.  Though my opinion isn't the 
only way; there is much debate on natural vs. surrogate keys.

I would normalize "folderid" into a lookup in another table, and make folderid 
an INT value.

Threadid is another field that would probably be better as an INT.

As for your indexes, they depend completely on what type of queries you're 
going to be running.  Once you know that, then you can test them using sample 
data and EXPLAIN.

http://dev.mysql.com/doc/refman/5.0/en/explain.html
http://dev.mysql.com/doc/refman/5.0/en/mysql-indexes.html

About normalization:
http://dev.mysql.com/tech-resources/articles/intro-to-normalization.html
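
A sketch of those suggestions taken together (hypothetical names; the UUID
is kept as a separate unique column, and only a few of the original columns
are shown):

CREATE TABLE folders (
  folder_id   INT UNSIGNED NOT NULL AUTO_INCREMENT,
  folder_uuid CHAR(36) NOT NULL,
  PRIMARY KEY (folder_id),
  UNIQUE KEY folder_uuid (folder_uuid)
) ENGINE=MyISAM;

CREATE TABLE messages2009 (
  id        INT UNSIGNED NOT NULL AUTO_INCREMENT,
  uuid      CHAR(36) NOT NULL,       -- the old text identifier
  folder_id INT UNSIGNED NOT NULL,   -- lookup into folders
  thread_id INT UNSIGNED NOT NULL,
  subject   VARCHAR(255) DEFAULT NULL,
  body      TEXT,
  PRIMARY KEY (id),
  UNIQUE KEY uuid (uuid),
  KEY folder_id (folder_id),
  KEY thread_id (thread_id)
) ENGINE=MyISAM;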


Regards,
Gavin Towey


-Original Message-
From: Sudhir N [mailto:sudhir_nima...@yahoo.com]
Sent: Monday, December 14, 2009 10:31 AM
To: Mysql
Subject: Optimization suggestions

I have the following table structure; I have to use the merge storage engine.
Please have a look, and provide feedback if there's something wrong or if
there's room for optimization.


/*Table structure for table `messages2009` */

CREATE TABLE `messages2009` (
  `id` varchar(36) NOT NULL default '',
  `folderid` varchar(36) NOT NULL default '',
  `fromid` int(11) NOT NULL default '0',
  `fromtype` varchar(10) NOT NULL default '',
  `toid` int(11) NOT NULL default '0',
  `totype` varchar(10) NOT NULL default '',
  `subject` varchar(255) default NULL,
  `body` text,
  `readbyrecipient` tinyint(1) NOT NULL default '0',
  `deletedbyauthor` tinyint(1) NOT NULL default '0',
  `deletedbyrecipient` tinyint(1) NOT NULL default '0',
  `threadid` varchar(36) NOT NULL default '',
  `senttime` timestamp NOT NULL default CURRENT_TIMESTAMP,
  PRIMARY KEY  (`id`),
  KEY `folderid` (`folderid`),
  KEY `threadid` (`threadid`),
  KEY `inboxfolderindex` (`folderid`,`toid`,`totype`),
  KEY `sentitemsindex` (`fromid`,`fromtype`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

/*Table structure for table `messages` */
/*Merge table definition that covers all message tables*/

CREATE TABLE `messages` (
  `id` varchar(36) NOT NULL default '',
  `folderid` varchar(36) NOT NULL default '',
  `fromid` int(11) NOT NULL default '0',
  `fromtype` varchar(10) NOT NULL default '',
  `toid` int(11) NOT NULL default '0',
  `totype` varchar(10) NOT NULL default '',
  `subject` varchar(255) default NULL,
  `body` text,
  `readbyrecipient` tinyint(1) NOT NULL default '0',
  `deletedbyauthor` tinyint(1) NOT NULL default '0',
  `deletedbyrecipient` tinyint(1) NOT NULL default '0',
  `threadid` varchar(36) NOT NULL default '',
  `senttime` timestamp NOT NULL default CURRENT_TIMESTAMP,
  PRIMARY KEY  (`id`),
  KEY `folderid` (`folderid`),
  KEY `threadid` (`threadid`),
  KEY `inboxfolderindex` (`folderid`,`toid`,`totype`),
  KEY `sentitemsindex` (`fromid`,`fromtype`)
) ENGINE=MRG_MyISAM DEFAULT CHARSET=latin1 INSERT_METHOD=LAST 
UNION=(`messages2009`);






Sudhir Nimavat
Senior software engineer.
Quick start global PVT LTD.
Baroda - 390007
Gujarat, India

Personally I'm always ready to learn, although I do not always like being taught





Re: Optimization suggestions

2009-12-14 Thread Jim Lyons
After one very quick look, the index on folderid alone is unnecessary since
you have another index in which that field is the high-order field.
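
In other words (a sketch against the posted schema):

-- `inboxfolderindex` (folderid, toid, totype) already serves
-- folderid-only lookups, so the single-column index can be dropped:
ALTER TABLE messages2009 DROP INDEX folderid;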

On Mon, Dec 14, 2009 at 12:31 PM, Sudhir N  wrote:

> I have the following table structure; I have to use the merge storage engine.
> Please have a look, and provide feedback if there's something wrong or if
> there's room for optimization.
>
>
> /*Table structure for table `messages2009` */
>
> CREATE TABLE `messages2009` (
>  `id` varchar(36) NOT NULL default '',
>  `folderid` varchar(36) NOT NULL default '',
>  `fromid` int(11) NOT NULL default '0',
>  `fromtype` varchar(10) NOT NULL default '',
>  `toid` int(11) NOT NULL default '0',
>  `totype` varchar(10) NOT NULL default '',
>  `subject` varchar(255) default NULL,
>  `body` text,
>  `readbyrecipient` tinyint(1) NOT NULL default '0',
>  `deletedbyauthor` tinyint(1) NOT NULL default '0',
>  `deletedbyrecipient` tinyint(1) NOT NULL default '0',
>  `threadid` varchar(36) NOT NULL default '',
>  `senttime` timestamp NOT NULL default CURRENT_TIMESTAMP,
>  PRIMARY KEY  (`id`),
>  KEY `folderid` (`folderid`),
>  KEY `threadid` (`threadid`),
>  KEY `inboxfolderindex` (`folderid`,`toid`,`totype`),
>  KEY `sentitemsindex` (`fromid`,`fromtype`)
> ) ENGINE=MyISAM DEFAULT CHARSET=latin1;
>
> /*Table structure for table `messages` */
> /*Merge table definition that covers all message tables*/
>
> CREATE TABLE `messages` (
>  `id` varchar(36) NOT NULL default '',
>  `folderid` varchar(36) NOT NULL default '',
>  `fromid` int(11) NOT NULL default '0',
>  `fromtype` varchar(10) NOT NULL default '',
>  `toid` int(11) NOT NULL default '0',
>  `totype` varchar(10) NOT NULL default '',
>  `subject` varchar(255) default NULL,
>  `body` text,
>  `readbyrecipient` tinyint(1) NOT NULL default '0',
>  `deletedbyauthor` tinyint(1) NOT NULL default '0',
>  `deletedbyrecipient` tinyint(1) NOT NULL default '0',
>  `threadid` varchar(36) NOT NULL default '',
>  `senttime` timestamp NOT NULL default CURRENT_TIMESTAMP,
>  PRIMARY KEY  (`id`),
>  KEY `folderid` (`folderid`),
>  KEY `threadid` (`threadid`),
>  KEY `inboxfolderindex` (`folderid`,`toid`,`totype`),
>  KEY `sentitemsindex` (`fromid`,`fromtype`)
> ) ENGINE=MRG_MyISAM DEFAULT CHARSET=latin1 INSERT_METHOD=LAST
> UNION=(`messages2009`);
>
>
>
>
>
>
> Sudhir Nimavat
> Senior software engineer.
> Quick start global PVT LTD.
> Baroda - 390007
> Gujarat, India
>
> Personally I'm always ready to learn, although I do not always like being
> taught
>
>




-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: optimization help

2007-06-27 Thread Brent Baisley
That's quite a query. You may not be able to optimize it well with  
those nested selects. You may want to think about changing your query  
around a little, perhaps joining pieces of data using whatever  
programming language you're using on the front end. You have MySQL  
doing a lot of work and perhaps transferring a lot of data.
If some of those selects are pulling data that is redundant across  
many rows, it may be more efficient to join them on the front end
using arrays or something similar.


Another alternative would be to use a different table type like a  
MERGE table. That allows you to query multiple tables as one, which  
is something you had asked about. You need to be aware of its
limitations, like unique indexes not being enforced across tables. So
if you want to use auto increment, you need to set the value when you  
create a new table to add it to the merge setup.
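
For illustration, a minimal sketch of a MERGE setup (all names hypothetical;
the underlying tables must have identical structure):

CREATE TABLE frames_test1 (
  processed_id INT UNSIGNED NOT NULL,
  payload_time DOUBLE,
  processed_data MEDIUMBLOB,
  KEY payload_time (payload_time)
) ENGINE=MyISAM;

CREATE TABLE frames_test2 LIKE frames_test1;

CREATE TABLE all_frames (
  processed_id INT UNSIGNED NOT NULL,
  payload_time DOUBLE,
  processed_data MEDIUMBLOB,
  KEY payload_time (payload_time)
) ENGINE=MRG_MyISAM INSERT_METHOD=LAST UNION=(frames_test1, frames_test2);

-- Adding a later table to the merge set:
CREATE TABLE frames_test3 LIKE frames_test1;
ALTER TABLE all_frames UNION=(frames_test1, frames_test2, frames_test3);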



On Jun 27, 2007, at 12:16 PM, Dave G wrote:


I think I can do that:

I don't have any other indexes, just the keys.

mysql> show create table data__ProcessedDataFrames;
+---+--+
| Table | Create Table   |
+---+--+
| data__ProcessedDataFrames | CREATE TABLE `data__ProcessedDataFrames` (
  `processed_id` int(10) unsigned NOT NULL auto_increment,
  `top_level_product_name` varchar(255) default NULL,
  `test_id` int(10) unsigned default NULL,
  `payload_time` double default NULL,
  `universal_time` double default NULL,
  `processed_data` mediumblob,
  PRIMARY KEY  (`processed_id`),
  KEY `test_id` (`test_id`),
  KEY `payload_time` (`payload_time`),
  KEY `top_level_product_name` (`top_level_product_name`)
) ENGINE=MyISAM AUTO_INCREMENT=1392568 DEFAULT CHARSET=latin1 |
+---+--+

1 row in set (0.00 sec)

mysql>

As for the amount of data I expect to get ... infinite really.

Our setup: we have several boxes we are running tests from, where sql is
the storage engine on each local box, then we will have a main storage
area for all relevant tests.  Based on past data, the tables size
will be pushing 4G as it is (when we port the data) and expect at least
that much more over the life of this software, but since the payloads I
will be getting the data from have not been developed yet, I can't be
entirely sure.  One of the reasons I was inquiring as to whether breaking
it up into several tables would be a good idea is because it would make it
easier for me to merge the data from the different testers into the main
data repository that way.  Otherwise I will have to figure out a good way
of redoing the test_id in each test that is stored in the main repository.


Slow queries will be a little hard to show without giving a full
evaluation of my system.  So I'll simplify it a little.  I'm doing several
joins to get the right parameters to query this table in a stored
procedure ... but when it comes down to it, the query on this table is
the big one and I can modify my other joins, just making the query on this
table fast is my concern.  Example query:
table fast is my concern.  Example query:

select payload_time,HEX(processed_data) from data__ProcessedDataFrames
where test_id=18 AND top_level_product_name="DataProduct" AND payload_time
> 11808.74704 AND payload_time < 1180564096.24967;


What I'm concerned about is with how mu

Re: optimization help

2007-06-27 Thread David T. Ashley

On 6/27/07, Dave G <[EMAIL PROTECTED]> wrote:


select payload_time,HEX(processed_data) from data__ProcessedDataFrames
where test_id=18 AND top_level_product_name="DataProduct" AND payload_time
> 11808.74704 AND payload_time < 1180564096.24967;

What I'm concerned about is that, with how much data I will eventually have,
even scanning over the KEYS will take a long time.



Hi Dave,

In the case above, you want to be sure that everything involved in the query
is indexed or a key (probably the same thing).

To give an example, "test_id=18" ... if that isn't indexed, it will be an
O(N) scan over all records in a table rather than an O(log N) retrieval
based on some BTREE or similar.  As a first step, be sure that everything
involved in a typical query is indexed.

For joins, the related columns should also be indexed.
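
For the example query in this thread, that suggests one composite index
covering all three predicates (a sketch; the two equality columns come
first, the range column last):

ALTER TABLE data__ProcessedDataFrames
  ADD INDEX test_product_time (test_id, top_level_product_name, payload_time);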

Dave A.


Re: optimization help

2007-06-27 Thread Dave G
I do, but I don't see any way around that with the data I have.

Dave G.

> Good Afternoon David
>
> sounds as if you have a number of non-unique indices (even possibly FTS!)
> slowing down queries. This should help you concentrate on the slower
> indices
> mysql>
> select TABLE_NAME,COLUMN_NAME,INDEX_NAME from
> INFORMATION_SCHEMA.STATISTICS
> where NON_UNIQUE=1;
>
> Anyone else?
> Martin
>
> - Original Message -
> From: "Dave G" <[EMAIL PROTECTED]>
> To: 
> Sent: Wednesday, June 27, 2007 11:32 AM
> Subject: optimization help
>
>
>>I have a table in my database (currently) that grows to be huge (and I
>> need to keep the data).  I'm in a redesign phase and I'm trying to do it
>> right.  So here are the relevant details:
>>
>> The table has several keys involved:
>>
>> mysql> desc data__ProcessedDataFrames;
>> ++--+--+-+-++
>> | Field  | Type | Null | Key | Default | Extra  |
>> ++--+--+-+-++
>> | processed_id   | int(10) unsigned | NO   | PRI | NULL| auto_increment |
>> | top_level_product_name | varchar(255) | YES  | MUL | NULL||
>> | test_id| int(10) unsigned | YES  | MUL | NULL||
>> | p_time | double   | YES  | MUL | NULL||
>> | processed_data | mediumblob   | YES  | | NULL||
>> ++--+--+-+-++
>> 6 rows in set (0.00 sec)
>>
>> This is the table that contains the data I'm interested in currently.
>> Queries on this table when it gets large are slow as molasses.  I'm
>> thinking about making a new table for anything with a different test_id
>> ... any opinions as to whether this is good or bad?
>>
>> Before you make fun of me for my questions, I am a bit new to database
>> programming.
>>
>> If it is better design to break it into smaller tables (for speed
>> anyway)
>> then I would need to know how to query over multiple tables as though it
>> was one table.  Join will do this, but that takes forever (unless of
>> course I may be doing this wrong), so that's not a good option.  I need to
>> be able to query over multiple test_ids, which will be multiple tables, for
>> specific top_level_product_name, within some time range (using p_time).
>>
>> Any help would be appreciated.  I will happily give more information if
>> you need to offer an educated opinion.
>>
>> Thanks
>>
>> David Godsey
>>
>>
>> --
>> MySQL General Mailing List
>> For list archives: http://lists.mysql.com/mysql
>> To unsubscribe:
>> http://lists.mysql.com/[EMAIL PROTECTED]
>>
>>
>



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: optimization help

2007-06-27 Thread Dave G
I think I can do that:

I don't have any other indexes, just the keys.

mysql> show create table data__ProcessedDataFrames;
+---+--+
| Table | Create Table   |
+---+--+
| data__ProcessedDataFrames | CREATE TABLE `data__ProcessedDataFrames` (
  `processed_id` int(10) unsigned NOT NULL auto_increment,
  `top_level_product_name` varchar(255) default NULL,
  `test_id` int(10) unsigned default NULL,
  `payload_time` double default NULL,
  `universal_time` double default NULL,
  `processed_data` mediumblob,
  PRIMARY KEY  (`processed_id`),
  KEY `test_id` (`test_id`),
  KEY `payload_time` (`payload_time`),
  KEY `top_level_product_name` (`top_level_product_name`)
) ENGINE=MyISAM AUTO_INCREMENT=1392568 DEFAULT CHARSET=latin1 |
+---+--+
1 row in set (0.00 sec)

mysql>

As for the amount of data I expect to get: infinite, really.

Our setup: we have several boxes we are running tests from, where MySQL is
the storage engine on each local box, and then we will have a main storage
area for all relevant tests.  Based on past data, the table's size
will be pushing 4G as it is (when we port the data), and I expect at least
that much more over the life of this software, but since the payloads I
will be getting the data from have not been developed yet, I can't be
entirely sure.  One of the reasons I was inquiring as to whether breaking
it up into several tables would be a good idea is because it would make it
easier for me to merge the data from the different testers into the main
data repository that way.  Otherwise I will have to figure out a good way
of redoing the test_id in each test that is stored in the main repository.

Slow queries will be a little hard to show without giving a full
evaluation of my system.  So I'll simplify it a little.  I'm doing several
joins to get the right parameters to query this table in a stored
procedure, but when it comes down to it, the query on this table is
the big one and I can modify my other joins; making the query on this
table fast is my concern.  Example query:

select payload_time,HEX(processed_data) from data__ProcessedDataFrames
where test_id=18 AND top_level_product_name="DataProduct" AND payload_time
> 11808.74704 AND payload_time < 1180564096.24967;

What I'm concerned about is that, with how much data I will eventually
have, even scanning over the keys will take a long time.
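
A single composite index that leads with the two equality columns and ends
with the range column would let that query run as one index range scan
(a sketch, assuming the table definition above; the index name is
illustrative):

mysql> ALTER TABLE data__ProcessedDataFrames
    ->   ADD INDEX test_product_time (test_id, top_level_product_name, payload_time);

With it, MySQL can seek straight to test_id=18 and 'DataProduct' and scan
only the requested payload_time range.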

Thanks

Dave G.

BTW: here's the giant query that I use.
   SELECT
  E.product_id,
  product_name,
  D.top_level_product_name,
  processed_id,
  product_offset,
  product_length,
  version_id,
byte_order,
  ROUND(R.payload_time,6) as payload_time,
  SUBSTR(
 BINARY(processed_data),
 FLOOR(product_offset/8)+1,
 CEIL(product_length/8)) as substring,
(SELECT HEX(substring)) as raw_data,
  (SELECT toString (
substring,
round(char_length(raw_data)/2,0),
 data_type,
 (SELECT attribute_value FROM
config__DataProductAttributes WHERE attribute_name =
'FormatString' AND
   config__DataProductAttributes.product_id =
  E.product_id),
 product_offset % 8,
 (product_length + (product_offset % 8)) % 8,
 product_length,
 byte_orde

Re: optimization help

2007-06-27 Thread Martin Gainty

Good Afternoon David

sounds as if you have a number of non-unique indices (even possibly FTS!) 
slowing down queries. This should help you concentrate on the slower indices:

mysql>
select TABLE_NAME,COLUMN_NAME,INDEX_NAME from INFORMATION_SCHEMA.STATISTICS 
where NON_UNIQUE=1;


Anyone else?
Martin--
This email message and any files transmitted with it contain confidential
information intended only for the person(s) to whom this email message is
addressed.  If you have received this email message in error, please notify
the sender immediately by telephone or email and destroy the original
message without making a copy.  Thank you.

- Original Message - 
From: "Dave G" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, June 27, 2007 11:32 AM
Subject: optimization help



I have a table in my database (currently) that grows to be huge (and I
need to keep the data).  I'm in a redesign phase and I'm trying to do it
right.  So here are the relevant details:

The table has several keys involved:

mysql> desc data__ProcessedDataFrames;
+------------------------+------------------+------+-----+---------+----------------+
| Field                  | Type             | Null | Key | Default | Extra          |
+------------------------+------------------+------+-----+---------+----------------+
| processed_id           | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| top_level_product_name | varchar(255)     | YES  | MUL | NULL    |                |
| test_id                | int(10) unsigned | YES  | MUL | NULL    |                |
| p_time                 | double           | YES  | MUL | NULL    |                |
| processed_data         | mediumblob       | YES  |     | NULL    |                |
+------------------------+------------------+------+-----+---------+----------------+
6 rows in set (0.00 sec)

This is the table that contains the data I'm interested in currently.
Queries on this table when it gets large are slow as molasses.  I'm
thinking about making a new table for anything with a different test_id
 any opinions as to whether this is good or bad?

Before you make fun of me for my questions, I am a bit new to database
programming.

If it is better design to break it into smaller tables (for speed anyway)
then I would need to know how to query over multiple tables as though it
was one table.  Join will do this, but that takes forever (unless of
course I may be doing this wrong), so that's not a good option.  I need to
be able to query over multiple test_ids, which will be multiple tables, for
specific top_level_product_name, within some time range (using p_time).

Any help would be appreciated.  I will happily give more information if
you need to offer an educated opinion.

Thanks

David Godsey


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: optimization help

2007-06-27 Thread David T. Ashley

On 6/27/07, Dave G <[EMAIL PROTECTED]> wrote:


Queries on this table when it gets large is slow as molasses.  I'm
thinking about making a new table for anything with a different test_id
 any opinions as to whether this is good or bad?



Hi Dave G.,

We need to know:

a) How large the table might grow to,

b) The queries you'll be executing on the table, and

c) The anticipated size of the result sets (from the queries).

Generally speaking, you are shooting for O(log N) on the queries.  You need
to be sure that the queries you are issuing combined with the database
design will allow that.

So, I need to know the specific queries.

Dave A.


Re: Optimization

2006-08-01 Thread mos

At 02:37 PM 8/1/2006, Cesar David Rodas Maldonado wrote:

Hello to all!

How can I optimize this query

select * from table where col1 = 5 && col2 = 15;


Put both fields in the same index. The first column in the index should be
the one with the least unique values (col1 in your example).
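
For example (a sketch; the index name is illustrative, and `table` is
backquoted because TABLE is a reserved word):

ALTER TABLE `table` ADD INDEX col1_col2 (col1, col2);
EXPLAIN SELECT * FROM `table` WHERE col1 = 5 AND col2 = 15;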




If I know that for col2 I have 15 rows and for col1 I have just 100 rows; I
know that because I save that data in another table.


You could just do:

select count(distinct col1) from table;
select count(distinct col2) from table;

to get the number of distinct values in each  column.



How can I do for search first where is minus number of result?



Not sure what you mean by this, but if you want rows where col1 is negative 
just try:


select * from table where col1 < 0




The two columns are indexes.


Yes, but probably you are using 2 different indexes. Putting both columns in
the same index will make the query faster because you are searching on both 
columns.


Mike 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: optimization - directories and sub-directories (with descendants)

2006-03-07 Thread Peter Brawley

Eli,

>Example: I want to search on all the directories under 'd4' that 
contain the word "music".


>I got several solutions, but not satisfying:
>A) Loop from 'd4' to sub-dirs in first level, and use buffer list for 
next iterations
>when going deeper into levels. [not good: there can be many sub-dirs 
with descendants,

>and the loop will iterate more; slow on searches].
>B) Storing the directory tree structure in the form of 'root/d1/d4/d6' 
and etc.
>[not good: personally I can't use it (specific implementation 
restriction)].
>C) Descendants sub-dirs connections to sub-dirs on deeper levels, so 
searching will go
>over the first level sub-dirs and the descendants sub-dirs. [not good: 
there can be many
>sub-dirs and there would be many descendant sub-dirs; duplicating
descendants on references].


As you say, your A and C aren't maintainable. If a node can have more
than one parent node (i.e. its indegree can be > 1), it ain't a tree, so
tree methods won't work unmodified. It's possible, though, to adapt some
tree traversal methods to a graph like yours; for an example see the 
parts explosion section in 
http://www.artfulsoftware.com/mysqlbook/sampler/mysqled1ch20.html.
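
Without recursive SQL, one such adaptation is a breadth-first walk that
collects the subtree into a scratch table before filtering (a sketch,
assuming the dirs/subdirs tables quoted below; the scratch table's primary
key lets INSERT IGNORE swallow duplicates):

CREATE TABLE reachable (dir_id INT NOT NULL PRIMARY KEY);
INSERT INTO reachable VALUES (22);   -- start at d4
-- repeat this statement until it inserts 0 new rows:
INSERT IGNORE INTO reachable
  SELECT s.dir_target
  FROM subdirs s JOIN reachable r ON s.dir_source = r.dir_id;
-- then search only the collected subtree:
SELECT d.*
FROM dirs d JOIN reachable r ON d.dir_id = r.dir_id
WHERE d.dir_name LIKE '%music%' OR d.dir_desc LIKE '%music%';
DROP TABLE reachable;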


PB

-

Eli wrote:

Hi,

I have a table of directories. Each row represents a directory, which 
holds its name and desc. Another table lists sub-directories from each
directory source to its sub-directory targets.


dirs:
+----------+----------+------------+
| dir_id   | dir_name | dir_desc   |
+----------+----------+------------+
|0 | root |   root dir |
|   11 |   d1 |  dir no. 1 |
|   12 |   d2 |  dir no. 2 |
|   21 |   d3 |  dir no. 3 |
|   22 |   d4 |  dir no. 4 |
|   23 |   d5 |  dir no. 5 |
|   31 |   d6 |  dir no. 6 |
|   32 |   d7 |  dir no. 7 |
|   41 |   d8 |  dir no. 8 |
|   51 |   d9 |  dir no. 9 |
|   52 |  d10 | dir no. 10 |
|   61 |  d11 | dir no. 11 |
+----------+----------+------------+
12 rows in set (0.00 sec)

subdirs:
+------------+------------+
| dir_source | dir_target |
+------------+------------+
|  0 | 11 |
|  0 | 12 |
| 11 | 21 |
| 11 | 22 |
| 11 | 23 |
| 12 | 31 |
| 22 | 31 |
| 22 | 32 |
| 23 | 52 |
| 31 | 41 |
| 41 | 51 |
| 41 | 52 |
| 52 | 61 |
+------------+------------+
13 rows in set (0.00 sec)

root (0)
   +d1 (11)
   |  +d3 (21)
   |  +d4 (22)
   |  |  +d6 (31)
   |  |  |  +d8 (41)
   |  |  | +d9 (51)
   |  |  | +d10 (52)
   |  |  | +d11 (61)
   |  |  +d7 (32)
   |  +d5 (23)
   | +*d10* (52) -reference
   +d2 (12)
  +*d6* (31) -reference

Note that a directory can be contained in several parent directories 
(as long as it doesn't create cycles) - "references".


Example: I want to search on all the directories under 'd4' that 
contain the word "music".


I got several solutions, but not satisfying:
A) Loop from 'd4' to sub-dirs in first level, and use buffer list for 
next iterations when going deeper into levels. [not good: there can be 
many sub-dirs with descendants, and the loop will iterate more; slow on
searches].
B) Storing the directory tree structure in the form of 'root/d1/d4/d6' 
and etc. [not good: personally I can't use it (specific implementation 
restriction)].
C) Descendants sub-dirs connections to sub-dirs on deeper levels, so 
searching will go over the first level sub-dirs and the descendants 
sub-dirs. [not good: there can be many sub-dirs and there would be 
many descendant sub-dirs; duplicating descendants on references].


Do you have any other suggestions? What's the better way?


-Thanks in advance... :-)






--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Optimization suggestions when only using Innodb

2005-12-01 Thread Kayra Otaner
Please see my comments below:

On Thu, 2005-12-01 at 14:06 -0600, Ross Anderson wrote:
> I have read the suggestions for optimization for InnoDB; however, I'm
> curious if anyone can help me understand which buffers are common (thus
> used by InnoDB action) and which are specific to MyISAM.  The server has
> 2GB of ram but runs multiple services. I have a number of different 
> types of connections also. Some are repeated queries to static tables 
> and others are dynamic read write to large tables.
> 
> Thanks in advance!
> Ross Anderson
> 
> mysql 4.0.25
> linux-2.6.14
> 
> 

MyISAM storage engine specific :
> key_buffer = 128M
> myisam_sort_buffer_size = 16M


Affecting all engines:
> thread_cache = 8
> #query_cache_type = ON
> query_cache_size= 32M
> # Try number of CPU's*2 for thread_concurrency
> thread_concurrency = 4
> sort_buffer_size = 2M
> read_buffer_size = 2M
> table_cache = 256

InnoDB specific:
> innodb_buffer_pool_size = 768M
> innodb_additional_mem_pool_size = 20M
> innodb_log_file_size = 256M
> innodb_log_buffer_size = 8M
> innodb_flush_log_at_trx_commit = 0
> innodb_lock_wait_timeout = 50
> innodb_status_file=0

Also, if there is too much I/O, try increasing the number of InnoDB log files
from 2 to something like 4.
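
In config terms that is (a sketch; note that changing it requires a clean
shutdown and moving the old ib_logfile* files out of the way before
restarting):

[mysqld]
innodb_log_files_in_group = 4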

Check
http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html

for full list of system variables.



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Optimization; index on combined PK

2005-04-15 Thread Joerg Bruehe
Hi!


On Fri, 15.04.2005 at 18:50, Fan, Wellington wrote:
> Hello all,
> 
> I have a link table that relates 'Profiles' with 'Categories':
> 
> CREATE TABLE lnk_profile_category (
>   profile_fk int(11) NOT NULL default '0',
>   category_fk int(11) NOT NULL default '0',
>   PRIMARY KEY  (category_fk,profile_fk)
> ) TYPE=MyISAM;
> 
> If I expect that my most frequent queries are going to be those that select
> profiles in a given category -- i.e.: (select * from profile
>  where category.label = 'widgets' ) -- is it better to
> define my PK as:
> 
> 1. PRIMARY KEY  (category_fk,profile_fk)
>  --Rather than--
> 2. PRIMARY KEY  (profile_fk,category_fk)
> 
> ??

Assuming that "category.label = 'widgets'" leads to an equality
predicate on column "category_fk", the first order is surely better.

For a coarse analogy: Try to look up somebody in a phone book whose
family name "Smith-Miller" you do not know, only the given name "Jim".
A multi-column primary key or index is comparable to a phone book whose
entries are sorted by family name, then first name, then address: It can
only be used if the leading column value/s is/are known, because these
are the (most) significant ones.
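
A quick way to see the difference is to EXPLAIN both access patterns
(a sketch; the constants are illustrative):

EXPLAIN SELECT profile_fk  FROM lnk_profile_category WHERE category_fk = 7;
-- can use PRIMARY KEY (category_fk, profile_fk): the leading column is bound
EXPLAIN SELECT category_fk FROM lnk_profile_category WHERE profile_fk = 42;
-- cannot seek with that key: the leading column is unknown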

Regards,
Jörg

-- 
Joerg Bruehe, Senior Production Engineer
MySQL AB, www.mysql.com


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Optimization; index on combined PK

2005-04-15 Thread SGreen
"Fan, Wellington" <[EMAIL PROTECTED]> wrote on 04/15/2005 12:50:07 PM:

> Hello all,
> 
> I have a link table that relates 'Profiles' with 'Categories':
> 
> CREATE TABLE lnk_profile_category (
>   profile_fk int(11) NOT NULL default '0',
>   category_fk int(11) NOT NULL default '0',
>   PRIMARY KEY  (category_fk,profile_fk)
> ) TYPE=MyISAM;
> 
> If I expect that my most frequent queries are going to be those that 
select
> profiles in a given category -- i.e.: (select * from profile
>  where category.label = 'widgets' ) -- is it better to
> define my PK as:
> 
> 1. PRIMARY KEY  (category_fk,profile_fk)
>  --Rather than--
> 2. PRIMARY KEY  (profile_fk,category_fk)
> 
> ??
> 

If you have the room, I would do both. That way you are covered either 
way. 

If you have to choose, I would run an EXPLAIN on the same query, twice. 
Once with the index on (cat,pro), the other time on (pro,cat), and see if
the engine produces a different query plan. However, since you use the
category table in the WHERE clause more often, I would (for no really good
reason) use the (cat, pro) index. It's just my gut feeling. (I assume you 
already have an index on category for label?)
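
Doing both could look like this (a sketch; the index name is illustrative):

ALTER TABLE lnk_profile_category
  ADD UNIQUE INDEX profile_category (profile_fk, category_fk);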

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine


Re: Optimization

2004-05-27 Thread Egor Egorov
"Abdul Aziz" <[EMAIL PROTECTED]> wrote:
> 
> How can we optimize MySQL queries? Please describe an easy method for
> comparing queries, as well as getting better performance for data retrieval.
> 

The following chapter of the manual will be helpful to you:
http://dev.mysql.com/doc/mysql/en/Query_Speed.html



-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.net http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Egor Egorov
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.net
   <___/   www.mysql.com




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Optimization

2004-05-27 Thread daniel
Try using EXPLAIN.

>
>
> Hi  All,
>
> How can we optimize MySQL queries? Please describe an easy method for
> comparing queries, as well as getting better performance for data retrieval.
>
> Regards:
> aaziz
>
>
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: optimization needed

2004-05-11 Thread Chris Elsworth
On Wed, May 05, 2004 at 01:06:45PM -0400, Brent Baisley wrote:
> Basically, you can't, it's a limitation of the InnoDB format. If you 
> change the table type to MyISAM, that query would be almost 
> instantaneous. But you are probably using InnoDB for a reason, so you 
> may be stuck if you want a record count.

I might be way off base here, but couldn't he use the following index:

> >  INDEX protocol_TimeStamp (time_stamp)

Then do something like COUNT(*) WHERE time_stamp > '1979-';

Would this use the index to resolve it quickly? Or, having written
that and looked at it, will the 30% rule kick in? It probably will,
won't it... Maybe a FORCE INDEX?
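
Spelled out, that idea would be (a sketch; the cut-off date is illustrative,
and FORCE INDEX needs MySQL 4.0.9 or later):

SELECT COUNT(*) FROM protocol FORCE INDEX (protocol_TimeStamp)
WHERE time_stamp > '1970-01-01';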

-- 
Chris

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: optimization needed

2004-05-05 Thread Sasha Pachev
Brent Baisley wrote:
Basically, you can't, it's a limitation of the InnoDB format. If you 
change the table type to MyISAM, that query would be almost 
instantaneous. But you are probably using InnoDB for a reason, so you 
may be stuck if you want a record count.
As Brent said, there is no way to optimize count(*) with InnoDB. However, there 
are a couple of workarounds that can help:

 * Upgrade to 4.0 and enable the query cache. If the cache is large enough, 
count(*) will be executed all the way only the first time after you've updated 
the table. If you do not update a lot, this would help.

 * Create a record count summary table that will store the value of count(*) 
and update it in your application every time you insert or delete records
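
A minimal sketch of that second workaround (table and column names are
illustrative):

CREATE TABLE protocol_rowcount (row_count INT UNSIGNED NOT NULL);
INSERT INTO protocol_rowcount SELECT COUNT(*) FROM protocol;  -- seed once
-- then, in the application, alongside every INSERT into protocol:
UPDATE protocol_rowcount SET row_count = row_count + 1;
-- and alongside every single-row DELETE:
UPDATE protocol_rowcount SET row_count = row_count - 1;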

--
Sasha Pachev
Create online surveys at http://www.surveyz.com/
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: optimization needed

2004-05-05 Thread Brent Baisley
Basically, you can't, it's a limitation of the InnoDB format. If you 
change the table type to MyISAM, that query would be almost 
instantaneous. But you are probably using InnoDB for a reason, so you 
may be stuck if you want a record count.

On May 5, 2004, at 10:01 AM, Vladimir Romanovski wrote:

Hi All!

We use MySQL 3.23.54 on a dual-processor DELL running RedHat 8.0.
Performance of operations with one of the tables is very low.
For example:
mysql> select count(*) from protocol;
+----------+
| count(*) |
+----------+
|     2266 |
+----------+
1 row in set (28.93 sec)
The table is created with this statement:

CREATE TABLE protocol
(
  id         int(8) NOT NULL auto_increment,
  time_stamp timestamp,
  source     char(32)  default NULL,
  system     char(32)  default NULL,
  severity   enum('Debug',
  'Info',
  'Warning',
  'Error') default 'Debug',
  area   enum('Technical',
  'Business',
  'Security')  default 'Technical',
  priority   enum('A',
  'B',
  'C') default 'C',
  exchange   char(32)  default NULL,
  isin   char(32)  default NULL,
  login  char(32)  default NULL,
  group_name char(100) default NULL,
  text   text  default NULL,
  msg_id char(32)  default NULL,
  PRIMARY KEY(id),
  INDEX protocol_MsgId (msg_id),
  INDEX protocol_TimeStamp (time_stamp)
) TYPE=InnoDB;
"SHOW TABLE STATUS" shows following result:
Nameprotocol
TypeInnoDB
Row_format  Dynamic
Rows1750258
Avg_row_length  233
Data_length 409387008
Max_data_length NULL
Index_length60948480
Data_free   0
Auto_increment  3647628
Create_time NULL   Update_time NULL   
Check_time  NULL  Create_options Comment 
InnoDB free: 648192 kB

How can we tune this table?

Thanks for your help!
Vladimir


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: Optimization and the use of indexes

2004-04-26 Thread Boyd E. Hemphill
Jeremy:

That has to be a record for fastest response with the most use.  WOW!

I do use Explain but knowing this info allows me to use it in a more
intelligent way.  

Thank you very much for your time.  

Best Regards,
Boyd E. Hemphill
[EMAIL PROTECTED]
Triand, Inc.
www.triand.com
O:  (512) 248-2287
M:  (713) 252-4688

-Original Message-
From: Jeremy Zawodny [mailto:[EMAIL PROTECTED] 
Sent: Monday, April 26, 2004 4:23 PM
To: Boyd E. Hemphill
Cc: [EMAIL PROTECTED]
Subject: Re: Optimization and the use of indexes

On Mon, Apr 26, 2004 at 05:17:45PM -0500, Boyd E. Hemphill wrote:
> I thought I heard at the conference that the optimizer only uses a one
> index per table in a query regardless of the number of indexes on the
> table.  
>
> Is this true?

Yes.

The only exception is in a UNION, but that's best thought of as
multiple queries anyway.

> Are there any more details regarding the use of indexes I can find
> in the documentation?  Specifically how the optimizer picks which
> index to use?  I read through 7.2.4 and several other sections but
> no luck.

The optimizer looks at the available indexes and attempts to find the
one that will require the least amount of work, which usually
translates to reading the fewest records to find a match.

Have you run your queries through EXPLAIN to see which keys it considers
and which one it chooses?

Jeremy
-- 
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
<[EMAIL PROTECTED]>  |  http://jeremy.zawodny.com/

[book] High Performance MySQL -- http://highperformancemysql.com/


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Optimization and the use of indexes

2004-04-26 Thread Jeremy Zawodny
On Mon, Apr 26, 2004 at 05:17:45PM -0500, Boyd E. Hemphill wrote:
> I thought I heard at the conference that the optimizer only uses a one
> index per table in a query regardless of the number of indexes on the
> table.  
>
> Is this true?

Yes.

The only exception is in a UNION, but that's best thought of as
multiple queries anyway.

> Are there any more details regarding the use of indexes I can find
> in the documentation?  Specifically how the optimizer picks which
> index to use?  I read through 7.2.4 and several other sections but
> no luck.

The optimizer looks at the available indexes and attempts to find the
one that will require the least amount of work, which usually
translates to reading the fewest records to find a match.

Have you run your queries through EXPLAIN to see which keys it considers
and which one it chooses?

Jeremy
-- 
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
<[EMAIL PROTECTED]>  |  http://jeremy.zawodny.com/

[book] High Performance MySQL -- http://highperformancemysql.com/

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Optimization help

2004-01-12 Thread Mike Schienle
On Jan 12, 2004, at 08:09 AM, Douglas Sims wrote:

I think... you don't have an index on the "Incident" field itself, 
just on (Date, Incident, Type, Task) which means that it concatenates 
those fields and orders the result - thus this may be virtually 
useless if you're looking for a specific incident within a large date 
range.  Since your query has a specific incident number,  indexing 
that field would probably help a lot.
Thanks for the help, Douglas. That was the ticket. We're back under a 
second for queries now.

Mike Schienle, Custom Visuals
http://www.customvisuals.com/
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Optimization help

2004-01-12 Thread Douglas Sims
I think... you don't have an index on the "Incident" field itself, just 
on (Date, Incident, Type, Task) which means that it concatenates those 
fields and orders the result - thus this may be virtually useless if 
you're looking for a specific incident within a large date range.  Since 
your query has a specific incident number,  indexing that field would 
probably help a lot.
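
For instance (a sketch; the index names are illustrative):

mysql> ALTER TABLE DeltaPAF ADD INDEX incident_idx (Incident);

or, to serve this exact query pattern, a composite index with the equality
column first and the range column second:

mysql> ALTER TABLE DeltaPAF ADD INDEX incident_date (Incident, Date);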

Do a SHOW INDEXES FROM DeltaPAF;

To see the indexes that are actually there.

or

EXPLAIN SELECT Date FROM DeltaPAF WHERE Date>="2003-12-11" AND 
Date<="2004-01-11" AND Incident=98996144;

to see which indexes MySQL is really using.

For example, in the table below, there are really only two indexes, the 
one primary key index and the second name index.  The Seq_in_index 
column shows the fields that are included in the index but the ones that 
aren't listed first will be much harder to find.  Like a telephone 
directory, which is ordered by lastname, firstname - both fields are 
indexed but they are in the same index, so finding a specific firstname 
still means a full table scan.

Good luck!

mysql> describe test1;
++-+--+-++---+
| Field  | Type| Null | Key | Default| Extra |
++-+--+-++---+
| name   | varchar(20) | YES  | MUL | NULL   |   |
| mydate | date|  | PRI | -00-00 |   |
| number | int(10) |  | PRI | 0  |   |
++-+--+-++---+
3 rows in set (0.00 sec)
mysql> show indexes from test1;
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| test1 |          0 | PRIMARY  |            1 | mydate      | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
| test1 |          0 | PRIMARY  |            2 | number      | A         |           0 |     NULL | NULL   |      | BTREE      |         |
| test1 |          1 | name     |            1 | name        | A         |        NULL |     NULL | NULL   | YES  | BTREE      |         |
| test1 |          1 | name     |            2 | mydate      | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
| test1 |          1 | name     |            3 | number      | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
5 rows in set (0.15 sec)



Mike Schienle wrote:

Hi all -

I have a speed problem that I don't understand. I've been pretty 
active with DB's for a few years, but I'm no expert, so let me know if 
I'm missing the obvious. I have Paul DuBois' MySQL book (New Riders 
edition) and Descartes and Bunce's Programming DBI book on my desk, so 
feel free to reference something there if that will help.

Here's the table I'm working from and it's structure:
CREATE TABLE DeltaPAF (
  Date  DATE NOT NULL,
  Type  VARCHAR(4) NOT NULL,
  Incident  INT UNSIGNED NOT NULL,
  Mgr   VARCHAR(4) NOT NULL,
  Site  VARCHAR(40) NOT NULL,
  Task  ENUM('Proposed', 'Approved', 'Completed', 'Invoiced',
 'Expired', 'Rejected', 'Cancelled') NOT NULL,
  Webpage   MEDIUMTEXT NOT NULL,
  BudgetDECIMAL(12, 2) DEFAULT 0.00,
  PRIMARY KEY (Date, Incident, Type, Task),
  INDEX (Type, Mgr, Site)
);
I have about 125,000 records in the table and it's running on an older 
400 MHz MacOS X 10.2.8 system. The MySQL version is 3.23.52.

The following query comes back with 210 records in about 0.6 seconds.
mysql> SELECT Date FROM DeltaPAF WHERE Date>="2003-12-11"
-> AND Date<="2004-01-11" AND Incident=98996144;
However, this query comes back with 210 records in a little over 2 
minutes.
mysql> SELECT Budget FROM DeltaPAF WHERE Date>="2003-12-11"
-> AND Date<="2004-01-11" AND Incident=98996144;

Can someone clue me in how I might get the SELECT Budget query to 
return in a similar time to the SELECT Date query? I tried adding an 
index for Budget, knowing it shouldn't help, and it didn't. FWIW, the 
Webpage fields average about 5K characters, but can be as much as 40K.

Mike Schienle, Custom Visuals
http://www.customvisuals.com/



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Optimization on query with WHERE, GROUP BY and ORDER BY

2003-12-05 Thread Chris Elsworth
Hello,

On Fri, Dec 05, 2003 at 12:02:05PM +0100, Martin Gillström wrote:
> 
> The table can look something like this:
> row_id INT PRIMARY KEY
> where_column SET('a','b','c')
> groupby_column VARCHAR(255)
> orderby_column DATE
> .. more rows that I need to fetch with the select.

> This is what I have tried, without seeing any dramatic speed improvements:
> I can't get MySQL to use only one index.
> I also see that MySQL uses temp tables and also stores them on disk. I have
> raised tmp_table_size without any success.
> I have experimented with sort_buffer_size, read_rnd_buffer_size, 
> key_buffer_size.

As I understand it, you can't get MySQL to use an index for sorting
and grouping if you're sorting and grouping on different columns. This
one bites me too, and forces me to live with a "using temporary; using
filesort" query on one of my biggest, busiest tables.

-- 
Chris

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Optimization

2002-05-05 Thread Rodrigo Pérez

Thanks Jay,
Actually what I want to do is much simpler. I want to have, in the same
query result, the first 10 rows with this word and the total number of rows
from the table with this specific word.

The result should be like a search engine results page: it has the total
number of pages with that word and the first 10 results. I want to put it
together in only one query. Is that possible?

SELECT COUNT(*) from TABLE where name="JOHN"
SELECT * from TABLE where name="JOHN" limit 10

Thanks,
Rodrigo
- Original Message -
From: "Jay Blanchard" <[EMAIL PROTECTED]>
To: "'Rodrigo Pérez'" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Friday, May 03, 2002 2:23 PM
Subject: RE: Optimization


> [snip]
> I want to optimize one of my php scripts to acess mysql only once. Is
there
> anyway to put these two queries together:
>
> SELECT COUNT(*) from TABLE where name="JOHN"
> SELECT * from TABLE where name="JOHN" limit 10
> [/snip]
>
> try
>
> select *,
> sum(if(name = 'JOHN', 1, 0)) AS COUNT
> from table
> where name = 'JOHN'
> group by name
>
> HTH!
>
> Jay
>
>
>
> -
> Before posting, please check:
>http://www.mysql.com/manual.php   (the manual)
>http://lists.mysql.com/   (the list archive)
>
> To request this thread, e-mail <[EMAIL PROTECTED]>
> To unsubscribe, e-mail
<[EMAIL PROTECTED]>
> Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
>


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




RE: Optimization

2002-05-03 Thread Jay Blanchard

[snip]
I want to optimize one of my php scripts to acess mysql only once. Is there
anyway to put these two queries together:

SELECT COUNT(*) from TABLE where name="JOHN"
SELECT * from TABLE where name="JOHN" limit 10
[/snip]

try

select *,
sum(if(name = 'JOHN', 1, 0)) AS COUNT
from table
where name = 'JOHN'
group by name
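
One caveat: the GROUP BY collapses the result to one row per name, so this
returns the count but not the ten individual rows. In MySQL 4.0 and later,
another option that keeps two statements but avoids the second table scan is
SQL_CALC_FOUND_ROWS (a sketch; the table and column names follow the
question, with `table` backquoted because it is a reserved word):

select SQL_CALC_FOUND_ROWS * from `table` where name = 'JOHN' limit 10;
select FOUND_ROWS();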

HTH!

Jay



-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Optimization And Memory Size

2002-03-19 Thread Ken Menzel

Hi Howard,
   I am trying to think of ideas.  I have several databases with
tables over 1.4G and rows still add quickly.  How about available
descriptors?  What are these values:
| Open_tables   | 2738     |
| Open_files    | 5013     |
| Open_streams  | 0        |
| Opened_tables | 81542    |
| Questions     | 31719361 |
this is a problem: I have 81542 opened tables, so this server needs more
file descriptors.  (I will look at this right away!  But my server is
still running pretty well.)
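
A first step is usually to raise the table cache so tables stop being
re-opened constantly, and to make sure the process has enough file
descriptors to back it (the value below is illustrative; the descriptor
limit itself is raised at the OS level or via mysqld_safe's
open-files-limit):

[mysqld]
table_cache = 1024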

Maybe you could include the output from show variables and show
status?  How fast is your SCSI drive subsystem?  I assume you are
using 7200 RPM SCSI drives at least.

Cordially,
Ken

- Original Message -
From: "mysql mailing list user" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, March 19, 2002 9:29 AM
Subject: Re: Optimization And Memory Size


>
> You have written the following:
>
> I have a mysql database table that is currently 1.5G in size with well over
> a million records. It is running on a twin Pentium 3 1G processor machine
> with SuSE Linux version 1.4.
>
> Recently inserts have become VERY slow (several seconds). As I am adding
> around 7K new rows a day, this is becoming a big problem.
>
> I recently increased the machine memory from 512M to 2G at some expense! It
> made no difference. I am using the HUGE version of my.cnf as shipped.
>
> Anybody got any ideas how to speed things up? Why did the memory increase
> not help at all???
>
> Howard
>
>
>
> 
-
> Before posting, please check:
>http://www.mysql.com/manual.php   (the manual)
>http://lists.mysql.com/   (the list archive)
>
> To request this thread, e-mail <[EMAIL PROTECTED]>
> To unsubscribe, e-mail
<[EMAIL PROTECTED]>
> Trouble unsubscribing? Try:
http://lists.mysql.com/php/unsubscribe.php
>
>


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Optimization And Memory Size

2002-03-19 Thread mysql mailing list user


You have written the following: 

I have a mysql database table that is currently 1.5G in size with well over
a million records. It is running on a twin Pentium 3 1G processor machine
with SuSE Linux version 1.4.

Recently inserts have become VERY slow (several seconds). As I am adding
around 7K new rows a day, this is becoming a big problem.

I recently increased the machine memory from 512M to 2G at some expense! It
made no difference. I am using the HUGE version of my.cnf as shipped. 

Anybody got any ideas how to speed things up? Why did the memory increase
not help at all??? 

Howard 

 

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: optimization of large records

2002-02-18 Thread Laurent Oget

On Mon, Feb 18, 2002 at 01:52:41PM -0600, Melvin wrote:
> Hi:
> I'm trying to create an image database of about 130,000 records.  Right
> now I have only the filenames in a CHAR Field (the records are of fixed
> size).  I read those images a lot.  Is it better to leave it this way or to
> move all the image data to blob fields?
>

You are better off with filenames.
 
> I already tried to use the blob field but it was up to 100 times slower.
> Is there a way I can optimize this?
>

with some operating systems, having 130 000 files in a directory
might be sub-optimal. You might want to create a 'comb' of directories,

i.e. 1/1/, 1/2/, .., 1/9/, .., 9/1/, .., 9/9/

This will also help if you want to share your data between several
hard drives.
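
One way to pick the comb slot is to derive it from the record id (a sketch;
the images table and its columns are illustrative):

SELECT CONCAT(id % 10, '/', FLOOR(id / 10) % 10, '/', filename) AS path
FROM images WHERE id = 12345;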
 
> I'm using mysql-max 3.23.47 on Windows XP with Athlon XP 1800+, 512 RAM
> and 60Gb 5400 revs/s IDE Hard Disk
>

My guess is that your Athlon spends all its time waiting for the HD. You
would probably get better performance with four 400MHz processors at about
the same price, and 2 HDs, if the database is used by several users at
the same time.

What exactly do you want to optimize? Time? Space?

Nobody will be able to tell you anything unless you can do some profiling
of your application to pinpoint where the bottlenecks are.


-- 
Laurent Oget, Ph.D. [EMAIL PROTECTED]http://oget.net
Senior Engineer Zvolve Systems Inc  http://zvolve.com
Chercheur Associé   Liafa   http://liafa.jussieu.fr

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Optimization problem in 3.23.44

2002-01-18 Thread Fredrick Bartlett

Did you get an answer to this problem?  I'm experiencing the same behavior.

Mike Wexler wrote:

> When I do
>
> EXPLAIN
> SELECT status.itemKey, status.auctionUrl,
> status.hideItem, status.auctionId, status.action,
> status.auctionHouse
> FROM auction.status, inventory.thisItem
> WHERE status.itemKey=thisItem.itemKey
> AND (status.closeDate>=NOW() OR
>   (status.closeDate IS NULL AND action="queued"))
> AND status.action<>'build'
>
> I get
>
> 
> +----------+--------+------------------------------------+-----------+---------+------+-------+------------+
> | table    | type   | possible_keys                      | key       | key_len | ref  | rows  | Extra      |
> +----------+--------+------------------------------------+-----------+---------+------+-------+------------+
> | thisItem | system | NULL                               | NULL      |    NULL | NULL |     1 |            |
> | status   | range  | itemKey,itemKey_2,closeDate,action | closeDate |       9 | NULL | 23417 | where used |
> +----------+--------+------------------------------------+-----------+---------+------+-------+------------+
> 2 rows in set (0.01 sec)
>
> Note that the second table matches 23417 rows.
>
> Here is inventory.thisItem:
>
> mysql> show fields from inventory.thisItem;
> +---------+------------+------+-----+---------+-------+
> | Field   | Type       | Null | Key | Default | Extra |
> +---------+------------+------+-----+---------+-------+
> | itemKey | bigint(10) |      |     | 0       |       |
> +---------+------------+------+-----+---------+-------+
>
> and auction.status:
>
> mysql> show fields from auction.status;
> 
> +--------------+-------------------------------------------------------------+------+-----+---------+-------+
> | Field        | Type                                                        | Null | Key | Default | Extra |
> +--------------+-------------------------------------------------------------+------+-----+---------+-------+
> | action       | enum('atTIAS','build','uploaded','sold','queued','unknown') |      | MUL | atTIAS  |       |
> | actionDate   | timestamp(14)                                               | YES  |     | NULL    |       |
> | auctionHouse | varchar(10)                                                 |      |     |         |       |
> | batchNum     | int(11) unsigned                                            |      | MUL | 0       |       |
> | auctionId    | varchar(64)                                                 |      | MUL |         |       |
> | auctionUrl   | varchar(255)                                                | YES  |     | NULL    |       |
> | minimumBid   | decimal(7,2)                                                | YES  |     | NULL    |       |
> | reserve      | decimal(7,2)                                                | YES  |     | NULL    |       |
> | finalBid     | decimal(7,2)                                                | YES  |     | NULL    |       |
> | closeDate    | datetime                                                    | YES  | MUL | NULL    |       |
> | durationHrs  | int(11)                                                     | YES  |     | NULL    |       |
> | buyerEmail   | varchar(128)                                                | YES  |     | NULL    |       |
> | hideItem     | enum('Link','Hide','Ignore','NoAuction')                    |      |     | Link    |       |
> | buyerName    | varchar(64)                                                 | YES  |     | NULL    |       |
> | title        | varchar(128)                                                | YES  |     | NULL    |       |
> | description  | text                                                        | YES  |     | NULL    |       |
> | invoiced     | enum('no','yes')                                            |      |     | no      |       |
> | uploadKey    | varchar(128)                                                | YES  |     | NULL    |       |
> | qty          | int(11)                                                     |      |     | 0       |       |
> | uploadFee    | decimal(7,2)                                                | YES  |     | NULL    |       |
> | bold         | varchar(32)                                                 |      |     | no      |       |
> | private      | varchar(32)                                                 |      |     | no      |       |
> | feature1     | varchar(32)                                                 |      |     | no      |       |
> | feature2     | varchar(32)                                                 |      |     | no      |       |
> | feature3     | varchar(32)                                                 |      |     | no      |       |
> | feature4     | varchar(32)                                                 |      |     | no      |       |
> | feature5     | varchar(32)                                                 |      |     | no      |       |
> | feature6     | varchar(32)                                                 |      |     | no      |       |
> | feature7     | varchar(32)                                                 |      |     | no      |       |
> | feature8     | varchar(32)                                                 |      |     | no      |       |
> | imageUrl0    | varchar(255)                                                | YES  |     | NULL    |       |
> | imageUrl1    | varchar(255)                                                | YES  |     | NULL    |       |
> | imageUrl2    | varchar(255)                                                | YES  |     | NULL    |       |
> | imageUrl3    | varchar(255)                                                | YES  |     | NULL    |       |
> | imageUrl4    | varchar(255)                                                | YES  |     | NULL    |       |
> | imageUrl5    | varchar(255)                                                | YES  |     | NULL    |       |
> | imageUrl6    | varchar(255)                                                | YES  |     | NULL    |       |
> | imageUrl7    | varchar(255)                                                | YES  |     | NULL    |

Fwd: RE: Optimization

2002-01-18 Thread Michael Widenius


Hi!

I just forwarded your email about query caching.
(Yes, I know that the query cache in MySQL 4.0 has solved this for you but...)

>> Hello all,
>> 
>> I'm having real trouble trying to optimize MySQL cause I can't believe
>> that
>> MSSQL is faster.
>> My configurations are as follows:
>> MSSQL 2000 on W2K server. PIII 733 - 512 MB memory.
>> MySQL-3.23.47-1 on Redhat 7.2. Dual PIII 1000 - 1.128 GB memory
>> 
>> I have a PHP script that runs on a Redhat 7.1 - PIII 500 640 MB memory.
>> The php script takes a username from a mysql table and runs a query for
>> each
>> of the usernames on another table.
>> The test is that I have 2 different versions of the script that do the
>> exactly same thing but one queries the MSSQL server and the other the
>> MySQL.
>> 
>> The MSSQL version takes 28 secs while the MySQL takes 34 secs.
>> As you can see, the MSSQL machine is much slower and has less RAM.
>> 
>> I said what the heck I will use the my-huge.cnf to see if it makes any
>> difference.
>> Unfortunately nothing changed and then I started panicking... It can't be
>> true!
>> 
>> I noticed that MSSQL caches the queries while MySQL doesn't.
>> In my script I might have this:
>> select emails from dbo_Company where username='';
>> come up 5 or even 10 times.
>> 
>> If I run it on MySQL it always takes 0.26 secs while it appears the
>> MSSQL
>> caches the result and doesn't take any time at all.

Any chance you could give us a copy of the dbo_Company table to use
for testing ?
(If yes, please upload the .frm, .MYI and .MYD files to:
ftp://support.mysql.com/pub/mysql/secret)

According to tests I have done, for a simple query as the above, MySQL
should usually be faster than MSSQL, even with MS-SQL query caching.

(Look at:

http://www.mysql.com/information/benchmark-results/result-db2,informix,ms-sql,mysql,solid,sybase-relative.html

and the test for select_key, where MySQL is 3.52 times faster on
simple key lookups)

A couple of reasons for your results:

- username is a primary key or the table is sorted according to the
  user name (explicite or implicit) in MS-SQL.
- The query returns many rows, and they rows are far apart in the
  data file, so MySQL has to do many reads to fetch the rows.
  (In this case it's the file caching in Windows that is slow).

In both cases, an OPTIMIZE TABLE or 'ALTER TABLE dbo_Company ORDER BY
username' would speed up things considerably in MySQL.

If the reason for the speed difference is that 'username' is a primary
key and MS-SQL stores the row clustered together with the primary key,
then by using the InnoDB table handler you should get the same benefit
in MySQL as MS-SQL has.  (The downside with this algorithm is that
secondary keys are slower, but that is another issue)

For example, MyISAM stores the rows separate from the keys. This makes
the primary key slightly slower in MyISAM (except when you only want
to have information from the primary key, then MyISAM should be
faster), but instead all keys are equally fast and table scans are
much faster.

No sql server can be faster an ALL queries; Depending on the
optimization engine and how rows are stored you get certain benefits
but also some downsides.

If MS-SQL is faster in this case, we would like to add a similar case
to the MySQL benchmark suite because:

- If MS-SQL is faster on this particular case, we want to acknowledge
  this fact (give credit where credit is due...)
- We would like to know why MS-SQL is faster so that we can document
  this and provide workarounds 
- We will know about this and can in the future try to speed up MySQL
  for this case.
- We will do the test with all the table handlers MySQL support; This
  will show if the speed of this test is dependent of how the rows are
  stored or by the optimizer.

Thanks for any help you can give us regarding this!

Regards,
Monty

-- 
For technical support contracts, goto https://order.mysql.com/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Mr. Michael Widenius <[EMAIL PROTECTED]>
 / /|_/ / // /\ \/ /_/ / /__   MySQL AB, CTO
/_/  /_/\_, /___/\___\_\___/   Helsinki, Finland
   <___/   www.mysql.com

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




RE: Optimization

2002-01-17 Thread Spyros Ioakim

Problem Solved.
Indeed version 4 does work :-)

Here are some times I got for all of you mysql fans (I knew that mysql
couldn't let me down :-))

MSSQL: 28.6 secs
MySQL 3: 40.1 secs
MySQL 4: 36.3 secs
MySQL 4 (caching enabled): 21.9 secs
MySQL 4 (caching enabled - second pass): 0 secs

So indeed caching DOES work... Thanks a lot, guys.
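
For reference, what turns the cache on in MySQL 4 is a nonzero
query_cache_size in my.cnf (the size below is illustrative):

[mysqld]
query_cache_size = 32M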

-Original Message-
From: Fournier Jocelyn [Presence-PC] [mailto:[EMAIL PROTECTED]] 
Sent: Thursday, January 17, 2002 7:58 PM
To: Ioakim Spyros; [EMAIL PROTECTED]
Subject: Re: Optimization

Hi,

It's already done since MySQL 4.0.1.
Take a look here :
http://www.mysql.com/doc/Q/u/Query_Cache.html

Regards,
Jocelyn
- Original Message -
From: "Ioakim Spyros" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, January 17, 2002 6:52 PM
Subject: FW: Optimization


Hello all,

I'm having real trouble trying to optimize MySQL cause I can't believe
that
MSSQL is faster.
My configurations are as follows:
MSSQL 2000 on W2K server. PIII 733 - 512 MB memory.
MySQL-3.23.47-1 on Redhat 7.2. Dual PIII 1000 - 1.128 GB memory

I have a PHP script that runs on a Redhat 7.1 - PIII 500 640 MB memory.
The php script takes a username from a mysql table and runs a query for
each
of the usernames on another table.
The test is that I have 2 different versions of the script that do the
exactly same thing but one queries the MSSQL server and the other the
MySQL.

The MSSQL version takes 28 secs while the MySQL takes 34 secs.
As you can see, the MSSQL machine is much slower and has less RAM.

I said what the heck I will use the my-huge.cnf to see if it makes any
difference.
Unfortunately nothing changed and then I started panicking... It can't be
true!

I noticed that MSSQL caches the queries while MySQL doesn't.
In my script I might have this:
select emails from dbo_Company where username='';
come up 5 or even 10 times.

If I run it on MySQL it always takes 0.26 secs while it appears the
MSSQL
caches the result and doesn't take any time at all.

If I get the same query in the script 10 times I'm losing immediately
2.34
secs with MySQL.

Is there a way to get the same behavior in MySQL?

Regards,
Spyros

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail
<[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail
<[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




RE: Optimization

2002-01-17 Thread Bret Ewin

The first thing that comes to mind is that you're running the databases on
different hardware and operating systems. I know the Linux kernel had some
SMP performance issues not too long ago, prompting IBM to rewrite portions
of the kernel and improving performance by (I think) 16-20%. Also, you're
only using one processor at a time if your test isn't multithreaded.

The general overhead of SMP combined with Linux kernel issues may be where
the difference is. I would format a new hard drive with Linux (non-SMP),
temporarily install it in the W2K box and see what numbers you get.

Bret

-Original Message-
From: Ioakim Spyros [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 17, 2002 12:53 PM
To: [EMAIL PROTECTED]
Subject: FW: Optimization


Hello all,
 
I'm having real trouble trying to optimize MySQL cause I can't believe that
MSSQL is faster.
My configurations are as follows:
MSSQL 2000 on W2K server. PIII 733 - 512 MB memory.
MySQL-3.23.47-1 on Redhat 7.2. Dual PIII 1000 - 1.128 GB memory
 
I have a PHP script that runs on a Redhat 7.1 - PIII 500 640 MB memory.
The php script takes a username from a mysql table and runs a query for each
of the usernames on another table.
The test is that I have 2 different versions of the script that do the
exactly same thing but one queries the MSSQL server and the other the MySQL.
 
The MSSQL version takes 28 secs while the MySQL takes 34 secs.
As you can see, the MSSQL machine is much slower and has less RAM.
 
I said what the heck I will use the my-huge.cnf to see if it makes any
difference.
Unfortunately nothing changed and then I started panicking... It can't be
true!
 
I noticed that MSSQL caches the queries while MySQL doesn't.
In my script I might have this:
select emails from dbo_Company where username='';
come up 5 or even 10 times.
 
If I run it on MySQL it always takes 0.26 secs while it appears the MSSQL
caches the result and doesn't take any time at all.
 
If I get the same query in the script 10 times I'm losing immediately 2.34
secs with MySQL.
 
Is there a way to get the same behavior in MySQL?
 
Regards,
Spyros

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Optimization

2002-01-17 Thread Fournier Jocelyn [Presence-PC]

Hi,

It's already done since MySQL 4.0.1.
Take a look here :
http://www.mysql.com/doc/Q/u/Query_Cache.html

Regards,
Jocelyn
- Original Message -
From: "Ioakim Spyros" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, January 17, 2002 6:52 PM
Subject: FW: Optimization


Hello all,

I'm having real trouble trying to optimize MySQL cause I can't believe that
MSSQL is faster.
My configurations are as follows:
MSSQL 2000 on W2K server. PIII 733 - 512 MB memory.
MySQL-3.23.47-1 on Redhat 7.2. Dual PIII 1000 - 1.128 GB memory

I have a PHP script that runs on a Redhat 7.1 - PIII 500 640 MB memory.
The php script takes a username from a mysql table and runs a query for each
of the usernames on another table.
The test is that I have 2 different versions of the script that do the
exactly same thing but one queries the MSSQL server and the other the MySQL.

The MSSQL version takes 28 secs while the MySQL takes 34 secs.
As you can see, the MSSQL machine is much slower and has less RAM.

I said what the heck I will use the my-huge.cnf to see if it makes any
difference.
Unfortunately nothing changed and then I started panicking... It can't be
true!

I noticed that MSSQL caches the queries while MySQL doesn't.
In my script I might have this:
select emails from dbo_Company where username='';
come up 5 or even 10 times.

If I run it on MySQL it always takes 0.26 secs while it appears the MSSQL
caches the result and doesn't take any time at all.

If I get the same query in the script 10 times I'm losing immediately 2.34
secs with MySQL.

Is there a way to get the same behavior in MySQL?

Regards,
Spyros

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail
<[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: optimization problem in where clause (f1="Foo" or f2="Foo")

2001-10-04 Thread Gerald Clark

MySQL uses only one index for a select, so it can't use an index
when there is an OR in the where clause.
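
A common workaround (in MySQL 4.0 and later, which added UNION) is to split
the OR into two indexed selects (a sketch, using the query from the message
quoted below):

SELECT * FROM t1 WHERE f1 = 'COM051000'
UNION
SELECT * FROM t1 WHERE f2 = 'COM051000';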

Ask Bjoern Hansen wrote:

> Hi,
> 
> I have a table with about 1.5M rows.
> 
> 9 of the columns are varchar(9)'s.
> 
> when I just select on one of them it goes fine, like:
> 
> 
>> explain select * from t1 where f2 = 'COM051000';
> 
> +---+--+---++-+---+--++
> | table | type | possible_keys | key| key_len | ref   | rows | Extra  |
> +---+--+---++-+---+--++
> | t1| ref  | f2_idx| f2_idx |  10 | const |  422 | where used |
> +---+--+---++-+---+--++
> 1 row in set (0.02 sec)
> 
> (likewise for f1 = ...)
> 
> 
> But if I use f1 = ... or f2 = ... it doesn't use the index at all.
> 
> 
>> explain select * from t1 where f2 = 'COM051000' or f1 = 'COM051000';
> 
> +---+--+---+--+-+--+-++
> | table | type | possible_keys | key  | key_len | ref  | rows| Extra  |
> +---+--+---+--+-+--+-++
> | t1| ALL  | f1_idx,f2_idx | NULL |NULL | NULL | 1194779 | where used |
> +---+--+---+--+-+--+-++
> 1 row in set (0.01 sec)
> 
> I tried running myisamchk -a on the table and now it shows the
> cardinality for each key correctly in "show keys from t1", but it
> didn't help on the queries. :-)
> 
> I am sure this is something really obvious, but I've no clue (as you
> probably can gather from the above). What am I missing? What kind of
> thing can I do to make the above query go faster?  Any hints would be
> appreciated.
> 
> 
>  - ask


-- 
Gerald L. Clark
[EMAIL PROTECTED]


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Optimization question and possible bug

2001-09-07 Thread Adams, Bill TQO

Stefan Pinkert wrote:

> Examining that query with EXPLAIN SELECT ... shows that the indexes are not
> used. The keyfields are unsigned tiny int where every bit has a different
> meaning. Any clue how I can rewrite this query in a way where the indexes
> are
> used?

If MySQL thinks it will be faster to scan, it will.  Make sure you analyze
the tables with [my]isamchk -a.
If that does not help, please send the output from "SHOW INDEX FROM table"
and the EXPLAIN.


> In the database I have a merge-table that merges 10 MyISAM tables.
> Sometimes the load average of the system rises above 50 and the
> long-query-log is
> filled with some query accessing the merge table. This happens 2-3 times a
> day.
> Only a minute later everything is okay again without doing anything.
> I can't believe that it is a performance problem because there is a
> summary of only 10,000 entries in the merge table and 50,000 entries in
> other
> tables. Has anybody experienced this strange problem, too?

Is the long query different from the other queries? Have you done an EXPLAIN
on it?


> The last thing I found is a possible bug in the merge-table implementation
> of 3.23.41. After an update from 3.23.37 to 3.23.41 I got only the first
> row of the result set again and again. Maybe it's a communication problem
> between php (with included mysql interface) and the new MySQL version.
> (Haven't determined it yet).

Don't know.  But read this thread just to be sure:
http://lists.mysql.com/cgi-ez/ezmlm-cgi?1:mss:84077:200108:ilgknliamhblokdjmmhb

--Bill


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Optimization of MySQL

2001-05-02 Thread Jeremy Zawodny

On Wed, May 02, 2001 at 08:40:10AM +0200, Warren van der Merwe wrote:
> Hi there
> 
> Is there anything similar to this that will run on a Windows platform?

As luck would have it, yes. I managed to get mytop working on Windows
today. There are a few rough edges, but with about 20 minutes more
work (tomorrow, maybe?) I can patch it right and it'll work well.

My testing was on Windows NT 4.x with ActiveState Perl.

Jeremy
-- 
Jeremy D. Zawodny, <[EMAIL PROTECTED]>
Technical Yahoo - Yahoo Finance
Desk: (408) 349-7878Fax: (408) 349-5454Cell: (408) 439-9951

MySQL 3.23.29: up 120 days, processed 741,250,690 queries (71/sec. avg)





RE: Optimization of MySQL

2001-05-01 Thread Warren van der Merwe

Hi there

Is there anything similar to this that will run on a Windows platform?

Regards
WARREN


~
Warren van der Merwe
Software Director
PRT Trading (Pty) Ltd t/a RedTie
Durban, South Africa
Cell (+27-83) 262-9163
Office (+27-31) 767-0249
 

> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Jeremy Zawodny
> Sent: 02 May 2001 07:30
> To: Bryan Coon
> Cc: '[EMAIL PROTECTED]'
> Subject: Re: Optimization of MySQL





Re: Optimization of MySQL

2001-05-01 Thread Jeremy Zawodny

On Tue, May 01, 2001 at 04:46:39PM -0700, Bryan Coon wrote:
> 
> Our database is large, and getting larger, with several tables
> approaching the 1gig mark.  In addition, the database will be moving
> to a dedicated node on a beowulf cluster.

Cool... :-)

> For our users, we are not particularly interested in squeezing every
> last drop of performance out of MySQL, but I would be interested to
> know if there are obvious things that I should do to optimize our
> performance.
> 
> For example, compiling my own mysql with certain flags, or perhaps
> startup options, etc.
> 
> I searched the usenet and mysql site, and found some information but
> would like to hear some experienced advice before I jump in with
> both feet.

Well, just wait a few weeks for my article in the next issue of Linux
Magazine. :-)

But before that happens, here are some ideas... There are two
approaches to optimization, and you should use both.

First is optimizing your application. This is generally just making
sure your queries are fast (well indexed), you're only retrieving the
data you need, you aren't indexing columns which will never benefit
from indexes, you're caching data in your app which can be cached,
etc.
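
A tiny illustration of that first approach (all names here are
hypothetical): fetch only the columns you need, check the plan, and drop
indexes that no query benefits from.

  EXPLAIN SELECT id, name                      -- only the columns the app uses
  FROM customers
  WHERE region = 'EMEA';                       -- want type 'ref' on an index, not 'ALL'

  ALTER TABLE customers DROP INDEX notes_idx;  -- an index nothing ever selects on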

Second is server tuning. You can look at increasing the size of the
key_buffer, record_buffer, and so on in your /etc/my.cnf (or similar)
file. Try to get an idea how efficient things are currently. I often
use mytop (http://public.yahoo.com/~jzawodn/mytop/) to gather some
info about my system before, during, and after tuning. It doesn't give
you everything you'll need (yet!), but it's a decent start. You'll
probably want to look closely at the output of "SHOW VARIABLES" and
"SHOW STATUS" and learn more about what some of them mean.

And, of course, we're glad to field specific questions on this list.

(This reminds me... I'm thinking of another patch to the MySQL manual
which explains some more of this stuff. Just need to find the time to
do it. Things are getting more, uh... "interesting" as the number of
table handlers expands. With ISAM, MyISAM, InnoDB, BDB, Gemini, and
HEAP, there is more room for both improvement and error.)

Jeremy
-- 
Jeremy D. Zawodny, <[EMAIL PROTECTED]>
Technical Yahoo - Yahoo Finance
Desk: (408) 349-7878    Fax: (408) 349-5454    Cell: (408) 439-9951

MySQL 3.23.29: up 118 days, processed 734,376,106 queries (71/sec. avg)
