Slow Queries Log and Index-less queries

2005-08-30 Thread HMax
Hey list,

I have a question regarding the slow queries log, and queries not using index.

I have a small table with, say, 10 entries, like this:
ID |  Element
---+---------
1  |  One
2  |  Two
3  |  Three
4  |  Four
5  |  Five
6  |  Six
7  |  Seven
8  |  Eight
9  |  Nine
10 |  Ten

I want to get all those entries:
SELECT ID, Element FROM tblentries;

The problem is that this query, even though very fast, is logged in the slow
query log because it does not use an index (I activated this option in the
slow query log). Wouldn't it be better if such queries were logged only when
there is a WHERE, ORDER BY or GROUP BY/HAVING clause ?

Also, is it better to do :
SELECT ID, Element FROM tblentries;
or
SELECT ID, Element FROM tblentries WHERE ID > 0;

(In this last case, it won't be logged in the slow query log because it
uses an index...)
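For reference, this logging behaviour is driven by the server options below; a minimal my.cnf sketch (file path and threshold are illustrative, option names per the 4.x/5.0 manuals):

```ini
[mysqld]
# log statements slower than long_query_time seconds
log-slow-queries = /var/log/mysql/slow.log
long_query_time  = 2
# additionally log statements that use no index at all
log-queries-not-using-indexes
```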

Thank you,
HMax



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re[2]: Slow Queries Log and Index-less queries

2005-08-30 Thread HMax
Thanks Alexey,

This is enough explanation for me ;)

Cheers,
HMax


AP I have a question regarding the slow queries log, and queries not using 
index.

AP I have a small table with, say, 10 entries, like this:
AP ID |  Element
AP ---+---------
AP 1  |  One
AP 2  |  Two
AP 3  |  Three
AP 4  |  Four
AP 5  |  Five
AP 6  |  Six
AP 7  |  Seven
AP 8  |  Eight
AP 9  |  Nine
AP 10 |  Ten

AP I want to get all those entries:
AP SELECT ID, Element FROM tblentries;

AP The problem is that this query, even though very fast, is logged in the slow
AP query log because it does not use an index (I activated this option in the
AP slow query log). Wouldn't it be better if such queries were logged only when
AP there is a WHERE, ORDER BY or GROUP BY/HAVING clause ?

AP  
AP The slow query log, especially with log-long-format turned on, isn't made
AP for manual browsing anyway.
AP You'll have to use mysqldumpslow or some custom-made script to analyze it.
AP Queries that don't have a WHERE are easy to filter then.
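Alexey's filtering suggestion can be done with mysqldumpslow itself; a sketch (log path hypothetical): -s c sorts the summary by occurrence count, and -g keeps only entries whose statement matches a pattern, so index-less SELECTs without a WHERE simply drop out of the report:

```shell
# summarize the slow log, most frequent statements first,
# keeping only entries whose statement contains a WHERE clause
mysqldumpslow -s c -g "WHERE" /var/log/mysql/slow.log
```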

AP  


AP Also, is it better to do :
AP SELECT ID, Element FROM tblentries;
AP or
AP SELECT ID, Element FROM tblentries WHERE ID > 0;

AP (In this last case, it won't be logged in the slow query log because it
AP uses an index...)

AP  
AP It won't be logged only if it actually uses an index. In your example it
AP won't use the index; a full table scan will be used instead, because the
AP query optimizer can determine that all records match the WHERE condition.
AP  
AP  
AP  





Re: Subquery speed : IN (SELECT ...) versus IN (X, Y, Z, ...)

2005-02-25 Thread HMax
Hi there, thank you for your reply.

I of course indexed all the required fields.
The problem is that, whatever you do, if you have more than one value
in the IN or = ANY clause, the index won't be used, just like when you
use an OR. At least this is what I have noticed in the past.

Concerning the query time of the zipcode selection, it's an instant
query. It does not explain the difference between the two query times.


On Thu, 24 Feb 2005 23:28:20 +0200, Eli [EMAIL PROTECTED] wrote:
 Hi,
 
 You may put indexes on the following fields:
 
 1. cityname (in tblcity)
 2. zip (in tblpeople)
 
 I assume it will speed up your queries. If you already do have those
 indexes, then try to look at the 'EXPLAIN' of the first query (with
 sub-query), and see the column 'type' that describes you how the tables
 are joined.
 (see: http://dev.mysql.com/doc/mysql/en/explain.html )
 You may post here your EXPLAIN results.
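Eli's suggestion, applied to the queries from the original message, would be run like this (a sketch; the interesting output columns are `type`, `key` and `rows`):

```sql
EXPLAIN SELECT people FROM tblpeople
WHERE zip IN (SELECT zip FROM tblcity WHERE cityname = 'Paris');
```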
 
 Also note that when you use the second query (B), you also spend time on
 the SQL1 query, which you didn't count in its timing.
 
 -Eli
 
 
 HMax wrote:
  Hello list,
 
  We are currently tuning our query speeds, and we found that the
  ones using subqueries are quite a bit slower than the 'usual' ones.
  Here is an example of a weird behavior.
 
  We have a city list associated with zipcode, and user can search a
  database of people living in a given city. The problem is that a city
  can have several zip codes.
 
  Our first request is :
  SQL1 =  SELECT zip FROM tblcity WHERE cityname = 'Paris' 
 
  This request actually returns something like 20 results.
 
  The second request lists the people living in areas with those zip codes:
  SQL2 =  SELECT people FROM tblpeople WHERE zip IN (###) 
 
  In ### we can either put
  - A : SQL1
  - B :  the list built from a recordset opened on SQL1, which would give
  something like : '75000', '75001', '75002', '75003', etc...
 
  Queries speed are 0.16s for A, and 0.05s for B.
 
  Can anybody explain this behavior, and maybe offer some advice on
  optimizing our queries.
 
  Thanks
 
 
 
 


-- 
HMax




FULLTEXT Exact phrase search including quotes

2005-02-24 Thread HMax
Hi list,

I'm trying to figure out how to use exact phrase search in
fulltext boolean mode when the phrase to search includes double
quotes.

For instance, what if I want to search this exact phrase :
I like football on TV

I think I've tried all the solutions I'm aware of, without any result.
Any help would be appreciated!
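For comparison, a boolean-mode phrase search on a phrase without embedded quotes looks like this (table and column names are hypothetical); how to escape a double quote inside the phrase itself is exactly the open question here:

```sql
SELECT id, body
FROM tblarticles
WHERE MATCH(body) AGAINST('"I like football on TV"' IN BOOLEAN MODE);
```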

Thanks

-- 
HMax




Subquery speed : IN (SELECT ...) versus IN (X, Y, Z, ...)

2005-02-24 Thread HMax
Hello list,

We are currently tuning our query speeds, and we found that the
ones using subqueries are quite a bit slower than the 'usual' ones.
Here is an example of a weird behavior.

We have a city list associated with zipcode, and user can search a
database of people living in a given city. The problem is that a city
can have several zip codes.

Our first request is :
SQL1 =  SELECT zip FROM tblcity WHERE cityname = 'Paris' 

This request actually returns something like 20 results.

The second request lists the people living in areas with those zip codes:
SQL2 =  SELECT people FROM tblpeople WHERE zip IN (###) 

In ### we can either put
- A : SQL1
- B :  the list built from a recordset opened on SQL1, which would give
something like : '75000', '75001', '75002', '75003', etc...
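Spelled out, the two variants of SQL2 are (zip values illustrative):

```sql
-- A : the subquery form
SELECT people FROM tblpeople
WHERE zip IN (SELECT zip FROM tblcity WHERE cityname = 'Paris');

-- B : the literal list built by the application from SQL1's result
SELECT people FROM tblpeople
WHERE zip IN ('75000', '75001', '75002');
```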

Queries speed are 0.16s for A, and 0.05s for B.

Can anybody explain this behavior, and maybe offer some advice on
optimizing our queries.

Thanks

-- 
HMax




Re: LOAD INDEX INTO CACHE problem

2005-02-22 Thread HMax
Hi again,

I worked on the problem and found a way to make the LOAD INDEX INTO
CACHE work on my main tables now.

I actually found out that some of my indexes using varchar could be
optimized, because they aren't used for searching but for ordering
(alphabetical and such). So having an index length of 255 on such an
index isn't really useful, and I tuned it down to 24 characters. (I
didn't use more because it's a multi-column index with other field types.)
All my indexes now use a 1024 block size, and LOAD INDEX
INTO CACHE works like a charm, showing much improved performance with
all the RAM available.
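The index-length tuning described above amounts to something like this (table, index, and column names are hypothetical; 24 is the prefix length that brought the block size down to 1024 here):

```sql
-- replace a full-width varchar key with a 24-character prefix key
ALTER TABLE tbltest
  DROP INDEX BOB,
  ADD INDEX BOB (testvalue(24));
```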

Anyway, thanks again for all your explanations. It's always good to
know how things work in the core. I can better decide how to index my
tables knowing I want to load all their indexes into the cache at startup.

Happy programming !

On Mon, 21 Feb 2005 21:46:13 +0100, Sergei Golubchik [EMAIL PROTECTED] wrote:
 Hi!
 
 On Feb 21, HMax wrote:
  Thank you for your answer Sergei,
 
  It's all clear now, and I'm glad to know where the problem comes from.
 
  Now if I understand correctly, my only solution is to manage to create
  indexes in my tables that ALL have the same block size (1024). This
  would mean reducing the size of the indexes on my Varchar fields,
  which I think I can. What is the maximum number of characters I should
  use when indexing my Varchar so that the block size is 1024 ? (if possible of
  course). I have no idea how to calculate this.
 
 You'd better find out by trial and error.
 myisamchk -dvv shows the block size. It's enough to create an empty table
 and run myisamchk -dvv on it.
 
 The formula is in mi_create.c, but I would spend more time unrolling all
 the defines and deriving the max varchar length than you would spend on
 trial and error :)
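Sergei's check, spelled out (datadir path hypothetical; running it against an empty copy of the table is enough):

```shell
# print table and index details, including the block size of each key
myisamchk -dvv /var/lib/mysql/test/tbltest.MYI
```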
 
  And is there absolutly no way to force 1024 block size even for
  varchar ?
 
 No, but you can make it 2048 for normal indexes.
 Block length is a multiple of myisam_block_size, so if you set it to
 2048, all indexes will use it. (of course it'll be suboptimal for
 everything but the long varchar keys. And even if you have an index over
 VARCHAR(255), actual values are usually shorter, right ?)
 
 Regards,
 Sergei
 
 --
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /   Sergei Golubchik [EMAIL PROTECTED]
 / /|_/ / // /\ \/ /_/ / /__  MySQL AB, Senior Software Developer
 /_/  /_/\_, /___/\___\_\___/  Osnabrueck, Germany
   ___/  www.mysql.com
 


-- 
HMax




Re: LOAD INDEX INTO CACHE problem

2005-02-22 Thread HMax
One last question, maybe :)

Is there any way to empty the key buffer once the server is started ?
That would be handy :)

Thanks


On Tue, 22 Feb 2005 11:13:29 +0100, HMax [EMAIL PROTECTED] wrote:
 Hi again,
 
 I worked on the problem and found a way to make the LOAD INDEX INTO
 CACHE work on my main tables now.
 
 I actually found out that some of my indexes using varchar could be
 optimized, because they aren't used for searching but for ordering
 (alphabetical and such). So having an index length of 255 on such an
 index isn't really useful, and I tuned it down to 24 characters. (I
 didn't use more because it's a multi-column index with other field types.)
 All my indexes now use a 1024 block size, and LOAD INDEX
 INTO CACHE works like a charm, showing much improved performance with
 all the RAM available.
 
 Anyway, thanks again for all your explanations. It's always good to
 know how things work in the core. I can better decide how to index my
 tables knowing I want to load all their indexes into the cache at startup.
 
 Happy programming !
 
 On Mon, 21 Feb 2005 21:46:13 +0100, Sergei Golubchik [EMAIL PROTECTED] 
 wrote:
  Hi!
 
  On Feb 21, HMax wrote:
   Thank you for your answer Sergei,
  
   It's all clear now, and I'm glad to know where the problem comes from.
  
   Now if I understand correctly, my only solution is to manage to create
   indexes in my tables that ALL have the same block size (1024). This
   would mean reducing the size of the indexes on my Varchar fields,
    which I think I can. What is the maximum number of characters I should
    use when indexing my Varchar so that the block size is 1024 ? (if possible of
   course). I have no idea how to calculate this.
 
   You'd better find out by trial and error.
   myisamchk -dvv shows the block size. It's enough to create an empty table
   and run myisamchk -dvv on it.
  
   The formula is in mi_create.c, but I would spend more time unrolling all
   the defines and deriving the max varchar length than you would spend on
   trial and error :)
 
   And is there absolutly no way to force 1024 block size even for
   varchar ?
 
  No, but you can make it 2048 for normal indexes.
  Block length is a multiple of myisam_block_size, so if you set it to
  2048, all indexes will use it. (of course it'll be suboptimal for
  everything but the long varchar keys. And even if you have an index over
  VARCHAR(255), actual values are usually shorter, right ?)
 
  Regards,
  Sergei
 
  --
__  ___ ___   __
   /  |/  /_ __/ __/ __ \/ /   Sergei Golubchik [EMAIL PROTECTED]
  / /|_/ / // /\ \/ /_/ / /__  MySQL AB, Senior Software Developer
  /_/  /_/\_, /___/\___\_\___/  Osnabrueck, Germany
___/  www.mysql.com
 
 
 --
 HMax
 


-- 
HMax




Re: LOAD INDEX INTO CACHE problem

2005-02-21 Thread HMax
So this means we cannot combine both FULLTEXT and classical indexes if
we want to use LOAD INDEX INTO CACHE, and that we won't ever be able
to ?

How about being able to specify the indexes we want to load into the
cache ? It's supposed to work this way (but the docs say it doesn't
yet). This would solve the problem, I believe, if we could specify
which indexes we want in the cache.

What I don't understand is that when not cached using LOAD INDEX INTO
CACHE, FULLTEXT indexes can be in the cache, the ones on VARCHAR too,
and this does not seem to cause any trouble. But using LOAD INDEX, it
doesn't work. Is there really no workaround ? We have about 1.5 GB
of fulltext indexes, and if they were in cache, this would speed
things up so much !

Thx for your advices
HMax


On Fri, 18 Feb 2005 18:01:29 +0200, Gleb Paharenko
[EMAIL PROTECTED] wrote:
 Hello.
 
 Sergei Golubchik said that we can't change the value of the blocksize
 of a key (it is chosen in mi_create.c), and there is no workaround
 with this LOAD INDEX problem.
 
 HMax [EMAIL PROTECTED] wrote:
  Hello there.
 
  OK, I'll paste the results of the commands you asked for right after my
  reply, because we found out where the problem comes from.
  The myisamchk command showed that the index on the VarChar has a block
  size of 2048 instead of 1024. However, when I turn this index into a
  FULLTEXT one, the block size is back to 1024, and the LOAD INDEX INTO
  CACHE works.
 
  Now this is a problem because our huge table needs both our FULLTEXT
  indexes and some on VARCHAR fields too. At least we know where it
  comes from. Now, is there a fast solution ? We were waiting for this
  bug fix to set a release date for our application :/
 
  Thank you, and here are the results :
 
  SHOW CREATE TABLE `tbltest`;
 
  CREATE TABLE `tbltest` (
    `testid` int(10) unsigned NOT NULL auto_increment,
    `testvalue` varchar(100) NOT NULL default '',
    PRIMARY KEY  (`testid`),
    KEY `BOB` (`testvalue`)
  ) ENGINE=MyISAM DEFAULT CHARSET=utf8
 
  SHOW VARIABLES;
 
  +-+-+
  | Variable_name   | Value   |
  +-+-+
  | back_log| 50  |
  | basedir | D:\mysql\4.1\   |
  | binlog_cache_size   | 32768   |
  | bulk_insert_buffer_size | 8388608 |
  | character_set_client| utf8|
  | character_set_connection| utf8|
  | character_set_database  | utf8|
  | character_set_results   | utf8|
  | character_set_server| utf8|
  | character_set_system| utf8|
  | character_sets_dir  | D:\mysql\4.1\share\charsets/|
  | collation_connection| utf8_general_ci |
  | collation_database  | utf8_general_ci |
  | collation_server| utf8_general_ci |
  | concurrent_insert   | ON  |
  | connect_timeout | 5   |
  | datadir | D:\mysql\4.1\Data\  |
  | date_format | %Y-%m-%d|
  | datetime_format | %Y-%m-%d %H:%i:%s   |
  | default_week_format | 0   |
  | delay_key_write | ON  |
  | delayed_insert_limit| 100 |
  | delayed_insert_timeout  | 300 |
  | delayed_queue_size  | 1000|
  | expire_logs_days| 0   |
  | flush   | OFF |
  | flush_time  | 1800|
  | ft_boolean_syntax   | + -><()~*:""&|  |
  | ft_max_word_len | 84  |
  | ft_min_word_len | 4   |
  | ft_query_expansion_limit| 20  |
  | ft_stopword_file| (built-in)  |
  | group_concat_max_len| 1024|
  | have_archive| NO  |
  | have_bdb| NO  |
  | have_compress   | YES

Re: LOAD INDEX INTO CACHE problem

2005-02-21 Thread HMax
Thank you for your answer Sergei,

It's all clear now, and I'm glad to know where the problem comes from.

Now if I understand correctly, my only solution is to manage to create
indexes in my tables that ALL have the same block size (1024). This
would mean reducing the size of the indexes on my Varchar fields,
which I think I can. What is the maximum number of characters I should
use when indexing my Varchar so that the block size is 1024 ? (if
possible of course). I have no idea how to calculate this. And is there
absolutely no way to force a 1024 block size even for varchar ?

Thanks again for your enlightenment


On Mon, 21 Feb 2005 17:41:40 +0100, Sergei Golubchik [EMAIL PROTECTED] wrote:
 Hi!
 
 On Feb 21, HMax wrote:
  So this means we cannot combine both FULLTEXT and classical indexes if
  we want to use a LOAD INDEX INTO CACHE, and that we won't ever be able
  to ?
 
 No. FULLTEXT indexes now have small block size (1024) so they should
 load ok. Only long varchar indexes will be a problem (but not
 'ever', see below :)
 
  How about being able to specify the indexes we want to load into the
  cache ? It's supposed to work this way (but the docs say it doesn't
  yet). This would solve the problem, I believe, if we could specify
  which indexes we want in the cache.
 
 Right, it's in the TODO.
 Here's the problem: LOAD INDEX reads the complete MYI file
 sequentially, block after block, and loads the blocks into the cache.
 If blocks had different sizes this would not be possible, because the
 block header does not store the block size.
 
 Loading only a selected index does not work either, because the block
 header does not store which index it belongs to.
 
 The only solution would be to traverse the index tree from the root -
 but that would be slow, because it implies random reads from the index
 file :(
 
 Instead, we plan to store index number in every block, but it means
 incompatible change in MYI file format, so it's not for 4.1 (and not
 even for 5.0 which is almost frozen now).
 
  What I don't understand is that when not cached using LOAD INDEX INTO
  CACHE, FULLTEXT indexes can be in the cache, the ones on VARCHAR too,
  and this does not seem to cause any trouble.
 
 See above, regular btree traversal is not a problem. Sequential MYI file
 access is.
 
  But using LOAD INDEX, it doesn't work. Is there really no workaround ?
  We have about 1.5 GB of fulltext indexes, and if they were in cache,
  this would speed things up so much !
 
 It's fixed in 4.1.8.
 
 Regards,
 Sergei
 
 --
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /   Sergei Golubchik [EMAIL PROTECTED]
 / /|_/ / // /\ \/ /_/ / /__  MySQL AB, Senior Software Developer
 /_/  /_/\_, /___/\___\_\___/  Osnabrueck, Germany
   ___/  www.mysql.com
 


-- 
HMax




Re: LOAD INDEX INTO CACHE problem

2005-02-17 Thread HMax
Hello there.

OK, I'll paste the results of the commands you asked for right after my
reply, because we found out where the problem comes from.
The myisamchk command showed that the index on the VarChar has a block
size of 2048 instead of 1024. However, when I turn this index into a
FULLTEXT one, the block size is back to 1024, and the LOAD INDEX INTO
CACHE works.

Now this is a problem because our huge table needs both our FULLTEXT
indexes and some on VARCHAR fields too. At least we know where it
comes from. Now, is there a fast solution ? We were waiting for this
bug fix to set a release date for our application :/

Thank you, and here are the results :


SHOW CREATE TABLE `tbltest`;


CREATE TABLE `tbltest` (
  `testid` int(10) unsigned NOT NULL auto_increment,
  `testvalue` varchar(100) NOT NULL default '',
  PRIMARY KEY  (`testid`),
  KEY `BOB` (`testvalue`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8


SHOW VARIABLES;

+-+-+
| Variable_name   | Value   |
+-+-+
| back_log| 50  |
| basedir | D:\mysql\4.1\   |
| binlog_cache_size   | 32768   |
| bulk_insert_buffer_size | 8388608 |
| character_set_client| utf8|
| character_set_connection| utf8|
| character_set_database  | utf8|
| character_set_results   | utf8|
| character_set_server| utf8|
| character_set_system| utf8|
| character_sets_dir  | D:\mysql\4.1\share\charsets/|
| collation_connection| utf8_general_ci |
| collation_database  | utf8_general_ci |
| collation_server| utf8_general_ci |
| concurrent_insert   | ON  |
| connect_timeout | 5   |
| datadir | D:\mysql\4.1\Data\  |
| date_format | %Y-%m-%d|
| datetime_format | %Y-%m-%d %H:%i:%s   |
| default_week_format | 0   |
| delay_key_write | ON  |
| delayed_insert_limit| 100 |
| delayed_insert_timeout  | 300 |
| delayed_queue_size  | 1000|
| expire_logs_days| 0   |
| flush   | OFF |
| flush_time  | 1800|
| ft_boolean_syntax   | + -><()~*:""&|  |
| ft_max_word_len | 84  |
| ft_min_word_len | 4   |
| ft_query_expansion_limit| 20  |
| ft_stopword_file| (built-in)  |
| group_concat_max_len| 1024|
| have_archive| NO  |
| have_bdb| NO  |
| have_compress   | YES |
| have_crypt  | NO  |
| have_csv| NO  |
| have_example_engine | NO  |
| have_geometry   | YES |
| have_innodb | DISABLED|
| have_isam   | NO  |
| have_ndbcluster | NO  |
| have_openssl| NO  |
| have_query_cache| YES |
| have_raid   | NO  |
| have_rtree_keys | YES |
| have_symlink| YES |
| init_connect| |
| init_file   | |
| init_slave  | |
| innodb_additional_mem_pool_size | 2097152 |
| innodb_autoextend_increment | 8   |
| innodb_buffer_pool_awe_mem_mb   | 0   |
| innodb_buffer_pool_size | 8388608 |
| 

LOAD INDEX INTO CACHE problem

2005-02-16 Thread HMax
Hello there,

We have a problem with the LOAD INDEX INTO CACHE command, which is
supposed to be fixed in version 4.1.10.

It may be fixed, but we haven't found the way to make it work.

We want to load all the indexes of one of our big tables into the main key cache.
This table is MyISAM, and has all sorts of indexes, including UNIQUE
and FULLTEXT.

When we try to load the indexes into the cache, we get the following error :
Indexes use different block size - Operation Failed

Now we created a small test table with two columns :
an integer primary key, and a varchar(100) filled with 10 MD5
values of NOW().
When the varchar column is not indexed, the command works fine, but when
we index it, LOAD INDEX INTO CACHE returns the same error. This means
it is not the FULLTEXT index which creates the problem.

The key buffer block size is set to 1024.
We tried changing it to 2048, but that didn't help.
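For context, the statements involved look like this in 4.1 syntax (the named key cache and its size are illustrative; CACHE INDEX is optional if the default key cache is used):

```sql
-- give the table's indexes a dedicated key cache (optional)
SET GLOBAL hot_cache.key_buffer_size = 128*1024*1024;
CACHE INDEX tbltest IN hot_cache;

-- preload the indexes; this is the statement that fails with
-- "Indexes use different block size" when key block sizes differ
LOAD INDEX INTO CACHE tbltest;
```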

Any help would be greatly appreciated.
Otherwise, I'll post a bug report.

Thank you
