Re: great problem with questions
* DeRyl
> * Roger Baklund
> The first thing to notice: "Using temporary"... this is to be avoided,
> if possible.
>
> ## how is the correct way to avoid that?

Depends, in this case I think it is because of the DISTINCT.

> The first table read is klientslowo, based on the criteria
> klientslowo.klientslowo LIKE 'sam%'. Is this a reasonable approach to
> solving the query, given your knowledge of the database structure and
> the data distribution? MySQL estimates that 351 records match this
> criteria, is this close to the truth? If not, run ANALYZE TABLE.
>
> ## after analyze I saw:
>
> Table            Op       Msg_type  Msg_text
> ikt.klientslowo  analyze  status    Table is already up to date

You did not answer my questions...

> You should not need the DISTINCT keyword. There is no PRIMARY KEY in
> the branzaslowa table, it seems to have duplicates?
>
> ## it shouldn't, so I'll try to select without DISTINCT

The EXPLAIN output indicates that 3 rows would match in the branzaslowa
table based on klientbranza.branzaid... I guess it should be only one,
and if it was, the query could be up to three times faster.

> You use a lot of bigints... an unsigned int is usually sufficient, it
> consumes half the disk/RAM space... this could be important for the
> speed of the indexes. Optimally you would want all your indexes to fit
> in RAM, i.e. the sum of the sizes of the indexes should be smaller
> than your index buffer size (key_buffer_size).
>
> ## some of the tables [like klient, klientslowa, klientulice,
> klientmiejscowosci etc...] use in the klientid field a numeric value
> which is 12 or 14 characters long, so how to correct that?

Your klientid column is a BIGINT. This column type is stored in a
binary format; it always consumes 8 bytes. You could keep this column
in the klient table and create a new internal id column. The datatype
should be unsigned int, maybe with auto_increment. Use this new column
as the primary key and in any table that references the klient table.
The existing klientid should have a UNIQUE index in the klient table,
and be replaced by the new column in every other table.

--
Roger

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
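Roger's restructuring advice could be sketched in SQL roughly as follows. This is a sketch only: the new `id` and `klient_id` column names and the multi-table UPDATE backfill are assumptions, not part of the original advice, and the statements target the MySQL 4.0-era MyISAM tables used in this thread.

```sql
-- Add a compact 4-byte surrogate key to klient, keeping the existing
-- 14-digit klientid as a UNIQUE lookup column (hypothetical name "id"):
ALTER TABLE klient
    ADD COLUMN id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    ADD UNIQUE KEY klientid_uniq (klientid);

-- Referencing tables would then carry the new 4-byte id instead of the
-- 8-byte BIGINT, e.g. for klientslowa (hypothetical column klient_id):
ALTER TABLE klientslowa ADD COLUMN klient_id INT UNSIGNED;
UPDATE klientslowa, klient
   SET klientslowa.klient_id = klient.id
 WHERE klientslowa.klientid = klient.klientid;
```

The same backfill would be repeated for klientulice, klientmiejscowosci, klientbranza and the other tables that reference klient, after which the old BIGINT columns and their indexes can be dropped from those tables.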
Re: great problem with questions
"DeRyl" <[EMAIL PROTECTED]> wrote on 09/23/2004 09:51:45 AM:

> The fact that you are joining 10 tables together in one query may be a
> major portion of your performance problem. Even if the *average* size
> of each table is only 10 rows, the MySQL engine will have to pore over
> 10x10x10x10x10x10x10x10x10x10 = 10 to the 10th power = 1e10 =
> 10,000,000,000 (10 billion) combinations of data rows before it gets
> to even think about some of your WHERE conditions.
>
> ## oh I understand...
>
> I have a few suggestions:
>
> FIRST - Rewrite your query to use the explicit INNER JOIN form of
> table joins and NOT the comma-join form you are currently using. The
> explicit form is much MUCH easier to double-check that you have
> properly JOINed each table to each other table. Without properly
> defined join conditions you will create a Cartesian product between
> two or more of your tables. Cartesian products are rarely good things
> for query performance.
>
> ## can you show me an example?
> I'm not sure I understand your suggestion correctly...

This is my attempt to convert your implicit comma joins into explicit
INNER JOIN statements. I also reformatted your query to get rid of
email-wrapping and applied some aliases to your table names.

SELECT DISTINCT logo
    , klient.klientid
    , klientnazwa
    , struktura
    , concat(kodpocztowy,' ',miejscowosc) miasto
    , aparatnumer
    , concat(ulicaskrot,' ',ulicanazwa,' ',posesja) ulica
    , concat('woj. ',wojewodztwo,' powiat:',powiat) wojpow
    , klientbranza branza
    , email
    , www
    , wizytowka
FROM klient
INNER JOIN klientulice
    ON klient.klientid = klientulice.klientid
    INNER JOIN klientulica
        ON klientulice.klientulicaid = klientulica.klientulicaid
INNER JOIN klientmiejscowosci
    ON klient.klientid = klientmiejscowosci.klientid
    INNER JOIN klientmiejscowosc
        ON klientmiejscowosci.klientmiejscowoscid = klientmiejscowosc.klientmiejscowoscid
INNER JOIN klientbranza
    ON klient.klientid = klientbranza.klientid
    INNER JOIN branzaslowa
        ON klientbranza.branzaid = branzaslowa.branzaid
        INNER JOIN branzaslowo
            ON branzaslowa.branzaslowoid = branzaslowo.branzaslowoid
INNER JOIN klientslowa
    ON klient.klientid = klientslowa.klientid
    INNER JOIN klientslowo
        ON klientslowa.klientslowoid = klientslowo.klientslowoid
WHERE wojewodztwoid = 7
    AND klientulica.klientulica LIKE 'dwo%'
    AND klientmiejscowosc.klientmiejscowosc LIKE 'war%'
    AND branzaslowo.branzaslowo LIKE 'sam%'
    AND klientslowo.klientslowo LIKE 'sam%'

I staggered your sub-joins by one indention so that you could recognize
some possible points of creating little mini-joins later on a partial
result set. (See step 4)

> SECOND - Check your indexes. Make sure that you are joining tables on
> columns that appear first in at least one index. Carefully consider
> creating a multiple column index or two (especially if this is a
> common query and/or you use those conditions often).
>
> ## I usually use indexes on key fields [like klientid...] and on
> launching tables [like klientslowa] one index on both two fields...
>
> THIRD - Consider evaluating this query in stages. Based on your
> re-write to eliminate the comma-joins, I can help you with that, too.
>
> ## ? what do you mean?

I mean do not try to do everything in one massive query. There are
times, especially when performing more than about 6 or 7 joins in a
single query, that breaking one larger operation into two smaller ones
is the more efficient way to go. I wish I knew more Polish as it would
help me to break up this query better. What you want to do in each
stage is to build a temporary table that holds just enough information
in it to build the next stage.

CREATE TEMPORARY TABLE tmpStage1 (KEY(klientid))
SELECT klient.klientid
    , ... (all of the computed values that rely on data from any of the
      tables participating in this stage or values to be used in future
      stages) ...
FROM klient
INNER JOIN klientulice
    ON klient.klientid = klientulice.klientid
INNER JOIN klientulica
    ON klientulice.klientulicaid = klientulica.klientulicaid
WHERE wojewodztwoid = 7
    AND klientulica.klientulica LIKE 'dwo%'

CREATE TEMPORARY TABLE tmpStage2 (KEY(klientid))
SELECT t1.klientid
    , t1.logo
    , t1.klientnazwa
    , ... (other t1 fields) ...
    , ... (computed fields using values from these tables and values to
      combine with values from the next stage's tables) ...
FROM tmpStage1 t1
INNER JOIN klientmiejscowosci
    ON t1.klientid = klientmiejscowosci.klientid
INNER JOIN klientmiejscowosc
    ON klientmiejscowosci.klientmiejscowoscid = klientmiejscowosc.klientmiejscowoscid
WHERE klientmiejscowosc.klientmiejscowosc LIKE 'war%'

... repeat until you have added together the data from all but the very
last tables. Then you just SELECT the final result from the last
temporary table joined to the remaining tables.
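Carrying Shawn's staged pattern through to the end, the last step might look like the sketch below. The tmpStage3 table is hypothetical (a third stage over klientbranza/branzaslowa/branzaslowo, built the same way as the first two), and the column list is abbreviated.

```sql
-- Final stage of the staged rewrite (sketch): join the last temporary
-- table to the remaining word tables and apply the last filter.
SELECT DISTINCT t3.logo, t3.klientid, t3.klientnazwa /* , ... */
FROM tmpStage3 t3
INNER JOIN klientslowa
    ON t3.klientid = klientslowa.klientid
INNER JOIN klientslowo
    ON klientslowa.klientslowoid = klientslowo.klientslowoid
WHERE klientslowo.klientslowo LIKE 'sam%';
```

Each stage only carries forward rows that already passed the earlier filters, so the later joins operate on a small partial result set instead of the full tables.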
Re: great problem with questions
The fact that you are joining 10 tables together in one query may be a
major portion of your performance problem. Even if the *average* size
of each table is only 10 rows, the MySQL engine will have to pore over
10x10x10x10x10x10x10x10x10x10 = 10 to the 10th power = 1e10 =
10,000,000,000 (10 billion) combinations of data rows before it gets to
even think about some of your WHERE conditions.

I have a few suggestions:

FIRST - Rewrite your query to use the explicit INNER JOIN form of table
joins and NOT the comma-join form you are currently using. The explicit
form is much MUCH easier to double-check that you have properly JOINed
each table to each other table. Without properly defined join
conditions you will create a Cartesian product between two or more of
your tables. Cartesian products are rarely good things for query
performance.

SECOND - Check your indexes. Make sure that you are joining tables on
columns that appear first in at least one index. Carefully consider
creating a multiple column index or two (especially if this is a common
query and/or you use those conditions often).

THIRD - Consider evaluating this query in stages. Based on your
re-write to eliminate the comma-joins, I can help you with that, too.

FOURTH - Consider using table aliases. It may make your SELECT and
WHERE clauses easier to read.

I look forward to seeing the rewrite.

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

"DeRyl" <[EMAIL PROTECTED]> wrote on 09/23/2004 05:50:21 AM:

> hello,
>
> I have a database with 30 tables [some have over 2000k, some others
> over 4000k rec]
>
> when I want to run a sql with a few conditions the answer is
> dramatically slow [over 70 seconds!]
> sql-s with one condition usually work well..
>
> how is the correct way to optimize the database and sql questions?
> the answer from sql should be under 1 second...
> SHOW VARIABLES gives these informations:
>
> Variable_name Value
> back_log 50
> basedir /usr/local/mysql/
> bdb_cache_size 8388600
> bdb_log_buffer_size 32768
> bdb_home /dysk/mysql/data/
> bdb_max_lock 1
> bdb_logdir
> bdb_shared_data OFF
> bdb_tmpdir /tmp/
> bdb_version Sleepycat Software: Berkeley DB 3.2.9a: (May 14, 2...
> binlog_cache_size 32768
> bulk_insert_buffer_size 8388608
> character_set latin2
> character_sets latin1 big5 czech euc_kr gb2312 gbk latin1_de sjis...
> concurrent_insert ON
> connect_timeout 5
> convert_character_set
> datadir /dysk/mysql/data/
> default_week_format 0
> delay_key_write ON
> delayed_insert_limit 100
> delayed_insert_timeout 300
> delayed_queue_size 1000
> flush OFF
> flush_time 0
> ft_boolean_syntax + -><()~*:""&|
> ft_min_word_len 4
> ft_max_word_len 254
> ft_max_word_len_for_sort 20
> ft_stopword_file (built-in)
> have_bdb YES
> have_crypt YES
> have_innodb YES
> have_isam YES
> have_raid YES
> have_symlink YES
> have_openssl NO
> have_query_cache YES
> init_file
> innodb_additional_mem_pool_size 1048576
> innodb_buffer_pool_size 8388608
> innodb_data_file_path ibdata1:10M:autoextend
> innodb_data_home_dir
> innodb_file_io_threads 4
> innodb_force_recovery 0
> innodb_thread_concurrency 8
> innodb_flush_log_at_trx_commit 1
> innodb_fast_shutdown ON
> innodb_flush_method
> innodb_lock_wait_timeout 50
> innodb_log_arch_dir ./
> innodb_log_archive OFF
> innodb_log_buffer_size 1048576
> innodb_log_file_size 5242880
> innodb_log_files_in_group 2
> innodb_log_group_home_dir ./
> innodb_mirrored_log_groups 1
> innodb_max_dirty_pages_pct 90
> interactive_timeout 28800
> join_buffer_size 131072
> key_buffer_size 8388600
> language /usr/local/mysql/share/mysql/polish/
> large_files_support ON
> license GPL
> local_infile ON
> locked_in_memory OFF
> log OFF
> log_update OFF
> log_bin OFF
> log_slave_updates OFF
> log_slow_queries OFF
> log_warnings ON
> long_query_time 10
> low_priority_updates OFF
> lower_case_file_system OFF
> lower_case_table_names 0
> max_allowed_packet 1048576
> max_binlog_cache_size 4294967295
> max_binlog_size 1073741824
> max_connections 100
> max_connect_errors 10
> max_delayed_threads 20
> max_insert_delayed_threads 20
> max_heap_table_size 16777216
> max_join_size 18446744073709551615
> max_relay_log_size 0
> max_seeks_for_key 4294967295
> max_sort_length 1024
> max_user_connections 0
> max_tmp_tables 32
> max_write_lock_count 4294967295
> myisam_max_extra_sort_file_size 268435456
> myisam_max_sort_file_size 2147483647
> myisam_repair_threads 1
> myisam_recover_options OFF
> myisam_sort_buffer_size 8388608
> net_buffer_length 16384
> net_read_timeout 30
> net_retry_count 10
> net_write_timeout 60
> new OFF
> open_files_limit 1024
> pid_file /dysk/mysql/data/mysqld.pid
> log_error
> port 3306
> protocol_version 10
> query_alloc_block_size 8192
> query_cache_limit 1048576
> query_cache_size 0
> query_cache_type ON
> query_prealloc_size 8192
> range_alloc_block_size 2048
> read_buffer_size 131072
> read_only OFF
> read_rnd_buffer_size 262144
> rpl_recovery_rank 0
> server_id 0
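Shawn's FIRST suggestion above can be illustrated with a minimal two-table sketch; the orders/customers tables are invented for illustration and are not part of this thread's schema.

```sql
-- Comma-join form: the join condition lives only in WHERE. Forget it,
-- and you silently get a Cartesian product of the two tables.
SELECT o.id, c.name
FROM orders o, customers c
WHERE o.customer_id = c.id;

-- Explicit form: the join condition is attached to the JOIN itself,
-- so a missing or wrong condition is much easier to spot.
SELECT o.id, c.name
FROM orders o
INNER JOIN customers c ON o.customer_id = c.id;
```

Both queries return the same rows; the point is purely that the explicit form makes each table's join condition visible next to the table it joins.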
Re: great problem with questions
> The first thing to notice: "Using temporary"... this is to be avoided,
> if possible.

## how is the correct way to avoid that?

> The first table read is klientslowo, based on the criteria
> klientslowo.klientslowo LIKE 'sam%'. Is this a reasonable approach to
> solving the query, given your knowledge of the database structure and
> the data distribution? MySQL estimates that 351 records match this
> criteria, is this close to the truth? If not, run ANALYZE TABLE.

## after analyze I saw:

Table            Op       Msg_type  Msg_text
ikt.klientslowo  analyze  status    Table is already up to date

> You should not need the DISTINCT keyword. There is no PRIMARY KEY in
> the branzaslowa table, it seems to have duplicates?

## it shouldn't, so I'll try to select without DISTINCT

> You use a lot of bigints... an unsigned int is usually sufficient, it
> consumes half the disk/RAM space... this could be important for the
> speed of the indexes. Optimally you would want all your indexes to fit
> in RAM, i.e. the sum of the sizes of the indexes should be smaller
> than your index buffer size (key_buffer_size).

## some of the tables [like klient, klientslowa, klientulice,
klientmiejscowosci etc...] use in the klientid field a numeric value
which is 12 or 14 characters long, so how to correct that?

> I took a look at the variables from your previous posting. You should
> increase the key_buffer_size, it is only 8M (the default). If you only
> use MyISAM tables, try setting it to half your total available RAM.
> This will not affect the speed of the first few queries you run after
> a restart, but when the caches get "hot" you get very improved speed.

## how to change that from shell? I know Apache a little...

regards in advance
Darek
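One way to answer the last ## question above, assuming MySQL 4.0.x as used in this thread: key_buffer_size can be raised for the running server from the mysql client, or made permanent in the option file. The 256M figure is only an illustration, not a recommendation for this particular machine.

```sql
-- Raise the MyISAM key buffer on the running server (value in bytes;
-- needs the SUPER privilege, and is lost on restart):
SET GLOBAL key_buffer_size = 268435456;

-- Verify the new value:
SHOW VARIABLES LIKE 'key_buffer_size';
```

To make the change survive restarts, put a line such as `key_buffer_size=256M` (or the older `set-variable = key_buffer_size=256M` form) under the `[mysqld]` section of my.cnf and restart the server.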
Re: great problem with questions
* DeRyl
> explain example sql question gives: [...]

The first thing to notice: "Using temporary"... this is to be avoided,
if possible.

The first table read is klientslowo, based on the criteria
klientslowo.klientslowo LIKE 'sam%'. Is this a reasonable approach to
solving the query, given your knowledge of the database structure and
the data distribution? MySQL estimates that 351 records match this
criteria, is this close to the truth? If not, run ANALYZE TABLE.

http://dev.mysql.com/doc/mysql/en/ANALYZE_TABLE.html

You should not need the DISTINCT keyword. There is no PRIMARY KEY in
the branzaslowa table, it seems to have duplicates?

You use a lot of bigints... an unsigned int is usually sufficient, it
consumes half the disk/RAM space... this could be important for the
speed of the indexes. Optimally you would want all your indexes to fit
in RAM, i.e. the sum of the sizes of the indexes should be smaller than
your index buffer size (key_buffer_size).

I took a look at the variables from your previous posting. You should
increase the key_buffer_size, it is only 8M (the default). If you only
use MyISAM tables, try setting it to half your total available RAM.
This will not affect the speed of the first few queries you run after a
restart, but when the caches get "hot" you get very improved speed.

http://dev.mysql.com/doc/mysql/en/Server_parameters.html
http://dev.mysql.com/doc/mysql/en/MyISAM_key_cache.html

--
Roger
Re: great problem with questions
On Thursday 23 September 2004 13:22, Stefan Kuhn wrote:
> This is probably due to having indexed only single columns. MySQL uses
> only one index per column, meaning if you have a where with two
> conditions you

Sorry, this should read "one index per table"...

> need a combined index for the columns used. If there are only indices
> for the single columns, only one index for one condition will be used,
> the rest requires a full table scan (this is explained somewhere in
> the docs, have a look).
> Stefan
>
> On Thursday 23 September 2004 11:50, DeRyl wrote:
> > hello,
> >
> > I have a database with 30 tables [some have over 2000k, some others
> > over 4000k rec]
> >
> > when I want to run a sql with a few conditions the answer is
> > dramatically slow [over 70 seconds!]
> > sql-s with one condition usually work well..
> >
> > how is the correct way to optimize the database and sql questions?
> > the answer from sql should be under 1 second...
> >
> > SHOW VARIABLES gives these informations:
> > [...]
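Stefan's point about combined indexes can be sketched against the thread's own klient table. The index name below is illustrative; the key fact is that MySQL 4.0 will generally use only one index per table in a query, and the schema posted in this thread already has only single-column keys on klient.

```sql
-- klient already has a single-column KEY on wojewodztwoid, so a query
-- filtering on wojewodztwoid AND struktura can use an index for one
-- condition only and must check the other row by row.
-- A combined (multi-column) index covers both conditions at once:
CREATE INDEX woj_struktura_idx ON klient (wojewodztwoid, struktura);

SELECT klientnazwa
FROM klient
WHERE wojewodztwoid = 7 AND struktura = 1;
```

Column order in the combined index matters: it serves queries filtering on wojewodztwoid alone or on both columns, but not on struktura alone.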
Re: great problem with questions
explain example sql question gives:

table               type    possible_keys                 key            key_len  ref                                     rows  Extra
klientslowo         range   PRIMARY,klientslowo           klientslowo    40       NULL                                    351   Using where; Using temporary
klientslowa         ref     klientslowoid                 klientslowoid  4        klientslowo.klientslowoid               19    Using index
klient              ref     klientid,wojewodztwoid        klientid       9        klientslowa.klientid                    1     Using where
klientmiejscowosci  ref     KLIENTMIEJSCOWOSCID,KLIENTID  KLIENTID       9        klient.klientid                         1     Using where; Distinct
klientmiejscowosc   eq_ref  PRIMARY,KLIENTMIEJSCOWOSC     PRIMARY        4        klientmiejscowosci.KLIENTMIEJSCOWOSCID  1     Using where; Distinct
klientulice         ref     klientulicaid,klientid        klientid       9        klient.klientid                         1     Using where; Distinct
klientbranza        ref     BRANZAID,KLIENTID             KLIENTID       9        klient.klientid                         2     Using where; Distinct
branzaslowa         ref     branzaslowoid,branzaid        branzaid       4        klientbranza.BRANZAID                   3     Distinct
branzaslowo         eq_ref  PRIMARY,branzaslowo           PRIMARY        4        branzaslowa.branzaslowoid               1     Using where; Distinct
klientulica         eq_ref  PRIMARY,klientulica           PRIMARY        4        klientulice.klientulicaid               1     Using where; Distinct

and the create table for explain tables:

CREATE TABLE `branzaslowa` (
  `branzaid` int(11) NOT NULL default '0',
  `branzaslowoid` int(11) NOT NULL default '0',
  KEY `branzaslowoid` (`branzaslowoid`),
  KEY `branzaid` (`branzaid`)
) TYPE=MyISAM;

CREATE TABLE `branzaslowo` (
  `branzaslowoid` int(11) NOT NULL auto_increment,
  `branzaslowo` varchar(32) default NULL,
  PRIMARY KEY (`branzaslowoid`),
  KEY `branzaslowo` (`branzaslowo`)
) TYPE=MyISAM AUTO_INCREMENT=1710;

CREATE TABLE `klient` (
  `klientid` bigint(14) default NULL,
  `klientnazwaorder` int(11) default NULL,
  `id_inst_nadrz` bigint(14) default NULL,
  `id_isnt_glown` bigint(14) default NULL,
  `wojewodztwoid` int(11) default NULL,
  `powiatid` int(11) default NULL,
  `gminaid` int(11) default NULL,
  `numerporzadkowy` int(11) default NULL,
  `klientnazwa` varchar(250) NOT NULL default '',
  `klientbranza` longtext,
  `ulicaskrot` varchar(20) default NULL,
  `ulicanazwa` varchar(255) default NULL,
  `posesja` varchar(100) default NULL,
  `miejscowosc` varchar(100) default NULL,
  `kodpocztowy` varchar(22) default NULL,
  `powiat` varchar(100) default NULL,
  `gmina` varchar(100) default NULL,
  `wojewodztwo` varchar(100) default NULL,
  `aparatnumer` longtext,
  `www` varchar(100) default NULL,
  `email` varchar(100) default NULL,
  `logo` varchar(100) default NULL,
  `wizytowka` varchar(100) default NULL,
  `wizytowkas` int(11) default NULL,
  `wizytowkaw` int(11) default NULL,
  `struktura` int(11) default NULL,
  `id_regionu` int(11) default NULL,
  `zrodlo` char(1) default NULL,
  `kwotareklam` double default NULL,
  KEY `klientid` (`klientid`),
  KEY `wojewodztwoid` (`wojewodztwoid`)
) TYPE=MyISAM;

CREATE TABLE `klientbranza` (
  `KLIENTID` bigint(14) default NULL,
  `BRANZAID` int(11) default NULL,
  `CZYPODSTAWOWA` int(11) default NULL,
  KEY `BRANZAID` (`BRANZAID`),
  KEY `KLIENTID` (`KLIENTID`)
) TYPE=MyISAM;

CREATE TABLE `klientmiejscowosc` (
  `KLIENTMIEJSCOWOSCID` int(11) NOT NULL default '0',
  `KLIENTMIEJSCOWOSC` char(64) default NULL,
  PRIMARY KEY (`KLIENTMIEJSCOWOSCID`),
  KEY `KLIENTMIEJSCOWOSC` (`KLIENTMIEJSCOWOSC`)
) TYPE=MyISAM;

CREATE TABLE `klientmiejscowosci` (
  `KLIENTID` bigint(14) default NULL,
  `KLIENTMIEJSCOWOSCID` int(11) default NULL,
  KEY `KLIENTMIEJSCOWOSCID` (`KLIENTMIEJSCOWOSCID`),
  KEY `KLIENTID` (`KLIENTID`)
) TYPE=MyISAM;

CREATE TABLE `klientslowa` (
  `klientid` bigint(14) NOT NULL default '0',
  `klientslowoid` int(11) NOT NULL default '0',
  KEY `klientslowoid` (`klientslowoid`,`klientid`)
) TYPE=MyISAM;

CREATE TABLE `klientslowo` (
  `klientslowoid` int(11) NOT NULL auto_increment,
  `klientslowo` varchar(40) NOT NULL default '',
  PRIMARY KEY (`klientslowoid`),
  KEY `klientslowo` (`klientslowo`)
) TYPE=MyISAM AUTO_INCREMENT=228453;

CREATE TABLE `klientulica` (
  `klientulicaid` int(11) NOT NULL default '0',
  `klientulica` char(96) default NULL,
  PRIMARY KEY (`klientulicaid`),
  KEY `klientulica` (`klientulica`)
) TYPE=MyISAM;

CREATE TABLE `klientulice` (
  `klientid` bigint(14) default NULL,
  `klientulicaid` int(11) default NULL,
  KEY `klientulicaid` (`klientulicaid`),
  KEY `klientid` (`klientid`)
) TYPE=MyISAM;

---
Darek

----- Original Message -----
From: "Roger Baklund" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc: "DeRyl" <[EMAIL PROTECTED]>
Sent: Thursday, September 23, 2004 1:19 PM
Subject: Re: great problem with questions

[...]
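Roger's observation elsewhere in the thread (branzaslowa has no PRIMARY KEY and seems to contain duplicates) suggests one concrete fix for the schema above. This is a sketch and assumes each (branzaid, branzaslowoid) pair should occur only once; the rebuild-through-a-copy approach and the table name branzaslowa_new are assumptions.

```sql
-- Rebuild the link table with a composite primary key; INSERT IGNORE
-- silently drops the duplicate rows during the copy.
CREATE TABLE `branzaslowa_new` (
  `branzaid` int(11) NOT NULL default '0',
  `branzaslowoid` int(11) NOT NULL default '0',
  PRIMARY KEY (`branzaid`,`branzaslowoid`),
  KEY `branzaslowoid` (`branzaslowoid`)
) TYPE=MyISAM;

INSERT IGNORE INTO branzaslowa_new SELECT * FROM branzaslowa;

-- After checking the copy, swap it in:
-- DROP TABLE branzaslowa;
-- RENAME TABLE branzaslowa_new TO branzaslowa;
```

With the duplicates gone, the DISTINCT in the big query should no longer be needed, which in turn avoids the "Using temporary" step in the EXPLAIN output.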
Re: great problem with questions
This is probably due to having indexed only single columns. MySQL uses
only one index per column, meaning if you have a where with two
conditions you need a combined index for the columns used. If there are
only indices for the single columns, only one index for one condition
will be used, the rest requires a full table scan (this is explained
somewhere in the docs, have a look).
Stefan

On Thursday 23 September 2004 11:50, DeRyl wrote:
> hello,
>
> I have a database with 30 tables [some have over 2000k, some others
> over 4000k rec]
>
> when I want to run a sql with a few conditions the answer is
> dramatically slow [over 70 seconds!]
> sql-s with one condition usually work well..
>
> how is the correct way to optimize the database and sql questions?
> the answer from sql should be under 1 second...
>
> SHOW VARIABLES gives these informations:
> [...]
>
> a good example of question is:
>
> select distinct logo, klient.klientid, klientnazwa, struktura,
> concat(kodpocztowy,' ',miejscowosc) miasto, aparatnumer,
> concat(ulicaskrot,' ',ulicanazwa,' ',posesja) ulica,
> concat('woj. ',wojewodztwo,' powiat: ',powiat) wojpow,
> klientbranza branza, email, www, wizytowka
> from klient, klientulice, klientulica, klientmiejscowosci,
> klientmiejscowosc, branzaslowa, branzaslowo, klientbranza,
> klientslowa, klientslowo
> where wojewodztwoid=7 AND klientulica.klientulica like 'dwo%' AND
> klient.klient
Re: great problem with questions
* DeRyl
> I have a database with 30 tables [some have over 2000k, some others
> over 4000k rec]
>
> when I want to run a sql with a few conditions the answer is
> dramatically slow [over 70 seconds!]
> sql-s with one condition usually work well..
>
> how is the correct way to optimize the database and sql questions?
> the answer from sql should be under 1 second...

I agree. For a start, show us the output of

EXPLAIN select distinct logo, klient.klientid ...

Then we will probably need the output of SHOW CREATE TABLE for some of
the tables, depending on the output of the EXPLAIN command.

--
Roger
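What Roger asks for can be gathered with two statements; the query text below is abbreviated from the original posting with comment placeholders.

```sql
-- Prefix the slow query with EXPLAIN to see the chosen access plan:
EXPLAIN SELECT DISTINCT logo, klient.klientid /* , ...rest of the
original select list... */
FROM klient /* , ...remaining tables... */
WHERE wojewodztwoid = 7 /* AND ...remaining conditions... */;

-- Show the full definition, including indexes, of a table involved:
SHOW CREATE TABLE klient;
```

EXPLAIN does not run the query; it only reports which indexes and join order the optimizer would use, so it is safe to run even on a slow statement.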