Order column in the second table

2014-06-12 Thread Lay András
Hi!

I have two tables:

CREATE TABLE `egyik` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `duma` varchar(255) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `e_idx` (`duma`,`id`)
) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=utf8;

INSERT INTO `egyik` VALUES (1,'aaa'),(2,'bbb');

CREATE TABLE `masik` (
  `id` int(10) unsigned NOT NULL,
  `szam` int(10) unsigned NOT NULL,
  KEY `m_idx` (`id`,`szam`),
  CONSTRAINT `masik_ibfk_1` FOREIGN KEY (`id`) REFERENCES `egyik`
(`id`) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

INSERT INTO `masik` VALUES (1,7),(2,6);

I execute this query:

(root@localhost) [test]> explain select e.id from egyik e,masik m
where e.id=m.id and e.duma='aaa' order by m.szam asc;
+----+-------------+-------+------+---------------+-------+---------+-----------+------+-----------------------------------------------------------+
| id | select_type | table | type | possible_keys | key   | key_len | ref       | rows | Extra                                                     |
+----+-------------+-------+------+---------------+-------+---------+-----------+------+-----------------------------------------------------------+
|  1 | SIMPLE      | e     | ref  | PRIMARY,e_idx | e_idx | 767     | const     |    1 | Using where; Using index; Using temporary; Using filesort |
|  1 | SIMPLE      | m     | ref  | m_idx         | m_idx | 4       | test.e.id |    1 | Using index                                               |
+----+-------------+-------+------+---------------+-------+---------+-----------+------+-----------------------------------------------------------+
2 rows in set (0.00 sec)

How can I eliminate the Using temporary and Using filesort messages?

Thank you,

Lay

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/mysql



Re: Order column in the second table

2014-06-12 Thread Antonio Fernández Pérez
Hi Lay,

If I'm not mistaken, you can't eliminate Using temporary and Using
filesort because you are using an ORDER BY. Try the EXPLAIN again
with the ORDER BY removed and check the output.
When you use an ORDER BY, MySQL needs to run a filesort and spends some
time sorting the result set; it also builds a temporary table with the
rows first. That is the reason.
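[Editor's note: whether an ORDER BY triggers a filesort actually depends on whether the sort order can be read straight off an index. A minimal sketch below, with hypothetical table and column names not from this thread; exact behavior varies by optimizer version.]

```sql
-- If the ORDER BY columns form a left prefix of a usable index,
-- MySQL can return rows in index order and skip the filesort.
CREATE TABLE t (a INT, b INT, KEY ab_idx (a, b));

EXPLAIN SELECT a, b FROM t WHERE a = 1 ORDER BY b;  -- a is fixed, so b follows
                                                    -- ab_idx order: no filesort
EXPLAIN SELECT a, b FROM t ORDER BY b;              -- b alone is not a prefix
                                                    -- of any index: filesort
```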

Regards,

Antonio.


Re: Order column in the second table

2014-06-12 Thread Lay András
Hi!

On Thu, Jun 12, 2014 at 1:36 PM, Antonio Fernández Pérez
antoniofernan...@fabergames.com wrote:

> Hi Lay,
>
> If I'm not mistaken, you can't eliminate Using temporary and Using
> filesort because you are using an ORDER BY. Try the EXPLAIN again
> with the ORDER BY removed and check the output.

Thank you, I know; without the ORDER BY there is no problem:

(root@localhost) [test]> explain select e.id from egyik e,masik m
where e.id=m.id and e.duma='aaa';
+----+-------------+-------+------+---------------+-------+---------+-----------+------+--------------------------+
| id | select_type | table | type | possible_keys | key   | key_len | ref       | rows | Extra                    |
+----+-------------+-------+------+---------------+-------+---------+-----------+------+--------------------------+
|  1 | SIMPLE      | e     | ref  | PRIMARY,e_idx | e_idx | 767     | const     |    1 | Using where; Using index |
|  1 | SIMPLE      | m     | ref  | m_idx         | m_idx | 4       | test.e.id |    1 | Using index              |
+----+-------------+-------+------+---------------+-------+---------+-----------+------+--------------------------+
2 rows in set (0.02 sec)

> When you use an ORDER BY, MySQL needs to run a filesort and spends some
> time sorting the result set; it also builds a temporary table with the
> rows first. That is the reason.

Not in every situation. If the ORDER BY column is in the first table, there is no problem either:

(root@localhost) [test]> explain select e.id from egyik e,masik m
where e.id=m.id and e.duma='aaa' order by e.duma asc;
+----+-------------+-------+------+---------------+-------+---------+-----------+------+--------------------------+
| id | select_type | table | type | possible_keys | key   | key_len | ref       | rows | Extra                    |
+----+-------------+-------+------+---------------+-------+---------+-----------+------+--------------------------+
|  1 | SIMPLE      | e     | ref  | PRIMARY,e_idx | e_idx | 767     | const     |    1 | Using where; Using index |
|  1 | SIMPLE      | m     | ref  | m_idx         | m_idx | 4       | test.e.id |    1 | Using index              |
+----+-------------+-------+------+---------------+-------+---------+-----------+------+--------------------------+
2 rows in set (0.00 sec)

But I need to order by a column in the second table.

Bye!

Lay




RE: mysqlcheck downtime > 30 minutes for Java web application

2014-06-12 Thread Tyfanie Wineriter
I would love to know the answer to this as well.

I know there are a number of functions of mysqlcheck...
-c to check for corruption and ensure nothing needs to be repaired
-o to optimize, which is kind of like defragmenting, removing deleted rows...
and this may be the one that is causing the 100% duplication and downtime.
-r to repair what -c found, or if there was some unanticipated downtime.

I'm trying to determine what the best practice is with MySQL databases. As an
Oracle DBA, I run stats fairly often for the CBO, but that doesn't look like
the same thing that -o does.
I'm guessing that for a medium-usage database, I'd use -o during a scheduled
downtime, perhaps alongside an upgrade, whether that is monthly or whichever,
but run -c more often to check for corruption.

Does anyone have any documentation or knowledge of best practices, especially 
for version 5.6?

~ Tyfanie Wineriter ~

Database Administrator
University of Oregon
1212 University of Oregon
Eugene, OR 97402-1212
(541) 346-1366


-Original Message-
From: Lukas Lehner [mailto:webleh...@gmail.com] 
Sent: Wednesday, June 11, 2014 6:38 AM
To: mysql
Subject: mysqlcheck downtime > 30 minutes for Java web application

Hi

I read in MySQL, Fifth Edition by Paul DuBois that we should do a regular 
mysqlcheck.

We have all InnoDB tables. As far as I understand, the tables get read- and
write-locked during mysqlcheck.

The database will grow 100x in size; at the moment it holds only the initial
load.

That means a long downtime.

$ time mysqlcheck --all-databases
[...]

real    29m41.449s
user    0m0.019s
sys     0m0.015s

mysql> SELECT table_schema "Data Base Name", SUM( data_length +
index_length) / 1024 / 1024
    -> "Data Base Size in MB" FROM information_schema.TABLES GROUP BY
table_schema ;
+--------------------+----------------------+
| Data Base Name     | Data Base Size in MB |
+--------------------+----------------------+
| imzweffefr         |       38420.97170544 |
| information_schema |           0.00781250 |
| mysql              |           0.63046169 |
+--------------------+----------------------+
3 rows in set (5.86 sec)

Do you all run mysqlcheck? Is it really mandatory? What's the risk if I don't
do it regularly?




Re: Order column in the second table

2014-06-12 Thread yoku ts.
Hi,

Would you try STRAIGHT_JOIN?

mysql56> ALTER TABLE masik DROP KEY idx_test, ADD KEY idx_test(szam, id);
Query OK, 0 rows affected (0.08 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql56> EXPLAIN SELECT e.id FROM masik m STRAIGHT_JOIN egyik e ON e.id=
m.id WHERE e.duma= 'aaa' ORDER BY m.szam ASC;
+----+-------------+-------+--------+---------------+----------+---------+---------+------+-------------+
| id | select_type | table | type   | possible_keys | key      | key_len | ref     | rows | Extra       |
+----+-------------+-------+--------+---------------+----------+---------+---------+------+-------------+
|  1 | SIMPLE      | m     | index  | m_idx         | idx_test | 8       | NULL    |    2 | Using index |
|  1 | SIMPLE      | e     | eq_ref | PRIMARY,e_idx | PRIMARY  | 4       | d2.m.id |    1 | Using where |
+----+-------------+-------+--------+---------------+----------+---------+---------+------+-------------+
2 rows in set (0.00 sec)

This may be faster than the original plan if e.duma doesn't have good
cardinality, or if you can use a LIMIT clause.
It eliminates the temporary table and the filesort, but it needs a whole
index scan of m.idx_test.
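[Editor's note: a hedged sketch of the LIMIT point above. With the idx_test(szam, id) index from the ALTER, rows come out of the index already sorted, so the scan can stop as soon as enough rows are produced.]

```sql
-- Sketch only: relies on the idx_test(szam, id) index added above.
SELECT e.id
FROM masik m STRAIGHT_JOIN egyik e ON e.id = m.id
WHERE e.duma = 'aaa'
ORDER BY m.szam ASC
LIMIT 10;  -- index order makes the scan stop early instead of
           -- walking the whole of idx_test
```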


Regards,

yoku0825




2014-06-12 20:42 GMT+09:00 Lay András and...@lay.hu:

> Hi!
>
> [...]
>
> But I need to order by a column in the second table.
>
> Bye!
>
> Lay



RE: mysqlcheck downtime > 30 minutes for Java web application

2014-06-12 Thread Martin Gainty
Start here to analyze your tables.

Martin Gainty
__

> From: tyfan...@uoregon.edu
> To: webleh...@gmail.com; mysql@lists.mysql.com
> Subject: RE: mysqlcheck downtime > 30 minutes for Java web application
> Date: Fri, 13 Jun 2014 00:02:48 +
>
> [...]

RE: mysqlcheck downtime > 30 minutes for Java web application

2014-06-12 Thread Martin Gainty
I would recommend starting here, to analyze your tables:
http://dev.mysql.com/doc/refman/5.1/en/analyze-table.html

Once you have evidence of full table scans (FTS) or unbalanced index
entries, then optimize:
http://dev.mysql.com/doc/refman/5.1/en/optimize-table.html
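[Editor's note: the analyze-then-optimize sequence above, sketched on one of the thread's tables; illustrative only, timings depend on table size.]

```sql
ANALYZE TABLE egyik;     -- refresh index statistics; quick, minimal locking
SHOW INDEX FROM egyik;   -- inspect Cardinality before deciding on a rebuild
OPTIMIZE TABLE egyik;    -- for InnoDB this rebuilds the table and indexes,
                         -- which can be slow on large tables
```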

Martin Gainty
__
Member

> From: tyfan...@uoregon.edu
> To: webleh...@gmail.com; mysql@lists.mysql.com
> Subject: RE: mysqlcheck downtime > 30 minutes for Java web application
> Date: Fri, 13 Jun 2014 00:02:48 +
>
> [...]