Re: Mystery error in GRANT statement

2016-10-04 Thread James Moe
On 10/03/2016 08:16 PM, Richard wrote:
> If
> you want/need to use it I believe you need to use the "backtick" to
> quote the name
>
  Yes, that worked. Thank you.
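  For the archive, a sketch of the working statement: the database name
takes backticks (the single quotes around it are what triggered ERROR
1064). The user name below is a placeholder, and the password stays
masked as in the original:

  GRANT ALL ON `sma-v4-01`.* TO 'someuser'@'sma-station14l'
  IDENTIFIED BY 'xx';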
  Is there an easy way to rename a database?

-- 
James Moe
moe dot james at sohnen-moe dot com
520.743.3936
Think.





Mystery error in GRANT statement

2016-10-03 Thread James Moe
opensuse v42.1
linux 4.1.31-30-default x86_64
10.0.26-MariaDB

I have a database named "sma-v4-01". The GRANT statement does not like
that database name:

MariaDB [sma-v4-01]> GRANT ALL ON 'sma-v4-01'.* TO
''@'sma-station14l' IDENTIFIED BY 'xx';

ERROR 1064 (42000): You have an error in your SQL syntax; check the
manual that corresponds to your MariaDB server version for the right
syntax to use near ''sma-v4-01'.* TO 'x'@'sma-station14l'
IDENTIFIED BY 'xx'' at line 1

If I replace 'sma-v4-01' with 'sma_joomla', the statement is accepted.

What is wrong with 'sma-v4-01'?

-- 
James Moe
moe dot james at sohnen-moe dot com
520.743.3936
Think.









Re: When to create a new user?

2015-08-19 Thread James Moe

On 08/19/2015 05:29 AM, Richard Reina wrote:
> As new users sign up for the application should each get their own
> MySQL username and password, or is it okay to execute their queries
> with the same (one generic) MySQL username and password?
 
  That is rather vague.
  What data is stored for each user?
  What are these queries that a user may perform?

-- 
James Moe
moe dot james at sohnen-moe dot com
520.743.3936




Re: When to create a new user?

2015-08-19 Thread James Moe

On 08/19/2015 10:06 AM, Richard Reina wrote:
> Data stored for each user would be a list of places visited and
> details relating to those trips. The type of queries they would be
> able to perform would be to read, update and create new records.
 
  I see no reason to create a unique user account for this use case.
Each user's data goes into a single table, and a view based on the
user's ID would restrict data access for each user.
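
A minimal sketch of that layout (all names here are hypothetical, not
from the thread): one shared table keyed by user, plus an optional view
pinned to a single user's ID.

CREATE TABLE trips (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  user_id INT UNSIGNED NOT NULL,   -- the application's user, not a MySQL account
  place VARCHAR(100) NOT NULL,
  visited_on DATE,
  notes TEXT,
  KEY (user_id)
) ENGINE=InnoDB;

-- Every application query filters on the caller's id:
SELECT place, visited_on FROM trips WHERE user_id = 42;

-- Or, as a view restricting access to one user's rows:
CREATE VIEW trips_user42 AS
  SELECT place, visited_on, notes FROM trips WHERE user_id = 42;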

-- 
James Moe
moe dot james at sohnen-moe dot com
520.743.3936




RE: how to create unique key for long varchar?

2013-11-05 Thread Rick James
The odds of a spurious collision with MD5 (128 bits) can be phrased this way:

If you have 9 Trillion different items, there is one chance in 9 Trillion that 
two of them have the same MD5.

To phrase it another way: you are more likely to be hit by a meteor while
winning the mega-lottery.



 -Original Message-
 From: Dan Nelson [mailto:dnel...@allantgroup.com]
 Sent: Tuesday, November 05, 2013 7:56 AM
 To: Li Li
 Cc: mysql@lists.mysql.com
 Subject: Re: how to create unique key for long varchar?
 
 In the last episode (Nov 05), Li Li said:
  I want to create a table with a long varchar column, maybe it's the url.
  according to dns spec, the url's max length is fixed. but I have
  to deal with url having long params such as
  a.html?q=fl=
  I want the url to be unique when inserting it.
  I googled and found
  http://stackoverflow.com/questions/6800866/how-to-store-urls-in-mysql
  this post suggests use md5 of url. But in theory, there will be
  conflicts: two different urls can have the same md5 (even if the
  probability is very small). I want an absolutely correct solution.
  one method i can come up with is using select ... for update
  1. begin transaction
  2. select url from tb where md5='' for update
  3. if the url does not exist, insert this url; else do nothing
 
 It might be more efficient to optimize for the common case here.  The
 assumption is that an md5 (or sha1 or sha2) hash collision is extremely
 unlikely, so you could just insert your new row, and if you get a
 duplicate entry for primary key error, then you can select url from
  tb where md5='' , and compare the retrieved url with the one you
 want to insert.
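 
  A rough sketch of that flow (table and column names are assumptions,
  not from the thread; BINARY(16) keeps the unique key small):
 
  CREATE TABLE tb (
    md5 BINARY(16) NOT NULL PRIMARY KEY,
    url TEXT NOT NULL
  ) ENGINE=InnoDB;
 
  -- Common case: the insert simply succeeds.
  INSERT INTO tb (md5, url)
  VALUES (UNHEX(MD5('http://example.com/page')), 'http://example.com/page');
 
  -- On ERROR 1062 (duplicate key), fetch and compare before deciding:
  SELECT url FROM tb WHERE md5 = UNHEX(MD5('http://example.com/page'));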
 
 --
   Dan Nelson
   dnel...@allantgroup.com
 





RE: MySQL Community Server 5.7.2 has been released (part 1)

2013-10-24 Thread Rick James
There's an old saying: "If it ain't broke, don't fix it."

Why _might_ 5.6.x or 5.7.x be better for you?  Sure there might be some 
features you might want, might be some performance improvements that you might 
notice, etc.  And there might be some regressions that will bite you.  
Fortunately, regressions are rare.

You should probably upgrade to 5.6 soon, simply to avoid having to do a double 
upgrade when you eventually go to 5.7.

 -Original Message-
 From: h...@tbbs.net [mailto:h...@tbbs.net]
 Sent: Thursday, October 24, 2013 7:01 AM
 To: mysql@lists.mysql.com
 Subject: Re: MySQL Community Server 5.7.2 has been released (part 1)
 
 MySQL fans,
 
  2013/09/21 18:04 +0200, Bjorn Munch 
 MySQL Server 5.7.2 (Milestone Release) is a new version of the world's
 most popular open source database. This is the second public milestone
 release of MySQL 5.7.
 
 Is this a good replacement for that 5.5.8 that I long ago downloaded and
 installed? or is it better to go for a 5.6, or an older 5.7?
 
  2013/09/20 15:47 +0530, Sunanda Menon 
 MySQL Server 5.6.14, a new version of the popular Open Source Database
 Management System, has been released. MySQL 5.6.14 is recommended for
 use on production systems.
 
 Is this better for me than any 5.7?
 
 





RE: Problem with having

2013-09-25 Thread Rick James
Still more to this saga

Comment 1:
... HAVING x;
The expression ( x ) is evaluated as a true/false value, based on whether x is 
nonzero (true) or zero (false).  Your 'x' is  MIN(date_time) , which is very 
likely to be nonzero, hence TRUE.  That is, the HAVING does nothing useful.
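
For illustration (a sketch, not from the original thread):

... HAVING MIN(date_time);                 -- truth test: TRUE for any non-NULL, nonzero minimum
... HAVING MIN(date_time) > '2013-01-01';  -- an actual filter on the aggregate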

Comment 2:
This shows
1. a technique
2. how MariaDB optimizes it away, and
3. how you can get MariaDB to still do the group by trick:
https://mariadb.com/kb/en/group-by-trick-has-been-optimized-away/
(I do not know of other MySQL variants that have any trouble with the trick.)

 -Original Message-
 From: Larry Martell [mailto:larry.mart...@gmail.com]
 Sent: Tuesday, September 24, 2013 1:44 PM
 To: shawn green
 Cc: mysql mailing list
 Subject: Re: Problem with having
 
 On Tue, Sep 24, 2013 at 9:05 AM, shawn green
 shawn.l.gr...@oracle.com wrote:
 
  Hello Larry,
 
 
  On 9/23/2013 6:22 PM, Larry Martell wrote:
 
  On Mon, Sep 23, 2013 at 3:15 PM, shawn green
  shawn.l.gr...@oracle.com wrote:
 
   Hi Larry,
 
 
  On 9/23/2013 3:58 PM, Larry Martell wrote:
 
  On Mon, Sep 23, 2013 at 1:51 PM, Sukhjinder K. Narula
  narula...@gmail.com wrote:
 
Hi,
 
 
  In your second query, you seem to have MIN(date_time), but you are
  talking about maximum. So your group by query is actually pulling
  the minimum date for this recipe.
 
 
   I pasted the wrong query in. I get the same results regardless of whether
  I have MIN or MAX - I get the id of the max, but the date_time of the min.
 
 
 
  On Mon, Sep 23, 2013 at 3:35 PM, Larry Martell
  larry.mart...@gmail.com wrote:
 
 
   I want to find the rows from a table that have the max date_time for
  each recipe. I know I've done this before with group by and having,
  but I can't seem to get it to work now. I get the correct row id,
  but not the correct date_time. I'm sure I'm missing something
  simple.
 
  For purposes of showing an example, I'll use one recipe, 19166.
 
 
  For that recipe here's the row I would want:
 
  mysql> select id, MAX(date_time) from data_cstmeta where recipe_id = 19166;
  +---------+---------------------+
  | id      | MAX(date_time)      |
  +---------+---------------------+
  | 1151701 | 2013-02-07 18:38:13 |
  +---------+---------------------+
  1 row in set (0.01 sec)
 
  I would think this query would give me that - it gives me the
  correct id, but not the correct date_time:
 
  mysql> SELECT id, date_time as MaxDateTime FROM data_cstmeta
      -> where recipe_id = 19166 group by recipe_id HAVING MIN(date_time);
  +---------+---------------------+
  | id      | MaxDateTime         |
  +---------+---------------------+
  | 1151701 | 2010-12-13 16:16:55 |
  +---------+---------------------+
  1 row in set (0.01 sec)
 
  How can I fix this?
 
  Thanks!
  -larry
 
 
   You have to do a two-stage match. One stage to find the MAX() of a
  value for each recipe_id, the other to match that MAX() to one or more
  rows to give you the best ID values.
 
  Here's a subquery method of doing it.  There are many many others
  (google for "groupwise maximum")
 
  SELECT a.id, b.MaxDateTime
  FROM data_cstmeta a
  INNER JOIN (
   SELECT MAX(date_time) MaxDateTime
   FROM data_cstmeta
   WHERE recipe_id = 19166
  ) b
 on b.MaxDateTime = a.date_time
  WHERE recipe_id = 19166;
 
 
   Having the recipe_id in the query was just to show an example. I really
  want the id's with the max date for each recipe_id:
 
  This is what I changed it to, which works, but is too slow. I need to
  find a more efficient solution:
 
  SELECT d1.id, d1.date_time as MaxDateTime
  FROM data_cstmeta d1
  LEFT JOIN data_cstmeta d2 ON d1.recipe_id = d2.recipe_id AND
  d1.date_time < d2.date_time
  WHERE d2.recipe_id IS NULL
 
 
  As I said, there are many many ways to solve this problem. Here is one
  that is going to perform much better for the generic case than what
  you are doing.
 
  CREATE TEMPORARY TABLE tmpMaxDates (key(recipe_id,maxdatetime)) SELECT
  recipe_id, max(date_time) maxdatetime FROM data_cstmeta GROUP BY
  recipe_id;
 
  SELECT a.id, b.maxdatetime
  FROM data_cstmeta a
  INNER JOIN tmpMaxDates b
on a.recipe_id = b.recipe_id
and a.date_time = b.maxdatetime;
 
  DROP TEMPORARY TABLE tmpMaxDates;
 
 
  Of course, an appropriate multi-column index on data_cstmeta would
  also make your technique much faster than it is today.
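 
   For instance (an illustrative sketch, not from the original mail), the
   LEFT JOIN technique above is served well by:
 
   ALTER TABLE data_cstmeta ADD INDEX recipe_date (recipe_id, date_time);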
 
 
 Thanks much Shawn! This ran in a few seconds vs. 30 minutes for my
 solution.


RE: All command has no content, only yield x rows in set

2013-09-18 Thread Rick James
Check your ~/.my.cnf and other places where configuration might be causing the 
problem (such as pager).

 -Original Message-
 From: Radoulov, Dimitre [mailto:cichomit...@gmail.com]
 Sent: Wednesday, September 18, 2013 7:32 AM
 To: zxycscj; mysql
 Subject: Re: All command has no content, only yield x rows in set
 
 On 18/09/2013 16:17, zxycscj wrote:
  execute mysql command has no content. help [...]
 
  mysql> use ijdb;
  Database changed
  mysql> show tables;
  7 rows in set (0.00 sec)
 
 [...]
 
 I suppose that the mysql pager is set to a non-default value.
 On the mysql prompt try:
 
 pager
 repeat your command here ...
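 
  (A sketch following that advice; "nopager" is the explicit way to force
  output back to stdout:)
 
  mysql> pager
  mysql> nopager
  mysql> show tables;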
 
 Regards
 Dimitre
 





RE: MySQL version 3.23 to 5.x features

2013-09-05 Thread Rick James
Assuming that your goal is to migrate an old database, perhaps the things that 
will bite you the fastest:
 * TYPE=MyISAM --> ENGINE=MyISAM.
 * CHARACTER SETs -- no concept of them until 4.1.  Use DEFAULT CHARACTER SET=latin1
for now.  Later you can figure out how to migrate to utf8.  (Note that 5.5
defaults to utf8, which is likely to dislike your bytes.)

I would dump the data, then import into 5.x.  Upgrading step by step would be 
quite tedious.
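
A sketch of that path (file name hypothetical; run the dump with the old
server's tools, the load with the new one's):

  mysqldump --all-databases > dump-3.23.sql
  mysql < dump-3.23.sql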

Migration to InnoDB is also desirable, but not mandatory.

 -Original Message-
 From: Nagaraj S [mailto:nagaraj@gmail.com]
 Sent: Thursday, August 22, 2013 12:16 AM
 To: shawn green
 Cc: mysql@lists.mysql.com
 Subject: Re: MySQL version 3.23 to 5.x features
 
  wow, it really helped me a lot. I really thank Shawn, Dale and Jesper for
  their inputs
 
 
  On Wed, Aug 21, 2013 at 7:13 PM, shawn green
  shawn.l.gr...@oracle.com wrote:
 
  Hello Naga,
 
 
  On 8/21/2013 6:45 AM, Nagaraj S wrote:
 
  Hello,
 
  Can anyone share the features/comparison from MySQL version 3.23 to
  5.x in single document? I can get from Google, however I have to
  navigate different pages/sites, if it is in single document that will
  be useful to see the overview of mysql features
 
 
  While not exactly just one single page for all changes, there is a
  single page of the fine manual within each new major version that
  describes the big features that are new or changed within that version.
 
   http://dev.mysql.com/doc/refman/5.0/en/mysql-nutshell.html
   http://dev.mysql.com/doc/refman/5.1/en/mysql-nutshell.html
   http://dev.mysql.com/doc/refman/5.5/en/mysql-nutshell.html
   http://dev.mysql.com/doc/refman/5.6/en/mysql-nutshell.html
 
   Unfortunately, to learn what we changed between 3.23 -> 4.0 or between
   4.0 -> 4.1, you are going to need to review the change logs
   http://dev.mysql.com/doc/refman/4.1/en/news.html
 
 
  from
   http://dev.mysql.com/doc/refman/4.1/en/index.html
  
  This manual describes features that are not included in every edition
  of MySQL 3.23, MySQL 4.0, and MySQL 4.1; such features may not be
   included in the edition of MySQL 3.23, MySQL 4.0, or MySQL 4.1 licensed
   to you.
  
 
 
  I can pretty much summarize the deficiencies in 3.23 like this
  * No InnoDB, Archive, CSV, Federated, or Blackhole storage engines
  * No table partitioning
  * No Views
  * No Stored Procedures or Stored Functions
  * No Triggers
  * No Events
  * Severe scalability limits (won't run as fast with reasonably
  concurrent loads even on great hardware as later versions)
  * Completely out of print (unpublished) and unsupported.
  * Missing literally thousands of bug fixes and performance
  improvements
 
 
  Any new project should be starting out with 5.6.  Any production
  server should be on 5.5 or 5.6 by now or migrating soon. It is also a
  fairly safe bet that if you are still operating a 3.23 instance of
  MySQL that it is also time to upgrade your hardware.
 
  Regards,
  --
  Shawn Green
  MySQL Principal Technical Support Engineer Oracle USA, Inc. - Hardware
  and Software, Engineered to Work Together.
  Office: Blountville, TN
 
 
 




RE: Core Dump

2013-09-05 Thread Rick James
bugs.mysql.com


 -Original Message-
 From: Ben Clewett [mailto:b...@clewett.org.uk]
 Sent: Thursday, September 05, 2013 6:38 AM
 To: mysql@lists.mysql.com
 Subject: Core Dump
 
 Dear MySQL,
 
 Using 5.1.56, I have experienced this core dump.  Is there anybody out
 there qualified to give an opinion on this?
 
 Many thanks,
 
 Ben Clewett.
 
 Thread pointer: 0x7fd5280dbd90
 Attempting backtrace. You can use the following information to find out
 where mysqld died. If you see no messages after this, something went
 terribly wrong...
 stack_bottom = 0x7fd51edf8100 thread_stack 0x4
 /usr/sbin/mysqld(my_print_stacktrace+0x2e)[0x8b591e]
 /usr/sbin/mysqld(handle_segfault+0x351)[0x5e4ca1]
 /lib64/libpthread.so.0[0x7fda95f35a90]
 /usr/sbin/mysqld(my_hash_sort_simple+0x3d)[0x8c8a5d]
 /usr/sbin/mysqld(hp_hashnr+0x20a)[0x77b16a]
 /usr/sbin/mysqld(hp_search+0x66)[0x77c386]
 /usr/sbin/mysqld(heap_rnext+0x12f)[0x77eb9f]
 /usr/sbin/mysqld(_ZN7ha_heap10index_nextEPh+0x2d)[0x779fcd]
 /usr/sbin/mysqld(_ZN7handler15index_next_sameEPhPKhj+0x3a)[0x6d5e5a]
 /usr/sbin/mysqld(_ZN7handler21read_multi_range_nextEPP18st_key_multi_range
 +0x2f)[0x6d50af]
 /usr/sbin/mysqld(_ZN18QUICK_RANGE_SELECT8get_nextEv+0x14e)[0x6b73ae]
 /usr/sbin/mysqld[0x6d1246]
 /usr/sbin/mysqld(_Z12mysql_deleteP3THDP10TABLE_LISTP4ItemP10SQL_I_ListI8st
 _orderEyyb+0x904)[0x67a994]
 /usr/sbin/mysqld(_Z21mysql_execute_commandP3THD+0x36f7)[0x5f8a07]
 /usr/sbin/mysqld(_Z11mysql_parseP3THDPcjPPKc+0x3d0)[0x5faec0]
 /usr/sbin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x7b6)[
 0x5fb686]
 /usr/sbin/mysqld(_Z10do_commandP3THD+0xe6)[0x5fc716]
 /usr/sbin/mysqld(handle_one_connection+0x246)[0x5eed26]
 /lib64/libpthread.so.0[0x7fda95f2e070]
 /lib64/libc.so.6(clone+0x6d)[0x7fda953f213d]
 
 Trying to get some variables.
 Some pointers may be invalid and cause the dump to abort.
  Query (0x7fd521a122a0): is an invalid pointer
  Connection ID (thread ID): 47159998
 Status: NOT_KILLED
 
 





RE: MyISAM index missing rows

2013-08-12 Thread Rick James
"certain rows can no longer be found" -- Do CHECK TABLE.  (It will take a
long time.)  It may tell you to REPAIR TABLE, which will also take a long
time; but it will be necessary.  (This is a strong reason for going to InnoDB.
But it will be 2x-3x bigger on disk.)
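
A sketch of the sequence (the table is not named in the post below, so this
uses a placeholder):

  CHECK TABLE big_table;
  REPAIR TABLE big_table;   -- only if CHECK reports corruption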

 -Original Message-
 From: Dolan Antenucci [mailto:antenucc...@gmail.com]
 Sent: Monday, August 12, 2013 10:26 AM
 To: mysql@lists.mysql.com
 Subject: MyISAM index missing rows
 
 Hi Everyone,
 
 I have a MyISAM table with 25 billion rows (structure: id1 int, id2 int,
 score float), and after I create an index on id1, certain rows can no
 longer be found.
 
 I've posted a detailed summary of my problem at dba.stackexchange.com, but
 haven't had success with finding a solution thus far.  Here's the URL to
 that post:
 http://dba.stackexchange.com/questions/47906/mysql-myisam-index-causes-
 query-to-match-no-rows-indexes-disabled-rows-match
 
 As that post describes, one oddity is with an EXPLAIN I run on the same
 query with indexes enabled vs. disabled. When disabled, rows = 25
 billion; when enabled, rows = 170 million.  Based on this, I'm wondering
 if some restriction is causing only 170 million rows to index. (Of course,
 I could be completely misinterpreting this EXPLAIN result).
 
  Here is my server version: 5.5.31-0ubuntu0.12.04.2 (Ubuntu), which should
  also be 64-bit (I've verified by running "file /usr/sbin/mysqld", which says
  "ELF 64-bit ...")
 
 Any help is greatly appreciated!  Just let me know if you need more
 details
 
 Sincerely,
 
 Dolan Antenucci




RE: Concurrent read performance problems

2013-08-12 Thread Rick James
Please provide SHOW CREATE TABLE and SHOW TABLE STATUS for each table.
It smells like there is an inconsistency in the datatype of facts.accounts.id 
and what it is JOINing to.

Also provide the full SELECT.
How much RAM do you have?

 -Original Message-
 From: Johan De Meersman [mailto:vegiv...@tuxera.be]
 Sent: Sunday, August 11, 2013 2:16 PM
 To: Brad Heller
 Cc: Johnny Withers; MySQL General List
 Subject: Re: Concurrent read performance problems
 
 Good to hear. A word of warning, though: make sure you don't have more
 connections allocating those buffers than your machine can handle memory-
 wise, or you'll start swapping and performance will REALLY go down the
 drain.
 
 A query/index based solution would still be preferred. Could you for
 instance materialize that subselect and periodically refresh it? Other
 tricks may also be available.
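 
  For reference, a hedged sketch of the session-scoped bump Brad describes
  below (both variables must be raised together for in-memory temp tables
  to grow):
 
  SET SESSION max_heap_table_size = 512 * 1024 * 1024;
  SET SESSION tmp_table_size = 512 * 1024 * 1024;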
 
 Brad Heller b...@cloudability.com wrote:
 Johan, your suggestion to tweak max_heap_table_size and tmp_table_size
 fixed the issue. Bumping them both to 512MB got our performance back
 on-par. I came up with a way to avoid the contention using a complex
 set of temp tables, but performance was abysmal.
 
 By reverting to the more straight-forward query with the subselect as
  well as tweaking the max_heap_table_size and tmp_table_size I saw no
 resource contention causing slowdowns, as well as a 12x performance
 boost.
 Thanks
 for your help!
 
  *Brad Heller* | Engineering Lead | Cloudability.com | 541-231-1514 |
  Skype: brad.heller | @bradhe http://www.twitter.com/bradhe |
  @cloudability http://www.twitter.com/cloudability
 
 
 On Sun, Aug 11, 2013 at 1:32 PM, Johan De Meersman
 vegiv...@tuxera.bewrote:
 
   True, which is why I said I suspected file-based sort :-) At one million
   rows, that seems to be an accurate guess, too. Still on the phone, though,
   and in bed. I'll read the thread better tomorrow, but you might get some
   benefit from cutting out the subselect if that's possible.
 
   If you have plenty of memory, have a look at the max_temp_table_size and
   max_heap_table variables, too; those decide when the sort goes to disk.
 
 
  Johnny Withers joh...@pixelated.net wrote:
 
  Just because it says filesort doesn't mean it'll create a file on disk.
  Table schema and full query would be helpful here too
 
 
 http://www.mysqlperformanceblog.com/2009/03/05/what-does-using-filesort
 -mean-in-mysql/
  On Aug 11, 2013 1:28 PM, Brad Heller b...@cloudability.com wrote:
 
    Yes sorry, here's the explain. It was taken from MariaDB 5.5.32. Looks
    like there is a lot of filesort goin' on here. Also note that I'm only
    using the first two fields of the covering index (intentionally).
 
 
    *************************** 1. row ***************************
                id: 1
       select_type: PRIMARY
             table: <derived2>
              type: ALL
     possible_keys: NULL
               key: NULL
           key_len: NULL
               ref: NULL
              rows: 1004685
             Extra: Using temporary; Using filesort
    *************************** 2. row ***************************
                id: 2
       select_type: DERIVED
             table: accounts
              type: range
     possible_keys: PRIMARY,unique_account_identifiers
               key: unique_account_identifiers
           key_len: 257
               ref: NULL
              rows: 3
             Extra: Using where; Using index; Using temporary; Using filesort
    *************************** 3. row ***************************
                id: 2
       select_type: DERIVED
             table: facts
              type: ref
     possible_keys: covering
               key: covering
           key_len: 4
               ref: facts.accounts.id
              rows: 334895
             Extra: Using where
 
 
    *Brad Heller* | Engineering Lead | Cloudability.com | 541-231-1514 |
    Skype: brad.heller | @bradhe http://www.twitter.com/bradhe |
    @cloudability http://www.twitter.com/cloudability
 
 
    On Sun, Aug 11, 2013 at 8:45 AM, Johan De Meersman vegiv...@tuxera.be
    wrote:

     On my phone now, but it smells of file-based sorting, making disk
     access the bottleneck. Can you provide the explain?
 
 
   Brad Heller b...@cloudability.com wrote:
 
 
     Hey list, first time posting here so apologies if this is the wrong
     forum for this but I'm really out of options on how to solve this
     problem!
 
   *Short version:*
 
 
  

RE: Performance Improvements with VIEW

2013-07-30 Thread Rick James
VIEWs are not well optimized.  Avoid them.

The SlowLog will probably point to the worst query; we can help you improve it 
(SHOW CREATE TABLE; SHOW TABLE STATUS; EXPLAIN)

Only minutes to go through 10 million records?  Sounds good.  It takes time to 
shovel through that much stuff.

'Sending data' (etc) -- yeah, these states are useless information in my book.
They merely say you have a slow query.  'Sorting result' probably implies a
GROUP BY or ORDER BY.  It _may_ be possible to avoid the sort (when we review
the naughty query).

What kind of things are you doing?  If Data Warehouse 'reports', consider 
Summary Tables.  Non-trivial, but the 'minutes' will become 'seconds'.

 -Original Message-
 From: Bruce Ferrell [mailto:bferr...@baywinds.org]
 Sent: Tuesday, July 30, 2013 7:08 AM
 To: mysql@lists.mysql.com
 Subject: Re: Performance Improvements with VIEW
 
 
 On 07/30/2013 04:13 AM, Manivannan S. wrote:
  Hi,
 
  I've a table with 10 Million records in MySQL with INNODB engine. Using
 this table I am doing some calculations in STORED PROCEDURE and getting
 the results.
 
  In the Stored Procedure I used the base table and tried to process all the
 records in the table. But it's taking more than 15 Minutes to execute the
 procedure. When executing the Procedure in the process list I am getting 3
 states like 'Sending data', 'Sorting Result' and 'Sending data' again.
 
  Then I created one view by using  the base table and updated the
 procedure by replacing that view in the place of a base table, it took
 only 4 minutes to execute the procedure with a view. When executing the
 Procedure in the process list I am getting 2 states like 'Sorting Result'
 and 'Sending data'. The first state of 'Sending data' did not happen with
 the view; it started directly with the 'Sorting Result' state.
 
  When I'm reading some MySQL sites and other blogs, I have seen that
 VIEWS will never improve the performance. But here I see some improvements
 with a view.
 
  I would like to know how VIEW is improving the performance.
 
  Regards
  Manivannan S
 
 
 
 If you turn on your slow queries logs and activate log queries without
 indexes, I suspect you'll find your answer.
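 
  A sketch of those settings (5.1 option names; values are examples):
 
  [mysqld]
  slow_query_log                = 1
  slow_query_log_file           = /var/log/mysql/slow.log
  long_query_time               = 1
  log_queries_not_using_indexes = 1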
 
 





RE: Question regarding creating a query

2013-07-30 Thread Rick James
 I have to update the query every time.
Therein lies the difficulty with the schema design.

You could write a stored procedure to locate all the tables (use 
information_schema.TABLES, etc) and build the UNION, and finally execute it.  
The SP would have something very remotely like the foreach you suggested.
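
A rough, untested sketch of such an SP (it keeps the poster's names --
common.dbnames, tb1, tb2, flag1 -- everything else is an assumption):

DELIMITER //
CREATE PROCEDURE query_all_dbs()
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE dbname VARCHAR(64);
  DECLARE cur CURSOR FOR SELECT name FROM common.dbnames;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
  SET @sql = '';
  OPEN cur;
  read_loop: LOOP
    FETCH cur INTO dbname;
    IF done THEN LEAVE read_loop; END IF;
    IF @sql <> '' THEN SET @sql = CONCAT(@sql, ' UNION '); END IF;
    -- Build one SELECT per database, prefixing each value with the db name:
    SET @sql = CONCAT(@sql,
      'SELECT CONCAT(''', dbname, ''', ''-'', a) AS a FROM `', dbname,
      '`.tb1 WHERE (SELECT flag1 FROM `', dbname, '`.tb2) = ''y''');
  END LOOP;
  CLOSE cur;
  PREPARE stmt FROM @sql;     -- execute the whole UNION at once
  EXECUTE stmt;
  DEALLOCATE PREPARE stmt;
END //
DELIMITER ;

CALL query_all_dbs();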

 -Original Message-
 From: Sukhjinder K. Narula [mailto:narula...@gmail.com]
 Sent: Tuesday, July 30, 2013 11:13 AM
 To: mysql@lists.mysql.com
 Subject: Question regarding creating a query
 
 Hello,
 
 I have a question regarding creating a query as follows:
 
  I have several databases (all with the same structure), which I want to
  query. For instance:
 
 db1, db2, db3 - all have table tb1 with field a, b and table tb2 with
 fields flag1, flag2
 
  So I want to query and get field a from tb1 for all db's. One way to do it
  is a union, i.e.
 
 SELECT a FROM db1.tb1 WHERE (SELECT flag1 FROM db1.tb2) = 'y'
 UNION
 SELECT a FROM db2.tb1 WHERE (SELECT flag1 FROM db2.tb2) = 'y'
 UNION
 SELECT a FROM db3.tb1 WHERE (SELECT flag1 FROM db3.tb2) = 'y'
 
 But the problem here is that if I add more db's, I have to update the
 query every time.
 
 In addition to above, I also have a database e.g. common, which has a
 table called dbnames with field name, that keeps the name of all the
 databases I have (db1, db2, db3).
 
 So, what I would like to do is query the common db to get the names of the
 db's and then run the select query on each db.
 
 So here is the pseudocode of what I want to do:
 
 
 for each (SELECT name AS DbName FROM common.dbnames)
 
 (SELECT a FROM DbName.tb1 WHERE (SELECT flag1 FROM DbName.tb2) = 'y')  AS
 CONCAT(DbName, '-', a)
 
 
  Could you please advise if this is possible and if yes, how this can be
  achieved.
 
 Many Thanks,
 
 SK




RE: hypothetical question about data storage

2013-07-29 Thread Rick James
Most RAID controllers will happily do Elevator stuff like you mentioned.
So will Linux.

For MySQL + RAID, a Linux elevator strategy of 'deadline' or 'noop' is optimal. 
 (The default, 'cfq', is not as good.)

A RAID controller with multiple drives striped (and optionally parity-checked) 
(RAID-5, -10) and with a BBU (Battery Backed Write Cache) is excellent for I/O.

I don't know about "chronologically later".  InnoDB does the right thing, as
long as the OS does not cheat on fsync, etc.

 1/10/10A/10aa342
Only 16 subdirectories per directory?  I would expect 256 to be more efficient 
overall.  This is because of fewer levels.  Scanning 256 is probably less 
costly than doing an extra level.  (Yeah, again, I can't _prove_ it in _your_ 
environment.)

4K tables on a single machine -- that is beginning to get into 'big' in 
reference to ulimit, table_open_cache, etc.  That is, if you went much past 
that, you would be getting into new areas of inefficiency.

I do not like splitting a database table into multiple tables, except by
PARTITIONing.  PARTITIONing would also provide an 'instantaneous' way of purging
old data.  (DROP PARTITION + REORGANIZE PARTITION)
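
A sketch of that purge pattern (range-partitioned by month; all names here
are hypothetical):

  ALTER TABLE files DROP PARTITION p2013_01;          -- instant purge of the oldest month
  ALTER TABLE files REORGANIZE PARTITION pFuture INTO -- open up the next month
    (PARTITION p2013_08 VALUES LESS THAN (TO_DAYS('2013-09-01')),
     PARTITION pFuture  VALUES LESS THAN MAXVALUE);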

Almost always (again no proof for your case), a single table is more efficient
than many tables.  This applies to PARTITIONing, too, but there can be
other gains by using PARTITIONing.

InnoDB has a 64TB limit per PARTITION.

 -Original Message-
 From: william drescher [mailto:will...@techservsys.com]
 Sent: Saturday, July 27, 2013 4:32 AM
 To: mysql@lists.mysql.com
 Subject: Re: hypothetical question about data storage
 
 On 7/26/2013 6:58 PM, Chris Knipe wrote:
  The issue that we have identified is caused by seek time - hundreds of
  clients simultaneously searching for a single file.  The only real way
  to explain this is to run 100 concurrent instances of bonnie++ doing
  random read/writes... Your disk utilization and disk latency
  essentially goes through the roof resulting in IO wait and insanely
  high load averages (we've seen it spike to over 150 on a 8-core Xeon -
  at which time the application (at a 40 load average already) stops
  processing requests to prevent the server crashing).
 
 back in the day (many years ago) when I worked for IBM we had disk
 controllers that would queue and sort pending reads so that the heads
 would seek from low tracks across the disk to high tracks and then back to
 low. This resulted in very low seek _averages_.
 The controller was smart enough to make sure that if a write occurred,
 chronologically later reads got the right data, even if it had not been
 physically written to disk yet.
 
 Is there such a controller available now?
 
 bill
 
 





RE: hypothetical question about data storage

2013-07-29 Thread Rick James
Elevator...  If the RAID _controller_ does the Elevator stuff, any OS 
optimizations are wasted.
And there have been benchmarks backing that up.  (Sorry, don't have any links 
handy.)

RAID 5/10 ...  The testing I have done shows very little difference.  However, 
you can slant the conclusion by picking one versus the other of:
For a given amount of disk space... RAID-X is better than Y.
For a given number of drives... RAID-Y is better than X.

When writing a random block, RAID-5 does not need to touch all the drives, only
the target data drive and the one with parity.  Suitable XORs will update the
parity correctly.  So, a write hits 2 drives, whether you have RAID-5 or -10.

Some people make the chunk size 64KB (etc); not 512B.  With the Controller 
involved, there is not necessarily any benefit for large vs small chunk size.  
Writes are delayed until it is optimal.  This leads to large streaming
writes to each drive, regardless of chunk size (when writing a large stream).

A heavily used InnoDB system will be writing random 16KB blocks.

(I have no insight into RAID-6.)

 -Original Message-
 From: Johan De Meersman [mailto:vegiv...@tuxera.be]
 Sent: Monday, July 29, 2013 3:38 PM
 To: Rick James; will...@techservsys.com; mysql@lists.mysql.com
 Subject: RE: hypothetical question about data storage
 
 Rick James rja...@yahoo-inc.com wrote:
 
 For MySQL + RAID, a Linux elevator strategy of 'deadline' or 'noop' is
 optimal.  (The default, 'cfq', is not as good.)
 
 I should look into those again at some point. Do you have a brief word as
 to why they're better?
 
 
 A RAID controller with multiple drives striped (and optionally
 parity-checked) (RAID-5, -10) and with a BBU (Battery Backed Write
 Cache) is excellent for I/O.
 
 Very true. 10 is traditionally considered better - it's certainly faster -
 but 5 is of course cheaper :-)
 
 I'd like to add that 4+1 is the optimal configuration for RAID5 , as that
 makes for a stripe of 2kb, assuming 512b sectors of course. You then pick
 an fs that supports blocks of that size , which means that no write will
 ever need to perform a read first to calculate the checksum.
 
 
 
 
 --
 Sent from my Android phone with K-9 Mail. Please excuse my brevity.


RE: hypothetical question about data storage

2013-07-26 Thread Rick James
Count the disk hits

If you have a filesystem directory, consider that it is designed to handle 
small numbers of files per directory.  Consider that there is a limited cache 
for directories, etc.  Plus there is the inode (vnode, whatever) storage for 
each file.  I don't know the details (and it varies wildly with filesystem 
(ext, xfs, zfs, etc)).

Looking at InnoDB...

Let's say you have a billion rows in a single table, and you need to fetch one 
row by the PRIMARY KEY, and it is a MD5 (sha-1, UUID, etc).  Such a key is 
_very_ random.

A billion rows would need about 5 levels of BTree.  The top levels would 
quickly all be cached.  (100K blocks * 16KB = 1.6GB.)  If the leaf nodes add up
to 200GB, that is probably bigger than you innodb_buffer_pool_size.  In that 
case, a _random_ fetch is likely to be a cache miss.

A cache miss is about 100ms on normal rotating-media; perhaps 10ms on SSDs.  
This limits your reads to 10 (or 100) per second.

If you have big BLOBs in the table, then it gets messier.  InnoDB does not put 
more than 8K of a row in the actual 16KB block.  The rest is stored in another 
block(s).  So, it is likely to take an extra disk hit (200ms/20ms).

If your data size is 100 times as big as your buffer pool, then it becomes 
likely that the next level of the BTree won't be fully cacheable.  Now 
300ms/30ms.

I think it is likely that the small number of disk hits for InnoDB is better 
than the many disk hits for traversing a directory tree (with large 
directories) in the filesystem.  I vote for InnoDB over the directory tree.

Yes, you will have seeks.

No, adding more RAM won't help much.  Here's an argument:
Suppose your data is 20 times as big as the buffer pool and you are doing 
random fetches (MD5, etc).  Then 1/20 of fetches are cached; 95% cache miss.  
Estimated time: 0.95 * 100ms = 95ms.
Now you double your RAM.  1/10 cached -> 90% cache miss -> 90ms average -> Not
much improvement over 95.

 -Original Message-
 From: ckn...@savage.za.org [mailto:ckn...@savage.za.org] On Behalf Of
 Chris Knipe
 Sent: Friday, July 26, 2013 12:30 AM
 To: Johan De Meersman
 Cc: mysql
 Subject: Re: hypothetical question about data storage
 
 Hi All,
 
 Thanks for the responces, and I do concur.  I was taking a stab in the
 dark so to speak.
 
 We are working with our hosting providers currently and will be
 introducing a multitude of small iSCSI SANs to split the storage
 structure over a multitude of disks...   This is something that needs
 to be addressed from a systems perspective rather than an architectural
 one.
 
 SSD (or Fusion and the like) are unfortunately still way to expensive for
 the capacity that we require (good couple of TBs) - so mechanical disks it
 would need to be.  However, with the use of SANs as we hope, we should be
 able to go up from 4 to over 64 spindles whilst still being able to share
 the storage and have redundancy.
 
 Many thanks for the inputs and feedbacks...
 
 --
 C
 
 
 On Fri, Jul 26, 2013 at 9:23 AM, Johan De Meersman vegiv...@tuxera.be
 wrote:
  Hey Chris,
 
  I'm afraid that this is not what databases are for, and the first thing
 you'll likely run into is amount of concurrent connections.
 
  This is typically something you should really tackle from a systems
 perspective. Seek times are dramatically improved on SSD or similar
 storage - think FusionIO cards, but there's also a couple of vendors
 (Violin comes to mind) who provide full-blown SSD SANs.
 
  If you prefer staying with spinning disks, you could still improve the
 seeks by focusing on the inner cylinders and potentially by using variable
 sector formatting. Again, there's SANs that do this for you.
 
  Another minor trick is to turn off access timestamp updates when you
 mount the filesystem (noatime).
 
  Also benchmark different filesystems, there's major differences between
 them. I've heard XFS being recommended, but I've never needed to benchmark
 for seek times myself. We're using IBM's commercial GPFS here, which is
 good with enormous amounts of huge files (media farm here), not sure how
 it'd fare with smaller files.
 
  Hope that helps,
  Johan
 
  - Original Message -
  From: Chris Knipe sav...@savage.za.org
  To: mysql@lists.mysql.com
  Sent: Thursday, 25 July, 2013 11:53:53 PM
  Subject: hypothetical question about data storage
 
  Hi all,
 
  We run a VERY IO-intensive file application service.  Currently, our
  problem is that our disk spindles are being completely killed due to
  insufficient SEEK time on the hard drives (NOT physical read/write
  speeds).
 
  We have an directory structure where the files are stored based on
  the MD5 checksum of the file name, i.e.
  /0/00/000/44533779fce5cf3497f87de1d060
  The majority of these files, are between 256K and 800K with the ODD
  exception (say less than 15%) being more than 1M but no more than 5M
  in size.  The content of the files are pure text (MIME Encoded).
 
  We believe that storing these 

RE: From DATE_FORMAT and back to origin date in mysql date column

2013-07-24 Thread Rick James
I'm unclear on your task, but maybe this function will help:
http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_str-to-date

(It is confusing to have dato as both a column name and an alias.)
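
For instance (a sketch; the format string matches the one in the query below):

  SELECT STR_TO_DATE('24/7/2013', '%e/%c/%Y');   -- returns DATE 2013-07-24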

 -Original Message-
 From: Karl-Arne Gjersøyen [mailto:karlar...@gmail.com]
 Sent: Wednesday, July 24, 2013 7:02 AM
 To: MySQL Mailinglist
 Subject: From DATE_FORMAT and back to origin date in mysql date column
 
  SELECT DATE_FORMAT(dato, '%e/%c/%Y') AS dato FROM transportdokument WHERE
  dato >= '2013-07-20' AND dato <= '2013-07-24' GROUP BY dato DESC is
  working perfectly in my PHP file.
 
  But I need to convert the date back from my Norwegian-formatted date to
  the original date format in WHERE dato = '$standard_date_format';
 
  What do I need to do to fix this?
 
  Thanks for your time and help in teaching me programming!
 
 Karl




RE: Replication question

2013-07-24 Thread Rick James
 4) 3 tables from the slaves are to be replicated back to the master
NO.

However, consider Percona XtraDb Cluster or MariaDB+Galera.  They allow 
multiple writable masters.  But they won't let you be so selective about tables 
not being replicated.
Here are the gotchas for Galera usage:
http://mysql.rjweb.org/doc.php/galera
If you can live with them (plus replicating everything), it may be best for you.
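
For the ID-collision part of the question below, the usual knobs are these
(a sketch; the values are examples, with a distinct offset per writable
server):

  [mysqld]
  auto_increment_increment = 10   # same on every server
  auto_increment_offset    = 3    # unique per server, 1..increment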

 -Original Message-
 From: rich gray [mailto:r...@richgray.com]
 Sent: Wednesday, July 24, 2013 8:21 AM
 To: mysql@lists.mysql.com
 Subject: Replication question
 
  I have been asked to set up multiple database replication, which I have
  done before for simple cases; however, there are some nuances with this
  instance that add some complexity and I'd like to hear your collective
  expertise on this proposed scenario:-
 
 1) Single master database
 2) n (probably 3 to start with) number of slave databases
 3) All but 5 tables (123 tables in total) are to be replicated from the
 master to all the slaves
 4) 3 tables from the slaves are to be replicated back to the master
 
 It is mainly item 4) that concerns me - the primary ID's are almost
 certain to collide unless I seed the auto increment ID to partition the
 IDs into separate ranges or does MySQL handle this issue?
 There are some foreign keys on one of the 3 slave to master tables but
 they are pointing at some extremely static tables that are very unlikely
 to change.
 
 Is the above a feasible implementation...?
 
 Thanks in advance for any advice/pointers!
 
 Rich
 
 
 





RE: InnoDB problem.

2013-07-23 Thread Rick James
Did you change innodb_log_file_size?

 -Original Message-
 From: Johan De Meersman [mailto:vegiv...@tuxera.be]
 Sent: Tuesday, July 23, 2013 9:57 AM
 To: Luis H. Forchesatto; mysql list
 Subject: Re: InnoDB problem.
 
 Eek.
 
 No immediate clue here, but maybe someone else does - so please keep the
 list in CC at all times :-p
 
 Random question: were the files backed up from a different version? I'd
 expect some kind of warning about that in the logs, really, but you never
 know.
 
 - Original Message -
 
  From: Luis H. Forchesatto luisforchesa...@gmail.com
  To: Johan De Meersman vegiv...@tuxera.be
  Sent: Tuesday, 23 July, 2013 6:34:47 PM
  Subject: Re: InnoDB problem.
 
  The error log:
 
  130723 10:04:23 [ERROR] Plugin 'InnoDB' init function returned error.
  130723 10:04:23 [ERROR] Plugin 'InnoDB' registration as a STORAGE
  ENGINE failed.
  130723 10:04:23 [Note] Event Scheduler: Loaded 0 events
  130723 10:04:23 [Note] C:\xampp\mysql\bin\mysqld.exe: ready for
  connections.
  Version: '5.1.41' socket: '' port: 3306 Source distribution
 
  2013/7/23 Johan De Meersman  vegiv...@tuxera.be 
 
  --
 
  Att.
 
  Luis H. Forchesatto
  http://br.linkedin.com/pub/luis-henrique-forchesatto/73/684/a67
 
 --
 
 Unhappiness is discouraged and will be corrected with kitten pictures.


RE: InnoDB problem.

2013-07-23 Thread Rick James
Either change it back, or delete the log files so that they will be built in 
the new size.  (Backup the entire tree, just in case.)

From: Manuel Arostegui [mailto:man...@tuenti.com]
Sent: Tuesday, July 23, 2013 1:05 PM
To: Rick James
Cc: Johan De Meersman; Luis H. Forchesatto; mysql list
Subject: Re: InnoDB problem.



2013/7/23 Rick James rja...@yahoo-inc.com
Did you change innodb_log_file_size?

innodb_log_file_size error always appears in the logs...he only posted a few 
lines of his log...but I guess (or I want to believe) he's gone through the 
whole log before starting the thread :-)




Manuel





 -Original Message-
 From: Johan De Meersman [mailto:vegiv...@tuxera.be]
 Sent: Tuesday, July 23, 2013 9:57 AM
 To: Luis H. Forchesatto; mysql list
 Subject: Re: InnoDB problem.

 Eek.

 No immediate clue here, but maybe someone else does - so please keep the
 list in CC at all times :-p

 Random question: were the files backed up from a different version? I'd
 expect some kind of warning about that in the logs, really, but you never
 know.

 - Original Message -

  From: Luis H. Forchesatto luisforchesa...@gmail.com
  To: Johan De Meersman vegiv...@tuxera.be
  Sent: Tuesday, 23 July, 2013 6:34:47 PM
  Subject: Re: InnoDB problem.

  The error log:

  130723 10:04:23 [ERROR] Plugin 'InnoDB' init function returned error.
  130723 10:04:23 [ERROR] Plugin 'InnoDB' registration as a STORAGE
  ENGINE failed.
  130723 10:04:23 [Note] Event Scheduler: Loaded 0 events
  130723 10:04:23 [Note] C:\xampp\mysql\bin\mysqld.exe: ready for
  connections.
  Version: '5.1.41' socket: '' port: 3306 Source distribution

  2013/7/23 Johan De Meersman vegiv...@tuxera.be
  

  --

  Att.

  Luis H. Forchesatto
  http://br.linkedin.com/pub/luis-henrique-forchesatto/73/684/a67

 --

 Unhappiness is discouraged and will be corrected with kitten pictures.



--
Manuel Aróstegui
Systems Team
tuenti.com


RE: Mysql cache issues???

2013-07-22 Thread Rick James
For most, not all, production servers, these two are the 'right' settings:
   query_cache_type = OFF
   query_cache_size = 0
Both are needed to avoid some code paths from being unnecessarily followed.  
(Maybe someday, that will be fixed, too.)

I recommend only 50M as the max for _size.

Here are some metrics to look at to see if the QC is worth having.  (Of course, 
you have to run with it ON or DEMAND for a while to get values for these.)

Qcache_free_memory / query_cache_size -- good value..bad value: 0%,100%
 -- Meaning: Pct Query Cache free -- What to do if 'bad': lower query_cache_size
Qcache_lowmem_prunes / Uptime -- good value..bad value: 0,15
 -- Meaning: Query Cache spilling -- What to do if 'bad': increase 
query_cache_size
Qcache_not_cached / Uptime -- good value..bad value: 0,80
 -- Meaning: SQL_CACHE attempted, but ignored -- What to do if 'bad': Rethink 
caching; tune qcache
Qcache_free_blocks * 4096 / query_cache_size -- good value..bad value: 0,1
 -- Meaning: Fragmentation in qcache -- What to do if 'bad': decrease 
query_cache_min_res_unit
Qcache_hits / Qcache_inserts -- good value..bad value: 10,1
 -- Meaning: Hit to insert ratio -- high is good
Qcache_hits / (Qcache_hits + Com_select) -- good value..bad value: 100%,25%
 -- Meaning: Hit ratio -- What to do if 'bad': Use _type=DEMAND and use 
SELECT SQL_NO_CACHE more often
Qcache_total_blocks * query_cache_min_res_unit / Qcache_queries_in_cache
 -- Meaning: estimate of query size -- What to do if 'bad': adjust 
query_cache_min_res_unit
Qcache_queries_in_cache
 -- Meaning: Queries cached
(Com_select + Qcache_hits) / (Com_insert + Com_update + Com_delete + 
Com_replace)
 -- Meaning: Read to write ratio
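
To pull the raw counters behind these ratios (a sketch):

  SHOW GLOBAL STATUS LIKE 'Qcache%';
  SHOW GLOBAL STATUS WHERE Variable_name IN ('Com_select', 'Uptime');
  SHOW GLOBAL VARIABLES LIKE 'query_cache%';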

 -Original Message-
 From: Johan De Meersman [mailto:vegiv...@tuxera.be]
 Sent: Monday, July 15, 2013 11:53 PM
 To: shawn green; mysql@lists.mysql.com
 Subject: Re: Mysql cache issues???
 
 Shawn,
 
  I can't help but wonder whether that first paragraph means there are
  concrete plans to redo the qc?
 
 
 shawn green shawn.l.gr...@oracle.com wrote:
 Hello Egoitz,
 
 On 7/15/2013 1:35 PM, Egoitz Aurrekoetxea wrote:
 
  On 15/07/13 17:27, Reindl Harald wrote:
 
 ... snip...
  i would say my caches are working perfectly (not only the mysql
 cache, also opcache etc.) since we have generate times down to
  0.006 seconds for a typical CMS page here which runs in more than
  200 installations on the main machine; at high load mysqld is never
 the problem
 
  without the query cache the overall performance drops by 30-40%
 
 
 
  Hi,
 
  The query cache hit rate is near 90% so I assume it's doing all
  properly... now I'm using 1GB as cache but... I will do some
  tries... till I see some significant behavior either due to success or
  failure... I was basically wondering what you thought about the
  performance penalty due to the mysql cache... just that...
 
  Thank you very much then
  ... signature snipped ...
 
 
 Until we redesign the query cache, those stalls will remain. It is
unwise to keep so many sets of query results around if they are not
 actually being used.
 
 As has been covered already, the freeze required to perform the purge
 of all results associated with a specific table can at times be
 extended (durations of 20-30 minutes are not unusual with cache sizes
around 1GB). What you may find is that even if some of your results are
reused frequently for a short period of time, they are not reused at all
 beyond a certain moment. This means you have hundreds or thousands of
 sets of query results sitting idle in your cache.  Reduce the size of
 your cache until you start to see your reuse rate or efficiency rate
 decline significantly. You may be surprised how small that is for your
 workload.
 
 To achieve scalability: customize your cache structures to your
 workload (this may mean caching the results somewhere other than
 MySQL), optimize your tables for efficient storage and retrieval, and
 optimize your queries to be as efficient as practical. There are other
 scalability options such as replication and sharding that can also be
 introduced into your production environment to reduce the cost of
 computation on each copy (or portion) of your data. However, this is a
 topic best handled in a separate thread.
 
 --
 Sent from Kaiten Mail. Please excuse my brevity.
 





RE: Need query to determine different column definitions across tables

2013-07-09 Thread Rick James
Another flavor to try:

SELECT  COLUMN_NAME,
group_concat(db_tbl SEPARATOR ' ') as db_tbls,
group_concat(DISTINCT info SEPARATOR ' | ') as infos
FROM (
SELECT  COLUMN_NAME,
concat(TABLE_SCHEMA, '.', TABLE_NAME) as db_tbl,
concat(COLUMN_TYPE, ' ', CHARACTER_SET_NAME) as info
FROM  `COLUMNS`
WHERE TABLE_SCHEMA = 'test'  -- optionally restrict to a db
 ) x
GROUP BY COLUMN_NAME
HAVING infos LIKE '%|%';

Notice how it uses GROUP_CONCAT() and HAVING to do the filtering.

 -Original Message-
 From: Daevid Vincent [mailto:dae...@daevid.com]
 Sent: Monday, July 08, 2013 7:57 PM
 To: mysql@lists.mysql.com
 Subject: RE: Need query to determine different column definitions across
 tables
 
 
 
  -Original Message-
  From: Daevid Vincent [mailto:dae...@daevid.com]
  Sent: Monday, July 08, 2013 2:11 PM
  To: mysql@lists.mysql.com
  Subject: Need query to determine different column definitions across
 tables
 
  I'm noticing that across our several databases and hundreds of tables
  that column definitions are not consistent. I'm wondering if there is
  a tool or query (using INFORMATION_SCHEMA perhaps) that will show me
  all databases, tables and columns where they don't match (by column
 name).
 
   For example in one table `foo_id` might be UNSIGNED and in others it
   is not. Or maybe it's INT(11) in some and INT(10) or worse MEDIUMINT in
   others.
  Or extending further Charset/Collation might mismatch and be that
  stupid latin1_swedish_ci and fixed to be utf8 in others.
 
  Stuff like that. I want to see everything where there is some
 difference.
 
 Well, here's the query I'm using currently. Will post updates as I tweak
 it.
 
 USE `information_schema`;
 
 SELECT
 t1.`COLUMN_NAME`,
 t1.`TABLE_NAME`,
 t1.`COLUMN_TYPE`,
 -- CONCAT(t1.`TABLE_NAME`,'.',t1.`COLUMN_TYPE`) as t1_type,
 t2.`TABLE_NAME`,
 t2.`COLUMN_TYPE`
 -- CONCAT(t2.`TABLE_NAME`,'.',t2.`COLUMN_TYPE`) AS t2_type FROM
 `COLUMNS` AS t1
 LEFT JOIN `COLUMNS` AS t2
  ON t1.`COLUMN_NAME` = t2.`COLUMN_NAME`
 AND t1.`COLUMN_TYPE` <> t2.`COLUMN_TYPE`
 WHERE t1.`TABLE_SCHEMA` = 'mydatabase'
   AND t2.`TABLE_NAME` IS NOT NULL
 -- HAVING t2_type IS NOT NULL
 ORDER BY `COLUMN_NAME` ASC;
 
 Having separate columns there is easier to read/compare than CONCAT() I
 think.
 
 Another bulk version that comes in handy:
 
 SELECT `COLUMN_NAME`, `COLUMN_TYPE`, `TABLE_SCHEMA`,
   `TABLE_NAME`, `CHARACTER_SET_NAME`, `COLLATION_NAME`
  FROM   `COLUMNS` WHERE `TABLE_SCHEMA` = 'mydatabase'
 ORDER BY `COLUMN_NAME`;
 
 
 





RE: how to get the levels of a table or a index in Mysql 5.6?

2013-07-08 Thread Rick James
Or maybe the number of levels in the BTree?

Rule of Thumb:  logarithm base 100
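
Worked example of the rule: with roughly 100 keys per 16KB block, a BTree
over N rows needs about log-base-100 of N levels, so a billion rows gives
log100(10^9) = 9/2 = 4.5, i.e. about 5 levels.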

 -Original Message-
 From: Hartmut Holzgraefe [mailto:hart...@skysql.com]
 Sent: Monday, July 08, 2013 6:38 AM
 To: mysql@lists.mysql.com
 Subject: Re: how to get the levels of a table or a index in Mysql 5.6?
 
 On 08.07.2013 04:23, 李炜(平安科技数据库技术支持部) wrote:
 
  how to get the levels of a table or a index in Mysql 5.6?
 
  "Level"? What is "level" supposed to be in that context?
  Cardinality? Or something completely different?
 
 /me confused ...
 
 --
 Hartmut Holzgraefe hart...@skysql.com
 Principal Support Engineer (EMEA)
 SkySQL AB - http://www.skysql.com/
 



RE: Need query to determine different column definitions across tables

2013-07-08 Thread Rick James
See if you like this:
SELECT  TABLE_SCHEMA as db, TABLE_NAME, COLUMN_NAME,
CHARACTER_SET_NAME, COLUMN_TYPE
FROM  `COLUMNS`
ORDER BY  3,4,5;

You might be able to embellish on it to filter out the consistent definitions, etc.

 -Original Message-
 From: Daevid Vincent [mailto:dae...@daevid.com]
 Sent: Monday, July 08, 2013 2:11 PM
 To: mysql@lists.mysql.com
 Subject: Need query to determine different column definitions across
 tables
 
 I'm noticing that across our several databases and hundreds of tables that
 column definitions are not consistent. I'm wondering if there is a tool or
 query (using INFORMATION_SCHEMA perhaps) that will show me all databases,
 tables and columns where they don't match (by column name).
 
  For example in one table `foo_id` might be UNSIGNED and in others it is
  not. Or maybe it's INT(11) in some and INT(10) or worse MEDIUMINT in
  others.
 Or extending further Charset/Collation might mismatch and be that stupid
 latin1_swedish_ci and fixed to be utf8 in others.
 
 Stuff like that. I want to see everything where there is some difference.




RE: database perfomance worries

2013-07-04 Thread Rick James
Set innodb_buffer_pool_size to 70% of _available_ ram.  That may be 11G on your 
16GB machine, unless you have a lot of other bulky stuff there.  Do _not_ make 
it so large that it leads to swapping.  Swapping is much worse on performance 
than shrinking the buffer_pool.
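
A sketch of the corresponding my.cnf entry (the value follows the
70%-of-16GB estimate above; 5.1 needs a restart for it to take effect):

  [mysqld]
  innodb_buffer_pool_size = 11G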

36 seconds for a single-row UPDATE using the PRIMARY KEY -- Something else 
_must_ have been interfering.  DELETE was suggested; ALTER is another 
possibility.  Even with a totally cold cache, that UPDATE should have taken 
much less than one second.  I suspect the problem will not recur.

 KEY `status` (`status`),
That index will probably never be used, due to low cardinality.  Either DROP 
it, or make it 'compound'.

 `flags` varchar(10) DEFAULT NULL COMMENT 'pipe-separated flags',
Consider the SET datatype.
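
Hedged sketches of the two suggestions above (the second index column and
the SET members are placeholders; the real flag values are not shown in
the thread):

  ALTER TABLE agent DROP KEY status,
                    ADD KEY status_parent (status, parent_acnt);
  ALTER TABLE agent MODIFY flags SET('flag1','flag2','flag3')
                    NOT NULL DEFAULT '';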

5.6 has some performance improvements, but not related to this query.

Please have the slowlog turned on.  There could be extra, useful, info in it.


 -Original Message-
 From: spameden [mailto:spame...@gmail.com]
 Sent: Tuesday, July 02, 2013 7:28 PM
 To: Singer Wang
 Cc: Andy Wallace; mysql list
 Subject: Re: database perfomance worries
 
 
   We are on a quest to improve the overall performance of our database.
   It's generally working pretty well, but we periodically get big slowdowns
   for no apparent reason. A prime example today - in the command line
   interface to the DB, I tried to update one record, and got:
  
   mysql> update agent set number_of_emails = 5 where acnt =
 'AR287416';
   Query OK, 1 row affected (36.35 sec)
   Rows matched: 1  Changed: 1  Warnings: 0
  
   36 seconds to update one table? The primary key is `acnt`. If I run
   the same (basic)
 
 
 Check if there is some DELETE running for the selected table.
 
 If there is a DELETE involving whole table it might be locking up
 database.
 
 Look into mysql-slow.log
 
 Try to optimize your application queries with EXPLAIN.
 
 
 [!!] InnoDB data size / buffer pool: 7.8G/5.5G Variables to adjust:
 innodb_buffer_pool_size (>= 7G)
 
 2 GB innodb_buffer_pool is a joke for a dataset of 33 GB
 
 that leads in permanently I/O on mixed load at the chances are high
 that there are times where nothing needed to operate is in the
 buffer_pool and on concurrent load mysqld ends in repeatly swap data in
 and out of the pool
 
 at least all repeatly accessed tables should fit permanently in the
 buffer
 
 it depends on the load and how much data you're acquiring.
 
 if you have 33GB in total, but only using few same tables in total size of
 less than 2GB at the same time it would work just fine.
 
 for example I have 136GB of data, but my buffer is only about 10Gb, but
 most of the queries work just fine (I'm using it for mostly read-only
 things).
 
 but ofc, you need to check your system usage, if mysqld swaps its a bad
 thing and most likely you need to either upgrade your hardware or consider
 checking your data architecture (i.e. use LIMIT for quieries, add more
 indexes, split large tables for a smaller ones which you really update or
 store large data in mongodb etc).
 
 
 
   command again a few seconds later, I get:
  
   mysql> update agent set number_of_emails = 15 where acnt =
  'AR287416';
   Query OK, 1 row affected (0.00 sec)
   Rows matched: 1  Changed: 1  Warnings: 0
  
   Why would we be getting such huge variations? We're running Solaris
   10 on i386, with
   4 processors and 16GB of memory, MySQL 5.1.46-log. We are working
   out a plan to upgrade to MySQL 5.6, but I certainly don't want to
   depend on that upgrade to solve all performance problems.
  
   CREATE TABLE `agent`
   (
 `acnt` varchar(20) NOT NULL,
 `passwd` varchar(20) NOT NULL,
 `package` char(2) DEFAULT NULL,
 `data_template` varchar(20) DEFAULT 'NULL',
 `default_search_type` enum('1','2','3') NOT NULL DEFAULT '1',
 `status` enum('A','T','P','C','D','X','**S') NOT NULL
 `flags` varchar(10) DEFAULT NULL COMMENT 'pipe-separated flags',
 `aliases` varchar(4000) NOT NULL DEFAULT '',
 `offices` varchar(4000) NOT NULL DEFAULT '',
 `license_no` varchar(40) NOT NULL DEFAULT '',
 `agent_code` varchar(20) DEFAULT NULL,
 `office_code` varchar(20) DEFAULT NULL,
 `parent_acnt` varchar(20) DEFAULT NULL,
 `number_of_agentlinks` int(11) DEFAULT NULL,
 `number_of_emails` int(11) DEFAULT NULL,
 `fname` varchar(30) DEFAULT NULL,
 `lname` varchar(30) DEFAULT NULL,
  
   whole bunch of other fields
  
 PRIMARY KEY (`acnt`),
 KEY `parent_acnt` (`parent_acnt`),
 KEY `status` (`status`),
 KEY `email` (`email`)
   ) ENGINE=InnoDB DEFAULT CHARSET=utf8
  
  
   --
   Andy Wallace
   iHOUSEweb, Inc.
   awall...@ihouseweb.com
   (866) 645-7700 ext 219
   --
   Sometimes it pays to stay in bed on Monday, rather than spending
   the
  rest
   of the week debugging Monday's code.
   - Christopher Thompson
  
   --
   MySQL General Mailing List
   For list archives: 

RE: Master not creating new binary log.

2013-07-04 Thread Rick James
What setting(s) did you change to move to the separate partition?
   SHOW VARIABLES LIKE '%bin%';
   SHOW VARIABLES LIKE '%dir%';
   (there may be other VARIABLES worth checking)
What steps did you take for the move?  (Actually move bin.1?  Start over?  etc?)

Consider using expire_logs_days.
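
For example (7 days is an arbitrary retention; put it in my.cnf too so it
survives restarts):

    SET GLOBAL expire_logs_days = 7;
    PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;  -- one-time manual cleanup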

5.0 -- I don't think anything relevant has changed during 4.0 thru 5.6.

 -Original Message-
 From: Machiel Richards - Gmail [mailto:machiel.richa...@gmail.com]
 Sent: Wednesday, July 03, 2013 3:20 AM
 To: mysql list
 Subject: Master not creating new binary log.
 
 Hi all
 
I hope all are well.
 
I would like some assistance with something really strange.
 
We currently have a problem with a master slave setup running mysql
 5.0.
 
 This is one of our legacy servers which are in the planning to be
 upgraded, however in order for this to be done the replication needs to be
 up and running.
 
 The problem we have currently however is that the binary logs on
 the master was moved to a seperate partition due to disc space
 restrictions.
 
 A new binlog file called mysql-bin.1 was created and everything
 seemed to work fine.
 
  However, the moment the file reached the file size of 100Mb, it
 does not go on to create a new binlog file called mysql-bin.2 and the
 replication fails stating that it is unable to read the binary log file.
 
  Thus far we have done a flush logs and reset master , but the
 same problem occurs, where it creates mysql-bin.1 and the moment it
 reaches it's max size and suppose to create a new file, it stops and does
 not create the new one.
 
  I really hope this makes sense, and that someone can perhaps
 point us in the correct direction.
 
  Any help would be appreciated.
 
 Regards
 
 
 
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Get Affected Rows after Stored Procedure COMMIT

2013-07-03 Thread Rick James
Fetch rows_affected after each INSERT/UPDATE.  Tally them in @variables, if you 
like.  The information is not (I think) available after COMMIT.
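
A sketch of the tallying (t1/t2 are placeholder tables; ROW_COUNT() must be
read immediately after each statement):

    START TRANSACTION;
      INSERT INTO t1 (c1) VALUES (42);
      SET @affected := ROW_COUNT();
      UPDATE t2 SET c1 = c1 + 1 WHERE id = 7;
      SET @affected := @affected + ROW_COUNT();
    COMMIT;
    SELECT @affected;  -- @variables are per-connection, so this survives the COMMIT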

 -Original Message-
 From: Neil Tompkins [mailto:neil.tompk...@googlemail.com]
 Sent: Tuesday, July 02, 2013 4:30 AM
 To: [MySQL]
 Subject: Get Affected Rows after Stored Procedure COMMIT
 
 Hi,
 
 I have a number of INSERT and UPDATE statements in a MySQL Stored
 Procedure, that works in the form of START TRANSACTION followed by COMMIT.
  Also I am handling any EXCEPTION.
 
 However, after calling COMMIT, how can I get the number of Rows that were
 affected either INSERTED or UPDATTED ?
 
 Thanks,
 Neil

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: 1 file

2013-07-03 Thread Rick James
I once found a slowlog called simply 1.  But I did not track down the cause.  
Possibly it was a not-so-correct configuration script.

SHOW VARIABLES LIKE '%dir%';

ibdata1 grows (never shrinks) when data is added, ALTER is done, etc.  It will 
reuse free space within itself.

innodb_file_per_table=1 is recommended

Having an explicit PRIMARY KEY on InnoDB tables is recommended.  (MEMORY did 
not care much.)

 -Original Message-
 From: Larry Martell [mailto:larry.mart...@gmail.com]
 Sent: Wednesday, July 03, 2013 9:29 AM
 To: shawn green
 Cc: mysql@lists.mysql.com
 Subject: Re: 1 file
 
 On Wed, Jul 3, 2013 at 9:51 AM, shawn green shawn.l.gr...@oracle.com
 wrote:
  Hello Larry,
 
 
  On 7/3/2013 11:27 AM, Larry Martell wrote:
 
  We recently changed from in memory files to InnoDB files. Today we
  noticed that in every server's data dir there is a file called '1' that
  seems to get updated every time the ibdata1 file gets updated. On
  some servers it's comparable in size to the ibdata1 file, on other
  servers it's 10-15x larger, and on others it's 1/2 the size. What is
  this file? Googling revealed nothing about this.
 
 
  That is not something an official MySQL build would do. Consult with
  the person (or group) that compiled your binaries.
 
  Now, if you have enabled --innodb-file-per-table and if you have named
  your table '1' then that file is probably '1.ibd'.  That would be
  expected. But that seems unlikely based on your other details.
 
  Did you also enable a separate undo log, perhaps? Although if you had,
  it should be 'undo1' not just '1'
  http://dev.mysql.com/doc/refman/5.6/en/innodb-parameters.html#sysvar_i
  nnodb_undo_tablespaces
 
  So, that simple '1' file also seems unusual to me.
 
 Thanks for the reply.
 
 I asked our DBA group and here's the answer I got:
 
 The file is currently accessed by mysqld, please don’t delete it.
 Looking at the file header, it appeared to be an innodb datafile.
 But no idea how it was created.
 
 Sigh.
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql



RE: Mysql resource limits.

2013-07-02 Thread Rick James
cgroups won't work for individual MySQL users, only for mysqld as a whole.

Monitor the slowlog and help the naughty users fix their naughty queries.
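
For reference, the per-account limits on the manual page the OP linked are set
via GRANT; a sketch (the account name is invented):

    GRANT USAGE ON *.* TO 'appuser'@'%'
        WITH MAX_QUERIES_PER_HOUR 10000
             MAX_UPDATES_PER_HOUR 1000
             MAX_USER_CONNECTIONS 20;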

 -Original Message-
 From: Rafał Radecki [mailto:radecki.ra...@gmail.com]
 Sent: Tuesday, July 02, 2013 3:07 AM
 To: mysql@lists.mysql.com
 Subject: Mysql resource limits.
 
 Hi All.
 
 I would like to limit resources available to a given user in mysql. I know
 that there is https://dev.mysql.com/doc/refman/5.5/en/user-resources.html,
 I also know that cgroups can be used at operating system level.
 
 What are your experiences in limiting resources in mysql? I've used
 percona statistics and had information provided by it. Are there any
 better solutions?
 
 Best regards,
 Rafal Radecki.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Full text search and & sign as a part of the keyword

2013-07-02 Thread Rick James
FULLTEXT (at least the MyISAM version) has 3 gotchas:
ft_min_word_len=4, stopwords, and the 50% rule
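
For example, to loosen the first gotcha (a sketch; the FULLTEXT indexes must
be rebuilt before the new length takes effect):

    # my.cnf
    [mysqld]
    ft_min_word_len = 3

    -- then, for each MyISAM table with a FULLTEXT index:
    REPAIR TABLE t QUICK;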

 -Original Message-
 From: shawn green [mailto:shawn.l.gr...@oracle.com]
 Sent: Tuesday, July 02, 2013 10:21 AM
 To: mysql@lists.mysql.com
 Subject: Re: Full text search and & sign as a part of the keyword
 
 Hello,
 
 (my response is not top-posted)
 On 7/2/2013 12:50 PM, l...@afan.net wrote:
 
 
 
  Another correction: Searching for Com, the test org is NOT gonna be
  listed but all others will.
 
  Searching for Com& no results at all.
 
 
 
 
 
 
 
  Actually, looks like I'm wrong.
 
  For testing purpose I made an org
 
  Comp&Me
 
  When search for Comp it's gonna
 
  be shown on the list.
 
 
 
  When search for Comp& it's also gonna be shown.
  But
 
  Construction Company as well.
 
  Then I changed the name of
 
  the test org to Com&Me.
 
 
 
  Searching for Com, the test org is gonna be listed.
 
 
 
  Though, Com& no results at
 
  all.
 
  ?!?
 
 
 
  Hi to all,
  I have this full text search query
  SELECT name, org_id,
  address_id
  FROM organization
  WHERE org_active='Y' AND MATCH(name) AGAINST('ABC*' IN
  BOOLEAN
  MODE)
  and I'm not getting any results. And there IS a org ABC, Inc.
  My assumption is the ampersand sign as a part of the keyword.
  Any idea?
 
 
 Read this:
 http://dev.mysql.com/doc/refman/5.5/en/server-system-
 variables.html#sysvar_ft_boolean_syntax
 
 Then search on Comp Me.
 
 Let us know your results.
 
 --
 Shawn Green
 MySQL Principal Technical Support Engineer Oracle USA, Inc. - Hardware and
 Software, Engineered to Work Together.
 Office: Blountville, TN
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: best way to copy a innodb table

2013-07-02 Thread Rick James
The particular example given here is unsafe and slow.

* Without an ORDER BY, you are not guaranteed that the chunks will be distinct.

* If there are any INSERTs/DELETEs between chunk copies, you will get 
dups/missing rows for two reasons:  the inserted/deleted rows, and the OFFSET 
is not quite right.

* OFFSET requires walking over the skipped rows.  As you get farther into the 
table, this takes longer.  That is, you have an ORDER(N**2) operation, not what 
could be ORDER(N).

* If replication is involved, 1M rows is a lot -- there will be noticeable 
delays where other replication activity is stalled.

If you have an AUTO_INCREMENT PRIMARY KEY, then using WHERE id > 1000000 AND id
<= 2000000 is a better approach -- Order(N), and chunks guaranteed to be
distinct.  Still, it is not immune from INSERTs/DELETEs.  Replication is fixed 
by decreasing chunk size (and by avoiding OFFSET).
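
A sketch of one such keyed chunk (1M-row chunks as in the example above;
assumes `id` is the AUTO_INCREMENT PRIMARY KEY):

    INSERT IGNORE INTO t1 (f1, f2, f3)
        SELECT f1, f2, f3 FROM t2
        WHERE id > 1000000 AND id <= 2000000;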

 -Original Message-
 From: Arjun [mailto:na...@yahoo.com]
 Sent: Tuesday, July 02, 2013 12:48 AM
 To: mysql@lists.mysql.com
 Subject: Re: best way to copy a innodb table
 
 Well, the easy way to chunk the inserts is by use of limit. Here is what I
 used for one of my projects:
 
 Insert ignore into t1 (f1, f2, f3)
 Select f1, f2, f3 from t2 limit 1000000, 1000000
 
 Inserts 1M records at a time starting from 1M th record in t2 and you can
 keep incrementing this offset as you progress. This will help in
 monitoring the table inserts and at the same time move chunks of records
 from source table.
 
 Enjoy!
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: mysql on zfs

2013-06-28 Thread Rick James
Prefer xfs on RHEL.

Certain stalls are inherent in older InnoDBs, but MariaDB 5.5 should have the 
Percona fixes that greatly smoothed out that problem.

What kind of drives?  A RAID controller with caching helps for datasets that 
big.

innodb_flush_log_at_trx_commit = 1 is a big performance killer if you are 
inserting one row at a time.

Check sync_binlog, too.
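
The usual relaxed-durability combination looks like this in my.cnf (whether
the crash-safety tradeoff is acceptable depends on the workload):

    innodb_flush_log_at_trx_commit = 2  -- flush log to OS per commit, fsync ~1/sec
    sync_binlog = 0                     -- let the OS decide when to sync the binlog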

 -Original Message-
 From: nixofortune [mailto:nixofort...@gmail.com]
 Sent: Friday, June 28, 2013 1:29 AM
 To: mysql list
 Subject: mysql on zfs
 
 Hi guys,
 Did you have any experience running MyLSQ or in my case MariaDB 5.5.31 on
 FreeBSD on top of zfs?
 We are using Samsung Pro 840 SSD drives and experiencing temporary stalls.
 Our workload very much skewed towards inserts into big InnoDB tables
 (70-100Gig) the dataset overall 1.5T.
 I have feeling that ZFS is not mature enough to be used on production.
 The speed is not great either 2k-6k/s.
 I disabled innodb_checksums = 0 , innodb_doublewrite = 0  but the stalls
 up to 8min still there.
 Would it be better option to move to EXT4? We need FS snapshots for
 backups.
 Your thought guys.
 Many thanks.
 Igor
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: NOW() is stuck...

2013-06-26 Thread Rick James
Submit a bug:
http://bugs.mysql.com
Alas, you probably cannot provide a reproducible test case.  Still, someone 
might start at the code and discover a possible cause.

 -Original Message-
 From: Andy Wallace [mailto:awall...@ihouseweb.com]
 Sent: Wednesday, June 26, 2013 3:10 PM
 To: mysql list
 Subject: NOW() is stuck...
 
 We've been having some issues with one of our MySQL servers lately, and
 currently the dang thing is stuck. For at least the last hour, NOW() is
 returning the same
 value:
 
 mysql> select now();
 +---------------------+
 | now()               |
 +---------------------+
 | 2013-06-26 02:27:14 |
 +---------------------+
 
 The system variable timestamp also has that same time value stored in
 it. How can we kick this loose so that the values are more current with
 real time? (it is currently 3:08PM here, despite our MySQL instance
 thinking it's 2am. The system time on the machine is correct:
 
 $ date
 Wed Jun 26 15:08:56 PDT 2013
 
 
 This is MySQL 5.1.46 running on solaris2.10.
 
 Any ideas short of restarting the MySQL engine? I'm willing to do that,
 but would much rather wait and not do it in the middle of the day.
 
 Thanks,
 Andy
 
 
 --
 Andy Wallace
 iHOUSEweb, Inc.
 awall...@ihouseweb.com
 (866) 645-7700 ext 219
 --
 Sometimes it pays to stay in bed on Monday, rather than spending the rest
 of the week debugging Monday's code.
 - Christopher Thompson
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: space gone after MyISAM REPAIR TABLE

2013-06-26 Thread Rick James
 (`site_id`,`index_date`,`index_month`,`index_year`,`keyword`,`source`,`visits`,
  
 `bounced_visits`,`transactions`,`revenue`,`value_per_click`,`conversions`,`goal_value`);

May we see the SHOW CREATE TABLE?  Some of this smells bad.
* It is almost always bad to split day/month/year into multiple fields.
* Often a fact table, which this sounds like, should not have extra indexes.
* Is each datatype as small as is practical?
* Are any of the fields VARCHAR, and could be 'normalized'?

I would expect this to the fastest way to convert (assuming you have the disk 
space):
   CREATE TABLE new LIKE old;
   ALTER TABLE new DROP (all indexes), ADD PRIMARY KEY (either autoincrement or 
a 'natural' compound PK), ENGINE=InnoDB;
   INSERT INTO new SELECT site_id, ..., goal_value FROM old;
   ALTER TABLE new ADD INDEX (...);
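
A concrete instance of those steps (purely illustrative -- the column names
are taken from the quoted INDEX above, but the index names and the choice of
PK are invented, and it assumes `old` has no PRIMARY KEY yet):

    CREATE TABLE new LIKE old;
    ALTER TABLE new
        DROP INDEX idx_keyword,   -- repeat for each secondary index
        ADD PRIMARY KEY (site_id, index_date, keyword, source),
        ENGINE = InnoDB;
    INSERT INTO new SELECT * FROM old;
    ALTER TABLE new ADD INDEX (keyword);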

What version of MySQL are you running?  Newer versions do the ALTER TABLE 
faster (online??), and may require you to do one add at a time.

Another issue...
If the data in `old` is in the same order as the PRIMARY KEY of `new`, then 
INSERT..SELECT will run fast.  (No need to jump around to find where to put 
each row.)
Case 1:  You are adding an AUTO_INC -- it will be in the 'right' order.
Case 2:  The new PK is approximately the order of the insertions into `old` -- 
probably run fast.  (However, I do not see a likely natural PK that would allow 
this.)

INSERT ... SELECT...ORDER BY (new PK) -- This would make the INSERT part fast, 
but the SELECT part would be slow.  (You can't win)

Your task is all about disk hits.  By understanding what MySQL has to do, you 
can 'predict' whether a plan will be slow or slower.

Back to the secondary indexes...
What are the SELECTs that will benefit from them?  (Sometimes discussing this 
can lead to fewer/better INDEXes.  Often it leads to suggesting Summary 
Table(s).)


 -Original Message-
 From: Johan De Meersman [mailto:vegiv...@tuxera.be]
 Sent: Wednesday, June 26, 2013 11:46 AM
 To: li...@netrogenic.com; Jay Ess; mysql@lists.mysql.com
 Subject: Re: space gone after MyISAM REPAIR TABLE
 
 You can't actually move innodb tables around until 5.6 where you have
 transportable tablespaces.
 
 I suggest having a good hard look at pt-online-schema-change or
 whatsitcalled.
 
 Jay Ess li...@netrogenic.com wrote:
 On 2013-06-26 18:31, nixofortune wrote:
  What would be the best way to convert BIG MyISAM table into InnoDB?
 We do not
  have SLAVE.
 
 I would do it on another computer. Then copy the table to the server
 and then add the data that has been added from the original table.
 
 And/or i would experiment with TokuDB. I havent had the time to do it
 myself but will probably soon. I am too looking for a lengthy 1
 billion+ row conversion.
 
 --
 Sent from Kaiten Mail. Please excuse my brevity.
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql



RE: How can I drop a table that is named “logs/#sql-ib203” and appeared after MySQL crash?

2013-06-25 Thread Rick James
If a crash occurs in the middle of an ALTER, the files may not get cleaned up.
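
One thing worth trying (it is the manual's documented recipe for orphan
intermediate tables; the exact table name varies):

    DROP TABLE `#mysql50##sql-ib203`;

The #mysql50# prefix tells the server to bypass filename encoding, which is
what normally makes these ghost tables unaddressable.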

 -Original Message-
 From: Reindl Harald [mailto:h.rei...@thelounge.net]
 Sent: Thursday, June 20, 2013 12:57 PM
 To: mysql@lists.mysql.com
 Subject: Re: How can I drop a table that is named “logs/#sql-ib203” and
 appeared after MySQL crash?
 
 i know what happened but how get rid of these two bullshit files after
 *three years* not touched and used by mysqld
 
 Am 20.06.2013 21:28, schrieb Rick James:
  #sql files are temp tables that vanish when the ALTER (or whatever)
 finishes.  If you find one sitting around, it sounds like a crash happened
 in the middle of the ALTER.
 
  -Original Message-
  From: Reindl Harald [mailto:h.rei...@thelounge.net]
  Sent: Wednesday, June 19, 2013 12:19 PM
  To: mysql@lists.mysql.com
  Subject: Re: How can I drop a table that is named “logs/#sql-ib203”
  and appeared after MySQL crash?
 
 
 
  Am 19.06.2013 21:00, schrieb Franck Dernoncourt:
  `USE logs; SHOW TABLES;` does not list the table `logs/#sql-ib203`,
  but when trying to `ALTER` the table that was being changed during
  the crash MySQL complains about the existence of the table
  `logs/#sql-
  ib203`:
 
  ERROR 1050: Table 'logs/#sql-ib203' already exists
 
  SQL Statement:
 
  ALTER TABLE `logs`.`srv_logs` DROP COLUMN `filenum` , DROP COLUMN
  `agent` , DROP COLUMN `ip` , DROP COLUMN `event_source`
 
  DROP TABLE `logs/#sql-ib203`; does not work, neither do some name
  variants `/#sql-ib203`, `#sql-ib203`, `sql-ib203`. (respectively
  Error
  Code: 1051.
  Unknown table 'logs.logs/#sql-ib203',  Error Code: 1051. Unknown
  table
 
  I use innodb_file_per_table. There was a mysql_datadir/logs/#sql-
  ib203.ibd
  file (or maybe .frm, sorry I forgot) that I deleted.
 
  Any idea how to get rid of this ghostly table `logs/#sql-ib203`?
 
  interesting question, i have identical crap since 2009 also after a
  crash and this blind table has the same structure as a used
  existing one
 
  if i delete the #-files mysql whines every startup while they are
  never used and it is ridiculous that there are references in the
  table space to this useless crap and no mysql version from 5.1.8 to
  5.5.32 is fixing this
 
  -rw-rw---- 1 mysql mysql 8,4K 2011-07-24 11:49 #sql2-704-271.frm
  -rw-rw---- 1 mysql mysql  64K 2011-07-24 11:49 #sql2-704-271.ibd


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: SHOW INNODB STATUS - FILE I/O - OS reads/writes/syncs?

2013-06-25 Thread Rick James
SHOW GLOBAL STATUS LIKE 'Innodb%';
Then do some math -- usually dividing by Uptime.
That will give you some insight in how hard the I/O is working, and how full 
the buffer_pool is.
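
For example (the status names are standard; Innodb_buffer_pool_reads counts
the reads that missed the buffer pool and went to disk):

    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
    SHOW GLOBAL STATUS LIKE 'Uptime';
    -- Innodb_buffer_pool_reads / Uptime          = disk reads per second
    -- Innodb_buffer_pool_read_requests / Uptime  = logical reads per second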

 -Original Message-
 From: Rafał Radecki [mailto:radecki.ra...@gmail.com]
 Sent: Friday, June 21, 2013 4:59 AM
 To: mysql@lists.mysql.com
 Subject: SHOW INNODB STATUS - FILE I/O - OS reads/writes/syncs?
 
 Hi All.
 
 I've searched but with no luck... what do exactly these variables mean:
 
 1343928 OS file reads, 1085452262 OS file writes, 19976022 OS fsyncs
 
 ?
 I am wondering if my innodb_buffer_pool setting is not to low. Does 'file
 reads' show number of times innodb files have been read into memory from
 server's start? What about file writes/fsyncs?
 
 Best regards,
 Rafal Radecki.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: UPDATE_TIME for InnoDB in MySQL 5.7

2013-06-25 Thread Rick James
Yeah, why not flush them to disk on a clean shutdown, and periodically before 
that?

 -Original Message-
 From: Dotan Cohen [mailto:dotanco...@gmail.com]
 Sent: Sunday, June 23, 2013 10:39 AM
 To: mysql.
 Subject: UPDATE_TIME for InnoDB in MySQL 5.7
 
 The MySQL 5.7 changelog mentions:
 Beginning with MySQL 5.7.2, UPDATE_TIME displays a timestamp value for
 the last UPDATE, INSERT, or DELETE performed on InnoDB tables.
 Previously, UPDATE_TIME displayed a NULL value for InnoDB tables. For
 MVCC, the timestamp value reflects the COMMIT time, which is considered
 the last update time. Timestamps are not persisted when the server is
 restarted or when the table is evicted from the InnoDB data dictionary
 cache.
 
 This is great news! However, I would in fact need the UPDATE_TIME to
 persist across database server resets. Is this feature being considered or
 discussed? Where might I find it online?
 
 Thank you to the MySQL team and to Oracle for filling in InnoDB;s missing
 features!
 
 --
 Dotan Cohen
 
 http://gibberish.co.il
 http://what-is-what.com
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql



RE: space gone after MyISAM REPAIR TABLE

2013-06-25 Thread Rick James
Switch to InnoDB so you won't have to repair after crashes.
Caution:  InnoDB takes 2x-3x the disk space per table.  Be sure to use 
innodb_file_per_table=1.
"Repair by sort" is usually much faster than "repair by keycache"; you
probably got 'sort' because of this being big enough:  myisam_sort_buffer_size 
= 526M

 -Original Message-
 From: nixofortune [mailto:nixofort...@gmail.com]
 Sent: Monday, June 24, 2013 12:35 PM
 To: mysql@lists.mysql.com
 Subject: Re: space gone after MyISAM REPAIR TABLE
 
 On 24/06/13 19:57, Reindl Harald wrote:
 
  Am 24.06.2013 18:47, schrieb Johan De Meersman:
  - Original Message -
  From: nixofortune nixofort...@gmail.com
 
  Hi guys,
  any suggestions? I just repaired 90G MyISAM table with REPAIR TABLE
  command. the space on the hard drive gone down from 165 Gig to 70
  Gig. I understand that during repair process MySQL creates temp file
  and remove it after the job done.  Or removal process executes on
  the server restart? how can I get that space back? I can't check the
  table directory as I don't have root perm on that box.
  Oops... Can you run [show global variables like
 'innodb_file_per_table';] ?
 
  I kind of expect it to be OFF, which means that the temp table would
 have been created in the main tablespace. If that's the case, that space
 has been permanently assimilated by the global tablespace; the only way to
 get it back would be a full dump of all your (innodb) tables, stop server,
 delete tablespace, start server and import the data again. Be sure to read
 the documentation carefully before doing such an intrusive operation.
  While you're doing that, use the opportunity to set
  innodb_file_per_table to ON :-p
  he spoke about MYISAM table
 
  the space on the hard drive gone down from 165 Gig to 70 Gig how can
  I get that space back?
  I can't check the table directory as I don't have root perm
  well, someone should look at the dadadir and error-log it is not
  uncommon that a repair to such large tables fails due too small
  myisam_sort_buffer_size and i suspect the operation failed and some
  temp file is laying around
 
 Thanks Reindl, It looks like Repair operation completed successfully.
 Overall it took 2 hours to complete with an OK message and some other message
 related to the index size. Repair process went through Repair by sort.
 myisam_sort_buffer_size = 526M.
 Provider runs MySQL on FreeBSD + ZFS file system. Could it be due to
 snapshots as well?
 I will ask them to look inside of datadir as we migrated this DB from
 Solaris just day before. This is a new DB for me and I never worked with
 MyISAM tables of that size.
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql



RE: How can I drop a table that is named “logs/#sql-ib203” and appeared after MySQL crash?

2013-06-20 Thread Rick James
#sql files are temp tables that vanish when the ALTER (or whatever) finishes.  
If you find one sitting around, it sounds like a crash happened in the middle 
of the ALTER.

 -Original Message-
 From: Reindl Harald [mailto:h.rei...@thelounge.net]
 Sent: Wednesday, June 19, 2013 12:19 PM
 To: mysql@lists.mysql.com
 Subject: Re: How can I drop a table that is named “logs/#sql-ib203” and
 appeared after MySQL crash?
 
 
 
 Am 19.06.2013 21:00, schrieb Franck Dernoncourt:
  `USE logs; SHOW TABLES;` does not list the table `logs/#sql-ib203`,
  but when trying to `ALTER` the table that was being changed during the
  crash MySQL complains about the existence of the table `logs/#sql-
 ib203`:
 
  ERROR 1050: Table 'logs/#sql-ib203' already exists
 
  SQL Statement:
 
  ALTER TABLE `logs`.`srv_logs` DROP COLUMN `filenum` , DROP COLUMN
  `agent` , DROP COLUMN `ip` , DROP COLUMN `event_source`
 
  DROP TABLE `logs/#sql-ib203`; does not work, neither do some name
  variants `/#sql-ib203`, `#sql-ib203`, `sql-ib203`. (respectively Error
 Code: 1051.
  Unknown table 'logs.logs/#sql-ib203',  Error Code: 1051. Unknown table
 
  I use innodb_file_per_table. There was a mysql_datadir/logs/#sql-
 ib203.ibd
  file (or maybe .frm, sorry I forgot) that I deleted.
 
  Any idea how to get rid of this ghostly table `logs/#sql-ib203`?
 
 interesting question, i have identical crap since 2009 also after a crash
 and this blind table has the same structure as a used existing one
 
 if i delete the #-files mysql whines every startup while they are never
 used and it is ridiculous that there are references in the table space to
 this useless crap and no mysql version from 5.1.8 to 5.5.32 is fixing this
 
 -rw-rw---- 1 mysql mysql 8,4K 2011-07-24 11:49 #sql2-704-271.frm
 -rw-rw---- 1 mysql mysql  64K 2011-07-24 11:49 #sql2-704-271.ibd
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: SLAVE aware of binary log file switch?

2013-06-17 Thread Rick James
"Waiting for master to send event" -- just means that nothing is being
replicated at the moment.

The Yes+Yes says that things are running.

Seconds_behind_master = 0 says that the Slave is essentially caught up.  NULL
means something is broken.  >0 _may_ indicate a problem, or it may indicate a
brief delay.

 -Original Message-
 From: Mihamina Rakotomandimby [mailto:miham...@rktmb.org]
 Sent: Monday, June 17, 2013 5:35 AM
 To: mysql@lists.mysql.com
 Subject: Re: SLAVE aware of binary log file switch?
 
 On 2013-06-17 14:43, Denis Jedig wrote:
  Say the binary log file (on the master) has reached its maximum size,
  so that it has to switch to a +1 binary log file: does he inform
  the SLAVE of that switch so that the SLAVE updates its information
  about the MASTER status?
  The master does not inform the slave via an immediate communication
  channel, but the slave knows how to keep up because the end of the
  binary log file contains continuation information - i.e. the name of
  the next log file to fetch.
 
 OK.
 
 I'm sorry I was mislead by the output of:
 [mihamina@prod-ebidual ~]$ echo SHOW SLAVE STATUS \G; | mysql -uroot -px | grep 'Slave_'
 Slave_IO_State: Waiting for master to send event <-- this
   Slave_IO_Running: Yes
  Slave_SQL_Running: Yes
 
 --
 RMA.



RE: How do I select all rows of table that have some rows in another table (AND, not OR)

2013-06-12 Thread Rick James
Thinking out of the box... (And posting my reply at the 'wrong' end of the 
email.)...

Are there fewer than 64 genres?  Use a SET or BIGINT UNSIGNED.

AND sg.`genre_id` IN (10,38)
AND sg.`genre_id` NOT IN (22,61)

--
AND  genre & ((1<<10) | (1<<38)) != 0
AND  genre & ((1<<22) | (1<<61))  = 0

This would avoid having that extra table, and save a lot of space.

If you have more than 64 genres, then 
Plan A: clump them into some kind of application grouping and use multiple 
INTs/SETs.
Plan B: do mod  div arithmetic to compute which genre field to tackle.

For B, something like:
AND  (genre1 & (1<<0)) + (genre3 & (1<<8)) != 0
AND  (genre2 & (1<<2)) + (genre6 & (1<<1))  = 0
(That's assuming 10 bits per genre# field.  I would use 32 and INT UNSIGNED.)



 -Original Message-
 From: Daevid Vincent [mailto:dae...@daevid.com]
 Sent: Tuesday, June 11, 2013 4:17 PM
 To: mysql@lists.mysql.com
 Cc: 'shawn green'
 Subject: RE: How do I select all rows of table that have some rows in
 another table (AND, not OR)
 
 
 
  -Original Message-
  From: shawn green [mailto:shawn.l.gr...@oracle.com]
  Sent: Tuesday, June 11, 2013 2:16 PM
  To: mysql@lists.mysql.com
  Subject: Re: How do I select all rows of table that have some rows in
  another table (AND, not OR)
 
  Hello Daevid,
 
  On 6/11/2013 3:59 PM, Daevid Vincent wrote:
   I am trying to implement a filter so that a user could select
   various
  genres
   they want in or out. Perhaps they like 'action' and 'car chases'
   but don't like 'foreign' and 'drifting' (or whatever...)
  
   So I want something sort of like this, however IN() is using an OR
   comparison when I need it to be an AND
  
   SELECT DISTINCT
s.`scene_id` AS `id`,
GROUP_CONCAT(sg.`genre_id`) FROM
`dvds` AS d
JOIN `scenes_list` AS s
ON s.`dvd_id` = d.`dvd_id`
JOIN `scenes_genres` AS sg
ON sg.`scene_id` = s.`scene_id`
AND sg.`genre_id` IN (10,38)
AND sg.`genre_id` NOT IN (22,61) GROUP BY
   s.`scene_id`;
  
   This is giving me way way too many rows returned.
  
   For example, I would expect this scene_id to be in the result set:
  
   SELECT * FROM scenes_genres WHERE scene_id = 17;
  
   scene_id  genre_id
     --
  17   1
  17   3
  17  10 <--
  17  19
  17  38 <--
  17  53
  17  58
  17  59
  
   And this scene ID to NOT be in the result set:
  
   SELECT * FROM scenes_genres WHERE scene_id = 11;
  
   scene_id  genre_id
     --
  11   1
  11  10 <--
  11  19
  11  31
  11  32
<-- but does not have 38
  11  59
  
   I've tried various subselect ideas, but of course this fails b/c
 genre_id
   can't be multiple things at one time (AND)
  
   JOIN `scenes_genres` AS sg
 ON sg.`scene_id` = s.`scene_id`
AND sg.`genre_id` IN (
  SELECT `genre_id` FROM `scenes_genres`
   WHERE `genre_id` = 10
  AND `genre_id` = 38
  AND `genre_id`  22
  AND `genre_id`  61
   )
  
   And straight up like this failure too...
  
  JOIN `scenes_genres` AS sg
ON sg.`scene_id` = s.`scene_id`
   AND (sg.`genre_id` = 10 AND sg.`genre_id` = 38)
   AND NOT (sg.`genre_id` = 22 AND sg.`genre_id` = 61)
  
   So I'm sort of out of ideas at this point and hoping someone has a
   way
 to
  do
   this.
  
   Also, just for SG this is how we are currently implementing it, but
   we
  feel
   the REGEXP is killing our queries and while clever is a bit hacky
   and nullifies any indexes we have on the genres column as it
   requires a file_sort table scan to compare substrings basically...
  
   SELECT * FROM scene_all_genres WHERE scene_id = 17;
  
   scene_id  genres
     
  17  1|3|10|19|38|53|58|59|
  
   SELECT * FROM scene_all_genres WHERE scene_id = 11;
  
   scene_id  genres
     ---
  11  1|10|19|31|32|59|
  
   SELECT DISTINCT
s.`scene_id` AS `id`,
sg.`genres`
   FROM
`scene_all_genres` AS sg,
`dvds` AS d,
`scenes_list` AS s
   WHERE s.`dvd_id` = d.`dvd_id`
AND sg.`scene_id` = s.`scene_id`
  AND sg.`genres` REGEXP '[[:<:]]10[[:>:]].*[[:<:]]38[[:>:]]'
  AND sg.`genres` NOT REGEXP
 '(([[:<:]]22[[:>:]])|([[:<:]]61[[:>:]]))'
  
   ;
  
   http://dev.mysql.com/doc/refman/5.0/en/regexp.html#operator_regexp
  
  [[:<:]], [[:>:]]
  
   These markers stand for word boundaries. They match the beginning
   and
 end
  of
   words, respectively. A word is a sequence of word characters that 

RE: How do I select all rows of table that have some rows in another table (AND, not OR)

2013-06-12 Thread Rick James
I'm the ORIGINAL Rick James, B  (And, I'm still alive.)  LOL

If you are using PHP, you might want to stop at 31 bits per INT/SET.  PHP seems 
not to yet be in the 64-bit world.

 -Original Message-
 From: Daevid Vincent [mailto:dae...@daevid.com]
 Sent: Wednesday, June 12, 2013 11:26 AM
 To: mysql@lists.mysql.com
 Subject: RE: How do I select all rows of table that have some rows in
 another table (AND, not OR)
 
 This idea is so fancy pants and clever I *wish* it could have worked for
 me.
 I checked and we actually have 65 genres currently (with more to come I'm
 sure) LOL *FML*. I'm almost ashamed I didn't think of this. I <3 me some
 bitmasks and this solution is so elegant. It's unfortunate there isn't a
 way to use more than 64-bits natively.
 
 You're RICK JAMES Bitch! :-p   (please tell me you know the Dave
 Chappelle's
 skit I'm referring to)
 
  -Original Message-
  From: Rick James [mailto:rja...@yahoo-inc.com]
  Sent: Wednesday, June 12, 2013 9:39 AM
  To: Daevid Vincent; mysql@lists.mysql.com
  Cc: 'shawn green'
  Subject: RE: How do I select all rows of table that have some rows in
  another table (AND, not OR)
 
  Thinking out of the box... (And posting my reply at the 'wrong' end of
  the email.)...
 
  Are there fewer than 64 genres?  Use a SET or BIGINT UNSIGNED.
 
  AND sg.`genre_id` IN (10,38)
  AND sg.`genre_id` NOT IN (22,61)
 
  --
  AND  genre & ((1<<10) | (1<<38)) != 0
  AND  genre & ((1<<22) | (1<<61))  = 0
 
  This would avoid having that extra table, and save a lot of space.
 
  If you have more than 64 genres, then
  Plan A: clump them into some kind of application grouping and use
  multiple INTs/SETs.
  Plan B: do mod  div arithmetic to compute which genre field to tackle.
 
  For B, something like:
  AND  (genre1 & (1<<0)) + (genre3 & (1<<8)) != 0
  AND  (genre2 & (1<<2)) + (genre6 & (1<<1))  = 0 (That's assuming
  10 bits per genre# field.  I would use 32 and INT
  UNSIGNED.)
 
 
 
   -Original Message-
   From: Daevid Vincent [mailto:dae...@daevid.com]
   Sent: Tuesday, June 11, 2013 4:17 PM
   To: mysql@lists.mysql.com
   Cc: 'shawn green'
   Subject: RE: How do I select all rows of table that have some rows
   in another table (AND, not OR)
  
  
  
-Original Message-
From: shawn green [mailto:shawn.l.gr...@oracle.com]
Sent: Tuesday, June 11, 2013 2:16 PM
To: mysql@lists.mysql.com
Subject: Re: How do I select all rows of table that have some rows
in another table (AND, not OR)
   
Hello Daevid,
   
On 6/11/2013 3:59 PM, Daevid Vincent wrote:
 I am trying to implement a filter so that a user could select
 various
genres
 they want in or out. Perhaps they like 'action' and 'car
 chases'
 but don't like 'foreign' and 'drifting' (or whatever...)

 So I want something sort of like this, however IN() is using an
 OR
 comparison when I need it to be an AND

 SELECT DISTINCT
  s.`scene_id` AS `id`,
  GROUP_CONCAT(sg.`genre_id`) FROM
  `dvds` AS d
  JOIN `scenes_list` AS s
  ON s.`dvd_id` = d.`dvd_id`
  JOIN `scenes_genres` AS sg
  ON sg.`scene_id` = s.`scene_id`
  AND sg.`genre_id` IN (10,38)
  AND sg.`genre_id` NOT IN (22,61) GROUP BY
 s.`scene_id`;

 This is giving me way way too many rows returned.

 For example, I would expect this scene_id to be in the result set:

 SELECT * FROM scenes_genres WHERE scene_id = 17;

 scene_id  genre_id
   --
17   1
17   3
    17  10 <--
    17  19
    17  38 <--
17  53
17  58
17  59

 And this scene ID to NOT be in the result set:

 SELECT * FROM scenes_genres WHERE scene_id = 11;

 scene_id  genre_id
   --
11   1
    11  10 <--
11  19
11  31
11  32
  <-- but does not have 38
11  59

 I've tried various subselect ideas, but of course this fails b/c
   genre_id
 can't be multiple things at one time (AND)

 JOIN `scenes_genres` AS sg
   ON sg.`scene_id` = s.`scene_id`
  AND sg.`genre_id` IN (
SELECT `genre_id` FROM `scenes_genres`
 WHERE `genre_id` = 10
AND `genre_id` = 38
AND `genre_id`  22
AND `genre_id`  61
 )

 And straight up like this failure too...

JOIN `scenes_genres` AS sg
  ON sg.`scene_id` = s.`scene_id`
 AND (sg.`genre_id` = 10 AND sg.`genre_id` = 38)
 AND NOT (sg

RE: string-likeness

2013-06-03 Thread Rick James
Soundex is the 'right' approach, but it needs improvement.  So, find an 
improvement, then do something like this...
Store the Soundex value in a column of its own, INDEX that column, and JOIN on 
that column using =.  Thus, ...
* You have spent the effort to convert to Soundex once, not on every call.
* Multiple strings will have the same Soundex, but generally not many will have 
the same.  Hence, the JOIN won't be 1:1, but rather some small number.
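
A sketch of that layout (the table and column names are placeholders; MySQL's
SOUNDEX() output is variable-length, so give the column some room):

    ALTER TABLE t1 ADD COLUMN name_sdx VARCHAR(32), ADD INDEX (name_sdx);
    UPDATE t1 SET name_sdx = SOUNDEX(name);
    -- after preparing t2 the same way:
    SELECT a.name, b.name
        FROM t1 AS a
        JOIN t2 AS b  ON a.name_sdx = b.name_sdx;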

Other approaches (eg, Levenshtein) need both strings in the computation.  It 
_may_ be possible to work around that by the following.
Let's say you wanted a match if
* one letter was dropped or added or changed, or
* one pair of adjacent letters was swapped.
Then...  For a N-letter word, store N+1 rows:
* The word, as is,
* The N words, each shortened by one letter.
Then an equal match on that hacked column will catch a single
dropped/added/changed letter with only N+1 matches.
(Minor note:  doubled letters make the count less than N+1.)

 -Original Message-
 From: h...@tbbs.net [mailto:h...@tbbs.net]
 Sent: Monday, June 03, 2013 8:30 AM
 To: mysql@lists.mysql.com
 Subject: string-likeness
 
 I wish to join two tables on likeness, not equality, of character strings.
 Soundex does not work. I am using the Levenshtein edit distance, written in
 SQL, a very costly test, and I am in no position to write it in C and link
 it to MySQL--and joining on equality takes a fraction of a second, and
 this takes hours. Any good ideas?
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Audit Table storage for Primary Key(s)

2013-05-31 Thread Rick James
UUID PRIMARY KEY (or even secondary index) -- 
Once the table gets big enough (bigger than RAM cache), each row INSERTed (or 
SELECTed) will be a disk hit.  (Rule of Thumb:  only 100 hits/sec.)  This is 
because _random_ keys (like UUID) make caching useless.  Actually, the slowdown 
will be gradual.  For example, once the table is 5 times as big as the cache, 
80% (1-1/5) of the INSERTs/SELECTs will hit disk.
Bottom line -- Avoid UUIDs in huge tables, if at all possible.  (Exception:  
The bits in type-1 UUIDs can be rearranged to be roughly chronological.)
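
A sketch of that rearrangement (the commonly cited substring shuffle; @u holds
a type-1 UUID string, and the result is meant to be stored as BINARY(16)):

    SET @u := UUID();
    SELECT UNHEX(CONCAT(
        SUBSTRING(@u, 15, 4),   -- time-high (with version nibble)
        SUBSTRING(@u, 10, 4),   -- time-mid
        SUBSTRING(@u,  1, 8),   -- time-low
        SUBSTRING(@u, 20, 4),
        SUBSTRING(@u, 25)
    )) AS ordered_uuid;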

BIGINT -- You cannot possibly hit its max with any existing hardware.

MyISAM -- PRIMARY KEY is just another secondary index.  Secondary indexes are 
separate BTrees.

InnoDB -- PRIMARY KEY and data coexist in the same BTree.  Secondary indexes 
are separate BTrees.

So, assuming this audit table will be huge (too big to be cached), you need 
to carefully consider every index, both for writing and for reading.

You mentioned that you might audit 50 tables?  An index that starts with 
table_name would be inserting/selecting in 50 spots.  If the second part of the 
index is something 'chronological', such as an AUTO_INCREMENT or TIMESTAMP, 
then there would be 50 hot spots in the index.  This is quite efficient.  
INDEX(table_name, UUID) would be bad because of the randomness.

InnoDB may be the preferred engine, even though the footprint is bigger.  This 
is because careful design of the PK could lead to INSERTs into hot spot(s), 
plus SELECTs being able to take advantage of locality of reference.  With 
PRIMARY KEY(table_name, ...), and SELECT .. WHERE tablename='...', InnoDB will 
find all the rows together (fewer disk hits); MyISAM will find the data 
scattered (more disk hits, hence slower).

Another aspect...  Would your SELECTs say WHERE ... AND timestamp BETWEEN... 
?  And, would you _usually_ query _recent_ times?  If so, there could be a 
boost from doing both of these
** PARTITION BY RANGE(TO_DAYS(timestamp))
** Move timestamp to the _end_ of any indexes that it is in.
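
A sketch combining both ideas (the schema is invented for illustration; note
the datetime column sits at the end of the secondary index):

    CREATE TABLE audit_log (
        table_name  VARCHAR(64)  NOT NULL,
        row_pk      VARCHAR(128) NOT NULL,
        change_note TEXT,
        changed_at  DATETIME     NOT NULL,
        INDEX (table_name, changed_at)
    ) ENGINE = InnoDB
      PARTITION BY RANGE (TO_DAYS(changed_at)) (
        PARTITION p201305 VALUES LESS THAN (TO_DAYS('2013-06-01')),
        PARTITION p201306 VALUES LESS THAN (TO_DAYS('2013-07-01')),
        PARTITION pmax    VALUES LESS THAN MAXVALUE
      );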

I would be happy to discuss these principles further.  To be able to discuss 
more specifically, please provide
** Your tentative SHOW CREATE TABLE
** how big you plan for the table to become (#rows or GB),
** how much RAM you have

 -Original Message-
 From: Carsten Pedersen [mailto:cars...@bitbybit.dk]
 Sent: Friday, May 31, 2013 4:05 AM
 Cc: [MySQL]
 Subject: Re: Audit Table storage for Primary Key(s)
 
 Based on the little information available, I would make a lookup field
 consisting of tablename and primary keys.
 
 (although I still believe that storing this information in the database
 in the first place is probably the wrong approach, but to each his own)
 
 / Carsten
 
 On 31-05-2013 12:58, Neil Tompkins wrote:
  The kind of look ups will be trying to diagnose when and by who
  applied a update.  So the primary key of the audit is important.  My
  question is for performance, should the primary key be stored as a
  indexed field like I mentioned before, or should I have a actual
  individual field per primary key
 
 
  On Fri, May 31, 2013 at 11:03 AM, Carsten Pedersen
  cars...@bitbybit.dk wrote:
 
  Again: Unless you can give some idea as to the kind of lookups
 you
  will be performing (which fields? Temporal values? etc.), it is
  impossible to give advice on the table structure. I wouldn't
 blame
  anyone for not being able to do so; saving data for debugging
 will
  always be a moving target and almost by definition you don't know
  today what you'll be looking for tomorrow.
 
  That's why I think that using CSV tables _the contents of which
 can
  subsequently be analyzed using any of a number of text file
  processing tools_ may indeed be your best initial option.
 
  On UUIDs vs. INTs: (1) Please do yourself a favor and read up on
 how
  UUIDs are generated. If it's the same server that generates all
 the
  UUIDs, you won't get a lot of uniqueness for the amount of space
  you'll be using for your data and index; (2) Please do the math
 of
  just how many inserts you can do per second over the next 1.000
  years if you use a longint auto-increment field for your PK.
 
  / Carsten
 
  On 31-05-2013 11:14, Neil Tompkins wrote:
 
  Thanks for your response.  We expect to use the Audit log
 when
  looking into
  exceptions and/or any need to debug table updates.  I don't
  think a CSV
  table would be sufficient as we are wanting to use a
 interface
  to query
  this data at least on a daily basis if not weekly.
 
  I use UUID because we have currently 54 tables, of which
  probably 30 will
  be audited.  So a INT PK wouldn't work because of the number
 of
  updates we
  are applying.
 
 
  On Fri, May 31, 2013 at 9:58 AM, Carsten 

RE: Bug in BETWEEN same DATETIME

2013-05-29 Thread Rick James
(To ramble in a slightly different direction...)

I claim that the world gained half a second when we went from round time to 
square time a few decades ago.  Before then, announcers on radio/tv would 
look at their round-shape analog clock to see what time it was; they would 
perform a ROUND() function before announcing the time.  Now they look at their 
square-shaped digital clock and perform FLOOR().  So, what you hear on radio/tv 
is half a second behind what you used to hear.   ;)

 -Original Message-
 From: h...@tbbs.net [mailto:h...@tbbs.net]
 Sent: Friday, May 24, 2013 11:08 AM
 To: mysql@lists.mysql.com
 Subject: Re: Bug in BETWEEN same DATETIME
 
  2013/05/24 09:49 -0400, shawn green 
 Or we could coerce datetime values back to their date values when both
 are being used. The trick now becomes choosing between rounding the
 datetime value (times past noon round to the next date) or do we use
 the floor() function all the time.
 
 This is simply wrong. Timestamps are not numbers: we do not add
 timestamps, and when we subtract them we do not consider the difference
 something of the same type. Therefore, one does well to be wary when
 applying to a timestamp the notion rounding.
 
 But containment generally applys: an event on MAY 25th from 1pm to 4pm
 is within May 25th, which is within May,  When containment fails,
 then there is trouble: what is the first weekend of August? or the
 first week of August? better to say, the weekend or week of August 1st,
 or 2d, or ...; day is a common divisor to calendar-month, weekend,
 and week.
 
 Therefore, when I learnt that in version 4 MySQL had gone from
 interpreting a comparison between DATE and a finer timestamp by the
 DATE to interpreting it by the finer timestamp I believed that MySQL
 was going the wrong way--that MySQL had gone from a realization of an
 intuitive sense of containing, as above, to one on which too much
 thought had been expended, with a loss of intuitive sense.
 
 I consider the change of 2013/5/25-13 to 2013/5/25 to be truncation,
 not any sort of rounding; that is, it is a matter of notation, but one
 which intuitivly expresses containment.
 
 These notions sometimes change over the years, and by nation. When the
 first public striking clock was set up in Milan, it pointed to hours I
 through XXIV, with sunset falling within the 24th hour--that is, the
 24th hour ends with 24 o'clock s being struck. This persists to this
 day in the German expression viertel sechs, which means that the
 sixth hour is one-fourth over, or, as we would say it, quarter after
 five. (Like expressions are found amongst the Germans s neighbors, but
 in English never took root.) Nowadays we are are more inclined to
 associate both quarter after five and quarter to six (dreiviertel
 sechs) with 5 o'clock than 6 o'clock; this accompanies the change of
 notation from 1 through 24 to 0 through 23.
 
 I find MySQL's automatic conversion sometimes to be downright screwy;
 (version 5.5.8) consider SELECT NULL and SELECT NULL UNION SELECT
 NULL; in one of my views there is a complex wholly numeric expression
 that becomes varbinary(32).
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Bug in BETWEEN same DATETIME

2013-05-24 Thread Rick James
For years (even decades), I have stayed out of trouble by assuming a 'date' 
represents the instant in time corresponding to midnight at the start of that 
day.  In MySQL (until 5.6), that is equivalent to a 1-second DATETIME.  I also 
assume midnight belongs to the day that it is the _start_ of.

 There is also a strong desire to make a database server try to do exactly 
 what the user tells it to do.
That is difficult, because of definitions and representation.
A common problem is comparing a FLOAT value to a 'decimal' value like 1.23.  
MySQL does a good job of covering some cases, but there are still cases between 
DECIMAL, FLOAT, DOUBLE, and literals that will register as inequality, to the 
surprise of the user.
I see the DATE problem as another thing where the user needs to understand the 
computer's algorithm, which, as Shawn points out is:

We do one thing (make dates represent midnight on that date when they need to 
be compared to datetime values) and allow the users to decide how to handle the 
rest of the comparison according to their specific needs.

WHERE datetime_col >= '2013-01-01'
  AND datetime_col <  '2013-01-01' + INTERVAL 1 DAY
but for predictability and reliability, this is one rewrite that may not 
always be true.
So, to be safe, one should perhaps say:
  WHERE datetime_col >= '2013-01-01 00:00:00'
    AND datetime_col <  '2013-01-01 00:00:00' + INTERVAL 1 DAY

IN_DATE (or maybe ON_DAY) is an interesting idea.  I assume it would be 
transliterated by the parser into something like the expression above, then 
optimized based on which part(s) are columns and which are literals.

 '2013-05-14 17:00:00' >= '2013-01-01'
 AND '2013-05-14 17:00:00' <= '2013-05-14' + INTERVAL 12 HOUR
There's an extra second in that!  (I call it the midnight bug.)

I perceive (rightly or wrongly) that comparing a TIMESTAMP to something first 
converts the TIMESTAMP value to a string ('2013-...').  Shawn, perhaps this 
statement belongs as part of the 'algorithm' explanation?

Yes, you might get in trouble if the same SELECT were run in two different 
timezones at the same time.  Or, TIMESTAMP might help you get the right 
answer.

There are something like 5 different datetime concepts.  MySQL covers 2 of 
them.
DATETIME is a picture of _your_ clock.
TIMESTAMP is an instant in the _universe_.
For these, and others, think of a recurring event on a calendar, a sporting 
event, an appointment (potentially in a diff timezone), train schedule, etc.

 -Original Message-
 From: shawn green [mailto:shawn.l.gr...@oracle.com]
 Sent: Friday, May 24, 2013 6:50 AM
 To: mysql@lists.mysql.com
 Subject: Re: Bug in BETWEEN same DATETIME
 
 Hello Rick,
 
 On 5/23/2013 7:08 PM, Rick James wrote:
  Watch out for CAST(), DATE(), and any other function.  In a WHERE
 clause, if you hide an indexed column inside a function, the index
 cannot be used for optimization.
 
 INDEX(datetime_col)
 ...
 WHERE DATE(datetime_col) = '2013-01-01'
  will not use the index!
 
  The workaround is messy, but worth it (for performance):
 WHERE datetime_col >= '2013-01-01'
   AND datetime_col <  '2013-01-01' + INTERVAL 1 DAY (or any of a
  zillion variants)
 
  (Yeah, it seems like the optimizer could do the obvious
 transformation
  for you.  Hint, hint, Shawn.)
 
 
 Or we could coerce datetime values back to their date values when both
 are being used. The trick now becomes choosing between rounding the
 datetime value (times past noon round to the next date) or do we use
 the
 floor() function all the time.
 
 This has been discussed and the consensus was that the most predictable
 and performant behavior was to extend a date value to become a datetime
 value by associating it with midnight (00:00:00).
 
 Let's look at some examples:
 a)   '2013-05-14 07:00:00' >= '2013-05-14'
 This is true as the datetime value is 7 hours after midnight.
 
 b)    '2013-05-14 07:00:00' >= '2013-05-14'
   AND '2013-05-14 07:00:00' <  '2013-05-15'
 This is true as the time value is somewhen between both midnights.
 
 c)    '2013-05-14 07:00:00' > '2013-05-14' + INTERVAL 8 HOURS
 This is false. The offset applied to the date term means the time
 portion of the resulting datetime value is 0800, not 0000. (0700 >
 0800) is false.
 
 d) And what if instead of comparing against the FLOOR() of each date we
 rounded datetime values up or down to their nearest dates?
 '2013-05-14 17:00:00' = '2013-05-14'
 This would be false because the datetime value would have rounded up to
 '2013-05-15'.
 
 
 There is also a strong desire to make a database server try to do
 exactly what the user tells it to do. If the user wants to compare a
 value to another value with an equality check, we should do that.  It
 would be very odd behavior if an equality check suddenly turns into a
 ranged check.  I realize how much time it would save people to not need
 to include both ends of the range:
  WHERE datetime_col = '2013-01-01

RE: Bug in BETWEEN same DATETIME

2013-05-23 Thread Rick James
I use this; it keeps me out of trouble whether I am using
* MySQL's DATE vs DATETIME vs TIMESTAMP
* Sybase dates (to minute or to millisecond, hence :59:59 does not work)
* leap year

WHERE dt >= ?
  AND dt <  ? + INTERVAL ? DAY

I fill in the first two ? with the same starting date.

 -Original Message-
 From: Michael Dykman [mailto:mdyk...@gmail.com]
 Sent: Thursday, May 23, 2013 2:56 PM
 To: MySql
 Subject: Re: Bug in BETWEEN same DATETIME
 
 where cast(transaction_date as date) BETWEEN '2013-04-16' AND
 
 This approach might be problematic in that it requires that every row
 in the source table be examined so that its transaction_date can be
 casted.
  The original formulation is more efficient as it allows an index on
 transaction_date to be used, if one exists.
 WHERE `transaction_date` BETWEEN '2013-04-16 00:00:00' AND
 '2013-04-16 23:59:59'
 
 Although you probably get the result you want by just incrementing the
 day on the upper-limit.
 WHERE `transaction_date` BETWEEN '2013-04-16' AND '2013-04-17'
 
  - michael dykman
 
 
 On Thu, May 23, 2013 at 5:07 PM, Peterson, Timothy R 
 timothy_r_peter...@uhc.com wrote:
 
  You probably want
  where cast(transaction_date as date) BETWEEN '2013-04-16' AND
  '2013-04-16'
  That works on my test case
 
  You could also change the where clause to be >= date and < date+1
 
 
 
  -Original Message-
  From: Daevid Vincent [mailto:dae...@daevid.com]
  Sent: Thursday, May 23, 2013 3:56 PM
  To: mysql@lists.mysql.com
  Subject: Bug in BETWEEN same DATETIME
 
  I just noticed what I consider to be a bug; and related, has this
 been
  fixed in later versions of MySQL?
 
  We are using:
  mysql  Ver 14.12 Distrib 5.0.92, for portbld-freebsd8.1 (amd64) using
  5.2
 
  If you use BETWEEN and the same date for both parts (i.e. you want a
  single
  day) it appears that the operator isn't smart enough to consider the
  full day in the cases where the column is a DATETIME
 
  http://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#operator_between
 
  WHERE `transaction_date` BETWEEN '2013-04-16' AND '2013-04-16'
 
  I actually have to format it like this to get results
 
  WHERE `transaction_date` BETWEEN '2013-04-16 00:00:00' AND '2013-04-16 11:59:59'
 
  As it appears that in the first instance it defaults the time to
  00:00:00
  always, as verified by this:
 
  WHERE `transaction_date` BETWEEN '2013-04-16' AND '2013-04-16
 11:59:59'
 
  So,  I think it's probably safe to assume that if someone is using
 the
  BETWEEN on datetime columns, their intent more often than not is to
  get the full 24 hour period, not the 0 seconds it currently pulls by
  default.
 
  I also tried these hacks as per the web page above, but this doesn't
  yield results either
 
  WHERE `transaction_date` BETWEEN CAST('2013-04-16' AS DATE) AND
  CAST('2013-04-16' AS DATE) WHERE `transaction_date` BETWEEN
  CAST('2013-04-16' AS DATETIME) AND CAST('2013-04-16' AS DATETIME)
 
  This one works, but I fail to see how it's any more beneficial than
  using a string without the CAST() overhead?
 
  WHERE `transaction_date` BETWEEN CAST('2013-04-16 00:00:00' AS
  DATETIME) AND
  CAST('2013-04-16 11:59:59' AS DATETIME)
 
  Or is there some other magical incantation that is supposed to be
 used
  (without me manually appending the time portion)?
 
  This e-mail, including attachments, may include confidential and/or
  proprietary information, and may be used only by the person or entity
  to which it is addressed. If the reader of this e-mail is not the
  intended recipient or his or her authorized agent, the reader is
  hereby notified that any dissemination, distribution or copying of
  this e-mail is prohibited. If you have received this e-mail in error,
  please notify the sender by replying to this message and delete this
 e-mail immediately.
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 
 
 
 --
  - michael dykman
  - mdyk...@gmail.com
 
  May the Source be with you.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Bug in BETWEEN same DATETIME

2013-05-23 Thread Rick James
Watch out for CAST(), DATE(), and any other function.  In a WHERE clause, if 
you hide an indexed column inside a function, the index cannot be used for 
optimization.

  INDEX(datetime_col)
  ...
  WHERE DATE(datetime_col) = '2013-01-01'
will not use the index!

The workaround is messy, but worth it (for performance):
  WHERE datetime_col >= '2013-01-01'
    AND datetime_col <  '2013-01-01' + INTERVAL 1 DAY
(or any of a zillion variants)

(Yeah, it seems like the optimizer could do the obvious transformation for you. 
 Hint, hint, Shawn.)
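
A quick way to see the difference is to EXPLAIN both forms (a sketch;
the table is invented, and exact EXPLAIN output varies by version):

  CREATE TABLE t (
    id INT PRIMARY KEY,
    datetime_col DATETIME,
    INDEX(datetime_col)
  );

  EXPLAIN SELECT * FROM t WHERE DATE(datetime_col) = '2013-01-01';
  -- type: ALL   (full table scan; the index is not used)

  EXPLAIN SELECT * FROM t
   WHERE datetime_col >= '2013-01-01'
     AND datetime_col <  '2013-01-01' + INTERVAL 1 DAY;
  -- type: range (the index on datetime_col is used)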

 -Original Message-
 From: shawn green [mailto:shawn.l.gr...@oracle.com]
 Sent: Thursday, May 23, 2013 3:50 PM
 To: mysql@lists.mysql.com
 Subject: Re: Bug in BETWEEN same DATETIME
 
 
 
 On 5/23/2013 4:55 PM, Daevid Vincent wrote:
  I just noticed what I consider to be a bug; and related, has this
 been
  fixed in later versions of MySQL?
 
  We are using:
  mysql  Ver 14.12 Distrib 5.0.92, for portbld-freebsd8.1 (amd64) using
  5.2
 
  If you use BETWEEN and the same date for both parts (i.e. you want a
  single
  day) it appears that the operator isn't smart enough to consider the
  full day in the cases where the column is a DATETIME
 
  http://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#operator_between
 
  WHERE `transaction_date` BETWEEN '2013-04-16' AND '2013-04-16'
 
  I actually have to format it like this to get results
 
  WHERE `transaction_date` BETWEEN '2013-04-16 00:00:00' AND '2013-04-16 11:59:59'
 
 
  From the Fine Manual...
 http://dev.mysql.com/doc/refman/5.0/en/date-and-time-type-conversion.html
 ###
   Conversion of DATE values:
 
  Conversion to a DATETIME or TIMESTAMP value adds a time part of
 '00:00:00' because the DATE value contains no time information.
 ...
   Prior to MySQL 5.0.42, when DATE values are compared with DATETIME
 values, the time portion of the DATETIME value is ignored, or the
 comparison could be performed as a string compare. Starting from MySQL
 5.0.42, a DATE value is coerced to the DATETIME type by adding the time
 portion as '00:00:00'. To mimic the old behavior, use the CAST()
 function to cause the comparison operands to be treated as previously.
 For example:
 
 date_col = CAST(datetime_col AS DATE)
 
 ###
 
 That seems pretty clear to me as not a bug.
 --
 Shawn Green
 MySQL Principal Technical Support Engineer Oracle USA, Inc. - Hardware
 and Software, Engineered to Work Together.
 Office: Blountville, TN
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Mysql server - which filesystem to choose? Is it really that important nowadays?

2013-05-22 Thread Rick James
ext4 does less well with simultaneous IOPs than xfs.

 -Original Message-
 From: Manuel Arostegui [mailto:man...@tuenti.com]
 Sent: Wednesday, May 22, 2013 12:22 AM
 To: Rafał Radecki
 Cc: mysql@lists.mysql.com
 Subject: Re: Mysql server - which filesystem to choose? Is it really
 that important nowadays?
 
 2013/5/22 Rafał Radecki radecki.ra...@gmail.com
 
  Hi All.
 
  I use mysql/perconna/maria on my production CentOS 6 Linux servers. I
  currently try to choose the default filesystem for partitions with
  mysql data. Some time ago (previous dba) reiserfs was the choice but
  now it is not in the kernel and the main author is in prison.
 
  From what I've read xfs and ext4 are valid choices and performance
  benchmarks over the web show that they are comparable (no clear
 winner).
  I've also read that with every new kernel there can be changes in
  performance in every filesystem ( for example
 
  http://gtowey.blogspot.com/2013/02/serious-xfs-performance-
 regression-
  in.html
   ).
 
  From your experiences: which filesystem to choose for a mysql db? Is
  ext4 or xfs better? Or is it more a case of proper filesystem tuning
  to my workload? Any articles worth reading which you can recommend?
 
 
 Hi Rafal,
 
 I guess it really depends on your workload, your HW, kernel etc.
 From my experience, having XFS with lazy-count=1 and kernels 2.6.31.X
 gives better performance in our HW RAID 10 + BBU servers. We do have
 this configuration in around 200 DBs without any stability issue.
 I still have pending to test ext4/xfs with 3.2.X kernels...
 
 Manuel.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Version 5.6.2-m5 Boolean Datatype

2013-05-22 Thread Rick James
In query syntax, TRUE is the same as 1; FALSE is the same as 0.

A minor syntax note:  ENUM('FALSE', 'TRUE') would require quotes when using it.

If you have multiple 'flags', consider the SET datatype.  (Yeah, it is somewhat 
clumsy.)

If you have installed 5.6, simply try BOOL or BOOLEAN.  I suspect (without 
proof) that it works the same as it has for more than a decade.
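
A quick check, if you want to try it yourself (table name invented):

  CREATE TABLE bool_demo (flag BOOLEAN);
  SHOW CREATE TABLE bool_demo;
  -- reports the column as `flag` tinyint(1) DEFAULT NULL,
  -- i.e. BOOL/BOOLEAN is accepted and stored as TINYINT(1)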


 -Original Message-
 From: Wm Mussatto [mailto:mussa...@csz.com]
 Sent: Wednesday, May 22, 2013 1:57 PM
 To: mysql@lists.mysql.com
 Subject: Re: Version 5.6.2-m5 Boolean Datatype
 
 Why not use ENUM with True|False or Y|N?  Only issue is it doesn't throw
 an error if you enter an illegal value, but I don't think I've ever
 flagged the field as NOT NULL.
 On Wed, May 22, 2013 11:32, Darryle Steplight wrote:
  Hey Neil,
   Why not just store it as a TINYINT, that's what I do when I only
 care
  about 0 or 1 values?
 
 
  On Wed, May 22, 2013 at 2:19 PM, Neil Tompkins
  neil.tompk...@googlemail.com
  wrote:
 
  Hi Shawn
 
  I plan in installing the latest MySQL version tomorrow.   Does MySQL
 not
  support Bool eg true and false
 
  Neil
 
  On 22 May 2013, at 19:05, shawn green shawn.l.gr...@oracle.com
 wrote:
 
   Hello Neil,
  
   On 5/22/2013 1:05 PM, Neil Tompkins wrote:
   Hi, Like the link states
  
   For clarity: a TINYINT(1) datatype does NOT ENFORCE a boolean
   value
  data
   entry. For instance, it's still possible to insert a value of 2
  (any
   integer up to the TINYINT max value). I personally don't see the
  added
   value of a 'BOOLEAN' synonym type which in fact behaves unlike a
  boolean
   should.
  
   Has BOOL, BOOLEAN been taken out of MySQL 5.6 ?
  
  
   On Wed, May 22, 2013 at 6:01 PM, Ian Simpson
   i...@it.myjobgroup.co.uk
  wrote:
  
   BOOLEAN is a synonym for TINYINT(1) in MySQL:
  
   http://dev.mysql.com/doc/refman/5.6/en/numeric-type-
 overview.html
  
  
   On 22 May 2013 17:55, Neil Tompkins
   neil.tompk...@googlemail.com
  wrote:
  
   Hi,
  
   I've just created some tables that I designed using the MySQL
  Workbench
   Model.  However, the database type BOOLEAN which was in my
   models
  has
  been
   converted to TINYINT(1);  I'm currently running MySQL Version
  5.6.2-m5 on
   Windows 2008 server.
  
   Any ideas why this has been removed ?
  
  
   This is exactly the same behavior that MySQL has had for over a
  decade.
  Nothing has been added or removed since release 4.1.0 (2003-04-03)
   http://dev.mysql.com/doc/refman/4.1/en/numeric-type-overview.html
   http://dev.mysql.com/doc/refman/4.1/en/news-4-1-x.html
  
   Also, why are you using a pre-release (milestone) version of 5.6
   when
  the full release (GA) versions of 5.6 are available?
   http://dev.mysql.com/doc/relnotes/mysql/5.6/en/
  
   Regards,
   --
   Shawn Green
 
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql



RE: fragmentation in innodb tables

2013-05-21 Thread Rick James
The "fragmented" message is bogus.  It says it to everyone.  Almost no one 
needs to OPTIMIZE their tables.

 -Original Message-
 From: Miguel González [mailto:miguel_3_gonza...@yahoo.es]
 Sent: Tuesday, May 21, 2013 2:03 PM
 To: mysql@lists.mysql.com
 Subject: fragmentation in innodb tables
 
 Dear all,
 
   I'm a newbie in MySQL, so bear with my questions.
  
   I have run mysqltuner.pl and it says I have fragmentation in my
 tables. Searching around I found this script which reports the
 fragmentation in my tables:
 
 
    #!/bin/sh
  
  echo -n "MySQL username: " ; read username
  echo -n "MySQL password: " ; stty -echo ; read password ; stty echo ; echo
  
  mysql -u $username -p$password -NBe "SHOW DATABASES;" | grep -v 'lost+found' | \
  while read database ; do
    mysql -u $username -p$password -NBe "SHOW TABLE STATUS;" $database | \
    while read name engine version rowformat rows avgrowlength datalength \
               maxdatalength indexlength datafree autoincrement createtime \
               updatetime checktime collation checksum createoptions comment ; do
      if [ "$datafree" -gt 0 ] ; then
        fragmentation=$(($datafree * 100 / $datalength))
        echo "$database.$name is $fragmentation% fragmented."
        mysql -u $username -p$password -NBe "OPTIMIZE TABLE $name;" $database
      fi
    done
  done
 
 
 I have run it and reports that several of my innodb tables are
 fragmented
 
 I have read several articles and I'm a bit confused. I have enabled
 innodb_file_per_table from the very beginning
 
 # INNODB #
 
 innodb_log_files_in_group  = 2
 innodb_log_file_size   = 512M
 innodb_flush_log_at_trx_commit = 1
 innodb_file_per_table  = 1
 innodb_buffer_pool_size= 2G
 
 I have run either optimize table and alter table mytable engine=INNODB
 and both commands don't end up shrinking the space in the idb files.
 The script above reports the same fragmentation.
 
 Regards,
 
 Miguel
 
 
 
 
 
 
 
 
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Dropping column/index from MyISAM table increases disk space without calling optimize

2013-05-14 Thread Rick James
Sorry, I can't address your specific question, but I have several other tips, 
some of which may save a lot of space...

USING HASH -- ignored; presumably BTree is used instead.

What Version of MySQL?  5.5(?) can drop an InnoDB (only?) index live.

BTrees sometimes grow after any sort of copy/reorg.  Inserting rows (of data or 
index) in sorted order will produce a more compact BTree.  Random insertion 
will (in theory) lead to about 40% overhead.

Tricking MyISAM into "Rebuild by sorting" instead of "rebuild by keybuffer" 
should take the more compact approach, and probably be faster.

OPTIMIZE (on MyISAM) will squeeze out any wasted space caused by DELETEs or 
diff-length UPDATEs.

pt-online-schema-change uses a TRIGGER to do the work 'live'.

columnF is exactly 8 bytes in MyISAM.  That is 10% of the total?

These are so close to the same that I question they need for all of them:
   UNIQUE KEY `unique2` 
 (`columnR`,`columnB`,`columnA`,`columnN`,`columnL`,`columnM`)
   UNIQUE KEY `unique3` (`columnR`,`columnB`,`columnA`,`columnO`,`columnI`)
   UNIQUE KEY `unique4` (`columnR`,`columnB`,`columnA`,`columnJ`)
Do you need the UNIQUEness constraints?  Or would it suffice to simply say
INDEX(B, A)

Hmmm... I would consider replacing these 8 indexes
u3 RBAOI
u4 RBAJ
u2 RBANLM
k4 RBANIOH
k7  BANOH
k9 RBAINOH
k5 RBAJH
u1 RBNAKOM
With 1:
INDEX(columnB, columnA, columnN)
Even if that doesn't work out, I suspect you could consolidate some.  After 3-4 
fields, an index becomes not very useful, and takes a lot of disk space.

(These look different enough to ignore)
k1 QN
k2 REPB
k3 BFNAR
k6 RNAHIBO
k8 BGNAR

 PARTITION BY RANGE (columnR)
For performance, it is usually better to put columnR at the _end_ of the 
indexes.  Better yet would be to leave columnR off the indexes all together -- 
partition pruning does most of what you need it for.

Since columnR smells like year+week, and each partition has a distinct value of 
it, I think you definitely should not include R in any of the non-unique 
indexes -- this would save ~3 bytes per row per index.  Also, definitely move R 
to the _end_ of the UNIQUE indexes -- this would improve performance, 
especially when you query over a range of columnR.

   `columnQ` datetime DEFAULT NULL,
DATETIME takes 8 bytes and TIMESTAMP takes only 4 (until 5.6); consider 
changing to TIMESTAMP.
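
A sketch of that change, using the column name from the posted schema
(note that TIMESTAMP is limited to the 1970-2038 range):

  ALTER TABLE tableT MODIFY `columnQ` TIMESTAMP NULL DEFAULT NULL;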

Most operations on a PARTITIONed table open _all_ partitions, even if they only 
use _one_.  For that reason, I recommend no more than, say, 50 partitions in a 
table.

 -Original Message-
 From: Michael Finch [mailto:mfi...@brightedge.com]
 Sent: Tuesday, May 14, 2013 12:56 PM
 To: mysql@lists.mysql.com
 Subject: Dropping column/index from MyISAM table increases disk space
 without calling optimize
 
 Hey,
 
 We have a table with a column and index that we don't need anymore, so
 we are trying to find the best way to get rid of them
 
 Path 1) Create a new table with the updated schema (excluding that
 column and index), and insert from the old table into the new table
 Path 2) Alter the original table to drop the column and index
 
 The problem with either of these is that when we are done, the table
 size has grown significantly (~15%).
 
 size of orig table: 2026.5G
 size of new table after inserting all of the data or running the alter: 2473.5M
 ~~flush the table~~
 size of orig table: 2026.5G
 size of new table after inserting all of the data or running the alter: 2292.2M
 ~~optimize the table~~
 size of new table after inserting all of the data or running the alter: 1912.7M
 
 The only way to get the size of the new table below the size of the
 original table is by optimizing it (this holds true for innodb as
 well). Our problem is that since this table is so big, it takes long
 enough just to alter it, let alone optimize it afterwards.
 
 Can anyone explain the best method for us to get rid of the
 column/index and reclaim the maximum amount of disk space? If we really
 need to alter + optimize that's okay, but I'd like to understand why
 that is. Any help is greatly appreciated.
 
 Thanks!!!
 Michael
 
 
 
 
 Here's the schema of the original table. We want to drop columnF and
 key3
 
 
 CREATE TABLE `tableT` (
   `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
   `columnA` int(11) unsigned NOT NULL,
   `columnB` int(11) unsigned NOT NULL,
   `columnC` varchar(200) DEFAULT NULL,
   `columnD` varchar(200) DEFAULT NULL,
   `columnE` varchar(200) DEFAULT NULL,
   `columnF` bigint(20) unsigned NOT NULL DEFAULT '0',
   `columnG` bigint(20) unsigned NOT NULL DEFAULT '0',
   `columnH` tinyint(4) DEFAULT NULL,
   `columnI` smallint(3) DEFAULT NULL,
   `columnJ` smallint(3) DEFAULT NULL,
   `columnK` smallint(3) DEFAULT NULL,
   `columnL` smallint(3) DEFAULT NULL,
   `columnM` smallint(3) DEFAULT '0',
   `columnN` tinyint(4) DEFAULT NULL,
   `columnO` tinyint(4) DEFAULT '0',
   `columnP` tinyint(4) DEFAULT NULL,
   `columnQ` datetime DEFAULT NULL,
   `columnR` mediumint(8) unsigned NOT 

RE: Slow Response -- What Does This Sound Like to You?

2013-05-10 Thread Rick James
   `Query1`.`Facility_Name` in ('Fremont Family Care')
  and  `Query1`.`Appointment_Date` between cast(cast('2011-01-01' as date) as date)
                                       and cast(cast('2013-05-07' as date) as date)
  and  `Query1`.`Appointment_Provider_ID` = 60922;
---

The big problem is 
   FROM ( SELECT ... ) JOIN ( SELECT ... ) ON ...
Neither of those subqueries has an index, so there will be table scans.  The 
solution is to CREATE TEMPORARY TABLE ... SELECT for each one, then add an 
index.
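
A sketch of that approach (names invented; fill in the real subquery
bodies and join column):

  CREATE TEMPORARY TABLE tmp_q1 AS
  SELECT ... ;                                -- body of the first subquery
  ALTER TABLE tmp_q1 ADD INDEX (encounterid);

  CREATE TEMPORARY TABLE tmp_q2 AS
  SELECT ... ;                                -- body of the second subquery
  ALTER TABLE tmp_q2 ADD INDEX (encounterid);

  SELECT ...
  FROM tmp_q1 JOIN tmp_q2 ON tmp_q1.encounterid = tmp_q2.encounterid;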

You SELECT a bunch of rows as Query1, then filter??  Can't you move the 
filtering into the subquery??

There is no need for either CAST in cast(cast('2013-05-07' as date) as date); 
simply use '2013-05-07'.

What does the {} syntax do?? 

Contradictory:
where  `Query1`.`Appointment_Provider_ID` in (9118, 9119, 60922, 9116, 
47495)
  and  `Query1`.`Appointment_Provider_ID`   = 60922;
The IN filter does nothing useful.

I think those changes will make the query run _much_ faster.  If not, provide 
the SHOW CREATE TABLE for the tables being used here, plus EXPLAIN SELECT.

 -Original Message-
 From: Robinson, Eric [mailto:eric.robin...@psmnv.com]
 Sent: Friday, May 10, 2013 11:36 AM
 To: Rick James; Bruce Ferrell; mysql@lists.mysql.com
 Subject: RE: [Suspected Spam][Characteristics] RE: Slow Response --
 What Does This Sound Like to You?
 
 
  1. MyISAM locks _tables_.  That can cause other connections to be
  blocked.  Solution: switch to InnoDB.  Caution:  There are a few
  caveats when switching; see
  https://kb.askmonty.org/en/converting-tables-from-myisam-to-innodb/
 
  2. As mentioned by Shawn, the Query Cache can be more trouble than it
  is worth.  However 90 seconds cannot be blamed on the QC.  Still,
  shrink it or turn it off:
  * If frequently writing to tables, turn it off (type=OFF _and_
 size=0)
  * If less frequently, then decide which queries will benefit, add
  SQL_CACHE to them, set type=DEMAND and size=50M (no larger).
 
  3. Meanwhile, try to make that long query more efficient.
  Can you show it to us, together with SHOW CREATE TABLE, SHOW TABLE
  STATUS, and EXPLAIN ?
 
 
 Thanks for the feedback, Rick.
 
 There are 1200+ tables in the database, so I don't think you want a
 SHOW CREATE TABLE, SHOW TABLE STATUS, EXPLAIN for all of them. :-) The
 query in question is always some variation of the following. From
 looking at this, which table(s) would you like to see this information
 for?
 
 
 # Time: 130507 18:14:26
 # User@Host: site150_DbUser[site150_DbUser] @ cognos08.mycharts.md
 [192.168.10.85] # Query_time: 82  Lock_time: 0  Rows_sent: 1
 Rows_examined: 914386 select
 (mod(dayofweek(`Query1`.`Appointment_Date`)+7-1,7)), {fn CONCAT({fn
 CONCAT({fn CONCAT(cast(extract(hour from
 `Time_Difference_Query`.`Created_Date`) as char(25)), ':')},
 cast(extract(minute from `Time_Difference_Query`.`Created_Date`) as
 char(25)))}, ':')}, `Time_Difference_Query`.`Created_Date`,
 `Query1`.`Appointment_Provider_Name` from (select distinct
 `EMR_ENCOUNTER`.`encType` as Encounter_Type , case  when
 `EMR_ENCOUNTER`.`encLock` = 0 then 'UnLocked' else 'Locked' end  as
 Chart_Lock_Status , `EMR_ENCOUNTER`.`notesDoneTime` as
 Notes_Done_Time , `EMR_ENCOUNTER`.`dateOut` as Notes_Done_Date ,
 `EMR_ENCOUNTER`.`timeIn` as Appointments_Checked_In ,
 `EMR_ENCOUNTER`.`timeOut` as Appointments_Checked_Out ,
 `EMR_ENCOUNTER`.`depTime` as Appointments_Departure_Time ,
 `EMR_ENCOUNTER`.`arrivedTime` as Appointments_Arrived_Time ,
 `EMR_ENCOUNTER`.`endTime` as Appointment_End_Time ,
 `EMR_ENCOUNTER`.`startTime` as Appointment_Start_Time ,
 `EMR_ENCOUNTER`.`date` as Appointment_Date ,
 `EMR_ENCOUNTER`.`encounterID` as Encounter_ID ,
 `EDI_FACILITIES`.`Name` as Facility_Name ,
 `APPOINTMENT_PROVIDER`.`uid` as Appointment_Provider_ID , {fn
 CONCAT({fn CONCAT({fn CONCAT({fn
 CONCAT(`APPOINTMENT_PROVIDER`.`ulname`, ', ')},
 `APPOINTMENT_PROVIDER`.`ufname`)}, ' ')},
 `APPOINTMENT_PROVIDER`.`uminitial`)} as Appointment_Provider_Name
 from (`enc` `EMR_ENCOUNTER` LEFT OUTER JOIN `edi_facilities`
 `EDI_FACILITIES` on `EMR_ENCOUNTER`.`facilityId` =
 `EDI_FACILITIES`.`Id`) LEFT OUTER JOIN (`doctors` `APPOINTMENT_DOCTOR`
 INNER JOIN `users` `APPOINTMENT_PROVIDER` on
 `APPOINTMENT_DOCTOR`.`doctorID` = `APPOINTMENT_PROVIDER`.`uid`) on
 `EMR_ENCOUNTER`.`doctorID` = `APPOINTMENT_DOCTOR`.`doctorID` where
 `EMR_ENCOUNTER`.`encType` = 2 and `EMR_ENCOUNTER`.`date` between
 cast('2011-01-01' as date) and cast('2013-05-07' as date) and
 `EMR_ENCOUNTER`.`patientID`  8663 and `EMR_ENCOUNTER`.`VisitType` 
 'PTDASH' and `EMR_ENCOUNTER`.`deleteFlag` = 0 and
 `APPOINTMENT_PROVIDER`.`UserType` = 1 and
 `APPOINTMENT_PROVIDER`.`delFlag` = 0 and `EDI_FACILITIES`.`DeleteFlag`
 = 0) `Query1` LEFT OUTER JOIN (select distinct
 `Addressed_Query`.`moddate` as Locked_Date ,
 `Created_Query`.`moddate` as Created_Date ,
 `Created_Query`.`encounterid` as encounterid ,
 `Created_Query`.`reason` as reason

RE: Triggers

2013-05-10 Thread Rick James
Triggers use whatever code you put in them.

Recommendations for what?

 -Original Message-
 From: Aastha [mailto:aast...@gmail.com]
 Sent: Friday, May 10, 2013 11:55 AM
 To: mysql@lists.mysql.com
 Subject: Triggers
 
 If triggers use complex business rules and large transactions, what
 would the recommendations be?
 I need three possible ways.
 
 Thanks,

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Slow Response -- What Does This Sound Like to You?

2013-05-09 Thread Rick James
MyISAM?  Or InnoDB?
Lock_time perhaps applies only to table locks on MyISAM.

SHOW ENGINE InnoDB STATUS;
You may find some deadlocks.

Is Replication involved?

Anyone doing an ALTER?

 -Original Message-
 From: Robinson, Eric [mailto:eric.robin...@psmnv.com]
 Sent: Thursday, May 09, 2013 1:58 PM
 To: mysql@lists.mysql.com
 Subject: Slow Response -- What Does This Sound Like to You?
 
 We have a situation where users complain that the system periodically
 freezes for 30-90 seconds. We check the slow query logs and find that
 one user issued a complex query that did indeed take 30-90 seconds to
 complete. However, NO slow queries are recorded for the other 50 users,
 before, during, or after the freeze. Note that the complex query in
 question always shows: Lock_time: 0.
 
 Q: What conditions could cause single query to lock up a database for a
 while for all users (even though it shows lock time: 0)  but no other
 slow queries would show in the logs for any other users who are hitting
 the database at the same time?
 
 OS: RHEL3 x64
 CPU: 8 x 2.9GHz Xeon
 RAM: 32GB
 Disk: RAID 5 (6 x 512GB SSD)
 MySQL: 5.0.95 x64
 Engine: MyISAM
 
 --
 Eric Robinson
 
 
 
 
 
 
 
 Disclaimer - May 9, 2013
 This email and any files transmitted with it are confidential and
 intended solely for mysql@lists.mysql.com. If you are not the named
 addressee you should not disseminate, distribute, copy or alter this
 email. Any views or opinions presented in this email are solely those
 of the author and might not represent those of Physicians' Managed Care
 or Physician Select Management. Warning: Although Physicians' Managed
 Care or Physician Select Management has taken reasonable precautions to
 ensure no viruses are present in this email, the company cannot accept
 responsibility for any loss or damage arising from the use of this
 email or attachments.
 This disclaimer was added by Policy Patrol:
 http://www.policypatrol.com/

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Slow Response -- What Does This Sound Like to You?

2013-05-09 Thread Rick James
1. MyISAM locks _tables_.  That can cause other connections to be blocked.  
Solution: switch to InnoDB.  Caution:  There are a few caveats when switching; 
see
https://kb.askmonty.org/en/converting-tables-from-myisam-to-innodb/

2. As mentioned by Shawn, the Query Cache can be more trouble than it is worth. 
 However 90 seconds cannot be blamed on the QC.  Still, shrink it or turn it 
off:
* If frequently writing to tables, turn it off (type=OFF _and_ size=0)
* If less frequently, then decide which queries will benefit, add SQL_CACHE to 
them, set type=DEMAND and size=50M (no larger).
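
A sketch of the DEMAND setup (values illustrative):

  # in my.cnf
  query_cache_type = DEMAND
  query_cache_size = 50M

  -- then, only in the queries that should be cached:
  SELECT SQL_CACHE col1, col2 FROM t WHERE ...;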

3. Meanwhile, try to make that long query more efficient.  Can you show it to 
us, together with SHOW CREATE TABLE, SHOW TABLE STATUS, and EXPLAIN ?

 -Original Message-
 From: Bruce Ferrell [mailto:bferr...@baywinds.org]
 Sent: Thursday, May 09, 2013 6:05 PM
 To: mysql@lists.mysql.com
 Subject: Re: Slow Response -- What Does This Sound Like to You?
 
 On 05/09/2013 03:25 PM, Robinson, Eric wrote:
 
  -Original Message-
  From: Robinson, Eric [mailto:eric.robin...@psmnv.com]
  Sent: Thursday, May 09, 2013 1:58 PM
  To: mysql@lists.mysql.com
  Subject: Slow Response -- What Does This Sound Like to You?
 
  We have a situation where users complain that the system
  periodically
  freezes for 30-90 seconds. We check the slow query logs and
  find that
  one user issued a complex query that did indeed take 30-90
  seconds to
  complete. However, NO slow queries are recorded for the other 50
  users, before, during, or after the freeze. Note that the complex
  query in question always shows: Lock_time: 0.
 
  Q: What conditions could cause single query to lock up a
  database for
  a while for all users (even though it shows lock time: 0)  but no
  other slow queries would show in the logs for any other
  users who are
  hitting the database at the same time?
 
  OS: RHEL3 x64
  CPU: 8 x 2.9GHz Xeon
  RAM: 32GB
  Disk: RAID 5 (6 x 512GB SSD)
  MySQL: 5.0.95 x64
  Engine: MyISAM
 
 
 
  MyISAM?  Or InnoDB?  Lock_time perhaps applies
  only to table locks on MyISAM.
 
  SHOW ENGINE InnoDB STATUS;
  You may find some deadlocks.
 
  Is Replication involved?
 
  Anyone doing an ALTER?
 
 
 
  MyISAM, no replication involved, and nobody is altering the database.
 This happens whenever people run certain reports.
 
 
  --Eric
 
 One thing I'd look at to start is the error log, if enabled.  After
 that, I'd look at running mysqltuner to get a look at statistics before
 and after one of these events.  I know there are those who prefer the
 Percona toolkit, but those pull lots of raw stats and offer little in
 terms of suggestions... Unless you wish to engage Percona.
 
 Be aware, there are two versions of mysqltuner.  The one I use is found
 at http://mysqltuner.pl.  I know, it's old, but it at least runs.  The
 newer one doesn't seem to have been brought to completion.
 
 You might want to enable the slow query option that logs queries that
 execute without indexes.  They can be real killers.  Reports that use
 views often cause this as views become complex joins under the hood
 that can easily miss your indexes resulting in full table scans.
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Chain Replication QUestion

2013-05-01 Thread Rick James
 1) Enable log-bin on master2 (slave that will be converted to a master)
That does not 'convert' it -- it makes it both a Master and a Slave (a Relay).

The CHANGE MASTER is probably correct, but it is difficult to find the right 
spot.
A simple way is to 
1. Stop all writes everywhere.
2. Wait for replication to catchup everywhere.
3. FLUSH LOGS everywhere.
4. Now CHANGE MASTER on the Slave to the start (POS=0 or 4) of the freshly 
created binlog in the machine that is the Slave's new Master.
5. Start writes.
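
A sketch of step 4 (host, log file name, and position are placeholders):

  STOP SLAVE;
  CHANGE MASTER TO
    MASTER_HOST     = 'master2.example.com',  -- hypothetical Relay host
    MASTER_LOG_FILE = 'mysql-bin.000042',     -- the binlog created by FLUSH LOGS
    MASTER_LOG_POS  = 4;                      -- start of that file
  START SLAVE;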


 -Original Message-
 From: Richard Reina [mailto:gatorre...@gmail.com]
 Sent: Wednesday, May 01, 2013 6:00 AM
 To: Manuel Arostegui
 Cc: mysql@lists.mysql.com
 Subject: Re: Chain Replication QUestion
 
 Hello Manuel,
 
 Thank you for your reply. Could I do the following?:
 
 1) Enable log-bin on master2 (slave that will be converted to a master)
 2) Enable log-slave-updates on master2
 3) Execute CHANGE MASTER to on another existing slave so that it gets
 it's updates from master2 instead of master1.
 
 Thanks for the help thus far.
 
 
 2013/4/30, Manuel Arostegui man...@tuenti.com:
  2013/4/30 Richard Reina gatorre...@gmail.com
 
  I have a few slaves set up on my local network that get updates from
  my main mysql database master. I was hoping to turn one into a
 master
  while keeping it a slave so that I can set up a chain.  Does anyone
  know where I can find a how to or other documentation for this
  specific task?
 
 
  It is quite easy:
 
  Enable log-slave-updates in the slave you want to be a master.
  Do a mysqldump -e --master-data=2 and put that mysqldump in the
 future
  slaves. Take a look at the first lines of the mysqldump where you'll
  find the position and logfile those slaves need to start the
 replication from.
  You can also use xtrabackup if you like.
 
  Manuel.
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Adding language support to tables

2013-05-01 Thread Rick James
You want to say either "Germany" or "Deutschland", depending on a 
language_code somewhere?

Remove the strings you have in those tables now; add about 4 new tables, each 
one paralleling the existing tables, but more rows and these columns:
* id (the PK of the existing table)
* language code (ENUM or TINYINT UNSIGNED indicating the language)
* spelling (utf8 version for the language in question)

There may be better ways to do your task, but see how this feels.  Sketch out 
the CREATE TABLEs, INSERTs and SELECTs.
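
For instance, a sketch of one such parallel table for country (table and
column names invented):

  CREATE TABLE country_name (
    country_id INT NOT NULL,          -- PK of the existing country table
    lang TINYINT UNSIGNED NOT NULL,   -- or an ENUM of language codes
    spelling VARCHAR(255) NOT NULL,
    PRIMARY KEY (country_id, lang)
  ) ENGINE = ndbcluster DEFAULT CHARACTER SET = utf8;

  INSERT INTO country_name VALUES (1, 1, 'Germany'), (1, 2, 'Deutschland');

  SELECT spelling FROM country_name WHERE country_id = 1 AND lang = 2;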

(Adding N columns for N languages is a maintenance and coding nightmare.  
Tomorrow, you will need N+1 languages.)

I would jettison the id in:
  `country_id` INT NOT NULL ,
  `country_code` CHAR(2) NOT NULL ,
and use country_code as the PRIMARY KEY, and make it ASCII, not UTF8.  That 
would turn the 4-byte id into a 2-byte string.

I gather you are using an new-enough NDB so that FOREIGN KEYs are implemented?

 -Original Message-
 From: Neil Tompkins [mailto:neil.tompk...@googlemail.com]
 Sent: Wednesday, May 01, 2013 5:53 AM
 To: [MySQL]
 Subject: Adding language support to tables
 
 Hi,
 
 I've the following database structure of 4 tables for geographical
 information
 
 
 
 CREATE  TABLE IF NOT EXISTS `mydb`.`country` (
 
   `country_id` INT NOT NULL ,
 
   `country_code` CHAR(2) NOT NULL ,
 
   `name` VARCHAR(255) NOT NULL ,
 
   PRIMARY KEY (`country_id`) ,
 
   UNIQUE INDEX `country_code_UNIQUE` (`country_code` ASC) )
 
 ENGINE = ndbcluster
 
 DEFAULT CHARACTER SET = utf8
 
 COLLATE = utf8_unicode_ci;
 
 
 
 
 
 CREATE  TABLE IF NOT EXISTS `mydb`.`region` (
 
   `region_id` INT NOT NULL ,
 
   `name` VARCHAR(255) NOT NULL ,
 
   `country_code` CHAR(2) NOT NULL ,
 
   PRIMARY KEY (`region_id`) ,
 
   INDEX `FK_country_code` (`country_code` ASC) ,
 
   CONSTRAINT `FK_country_code`
 
 FOREIGN KEY (`country_code` )
 
 REFERENCES `mydb`.`country` (`country_code` )
 
 ON DELETE NO ACTION
 
 ON UPDATE NO ACTION)
 
 ENGINE = ndbcluster
 
 DEFAULT CHARACTER SET = utf8
 
 COLLATE = utf8_unicode_ci;
 
 
 
 
 
 CREATE  TABLE IF NOT EXISTS `mydb`.`city` (
 
   `city_id` INT NOT NULL ,
 
   `region_id` INT NOT NULL ,
 
   `name` VARCHAR(255) NOT NULL ,
 
   `latitude` DOUBLE NOT NULL ,
 
   `longitude` DOUBLE NOT NULL ,
 
   PRIMARY KEY (`city_id`) ,
 
   INDEX `FK_region_id` (`region_id` ASC) ,
 
   CONSTRAINT `FK_region_id`
 
 FOREIGN KEY (`region_id` )
 
 REFERENCES `mydb`.`region` (`region_id` )
 
 ON DELETE NO ACTION
 
 ON UPDATE NO ACTION)
 
 ENGINE = ndbcluster
 
 DEFAULT CHARACTER SET = utf8
 
 COLLATE = utf8_unicode_ci;
 
 
 
 
 
 CREATE  TABLE IF NOT EXISTS `mydb`.`district` (
 
   `district_id` INT NOT NULL ,
 
   `city_id` INT NOT NULL ,
 
   `name` VARCHAR(255) NOT NULL ,
 
   `latitude` DOUBLE NOT NULL ,
 
   `longitude` DOUBLE NOT NULL ,
 
   PRIMARY KEY (`district_id`) ,
 
   INDEX `FK_city_id` (`city_id` ASC) ,
 
   CONSTRAINT `FK_city_id`
 
 FOREIGN KEY (`city_id` )
 
 REFERENCES `mydb`.`city` (`city_id` )
 
 ON DELETE NO ACTION
 
 ON UPDATE NO ACTION)
 
 ENGINE = ndbcluster
 
 DEFAULT CHARACTER SET = utf8
 
 COLLATE = utf8_unicode_ci;
 
 
 
 
 Basically I'm wanting to add language support for each table to
 translate the name field in each instance.  All other information will
 remain the same.
 
 Therefore is my best approach to add some sort of look-up table with
 the translation...?
 
 Thanks
 Neil

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: how to list record in column (instead of a row)

2013-04-30 Thread Rick James
Or 
SELECT ... \G
(replace ';' with '\G')

 -Original Message-
 From: h...@tbbs.net [mailto:h...@tbbs.net]
 Sent: Wednesday, April 24, 2013 9:55 AM
 To: mysql list
 Subject: Re: how to list record in column (instead of a row)
 
  2013/04/24 09:06 -0700, Rajeev Prasad 
 this table has many columns and only 1 record. select * from table;
 generates an unreadable list. how can i list the record as in two
 columns? (column name and its value)? i looked at UNPIVOT, but could
 not get it to work.
 SQL> select * from table UNPIVOTE INCLUDE NULLS;
  select * from table UNPIVOTE INCLUDE NULLS
 * ERROR at line 1: ORA-00933: SQL command not properly ended
 
 From MySQL client, if started with flag '-G':
   select * from table
   ego
 
 For the same program there is flag '--auto-vertical-output'.
 
 But it seems you are using Oracle; this is MySQL list.
 In any case, you wrote both 'UNPIVOT' and 'UNPIVOTE'; I suspect the
 former is better.
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Performance of delete using in

2013-04-30 Thread Rick James
Please provide
   SHOW CREATE TABLE cdsem_event_message_idx \G
   EXPLAIN SELECT * FROM cdsem_event_message_idx where event_id in () \G
   SHOW VARIABLES LIKE 'autocommit';

These can impact DELETE speed:
   * secondary indexes
   * whether event_id is indexed.
   * disk type and speed -- ordinary SATA vs RAID vs SSD vs ...
   * ENGINE -- SHOW CREATE will provide that info
   * MySQL version -- perhaps IN optimization has improved over time

Rule of Thumb:  100 iops.  Hence 1500 deletes is likely to take 15 seconds if 
they are randomly place, no secondary keys, and on non-RAIDed SATA drive.

DELETEing one row at a time incurs network and parsing overhead, so it is not 
surprising that it is slower.  That seems like a lot of overhead, so I would 
guess you are using InnoDB and have most of autocommit=1 and sync_binlog=1 and 
innodb_flush_log_at_trx_commit=1
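
Denis's temp-table suggestion (quoted below) would look something like
this -- a sketch with an invented temp-table name:

  CREATE TEMPORARY TABLE ids_to_delete (event_id INT PRIMARY KEY);
  INSERT INTO ids_to_delete VALUES (1), (2), (3);   -- ...the ~1,500 ids

  DELETE m
  FROM cdsem_event_message_idx AS m
  JOIN ids_to_delete AS d  ON d.event_id = m.event_id;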

 -Original Message-
 From: Denis Jedig [mailto:d...@syneticon.net]
 Sent: Wednesday, April 24, 2013 10:50 PM
 To: mysql@lists.mysql.com
 Subject: Re: Performance of delete using in
 
 Larry,
 
 On 25.04.2013 02:19, Larry Martell wrote:
 
  delete from cdsem_event_message_idx where event_id in ()
 
  The in clause has around 1,500 items in it.
 
 Consider creating a temporary table, filling it with your IN
 values and joining it to cdsem_event_message_idx ON event_id for
 deletion.
 
 Kind regards,
 
 Denis Jedig
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Rookie question

2013-04-30 Thread Rick James
OR would not show dups.

WHERE duespaid AND cat1 OR cat2
means
WHERE (duespaid AND cat1) OR cat2
That is probably not what you wanted -- add parens like
WHERE duespaid AND (cat1 OR cat2 ...)

But...

That is not a good way to build a schema.  What will happen when you add 
category9?

Plan A:  Have another table that says which categories a user has.  There would 
be 0-8 rows in this new table for each user.
SELECT d.* FROM directory d JOIN categories c ON d.userid = c.userid
WHERE c.category IN (1,2,3,4,5,6,7,8);

Plan B:  Use a SET as a single column for all the categories.  Then
AND (categories  x'ff') != x'00'
would check that at least one bit is on in the bottom 8 bits of that SET.  
(TINYINT UNSIGNED would work identically.  Change to SMALLINT UNSIGNED for 9-16 
categories; etc.)
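
A sketch of Plan B (names invented):

  CREATE TABLE directory (
    userid INT PRIMARY KEY,
    duespaid TINYINT NOT NULL,
    categories SET('cat1','cat2','cat3','cat4',
                   'cat5','cat6','cat7','cat8') NOT NULL
  );

  SELECT * FROM directory
  WHERE duespaid
    AND (categories & x'ff') != x'00';  -- at least one of the 8 bits set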

There is probably a Plan C.

 -Original Message-
 From: Gary Smith [mailto:li...@l33t-d00d.co.uk]
 Sent: Monday, April 29, 2013 10:43 AM
 To: mysql@lists.mysql.com
 Subject: Re: Rookie question
 
 On 29/04/2013 18:29, Patrice Olivier-Wilson wrote:
  Hi all:
 
  I have a membership directory where folks can belong to more than one
 category. But all folks do not qualify for a category. So I want to
 list folks who have qualified in a category but not have them repeat.
 So if member 1 is in cat 3 and cat 5, I want their name only to show up
 once. Here's what I have so far, but it shows a member listed more than
 once.
 select distinct ?
 
 Gary
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: MySQL Cluster or MySQL Cloud

2013-04-30 Thread Rick James
See also Percona XtraDB Cluster.

Will your nodes be in the same physical location?  If so, what about floods, 
earthquakes, etc?

Clouds are ephemeral; data wants to persist

 -Original Message-
 From: Andrew Morgan [mailto:andrew.mor...@oracle.com]
 Sent: Tuesday, April 30, 2013 12:36 AM
 To: Neil Tompkins
 Cc: [MySQL]
 Subject: RE: MySQL Cluster or MySQL Cloud
 
 Hi Neil,
 
  If you use MySQL Cluster then you have synchronous replication between
 the 2 data nodes which means that if one should fail you're guaranteed
 that the other contains the effects of every committed transaction and
 that the change has already been applied and so there is no delay while
 relay logs are applied before the automatic failover kicks in - which
 is why it can take less than a second.
 
  You also have a good scale-out story with MySQL Cluster as you can
 just continue to add more nodes (256 in total, 48 of which can be data
 nodes) without having to worry about partitioning, failover, etc.
 
 Regards, Andrew.
 
  -Original Message-
  From: Neil Tompkins [mailto:neil.tompk...@googlemail.com]
  Sent: 29 April 2013 14:50
  To: Andrew Morgan
  Cc: [MySQL]
  Subject: Re: MySQL Cluster or MySQL Cloud
 
  Hi Andrew,
 
  Thanks for your response and the useful white paper.  I've read the
  document in great detail.  I'm looking for the best up time possible
  for my application and am still struggling to see the major
  differences with MySQL cluster compared to MySQL in the Cloud on
  multiple servers; apart from MySQL Cluster being much better solution
  for automatic failover including IP failover.
 
  Regards, Neil
 
 
  On Mon, Apr 29, 2013 at 8:47 AM, Andrew Morgan
  andrew.mor...@oracle.comwrote:
 
   Hi Neil,
  
I hate just sending people off to white papers but you might get
   some good insights by taking a look at the MySQL Guide to High
   Availability Solutions paper -
   http://www.mysql.com/why-mysql/white-papers/mysql-guide-to-high-
  availa
   bility-solutions/
  
   Regards, Andrew.
  
   Andrew Morgan - MySQL High Availability Product Management
   andrew.mor...@oracle.com @andrewmorgan www.clusterdb.com
  
-Original Message-
From: Neil Tompkins [mailto:neil.tompk...@googlemail.com]
Sent: 27 April 2013 23:28
To: [MySQL]
Subject: Fwd: MySQL Cluster or MySQL Cloud
   
 If deploying MySQL in the Cloud with two MySQL servers with
 master to
master replication I have a good failover solution.

 Whats the different in terms of availability if we opted for
 MySQL
   Cluster
instead ?
   
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql
   
  
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Long integer constant problem in views

2013-04-30 Thread Rick James
  WHERE id = UNHEX('36a461c81cab40169791f49ad65a3728')
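
Applied to the view from the question, that would be (same hex string,
just passed through UNHEX() so it compares as the full 16 bytes):

  CREATE OR REPLACE VIEW foo AS
    SELECT *
    FROM mytable
    WHERE id = UNHEX('36a461c81cab40169791f49ad65a3728');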

 -Original Message-
 From: Martin Koch [mailto:m...@issuu.com]
 Sent: Tuesday, April 30, 2013 8:18 AM
 To: mysql@lists.mysql.com
 Subject: Long integer constant problem in views
 
 Hi List
 
 I have a table with a primary key with type binary(16) and a few
 columns.
 I'd trying to create a view that looks up all rows with a particular
 key, i.e. something like
 
 CREATE OR REPLACE VIEW foo AS
   SELECT *
   FROM mytable
   WHERE id = X'36a461c81cab40169791f49ad65a3728';
 
 I use the mysql command line client to create the view. When I now
 inspect the view using, say, mysql workbench, the id has been mangled
 in the where clause which now reads
 
 WHERE (`mytable`.`id` = 0x9791f49ad65a37)
 
 I then tried expressing the id as something that will fit within 8
 bytes
 
 WHERE id = X'36a461c81cab4016' << 16 | X'9791f49ad65a3728';
 
 Unfortunately, this makes the query much slower.
 
 I have now worked around this in another way, but I'd like to know if
 there is a way of doing this?
 
 Thanks,
 /Martin Koch

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Troubleshoot excessive memory usage in InnoDB

2013-04-29 Thread Rick James
Memory leaks are unlikely, but possible.  Upgrade to a newer version.

Killing threads that say Sleep _may_ free up some memory, but unlikely to be 
more than even 1MB each.  It _may_ cause grief for the developers, if they 
haven't bulletproofed their code enough to handle lost connection.  Mostly 
that frees up
thread_stack= 192K

Once you have followed the advice in
  http://mysql.rjweb.org/doc.php/memory
there is not much more that an infrastructure guy can do.  I have roots in 
many sides of this issue.  Once I have tuned a system, I turn to the queries, 
schema, overall architecture, etc.

Some caches act like there is a memory leak.  What happens is that they grow as 
needed, up to some specified limit.  This is especially visible for 
key_buffer_size.

Query_cache_size = 256M may be hurting performance; I recommend no more than 
50M.  (The link explains.)

Until the system starts swapping, there should be no problem with the growing 
memory usage.  At that point, performance will tank.  The quick fix is to 
decrease innodb_buffer_pool_size and/or key_buffer_size.

If you provide SHOW GLOBAL STATUS and SHOW VARIABLES, I can look for other 
issues.

 -Original Message-
 From: Denis Jedig [mailto:d...@syneticon.net]
 Sent: Saturday, April 20, 2013 2:16 AM
 To: mysql@lists.mysql.com
 Subject: Re: Troubleshoot excessive memory usage in InnoDB
 
 19.04.2013 23:39, Ilya Kazakevich:
 
  Try to use tuning-primer.sh: this scripts reads your variables  and
  prints memory size you need for that.
 
 I tried that. The results are inconspicious:
 
 MEMORY USAGE
 Max Memory Ever Allocated : 5.27 G
 Configured Max Per-thread Buffers : 1.92 G
 Configured Max Global Buffers : 5.15 G
 Configured Max Memory Limit : 7.07 G
 Physical Memory : 22.98 G
 Max memory limit seem to be within acceptable norms
 
 Although the logic behind the tuning primer script is rather simple
 and I understand predicting the memory usage for MySQL is much harder:
 http://www.mysqlperformanceblog.com/2012/03/21/troubleshooting-mysql-
 memory-usage/
 
 20.04.2013 00:26, Rick James:
 
   What's the STATUS value of Threads_running?  If it really is  
 ~60-100 connection threads, then there could be any of a few   temp
 allocations for the queries.  Some allocations are   per-subquery.
 
 Usually around 2-4. I also tried checking if killing / resetting
 existing (idle) connections would significantly reduce memory usage
 when mysqld has reached ~20 GB - it would not, so this is either not
 related to connection states or the memory is leaking from there in a
 way which would be unaffected by closing the connection.
 
   Is the system I/O bound?  Or CPU bound?  Or neither?
 
 Neither - the system has plenty of headroom for both. The data working
 set easily fits into the RAM, the amount of UPDATEs is negligible
 (resulting in < 100 write requests per second for the I/O subsystem).
 1-minute load average is 2-3 under normal
 (non-swapping) conditions with 6 CPU cores available.
 
   I recommend you optimize the queries.
 
 I cannot do much about it. I am the infrastructure guy who is fixing
 the obviously broken DBMS. What I still cannot figure out is if the
 behavior is due to a misconfiguration or a regression / bug to file.
 And MySQL counters are not exactly helping - it is completely opaque to
 me where the memory is going.
 
 --
 Denis Jedig
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Troubleshoot excessive memory usage in InnoDB

2013-04-19 Thread Rick James
What's the STATUS value of Threads_running?  If it really is ~60-100 
connection threads, then there could be any of a few temp allocations for the 
queries.  Some allocations are per-subquery.

5.6 has a lot of new tricks for optimizing certain subqueries -- such as 
testing out all possible indexes, then creating the optimal one.

Is the system I/O bound?  Or CPU bound?  Or neither?

I recommend you optimize the queries.  Provide us with EXPLAIN for the query 
you see most often in SHOW PROCESSLIST, together with SHOW TABLE STATUS and 
SHOW CREATE TABLE.  The solution may be as easy as adding an index or turning a 
subquery into a JOIN.

Granted, that would not help to nail down the suspected memory leak.

 -Original Message-
 From: Ilya Kazakevich [mailto:ilya.kazakev...@jetbrains.com]
 Sent: Friday, April 19, 2013 2:40 PM
 To: 'Denis Jedig'; mysql@lists.mysql.com
 Subject: RE: Troubleshoot excessive memory usage in InnoDB
 
 Hello,
 
 Try to use tuning-primer.sh: this scripts reads your variables  and
 prints memory size you need for that.
 
 Here is example of its output:
 MEMORY USAGE
 Max Memory Ever Allocated : 2.86 G
 Configured Max Per-thread Buffers : 1.80 G
 Configured Max Global Buffers : 2.10 G
 Configured Max Memory Limit : 3.91 G
 Physical Memory : 5.82 G
 
 I am not sure if it works correctly with 5.6
 
 
 Ilya
 
 -Original Message-
 From: Denis Jedig [mailto:d...@syneticon.net]
 Sent: Saturday, April 20, 2013 1:17 AM
 To: mysql@lists.mysql.com
 Subject: Troubleshoot excessive memory usage in InnoDB
 
 Hi all.
 
 In a specific MySQL installation of 5.6.10 using InnoDB tables, I am
 observing unusual memory consumption patterns. The memory usage is
 growing constantly - even beyond the physical memory limits. The
 entire
 on-disk storage is 41 GB (uncompressed), yet memory usage is happily
 growing to values larger than 50 GB.
 
 The databases mainly experience read load with complex queries and
 subSELECTs running ~60-100 connection threads.
 
 Although the docs state that there should be no memory leaks, this
 case certainly looks like one at first glance.
 
 http://dev.mysql.com/doc/refman/5.6/en/memory-use.html suggests that
 temporary in-memory tables would be used  for this purpose so I
 suspected unfreed temporary tables to be the culprit. But memory usage
 growth rates did not change significantly even after lowering
 tmp_table_size to 2M (from 64M). Also, I have been unable to find a
 way
 to determine the size of in-memory temporary tables at any given time.
 
 Some of the STATUS counters:
 
 | Com_select              | 424614 |
 | Com_update              | 3444   |
 | Created_tmp_disk_tables | 1716   |
 | Created_tmp_files       | 43     |
 | Created_tmp_tables      | 4002   |
 | Uptime                  | 5112   |
 
 The total number of tables over all databases is 1370. my.cnf contains
 the following memory-related values:
 
 max_allowed_packet  = 16M
 thread_stack= 192K
 thread_cache_size   = 8
 max_connections= 1000
 innodb_buffer_pool_size = 5000M
 innodb_log_file_size= 256M
 innodb_flush_method = O_DIRECT
 query_cache_limit   = 1M
 query_cache_size= 256M
 join_buffer_size= 256k
 tmp_table_size  = 2M
 max_heap_table_size = 64M
 read_buffer_size= 1M
 ft_min_word_len = 3
 open_files_limit= 1
 
 A replication slave of this very host is running 5.6.10 with MyISAM
 tables and the mysqld process does not exceed 1 GB in memory
 utilization even after several hours of operation under similar load.
 
 I have posted a question to
 http://dba.stackexchange.com/questions/40413 which I will update
 with
 further information as I get it.
 
 Any hints on how to hunt the resource hog greatly appreciated,
 --
 Denis Jedig
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Measure query speed and InnoDB pool

2013-04-17 Thread Rick James
Run your query twice; take the second time.  

One more thing -- avoid the Query cache.  That could lead to really bogus 
timings.
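
One way to keep a timing run out of the Query cache (a sketch):

  SELECT SQL_NO_CACHE ... FROM yourtable WHERE ...;  -- result is not cached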

 Yes, but I need cache to be > my database size to prevent other pages from 
 pushing out pages for my query, right?

Well, yes and no.  If the cache is big enough, there won't be any thrashing.
If the working set is smaller than the cache, then there won't be any 
thrashing.  That is, if you don't access all the data/index blocks, there could 
be room for everything that is needed (the working set).
I often see, say, 100GB on disk and only 5GB of cache, yet the system is 
humming along fine -- the working set is < 5GB and/or the accesses to other 
blocks are infrequent enough so that it is not a big issue.

 I speak about query optimization in general

General tips, many relating to optimization:
http://mysql.rjweb.org/doc.php/ricksrots
Quick lesson in compound indexes (something that novices don't understand -- 
and a significant optimization principle):
http://mysql.rjweb.org/doc.php/index1

 I will increase it now.

I hope you are referring to increasing RAM.

 But I will need to disable swapping also to prevent my OS from swapping out 
 InnoDB pages.

NO!  Don't disable swapping; avoid swapping.  Disabling could cause a crash 
or other nasties.  Avoiding means decreasing the tunables so that mysqld does 
not need to be swapped.
This gives the main tunables:
http://mysql.rjweb.org/doc.php/memory
Decrease each as much as practical for your situation.  (For example, change 
max_connections from 200 down to 5 -- assuming you don't need more than 5 
simultaneous connections.)  innodb_buffer_pool_size is probably the biggest 
memory consumer, so it is the easiest way to shrink mysqld's footprint.
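
A sketch of the shrink-to-fit idea in my.cnf (values illustrative, for a
small box):

  max_connections         = 5
  innodb_buffer_pool_size = 800M
  key_buffer_size         = 20M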


 -Original Message-
 From: Ilya Kazakevich [mailto:ilya.kazakev...@jetbrains.com]
 Sent: Wednesday, April 17, 2013 8:05 AM
 To: Rick James
 Cc: 'MySQL'
 Subject: RE: Measure query speed and InnoDB pool
 
 Hello Rick,
 
 Run your query twice; take the second time.  For most queries the first
 run brings everything into cache, then the second gives you a
 repeatable, though cached, timing.
 Yes, but I need cache to be > my database size to prevent other pages
 from pushing out pages for my query, right?
 Or I need to do at the dedicated server..
 
 Please provide EXPLAIN SELECT, SHOW CREATE TABLE, and we will critique
 your indexes and query plan.
 I speak about query optimization in general)
 
 
 Handler* is another way to get consistent values.  These numbers are
 unaffected by caching.
 What variable exactly should I take?
 Why can't I use Innodb_pages_read? That is number of page reads
 regardless its source (pool or disk), is not it?
 
 
 1GB buffer_pool?  You have only 2GB of available RAM?  Normally, if you
 are running only InnoDB, the buffer_pool should be set to about 70% of
 available RAM.
 I will increase it now.
 But I will need to disable swapping also to prevent my OS from swapping
 out InnoDB pages.
 
 Ilya.
 
 
  -Original Message-
  From: Ananda Kumar [mailto:anan...@gmail.com]
  Sent: Tuesday, April 16, 2013 2:06 AM
  To: Ilya Kazakevich
  Cc: MySQL
 Subject: Re: Measure query speed and InnoDB pool
 
  Does your query use proper indexes.
  Does your query scan less number blocks/rows can you share the
  explain plan of the sql
 
 
  On Tue, Apr 16, 2013 at 2:23 PM, Ilya Kazakevich 
  ilya.kazakev...@jetbrains.com wrote:
 
   Hello,
  
   I have 12Gb DB and 1Gb InnoDB pool. My query takes 50 seconds when
   it reads data from disk and about 2 seconds when data already
   exists in pool. And it may take 10 seconds when _some_ pages are on
   disk and some are in pool.
   So, what is the best way to test query performance? I have several
  ideas:
   * Count 'Innodb_rows_read' or 'Innodb_pages_read' instead of
 actual
   time
   * Set pool as small as possible to reduce its effect on query
 speed
   * Set pool larger than my db and run query to load all data into
   pool and measure speed then
  
   How do you measure your queries' speed?
  
   Ilya Kazakevich
  
  
   --
   MySQL General Mailing List
   For list archives: http://lists.mysql.com/mysql
   To unsubscribe:http://lists.mysql.com/mysql
  
  


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Measure query speed and InnoDB pool

2013-04-16 Thread Rick James
Run your query twice; take the second time.  For most queries the first run 
brings everything into cache, then the second gives you a repeatable, though 
cached, timing.

Please provide EXPLAIN SELECT, SHOW CREATE TABLE, and we will critique your 
indexes and query plan.

Handler* is another way to get consistent values.  These numbers are unaffected 
by caching.

1GB buffer_pool?  You have only 2GB of available RAM?  Normally, if you are 
running only InnoDB, the buffer_pool should be set to about 70% of available 
RAM.

 -Original Message-
 From: Ananda Kumar [mailto:anan...@gmail.com]
 Sent: Tuesday, April 16, 2013 2:06 AM
 To: Ilya Kazakevich
 Cc: MySQL
 Subject: Re: Measure query speed and InnoDB pool
 
 Does your query use proper indexes.
 Does your query scan less number blocks/rows can you share the explain
 plan of the sql
 
 
 On Tue, Apr 16, 2013 at 2:23 PM, Ilya Kazakevich 
 ilya.kazakev...@jetbrains.com wrote:
 
  Hello,
 
  I have 12Gb DB and 1Gb InnoDB pool. My query takes 50 seconds when it
  reads data from disk and about 2 seconds when data already exists in
  pool. And it may take 10 seconds when _some_ pages are on disk and
 some are in pool.
  So, what is the best way to test query performance? I have several
 ideas:
  * Count 'Innodb_rows_read' or 'Innodb_pages_read' instead of actual
  time
  * Set pool as small as possible to reduce its effect on query speed
  * Set pool larger than my db and run query to load all data into pool
  and measure speed then
 
  How do you measure your queries' speed?
 
  Ilya Kazakevich
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Measure query speed and InnoDB pool

2013-04-16 Thread Rick James
Swapping is really bad.  Shrink buffer_pool as needed to avoid swapping.  The 
70-80% 'rule' works for 'most' machines today, because most machines have a lot 
more than 2GB 'available' to MySQL.  As you observed, 2GB box would probably 
swap if buffer_pool were 1.4GB, so 800-900M is better.  Meanwhile, a 20GB box 
would be fine with 14GB, even 16GB.  The best formula would be something more 
complex than a simple percent.  I recommend 70% of available ram because it 
safely covers most machines today.  Then I backpedal like this when I see that 
the machine is smaller.  (Sorry, I was assuming you had a bigger machine, and 
had not allocated as much as you could.)  The old default of 8M is terrible.  
Even the new default is puny (most of the time).
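
As a rough my.cnf sketch for the 2GB box discussed below (the 800-900M figure
comes from this thread; tune it to your own RAM):

    [mysqld]
    innodb_buffer_pool_size = 850M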

 -Original Message-
 From: Igor Shevtsov [mailto:nixofort...@gmail.com]
 Sent: Tuesday, April 16, 2013 8:38 AM
 To: mysql@lists.mysql.com
 Subject: Re: Measure query speed and InnoDB pool
 
 Hi Rick,
 I thought you have to dedicate 70-80% of available RAM, not of total RAM.
 Say I have 2 gig of RAM on my exclusively-InnoDB box, and I
 dedicate 1.4Gig to the InnoDB pool; my 64-bit linux machine will start
 swapping heavily.
 If I set it to 800-900M, it's just fine and I have about 100M of RAM left
 for some occasional process. I did try it.
 Thanks,
 Igor
 
 On 16/04/13 16:21, Rick James wrote:
  Run your query twice; take the second time.  For most queries the
 first run brings everything into cache, then the second gives you a
 repeatable, though cached, timing.
 
  Please provide EXPLAIN SELECT, SHOW CREATE TABLE, and we will
 critique your indexes and query plan.
 
  Handler* is another way to get consistent values.  These numbers are
 unaffected by caching.
 
  1GB buffer_pool?  You have only 2GB of available RAM?  Normally, if
 you are running only InnoDB, the buffer_pool should be set to about 70%
 of available RAM.
 
  -Original Message-
  From: Ananda Kumar [mailto:anan...@gmail.com]
  Sent: Tuesday, April 16, 2013 2:06 AM
  To: Ilya Kazakevich
  Cc: MySQL
  Subject: Re: Measure query speed and InnoDB pool
 
  Does your query use proper indexes.
  Does your query scan less number blocks/rows can you share the
  explain plan of the sql
 
 
  On Tue, Apr 16, 2013 at 2:23 PM, Ilya Kazakevich 
  ilya.kazakev...@jetbrains.com wrote:
 
  Hello,
 
  I have 12Gb DB and 1Gb InnoDB pool. My query takes 50 seconds when
  it reads data from disk and about 2 seconds when data already
 exists
  in pool. And it may take 10 seconds when _some_ pages are on disk
  and
  some are in pool.
  So, what is the best way to test query performance? I have several
  ideas:
  * Count 'Innodb_rows_read' or 'Innodb_pages_read' instead of actual
  time
  * Set pool as small as possible to reduce its effect on query speed
  * Set pool larger than my db and run query to load all data into
  pool and measure speed then
 
  How do you measure your queries' speed?
 
  Ilya Kazakevich
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: MySQL on RHEL4

2013-04-08 Thread Rick James
I would guess it would work.
Better to upgrade to RHEL 6.3.

 -Original Message-
 From: Keith Keller [mailto:kkel...@wombat.san-francisco.ca.us]
 Sent: Friday, April 05, 2013 6:46 AM
 To: mysql@lists.mysql.com
 Subject: Re: MySQL on RHEL4
 
 On 2013-04-05, Nitin Mehta ntn...@yahoo.com wrote:
 
  However, i'm not sure which case it is because in the downloads, it
 says Platform: Oracle and Redhat 45
  Isn't it supposed to work?
 
 The downloads section is titled Oracle and Red Hat 4 and 5, but each
 individual download is clearly marked as just 5.  (I agree that the
 title of the section is misleading.)  So no, it is not supposed to
 work.
 
 --keith
 
 
 --
 kkel...@wombat.san-francisco.ca.us
 
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Update and lock question.

2013-04-08 Thread Rick James
An optimization (at least in InnoDB) is to delay updating the secondary 
index(es).  If you can provide a reproducible test case, it would probably be 
worth filing a bug at bugs.mysql.com
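
When chasing such deadlocks, the engine records the most recent one; a quick
check (standard client command, nothing specific to this case):

    SHOW ENGINE INNODB STATUS\G   -- see the LATEST DETECTED DEADLOCK section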

 -Original Message-
 From: Andrés Tello [mailto:mr.crip...@gmail.com]
 Sent: Friday, April 05, 2013 2:56 PM
 To: Urvashi Pathak
 Cc: mysql
 Subject: Re: Update and lock question.
 
 Thanks Urvashi.
 
 Based on your answer, instead of the data I looked into the index, and
 it appears that it was an index issue...
 
 I think I have nailed the wait lock contdition due a updating indexes
 unnecesarely...
 
 
 On Thu, Apr 4, 2013 at 10:52 AM, Urvashi Pathak
 urvashi_pat...@symantec.com
  wrote:
 
 
 
  Hi Andrés,
 
  Select for update makes sure that no other process can change the
 data
  between you selected it for update and then actually changed it and
  commit it.
  If you do not use select for update  then it is possible that some
  other process can change the data  in the mean time between you
  selected and  actually changes it. In this case you not see the
 result
  you actually intend to have.
 
  Innodb will only lock whole table only if there is no where clause in
  the update statement, which I sure you do not have. Innodb follows
 row
  level locking.
 
 
  -Urvi
 
  -Original Message-
  From: Andrés Tello [mailto:mr.crip...@gmail.com]
  Sent: Thursday, April 04, 2013 9:08 AM
  To: mysql
  Subject: Update and lock question.
 
  I'm doing some tests, but have a questions about locking.
 
  In a innodb table, if you issue an select for update lock for a row,
  supposedly, it only locks that row, but if you don't issue a select
  for update, and trow the update... does it locks the hole table?
 
  The update goes over an indexed field, or the effect of locking the
  hole table is due I'm updating an indexed field?
 
  This is because I'm running into dead locks, but I know there is no
  select for update to the row being updated.
 
 
  Thanks.
 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Determining number of queries

2013-04-06 Thread Rick James
Richard, there is more to a system than number of queries.
Please post these in a new thread on http://forums.mysql.com/list.php?24 :
   SHOW GLOBAL STATUS;
   SHOW VARIABLES;
   Ram size
I will do some analysis and provide my opinion.
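
Meanwhile, a rough number you can compute yourself from the status counters
(both are cumulative since server start):

    SHOW GLOBAL STATUS LIKE 'Questions';
    SHOW GLOBAL STATUS LIKE 'Uptime';
    -- average queries per second ~= Questions / Uptime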

 -Original Message-
 From: Manuel Arostegui [mailto:man...@tuenti.com]
 Sent: Thursday, April 04, 2013 3:20 PM
 To: h...@tbbs.net
 Cc: mysql@lists.mysql.com
 Subject: Re: Determing number of queries
 
 2013/4/4 h...@tbbs.net
 
   2013/04/04 22:40 +0200, Manuel Arostegui 
  You can start with show innodb status;  It is now show engine
  innodb status
 
 
 Yep, sorry, not used to it just yet :-)
 
 
 --
 Manuel Aróstegui
 Systems Team
 tuenti.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: How to change max simultaneous connection parameter in mysql.

2013-04-02 Thread Rick James
(Thanks for the comment, spameden.)
Well, I was going to drop the thread, but he baited me.  I _do_ know something 
about web serving...

Should I recount the number of times I have traced a database meltdown back to 
MaxClients being too big?  They are _ugly_ meltdowns -- hundreds of 
point-queries stumbling over themselves, flooding the slowlog with queries that 
should never take more than milliseconds.  More and more db requests come in, 
none finishing, thereby stalling the web server threads, etc.

Another point to make -- once a web server (Apache or ...) has saturated the CPU 
(or other shared resource), there is really no advantage, only disadvantage, in 
starting more web pages.  They will simply contend for the saturated resource, 
thereby slowing down _all_ threads.  It is better (at this point) to queue up 
(or drop) further requests, thereby giving the CPU a chance to actually finish 
something.

Yet another point...  If
   [ SUM(MaxClients) over the web servers you have ] >
   [ SUM(max_connections) over the Slaves ],
then you are threatening to have mysql refuse connections; this probably leads 
to broken web pages, maybe even 404s or 500s.  Granted, you have (at least) 3 
choices: decrease MaxClients, increase max_connections, or add more Slaves.  If 
mysql has most of max_connections _actively_ running queries, then it is 
probably stumbling badly, so I vote against increasing that.  Adding a Slave 
cannot be done 'instantly'.  That leaves decreasing MaxClients, which is quick 
and easy.  Furthermore, the SE (in one of the meltdowns) killed Apache; this 
led to a prompt clear-up of all the issues -- poor web response, mysql melting 
down, etc.

Sometimes less is better!
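
For reference, the knob under discussion (Apache prefork MPM shown; 20 is the
figure argued about in this thread, not a universal answer):

    <IfModule mpm_prefork_module>
        MaxClients  20
    </IfModule>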

 -Original Message-
 From: Reindl Harald [mailto:h.rei...@thelounge.net]
 Sent: Tuesday, April 02, 2013 8:29 AM
 To: mysql@lists.mysql.com
 Subject: Re: How to change max simultaneous connection parameter in
 mysql.
 
 
 
 Am 02.04.2013 16:09, schrieb spameden:
  2013/3/24 Reindl Harald h.rei...@thelounge.net
  mailto:h.rei...@thelounge.net
 
  Am 24.03.2013 05:20, schrieb spameden:
   2013/3/19 Rick James rja...@yahoo-inc.com
 mailto:rja...@yahoo-inc.com:
   you never have hosted a large site
   Check my email address before saying that.
  
   :D
 
  as said, a big company does not have only geniuses
 
  I do not judge only on 1 parameter, Rick has been constantly helping
  here and I'm pretty sure he has more knowledge on MySQL than you.
 
 but the MySQL knowledge alone is not enough in context of a webserver
 not to say irrelevant
 
   20 may be low, but 100 is rather high.
   Never use apache2 it has so many problems under load..
 
  if you are too stupid to configure it, yes
 
  Ever heard about Slow HTTP DoS attack?
 
 my config says yes as i heard about many things because it is my daily
 job
 
 0 0 LOG  tcp -- eth0 * !local-network/24 0.0.0.0/0 multiport dports 80,443
     tcpflags: 0x17/0x02 #conn src/32 > 50 limit: avg 100/hour burst 5
     LOG flags 0 level 7 prefix "Firewall Slowloris: "
 0 0 DROP tcp -- eth0 * !local-network/24 0.0.0.0/0 multiport dports 80,443
     tcpflags: 0x17/0x02 #conn src/32 > 50


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: How to change max simultaneous connection parameter in mysql.

2013-04-02 Thread Rick James
SELECT is not performed in the same thread as nginx; it is performed in another 
process, or even (in big web setups) in a different host.  Therefore, nginx 
would be in some form of wait state, thereby not really using the CPU.

 -Original Message-
 From: Reindl Harald [mailto:h.rei...@thelounge.net]
 Sent: Tuesday, April 02, 2013 2:00 PM
 To: mysql@lists.mysql.com
 Subject: Re: How to change max simultaneous connection parameter in
 mysql.
 
 
 
 Am 02.04.2013 22:56, schrieb Rick James:
  I hear that nginx is very fast for a certain class of web serving.
 
 yes
 
  But what happens if a web page needs to do a SELECT?
 
 what should happen?
 
  Is nginx single-threaded, thereby sitting idle waiting for the
 SELECT?
 
 why should it do that?
 
  And, should you run 8 nginx web servers on an 8-core box?
 
 why should you do that?
 
 http://en.wikipedia.org/wiki/Nginx
 nginx uses an asynchronous event-driven approach to handling requests
 
  -Original Message-
  From: spameden [mailto:spame...@gmail.com]
  Sent: Tuesday, April 02, 2013 7:10 AM
  To: Reindl Harald
  Cc: mysql@lists.mysql.com
  Subject: Re: How to change max simultaneous connection parameter in
  mysql.
 
  2013/3/24 Reindl Harald h.rei...@thelounge.net
 
 
 
  Am 24.03.2013 05:20, schrieb spameden:
  2013/3/19 Rick James rja...@yahoo-inc.com:
  you never have hosted a large site
  Check my email address before saying that.
 
  :D
 
   as said, a big company does not have only geniuses
 
 
  I do not judge only on 1 parameter, Rick has been constantly helping
  here and I'm pretty sure he has more knowledge on MySQL than you.
 
 
 
  20 may be low, but 100 is rather high.
  Never use apache2 it has so many problems under load..
 
   if you are too stupid to configure it, yes
 
 
  Ever heard about Slow HTTP DoS attack?
 
 
 
  The best combo is php5-fpm+nginx.
  Handles loads of users at once if well tuned
 
  Apache 2.4 handles the load of 600 parallel executed php-scripts
  from our own CMS-system
 
 
  Nginx serves static content way better than apache2 (did few
  benchmarks already).
 
  nginx+php5-fpm handles better load than apache2-prefork+mod_php
 
   you can google benchmarks if you don't trust me
 
  also nginx eats much less memory than apache2
 
  php5-fpm can be tuned as well to suit your needs if you have lots of
  dynamic content
 
 
  maybe you guys should learn what a opcode-cache is and how to
  compile and optimize software (binaries and config)
 
  o'rly?


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: How to change max simultaneous connection parameter in mysql.

2013-04-02 Thread Rick James
I hear that nginx is very fast for a certain class of web serving.  But what 
happens if a web page needs to do a SELECT?  Is nginx single-threaded, thereby 
sitting idle waiting for the SELECT?  And, should you run 8 nginx web servers 
on an 8-core box?

 -Original Message-
 From: spameden [mailto:spame...@gmail.com]
 Sent: Tuesday, April 02, 2013 7:10 AM
 To: Reindl Harald
 Cc: mysql@lists.mysql.com
 Subject: Re: How to change max simultaneous connection parameter in
 mysql.
 
 2013/3/24 Reindl Harald h.rei...@thelounge.net
 
 
 
  Am 24.03.2013 05:20, schrieb spameden:
   2013/3/19 Rick James rja...@yahoo-inc.com:
   you never have hosted a large site
   Check my email address before saying that.
  
   :D
 
  as said, a big company does not have only geniuses
 
 
 I do not judge only on 1 parameter, Rick has been constantly helping
 here and I'm pretty sure he has more knowledge on MySQL than you.
 
 
 
   20 may be low, but 100 is rather high.
   Never use apache2 it has so many problems under load..
 
  if you are too stupid to configure it, yes
 
 
 Ever heard about Slow HTTP DoS attack?
 
 
 
   The best combo is php5-fpm+nginx.
   Handles loads of users at once if well tuned
 
  Apache 2.4 handles the load of 600 parallel executed php-scripts from
  our own CMS-system
 
 
 Nginx serves static content way better than apache2 (did few benchmarks
 already).
 
 nginx+php5-fpm handles better load than apache2-prefork+mod_php
 
  you can google benchmarks if you don't trust me
 
 also nginx eats much less memory than apache2
 
 php5-fpm can be tuned as well to suit your needs if you have lots of
 dynamic content
 
 
  maybe you guys should learn what a opcode-cache is and how to compile
  and optimize software (binaries and config)
 
  o'rly?

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Defragmentation of MySQL tables, how many times have I to do it?

2013-03-28 Thread Rick James
How often to OPTIMIZE?

The Short answer:  Never.

The Long answer:  A _few_ tables _may_ need OPTIMIZE _sometimes_.

One test:  If the free space (according to SHOW TABLE STATUS or equivalent 
information_schema stuff) is > 10%, then OPTIMIZE.  Maybe.  However...  That 
math works OK for MyISAM, but is unreliable for InnoDB because (1) things are 
freed in big chunks, thereby showing large free space, and (2) small chunks 
that don't yet show up in the free space are hidden.  That is, the number may 
be too high or too low.
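
A sketch of that test via information_schema (same InnoDB caveats as above):

    SELECT table_schema, table_name, data_free,
           data_length + index_length AS used_bytes
    FROM   information_schema.tables
    WHERE  data_free > 0.10 * (data_length + index_length);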

BTrees are inherently 'stable'.  InnoDB merges adjacent BTree blocks.  These 
comments lead to the conclusion that there is rarely anything to gain by 
OPTIMIZEing an InnoDB table or MyISAM indexes.

MyISAM data, after lots of DELETEs/UPDATEs/INSERTs of _variable_ length rows 
('Dynamic') can cause fragmentation of individual rows.  Normally a row is 
contiguous in the .MYD file; but it could be in multiple pieces if there were 
small free spots when it was inserted.  So..., if there is a lot of churn, it 
may be useful to OPTIMIZE.  However, I would suggest only once a month.  This 
is perhaps the only case I have found for OPTIMIZEing MyISAM for performance.

With PARTITIONing, do not attempt to OPTIMIZE a single PARTITION; it will 
reOPTIMIZE the entire table (at least in the InnoDB case).  Instead, use ALTER 
TABLE..REORGANIZE.. on one partition into itself.
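
A sketch of that per-partition rebuild (table and partition names are
placeholders; assumes RANGE partitioning):

    ALTER TABLE mytable
        REORGANIZE PARTITION p2013 INTO (PARTITION p2013 VALUES LESS THAN (2014));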

I have never traced a performance issue in InnoDB to the need for OPTIMIZE.

The Query Cache is irrelevant to this discussion.



 -Original Message-
 From: Bheemsen Aitha [mailto:pgb...@motorola.com]
 Sent: Thursday, March 28, 2013 8:59 AM
 To: Reindl Harald
 Cc: mysql@lists.mysql.com
 Subject: Re: Defragmentation of MySQL tables, how many times have I to
 do it?
 
 Hi Reindl,
 
  I would like to implement your strategy of optimizing tables. Can you
  please share how you are running these scripts? Where does the mysql-
  wrapper-class exist? And what parameters need to be passed?
 
 Thanks
 Bheem Aitha
 MySQL and Oracle DBA
 On Mar 28, 2013 4:43 AM, Reindl Harald h.rei...@thelounge.net
 wrote:
 
 
 
  Am 28.03.2013 12:28, schrieb Antonio Fernández Pérez:
   So, is it not necessary (not recommended) to defragment tables if I
   have
  a
   lot of write operations (writing or deleting)?
 
  it is recommended but not permanently and not blindly
 
  i use a daily cronjob which runs optimize table on tables with >= 50
  KB overhead, based on these methods of an internal mysql-wrapper-class
 
 
   public function optimizeall($action, $returntables, $flush, $min_overhead, $only_myisam=true)
   {
     $output = '';
     $dblist = $this->showdatabases();
     foreach($dblist as $akt)
     {
       if($akt != 'information_schema' && $akt != 'performance_schema')
       {
         if(function_exists('apache_reset_timeout'))
         {
           apache_reset_timeout();
         }
         $output .= $this->optimizetables($akt, $action, $returntables, array(), $min_overhead, $only_myisam);
         if($flush)
         {
           echo $output;
           @ob_end_flush();
           flush();
           $output = '';
         }
       }
     }
     return $output;
   }


   public function optimizetables($database, $action='optimize', $returntables=0, array $tablelist=array(), $min_overhead=0, $only_myisam=true)
   {
     global $rh_php_sapi_name;
     $first  = false;
     $output = '';
     $sql    = '';
     if(empty($database))
     {
       $database = $this->parent->db;
     }
     if(empty($tablelist))
     {
       $tablelist = $this->showtables($database);
     }
     if(!empty($tablelist))
     {
       foreach($tablelist as $akt)
       {
         $ignore = false;
         if($only_myisam)
         {
           $this->parent->select_db($database);
           $type_result = $this->parent->query('SHOW TABLE STATUS LIKE \'' . $akt . '\'', 1, 0);
           $type_row = $this->parent->fetch_assoc($type_result);
           if(strtolower($type_row['Engine']) == 'innodb')
           {
             $ignore = true;
           }
         }
         if(!$ignore && ($min_overhead == 0 || $this->get_table_overhead($database, $akt) >= $min_overhead))
         {
           if($first)
           {
             $sql .= ', ';
           }
           else
           {
             $sql = $action . ' table ';
           }
           $sql .= '`' . $database . '`.`' . $akt . '`';
           $first = true;
           if($returntables)
           {
             $output .= $database . '.' . $akt;
             if($rh_php_sapi_name != 'cli')
             {
               $output .= '<br />';
             }
             $output .= MY_LE;
           }
         }
       }
       if($action != 'all')
       {
         if(!empty($sql))
         {
           $result = $this->parent->query($sql);
         }
       }
       else
       {
         if(!empty($sql))
         {
           $zsp = $sql;
           $result = $this->parent->query(str_replace('all', 'check', $zsp), 1, 0);
           $result = $this->parent->query(str_replace('all', 'repair', $zsp), 1, 0);
           $result = $this->parent->query(str_replace('all', 'optimize', $zsp), 1, 0);
         }
       }
     }
     return $output;
   }
 
 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql

RE: Retrieve most recent of multiple rows

2013-03-18 Thread Rick James
select * from tab where answer_timestamp in (select max(answer_timestamp) from 
tab where q_id in (select distinct q_id from tab) group by q_id);

That query will be extremely slow if you have lots of data.  This is because 
the construct in (select...) is not optimized (until version 5.6).

select t.*
from tab t
join ( select max(answer_timestamp) as ts from tab group by q_id ) x 
on t.answer_timestamp = x.ts
That should almost work.  It fails to do what you want if there are duplicate 
timestamps.

So, you make a second pass, this time taking the max(q_id) from each.  (I'll 
leave that as an exercise for the student.)
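
A sketch of where that leads (tab/q_id/answer_timestamp as in this thread;
ties within one q_id would still need a further tiebreaker such as MAX(id)):

    select t.*
    from tab t
    join ( select q_id, max(answer_timestamp) as ts
           from tab
           group by q_id ) x
      on  t.q_id = x.q_id
      and t.answer_timestamp = x.ts;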

 -Original Message-
 From: Johan De Meersman [mailto:vegiv...@tuxera.be]
 Sent: Thursday, March 14, 2013 8:29 AM
 To: Ananda Kumar
 Cc: MySQL; Stefan Kuhn
 Subject: Re: Retrieve most recent of multiple rows
 
 
 - Original Message -
  From: Ananda Kumar anan...@gmail.com
  Subject: Re: Re: Retrieve most recent of multiple rows
 
   select * from tab where answer_timestamp in (select
  max(answer_timestamp) from tab where q_id in (select distinct q_id from
  tab) group by q_id);
 
 This is entirely equivalent to
  select * from tab where answer_timestamp in (select
  max(answer_timestamp) from tab group by q_id);
 
 Additionally, there are no double timestamps in the dataset you used,
 whereas there are in OP's given dataset.
 
 It's also fairly hard to provide an accurate solution as long as OP
 hasn't clarified what exactly they want, really.
 
 
 
 
 --
 Unhappiness is discouraged and will be corrected with kitten pictures.
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: MySQL Error#: 2002

2013-03-18 Thread Rick James
Check directory permissions, and check out the 'answers' in here:
http://forums.mysql.com/read.php?10,284776,284936

 -Original Message-
 From: Patrice Olivier-Wilson [mailto:b...@biz-comm.com]
 Sent: Monday, March 18, 2013 12:05 PM
 To: mysql@lists.mysql.com
 Subject: MySQL Error#: 2002
 
 I have about 60 websites based on mysql and php. Suddenly they have all
 gone blank, just white pages. The files are still on the server and I
 can see the tables in all the databases via the phpMyAdmin interface. I'm
 not getting any response from the hosting gods yet. When I try to connect
 to the server via Dreamweaver, the error message is:  MySQL Error#: 2002
 Can't connect to local MySQL server through socket
 '/var/lib/mysql/mysql.sock' (111).
 
 I have one site that uses a different IP number, and that site is ok.
 My static sites, ie, no database inclusion, are ok.
 
 Any ideas what to look for, most appreciated.
 
 Regards,
 
 Patrice Olivier-Wilson
 828-628-0500
 http://Biz-comm.com
 b...@biz-comm.com
 
 
 Everything will be alright in the end, so if it is not alright, it is
 not yet the end. - Quote from movie: The Best Exotic Marigold Hotel
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: How to change max simultaneous connection parameter in mysql.

2013-03-18 Thread Rick James
20 is plenty if your pages run fast enough.
Excess clients after MaxClients are queued in Apache.  If the 20 are consuming 
resources (eg cpu/disk) it is better to queue the excess than to have everybody 
stumbling over each other.
In MySQL, the excess clients beyond max_connections are given an error.

 -Original Message-
 From: Reindl Harald [mailto:h.rei...@thelounge.net]
 Sent: Monday, March 18, 2013 12:15 PM
 To: mysql@lists.mysql.com
 Subject: Re: How to change max simultaneous connection parameter in
 mysql.
 
 
 
 Am 18.03.2013 19:26, schrieb Rick James:
  If you are running Apache with MaxClients set too high, that can
 cause the problem.
 
 too high is relative
 
  That Apache setting should be something like 20.  (Other web servers
  have similar settings.)
 
 20 is a laughable value as long you are not hosting only sites with no
 users at all
 
 i have seen MaxClients 500 be critical while the hardware was not
 overloaded and we had THOUSANDS of users who liked to get the website
 with all its images; no way with stupid settings of 20, which means
 only ONE USER at the same time can fetch a single page with images and
 stylesheets
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: How to change max simultaneous connection parameter in mysql.

2013-03-18 Thread Rick James
If you are running Apache with MaxClients set too high, that can cause the 
problem.  That Apache setting should be something like 20.  (Other web servers 
have similar settings.)

 -Original Message-
 From: Igor Shevtsov [mailto:nixofort...@gmail.com]
 Sent: Saturday, March 16, 2013 1:45 AM
 To: mysql@lists.mysql.com
 Subject: Re: How to change max simultaneous connection parameter in
 mysql.
 
 Hi Kevin,
 In your MySQL client pls execute:
 SET GLOBAL max_connections = 200;
 To make the change permanent you can edit /etc/my.cnf or whatever MySQL
 config file you've got in your system. Look for the line
 max_connections under the [mysqld] section; add it if it's not in the
 config. Make sure it looks like:
 max_connections = 200
 
 No MySQL restart required
 Cheers,
 Igor
 
 
 
 On 16/03/13 07:39, Manuel Arostegui wrote:
  2013/3/16 Kevin Peterson qh.res...@gmail.com
 
  I am using PHP along with mysql. Mysql default configuration allows
  to have 100 simultaneous connection which I want to chane to 200.
 Please help.
 
 
  If you're reaching too many connections quite often, this change can
  imply memory problems in your server. If you are close to get your
  server to swap...be careful with this parameter as any swapping will
  affect your performance.
 
  Manuel.
 
 
 
 
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: How to change max simultaneous connection parameter in mysql.

2013-03-18 Thread Rick James
 you never have hosted a large site
Check my email address before saying that.

20 may be low, but 100 is rather high.

 -Original Message-
 From: Reindl Harald [mailto:h.rei...@thelounge.net]
 Sent: Monday, March 18, 2013 1:36 PM
 To: mysql@lists.mysql.com
 Subject: Re: How to change max simultaneous connection parameter in
 mysql.
 
 
 
 Am 18.03.2013 21:01, schrieb Rick James:
  20 is plenty if your pages run fast enough
 
 it is not
 you never have hosted a large site
 
  Excess clients after MaxClients are queued in Apache
 
 so what - it does not help you
 been there, done that
 
 if you have some hundred USERS at the same time and every one of them is
 requesting the same page with a lot of images, you are simply DEAD with
 a limit of
 20 in your configuration
 
  If the 20 are consuming resources (eg cpu/disk) it is better to queue
  the excess than to have everybody stumbling over each other.
 
 if your server cannot serve more than 20 simultaneous requests you
 are not doing any serious things
 
 sorry, 20 can be done with any crappy notebook these days
 
  In MySQL, the excess clients beyond max_connections are give an
 error.
 
  -Original Message-
  From: Reindl Harald [mailto:h.rei...@thelounge.net]
  Sent: Monday, March 18, 2013 12:15 PM
  To: mysql@lists.mysql.com
  Subject: Re: How to change max simultaneous connection parameter in
  mysql.
 
 
 
  Am 18.03.2013 19:26, schrieb Rick James:
  If you are running Apache with MaxClients set too high, that can
  cause the problem.
 
  too high is relative
 
  That Apache setting should be something like 20.  (Other web
 servers
  have similar settings.)
 
  20 is a laughable value as long you are not hosting only sites with
  no users at all
 
  i have seen MaxClients 500 be critical while the hardware was not
  overloaded and we had THOUSANDS of users which liked to get the
   website with all its images, no way with stupid settings of 20
 which
  means only ONE USER at the same time can fetch a single page with
  images and stylesheets


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: a little doubt on text about MySQL

2013-03-18 Thread Rick James
Possibly related:
http://ronaldbradford.com/blog/why-sql_mode-is-important-2011-06-01/
http://rpbouman.blogspot.com/2009/01/mysqls-sqlmode-my-suggestions.html
http://gabrito.com/post/when-installing-mysql-always-set-the-sql-mode


 -Original Message-
 From: h...@tbbs.net [mailto:h...@tbbs.net]
 Sent: Saturday, March 16, 2013 6:23 AM
 To: Reindl Harald
 Cc: mysql@lists.mysql.com
 Subject: Re: a little doubt on text about MySQL
 
 2013/03/16 03:44 +0100, Reindl Harald:
 what are you speaking about?
 
 you can define it in my.cnf and YOU are responsible for the
 configuration, as you are also responsible to develop php code with
 error_reporting = E_ALL
 
 These SQL-modes that pertain to type-safety are really part of the
 _type_: ALLOW_INVALID_DATES, NO_ZERO_DATE, NO_ZERO_IN_DATE. Their value
 when one does CREATE TABLE ... really belongs to the newly created
 table, if not to particular fields in the table. It is type-declaration.
 This one, NO_AUTO_VALUE_ON_ZERO, is part of the table's type, and
 belongs with the newly created table, or with the fields on which it
 bears (MyISAM).
 
 It really is not right that one who designs a table designs it with one
 date setting or another in mind, then another, who uses that table,
 changes any of these in local SQL mode, and thereby changes the type.
 
 As for this one, NO_AUTO_CREATE_USER, there is no reason for letting it
 differ in local or global SQL-mode from that defined in my.cnf
 (my.ini). Inasmuch as MySQL lets one set that apart from the
 configuration file, there is a problem, especially from dropping it.
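
For concreteness, the mode can be inspected and pinned per scope (standard
statements, not tied to either side of this argument):

    SELECT @@GLOBAL.sql_mode, @@SESSION.sql_mode;
    SET SESSION sql_mode = 'NO_ZERO_DATE,NO_ZERO_IN_DATE';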
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: mysql cluster and auto shard

2013-03-18 Thread Rick James
Clustrix now has a software version of their auto-sharding system.  (It used to 
be that they only sold an 'appliance'.)

 -Original Message-
 From: Andrew Morgan [mailto:andrew.mor...@oracle.com]
 Sent: Monday, March 18, 2013 6:51 AM
 To: Mike Franon
 Cc: mysql@lists.mysql.com
 Subject: RE: mysql cluster and auto shard
 
 
 
  -Original Message-
  From: Mike Franon [mailto:kongfra...@gmail.com]
  Sent: 18 March 2013 13:34
  To: mysql@lists.mysql.com
  Subject: mysql cluster and auto shard
 
  I am looking at  the best way to scale writes.
 
  Either using sharding with our existing infrastructure, or moving to
  mysql cluster.
 
  Does anyone have any pros/cons to using mysql cluster?  I am trying
 to
  find a much better understanding on how the auto sharding works?  Is
  it true we do not need to change code much on application level?
 
 As a starting point, I think it's worth taking a look at this white
 paper... http://www.mysql.com/why-mysql/white-papers/mysql-cluster-
 evaluation-guide/
 
 Most things will continue to work when migrating to MySQL Cluster but
 of course (as with any storage engine) to get the best performance
 you'll probably need to make some changes; this second paper explains
 how to optimize for MySQL Cluster - hopefully that will give a good
 feeling for the types of changes that you might need/want to make...
 http://www.mysql.com/why-mysql/white-papers/guide-to-optimizing-
 performance-of-the-mysql-cluster/
 
 
 
 
  Thanks
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Blob implementation question

2013-03-13 Thread Rick James
A lot of details are missing...
Engine:  MyISAM?  InnoDB?  other?
Let's see the SELECT.
If InnoDB, let's see the transaction, if it is part of such.
If InnoDB, which ROW_FORMAT (COMPACT, etc) are you using?
You are asking about a single row with the 500MB, correct?

In general, each request will ask for the same row, and will be blocked at some 
level.  The data will be fetched from disk and cached (radically differently, 
depending on the Engine).  Each request will be satisfied -- perhaps 
sequentially, perhaps simultaneously.

The resultset will need to be built at some point.  This will probably take 
up 500MB+ of extra RAM.  This might lead to swapping or running out of RAM.

If the SELECT needs to build a temp table, it will be MyISAM, and it will be on 
disk.  But not all SELECTs need to build a temp table.  This, for example, 
won't:
SELECT myblob FROM mytable WHERE id=123;
This probably will (if foo is not indexed):
SELECT myblob FROM mytable ORDER BY foo;

 -Original Message-
 From: Adam Ilardi [mailto:mastaskill...@gmail.com]
 Sent: Wednesday, March 13, 2013 9:16 AM
 To: mysql
 Subject: Blob implementation question
 
 Hello All,
 
 I'm trying to grok the blob implementation. This scenario is contrived
 to understand blobs please don't suggest I shouldn't do this. If you
 have a theoretical machine. Ubuntu with 4 cores/4gb of ram and we'll
 say once mysql is running 500mb of free disk space.
 
 I have a 500mb blob stored in a table and 30 concurrent requests come
 in to select the blob's bytes. How does mysql handle this situation @
 an implementation level?
 
 Would mysql buffer the blob data to the disk?
 Would mysql keep 30 large in memory buffers for the data?
 
 I'd like to know when I would be @ risk of either filling up the disk
 or running out of ram in this situation. I'm also curious as to the
 code level details about how blobs are read and transmitted to a
 client.
 
 
 Thanks,
 Adam

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: How to return resultset from MySQL Stored Procedure using prepared statement?

2013-03-13 Thread Rick James
What language are you using?
In Perl, there is
   $sth->more_results;
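
Engine-side, one sketch that avoids multiple result sets entirely: collect the
rows in a temporary table inside the loop and SELECT once at the end (names
here mirror the procedure quoted below; this is not the only way):

    CREATE TEMPORARY TABLE tmp_results (Panel_Id INT, Val VARCHAR(255));
    -- inside the loop, INSERT instead of a bare SELECT:
    SET @sql = CONCAT('INSERT INTO tmp_results SELECT Panel_Id, ',
                      Project_Number_val, ' FROM ', @Project_List_val,
                      ' WHERE status=1');
    PREPARE stmt FROM @sql;  EXECUTE stmt;  DEALLOCATE PREPARE stmt;
    -- after the loop, a single result set for the caller:
    SELECT * FROM tmp_results;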

 -Original Message-
 From: Girish Talluru [mailto:girish.dev1...@gmail.com]
 Sent: Wednesday, March 13, 2013 5:24 AM
 To: mysql@lists.mysql.com
 Subject: How to return resultset from MySQL Stored Procedure using
 prepared statement?
 
 DELIMITER $$
 CREATE PROCEDURE List_IL()
 BEGIN
 
   DECLARE Project_Number_val VARCHAR( 255 );
   DECLARE Temp_List_val VARCHAR(255);
   DECLARE Project_List_val VARCHAR(255);
   DECLARE FoundCount INT;
   DECLARE Project_Number INT;
   DECLARE db_Name VARCHAR(255);
 
 
 
   DECLARE no_more_rows BOOLEAN;
   DECLARE loop_cntr INT DEFAULT 0;
   DECLARE num_rows INT DEFAULT 0;
 
 
    DECLARE projects_cur CURSOR FOR
      SELECT Project_Id
      FROM   Project_Details;
 
 
   DECLARE CONTINUE HANDLER FOR NOT FOUND
 SET no_more_rows = TRUE;
 
 
   OPEN projects_cur;
   select FOUND_ROWS() into num_rows;
 
   the_loop: LOOP
 
 FETCH  projects_cur
 INTO   Project_Number_val;
 
 
 IF no_more_rows THEN
 CLOSE projects_cur;
 LEAVE the_loop;
 END IF;
 
  SET Project_List_val = CONCAT(Project_Number_val, '_List');
  SET db_Name = 'panel';
 
  SELECT COUNT(1) INTO FoundCount FROM information_schema.tables
  WHERE table_schema = db_Name AND table_name = Project_List_val;
 
   IF FoundCount = 1 THEN
     SET @Project_Number = Project_Number_val;
     SET @sql = CONCAT(' SELECT Panel_Id,', Project_Number_val,
                       ' FROM ', @Project_List_val, ' WHERE status=1');
  
     PREPARE stmt FROM @sql;
     EXECUTE stmt;
     DEALLOCATE PREPARE stmt;
   END IF;
 
 SET loop_cntr = loop_cntr + 1;
   END LOOP the_loop;
 
 END $$
  In the above stored procedure, how can I collect all the rows selected
  during execution of the prepared statement, so that after the loop
  terminates I can return the entire result set to whatever calls the
  stored procedure? Can you please help me with how to do this?

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: auto_increment field behavior

2013-03-12 Thread Rick James
What settings?  (innodb_autoinc_lock_mode comes to mind, but there may be 
others.)

It is acceptable, by the definition of AUTO_INCREMENT, for it to burn the 
missing 15K ids.

 -Original Message-
 From: spameden [mailto:spame...@gmail.com]
 Sent: Tuesday, March 12, 2013 2:34 PM
 To: mysql@lists.mysql.com
 Subject: auto_increment field behavior
 
 Hi, I'm running MySQL-5.5 on Ubuntu
 
 ~ $ mysqld -V
 mysqld  Ver 5.5.29-0ubuntu0.12.04.2 for debian-linux-gnu on x86_64
 ((Ubuntu))
 
 Would like to know if it's normal behavior with auto_increment field
 (tried both signed and unsigned now):
 
mysql> show create table phone_codes;
| phone_codes | CREATE TABLE `phone_codes` (
  `pc_id` int(11) NOT NULL AUTO_INCREMENT,
  `pc_type` enum('ABC','DEF') DEFAULT NULL,
  `pc_code` decimal(3,0) NOT NULL,
  `pc_from` decimal(7,0) NOT NULL,
  `pc_to` decimal(7,0) NOT NULL,
  `pc_capacity` decimal(8,0) NOT NULL,
  `pc_operator` varchar(255) DEFAULT NULL,
  `pc_city` varchar(255) DEFAULT NULL,
  `pc_region` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`pc_id`),
  KEY `pc_code` (`pc_code`),
  KEY `pc_code_from_to` (`pc_code`,`pc_from`,`pc_to`),
  KEY `pc_operator` (`pc_operator`),
  KEY `pc_city` (`pc_city`),
  KEY `pc_region` (`pc_region`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 |
1 row in set (0.00 sec)

mysql> load data infile '/tmp/ABC3x' into table phone_codes fields
    terminated by ';' enclosed by '#' lines terminated by '\r\n'
    (pc_code,pc_from,pc_to,pc_capacity,pc_operator,pc_city,pc_region)
    SET pc_type='ABC';
Query OK, 17922 rows affected (4.44 sec)
Records: 17922  Deleted: 0  Skipped: 0  Warnings: 0

mysql> show create table phone_codes;
[second SHOW CREATE TABLE output truncated in the archive]

RE: auto_increment field behavior

2013-03-12 Thread Rick James
AUTO_INCREMENT guarantees that it will not assign the same number twice.  
That's about all it is willing to guarantee.

With InnoDB, if a transaction starts, uses an auto_inc value, then rolls back, 
that id is lost.

When you have multiple threads loading data into the same table, diff values of 
innodb_autoinc_lock_mode give you diff tradeoff between speed and 
predictability.  If replication is involved, you want predictability.

InnoDB and MyISAM act differently, especially after recovering from a crash.

If you DELETE the _highest_ id, then restart the server, that id will be 
reused.  (This is irritating to some people.)  Otherwise, a deleted id will not 
be reused.

There may be more.  Most of those are covered here:
http://mysql.rjweb.org/doc.php/ricksrots
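
A minimal illustration of the rollback case (hypothetical table):

    CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY, val CHAR(1)) ENGINE=InnoDB;
    START TRANSACTION;
    INSERT INTO t (val) VALUES ('a');   -- consumes id 1
    ROLLBACK;                           -- id 1 is burned, not returned
    INSERT INTO t (val) VALUES ('b');   -- gets id 2, leaving a gap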




 -Original Message-
 From: spameden [mailto:spame...@gmail.com]
 Sent: Tuesday, March 12, 2013 2:46 PM
 To: Rick James
 Cc: mysql@lists.mysql.com
 Subject: Re: auto_increment field behavior
 
 2013/3/13 Rick James rja...@yahoo-inc.com:
  What settings?  (innodb_autoinc_lock_mode comes to mind, but there
 may
  be others.)
 Hi, Rick.
 
 Many thanks for the quick answer here is my settings:
 
  mysql> show variables like '%inc%';
 +-+---+
 | Variable_name   | Value |
 +-+---+
 | auto_increment_increment| 1 |
 | auto_increment_offset   | 1 |
 | div_precision_increment | 4 |
 | innodb_autoextend_increment | 8 |
 | innodb_autoinc_lock_mode| 1 |
 +-+---+
 5 rows in set (0.00 sec)
 
 
 
  It is acceptable, by the definition of AUTO_INCREMENT, for it to burn
 the missing 15K ids.
 
  I don't get this explanation, could you please explain a bit more? So
  it's completely normal for an AUTO_INCREMENT field to act like this?
 
 
 
  -Original Message-
  From: spameden [mailto:spame...@gmail.com]
  Sent: Tuesday, March 12, 2013 2:34 PM
  To: mysql@lists.mysql.com
  Subject: auto_increment field behavior
 
  Hi, I'm running MySQL-5.5 on Ubuntu
 
  ~ $ mysqld -V
  mysqld  Ver 5.5.29-0ubuntu0.12.04.2 for debian-linux-gnu on x86_64
  ((Ubuntu))
 
  Would like to know if it's normal behavior with auto_increment field
  (tried both signed and unsigned now):
 
   mysql> show create table phone_codes;
  | phone_codes | CREATE TABLE `phone_codes` (
`pc_id` int(11) NOT NULL AUTO_INCREMENT,
`pc_type` enum('ABC','DEF') DEFAULT NULL,
`pc_code` decimal(3,0) NOT NULL,
`pc_from` decimal(7,0) NOT NULL,
`pc_to` decimal(7,0) NOT NULL,
`pc_capacity` decimal(8,0) NOT NULL,
`pc_operator` varchar(255) DEFAULT NULL,
`pc_city` varchar(255) DEFAULT NULL,
`pc_region` varchar(255) DEFAULT NULL,
PRIMARY KEY (`pc_id`),
KEY `pc_code` (`pc_code`),
KEY `pc_code_from_to` (`pc_code`,`pc_from`,`pc_to`),
KEY `pc_operator` (`pc_operator`),
KEY `pc_city` (`pc_city`),
KEY `pc_region` (`pc_region`)
  ) ENGINE=InnoDB DEFAULT CHARSET=utf8

RE: MySQL Cluster Solution

2013-03-07 Thread Rick James
What do _you_ mean by "a new High Availability solution"?

See also Percona Cluster.  It uses InnoDB (XtraDB), so that might be zero 
change for you.  Oops, except that you should check for errors after COMMIT.

 -Original Message-
 From: Johan De Meersman [mailto:vegiv...@tuxera.be]
 Sent: Thursday, March 07, 2013 7:06 AM
 To: Neil Tompkins
 Cc: [MySQL]
 Subject: Re: MySQL Cluster Solution
 
 - Original Message -
  From: Neil Tompkins neil.tompk...@googlemail.com
  Subject: MySQL Cluster Solution
 
  I've used in the past MySQL Community Server 5.x.  Everything is
 fine,
  however I'm now wanting to implement a new High Availability solution
  and am considering MySQL Cluster.  However, I heard that MySQL
 Cluster
  doesn't support store procedures ?  Are there any other restrictions
 I
  need to be aware of.
 
 It is a completely different product, Neil, which just happens to also
 have a gateway for MySQL. It is not 'just another storage engine' -
 study it hard, and do extensive testing before you even consider
 switching.
 
 That is not to say that it might not be a good match for your needs;
 just that it's not a quick switch.
 
 
 --
 Unhappiness is discouraged and will be corrected with kitten pictures.
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql



RE: data loss due to misconfiguration

2013-02-26 Thread Rick James
If it is MyISAM and there is some form of corruption, you might get the symptom 
you see.  Do CHECK TABLE.

information_schema has the same flaw in row count as SHOW TABLE STATUS for 
InnoDB.
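
A quick sanity pass (hypothetical table name):

    CHECK TABLE mytable EXTENDED;
    SELECT COUNT(*) FROM mytable;   -- exact count, unlike the estimated Rows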

 -Original Message-
 From: Stillman, Benjamin [mailto:bstill...@limitedbrands.com]
 Sent: Tuesday, February 26, 2013 11:04 AM
 To: Zachary Stern; mysql@lists.mysql.com
 Subject: RE: data loss due to misconfiguration
 
 Are you actually querying the table (select count(*) from table_name),
 or just the stats (show table status)? Is the table Innodb?
 
 If  you're using Innodb and aren't doing a select count (or other
 select query) on the table, then yes you'll have varying results. This
 is because unlike MyISAM, Innodb does not keep a count of the records.
 Using show table status gives just an estimation. This would be my
 first path of investigation.
 
 http://dev.mysql.com/doc/refman/5.5/en/show-table-status.html
 
 The number of rows. Some storage engines, such as MyISAM, store the
 exact count. For other storage engines, such as InnoDB, this value is
 an approximation, and may vary from the actual value by as much as 40
 to 50%. In such cases, use SELECT COUNT(*) to obtain an accurate
 count.
 
 
 
 -Original Message-
 From: Zachary Stern [mailto:z...@enternewmedia.com]
 Sent: Tuesday, February 26, 2013 12:42 PM
 To: mysql@lists.mysql.com
 Subject: data loss due to misconfiguration
 
 Is such a thing possible? There are no errors or issues, but we can
 query a table, get X number of rows, query it later, and all of the
 sudden be missing a thousand rows.
 
 I know this isn't much to go on, but I'm not even sure what information
 to provide. Will be happy to give anything you guys might be able to
 think of.
 
 TIA.
 
 -Zachary
 
 
 
 Notice: This communication may contain privileged and/or confidential
 information. If you are not the intended recipient, please notify the
 sender by email, and immediately delete the message and any attachments
 without copying or disclosing them. LBI may, for any reason, intercept,
 access, use, and disclose any information that is communicated by or
 through, or which is stored on, its networks, applications, services,
 and devices.
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: MySQL 5.1: incorrect arithmetic calculation

2013-02-21 Thread Rick James
They are both right.  It is a matter of how many decimal places you want to 
display:

mysql> SELECT 365 * 1.67 * ( 1 - 0.10);
+--+
| 365 * 1.67 * ( 1 - 0.10) |
+--+
| 548.5950 |
+--+
1 row in set (0.00 sec)

mysql> SELECT ROUND(365 * 1.67 * ( 1 - 0.10), 2);
++
| ROUND(365 * 1.67 * ( 1 - 0.10), 2) |
++
| 548.60 |
++
1 row in set (0.00 sec)

 -Original Message-
 From: Alex Keda [mailto:ad...@lissyara.su]
 Sent: Thursday, February 14, 2013 9:36 PM
 To: mysql@lists.mysql.com
 Subject: MySQL 5.1: incorrect arithmetic calculation
 
 bkp0# mysql h5000_bill
 Welcome to the MySQL monitor.  Commands end with ; or \g.
 Your MySQL connection id is 1643184
 Server version: 5.1.68-log FreeBSD port: mysql-server-5.1.68
 
 Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights
 reserved.
 
 Oracle is a registered trademark of Oracle Corporation and/or its
 affiliates. Other names may be trademarks of their respective owners.
 
 Type 'help;' or '\h' for help. Type '\c' to clear the current input
 statement.
 
  mysql> set names utf8;
  Query OK, 0 rows affected (0.00 sec)
  
  mysql> SELECT * FROM `WorksCompliteAgregate` WHERE (`ContractID` = 10369 AND `Month` = 497);
  +----------+------------+-------+-----------+---------+--------+---------+--------+
  | ID       | ContractID | Month | ServiceID | Comment | Cost   | Discont | Amount |
  +----------+------------+-------+-----------+---------+--------+---------+--------+
  | 10551851 |      10369 |   497 |         1 | №20440  |   1.67 |    0.10 |    365 |
  | 10551854 |      10369 |   497 |         2 | №20441  | 150.00 |    1.00 |      1 |
  +----------+------------+-------+-----------+---------+--------+---------+--------+
  2 rows in set (0.00 sec)
  
  mysql> SELECT SUM(`Amount`*`Cost`*(1-`Discont`)) as `Summ` FROM
  `WorksCompliteAgregate` WHERE (`ContractID` = 10369 AND `Month` = 497);
  +--------+
  | Summ   |
  +--------+
  | 548.59 |
  +--------+
  1 row in set (0.00 sec)
  
  mysql> SELECT SUM(`Amount`*`Cost`*(1-`Discont`)*100)/100 as `Summ` FROM
  `WorksCompliteAgregate` WHERE (`ContractID` = 10369 AND `Month` = 497);
  +------------+
  | Summ       |
  +------------+
  | 548.594985 |
  +------------+
  1 row in set (0.00 sec)
  
  mysql> SELECT 365 * 1.67 * ( 1 - 0.10);
  +--------------------------+
  | 365 * 1.67 * ( 1 - 0.10) |
  +--------------------------+
  |                 548.5950 |
  +--------------------------+
  1 row in set (0.00 sec)
  
  mysql>
 ===
 
 but, my desktop calculator gives the result 548.60
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: MyISAM table size vs actual data, and performance

2013-02-21 Thread Rick James
* Smells like some huge LONGTEXTs were INSERTed, then DELETEd.  Perhaps just a 
single one of nearly 500M. 

* Yes, there is an impact on full table scans -- it has to step over the empty 
spots.  Or maybe not -- one big cow chip of 500MB would be easy to leap over.

* OPTIMIZE TABLE is the primary way to recover the space.  It _may_ be that 
space on the _end_ is automatically recovered.  If so, you might see the .MYD 
shrink even when OPTIMIZE is not run.

* LONGTEXT is almost never useful.  Do you really think there are thingies that 
big?  Consider changing it to MEDIUMTEXT -- that would truncate any biggies to 
16MB.  (See the sketch after this list.)

* Smells like a key-value (EAV) schema design.  Such is destined to fail when 
trying to scale.  Yeah, you are probably stuck with Drupal.  Here are my 
comments and recommendations on EAV:  http://mysql.rjweb.org/doc.php/eav

* Please try to find a way in your Email client to display STATUS without 
losing the spacing.

* When you switched to InnoDB, I hope you had innodb_file_per_table turned on.  
That way, you can actually recoup the space when doing ALTER.  Otherwise, you 
will be stuck with a bloated ibdata1 file that you cannot easily shrink.

* In InnoDB, the LONGTEXT will usually be stored separately, thereby making a 
full table scan relatively efficient.
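
The MEDIUMTEXT change mentioned above, as a sketch against the table from this
thread (it rebuilds the table, and oversize values would be truncated):

    ALTER TABLE `variable` MODIFY `value` MEDIUMTEXT NOT NULL;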

 -Original Message-
 From: Johan De Meersman [mailto:vegiv...@tuxera.be]
 Sent: Friday, February 15, 2013 4:21 AM
 To: mysql.
 Subject: MyISAM table size vs actual data, and performance
 
 
 
 Hey list,
 
 I've got another peculiar thing going on :-) Let me give you a quick
 summary of the situation first: we host a number of Drupal sites, each
 site and its db on separate VMs for reasons that are not important to
 this scenario. MySQL is 5.0.51a-24+lenny4-log (Debian); I don't have
 the exact Drupal version here but it's likely to be a 5.x branch.
 
 The easy thing to say would of course be upgrade your versions, but
 that's not an option right now. I don't really care if that means I
 have no actual *fix* for the problem - I know how to work around it.
 I'm just looking for a cause, ideally maybe even a specific known bug.
 Strangely enough, I'm seeing this on three distinct installs; but
 others with the same versions and setup (but different sites) seem to
 not exhibit the issue.
 
 So, what I'm seeing is this: Drupal's variable table keeps growing,
 but there does not seem to be more data. I understand how record
 allocation and free space in datafiles works, but this is well beyond
 the normal behaviour.
 
 
 http://www.tuxera.be/filestore/heciexohhohj/df-year.png
 
 As you can see here (the lime green line of /data), growth occurs
 gradually (and the issue happened in september, as well), until it
 seems to reach a certain point. At some point, however, performance on
 that table (notably select * - it's a drupal thing) pretty much
 instantly plummets, and the query takes around half a minute to run -
 whereas now, after reclaiming the free space, it takes 0.03 seconds.
 
 I don't have the exact numbers as I wasn't on-site yesterday evening,
 but since the disk is 5GB, the reclaimed space yesterday must have been
 around 850MB - for a table that is now 30MB. No records were deleted
 from the table, the workaround is as simple as OPTIMIZE TABLE
 variable - simply rebuild the table. The logs make no mention of a
 crashed table, so it's very unlikely that this is a borked index. Even
 if it were, I wouldn't expect a scan of 30MB in 1202 rows to take half
 a minute, on a table that is accessed so often that it's relevant
 blocks are bound to be in the filesystem cache.
 
 The table's structure is fairly simple, too:
 
 
 
 CREATE TABLE `variable` (
 `name` varchar(128) NOT NULL DEFAULT '', `value` longtext NOT NULL,
 PRIMARY KEY (`name`)
 ) ENGINE=MyISAM DEFAULT CHARSET=utf8
 
 
 
 
 I currently have another system that's also growing that table, here's
 a bit of session:
 
 
 mysql> show table status like 'variable'\G
 *************************** 1. row ***************************
            Name: variable
          Engine: MyISAM
         Version: 10
      Row_format: Dynamic
            Rows: 1188
  Avg_row_length: 795
     Data_length: 493277732
 Max_data_length: 281474976710655
    Index_length: 41984
       Data_free: 492332716
  Auto_increment: NULL
     Create_time: 2011-12-13 16:18:53
     Update_time: 2013-02-15 12:35:18
      Check_time: 2012-10-17 15:45:11
       Collation: utf8_general_ci
        Checksum: NULL
 [remainder of the message truncated in the archive]
 

RE: replication fails after upgrade to 5.6

2013-02-21 Thread Rick James
It is safer to have the Slave be a newer version.

 -Original Message-
 From: Reindl Harald [mailto:h.rei...@thelounge.net]
 Sent: Thursday, February 21, 2013 10:30 AM
 To: mysql@lists.mysql.com
 Subject: Re: replication fails after upgrade to 5.6
 
 
 
 Am 21.02.2013 19:11, schrieb Mike Franon:
 Is the only way to really fix this to upgrade the master?  I thought
 you can replicate from master -> slave if the version is higher on the slave,
 just not the other way around?
 
 normally no
 
but take a look at the changelogs of mysql in the last years:
80 out of 100 fixes are replication bugs
 
  On Thu, Feb 21, 2013 at 1:03 PM, Reindl Harald
 h.rei...@thelounge.net wrote:
  update the master ASAP in a short timeframe too and re-init
  replication if needed
 
  normally both should have exactly the same version
 
 the slaves must be updated first because otherwise a master may write
 instructions in the binlog that the older slave does not understand at
 all, but as said normally both should have the same version
 
  Am 21.02.2013 18:03, schrieb Mike Franon:
  So I created a new test box on AWS, and just did one upgrade from
  5.0.96 to 5.1, like I did before and replication will not work from
  a master with 5.0.96 to a slave with 5.1.68
 
  I keep getting Error 1062, Duplicate Entry for key
 
  I get no errors when I do a mysql_upgrade, all comes back ok.
 
  I was curious if anyone had any ideas?
 
  Thanks
 
  On Wed, Feb 20, 2013 at 5:51 PM, Mike Franon kongfra...@gmail.com
 wrote:
 This is on a slave; I only upgraded one box, which is the slave. I
 have not touched the master.
 
  On Wed, Feb 20, 2013 at 5:41 PM, Reindl Harald
 h.rei...@thelounge.net wrote:
 
 
  Am 20.02.2013 23:27, schrieb Mike Franon:
  So I successfully upgraded a test db server from 5.0.96 all the
  way up to 5.6
 
  Replication as the slave, where the master is 5.0.96, started
  working for about 10 minutes and then got the following error:
 
  [ERROR] Slave SQL: Error 'Duplicate entry 'data' for key
  'PRIMARY'' on query. Default database: 'test'. Query: 'UPDATE
 IGNORE , Error_code:
  1062
 
  All of our other slaves on 5.0.96 are fine, so I know it has to
  do with 5.6 but just not sure what, when ir an mysql_upgrade
  everything was OK
 
  did you surely upgrade and restart the slaves first?
 
  i personally would NOT go to 5.6 now
 
 it is a very young release, and looking at the typical changelogs,
 replication always has the most fixed bugs


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Upgrading form mysql 5.0.90 to 5.5 or 5.6

2013-02-14 Thread Rick James
Ditto.  I would mysqldump 5.0, load it onto a 5.5 (or 5.6) box that you have as 
a slave of the 5.0 master.  The load may uncover some issues.  Testing reads 
may uncover issues.  The replication stream will test the writes; it may 
uncover issues.

After being comfortable with that, build new slaves off the 5.5/5.6 box.  Then 
cutover writes to that box.  And jettison the 5.0 boxes.
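
A sketch of the seeding step on the new box (host, credentials, and
coordinates are placeholders; the real coordinates come from the dump's
CHANGE MASTER comment):

    CHANGE MASTER TO
        MASTER_HOST='old-5.0-master',
        MASTER_USER='repl', MASTER_PASSWORD='secret',
        MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=4;
    START SLAVE;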

5.5 -> 5.6 may have more changes/improvements than all of 5.0 -> 5.1 -> 5.5.  (Or, 
at least, Oracle salesmen would like you to believe it.)  There are clearly a 
lot of new optimizations in 5.6.

So should you go all the way to 5.6?  Maybe.  You need to do a lot of shakedown 
anyway.

 -Original Message-
 From: Mihail Manolov [mailto:mihail.mano...@liquidation.com]
 Sent: Thursday, February 14, 2013 2:22 PM
 To: Mike Franon
 Cc: Akshay Suryavanshi; mysql@lists.mysql.com
 Subject: Re: Upgrading form mysql 5.0.90 to 5.5 or 5.6
 
 You could jump from 5.0 directly to 5.5 and skip 5.1. I have, without
 any issues. There are some configuration file changes which you may
 want to consider checking. I definitely recommend upgrading your
 development servers for extensive testing. Some queries _may_ run
 slower or not work at all, and you may have to rearrange how you join
 tables in your queries.
 
 The upgrade from 5.5 to 5.6 should be smoother, though.
 
 
 On Feb 14, 2013, at 4:28 PM, Mike Franon wrote:
 
  Great thanks for the info, I guess the best way to do this is take a
  spare server, set it up with our standard setup, and then start the
  upgrade as you said 5.0 - 5.1 - 5.5, test and then upgrade to 5.6
  and test.
 
 
 
 
 
 
 
 
  On Thu, Feb 14, 2013 at 4:22 PM, Akshay Suryavanshi
  akshay.suryavansh...@gmail.com wrote:
  Mike,
 
 5.6 is GA now, so it's a stable release. Also, you should not jump to
 5.6 directly, at least from 5.0.

 There are many bug fixes and changes in 5.1, so you should consider
 going this way:

 5.0 -> 5.1 -> 5.5 (all slaves first, and then the master)

 And further 5.5 -> 5.6 (again all slaves first, and then the master)
 
  Hope this helps.
 
  Cheers!
 
  On Fri, Feb 15, 2013 at 2:38 AM, Mike Franon kongfra...@gmail.com
 wrote:
 
 I have 1 master with many slaves, using the master only for inserts;
 the rest are readers.
 
 
 Is 5.6 stable?  Or am I better off going to 5.5?

 If so, do I need to upgrade in steps, or can I go straight from 5.0 to
 5.6?
 
 
  Any best practices and recommendations?
 
  Thanks
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: slave replication with lots of 'duplicate entry' errors

2013-02-14 Thread Rick James
I recommend, for security reasons, that you rectify that.
If you need some SUPER action, create a stored procedure with SQL SECURITY 
DEFINER, so it runs with the creator's privileges and the security hole is 
still mostly contained.
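
A sketch of the idea (database, procedure, and account names here are 
hypothetical): a definer-rights procedure, created once by a privileged 
account, wraps the one SUPER action, and the application account gets only 
EXECUTE on it:

    DELIMITER //
    -- SQL SECURITY DEFINER makes the body run with the definer's
    -- privileges (including SUPER), not the caller's
    CREATE DEFINER = 'admin'@'localhost'
        PROCEDURE mydb.set_global_read_only(IN flag TINYINT)
    SQL SECURITY DEFINER
    BEGIN
      SET GLOBAL read_only = flag;
    END //
    DELIMITER ;

    -- the application account needs only EXECUTE, not SUPER
    GRANT EXECUTE ON PROCEDURE mydb.set_global_read_only TO 'app'@'%';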

 -Original Message-
 From: Robert Citek [mailto:robert.ci...@gmail.com]
 Sent: Thursday, February 14, 2013 2:59 PM
 To: Rick James
 Cc: mysql
 Subject: Re: slave replication with lots of 'duplicate entry' errors
 
 On Thu, Feb 14, 2013 at 5:46 PM, Rick James rja...@yahoo-inc.com
 wrote:
  Is it in read only mode?
  Furthermore, are all users logging in as non-SUPER users?  Note:
 root bypasses the readonly flag!
 
 No.  The user that is commonly used does have Super privileges.  I am
 not sure why, but it does.
 
 Regards,
 - Robert


RE: slave replication with lots of 'duplicate entry' errors

2013-02-14 Thread Rick James
 Is it in read-only mode?
Furthermore, are all users logging in as non-SUPER users?  Note: root bypasses 
the read_only flag!
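
For example (generic statements, not specific to this thread):

    -- on the slave: reject writes from accounts without SUPER
    SET GLOBAL read_only = ON;

    -- see which accounts could still bypass the flag (root included)
    SELECT user, host FROM mysql.user WHERE Super_priv = 'Y';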

 -Original Message-
 From: Manuel Arostegui [mailto:man...@tuenti.com]
 Sent: Thursday, February 14, 2013 1:55 PM
 To: Robert Citek
 Cc: mysql
 Subject: Re: slave replication with lots of 'duplicate entry' errors
 
 2013/2/13 Robert Citek robert.ci...@gmail.com
 
  On Wed, Feb 13, 2013 at 8:59 AM, Robert Citek
 robert.ci...@gmail.com
  wrote:
   Any other possibilities?  Do other scenarios become likely if there
   are two or more tables?
  
   Of those, which are the most likely?
 
  [from off-list responder]:
   Other possibility: the replication is reading from the master not from
   the point when the dump was done, but from some time before, and is
   fetching insert statements which are already in the dump.
 
  To prevent that I used the coordinates in the dump file included with
  --master-data=2.  Could the coordinates still be off?
 
 
 Hello,
 
 Are you sure nothing is getting inserted directly into the slave? Is it
 in read only mode?
 If you're starting replication using the values provided by --master-
 data=2 (which should be something like):
 
  -- Position to start replication or point-in-time recovery from
 
 -- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000974',
 MASTER_LOG_POS=240814775;
 
 And if you're using the right IP, there's no reason to have duplicate
 entries unless someone is writing directly into the slave.
 
 Manuel.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Upgrading form mysql 5.0.90 to 5.5 or 5.6

2013-02-14 Thread Rick James
Singer, do you have some examples?

 -Original Message-
 From: Singer Wang [mailto:w...@singerwang.com]
 Sent: Thursday, February 14, 2013 2:59 PM
 To: Mihail Manolov
 Cc: Mike Franon; Akshay Suryavanshi; mysql@lists.mysql.com
 Subject: Re: Upgrading form mysql 5.0.90 to 5.5 or 5.6
 
 There are queries that work with 5.1/5.0 that do not work with 5.5; I
 would test extensively.
 
 S
 
 
 On Thu, Feb 14, 2013 at 5:22 PM, Mihail Manolov 
 mihail.mano...@liquidation.com wrote:
 
  You could jump from 5.0 directly to 5.5 and skip 5.1. I have, without
  any issues. There are some configuration file changes, which you may
  want to consider checking. I definitely recommend upgrading your
  development servers for extensive testing. Some queries _may_ run
  slower or not work at all, and you may have to rearrange how you join
  tables in your queries.

  The upgrade from 5.5 to 5.6 should be smoother, though.
 
 
  On Feb 14, 2013, at 4:28 PM, Mike Franon wrote:
 
  Great, thanks for the info. I guess the best way to do this is to take a
  spare server, set it up with our standard setup, and then start the
  upgrade as you said, 5.0 -> 5.1 -> 5.5, test, and then upgrade to 5.6
  and test.
  
  
  
  
  
  
  
  
   On Thu, Feb 14, 2013 at 4:22 PM, Akshay Suryavanshi
   akshay.suryavansh...@gmail.com wrote:
   Mike,
  
  5.6 is GA now, so it's a stable release. Also, you should not jump to
  5.6 directly, at least from 5.0.

  There are many bug fixes and changes in 5.1, so you should consider
  going this way:
  
  5.0 -> 5.1 -> 5.5 (all slaves first, and then the master)

  And further 5.5 -> 5.6 (again all slaves first, and then the
  master)
  
   Hope this helps.
  
   Cheers!
  
   On Fri, Feb 15, 2013 at 2:38 AM, Mike Franon
 kongfra...@gmail.com
  wrote:
  
  I have 1 master with many slaves, using the master only for
  inserts; the rest are readers.
  
  
  Is 5.6 stable?  Or am I better off going to 5.5?

  If so, do I need to upgrade in steps, or can I go straight from 5.0
  to 5.6?
  
  
   Any best practices and recommendations?
  
   Thanks
  
   --
   MySQL General Mailing List
   For list archives: http://lists.mysql.com/mysql
   To unsubscribe:http://lists.mysql.com/mysql
  
  
  
   --
   MySQL General Mailing List
   For list archives: http://lists.mysql.com/mysql
   To unsubscribe:http://lists.mysql.com/mysql
  
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 


RE: Upgrading form mysql 5.0.90 to 5.5 or 5.6

2013-02-14 Thread Rick James
Are you saying there was a regression in the Optimizer?

(Sounds like a workaround is to do STRAIGHT_JOIN -- yuck!)
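
For the archives, a sketch of that workaround with hypothetical tables; 
STRAIGHT_JOIN forces the tables to be read in the order written, taking the 
join-order decision away from the optimizer:

    SELECT STRAIGHT_JOIN a.id, b.val
    FROM   driving_tbl AS a
    JOIN   lookup_tbl  AS b ON b.id = a.b_id;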

I compared several hundred slow queries on 5.1 versus MariaDB 5.5.  I found 
several improvements.

 -Original Message-
 From: Mihail Manolov [mailto:mihail.mano...@liquidation.com]
 Sent: Thursday, February 14, 2013 3:30 PM
 To: Rick James
 Cc: Singer Wang; Mike Franon; Akshay Suryavanshi;
 mysql@lists.mysql.com
 Subject: Re: Upgrading form mysql 5.0.90 to 5.5 or 5.6
 
 The ones that didn't work for me required table rearrangement in the
 query. MySQL 5.5 was very particular about the table join order.
 
 On Feb 14, 2013, at 6:11 PM, Rick James wrote:
 
  Singer, do you have some examples?
 
  -Original Message-
  From: Singer Wang [mailto:w...@singerwang.com]
  Sent: Thursday, February 14, 2013 2:59 PM
  To: Mihail Manolov
  Cc: Mike Franon; Akshay Suryavanshi; mysql@lists.mysql.com
  Subject: Re: Upgrading form mysql 5.0.90 to 5.5 or 5.6
 
 There are queries that work with 5.1/5.0 that do not work with 5.5;
 I would test extensively.
 
  S
 
 
  On Thu, Feb 14, 2013 at 5:22 PM, Mihail Manolov 
  mihail.mano...@liquidation.com wrote:
 
 You could jump from 5.0 directly to 5.5 and skip 5.1. I have, without
 any issues. There are some configuration file changes, which you may
 want to consider checking. I definitely recommend upgrading your
 development servers for extensive testing. Some queries _may_ run
 slower or not work at all, and you may have to rearrange how you join
 tables in your queries.

 The upgrade from 5.5 to 5.6 should be smoother, though.
 
 
  On Feb 14, 2013, at 4:28 PM, Mike Franon wrote:
 
 Great, thanks for the info. I guess the best way to do this is to
 take a spare server, set it up with our standard setup, and then
 start the upgrade as you said, 5.0 -> 5.1 -> 5.5, test, and then
 upgrade to 5.6 and test.
 
 
 
 
 
 
 
 
  On Thu, Feb 14, 2013 at 4:22 PM, Akshay Suryavanshi
  akshay.suryavansh...@gmail.com wrote:
  Mike,
 
 5.6 is GA now, so it's a stable release. Also, you should not jump to
 5.6 directly, at least from 5.0.

 There are many bug fixes and changes in 5.1, so you should consider
 going this way:
 
 5.0 -> 5.1 -> 5.5 (all slaves first, and then the master)

 And further 5.5 -> 5.6 (again all slaves first, and then the
 master)
 
  Hope this helps.
 
  Cheers!
 
  On Fri, Feb 15, 2013 at 2:38 AM, Mike Franon
  kongfra...@gmail.com
  wrote:
 
 I have 1 master with many slaves, using the master only for
 inserts; the rest are readers.
 
 
 Is 5.6 stable?  Or am I better off going to 5.5?

 If so, do I need to upgrade in steps, or can I go straight from 5.0
 to 5.6?
 
 
  Any best practices and recommendations?
 
  Thanks
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Upgrading form mysql 5.0.90 to 5.5 or 5.6

2013-02-14 Thread Rick James
Sounds like something that, once discovered, can be fixed in the old version -- 
then it works correctly in both.

That is what happened with a 4.0 -> 5.1 conversion years ago.  With 1000 
different tables and associated code, we encountered two incompatibilities.  
One had to do with NULLs, the other with the precedence of the comma-join vs. 
the explicit JOIN.
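
The comma-join incompatibility is worth a sketch (hypothetical tables t1..t3; 
as of MySQL 5.0.12 the comma operator has lower precedence than JOIN, per 
standard SQL):

    -- pre-5.0.12 this parsed as ((t1, t2) JOIN t3 ...); afterwards JOIN
    -- binds tighter, so t1 is no longer visible to the ON clause and the
    -- statement fails with "Unknown column 't1.a' in 'on clause'":
    SELECT * FROM t1, t2 JOIN t3 ON t1.a = t3.a;

    -- portable rewrite: parenthesize, or use explicit JOINs throughout
    SELECT * FROM (t1, t2) JOIN t3 ON t1.a = t3.a;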

From: Singer Wang [mailto:w...@singerwang.com]
Sent: Thursday, February 14, 2013 3:41 PM
To: Rick James
Cc: Mihail Manolov; Mike Franon; Akshay Suryavanshi; mysql@lists.mysql.com
Subject: Re: Upgrading form mysql 5.0.90 to 5.5 or 5.6

It's a very pedantic case, but we had a few instances where it was an issue at 
my last job. It basically involved multi-table deletes and aliasing. I quote 
the change notes for MySQL 5.5.3:


Incompatible Change: Several changes were made to alias resolution in 
multiple-table DELETE statements so that it is no longer possible to have 
inconsistent or ambiguous table aliases.

* In MySQL 5.1.23, alias declarations outside the table_references part of the 
statement were disallowed for the USING variant of multiple-table DELETE syntax, 
to reduce the possibility of ambiguous aliases that could lead to ambiguous 
statements that have unexpected results such as deleting rows from the wrong 
table.

Now alias declarations outside table_references are disallowed for all 
multiple-table DELETE statements. Alias declarations are permitted only in the 
table_references part.

Incorrect:

DELETE FROM t1 AS a2 USING t1 AS a1 INNER JOIN t2 AS a2;
DELETE t1 AS a2 FROM t1 AS a1 INNER JOIN t2 AS a2;

Correct:

DELETE FROM t1 USING t1 AS a1 INNER JOIN t2 AS a2;
DELETE t1 FROM t1 AS a1 INNER JOIN t2 AS a2;

* Previously, for alias references in the list of tables from which to delete 
rows in a multiple-table delete, the default database was used unless one was 
specified explicitly. For example, if the default database is db1, the 
following statement does not work because the unqualified alias reference a2 is 
interpreted as having a database of db1:

DELETE a1, a2 FROM db1.t1 AS a1 INNER JOIN db2.t2 AS a2
WHERE a1.id = a2.id;

To correctly match an alias that refers to a table outside the default 
database, you must explicitly qualify the reference with the name of the proper 
database:

DELETE a1, db2.a2 FROM db1.t1 AS a1 INNER JOIN db2.t2 AS a2
WHERE a1.id = a2.id;

Now alias resolution does not require qualification and alias references should 
not be qualified with the database name. Qualified names are interpreted as 
referring to tables, not aliases.

Statements containing alias constructs that are no longer permitted must be 
rewritten. (Bug #27525)



On Thu, Feb 14, 2013 at 6:11 PM, Rick James rja...@yahoo-inc.com wrote:
Singer, do you have some examples?

 -Original Message-
 From: Singer Wang [mailto:w...@singerwang.com]
 Sent: Thursday, February 14, 2013 2:59 PM
 To: Mihail Manolov
 Cc: Mike Franon; Akshay Suryavanshi; mysql@lists.mysql.com
 Subject: Re: Upgrading form mysql 5.0.90 to 5.5 or 5.6

 There are queries that work with 5.1/5.0 that do not work with 5.5; I
 would test extensively.

 S


 On Thu, Feb 14, 2013 at 5:22 PM, Mihail Manolov 
 mihail.mano...@liquidation.com wrote:

  You could jump from 5.0 directly to 5.5 and skip 5.1. I have, without
  any issues. There are some configuration file changes, which you may
  want to consider checking. I definitely recommend upgrading your
  development servers for extensive testing. Some queries _may_ run
  slower or not work at all, and you may have to rearrange how you join
  tables in your queries.

  The upgrade from 5.5 to 5.6 should be smoother, though.
 
 
  On Feb 14, 2013, at 4:28 PM, Mike Franon wrote:
 
   Great, thanks for the info. I guess the best way to do this is to take a
   spare server, set it up with our standard setup, and then start the
   upgrade as you said, 5.0 -> 5.1 -> 5.5, test, and then upgrade to 5.6
   and test.
  
  
  
  
  
  
  
  
   On Thu, Feb 14, 2013 at 4:22 PM, Akshay Suryavanshi
    akshay.suryavansh...@gmail.com wrote:
   Mike,
  
    5.6 is GA now, so it's a stable release. Also, you should not jump to
    5.6 directly, at least from 5.0.
   
    There are many bug fixes and changes in 5.1, so you should consider
    going this way:
  
    5.0 -> 5.1 -> 5.5 (all slaves first, and then the master)
   
    And further 5.5 -> 5.6 (again all slaves first, and then the
    master)
  
   Hope this helps.
  
   Cheers!
  
    On Fri, Feb 15, 2013 at 2:38 AM, Mike Franon kongfra...@gmail.com
   wrote:
  wrote:
  
    I have 1 master with many slaves, using the master only for
    inserts; the rest are readers.
  
  
    Is 5.6 stable?  Or am I better off going to 5.5?
   
    If so, do I need to upgrade in steps, or can I go straight from 5.0

RE: IF and CASE

2013-02-05 Thread Rick James
As a Rule of Thumb, function evaluation time is not significant to the overall 
time for running a query.  (I see IF and CASE as 'functions' for this 
discussion.)

Do you have evidence that says that IF is slower?  Perhaps using BENCHMARK()?
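
For example (a rough, unscientific sketch -- BENCHMARK() runs an expression 
repeatedly, and SLEEP() makes any evaluation of the unused branch visible as 
elapsed time):

    -- compare the client-reported times for equivalent expressions
    SELECT BENCHMARK(10000000, IF(1 = 1, 'a', 'b'));
    SELECT BENCHMARK(10000000, CASE WHEN 1 = 1 THEN 'a' ELSE 'b' END);

    -- probe argument evaluation: if the unused branch were always
    -- evaluated, each of these would take about 2 seconds
    SELECT IF(1 = 1, 'yes', SLEEP(2));
    SELECT CASE WHEN 1 = 1 THEN 'yes' ELSE SLEEP(2) END;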



 -Original Message-
 From: h...@tbbs.net [mailto:h...@tbbs.net]
 Sent: Monday, February 04, 2013 11:31 PM
 To: mysql@lists.mysql.com
 Subject: IF and CASE
 
 It is my impression that when they are functionally equivalent, IF takes
 more time than CASE. Comment?
 
 Do they always evaluate all their arguments?
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql


