Re: InnoDB error 5

2013-11-21 Thread Shawn Green

Hello Nick,

On 11/21/2013 10:32 AM, Nick Cameo wrote:

OOoopppsss! I do mean for recovery/continual backup. I will do it
manually, but basically get all the data on a USB disk and be able to
recover/move it (the data) on another machine, the same machine etc..

I hope I did not just open up a can of worms. We just went live and
this post gave me a rude awakening. What is an effective easy to
follow protocol for backup and recovery in mysql!

Nick from Toronto



There are two basic types of backups, logical and physical.

Logical backups are performed by a utility that converts your database 
objects into their CREATE ... commands and exports your data as INSERT 
... commands (or as delimited files). These kinds of backups are quite 
portable and compress well. An example of such a tool is mysqldump.


http://dev.mysql.com/doc/refman/5.6/en/mysqldump.html
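
For instance, a minimal dump-and-restore sketch (the database name and
the USB mount point below are illustrative, not from this thread):

  # dump one database into a compressed file on the USB disk
  # (--single-transaction gives a consistent snapshot of InnoDB tables)
  mysqldump -u root -p --single-transaction --routines --events mydb | gzip > /mnt/usb/mydb.sql.gz

  # restore it into a server on this or any other machine
  # (create the target database first if it does not exist)
  gunzip < /mnt/usb/mydb.sql.gz | mysql -u root -p mydb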

Physical backups can happen many different ways.  The easiest version to 
make/restore is the 'cold copy'.  This is exactly what it sounds like. 
Shut down your mysqld and make a copy of everything. At the absolute 
minimum you need the ibdata files, the ib_log files, and all folders 
inside your --datadir location.
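
As a rough illustration only (the paths assume a default Linux layout;
substitute your own --datadir and destination):

  # stop the server, then copy the entire data directory
  mysqladmin -u root -p shutdown
  cp -a /var/lib/mysql /mnt/usb/mysql-cold-backup
  # restart mysqld once the copy has finished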


Warm or hot copies are provided by tools that coordinate with the server 
to synchronize the state of the InnoDB data to the moment the non-InnoDB 
data has been captured. One example of this is MySQL Enterprise Backup.

http://dev.mysql.com/doc/mysql-enterprise-backup/3.9/en/index.html

Additional details abound in the manual:
http://dev.mysql.com/doc/refman/5.6/en/backup-types.html


Regards,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Update Column in table only if variable is Not NULL

2013-10-30 Thread Shawn Green

Hi Neil,

On 10/30/2013 9:55 AM, Neil Tompkins wrote:

Shawn


What I need is that if I pass say 10 parameters/variables to a query, I
only want to update the column/field if the value passed is NOT NULL.


On Wed, Oct 30, 2013 at 3:41 AM, Shawn Green shawn.l.gr...@oracle.com wrote:


Hi,

On 10/29/2013 9:52 PM, h...@tbbs.net wrote:


2013/10/29 11:35 -0400, Shawn Green 



My favorite technique is the COALESCE function for this on a

column-by-column basis

SET FieldName1 = Now(), FieldName2 = COALESCE(:MyVariable, FieldName2)

but if MyVariable is NULL, FieldName1 reflects the attempt to change, not
change.




The way I understood the second explanation was like this.

He wants to update a row of data. The FieldName1 field is always updated
to the current date and time.  If any of the new values (passed in via
variables) are not NULL for a specific column, replace the value on the row
with the new value otherwise maintain the current value.

He may yet mean something completely different than how I read it the
second time.



Then we agree on your intentions.  The COALESCE() construction I 
demonstrated will work just fine for you.
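
For instance, extended to several columns inside a stored procedure it
might look like this (the column and parameter names are placeholders
for your own 20 or so fields):

  UPDATE MY_TABLE
  SET FieldName1 = NOW(),
      FieldName2 = COALESCE(p_Field2, FieldName2),
      FieldName4 = COALESCE(p_Field4, FieldName4),
      FieldName5 = COALESCE(p_Field5, FieldName5)
  WHERE FieldName3 = 'Y';

Any parameter passed in as NULL leaves its column untouched while
FieldName1 still records the time of the attempt.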


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Update Column in table only if variable is Not NULL

2013-10-29 Thread Shawn Green

Hello Neil,

On 10/28/2013 5:23 PM, Neil Tompkins wrote:

Hi Shawn

Thanks for your reply.  Maybe my example wasn't detailed enough.
  Basically the snippet of the UPDATE statement I provided shows
updating only 1 field.  However in my live working example, I have about
20 possible fields that might need to be updated if the variable
passed for each field is NOT NULL.

Therefore, I felt this needs to be done at database level in the stored
procedure.  How can I accomplish this.

Thanks
Neil


On Mon, Oct 28, 2013 at 6:17 PM, Shawn Green shawn.l.gr...@oracle.com
mailto:shawn.l.gr...@oracle.com wrote:

Hello Neil,

On 10/28/2013 2:06 PM, Neil Tompkins wrote:

Hi

If I have a update statement like

UPDATE MY_TABLE
SET FieldName1 = Now(), FieldName2 = :MyVariable
WHERE FieldName3 = 'Y'

How can I only update the FieldName2 field if the value of
MyVariable is
NOT NULL ?

Thanks
Neil


This needs to be a decision you make at the application level to not
execute the UPDATE command in the first place. Not every decision
needs to be made by the database. Plus, it will save you the time of
a full network round trip just to get a result from the server that
you affected 0 rows (parsing, optimizing, executing).

Now, if this was just a typo and your :MyVariable was meant to be
@MyVariable (a MySQL user variable) then you can put that test in
the WHERE clause of the command

UPDATE MY_TABLE
SET FieldName1 = Now(), FieldName2 = @MyVariable
WHERE FieldName3 = 'Y'  AND @MyVariable IS NOT NULL

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql



My favorite technique is the COALESCE function for this on a 
column-by-column basis


SET FieldName1 = Now(), FieldName2 = COALESCE(:MyVariable, FieldName2)

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Update Column in table only if variable is Not NULL

2013-10-29 Thread Shawn Green

Hi,

On 10/29/2013 9:52 PM, h...@tbbs.net wrote:

2013/10/29 11:35 -0400, Shawn Green 

My favorite technique is the COALESCE function for this on a column-by-column 
basis

SET FieldName1 = Now(), FieldName2 = COALESCE(:MyVariable, FieldName2)

but if MyVariable is NULL, FieldName1 reflects the attempt to change, not 
change.




The way I understood the second explanation was like this.

He wants to update a row of data. The FieldName1 field is always updated 
to the current date and time.  If any of the new values (passed in via 
variables) are not NULL for a specific column, replace the value on the 
row with the new value otherwise maintain the current value.


He may yet mean something completely different than how I read it the 
second time.


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Update Column in table only if variable is Not NULL

2013-10-28 Thread Shawn Green

Hello Neil,

On 10/28/2013 2:06 PM, Neil Tompkins wrote:

Hi

If I have a update statement like

UPDATE MY_TABLE
SET FieldName1 = Now(), FieldName2 = :MyVariable
WHERE FieldName3 = 'Y'

How can I only update the FieldName2 field if the value of MyVariable is
NOT NULL ?

Thanks
Neil



This needs to be a decision you make at the application level to not 
execute the UPDATE command in the first place. Not every decision needs 
to be made by the database. Plus, it will save you the time of a full 
network round trip just to get a result from the server that you 
affected 0 rows (parsing, optimizing, executing).


Now, if this was just a typo and your :MyVariable was meant to be 
@MyVariable (a MySQL user variable) then you can put that test in the 
WHERE clause of the command


UPDATE MY_TABLE
SET FieldName1 = Now(), FieldName2 = @MyVariable
WHERE FieldName3 = 'Y'  AND @MyVariable IS NOT NULL

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: MySQL Connector/ODBC 5.2.6 has been released

2013-10-10 Thread Shawn Green

Hello Saquib,

On 10/9/2013 7:17 PM, Saqib N Ali wrote:

no binaries for AIX?

--
saqib
http://hivemined.net


The last ODBC package built for AIX was 5.1.8 released Oct 2010
http://downloads.mysql.com/archives.php?p=mysql-connector-odbc-5.1&o=aix

This is in line with our EOL announcements that AIX remains a very 
low-demand platform and that modern versions of MySQL (5.5 or later) are 
not supported on AIX.

http://www.mysql.com/support/eol-notice.html
http://www.mysql.com/support/supportedplatforms/database.html

The source code remains available for you to build on your platform of 
choice. Compiled Connector/ODBC 5.2 binaries for AIX are not available for 
download directly from our sites. To access the source code, choose 
"Source Code" in the "Select Platform" field at:

http://dev.mysql.com/downloads/connector/odbc/5.2.html#downloads

Should demand for our products on AIX improve, we may reassess this 
decision in the future.


Regards,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: discovery of functional dependencies from data

2013-10-07 Thread shawn green

Hello Javad,

On 10/7/2013 4:20 AM, javad bakhshi wrote:

Hello everyone,

I was wondering if anyone could provide me with some sort of code for
discovering functional dependencies from data. I can't use wizards that are 
available in DBMS.

A starting point would be appreciated also.

P.S. What I want is to discover the relations between two columns without 
having any meta data available.


Best regards,
Javad Bakhshi,
Computer Science M.Sc
Department of IT, Uppsala University



Based on my Google research: "discover functional dependencies" is a 
highly academic way of saying "normalize my data".  Sure, your way 
involves set theory and dependency trees but the practical effect is the 
same.


Is there code to help you normalize your data? Somewhere there probably 
is.  Can it do so without any metadata? Possibly.
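
As a purely illustrative sketch (the table and column names are
hypothetical), one data-driven test for whether col_a functionally
determines col_b is a grouped count of the distinct values it maps to:

  SELECT col_a, COUNT(DISTINCT col_b) AS distinct_b
  FROM my_table
  GROUP BY col_a
  HAVING COUNT(DISTINCT col_b) > 1;
  -- an empty result means each col_a value maps to exactly one col_b,
  -- i.e. col_a -> col_b holds in this particular data set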


However some metadata is required to establish a relationship. That may 
be a foreign key constraint or similar column names. However something 
external to your actual data itself (the naked values) is required to 
indicate that two columns in different tables are somehow related. Even 
what type of data it is (int, char, date, etc) is a form of metadata 
because that helps the database determine what that sequence of bytes is 
meant to represent. It provides meaning beyond what the data itself 
contains.


I could have the exact same list of names in two different tables but 
they could mean two totally different things.  One could be the list of 
people hired by a certain HR person used for an audit of the hiring 
person's performance.  One could be the list of people working in a 
particular group and be used to ensure that they have the proper 
security privileges they need.   It's identical data for totally 
different purposes.  Some metadata (even if it is outside the database 
itself) is required to establish context.


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Problem with having

2013-09-24 Thread shawn green

Hello Larry,

On 9/23/2013 6:22 PM, Larry Martell wrote:

On Mon, Sep 23, 2013 at 3:15 PM, shawn green shawn.l.gr...@oracle.com wrote:


Hi Larry,


On 9/23/2013 3:58 PM, Larry Martell wrote:


On Mon, Sep 23, 2013 at 1:51 PM, Sukhjinder K. Narula
narula...@gmail.com wrote:

  Hi,


In your second query, you seem to have MIN(date_time), but you are
talking about maximum. So your group by query is actually pulling the
minimum date for this recipe.



I pasted the wrong query in. I get the same results regardless of if I
have
MIN or MAX - I get the id of the max, but the date_time of the min.



  On Mon, Sep 23, 2013 at 3:35 PM, Larry Martell larry.mart...@gmail.com
wrote:

  I want to find the rows from a table that have the max date_time for

each
recipe. I know I've done this before with group by and having, but I
can't
seem to get it to work now. I get the correct row id, but not the
correct
date_time. I'm sure I'm missing something simple.

For purposes of showing an example, I'll use one recipe, 19166.


For that recipe here's the row I would want:

mysql> select id, MAX(date_time) from data_cstmeta  where recipe_id =
19166;
+---------+---------------------+
| id      | MAX(date_time)      |
+---------+---------------------+
| 1151701 | 2013-02-07 18:38:13 |
+---------+---------------------+
1 row in set (0.01 sec)

I would think this query would give me that - it gives me the correct
id,
but not the correct date_time:

mysql> SELECT id, date_time as MaxDateTime  FROM data_cstmeta  where
recipe_id = 19166 group by recipe_id HAVING MIN(date_time);
+---------+---------------------+
| id      | MaxDateTime         |
+---------+---------------------+
| 1151701 | 2010-12-13 16:16:55 |
+---------+---------------------+
1 row in set (0.01 sec)

How can I fix this?

Thanks!
-larry



You have to do a two-stage match. One stage to find the MAX() of a value
for each recipe_id, the other to match that MAX() to one or more rows to
give you the best ID values.

Here's a subquery method of doing it.  There are many many others (google
for groupwize maximum)

SELECT a.id, b.MaxDateTime
FROM data_cstmeta a
INNER JOIN (
 SELECT MAX(date_time) MaxDateTime
 FROM data_cstmeta
 WHERE recipe_id = 19166
) b
   on b.MaxDateTime = a.date_time
WHERE recipe_id = 19166;



Having the recipe_id in the query was just to show an example. I really
want the id's with the max date for each recipe_id:

This is what I changed it to, which works, but is too slow. I need to find
a more efficient solution:

SELECT d1.id, d1.date_time as MaxDateTime
FROM data_cstmeta d1
LEFT JOIN data_cstmeta d2 ON d1.recipe_id = d2.recipe_id AND
d1.date_time < d2.date_time
WHERE d2.recipe_id IS NULL



As I said, there are many many ways to solve this problem. Here is one 
that is going to perform much better for the generic case than what you 
are doing.


CREATE TEMPORARY TABLE tmpMaxDates (key(recipe_id,maxdatetime))
SELECT recipe_id, max(date_time) maxdatetime
FROM data_cstmeta
GROUP BY recipe_id;

SELECT a.id, b.maxdatetime
FROM data_cstmeta a
INNER JOIN tmpMaxDates b
  on a.recipe_id = b.recipe_id
  and a.date_time = b.maxdatetime;

DROP TEMPORARY TABLE tmpMaxDates;


Of course, an appropriate multi-column index on data_cstmeta would also 
make your technique much faster than it is today.
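
For example, something along these lines (the index name is arbitrary):

  ALTER TABLE data_cstmeta ADD INDEX idx_recipe_datetime (recipe_id, date_time);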


Yours,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Problem with having

2013-09-23 Thread shawn green

Hi Larry,

On 9/23/2013 3:58 PM, Larry Martell wrote:

On Mon, Sep 23, 2013 at 1:51 PM, Sukhjinder K. Narula
narula...@gmail.com wrote:


Hi,

In your second query, you seem to have MIN(date_time), but you are
talking about maximum. So your group by query is actually pulling the
minimum date for this recipe.



I pasted the wrong query in. I get the same results regardless of if I have
MIN or MAX - I get the id of the max, but the date_time of the min.




On Mon, Sep 23, 2013 at 3:35 PM, Larry Martell larry.mart...@gmail.com wrote:


I want to find the rows from a table that have the max date_time for each
recipe. I know I've done this before with group by and having, but I can't
seem to get it to work now. I get the correct row id, but not the correct
date_time. I'm sure I'm missing something simple.

For purposes of showing an example, I'll use one recipe, 19166.


For that recipe here's the row I would want:

mysql> select id, MAX(date_time) from data_cstmeta  where recipe_id =
19166;
+---------+---------------------+
| id      | MAX(date_time)      |
+---------+---------------------+
| 1151701 | 2013-02-07 18:38:13 |
+---------+---------------------+
1 row in set (0.01 sec)

I would think this query would give me that - it gives me the correct id,
but not the correct date_time:

mysql> SELECT id, date_time as MaxDateTime  FROM data_cstmeta  where
recipe_id = 19166 group by recipe_id HAVING MIN(date_time);
+---------+---------------------+
| id      | MaxDateTime         |
+---------+---------------------+
| 1151701 | 2010-12-13 16:16:55 |
+---------+---------------------+
1 row in set (0.01 sec)

How can I fix this?

Thanks!
-larry



You have to do a two-stage match. One stage to find the MAX() of a value 
for each recipe_id, the other to match that MAX() to one or more rows to 
give you the best ID values.


Here's a subquery method of doing it.  There are many many others 
(google for groupwize maximum)


SELECT a.id, b.MaxDateTime
FROM data_cstmeta a
INNER JOIN (
SELECT MAX(date_time) MaxDateTime
FROM data_cstmeta
WHERE recipe_id = 19166
) b
  on b.MaxDateTime = a.date_time
WHERE recipe_id = 19166;

Yours,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Can't Connect Localhost

2013-09-03 Thread shawn green

Hello John,

On 9/2/2013 12:49 AM, John Smith wrote:


On Sun, 1/9/13, Reindl Harald h.rei...@thelounge.net wrote:

  Subject: Re: Can't Connect Localhost
  To: mysql@lists.mysql.com
  Date: Sunday, 1 September, 2013, 2:44 PM


I looked in mysql.config.pl and no localhost :(



mysql.config.pl from what software damned?
mysql.config.pl does not exist in context of mysql


It exists in the following folder on my Win8 box:

/Program Files (x86)/MySQL/MySQL Server 5.5/bin

Yes, apparently it does exist in context of mysql. At any rate, it didn't work. 
Is there another config file in there?


*what client software you are using to connect to a
mysql-server*


I had written a python script. However, even if I try your and the other 
fellow's suggestion and open up a command prompt with admin privileges and type
mysql -u root -h 127.0.0.1 -p
pass
it just opens up a mysql session, and that only if I do it from the folder 
where that script lives since it's not in the path. Otherwise, it complains it 
can't find the script.
TIA,
John




I am also confused by your apparent dependency on this script 
(mysql_config.pl) to create a client connection.  This is not the 
location that stores or processes or influences your startup parameters. 
 Neither can it be used for creating client-server connections.


http://dev.mysql.com/doc/refman/5.6/en/mysql-config.html


The location of your configuration details is a 'my.ini' file or files 
located in one of these locations


http://dev.mysql.com/doc/refman/5.6/en/option-files.html
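
For reference, a bare-bones my.ini might look something like this (the
values are illustrative only):

  [mysqld]
  # settings read by the server at startup
  port=3306
  datadir="C:/ProgramData/MySQL/MySQL Server 5.5/Data"

  [client]
  # defaults picked up by client programs such as mysql.exe
  port=3306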

What happens when you temporarily rename mysql_config.pl to something 
else? What error messages do you get?


Also, you have established that using the command line client 
(mysql.exe) works just fine for you.  What other tool are you 
attempting to use as a MySQL client that has been giving you problems? 
There are at least two connection libraries you can launch from Python 
that allow you to connect to MySQL.  Which one is yours?


Regards,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: How to update MySQL table based on 3 other tables

2013-09-03 Thread shawn green

Hello Neil,

On 8/24/2013 5:21 AM, Neil Tompkins wrote:

I have the following four MySQL tables

Region
RegionId

City
CityId
RegionId

Hotel
HotelId
CityId

HotelRegion
HotelId
RegionId

I'm struggling to write a UPDATE statement to update the City table's
RegionId field from data in the HotelRegion table.

Basically how can I update the City table with the correct RegionId where
the HotelId in the HotelRegion table matches the City table's CityId.

This is my UPDATE statement at the moment

UPDATE City c
SET c.RegionId = (SELECT DISTINCT(HotelRegion.RegionId)
FROM HotelRegion
INNER JOIN Hotel ON Hotel.HotelID = HotelRegion.HotelID
INNER JOIN City ON City.CityId = Hotel.CityId
WHERE City.CityId = 1233)
WHERE c.CityId = 1233



Have you tried the multi-table syntax of the UPDATE command?
http://dev.mysql.com/doc/refman/5.6/en/update.html


UPDATE City c INNER JOIN HotelRegion h ON h.HotelID = c.CityID
SET c.RegionID = h.RegionID
WHERE ...
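
A fuller sketch of that multi-table form, joining through Hotel so the
keys line up with the schema you described (it still assumes each city
resolves to a single region):

  UPDATE City c
  INNER JOIN Hotel h ON h.CityId = c.CityId
  INNER JOIN HotelRegion hr ON hr.HotelId = h.HotelId
  SET c.RegionId = hr.RegionId
  WHERE c.CityId = 1233;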

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: MySQL version 3.23 to 5.x features

2013-08-21 Thread shawn green

Hello Naga,

On 8/21/2013 6:45 AM, Nagaraj S wrote:

Hello,

Can anyone share the features/comparison from MySQL version 3.23 to 5.x in
single document? I can get from Google, however I have to navigate
different pages/sites, if it is in single document that will be useful to
see the overview of mysql features



While not exactly just one single page for all changes, there is a 
single page of the fine manual within each new major version that 
describes the big features that are new or changed within that version.


http://dev.mysql.com/doc/refman/5.0/en/mysql-nutshell.html
http://dev.mysql.com/doc/refman/5.1/en/mysql-nutshell.html
http://dev.mysql.com/doc/refman/5.5/en/mysql-nutshell.html
http://dev.mysql.com/doc/refman/5.6/en/mysql-nutshell.html

Unfortunately, to learn what we changed between 3.23 and 4.0 or between 
4.0 and 4.1, you are going to need to review the change logs

http://dev.mysql.com/doc/refman/4.1/en/news.html


from http://dev.mysql.com/doc/refman/4.1/en/index.html

This manual describes features that are not included in every edition of 
MySQL 3.23, MySQL 4.0, and MySQL 4.1; such features may not be included 
in the edition of MySQL 3.23, MySQL 4.0, or MySQL 4.1 licensed to you.




I can pretty much summarize the deficiencies in 3.23 like this
* No InnoDB, Archive, CSV, Federated, or Blackhole storage engines
* No table partitioning
* No Views
* No Stored Procedures or Stored Functions
* No Triggers
* No Events
* Severe scalability limits (won't run as fast with reasonably 
concurrent loads even on great hardware as later versions)

* Completely out of print (unpublished) and unsupported.
* Missing literally thousands of bug fixes and performance improvements


Any new project should be starting out with 5.6.  Any production server 
should be on 5.5 or 5.6 by now or migrating soon. It is also a fairly 
safe bet that if you are still operating a 3.23 instance of MySQL that 
it is also time to upgrade your hardware.


Regards,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: uncertain future of Oracle MySQL exams [1Z0-871, 1Z0-872, 1Z0-873, 1Z0-874]

2013-07-22 Thread shawn green

Hello Lukas,

On 7/22/2013 8:16 AM, Lukas Lehner wrote:

Hi

I am now responsible for LAMP applications. I worked previously with Oracle
11g.
It seems that those exams are an easy win for me. I guess my preparation
effort is about 60 hours.

1Z0-871 MySQL 5 Developer Certified Professional Exam, Part I
1Z0-872 MySQL 5 Developer Certified Professional Exam, Part II
1Z0-873 MySQL 5 Database Administrator Certified Professional Exam, Part I
1Z0-874 MySQL 5 Database Administrator Certified Professional Exam, Part II

My concern is about the future of MySQL and the MySQL certifications.

- it seems the exams didn't change since 2005
- Linux distributions ship MariaDB (not MySQL)
- Oracle Press don't publish an official study guide
- MySQL OCP exams are simpler than other OCP exams (for example Oracle 11g)

what do you think?



You are correct. However, after being acquired twice in rapid succession 
and after much internal MySQL reorganization due to each, a few 
resources are back in place to keep up with this stuff again.


For some pretty solid legal reasons Oracle tries to avoid announcing 
much of anything before it is actually ready to be used. Stay tuned to 
the publicity channels for any official announcements if or when they 
are made.


--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Mysql cache issues???

2013-07-15 Thread shawn green

Hello Reindl,

On 7/15/2013 10:28 AM, Reindl Harald wrote:



Am 15.07.2013 16:00, schrieb Hartmut Holzgraefe:

On 15.07.2013 15:31, Egoitz Aurrekoetxea wrote:

Could Mysql cache cause a performance penalty, when invalidating
queries of a big Mysql cache zone?


can, and *will* ... see also http://dom.as/tech/query-cache-tuner/


Optimal size for your query cache: 0

thats ridiculous


cache is locked while entries are being purged to prevent handing out
cached results that may already be out-of-date, and the more active
cache entries that need to be purged the longer it stays locked ...


but in most cases the benefit outbeats this overhead massive
since you have more read than write

[OK] Query cache efficiency: 93.6% (40M cached / 42M selects)
nobody can tell me that all these 40M queries would have been
faster without cache by reduce the overhead



I just want to verify your efficiency formula.

Query Cache Efficiency = Qcache_hits / (Qcache_hits + Com_select)

The way yours is described it looks like it may be a simple ratio of 
Qcache_hits to Com_select which would be inaccurate.



Another way of judging the efficiency of the cache is to compare how 
many times a cached result is actually reused (on average).  Compute a 
very broad reuse rate with:


Reuse estimate = Qcache_hits/Qcache_inserts

If that ratio is under about 5 you have very poor reuse. If it is less 
than 1, you are getting no reuse at all for some of the results you are 
caching. If that's the case, strongly consider disabling the cache or 
using more selective caching techniques as already discussed.
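
Both ratios come straight from the server's status counters, for example:

  SHOW GLOBAL STATUS LIKE 'Qcache_hits';
  SHOW GLOBAL STATUS LIKE 'Qcache_inserts';
  SHOW GLOBAL STATUS LIKE 'Com_select';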




I'm not trying to pick on you, Reindl. Your cache may be doing 
splendidly well. I just didn't want anyone else to see your efficiency 
rating and derive the wrong formula on their own.


Regards,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Mysql cache issues???

2013-07-15 Thread shawn green

Hello Egoitz,

On 7/15/2013 1:35 PM, Egoitz Aurrekoetxea wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 15/07/13 17:27, Reindl Harald wrote:


... snip...
i would say my caches are working perfectly (not only the mysql
cache, also opcache etc.) since whe have generate times down to
0.006 seconds for a typical CMS page here which runs in more than
200 installations on the main machine, at high load mysqld is
never the problem

without the query cache the overall performance drops by 30-40%




Hi,

The query cache hit rate is near 90% so I assume it's doing all
properly... now I'm using 1GB as cache but... I will do some
tries... till I see some significant behavior either due to success or
failure... I was basically wondering what did you though about
performance penalty due to the mysql cache... just that...

Thank you very much then
... signature snipped ...



Until we redesign the query cache, those stalls will remain. It is 
unwize to keep so many sets of query results around if they are not 
actually being used.


As has been covered already, the freeze required to perform the purge of 
all results associated with a specific table can at times be extended 
(durations of 20-30 minutes are not unusual with cache sizes around 
1GB). What you may find is that even if some of your results are reused 
frequently for a short period of time, they are not reused at all beyond 
a certain moment. This means you have hundreds or thousands of sets of 
query results sitting idle in your cache.  Reduce the size of your cache 
until you start to see your reuse rate or efficiency rate decline 
significantly. You may be surprised how small that is for your workload.
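
As a sketch, the size can be adjusted (or the cache disabled) at runtime
while you measure; the figures here are only placeholders:

  SET GLOBAL query_cache_size = 64 * 1024 * 1024;  -- try a much smaller cache
  SET GLOBAL query_cache_size = 0;                 -- or switch it off entirely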


To achieve scalability: customize your cache structures to your workload 
(this may mean caching the results somewhere other than MySQL), optimize 
your tables for efficient storage and retrieval, and optimize your 
queries to be as efficient as practical. There are other scalability 
options such as replication and sharding that can also be introduced 
into your production environment to reduce the cost of computation on 
each copy (or portion) of your data. However, this is a topic best 
handled in a separate thread.


--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: restore question

2013-07-05 Thread shawn green

Hello Jim,

On 7/5/2013 3:11 PM, Jim Sheffer wrote:

Hi everyone-

This is probably a no brainer (I'm new to Navicat) but I have a backup of a 
database from Navicat.

I want to be able to see if a certain field has changed since this morning in the backup (We are 
having problems with an order that somehow duplicated the items.  I need to see if 
there was only 1 of each item or two removed from inventory).  I don't need to do a 
restore into the database, just have a look at the backup.

Is this possible without going through the hoops of creating a copy of the 
database and restoring to the copy (I assume this is possible) - I DO NOT want 
to restore into the currently running database :-)

Any suggestions would be greatly appreciated!



If the Navicat backup used the same process as mysqldump, then your 
table's data is stored in a plain-text SQL script.


Step 1) find the CREATE TABLE...  command for the table you are 
interested in.


Step 2) Just after the table's definition, you should see a sequence of 
INSERT commands. Those are your rows. Use your find or grep or search 
skills to identify the primary key values for the rows you are 
interested in. Visually parse that row (it's contained in its own set of 
() parentheses) to find the 'old' values you seek.
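
For example, if the backup really is a plain SQL script, something like
this narrows it down quickly (the file and table names are made up here):

  grep -n 'CREATE TABLE `orders`' backup.sql
  grep -n 'INSERT INTO `orders`' backup.sql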


Sorry that it's such a manual process but you didn't want to restore so 
you get to pretend to be the database server :)


Yours,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: 1 file

2013-07-04 Thread shawn green

Hello Larry,

On 7/3/2013 11:27 AM, Larry Martell wrote:

We recently changed from in memory files to InnoDB files. Today we
noticed that in every server's data dir there is file called '1' that
seems to get updated every time the iddata1 file gets updated. On some
servers it's comparable in size to the iddata1 file, on other servers
it's 10-15x larger, and on others it's 1/2 the size. What is this
file. Googling revealed nothing about this.



That is not something an official MySQL build would do. Consult with the 
person (or group) that compiled your binaries.


Now, if you have enabled --innodb-file-per-table and if you have named 
your table '1' then that file is probably '1.ibd'.  That would be 
expected. But that seems unlikely based on your other details.


Did you also enable a separate undo log, perhaps? Although if you had, 
it should be 'undo1' not just '1'

http://dev.mysql.com/doc/refman/5.6/en/innodb-parameters.html#sysvar_innodb_undo_tablespaces

So, that simple '1' file also seems unusual to me.
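
One way to check, from the server itself, which of these settings are
actually in effect (the innodb_undo% variables exist only in 5.6 and later):

  SHOW GLOBAL VARIABLES LIKE 'innodb_file_per_table';
  SHOW GLOBAL VARIABLES LIKE 'innodb_undo%';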

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Full text search and & sign as a part of the keyword

2013-07-02 Thread shawn green

Hello,

(my response is not top-posted)
On 7/2/2013 12:50 PM, l...@afan.net wrote:




Another correction: Searching for Com, the test org is NOT
gonna be listed but all others will.

Searching for Com no results at all.


Actually, looks like I'm wrong.

For testing purpose I made an org CompMe.

When search for Comp it's gonna be shown on the list.

When search for Comp it's also gonna be shown. But
Construction Company as well.

Then I changed the name of the test org to ComMe.

Searching for Com, the test org is gonna be listed.

Though, Com no results at all.

?!?


Hi to all,
I have this full text search query

SELECT name, org_id, address_id
FROM organization
WHERE org_active='Y' AND MATCH(name) AGAINST('ABC*' IN BOOLEAN MODE)

and I'm not getting any results. And there IS a org ABC, Inc.
My assumption is the ampersand sign as a part of the keyword.
Any idea?




Read this:
http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_ft_boolean_syntax

Then search on Comp Me.
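
A quick way to see which characters the boolean parser currently treats
as operators, and the minimum indexed word length:

  SHOW GLOBAL VARIABLES LIKE 'ft_boolean_syntax';
  SHOW GLOBAL VARIABLES LIKE 'ft_min_word_len';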

Let us know your results.

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: SHOW INNODB STATUS - FILE I/O - OS reads/writes/syncs?

2013-06-24 Thread shawn green

Hello Rafał,

On 6/24/2013 4:26 AM, Rafał Radecki wrote:

As I can see the changes in these values are use by percona cacti
monitoring templates to graph InnoDB I/O.
Can anyone answer the question finally? ;)


2013/6/21 Hartmut Holzgraefe hart...@skysql.com


On 21.06.2013 13:59, Rafał Radecki wrote:

Hi All.

I've searched but with no luck... what do exactly these variables mean:

1343928 OS file reads, 1085452262 OS file writes, 19976022 OS fsyncs

?


these are the total number of reads/writes/fsyncs (number of system
calls actually?) since the server started (or maybe last FLUSH call?)
and not very meaningful by themselves without knowing the time span
it took to come up to those counter values.

The per second values on the following line are much more interesting.



I don't understand how Hartmut's answer was insufficient. The InnoDB 
engine must get data from the disk (reads), send data to the disk 
(writes), and occasionally tell the operating system that it must flush 
its buffers to disk to ensure durability (fsync).
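
The same counters are also exposed as status variables, which is what
most monitoring templates sample over time:

  SHOW GLOBAL STATUS LIKE 'Innodb_data_reads';
  SHOW GLOBAL STATUS LIKE 'Innodb_data_writes';
  SHOW GLOBAL STATUS LIKE 'Innodb_data_fsyncs';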


Why are you so interested in these numbers?

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Session ID Generation

2013-06-22 Thread shawn green

Hello Steven,

On 6/21/2013 8:50 AM, Steven Siebert wrote:

Great, thanks to all.

I don't mean to defend our auditors, because they are a PITA, but they do
appear to be decently knowledgeable in general - but they aren't, not can
they be expected to, be specific application-level experts - otherwise, the
number of auditors we would be required to hire would be cost
prohibitive...there is a necessary balance =)  Just because MySQL
implements this way (and, obviously is concious of these security
concerns), doesn't mean the latest NoSQL solution deployed to github,
written in python during a cocaine fuelled weekend, does...they aren't here
to say no to whatever software I desire to use, they just need to
verify.  So, really, the wand of ignorance should be pointed in my
direction =)

This leads me to my final question: is this documented anywhere beyond the
source code and this thread?  I was specifically searching for session id
generation, but clearly this search was too narrow. I'll look more
generally for how MySQL establishes connections and maintains sessions -
but if you happen to know where it might be documented off the top of your
head, I would appreciate it.

Thanks again for everyone's insightful and quite helpful responses.
... snipped  ...


I believe that between the source code and the MySQL Internals manual, 
you will get more answers than you might have been looking for.


Of course, if you need any clarification you can always bring those 
questions back to the list.


http://dev.mysql.com/doc/internals/en/client-server-protocol.html

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: How do I select all rows of table that have some rows in another table (AND, not OR)

2013-06-12 Thread shawn green

Hello Daevid,

On 6/11/2013 7:17 PM, Daevid Vincent wrote:




-Original Message-
From: shawn green [mailto:shawn.l.gr...@oracle.com]
Sent: Tuesday, June 11, 2013 2:16 PM
To: mysql@lists.mysql.com
Subject: Re: How do I select all rows of table that have some rows in
another table (AND, not OR)

Hello Daevid,

... snip ...


Shawn, thank you for taking the time to reply. I wasn't expecting the
solution to be so much work with multiple statements like that. I was
thinking it could be done in one (or two, as in split out a portion of it in
PHP and re-insert it to the original SQL to avoid a JOIN or something). Part
of the issue is that we use PHP to generate the $sql string by appending
bits and pieces depending on the search criteria thereby keeping the 'path'
through the SQL statement simple and relatively linear. To implement this
would require significant re-writing and/or special cases where we could
introduce errors or omissions in the future. The frustrating part is that
the REGEXP query we use now only takes about 2 seconds on my DEV VM (same
database as PROD), however when the RDBMS is loaded it then takes up to 30
seconds so in theory it's not even that inefficient given the # rows. We do
use memcached for the results, but since there are so many combinations a
user could choose, our hit ratio is not so great and therefore the cache
isn't doing us much good and this is why the RDBMS can get loaded up easily.

How can an OR be so simple using IN() but AND be so overly complex?
Seems that mysql should have another function for ALL() that works just like
IN() to handle this kind of scenario.




As I said, we could have used a single command but you would have not 
been able to review the 'best' match scenario only those rows that were 
'complete' matches.


And, those three commands can easily be encapsulated within a stored 
procedure. Pass in two strings (one listing the values to find, one 
listing the values to reject) and use PREPARED STATEMENTS within the 
procedure to build the IN() lists within the first and second commands. 
 The value in the 'HAVING' clause in the last command (unless you use 
the other option of reviewing the list of 'closest' matches) can be set 
to the number of items in the list of things to find parameter to your 
procedure.
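
For what it is worth, the core of that "match ALL of these" test is just
an IN() list plus a HAVING count; a minimal sketch with made-up table and
column names:

  SELECT title_id
  FROM title_genre
  WHERE genre_id IN (3, 7, 11)             -- the genres that must ALL be present
  GROUP BY title_id
  HAVING COUNT(DISTINCT genre_id) = 3;     -- 3 = number of items in the IN() list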


I was demonstrating a principle you could use and not necessarily giving 
you a full solution.  Some tweaking may be required.


Also,  by encapsulating what I wrote within a stored procedure, this 
changes my 3 statements to a single call that you can easily configure 
from your PHP application.


Regards,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: How do I select all rows of table that have some rows in another table (AND, not OR)

2013-06-11 Thread shawn green
 would not have allowed us to evaluate a partial match 
(like 5 out of 7 target genres), only complete matches would have been 
returned.


I am still very interested in seeing alternative solutions :)

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Bug in BETWEEN same DATETIME

2013-05-24 Thread shawn green

Hello Rick,

On 5/23/2013 7:08 PM, Rick James wrote:

Watch out for CAST(), DATE(), and any other function.  In a WHERE clause, if 
you hide an indexed column inside a function, the index cannot be used for 
optimization.

   INDEX(datetime_col)
   ...
   WHERE DATE(datetime_col) = '2013-01-01'
will not use the index!

The workaround is messy, but worth it (for performance):
   WHERE datetime_col >= '2013-01-01'
     AND datetime_col <  '2013-01-01' + INTERVAL 1 DAY
(or any of a zillion variants)

(Yeah, it seems like the optimizer could do the obvious transformation for you. 
 Hint, hint, Shawn.)



Or we could coerce datetime values back to their date values when both 
are being used. The trick now becomes choosing between rounding the 
datetime value (times past noon round to the next date) or using the 
floor() function all the time.


This has been discussed and the consensus was that the most predictable 
and performant behavior was to extend a date value to become a datetime 
value by associating it with midnight (00:00:00).


Let's look at some examples:
a)    '2013-05-14 07:00:00' >= '2013-05-14'
This is true as the datetime value is 7 hours after midnight.

b)    '2013-05-14 07:00:00' >= '2013-05-14'
  AND '2013-05-14 07:00:00' <  '2013-05-15'
This is true as the time value is somewhen between both midnights.

c)    '2013-05-14 07:00:00' > '2013-05-14' + INTERVAL 8 HOUR
This is false. The offset applied to the date term means the time 
portion of the resulting datetime value is 0800, not 0000. (0700 > 0800) 
is false.


d) And what if instead of comparing against the FLOOR() of each date we 
rounded datetime values up or down to their nearest dates?

   '2013-05-14 17:00:00' = '2013-05-14'
This would be false because the datetime value would have rounded up to 
'2013-05-15'.



There is also a strong desire to make a database server try to do 
exactly what the user tells it to do. If the user wants to compare a 
value to another value with an equality check, we should do that.  It 
would be very odd behavior if an equality check suddenly turns into a 
ranged check.  I realize how much time it would save people to not need 
to include both ends of the range:

WHERE datetime_col >= '2013-01-01'
  AND datetime_col <  '2013-01-01' + INTERVAL 1 DAY
but for predictability and reliability, this is one rewrite that may not 
always be true.


Perhaps a new operator like IN_DATE is what you are looking for?
(example)
   '2013-05-14 17:00:00' IN_DATE '2013-05-14'
This would be true.

But what if the date field were a due date and all submissions needed to 
be in by noon on those dates?  This IN_DATE operator would return true 
for submissions beyond the deadline, too (a wrong result).  But a direct 
comparison would be true:

   '2013-05-14 17:00:00' >= '2013-01-01'
   AND '2013-05-14 17:00:00' <= '2013-05-14' + INTERVAL 12 HOUR

Of course you could always write that as a stored function, too.
(pseudo declaration)
CREATE FUNCTION IN_DATE(dateval, datetimeval) (...)

(example usage)
IN_DATE('2013-05-14','2013-05-14 17:00:00')
This could be true, too.  But a stored function would interfere with the 
use of indexes to resolve the query.
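
Purely as an illustration (this is not a built-in function, and as noted
it would defeat index use), such a helper could be written as:

  CREATE FUNCTION IN_DATE(d DATE, dt DATETIME) RETURNS TINYINT(1)
  DETERMINISTIC
  RETURN dt >= d AND dt < d + INTERVAL 1 DAY;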


Then we get into the problems of how this operator or function would 
handle comparisons to TIMESTAMP columns and many other variations 
like how to see if a datetime is within a 2 day span instead of one. 
It's a major ball of wax to rewrite these queries in the optimizer just 
to avoid one extra line of SQL code per comparison. To us Keep It 
Simple, Stupid! (KISS) seemed like the better approach to the problem. 
We do one thing (make dates represent midnight on that date when they 
need to be compared to datetime values) and allow the users to decide 
how to handle the rest of the comparison according to their specific needs.


Respectfully,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Bug in BETWEEN same DATETIME

2013-05-24 Thread shawn green

Hi Rick,

Thank you for continuing this. It brings up a few good points.

On 5/24/2013 12:17 PM, Rick James wrote:

For years (even decades), I have stayed out of trouble by assuming a 'date' 
represents the instant in time corresponding to midnight at the start of that 
day.  In MySQL (until 5.6), that is equivalent to a 1-second DATETIME.  I also 
assume midnight belongs to the day that it is the _start_ of.


There is also a strong desire to make a database server try to do exactly what 
the user tells it to do.

That is difficult, because of definitions and representation.
A common problem is comparing a FLOAT value to a 'decimal' value like 1.23.  
MySQL does a good job of covering some cases, but there are still cases between 
DECIMAL, FLOAT, DOUBLE, and literals that will register as inequality, to the 
surprise of the user.
I see the DATE problem as another thing where the user needs to understand the 
computer's algorithm, which, as Shawn points out is:

We do one thing (make dates represent midnight on that date when they need to be 
compared to datetime values) and allow the users to decide how to handle the rest of the 
comparison according to their specific needs.


WHERE datetime_col >= '2013-01-01'
  AND datetime_col <  '2013-01-01' + INTERVAL 1 DAY

but for predictability and reliability, this is one rewrite that may not always 
be true.

So, to be safe, one should perhaps say:
   WHERE datetime_col >= '2013-01-01 00:00:00'
     AND datetime_col <  '2013-01-01 00:00:00' + INTERVAL 1 DAY

IN_DATE (or maybe ON_DAY) is an interesting idea.  I assume it would be 
transliterated by the parser into something like the expression above, then optimized 
based on which part(s) are columns and which are literals.


 '2013-05-14 17:00:00' >= '2013-01-01'
 AND '2013-05-14 17:00:00' <= '2013-05-14' + INTERVAL 12 HOUR

There's an extra second in that!  (I call it the midnight bug.)



It includes the extra second only because your second comparison is 
using <= and not just <




I perceive (rightly or wrongly) that comparing a TIMESTAMP to something first 
converts the TIMESTAMP value to a string ('2013-...').  Shawn, perhaps this 
statement belongs as part of the 'algorithm' explanation?



Sort of. TIMESTAMP is not a universal constant (as you say later); it is 
timezone-aware. This causes all sorts of problems when the UTC value it 
keeps internally is used for date-literal comparisons on or about the 
time of Daylight Saving Time changes happen in the timezones that 
support them.


During the 'spring forward' period (which depends on hemisphere) there 
will be a gap of times that do not exist. The clock shifts from 
01:59:59.99 to 03:00:00. During the 'fall back' period, the 
values will repeat the 02:00:00 to 02:59:59.99 range. Some 
queries will return multiple rows.


To be timezone agnostic, always store date values relative to UTC or 
store them as integer values using the FROM_UNIXTIME() and 
UNIX_TIMESTAMP() functions. Then convert the absolute time to a local 
timezone during presentation.


http://dev.mysql.com/doc/refman/5.6/en/date-and-time-functions.html
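
A small sketch of that pattern (the table and column names are invented
for the example):

  -- store the absolute instant as seconds since the epoch
  INSERT INTO events (event_name, event_epoch)
  VALUES ('release', UNIX_TIMESTAMP('2013-05-14 17:00:00'));

  -- convert back to a local time zone only at presentation time
  SELECT event_name, FROM_UNIXTIME(event_epoch) AS local_time FROM events;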


Yes, you might get in trouble if the same SELECT were run in two different timezones at 
the same time.  Or, TIMESTAMP might help you get the right answer.

There are something like 5 different datetime concepts.  MySQL covers 2 of 
them.
 DATETIME is a picture of _your_ clock.
 TIMESTAMP is an instant in the _universe_.
For these, and others, think of a recurring event on a calendar, a sporting 
event, an appointment (potentially in a diff timezone), train schedule, etc.



For an explanation between the DATETIME and TIMESTAMP data types, I 
encourage the others on this list to review:

http://dev.mysql.com/doc/refman/5.6/en/datetime.html

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Bug in BETWEEN same DATETIME

2013-05-23 Thread shawn green



On 5/23/2013 4:55 PM, Daevid Vincent wrote:

I just noticed what I consider to be a bug; and related, has this been fixed
in later versions of MySQL?

We are using:
mysql  Ver 14.12 Distrib 5.0.92, for portbld-freebsd8.1 (amd64) using  5.2

If you use BETWEEN and the same date for both parts (i.e. you want a single
day) it appears that the operator isn't smart enough to consider the full
day in the cases where the column is a DATETIME

http://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#operator_between

WHERE `transaction_date` BETWEEN '2013-04-16' AND '2013-04-16'

I actually have to format it like this to get results

WHERE `transaction_date` BETWEEN '2013-04-16 00:00:00' AND '2013-04-16
11:59:59'



From the Fine Manual...
http://dev.mysql.com/doc/refman/5.0/en/date-and-time-type-conversion.html
###
 Conversion of DATE values:

Conversion to a DATETIME or TIMESTAMP value adds a time part of 
'00:00:00' because the DATE value contains no time information.

...
 Prior to MySQL 5.0.42, when DATE values are compared with DATETIME 
values, the time portion of the DATETIME value is ignored, or the 
comparison could be performed as a string compare. Starting from MySQL 
5.0.42, a DATE value is coerced to the DATETIME type by adding the time 
portion as '00:00:00'. To mimic the old behavior, use the CAST() 
function to cause the comparison operands to be treated as previously. 
For example:


date_col = CAST(datetime_col AS DATE)

###

That seems pretty clear to me as not a bug.
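
For completeness, one common way to cover the whole day without spelling
out an end-of-day literal is a half-open range on the DATETIME column:

  WHERE `transaction_date` >= '2013-04-16'
    AND `transaction_date` <  '2013-04-16' + INTERVAL 1 DAY
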
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Version 5.6.2-m5 Boolean Datatype

2013-05-22 Thread shawn green

Hello Neil,

On 5/22/2013 1:05 PM, Neil Tompkins wrote:

Hi, Like the link states

For clarity: a TINYINT(1) datatype does NOT ENFORCE a boolean value data
entry. For instance, it's still possible to insert a value of 2 (any
integer up to the TINYINT max value). I personally don't see the added
value of a 'BOOLEAN' synonym type which infact behaves unlike a boolean
should.

Has BOOL, BOOLEAN been taken out of MySQL 5.6 ?


On Wed, May 22, 2013 at 6:01 PM, Ian Simpson i...@it.myjobgroup.co.uk wrote:


BOOLEAN is a synonym for TINYINT(1) in MySQL:

http://dev.mysql.com/doc/refman/5.6/en/numeric-type-overview.html


On 22 May 2013 17:55, Neil Tompkins neil.tompk...@googlemail.com wrote:


Hi,

I've just created some tables that I designed using the MySQL Workbench
Model.  However, the database type BOOLEAN which was in my models has been
converted to TINYINT(1);  I'm currently running MySQL Version 5.6.2-m5 on
Windows 2008 server.

Any ideas why this has been removed ?



This is exactly the same behavior that MySQL has had for over a decade. 
Nothing has been added or removed since release 4.1.0 (2003-04-03)

http://dev.mysql.com/doc/refman/4.1/en/numeric-type-overview.html
http://dev.mysql.com/doc/refman/4.1/en/news-4-1-x.html
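
A quick demonstration of that synonym behavior:

  CREATE TABLE t (flag BOOLEAN);
  SHOW CREATE TABLE t;   -- the column is reported as `flag` tinyint(1)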

Also, why are you using a pre-release (milestone) version of 5.6 when 
the full release (GA) versions of 5.6 are available?

http://dev.mysql.com/doc/relnotes/mysql/5.6/en/

Regards,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: NET START MYSQL QUESTION?

2013-05-12 Thread shawn green

Hello Reindl,

On 5/11/2013 11:52 AM, Reindl Harald wrote:

...  virtually
nonofy is using mysql on windows seriously as i have
not touched windows since 2006 at all



Your experience is not indicative of the population as a whole. Many 
important and mission-critical installations exist on Windows.


You may not, or ever, choose to use Windows as a base platform but many 
people do and do superbly with their choice of OS.


--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Slow Response -- What Does This Sound Like to You?

2013-05-09 Thread shawn green

Hello Eric,

On 5/9/2013 7:13 PM, Robinson, Eric wrote:

-Original Message-
From: Wm Mussatto [mailto:mussa...@csz.com]
Sent: Thursday, May 09, 2013 3:50 PM
To: Robinson, Eric
Cc: Rick James; mysql@lists.mysql.com
Subject: RE: Slow Response -- What Does This Sound Like to You?

On Thu, May 9, 2013 15:25, Robinson, Eric wrote:



-Original Message-
From: Robinson, Eric [mailto:eric.robin...@psmnv.com]
Sent: Thursday, May 09, 2013 1:58 PM
To: mysql@lists.mysql.com
Subject: Slow Response -- What Does This Sound Like to You?

We have a situation where users complain that the system periodically
freezes for 30-90 seconds. We check the slow query logs and find that
one user issued a complex query that did indeed take 30-90 seconds to
complete. However, NO slow queries are recorded for the other 50
users, before, during, or after the freeze. Note that the complex
query in question always shows: Lock_time: 0.

Q: What conditions could cause a single query to lock up a database for
a while for all users (even though it shows lock time: 0) but no
other slow queries would show in the logs for any other users who are
hitting the database at the same time?

OS: RHEL3 x64
CPU: 8 x 2.9GHz Xeon
RAM: 32GB
Disk: RAID 5 (6 x 512GB SSD)
MySQL: 5.0.95 x64
Engine: MyISAM





MyISAM?  Or InnoDB?
Lock_time perhaps applies only to table locks on MyISAM.

SHOW ENGINE InnoDB STATUS;
You may find some deadlocks.

Is Replication involved?

Anyone doing an ALTER?




MyISAM, no replication involved, and nobody is altering the database.
This happens whenever people run certain reports.


--Eric

This may be a dumb question, but have you verified that the
applications do not issue a Lock TABLES ...? Either the big
one or one of the others.



I have not verified this, but it should be easy to find out. Hopefully that is 
not the case as it is a canned application and we don't have access to the code.

--Eric



Another option to keep in mind is the effect of a very large Query 
Cache. Each change to a table must invalidate every query (and their 
results) that derived from that table. For large caches, that can bring 
the server to a cold halt until the purge completes.


Try disabling it entirely and see how that affects performance or make 
it much smaller.


--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Cannot load time zone tables to MySQL

2013-04-06 Thread shawn green

Hello Joe

On 4/5/2013 10:25 AM, Joe Kosinski wrote:

I have installed the community edition version 5.6 to my MacBook running
OSX 10.8.3.  I am trying to set up MythTV backend which requires the
time zone tables to be loaded for the database I will be using.

I have not been able to load the tables using the syntax of version 5.6.

I have tried mysql> mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql
-u root, but nothing happens and my prompt just goes to -> instead of mysql>.

I have attached my terminal output.



You are attempting to perform a system-level command from within the 
MySQL CLI client.


Drop back to a system prompt and try again.

Yours,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Cannot load time zone tables to MySQL

2013-04-05 Thread shawn green

Hi Joseph,

We request that except for sensitive information that you keep all 
responses on the list.  This way the entire community can kick in for 
assistance.


On 4/5/2013 12:25 PM, Joseph Kosinski wrote:

Thanks for your response. This has been troubling me for days! I am
not too familiar with terminal commands and just learning. I guess I
don't know how to drop back to a system prompt. I thought I did but
whenever I try to execute the command I get an error as shown in my
attached message.


(from the attached log)

Last login: Fri Apr  5 11:43:09 on console
Joseph-Kosinskis-MacBook:~ josephkosinski$ mysql_tzinfo_to_sql 
/usr/share/zoneinfo | mysql -u root mysql
-bash: mysql: command not found
-bash: mysql_tzinfo_to_sql: command not found
Joseph-Kosinskis-MacBook:~ josephkosinski$ cd ..
Joseph-Kosinskis-MacBook:Users josephkosinski$ mysql_tzinfo_to_sql 
/usr/share/zoneinfo | mysql -u root mysql
-bash: mysql: command not found
-bash: mysql_tzinfo_to_sql: command not found
Joseph-Kosinskis-MacBook:Users josephkosinski$ cd ..
Joseph-Kosinskis-MacBook:/ josephkosinski$ mysql_tzinfo_to_sql 
/usr/share/zoneinfo | mysql -u root mysql
-bash: mysql: command not found
-bash: mysql_tzinfo_to_sql: command not found
Joseph-Kosinskis-MacBook:/ josephkosinski$ cd ..
Joseph-Kosinskis-MacBook:/ josephkosinski$


...earlier replies snipped ...

Good news! you are at a system prompt.

You do not seem to have the appropriate folder in your search path so 
the system cannot find it for you automatically. You can either update 
the search path or you can use a command like this to call the program:

/path/to/mysql_tzinfo_to_sql
(replace /path/to/ with the appropriate names of your folders)

This program is installed to the same .../bin folder as your mysqld 
binary.  Use a which command to clarify where that is on your system.


which mysql_tzinfo_to_sql

Then use the path it identifies when executing the utility.
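
Once the utility runs cleanly, a quick check from the mysql client confirms 
the zone data is in place (the zone name below is just an example):

SELECT COUNT(*) FROM mysql.time_zone_name;
SELECT CONVERT_TZ('2013-04-05 12:00:00','UTC','America/New_York');
# CONVERT_TZ() returns NULL for named zones until the time zone tables are loaded
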
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Join query returning duplicate entries

2013-04-04 Thread shawn green

Hello Trimurthy,

On 4/4/2013 3:21 AM, Trimurthy wrote:

Hi list,
 i wrote the following query and it is returning duplicate entries
as shown below. Can anyone suggest how to avoid these duplicate entries
without using DISTINCT?

Query:

select p.date,p.coacode,p.type,p.crdr,p.quantity,p.amount from
ac_financialpostings p join (select iac from ims_itemcodes where (cat =
'Male Birds' or cat = 'Female Birds')) i on p.coacode = i.iac where p.trnum
like '%02' and p.date between '2012-10-04' and '2013-04-04' order by date



Some alternatives to the DISTINCT clause are EXISTS and IN().

Examples:

select ...
from
ac_financialpostings p
WHERE
  exists (select 2 from ims_itemcodes where (cat = 'Male Birds' or cat 
= 'Female Birds') and ims_itemcodes.iac = p.coacode)

AND p.trnum like '%02'
AND p.date between '2012-10-04' and '2013-04-04'
order by date

select ...
from ac_financialpostings p
WHERE
   p.coacode IN(select iac from ims_itemcodes where (cat =
'Male Birds' or cat = 'Female Birds'))
 AND p.trnum
like '%02' and p.date between '2012-10-04' and '2013-04-04' order by date

Or you can use the DISTINCT clause in your subquery, too

select ...
from ac_financialpostings p
join (select DISTINCT iac from ims_itemcodes where (cat =
'Male Birds' or cat = 'Female Birds')) i on p.coacode = i.iac where p.trnum
like '%02' and p.date between '2012-10-04' and '2013-04-04' order by date

Or you can use an explicit temporary table

CREATE TEMPORARY TABLE tmp_iac (key(iac)) ENGINE=MEMORY
SELECT DISTINCT iac
FROM ims_itemcodes
WHERE cat IN('Male Birds', 'Female Birds');

SELECT ...
FROM ac_financialpostings p
JOIN tmp_iac
  ON tmp_iac.iac = p.coacode
WHERE ...
ORDER BY ... ;

DROP TEMPORARY TABLE tmp_iac;

The advantage to this is that before MySQL 5.6, the implicit temporary 
table created by your subquery was not indexed. For more than a trivial 
number of rows to compare against, that can reduce overall performance 
because the results of your subquery would need to be scanned for each 
row of the outer table in the main query it was joining to. Based on the 
WHERE clause, not every row from the outer table may end up in the 'join set', 
so this is not always a Cartesian product problem.


--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Possible issues with replication filters

2013-03-25 Thread shawn green

Hello Dimitre,

On 3/24/2013 5:23 PM, Radoulov, Dimitre wrote:

Hi all,
we need to move a few databases from one host to another.
During the process we need to use replication filters (to replicate
only the databases to be moved).
The current MySQL version is 5.5, the new one will be 5.6.
We're using mixed binlog_format.

We're planing to use:

replicate-wild-do-table=db_name1.%
replicate-wild-do-table=db_name2.%
...

The documentation states the following about the replicate-wild-do-table
option:

==
This option applies to tables, views, and triggers. It does not apply to
stored procedures and functions,
or events. To filter statements operating on the latter objects, use one
or more of the |--replicate-*-db| options.
==

Does anybody know how exactly this option doesn't apply for stored
procedures, functions or events?
Is the creation DDL skipped or their execution isn't replicated?

Anyway, I assume that with the following configuration (i.e. by adding
replicate-do-db),
I'll be able to replicate all statements related to db_name1 and db_name2
(excluding possible grant/revoke statements, because the mysql system
database
won't be replicated).

Is that correct?
Is there a better (more robust) way to replicate a subset of databases?

replicate-wild-do-table=db_name1.%
replicate-wild-do-table=db_name2.%
...
replicate-do-db=db_name1
replicate-do-db=db_name2
...


Thanks
Dimitre



Stored procedures and functions do not exist at the table level; they 
exist at the database (schema) level and are stored centrally in the mysql 
system database. Therefore there is no way to filter statements like 
CREATE PROCEDURE or DROP FUNCTION based on table-level substring matches.


The key to notice is 'operating on' in the text you quoted. If you limit 
execution of those DDL statements to just a few databases, then any user 
with enough privileges who starts the DDL command from the context of 
a permitted database will be able to affect those objects on the slave 
via replication. If you change one of them on the master and you do it 
from the context of a database that is not on the 'do list', then that 
change will not be applied to the slave via replication.


Based on your example, a DBA starting in the db_name1 database or 
db_name2 database would be able to affect a PROCEDURE or FUNCTION on the 
master and through replication, the same command would execute on the 
slave.


Controlling this behavior is one of the uses of the 'principle of least 
privileges'. In short, it means you give each user just enough rights to 
do what it is they are supposed to do. For example, you want very few 
users to have the SUPER privilege or the WITH GRANT OPTION clause 
on their accounts. In my position I see many servers operating where 
every user (including applications) is operating with root privileges. 
This is as bad for databases as it is for operating systems.
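
As a sketch only (the account name, host pattern, and password are 
hypothetical), a least-privilege setup for an application that only needs 
db_name1 might look like this:

CREATE USER 'app'@'10.0.0.%' IDENTIFIED BY 'choose_a_strong_password';
GRANT SELECT, INSERT, UPDATE, DELETE ON db_name1.* TO 'app'@'10.0.0.%';
# no SUPER, no GRANT OPTION, no access to any other schema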


Yours,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Foreign key on multiple columns

2013-03-21 Thread shawn green



On 3/21/2013 12:43 PM, Abhishek Choudhary wrote:

   CREATE TABLE test2 (
 ID INT NOT NULL AUTO_INCREMENT,
 col1 INT NOT NULL,
 col2 INT NOT NULL,
 PRIMARY KEY (ID),
 CONSTRAINT fk FOREIGN KEY (col1, col2)
   REFERENCES test1(ID, ID)
 ON UPDATE CASCADE
 ON DELETE RESTRICT
   ) ENGINE=InnoDB;



I think the error is because you are referencing the same column of test1 twice 
(ID, ID). Try changing the columns being referenced and then run the code; 
hopefully that will solve your problem.


Another solution would be to make two FK declarations, one for each column.

  CREATE TABLE test2 (
ID INT NOT NULL AUTO_INCREMENT,
col1 INT NOT NULL,
col2 INT NOT NULL,
PRIMARY KEY (ID),
CONSTRAINT fk FOREIGN KEY (col1)
  REFERENCES test1(ID)
ON UPDATE CASCADE
ON DELETE RESTRICT ,
CONSTRAINT fk2 FOREIGN KEY (col2)
  REFERENCES test1(ID)
ON UPDATE CASCADE
ON DELETE RESTRICT
  ) ENGINE=InnoDB;

This is the preferred syntax and it meets your original intent of 
associating both col1 and col2 to the ID column of the other table.


--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: data loss due to misconfiguration

2013-02-26 Thread shawn green

Hello Zachary,

On 2/26/2013 4:42 PM, Zachary Stern wrote:

Any idea what can cause this? Can it be misconfiguration? Could it be
because I'm messing with MySQL's memory usage so much, or the thread
settings?

My config is linked at the bottom, for reference. I might be doing
something terribly dumb.

The stuff I've done under # * Fine Tuning worries me the most, but I'm
just not sure how I might be causing this.

I've seen things like this on other servers before but never been able to
nail down the issue. So my config options being the common denominator here
- I suspect that it's something I'm doing.

However - if there were missing commits, the data is gone as you said - so
wouldn't I never see it to begin with?

https://gist.github.com/zacharyalexstern/5042483



If binary logging is enabled, review your logs to ensure that the 
changes you think are going into your database, really are.  If they 
are, look for anything that may remove data from a table like DELETE 
commands or ALTER TABLE ... PARTITION ... commands (dropping or resizing 
your partitions)


If you haven't already, verify in your error log that your MySQL isn't 
being randomly killed by your OS for exhausting its RAM.


If those are both a bust, enable the General Query Log and see if any 
other weird commands you don't recognize are being sent to your database.


http://dev.mysql.com/doc/refman/5.6/en/server-logs.html
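
As a rough sketch (the log file path is only a placeholder), the checks and 
the General Query Log can all be handled at runtime:

SHOW VARIABLES LIKE 'log_bin';                      # is binary logging on at all?
SHOW BINARY LOGS;                                   # which logs exist to review
SET GLOBAL general_log_file = '/tmp/general.log';   # placeholder path
SET GLOBAL general_log = 'ON';                      # remember to turn it OFF again later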

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: adding column breaks query

2013-01-28 Thread Shawn Green

Hello Larry,

On 1/28/2013 10:42 AM, Larry Martell wrote:

I have a query that works fine:


SELECT data_ppl.name as 'P/P/L', data_recipe.name as Recipe,
data_cstmeta.date_time as 'Last Run'
FROM data_recipe, data_ppl, data_cstmeta
INNER JOIN (SELECT id,recipe_id, MAX(date_time) as MaxDateTime
FROM data_cstmeta
GROUP BY recipe_id) grouped
ON data_cstmeta.id = grouped.id
AND data_cstmeta.date_time = grouped.MaxDateTime
WHERE data_cstmeta.ppl_id = data_ppl.id
AND data_cstmeta.recipe_id = data_recipe.id
ORDER BY data_cstmeta.date_time desc;


Now I need to add one more column to it, data_tool.name, so I did this:


SELECT data_ppl.name as 'P/P/L', data_tool.name as Tool,
data_recipe.name as Recipe, data_cstmeta.date_time as 'Last Run'
FROM data_recipe, data_ppl, data_cstmeta, data_tool
INNER JOIN (SELECT id,tool_id,recipe_id, MAX(date_time) as MaxDateTime
FROM data_cstmeta
GROUP BY recipe_id) grouped
ON data_cstmeta.id = grouped.id
AND data_cstmeta.date_time = grouped.MaxDateTime
WHERE data_cstmeta.ppl_id = data_ppl.id
AND data_cstmeta.recipe_id = data_recipe.id
AND data_cstmeta.tool_id = data_tool.id
ORDER BY data_cstmeta.date_time desc;

And now it fails with:

ERROR 1054 (42S22): Unknown column 'data_cstmeta.id' in 'on clause'

I've messed around with this for a day and I can't figure out what
stupid mistake I'm making.



You didn't add just one column, you added an entire table to the FROM 
clause.  Also, you used a comma-join instead of an ANSI JOIN clause.


Because of changes we made in 5.0.12 to make MySQL behave more like the 
SQL standard requires, the priority of the comma operator in join 
operations was demoted. This means that when the explicit JOIN 
declaration is being evaluated (which has a higher order of precedence) 
only the derived table (grouped) and the base table `data_tool` are 
visible to the ON clause. The `data_cstmeta` implicit join will be 
evaluated later as it is based on a comma-join.


I suggest you rewrite your queries to only use the explicit JOIN ... ON 
... syntax for all of your joins to avoid these problems in the future.

http://dev.mysql.com/doc/refman/5.5/en/join.html
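
A hedged sketch of that rewrite, keeping your table and column names and 
turning every comma-join into an explicit JOIN ... ON, might look like this:

SELECT data_ppl.name AS 'P/P/L', data_tool.name AS Tool,
       data_recipe.name AS Recipe, data_cstmeta.date_time AS 'Last Run'
FROM data_cstmeta
INNER JOIN (SELECT id, recipe_id, MAX(date_time) AS MaxDateTime
            FROM data_cstmeta
            GROUP BY recipe_id) grouped
        ON data_cstmeta.id = grouped.id
       AND data_cstmeta.date_time = grouped.MaxDateTime
INNER JOIN data_ppl    ON data_cstmeta.ppl_id    = data_ppl.id
INNER JOIN data_recipe ON data_cstmeta.recipe_id = data_recipe.id
INNER JOIN data_tool   ON data_cstmeta.tool_id   = data_tool.id
ORDER BY data_cstmeta.date_time DESC;

Because data_cstmeta now sits inside the same explicit JOIN chain, it is 
visible to the ON clause of the derived table and the 1054 error goes away.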

Regards,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Automatic reply: access denied to non-root@localhost null-string user in USER_PRIVILEGES

2013-01-03 Thread Shawn Green

Hello Round,

On 12/27/2012 5:34 AM, Round Square wrote:

On 12/26/2012 01:25 PM, Igor Shevtsov wrote:

You mysql.user table might be corrupted.
If you have access to it as a root user, try check table mysql.user, and repair 
table mysql.user if table corruption was detected.
Alternatively, shut down mysql server, cd /var/lib/mysql/mysql (to your 
$datadir/mysql directory) and run
mysqlcheck -r mysql user




I deleted the row with the empty user from mysql.user, then restarted the 
daemon, and all seems back to normal now.

One lingering question is:  why did mysql allow this to happen?  Could this be 
considered a bug?  After all, an inadvertent and seemingly harmless insertion 
leads to authentication failure for all users.

Are there any other known similar gotchas?  The fix for this one appears so 
trivial as to perhaps NOT call for a restore-from-backup.  But there could be 
other similar glitches that might call for that?

Thanks!




No. Your tables were not corrupted. There is no need to restore from 
backup. In fact, it may have been an inappropriate restore that created 
this situation in the first place.


What you had was either an intentional or unintentional failure in 
security. The system was performing appropriately for the accounts that 
you had configured.








On 26/12/12 18:00, Round Square wrote:

...
Poking around in puzzlement and comparing the current, broken state with the 
functioning state (from backup) I discovered that in the broken version there 
is this extra line in the information_schema.USER_PRIVILEGES table:

| ''@'localhost' | NULL | USAGE | NO |

(Note the null-string user prepended to @localhost)

Again: the functional, non-broken state does NOT have this entry.  Thus, my 
current theory is that this line is the culprit.  Prior to the failure I had a 
surge of experimental installations, installing third-party software that 
created mysql tables, and can't clearly retrace everything I did, at this 
point, to pinpoint the installation that may have caused it.

Be that as it may...

(1) Is my theory correct?
(2) If that line should not be there...
 (a) How do I remove it, properly? I don't have debian-sys-maint 
privileges to delete the line. (Or do I?)
 (b) Are there other tables, besides USER_PRIVILEGES, that would need 
to be updated/purged



Yes. Your theory is correct. Why it had such an effect on your other 
logins is covered in the free, searchable, and publicly-available user 
manual (the only kind we have):

http://dev.mysql.com/doc/refman/5.5/en/connection-access.html

Removing a user account (a MySQL login) does not require 
debian-sys-maint privileges as this is not a Linux-level account. You 
need an appropriately-privileged  MySQL account to do this maintenance.


As you most likely discovered, we cannot directly modify the tables of 
the INFORMATION_SCHEMA database. Those tables are constructed 
dynamically (on demand) using information permanently stored in other 
places.  In order to remove that account, you need to issue a DROP USER 
command or you need to be able to edit the `mysql`.`user` table.

http://dev.mysql.com/doc/refman/5.5/en/drop-user.html
http://dev.mysql.com/doc/refman/5.5/en/grant-table-structure.html
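
For the account shown above, a minimal sketch of the cleanup is:

DROP USER ''@'localhost';
FLUSH PRIVILEGES;   # only needed if you edited the mysql.user table directly instead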




--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: sales data every publisher

2012-12-18 Thread Shawn Green

On 12/18/2012 3:52 AM, Haidar Pesebe wrote:

Hi all--


There are 3 tables which each like this:

NOTE: The linkage between table A and table B is ID and IDPUBLISHER, while Table
B to C is the ISBN. Sometimes there are some titles that are not sold
in a given month.

TABLE A (Publisher)
  ---
  ID : NAME : EMAIL :
  ---
  1 : ABC : abc@abc
  2 : CDE : cde@cde
  ---

  TABLE B (BOOKS TABLE)
  
  : IDBOOK: TITLE : PUBLISHER ID : ISBN
  
  : 1 :TITLE 01 :  1 : 001
  : 2 :TITLE 02 :  1 : 002
  : 3 :TITLE 03 :  2 : 003
  : 4 :TITLE 04 :  2 :004
  

  TABLE C (SALES OF BOOKS)

  --
  : IDSALES : ISBN : PRICE : QTY : DATE :

  ---
  : 1: 001: 100: 20 :   2012-12-01 :
  : 2: 001: 100  : 11 :   2012-12-01 :
  : 3: 002: 60  : 15 :   2012-12-01 :
  : 4: 003: 30: 10 :   2012-12-01 :
  : 5: 003: 30  : 7 :2012-12-01 :
  : 6: 003: 30: 8 :2012-12-01 :
  : 7: 004: 50  : 10 :   2012-12-01 :
  ---

  How do I call up the sales in December 2012 for ABC Publisher or call the
  sale in 2012 for ABC publisher?



RESULT OF Sales Books of ABC Publisher in December 2012

---


No. : Books Title : ISBN  :QTY : AMOUNT

---


1. : Title 01 : 001   :  31 : 3,100

2. : Title 02  : 002   :  15 : 900


 and so on .

---


help me to solve this problem



1) You need some joins. This is how you link your rows together. For 
data that can be there but isn't required to be there, you use one of 
the OUTER JOIN terms of LEFT JOIN or RIGHT JOIN. This allows us to 
combine columns from different tables into the same report. This is also 
a good time to learn about using aliases for table names and column names


2) You need a GROUP BY to summarize certain values (like amount) for the 
rows you retrieve.


3) You use some conditions in a WHERE clause to limit what it is you 
want to summarize.



SELECT
  p.name  # the name of the publisher
, b.title as 'Books Title'
, b.ISBN
, SUM(s.QTY) as 'QTY'  # the number actual books sold
, SUM(s.QTY * s.PRICE) as 'AMOUNT'  # the total value of all books sold
FROM publishers p
INNER JOIN books b
  ON b.`publisher id` = p.id
# this is optional information as a book may not have any sales data for
# the given date range so we use a LEFT JOIN
LEFT JOIN sales s
  ON s.ISBN = b.ISBN
  # keep the date range inside the ON clause so books with no sales still appear
  AND s.date >= '2012-12-01' AND s.date < '2013-01-01'
GROUP BY p.name, b.title, b.ISBN

For more details:
http://dev.mysql.com/doc/refman/5.5/en/group-by-functions.html
http://dev.mysql.com/doc/refman/5.5/en/examples.html
http://dev.mysql.com/doc/refman/5.5/en/select.html
http://dev.mysql.com/doc/refman/5.5/en/join.html

And, as always, you can ask the list.

Best wishes,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Basic SELECT help

2012-12-18 Thread Shawn Green

Hi Neil,

On 11/22/2012 7:14 PM, h...@tbbs.net wrote:

2012/11/22 14:30 +, Neil Tompkins 

I'm struggling with what I think is a basic select but can't think how to
do it : My data is

id,type

1000,5
1001,5
1002,2
1001,2
1003,2
1005,2
1006,1

 From this I what to get a distinct list of id where the type equals 2 and 5

Any ideas ?

This ugly one, which generalizes:

select id,group_concat(type) AS tl from the_table group by id having 
find_in_set('2',tl) and find_in_set('5',tl)

Ugly because it involves so much converting between number and string.

For full generality one would indeed write
GROUP_CONCAT(type ORDER BY type)
and pass my tl and a string, say '1,2', to a procedure that using 
SUBSTRING_INDEX taking the strings for arrays ensures that all found in the first string 
is also in the second string. There are times when I wish SQL had arrays.


The fun part of solving this is to remember that SQL is a set-oriented 
language. For each element in the set, none of them can be both 2 and 5 
at the same time. So, you have to build two sets and check to see which 
rows are in both.


One pattern works if you need to aggregate for just a few terms

SELECT a.id
from (select distinct id from mytable where type=2) a
INNER JOIN (select distinct id from mytable where type=5) b
  on a.id=b.id

However, this gets numerically very expensive with more than a few JOINS 
to the pattern.  Also, there is no index on either of the temporary 
results (a or b) so this is a full Cartesian product of both tables. 
That means that although it gives you a correct answer, it will not 
scale once there are more than a few thousand rows in either set.



So, here is a way to assemble the same result that uses much less 
resources. Remember, each row you want is a member of a set.


CREATE TEMPORARY TABLE tmpList (
  id int
, type int
, PRIMARY KEY (id,type)
)

INSERT IGNORE tmpList
SELECT id,type
FROM mytable
WHERE type in (2,5)

SELECT id, count(type) hits
FROM tmplist
GROUP BY id
HAVING hits=2

DROP TEMPORARY TABLE tmpList

Can you see why this works?

I created an indexed subset of rows that match either value (2 or 5) but 
only keep one example of each. I accomplished that by the combination of 
PRIMARY KEY and INSERT IGNORE. Then I counted how many type values each ID 
value represented in the subset.  If I looked for 2 terms and I ended up 
with hits=2, then I know that those ID values matched on both terms.


You can expand on this pattern to also do partial (M of N search terms) 
or best-fit determinations.
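
For instance (a sketch only, reusing the same tmpList structure from above), 
matching at least 2 of 3 search terms just widens the IN() list and relaxes 
the HAVING threshold:

INSERT IGNORE tmpList
SELECT id, type
FROM mytable
WHERE type IN (2,5,7);

SELECT id, COUNT(type) AS hits
FROM tmpList
GROUP BY id
HAVING hits >= 2;   # at least 2 of the 3 terms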


I hope this was the kind of help you were looking for.

Regards,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Help with left outer join

2012-12-12 Thread Shawn Green

On 12/11/2012 7:22 PM, h...@tbbs.net wrote:

... (Are all the distinct ids really needed? When one joins on a
field with the same name in both tables, one may use 'USING', and
only the common field, with neither NULL, shows up in the output.)



This is a perfectly acceptable naming convention to use. For example if 
you have a field on the `art` table that references the ID column of the 
`person` table to indicate the owner and another field to indicate the 
person who created the art, you might want to use the names 
`owner_person_id` and `artist_person_id` to keep them separate from the 
`id` column used to uniquely identify the work of art itself.


In this design pattern, each table has a numeric ID column (string-based 
primary keys are perfectly legal but have their drawbacks and should be 
used with care) and to reference it from another table you can use the 
pattern parenttable_id.  It keeps your naming conventions clean and 
predictable.


If I were to try to use a USING operator in my opening example, I would 
be trying to match the PK fields of two separate types of data.


(the USING example)
SELECT ... FROM art INNER JOIN person USING(id)...

Let's say that I renamed the id fields to art_id and person_id to make 
them table-specific. This still fails because a person's identifier as 
an owner is not the same as a work of art's creator. It also does not 
allow me to use the `person` table more than once in a single query.


(FAIL: a renamed USING example)
SELECT ...
FROM art
INNER JOIN person USING(person_id) --- does this refer to the owner or 
the creator of the art?


(the name template example)
SELECT ...
FROM art
INNER JOIN person owner
  on art.owner_person_id = owner.id
INNER JOIN person artist
  on art.artist_person_id = artist.id
...

I admit I am a control freak when it comes to the accuracy of my 
queries, the integrity of my data, and the ease of maintenance for my 
SQL statements. Because of this, I much prefer the regular 
predictability of the ANSI JOIN syntax (demonstrated above) for any JOIN 
except for an intentional Cartesian product.  For that case alone, I 
prefer a comma join


(Cartesian product example)
SELECT ...
FROM table1, table2
...

Not only is the ANSI syntax the only way to specify an OUTER join (such 
as LEFT JOIN or RIGHT JOIN) it forces the author of the statement to 
recognize that they do or do not have an ON clause for this table 
relationship.  Separating the same conditions into a WHERE clause makes 
it very easy to overlook a table relationship and accidentally create a 
Cartesian product which can often devastate query performance. It is 
perfectly legal to use the comma-join syntax with MySQL but I strongly 
recommend against it just because it can only be used for INNER joins or 
CROSS joins and because it forces you to put your relationship 
conditions in the WHERE clause.


Another strike (to me, anyway) against the comma join is that in order 
to process joins more like the specifications in the SQL standards, we 
demoted the precedence of the comma operator with 5.0.12. These changes 
also affected the behavior of the USING and NATURAL JOIN operators.

http://dev.mysql.com/doc/refman/5.5/en/join.html

Regards,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: replication problem

2012-12-05 Thread Shawn Green

On 12/3/2012 7:24 AM, Trimurthy wrote:

hi list, i am trying to set up replication but i couldn't
complete it because of the following error:
Could not find first log file name in binary log index file. Can anyone
please help me?


... snipped ...

This one should have been easy for the list to advise you about. Let me 
step in to keep things on track.


For each instance of MySQL that creates binary logs there will be an 
'index' file that keeps track of which files this instance is managing. 
For example, if the binary logs are called mysql-bin.00, then the 
index file will be called mysql-bin.index


What has happened is that the contents of the index file no longer 
matches the inventory of actual binary logs present on disk. They have 
either been moved, erased, or had their permissions changed so that the 
system user that the mysqld daemon executes as can no longer see them.


The solution is to manually update the .index file with the current 
listing of binary log files. The longer term solution is to stop 
manually removing binary log files and allow the system to perform that 
for you with a PURGE BINARY LOGS command.  When you have MySQL do the 
purging, the .index file will be automatically updated.


For more details about the binary log, please read:
http://dev.mysql.com/doc/refman/5.5/en/binary-log.html
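
As a sketch (the file name below is hypothetical), the safe ways to remove 
old logs look like this:

PURGE BINARY LOGS TO 'mysql-bin.000123';           # hypothetical file name
PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;
SET GLOBAL expire_logs_days = 7;                   # let the server purge old logs itself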

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Dynamic crosstab got me lost.

2012-11-19 Thread Shawn Green

On 11/19/2012 9:02 AM, Mogens Melander wrote:


On Mon, November 19, 2012 13:49, Jan Steinman wrote:

From: Mogens Melander mog...@fumlersoft.dk

So, I got a little further with my problem. I found an article
on:

http://stackoverflow.com/questions/3122424/dynamic-mysql-query-view-for-crosstab

Describing how to do the dynamic generation of SQL statements. That's
all good, kind of. The resulting SQL looks like this:

SELECT main.code
, IF(iconstandardrel.icon = 4,1,0) AS 'internationalt_produkt.eps'
, IF(iconstandardrel.icon = 3,1,0) AS 'god_vaerdi.eps'
, IF(iconstandardrel.icon = 2,1,0) AS 'for_miljoeets_skyld.eps'
, IF(iconstandardrel.icon = 1,1,0) AS 'ergonomisk_produkt.eps'
, IF(iconstandardrel.icon = 6,1,0) AS 'saml_selv.eps'
, IF(iconstandardrel.icon = 12,1,0) AS 'brandfarlig.eps'
FROM iconstandardrel
JOIN main ON main.code = iconstandardrel.code
JOIN iconstandard ON iconstandard.id = iconstandardrel.icon
ORDER BY iconstandardrel.code;

Which produces results like:

101577, 1, 0, 0, 0, 0, 0
101679, 0, 1, 0, 0, 0, 0
101679, 1, 0, 0, 0, 0, 0
101681, 1, 0, 0, 0, 0, 0
101748, 0, 1, 0, 0, 0, 0
101748, 1, 0, 0, 0, 0, 0

But I would like to have One line per code:

101577, 1, 0, 0, 0, 0, 0
101679, 1, 1, 0, 0, 0, 0
101681, 1, 0, 0, 0, 0, 0
101748, 1, 1, 0, 0, 0, 0

Is it possible to achieve this in pure SQL ?


I think you need GROUP BY main.code.



No, that ain't it either. I've tried that.

But thanks anyway :)

I can't figure out what to call this operation, to do a search.
Someone out there must have done this before.



You need both GROUP BY and either SUM or MAX, like this

SELECT
...
, SUM(IF(iconstandardrel.icon = 3,1,0)) AS 'god_vaerdi.eps'
...
GROUP BY main.code;


or

SELECT
...
, MAX(IF(iconstandardrel.icon = 3,1,0)) AS 'god_vaerdi.eps'
...
GROUP BY main.code;

That will combine (aggregate) your rows together.
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Dumping drupal databases

2012-10-30 Thread Shawn Green

On 10/30/2012 12:34 PM, Tim Johnson wrote:

... snip ...

   Clearly, I have failed to pose my question clearly, because the
   responses indicate that I have not been understood, so I will try
   again:

   *
   mysqldump does not recognize the drupal databases!
 Example :
   linus:prj tim$ mysqldump event -hlocalhost -utim
   -pXX > event.sql
   mysqldump: Got error: 1049: Unknown database 'event' when
   selecting the database
... snip ...


Your syntax is inverted. Put the name of the database at the end.

mysqldump -hlocalhost -utim -pXX event > event.sql

If you fail to do so, you would log in as the 'anonymous user' (if it is 
still enabled on that machine), and that account typically has zero 
privileges.


Check the command-line syntax here:
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html


--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Backup Error while backing up MySQL Cluster

2012-10-24 Thread Shawn Green

On 10/24/2012 11:57 AM, Bheemsen Aitha wrote:

Hi,

After following the steps at the following website, I tried to do an online
backup of the cluster.

http://dev.mysql.com/doc/refman/5.0/en/mysql-cluster-backup-using-management-client.html

It is a plain vanilla command which is below. The cluster is almost an
empty database, but backup is crashing at least one data node and was never
successful.

ndb_mgm -e START BACKUP WAIT COMPLETED

Did anyone have this kind of error before? I  tried searching on web but
could not find a solution.

Here is the error I received.

Connected to Management Server at: localhost:1186
Waiting for completed, this may take several minutes
Backup failed
*  3001: Could not start backup
*Backup aborted due to node failure: Permanent error: Internal error

ALERT-- Node 2: Backup 2 started from 49 has been aborted. Error: 1326


Here is little background about our setup.

OS: Redhat Linux 5.8
Cluster: MySQL 5.5, NDB 7.2.7
Cluster was installed and set up on two hosts using MCM, one host hosting
mysqld, ndb_mgmd and the other hosting
ndbmtd1 and ndbmtd2.

I even tried by setting up the following parameters, but got the same error
again.


set BackupDataBufferSize:ndbmtd=256M attcluster;
set BackupLogBufferSize:ndbmtd=256M attcluster;
set BackupMemory:ndbmtd=512M attcluster;
set BackupWriteSize:ndbmtd=32M attcluster;
set BackupMaxWriteSize:ndbmtd=128M attcluster;

Here are some links I found on web similar to my error.


http://forums.mysql.com/read.php?25,230891,230959#msg-230959
http://grokbase.com/t/mysql/cluster/0578z8cj71/backup-error
http://bugs.mysql.com/bug.php?id=66104



At the bottom of the bug you found, it says:
[7 Sep 6:31] Ole John Aske

This bug has been fixed in MySQL CLuster 7.2.8 which is now available on 
http://dev.mysql.com/downloads/cluster/


You need to upgrade to receive this fix. Let us know if that works.

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Possible to copy the key field to another on INSERT?

2012-10-17 Thread Shawn Green

Hello Dehua,

On 10/17/2012 3:33 AM, Dehua Yang wrote:

select LAST_INSERT_ID(); Under high concurrency, it would return another
thread's value to you.



Incorrect. The results of LAST_INSERT_ID() are connection-specific. The 
activity on other connections will not change or alter the value for the 
current connection.  Only actions taken by the current connection can 
change this value.


http://dev.mysql.com/doc/refman/5.5/en/information-functions.html#function_last-insert-id
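
A minimal sketch (hypothetical table) of why this is safe:

CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY, v VARCHAR(10));
INSERT INTO t (v) VALUES ('a');
SELECT LAST_INSERT_ID();
# returns the id generated by this connection's INSERT, no matter how many
# other connections are inserting into the same table at the same moment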


--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: mysql logs query with indexes used to the slow-log and not logging if there is index in reverse order

2012-10-16 Thread Shawn Green

On 10/15/2012 7:15 PM, spameden wrote:

Thanks a lot for all your comments!

I did disable Query cache before testing with

set query_cache_type=OFF

for the current session.

I will report this to the MySQL bugs site later.




First. What are all of your logging settings?

SHOW GLOBAL VARIABLES LIKE '%log%';

Next. When you physically look in the slow query log, how long does it 
say that it took this command to execute?


And last, before you can ask MySQL to fix a bug, you must first ensure 
it's a MySQL bug. Please try to reproduce your results using official 
binaries, not those constructed by a third party.  If the problem exists 
in our packages, do tell us about it. If the problem is not reproducible 
using official MySQL products, then please report it to the appropriate 
channel for the product you are using.


MySQL Bugs -
http://bugs.mysql.com/

Thanks!
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Odd Behavior During Replication Start-Up

2012-10-16 Thread Shawn Green

On 10/16/2012 4:02 PM, spameden wrote:

2012/10/16 Tim Gustafson t...@soe.ucsc.edu


Thanks for all the responses; I'll respond to each of them in turn below:


you can not simply copy a single database in this state
innodb is much more complex than myisam


I know; that's why I rsync'd the entire /var/db/mysql folder (which
includes the ib_logfile and ibdata files, as well as all other
database and table data), not just individual databases.  I also made
sure that flush tables with read lock had been executed before
creating the snapshot.  The steps I followed were verbatim what the
MySQL documentation said to do.  The MySQL documentation even mentions
ZFS snapshots as an effective way to make a backup:

http://dev.mysql.com/doc/refman/5.5/en/flush.html


I have to agree with Harald on this: filesystem snapshots are not an
effective way to clone innodb databases.  The rsync-based method
described has worked for me in large scale data situations very
reliably.


I'm confused: in the first sentence, you say snapshots are bad (which
directly contradicts the official MySQL documentation), and in the
second sentence you say rsync is good.  Why would an rsync of a file
system snapshot not be good enough?  By the way: I forgot to mention
that I also did create a snapshot when the MySQL server on db-01 was
actually shut down, and got the same sort of results.




... snip ...

The part you have all missed here is this:

121016 10:40:20 InnoDB: highest supported file format is Barracuda.
InnoDB: The log sequence number in ibdata files does not match
InnoDB: the log sequence number in the ib_logfiles!

As InnoDB operates, it copies data in the background into the tablespace 
file(s). You cannot stop this using FLUSH TABLES WITH READ LOCK.  What 
you need to do is to wait for the database to quiesce using the 
procedure documented at the bottom of this page for using ALTER TABLE 
IMPORT TABLESPACE


http://dev.mysql.com/doc/refman/5.5/en/innodb-multiple-tablespaces.html
###
In this context, a “clean” .ibd file backup is one for which the 
following requirements are satisfied:


There are no uncommitted modifications by transactions in the .ibd 
file.


There are no unmerged insert buffer entries in the .ibd file.

Purge has removed all delete-marked index records from the .ibd file.

mysqld has flushed all modified pages of the .ibd file from the 
buffer pool to the file.


You can make a clean backup .ibd file using the following method:

Stop all activity from the mysqld server and commit all transactions.

Wait until SHOW ENGINE INNODB STATUS shows that there are no active 
transactions in the database, and the main thread status of InnoDB is 
Waiting for server activity. Then you can make a copy of the .ibd file.

###

The same rules apply to performing a hot backup of the main tablespace 
file(s) as they do to backing up any individual tablespaces.


Of course, if you have achieved a clean shutdown, then the on-disk image 
is consistent and, as mentioned in other replies, your rsync will work 
just fine. Also, if you are doing the cold-shutdown method, you can 
start replication from position 4 of the next binary log created after 
you restart your master. The step to save the master's binary log 
position can be skipped if you are not trying to do a hot (or warm) 
backup.


--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Doubt Regd. Circular Replication In Mysql

2012-09-24 Thread Shawn Green

Hello Benjamin,

On 9/24/2012 10:52 AM, Stillman, Benjamin wrote:

replicate-same-server-id = 0 keeps MySQL from replicating binary log entries 
from itself. For instance, here's a rough overview:

You write to Server A.
Server A writes that to its binary log.
Server B reads Server A's binary log and completes the same thing.
Because log-slave-updates is enabled, Server B writes it to its own binary log.
Server C reads Server B's binary log and completes the same thing.
Again, with log-slave-updates enabled, Server C writes it to its own binary log.
Server A reads Server C's binary log.

Here's where the issue starts. Without replicate-same-server-id = 0, Server A 
will complete the insert/update/delete as it reads it from Server C's binary 
log. However, this query originated from Server A, so it's just going to do it 
again. Then it's again replicated to Server B, Server C, and so on. This can 
create a loop and/or break replication. For instance, if you drop a table on A. 
It replicates across, and back to A. Replication will error out because when it 
tries to drop the same table again, it already doesn't exist. You need 
replicate-same-server-id = 0 set so that it knows not to execute any binary log 
entries with its own server ID.



Not true.

Replication, by default, operates with --replicate-same-server-id=0. The 
only time you need to change it to a 1 is for certain recovery 
scenarios. We added this variable specifically to allow for exceptions 
to the rule that every server in a replication chain (or ring) must have 
their own, unique, --server-id value.


It's not required for normal operations. In fact we recommend you do not 
set it at all. Each server will automatically ignore any event that 
originates from a server with the same --server-id setting unless you 
specifically set --replicate-same-server-id=1 .


Regards
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: secure user name for MySQL account?

2012-09-24 Thread Shawn Green

Hello Arthur,

On 9/24/2012 4:25 PM, Arthur Fuller wrote:

On this note, one thing that really bugs me about MySQL passwords is the
inability to use special characters. In the SQL Server world, I let users
choose their own passwords, but obeying these rules:

It cannot be a dictionary word or sequence of words.
It must contain at least one numeric digit.
It must contain a mix of upper and lower case.
It must contain at least one special character.

That combination makes a password very difficult to crack. I don't know why
MySQL falls so short in this respect.



MySQL continues to improve in this respect. While it's true that our 
last big security change was the enhanced password hash function 
introduced in 4.1 we have not been completely insensitive to the needs 
of our customers.  For example, check out the list of account and 
security improvements arriving in MySQL 5.6

http://dev.mysql.com/doc/refman/5.6/en/mysql-nutshell.html

In particular, the password complexity threshold can be configured using 
the new Password Validation plugin:

http://dev.mysql.com/doc/refman/5.6/en/validate-password-plugin.html
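
As a sketch only (the plugin library name varies by platform; the Unix .so is 
shown here), rules very close to the list above can be enforced in 5.6:

INSTALL PLUGIN validate_password SONAME 'validate_password.so';
SET GLOBAL validate_password_policy = 'STRONG';   # adds a dictionary-file check when one is configured
SET GLOBAL validate_password_length = 10;
SET GLOBAL validate_password_mixed_case_count = 1;
SET GLOBAL validate_password_number_count = 1;
SET GLOBAL validate_password_special_char_count = 1;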

Yours,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Temporary table creation fails

2012-09-10 Thread Shawn Green

On 9/10/2012 9:55 AM, Garot Conklin wrote:

Not to beat the perms to death, but /tmp should have the sticky bit set as 
well... so 1777, not just 0777.  Perhaps hard-kill any lingering mysql PIDs 
unless this is production and you expect other DBs to be running... if you have 
duplicated this DB schema somehow by mistake and a second identical 
instance is running, it is plausible to assume that the other running instance 
has locked the file it is using, preventing it from being written to.




Excellent point! Yes, you can have multiple MySQL instances running on 
the same host machine but they cannot share the same --tmpdir location. 
Name collisions can and do occur between them if they attempt to share a 
common folder. Verify that you have properly isolated each instance from 
every other by following these guidelines:

http://dev.mysql.com/doc/refman/5.5/en/multiple-servers.html
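
A quick sanity check, run from a client connected to each instance, is to 
confirm the settings that must never be shared between them:

SHOW GLOBAL VARIABLES LIKE 'datadir';
SHOW GLOBAL VARIABLES LIKE 'tmpdir';
SHOW GLOBAL VARIABLES LIKE 'socket';
SHOW GLOBAL VARIABLES LIKE 'port';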

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: MySQL, UTF8 and collations

2012-08-28 Thread Shawn Green

On 8/28/2012 4:49 AM, Johan De Meersman wrote:

...
Guess I'll be fixing it manually (well, sed is my friend) in a mysqldump before 
syncing up the second node after it's been upgraded.



There is another method you can use that doesn't require 
dump+sed+restore.  Convert the column from its current type to BINARY 
then back to the correct character set. This technique is described in 
the old 4.1 manual when we first introduced character sets. Back then, 
everyone was putting all sorts of data into latin1 fields and converting 
it back on the client.


http://dev.mysql.com/doc/refman/4.1/en/charset-conversion.html

The example uses BLOB but BINARY will also work.
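
A minimal sketch of the round trip, assuming a latin1 column `title` 
VARCHAR(100) on a table `books` that really holds utf8 bytes (both names 
are hypothetical):

ALTER TABLE books MODIFY title VARBINARY(100);                  # drop the character set, keep the bytes
ALTER TABLE books MODIFY title VARCHAR(100) CHARACTER SET utf8; # reinterpret the same bytes as utf8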

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Having trouble with SQL query

2012-08-27 Thread Shawn Green

Hello Rich,

On 8/27/2012 12:19 PM, rich gray wrote:

Hi Nitin
Thanks - I tried that and got 0 rows...
I have spent more time on describing my problem -- see below hopefully
this will make the issue more clear...
Rich
... snip ...


There are many resources out there that can tell you how to build this 
type of data structure. However, my favorite and the one I think is most 
accessible is this:


http://www.sitepoint.com/hierarchical-data-database/

As you can see, his menu also has branches (fruit) and leaves (cherry, 
banana) just as your equipment menu does. I think this will be an 
excellent starting point for you to use to build the menu tree. From 
there, it should be easy to extend this to link your leaf nodes to any 
information records you may want.


Let us know if we can give any additional insights or suggestions.
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: view query is slow

2012-08-23 Thread Shawn Green

On 8/23/2012 2:30 PM, James W. McNeely wrote:

I am working on a view based on this query:

===
SELECT
-- Patient Info
p.IdPatient,
p.IdLastword MRN,
p.NameLast,
p.NameFirst,
p.Addr1,
p.Addr2,
p.AddrCity,
p.AddrState,
p.AddrZip,
p.Gender,
p.DateOfBirth,
-- Provider Info
af.IdAffil,
af.PractName,
af.OfficeName,
-- Exam Info
e.IdExam,
e.dateexam,
a.WorkArea dept,
a.Room location,
e.ProcModeCode,
e.ProcName,
e.IdRefSite,
ec.IdCPT,
e.zzk exam_zzk,
ec.zzk examcpt_zzk
FROM patient_ p
LEFT JOIN exams e ON e.IdPatient = p.IdPatient
LEFT JOIN Examcpt_ ec ON (e.IdExam = ec.IdExam AND '1' = Quantity)
LEFT JOIN Copy_ c ON CONCAT(e.IdAppt , '0') = c.IdApptType
LEFT JOIN Appt_ a ON e.IdAppt = a.IdAppt
LEFT JOIN Affil_ af ON c.IdPractAffil = af.IdAffil
WHERE
p.AddrState = 'WA'
AND e.statusnumber = '4'
AND e.IdRefSite <> 'S50'
AND e.IdRefSite <> 'S51'
AND e.IdRefSite <> 'S63'
AND p.DateOfBirth  DATE_ADD(CURDATE(), INTERVAL '-2' MONTH)
AND a.zzk IS NOT NULL


If I run this query itself (not in the view), and add this:

AND e.dateexam = '2012-08-13'

it runs like lightning, super fast. But if I run the query against the view, for 
example SELECT * FROM exam_view WHERE dateexam = '2012-08-13'

It is so glacially slow that I end up having to kill the query. What is going 
on, and how can I fix this?



Look at the two EXPLAINs. I believe that when you run the query 
directly, you get to optimize that term into the execution of the view. 
When you run it through the view, the ALGORITHM is set to force the view 
to materialize all of the rows in the query, then scan those to find the 
rows that match your condition.


When you execute the query manually, you are getting the performance 
benefits you would have with ALGORITHM=MERGE in the VIEW. However, 
since you are not getting those benefits through the view, it looks like 
you are in an ALGORITHM=TEMPTABLE situation.


http://dev.mysql.com/doc/refman/5.5/en/view-algorithms.html

The explain plans will clearly show which situation you are in.
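
One hedged way to confirm this is to declare the algorithm explicitly and 
compare the plans. The simplified view below is only an illustration, not 
your full definition:

CREATE OR REPLACE ALGORITHM = MERGE VIEW exam_view_merge AS
SELECT p.IdPatient, p.NameLast, e.IdExam, e.dateexam
FROM patient_ p
LEFT JOIN exams e ON e.IdPatient = p.IdPatient;
# MySQL issues a warning if it cannot use MERGE and falls back to TEMPTABLE

EXPLAIN SELECT * FROM exam_view_merge WHERE dateexam = '2012-08-13';
EXPLAIN SELECT * FROM exam_view       WHERE dateexam = '2012-08-13';
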
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: help with correlated subquery

2012-08-22 Thread Shawn Green

Hello Martin,

On 8/22/2012 8:30 AM, Martin Gainty wrote:


assign realistic alias names
OuterJoin should be called OuterJoin
InnerJoin should be called InnerJoin



Almost!  MySQL does not have a simple OUTER JOIN command (some RDBMSes 
call this a FULL OUTER JOIN). What we do have is the option to include 
the OUTER keyword into our LEFT or RIGHT joins. For example, both of 
these are acceptable:


LEFT OUTER JOIN
LEFT JOIN

Also, you need a space between inner and join as in INNER JOIN.


If you want only the most restrictive criteria that match resultsets from both 
select statements use INNER JOIN
if you want all results from both resultsets (cartesian JOIN) use OUTER JOIN



Again, MySQL does not have a plain OUTER JOIN. If you want a full 
Cartesian product of two tables, use the 'comma join' syntax with no 
criteria for matching the tables in the WHERE clause.


SELECT ... FROM tableA, tableB WHERE 

Syntax details are located here:
http://dev.mysql.com/doc/refman/5.5/en/join.html



Finally: Be aware FUNCTIONS such as AVG cast off indexing and should be avoided 
unless the FUNCTION(columnName) itself is indexed
GROUP BY re-arranges your query so its best to introduce GROUP BY in stages



Not exactly. If you wrap a column in a function and attempt to use the 
results of that function in the WHERE clause, then you are correct. 
However based on the way your define your indexes, the data you process 
in a function may actually come from the index and save you a trip to 
the underlying table. In this case, the index could make your function 
faster by skipping an additional retrieval step.




use realistic alias names like Dept and EmployeeNumber and avoid aliases that 
cause confusion like 'a' or 'foo'


Excellent advice.


Develop in stages and write down what YOU EXPECT vs WHAT each query produces ..
If the Individual Select doesnt produce expected results..STOP and correct the 
SELECT Statement


Also excellent advice.


Diagram out what you expect results from any of the INNER JOIN, OUTER JOIN, 
LEFT JOIN or RIGHT JOIN operations will produce
If the executed JOIN Statement does not produce expected results STOP and 
correct the JOIN clause BEFORE incorporating more functionality

Obfuscation and confusion can hopelessly sidetrack any intelligent analysis


Well put!

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: search any matching word in given string in table column which has only one word

2012-08-11 Thread Shawn Green

On 8/11/2012 12:05 PM, Rajeev Prasad wrote:

here is a given string of the type:...   as23,rt54,yh79
and i have to look it up in a table which has a column 'id' that holds only one four 
letter word. i have to select the rows in which the column 'id' value matches any of the 
words in the given string...
... snipped ...


Hello Rajeev,

Relational database theory has been shown to be quite effective at 
solving problems like yours.  The problem with your design is that you 
do not have the ability to do an exact-value index for every 4-letter 
set in your longer strings. The relational solution is to make two 
tables. One for the row and one for the list of values that belong to 
that row.


Another approach to this is to look at your lists of values as 4-letter 
words. For this design techniques of full-text indexing may be 
applicable. However many full-text engines will omit or ignore words 
that appear too often in the data as they have very low selectivity. For 
example, if you had a database of news articles and each row contained 
the contents of one article, then the most common words like a, an, 
the, like, with, on, and so forth are very likely to appear in 
every single row of data. This makes those terms useless as search 
items.  If you happen to have one of your 4-letter words of data that 
manages to appear in enough rows to cross that threshold, then using a 
full-text index will fail to locate any rows that contain that value.


My preference is to use the two-table approach as I can index both the 
descriptive data (on the parent table) and all of the values that appear 
in the list (on the child table) to make retrieval both accurate and 
very fast.
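
A rough sketch of that two-table layout (all names are hypothetical), using 
the sample codes from your message:

CREATE TABLE records (
  id          INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  description VARCHAR(255)
) ENGINE=InnoDB;

CREATE TABLE record_codes (
  record_id INT NOT NULL,
  code      CHAR(4) NOT NULL,
  PRIMARY KEY (record_id, code),
  KEY (code),
  FOREIGN KEY (record_id) REFERENCES records (id)
) ENGINE=InnoDB;

SELECT DISTINCT r.*
FROM records r
JOIN record_codes c ON c.record_id = r.id
WHERE c.code IN ('as23','rt54','yh79');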


Regards,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: How often the slave to pull data from master?

2012-08-01 Thread Shawn Green

On 8/1/2012 1:13 PM, Rick James wrote:

That leads to another question...
Does this 'separate cache' contain an extra copy (vs the 'only' copy) of the 
non-transacted events?
I would expect that it must be an extra copy.



No. The non-transacted events are applied to the tables they belong to 
as they are executed.  We track them in this cache for replication 
purposes. Are you worried about a race between a crash and the time 
where the cache is applied to the main binary logs?


This is a very unlikely event for two reasons:

1) the non-transacted change must occur within a transaction

2) most users that use transactions only use transaction-enabled storage 
engines (InnoDB).


So in order to lose non-transacted changes from a crash that happens 
before the cache is appended to the binary log, you must
a) have a sequence of statements (or ROW-based changes) that is smaller 
than the --binlog-cache-size limit, and

b) have a mixed-mode transaction.

If this is a valid concern for your usage pattern, set 
--binlog-cache-size to a smaller value.

http://dev.mysql.com/doc/refman/5.5/en/replication-options-binary-log.html#sysvar_binlog_cache_size
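
A small sketch of sizing the cache and then watching whether transactions 
spill to disk:

SET GLOBAL binlog_cache_size = 32768;      # per-connection cache, in bytes
SHOW GLOBAL STATUS LIKE 'Binlog_cache%';
# compare Binlog_cache_disk_use (transactions that spilled to a temporary file)
# with Binlog_cache_use (all transactions that used the cache)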

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: How often the slave to pull data from master?

2012-07-30 Thread Shawn Green

On 7/29/2012 12:52 AM, Zhigang Zhang wrote:

Hi

If there are additional events beyond that, it

retrieves those in sequence after it writes the current statement into
the relay logs.

I have a question about this:

Whether the slave is sent a signal whenever the master generates each event?



Yes, the slave receives a signal but it is not a TCP (networked) signal 
that the slave receives. Let me add a few more details to the 'retrieval 
of events from the Binary log' part of the replication process.


Each slave that connects to a master will open a 'dump thread' process 
that keeps track of the end of the binary log. This is a mini-daemon 
that runs in the context of the overall MySQL server. The slave holds 
and maintains an open connection with the dump thread and when there are 
no new events to transmit to the slave, the thread goes to sleep. This 
does not break the TCP/IP connection to the slave.


The 'signal' to the 'slave' is not sent via TCP to the remote server. It 
is sent to its proxy, the dump thread, using a pthread_cond_signal 
event. This wakes the thread and initiates the streaming of new data to 
the slave.


The dump thread is initiated by the slave when it makes its connection. 
This is considered a 'slave process' as it is the dedicated local 
listener for the slave process. Each slave gets its own dump thread. 
When the slave disconnects, the dump thread is destroyed.


Using a low-level kernel signal is much more efficient (in terms of 
network usage and CPU cycles) than to continuously ping one server from 
the other. This is why we chose this design.


For any additional lower-level details than this, I encourage you to 
review the source as it can tell you much more than most would like to 
be shared in a general forum like this list.


Also, there is a more technical discussion on the internal mechanics of 
MySQL already in place. The proper place to ask for more details would 
be the Internals mailing list.

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: How often the slave to pull data from master?

2012-07-30 Thread Shawn Green

Hello David,

On 7/30/2012 11:46 AM, David Lerer wrote:

Thanks Shawn. This is very useful.
Could you shed some light on how rolled-back transactions (or not-yet-committed 
transactions for that matter) are treated as far as the binary logs?  Are these 
updates actually go to the binary logs, thus trigger replication to the salve?
Thanks, David.


There are two scenarios to worry about

1) This transaction involves no other storage engine except InnoDB.

2) This transaction involves any other storage engine as well as InnoDB.

In both cases, they start out the same. A temporary binary log cache is 
generated to keep track of the transaction as it occurs. All events are 
recorded to this cache which may become a temporary file if you exceed 
certain limits.


In the event of a COMMIT, this separate cache (or file) is appended to 
the normal binary log file at the position that corresponds to the 
moment where the COMMIT occurs.


In the event of a ROLLBACK, there is a difference. In scenario 1) where 
all changes are transacted, the cache is just dropped. As there were no 
changes to the actual data, nothing needs to enter the binary log.  In 
scenario 2), however, permanent changes to the data exist on the master. 
In this case, the entire cache is appended (as normal) but the last 
command is the ROLLBACK. This applies the non-transacted changes to the 
slave while removing the transacted changes from the slave thus keeping 
the two systems in sync.
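A minimal sketch of scenario 2 (the table names are invented; assume 
t_innodb uses InnoDB and t_myisam uses MyISAM):

START TRANSACTION;
INSERT INTO t_innodb (id) VALUES (1);  -- transacted; undone by the ROLLBACK
INSERT INTO t_myisam (id) VALUES (1);  -- applied immediately; cannot be undone
ROLLBACK;
-- the whole cached sequence, ending with ROLLBACK, is appended to the binary
-- log, so the slave also keeps the MyISAM row and discards the InnoDB change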


Beginning with 5.5.9, we created a separate cache specifically for the 
non-transacted events to reduce our replication overhead. Only those 
statements will be transmitted in the event of a ROLLBACK.


http://dev.mysql.com/doc/refman/5.5/en/server-status-variables.html#statvar_Binlog_cache_disk_use

Regards,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: How often the slave to pull data from master?

2012-07-26 Thread Shawn Green

Hello Rick,

You nearly got it all correct.

On 7/26/2012 1:21 PM, Rick James wrote:

My understanding is...

* There is a permanent connection (TCP/IP socket) between the Master and each 
Slave.  This is opened by the Slave when the slave starts (or restarts or recovers from a 
network glitch).

* Master writes to binlog _and_ to that connection whenever there is a 'write' 
operation.  (I do not know whether the binlog is written/flushed before or 
after or simultaneous with writing to the Slave(s).)


The slave is sent a signal that more data is available. The slave then 
requests that data from the master's binary log at the 'next' position 
it expects it to be. If there are additional events beyond that, it 
retrieves those in sequence after it writes the current statement into 
the relay logs.




* If the connection is interrupted, the Slave will reconnect and ask the Master for 
old info.  This comes from the binlog.  Otherwise, the Master's binlog file 
is not part of the flow of data from Master to Slave.



Once the slave can reconnect, it asks for the 'next' statement in 
sequence (based on the end of the last statement it has received). Any 
statements beyond that already in the binary logs are retrieved in sequence.


If there is no 'next' statement, it waits for a signal that more data is 
available.


If the binary log was shortened (which can occasionally happen during an 
InnoDB recovery processing) the slave throws an error and waits for the 
DBA to fix things.


The removal of statements and the conditions necessary to remove some 
events from the end of the binary log is discussed here:

http://dev.mysql.com/doc/refman/5.1/en/binary-log.html


* Semi-sync (5.5) adds some chatter from Slave to Master so that the Master can 
wait for at least one Slave to have written to its relay log.

* Percona Cluster, Galera, etc., add more complexity to the replication traffic.
...


Yours,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Looking for consultant

2012-07-18 Thread Shawn Green

On 7/17/2012 8:22 PM, Carl Kabbe wrote:

On Monday, I asked if there were consultants out there who could help set up an 
NDB high availability system.  As I compared our needs to NDB, it became 
obvious that NDB was not the answer and more obvious that simply adding high 
availability processes to our existing Innodb system was.

So, I am back asking if there are consultants lurking on this list that could 
help with this project.



As has been discussed on this list many times before, there are many 
ways to measure 'high availability'. Most of them deal with what kind of 
disaster you want to survive or return to service from.  If all you are 
looking for is additional production capacity then the terms you may 
want to investigate are 'scale out', 'partitioning', and 'replication'. 
All high-availability solutions require at least some level of hardware 
redundancy. Sometimes they require multiple layers in multiple locations.


Several of those features of MySQL also help with meeting some 
high-availability goals.


Are you willing to discuss your specific desired availability thresholds 
in public?


--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Looking for consultant

2012-07-18 Thread Shawn Green

Hello Carl,

On 7/18/2012 11:11 AM, Carl Kabbe wrote:

We are actually facing both capacity and availability issues at the same time.
...


It sounds to me like you need a combination of sharding (one master per 
client or set of clients) combined with multiple slaves (one for backups 
only). If you share read queries between master and slave already, then 
you can continue with this. By using the one slave for backups only, it 
only needs to process the replication stream so it will be able to 
maintain itself most up to date. This would be the machine you switch to 
in event of failover. All machines (masters and slaves) need to have the 
same capacity.


Separating your clients to multiple machines will help with uptime and 
throughput. If you lose one, only some of your clients lose their 
connection while you fail over. Also, because each master does not need 
to handle ALL of your clients at one time (just some of them), you can 
use much cheaper hardware to handle the load. The other advantage is 
disk usage. By sharing your traffic over multiple disks (not just one 
big RAID array or SAN or NAS for ALL of your clients at once) you are 
actually providing more capacity for transactions than you would with a 
single large array.


Yes, this may make maintenance a little more interesting but this way 
you won't need to invest in such huge servers and you gain the 
redundancy you need to meet the HA goals you stated. Backups will be 
more numerous but they will be smaller (and possibly client specific). 
Backups can also happen in parallel (from multiple sources) which will 
make your maintenance windows smaller. Heavy traffic from one client 
will not drag down the performance of another (with the exception of 
clogging your network pipes). It's a win-win.


Go simple, not bigger. Divide and conquer is what I believe is your best 
approach.


--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: GA download reverted back to 5.5.24?

2012-07-17 Thread Shawn Green

Hello Rick

On 7/16/2012 5:54 PM, Rick James wrote:

Shawn, can you explain why one of the links on that page is broken?
 http://bugs.mysql.com/bug.php?id=14248833
Says
 No such bug #14248833 or bug is referenced in the Oracle bug system.



That's the wrong bug system to see that bug. In order to see that bug, 
you need to log into My Oracle Support (MOS) and do a KM search on the 
number 14248833. However, that system has many more restrictions on it 
than bugs.mysql.com does. I am not at all sure you will have access to 
the details you seek.


Regards,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Finding Rows With Common Items

2012-07-12 Thread Shawn Green

On 7/12/2012 1:49 PM, John Nichel wrote:

Hi all,

Lets say I have a table with two colums: 'orderid' and 'productid'.  This
table contains line items for purchases in a store where a purchase could
be 1 or more line items (1 or more rows).  A typical order may look like
this:

orderid | productid
12345   | 9876
12345   | 6789
12345   | 7698

Is there a simple way to query the table to pull orders which have 2 or
more products in common?  For example, if order 12345 has 5 line items and
order 12348 has 7 line items, I would like the query to return these two
orders if they have products 9876 and 6789 in common, and I would also
like it to return orders 23456, 65432 and 34567 where they have 8796, 6789
and 4456 in common.  I hope I'm explaining this well...I know what I'm
trying to accomplish in my head, but we all know that doesn't always
translate too well in an email.  :)  For the record, this is a MySQL
4.1.22 db.  TIA



Basically you want an index on (productid, orderid) on the main table to 
do the lookup faster. As you may want to do this query only once, make a 
list of all of the products that have particpated in more than one order 
then periodically update this list.


One way to create this list is with the following query:

CREATE TABLE multipleordprods (key (productid, ordercount),
key (ordercount, productid))
SELECT productid, count(orderid) ordercount
FROM <your table name here>
GROUP BY productid HAVING ordercount > 1;

Now we have a list of all productid values that participated in more 
than 1 order. You can now sort this table by number of orders or by 
product. You can join this table back to your original table to get a 
list of the orderid for any one product. This summary table is the key 
to drilling into your data.
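For example (hedged: 'orderitems' below simply stands in for your real 
line-item table name), a drill-down for one of the shared products could 
look like this:

SELECT t.orderid
FROM multipleordprods m
INNER JOIN orderitems t ON t.productid = m.productid
WHERE m.productid = 9876;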


You can also add more columns to this table or create other summary 
tables using combinations of time or price or any other dimensions you 
want to use to slice and dice your data. This is the core principle of 
designing a data warehouse for online analytical processing (OLAP).


Yours,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Composite Index Usage in Joins

2012-07-11 Thread Shawn Green
 scans only happen for the last portion of an index being used. OR 
queries (or those using IN) can also only be applied to the last part of 
an index search. This means that if you are using IN (or OR) against the 
first part of an index, that's where the usage of the index stops. The 
rest of the conditions are evaluated during the WHERE processing phase 
of the query.


Also, trying to force index usage may be creating more work for your 
disks than necessary. An indexed lookup is a random access to a portion 
of a file. Should that need to come directly off the disk, then that 
lookup will be about 3x more expensive than a scan (because of the 
various head positioning maneuvers required). The rule of thumb is that if 
more than about 30% of a table needs to be randomly located, then switch 
to a full scan. It saves a lot of time.
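One hedged way to see how much of a composite index a query really uses is 
to compare the key_len column of EXPLAIN output against the byte widths of 
the key parts (the table, index, and column names below are invented, and 
both columns are assumed to be INT NOT NULL, 4 bytes each):

-- index defined as: ALTER TABLE t ADD INDEX idx_ab (col_a, col_b);
EXPLAIN SELECT col_b FROM t WHERE col_a = 5 AND col_b = 10;
-- key: idx_ab, key_len: 8  -> both key parts used
EXPLAIN SELECT col_b FROM t WHERE col_a = 5;
-- key: idx_ab, key_len: 4  -> only the first key part used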


Additional information about how indexes are used and abused during 
queries is located in the Optimization chapter in the manual:

http://dev.mysql.com/doc/refman/5.1/en/optimization.html

I suggest you start here and work your way out:
http://dev.mysql.com/doc/refman/5.1/en/optimization-indexes.html

Best wishes,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: GA download reverted back to 5.5.24?

2012-07-09 Thread Shawn Green

On 7/1/2012 12:11 AM, Hank wrote:

Check the manual:
http://dev.mysql.com/doc/refman/5.5/en/news-5-5-25.html
Shawn Green


Thank you, but that warning note was not there when I first posted
this message in here.  I'm not sure when the warning note appeared,
but I'd guess it was within the last 36 hours. All that I could see
was that 5.5.25 mysteriously disappeared to be replaced by 5.5.24.



I realize that communication may not have been as quick as many of you 
may have preferred. Due to the very corporate nature of important public 
announcements, we required several additional days beyond what we could 
post to the change logs to publish an announcement about the removal of 
5.5.25 and its replacement with 5.5.25a.


(posted July 5, 2012)
http://lists.mysql.com/announce/789

Thank you all for your patience as we worked our end of the problem as 
quickly as we could.


--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: GA download reverted back to 5.5.24?

2012-06-29 Thread Shawn Green

On 6/28/2012 9:41 PM, Hank wrote:

I am in the process of reporting a new MySQL bug in 5.5.25 (doesn't
exist in 5.5.24) - see: http://bugs.mysql.com/bug.php?id=65740

And I just noticed that at the mysql.com website, the GA downloads
have just been reverted back to 5.5.24.

Is there a blog or update site that might explain why they retracted 5.5.25?

thanks,

-Hank



Check the manual:
http://dev.mysql.com/doc/refman/5.5/en/news-5-5-25.html

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Need Query Help

2012-06-22 Thread Shawn Green

On 6/22/2012 12:18 AM, Anupam Karmarkar wrote:

Thanks Rick for your reply,

Here i am asking about logic to perpare query or whole query itself.



A set-based approach to doing the basic task is to convert your set of 
start/stop times into duration values. The timediff() function mentioned 
already is a good way to do this.


CREATE TEMPORARY TABLE tmpHours SELECT EmployeeID, timediff(logouttime, 
logintime) as duration FROM sourcetable;


At this point, you have a temporary table of (EmployeeID, duration). It 
becomes very simple to write a summary query:


SELECT employeeid, sum(duration) as totalhours from tmpHours group by 
employeeid;


If you want to breakdown your final report by other values (by date, by 
week, by shift, etc) then you need to compute those and add them to the 
tmpHours table when you create it.
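For instance, a per-day breakdown could be computed at the same time (the 
column names are assumed from your description; adjust them to your table):

CREATE TEMPORARY TABLE tmpHours
SELECT EmployeeID,
       DATE(logintime) AS workdate,
       TIMEDIFF(logouttime, logintime) AS duration
FROM sourcetable;

SELECT EmployeeID, workdate, SUM(duration) AS totalhours
FROM tmpHours
GROUP BY EmployeeID, workdate;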


Regards,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Indexing about 40 Billion Entries

2012-06-20 Thread Shawn Green

On 6/20/2012 5:45 AM, Christian Koetteritzsch wrote:

Hi guys,

As the title says I'm trying to index 40 billion entries with two
indexes on a server with 16 cores and 128GB RAM. The table is the one
below and it is a myisam table. The *.myd file is about 640GB

DROP TABLE IF EXISTS `l4_link`;
CREATE TABLE  `l4_link` (
   `ruid1` int NOT NULL,
   `ruid2` int NOT NULL,
   `overlap` int NOT NULL
);

I need an index for ruid1 and for ruid2.



Actually, based on your proposed query, I believe you want an index on 
(ruid1, ruid2) not separate indexes for each column.
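Something along these lines (the index name is arbitrary), keeping in mind 
that building it over roughly 40 billion rows will need a great deal of time 
and temporary space:

ALTER TABLE l4_link ADD INDEX idx_ruid1_ruid2 (ruid1, ruid2);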




The status for this table is the following:

Name: l4_link
  Engine: MyISAM
 Version: 10
  Row_format: Fixed
Rows: 39806500262
  Avg_row_length: 17
 Data_length: 676710504454
Max_data_length: 4785074604081151
Index_length: 1024
   Data_free: 0
  Auto_increment: NULL
 Create_time: 2012-06-19 14:51:29
 Update_time: 2012-06-19 16:26:35
  Check_time: NULL
   Collation: utf8_general_ci
Checksum: NULL
  Create_options:
 Comment:

The variables for myisam are the following:
mysql> show global variables like '%myisam%';
+----------------------------+-----------------------+
| Variable_name              | Value                 |
+----------------------------+-----------------------+
| myisam_data_pointer_size   | 6                     |
| myisam_max_sort_file_size  | 9223372036853727232   |
| myisam_mmap_size           | 18446744073709551615  |
| myisam_recover_options     | BACKUP                |
| myisam_repair_threads      | 1                     |
| myisam_sort_buffer_size    | 8388608               |
| myisam_stats_method        | nulls_unequal         |
| myisam_use_mmap            | OFF                   |
+----------------------------+-----------------------+
8 rows in set (0.00 sec)

The temp folder has about 16tb free space.

When I start the indexing process, it copies the 640Gb into a temp file
and then starts with repair with keycache.
On the internet I found that if it says repair with keycache you should
increase the myisam_max_sort_file_size, but this didn't work.
It still says repair with keycache after it copied the data to the
temp dir.

I hope you have any idea how to fix this.



Try maxing out the following settings:

  myisam_sort_buffer_size
  key_buffer_size

http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_myisam_sort_buffer_size

http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_key_buffer_size

You will need to experiment with how large the maximum value will be 
permitted for your platform.
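As a starting point only (the sizes here are placeholders; check the 
documented maximum for your platform and leave room for the rest of the 
system on your 128GB host):

SET GLOBAL myisam_sort_buffer_size = 1024 * 1024 * 1024;       -- 1GB for the index build
SET GLOBAL key_buffer_size         = 16 * 1024 * 1024 * 1024;  -- 16GB of key cache

Open a new session for the ALTER/REPAIR afterwards so it picks up the new 
global default of myisam_sort_buffer_size.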


--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: License question on libmysql.dll and C/C++ API Version 4.0 question about Victoria Reznichenko response

2012-06-18 Thread Shawn Green

Hello Claudia,

On 6/18/2012 2:13 PM, Claudia Murialdo wrote:

Hello,
I read this message: http://lists.mysql.com/mysql/109590 and I would like
to ask for authorization but I don't see the email address in that thread
(it says: sales@stripped).
Can someone tell me that email address?.

Thanks in advance.
Claudia.



That link is 10 years old and that address is no longer valid. MySQL has 
been bought twice since then once directly by Sun Microsystems then 
again when Oracle purchased Sun. Your current questions need to be asked 
to Oracle.


These numbers will route you to the appropriate resources
http://www.oracle.com/us/support/contact-068555.html

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: console input

2012-06-15 Thread Shawn Green

On 6/14/2012 5:57 PM, Gary Aitken wrote:

Hi all,

I've looked high and low for what I hope is a trivial answer.

I was trying to load a table using LOAD DATA INFILE.  Unfortunately, it craps 
out because there are some duplicate primary keys.  Not surprising as the 
source did not enforce uniqueness.  My problem is the load data simply dies 
without indicating which line of the input file was in error; the error message 
refers to line 3, which is not even the SQL statement for the LOAD DATA INTO 
statement:

I can get the table loaded by specifying REPLACE INTO TABLE, but that still 
leaves me with not knowing where the duplicate records are.

So...  I wanted to read the data line at a time and use a plain INSERT 
statement.  That way I could check for duplicate keys and discover where the 
duplicate records are.  However, I can't find a way to read input from the 
console or a file.  What am I missing?  I know I could write a java or C++ 
program to do this, but it seems like overkill for what should be a trivial 
task.

Thanks for any pointers,

Gary



The trivial thing I do to solve this problem is to create a copy of the 
destination table without any PRIMARY KEY or UNIQUE constraints on it. 
This gives you an empty space to which you can bulk import your raw 
data. I am not sure if there is an official term for this but I call it 
a 'landing table'. This is the first step of the import process.


Once you can get your data off of disk and onto the landing table (it's 
where the raw import lands inside the database) you can check it for 
duplicates very easily.


1) create a normal index for the PRIMARY KEY column
2) create another table that has a list of duplicates like this

CREATE TABLE dup_list ENGINE=MYISAM SELECT pkcol, count(*) freq FROM 
landing GROUP BY pkcol HAVING freq > 1;


notes:
* use a MyISAM table for this preparation work even if the destination 
table is using the InnoDB storage engine; you really do not need to 
protect this data with a transaction yet.

* MyISAM indexes are also very fast for count(*) queries.

You have clearly identified all duplicate rows in the incoming data. You 
can also compare those rows with your existing rows to see if any of 
them duplicate each other (hint: INNER JOIN is your friend) or if any 
exist in one table but not the other (hint: LEFT JOIN).
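Hedged examples of both checks (assume 'destination' stands in for the real 
table the data is headed for):

-- incoming rows whose key already exists in the destination table
SELECT l.pkcol
FROM landing l
INNER JOIN destination d ON d.pkcol = l.pkcol;

-- incoming rows that are new to the destination table
SELECT l.pkcol
FROM landing l
LEFT JOIN destination d ON d.pkcol = l.pkcol
WHERE d.pkcol IS NULL;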


From here you should be able to cleanse the incoming data (remove 
duplicates, adjust any weird fields) and merge it with your existing 
data to maintain the relational and logical integrity of your tables.


Regards,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Foreign key and uppercase / lowercase values

2012-06-15 Thread Shawn Green

On 6/15/2012 1:00 PM, Rick James wrote:

You are very close to a standalone test case.  Please create such.  Then post 
it on bugs.mysql.com .


-Original Message-
From: GF [mailto:gan...@gmail.com]
Sent: Friday, June 15, 2012 12:45 AM
To: Rick James
Cc: Shawn Green; mysql@lists.mysql.com
Subject: Re: Foreign key and uppercase / lowercase values

I think the following might give complete information (I removed some
columns not involved in the problem)

Server version: 5.1.49-3 (Debian)

SET collation_connection = utf8_unicode_ci; Query OK, 0 rows affected
...


Before he submits a test case, he should also review
http://bugs.mysql.com/bug.php?id=27877
http://dev.mysql.com/doc/refman/5.5/en/news-5-5-21.html

This has been a well-discussed problem both inside and outside the MySQL 
development processes.


Regards,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Foreign key and uppercase / lowercase values

2012-06-15 Thread Shawn Green

On 6/15/2012 3:19 PM, Rick James wrote:

Those refer _only_ to German 'ß' LATIN SMALL LETTER SHARP S.  The example GF 
gave did not involve that character.

To my knowledge, that is the only case where MySQL changed a collation after 
releasing it.



Yes, it has been the only occurrence. However, the eszett (sharp S) is 
just one example of the alternative spelling letters that were affected 
by the collation change. Thorn, the AE ligature, and many others fall into 
that same category.


Regards,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Minor collation question

2012-06-12 Thread Shawn Green

On 6/12/2012 9:37 AM, Johan De Meersman wrote:

... but too lazy to look for it myself right now :-p

I'm going to be running into (woot, scheduled problems :-p ) the 5.0 to 5.1 
upgrade collation issue where German ß is now collated as 's' instead of as 
'ss', causing duplicate key errors.

The basic solution is to set the collation to utf8_bin - so far, so good - but 
I'm wondering whether the table collation impacts only sorting, or whether 
comparisons will also magically become binary and/or other symptoms might pop 
up ? I suspect 'yes' based on the primary key issue, but I want to verify.



The other tactical move has been to jump 5.1 and upgrade directly from 
5.0 to 5.5 where that problem is resolved.  Sure, it's a bit more work 
(full dump/restore is highly recommended) but it avoids the collation 
bug which exists in all 5.1.x releases.


Yes, collations are used for equality and inequality comparisons, too, 
not just sorting. That's why alternate spellings, like the words 
'strasse' and 'straße', will collide within a PK in 5.1 where they will 
not for 5.0 or 5.5 (with the appropriate collation).


Regards,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: i need advice on redundancy of mysql server.

2012-06-11 Thread Shawn Green

On 6/11/2012 10:36 AM, Ananda Kumar wrote:

...
Master-master allows load balancing.



Why do people keep replication rings as if they are the best possible 
configuration? A master-slave relationship also permits load balancing 
and is easier to maintain and recover in the event of a node failure.


No MySQL replication topology can, in a generic sense, guarantee safe load 
balancing on its own. Careful precautions must be maintained in your load balancer 
and in your application code to avoid data collisions (adding or 
modifying the same PK on the same table on two or more nodes at the same 
time).


For continuous uptime, you do need redundancy. For the 'limited 
downtime' scenario that the customer proposed, this includes 
geographical redundancy as well as physical duplication and should also 
include sharding his data so that the loss of one sharded set (due to 
extreme disaster or comms failure) does not knock all of his users 
offline at the same time.


'Realtime data' is also a loosely defined term. For hydrologic monitoring 
data 'realtime' could mean within the last 15 minutes.  For some 
applications (such as telecommunication) 'realtime' is measured in 
microseconds.  It may be that the customer's requirements can tolerate a 
normal variance in duplication time provided by MySQL's native 
replication. If not, then rewriting (notice I did not say porting) 
their application to use MySQL Cluster may be the way to meet their 
realtime requirements.


Yes, master-master replication can be useful (in an active-passive 
setup) for rapid failover and recovery if you take the appropriate 
precautions. It is not recommended for the newest administrators because 
recovery can become complicated. One should really understand basic, 
top-down master-slave replication before attempting to create a 
replication ring.


Active-Active (dual master) configuration is even more complicated and 
is suited only for specific application purposes. This is definitely an 
advanced technique and requires careful planning and engineering to 
perform properly.


Regards,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: i need advice on redundancy of mysql server.

2012-06-11 Thread Shawn Green

On 6/11/2012 12:02 PM, Joey L wrote:

You listed a lot of things - but no solution - i am looking for master
- master configuration.
Any tools you have used ?
Anything concrete you can offer?



There is no one-size-fits-all approach to the problem you are 
attempting to solve. However, there are some documented solutions in the 
manual for you to review


Let's start with the basics:

   http://dev.mysql.com/doc/refman/5.5/en/faqs-replication.html


http://dev.mysql.com/doc/refman/5.5/en/replication-solutions-scaleout.html

   http://dev.mysql.com/doc/refman/5.5/en/replication-solutions-switch.html

   http://dev.mysql.com/doc/refman/5.5/en/replication-howto.html

   http://dev.mysql.com/doc/refman/5.5/en/replication-problems.html

We also document some more exotic configurations:

   http://dev.mysql.com/doc/refman/5.5/en/ha-overview.html

Which one you opt for depends on many factors that we cannot determine 
for you.


Warmest wishes,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Foreign key and uppercase / lowercase values

2012-05-16 Thread Shawn Green

Hello Ananda,

On 5/16/2012 6:42 AM, Ananda Kumar wrote:

why are not using any where condition in the update statment



WHERE clauses are not required. Performing a command without one will 
affect every row in the table.



On Wed, May 16, 2012 at 1:24 PM, GFgan...@gmail.com  wrote:


Good morning,
I have an application where the user ids were stored lowercase.
Some batch import, in the user table some users stored  a uppercase
id, and for some applicative logic, in other tables that have a
foreign key to the user table, their user ids are stored lowercase.
...
Have you any idea how to solve this situation without
stopping/recreating the DB? (it's a production environment)
Thanks



Have you tried ?

  SET foreign_key_checks=0;
http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_foreign_key_checks

If that does not work, you would need to first un-create your Foreign 
Key relationships, update your key values (the USER_ID fields), then 
re-create your Foreign Key relationships.
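A rough outline of that sequence (the constraint, table, and column names 
are placeholders for your own):

ALTER TABLE child_table DROP FOREIGN KEY fk_child_user;

UPDATE user_table  SET user_id = LOWER(user_id);
UPDATE child_table SET user_id = LOWER(user_id);

ALTER TABLE child_table
  ADD CONSTRAINT fk_child_user
  FOREIGN KEY (user_id) REFERENCES user_table (user_id);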


Regards,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: multiple instances in win 7 -- any idea

2012-05-14 Thread Shawn Green

On 5/13/2012 6:53 PM, Brown, Charles wrote:
 I'm trying to install multiple instances of mysql on windows 7, 
64bit. 3hrs into the job, I'm not making progress. Does anyone have an idea?


1) The installers are designed to work on single-instance installs or 
upgrades.


2) You only need one install to run multiple copies of the same release. 
The trick is to configure the necessary parts to be unique values 
between the instances


3) Each instance needs its own copy of unique data. No two active 
instances can share data.


4) The list of other items that must be unique per instance is listed here:
http://dev.mysql.com/doc/refman/5.5/en/multiple-servers.html

5) (mailing list rule) - avoid hijacking other threads

6) (general support advice) - when having a problem, try to provide 
descriptive details regarding what you are trying to do, any commands 
you are using, and what types of failures you are encountering 
(including any error messages you are receiving). This usually allows 
anyone trying to help you to respond in a more focused and less general 
way.


Warmest regards,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: One table gets locked by itself

2012-05-09 Thread Shawn Green

On 5/9/2012 6:17 AM, Johan De Meersman wrote:

- Original Message -


From: Claudio Nanniclaudio.na...@gmail.com



Yes indeed,
but I think we are talking about MySQL level deadlocks,
that can happen only with row level locking and transactions.
If the deadlock is generated at application level then you can have
it on anything, also blackhole :-)

Yup, but you don't know where the lock comes from, so that's possible.

However, I just checked the documentation, and he *is* right after all: MySQL 
claims to be deadlock-free for table-locking engines, they simply lock all 
tables at once (I suppose that means any lock statement will invalidate the 
previous one) and always lock tables in the same order.



Actually, we do not lock all tables at the same time. Each table is 
locked as it is needed by the connection that needs it.


As long as this is an implicit lock (created by the SQL command to be 
executed) and not a lock held by an explicit LOCK command then there is 
no way for any two connections to deadlock on a MyISAM (or another 
non-transactional table). It is only when a lock extends beyond the 
single-object scenario that makes a deadlock is possible. As each 
statement against a MyISAM table generally only needs to lock one object 
(the table) no two MySQL-based statements can deadlock. Even in those 
cases where one statement uses two objects (such as a INSERT...SELECT... 
or multi-table UPDATE command) the lock prioritization process for 
MyISAM prevents any two threads from cross-locking at the same time.


--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: RFE: Allow to use version-specific my.cnf files

2012-04-27 Thread Shawn Green

Hello Honza,

On 4/27/2012 4:35 AM, Honza Horak wrote:

On 04/25/2012 05:52 PM, Andrés Tello wrote:

Reads interesting, but...

Why would you need that?

I mean... If I run several databases in the same hardware, I use
completely
diferent paths for evertying, so I can have atomic, clean and specific
files for each instance/version of the database


Thanks for your opinion.

You're right, it doesn't make too much sense regarding system-wide
configuration files, such as /etc/my.cnf. A real use case I see is when
we speak about users' config files, like ~/.my.cnf.

Let's say we have two different MySQL versions on one hardware, then
it's possible we'll need a bit different options for each instance.

MySQL unfortunately doesn't distinguish between user-specific (usually
called rc files) and system-wide config files. Trying to have the patch
simple, I applied the feature to all config files (which was not
necessary).

The attached patch now is a bit more complicated, but restricts the
feature only for config files in user's home directory. I believe this
makes more sense, than the original one.

Any comments welcome again.

Cheers,


I frequently need to have multiple versions ready to operate on my 
machine at any time. I solved the configuration file problems by only 
setting them up in the basedir of the installed version.


For those special occasions when I need to configure multiple copies of 
the same version, I create a separate set of --datadir folders and craft 
separate configuration files for both. I start mysqld using the 
--defaults-file option and point that at the special file for each 
instance.


As a matter of convenience, if you need to constantly run with multiple 
instances on the same host and if any one of those may need to be using 
a different version than the others, then the utility mysqld_multi may 
be what you need to be looking at.  Each instance you manage by the 
script can have their own separate set of settings all stored in the 
same, common, configuration file. Check it out:

http://dev.mysql.com/doc/refman/5.1/en/mysqld-multi.html
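A minimal sketch of such a shared option file (the paths, ports, and group 
numbers are placeholders):

[mysqld_multi]
mysqld     = /usr/bin/mysqld_safe
mysqladmin = /usr/bin/mysqladmin

[mysqld1]
port    = 3306
socket  = /tmp/mysql1.sock
datadir = /var/lib/mysql1
basedir = /usr/local/mysql-5.1

[mysqld2]
port    = 3307
socket  = /tmp/mysql2.sock
datadir = /var/lib/mysql2
basedir = /usr/local/mysql-5.5

Then 'mysqld_multi start 1' or 'mysqld_multi start 2' brings up the 
instance you want.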



--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Why does the limit use the early row lookup.

2012-04-23 Thread shawn green
On 4/22/2012 11:18 PM, Zhangzhigang wrote:
Why doesn't the MySQL developer team do this optimization?
 
--- On Friday, April 20, 2012, Reindl Harald <h.rei...@thelounge.net> wrote:
 
 ...

 because the mysql optimizer until now is really
 bad in many situations - order by rand() makes a
 temporary table with ALL data as example even with limit

 select * from table order by rand() limit 10;
 reads and writes the whole table to disk
 have fun with large tables :-)


 

When the Optimizer is told to sort a result set in the order determined
by a random value created only at the time of the query, what better
technique could they use than to materialize the table, sort the data,
then return the results?

If you can think of a better way of sorting random numbers, please tell
us. MySQL has and still does accept solutions from the community. I do
admit that at times in our past we have been very slow about processing
those submissions but recently we have made great improvements in how we
handle those.

Also, we have made significant strides in improving our Optimizer in 5.5
and more improvements will be coming in 5.6.  Please check out our newer
versions to see if we have solved or improved any particular scalability
problems you may be having.

Regards,
-- 
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Maser-Slave replication is playing up

2012-04-20 Thread shawn green

Hello Egor,

On 4/20/2012 7:17 AM, nixofortune wrote:

Hi guys,
I'm am experiencing strange behaviour with the replication.
Our replication Setup looks like this:

Master1(5.1.46-log) <==> Master2(5.1.46-log)
                          /       |       \
         Slave1(5.1.46-log)  Slave2(5.1.52)  Slave3(5.1.52)



Yesterday I noticed that 2 of the slaves start lagging behind the master.
The load on the slaves Machines was about 1.0, top = mysqld 100%

SHOW SLAVE STATUS:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Relay_Log_Pos: 670375858  without any progress for 4 hours
Exec_Master_Log_Pos: without any progress for 4 hours
Seconds_Behind_Master: steady growing
No Errors.

Relay log at the Relay log position was something like this:

# at 670375858
#120419  6:22:57 server id 5  end_log_pos 670375922  Query
thread_id=48477609 exec_time=8 error_code=0
SET TIMESTAMP=1334830977/*!*/;
SET @@session.auto_increment_increment=10,
@@session.auto_increment_offset=5/*!*/;
BEGIN
/*!*/;
# at 670375922
# at 670376015
# at 670377033
# at 670378042
# at 670379050
# at 670380055
..
..
# at 678710787
# at 678711799
(8,257 rows like that..)



This is a representation of the replication stream using ROW formatting.


#120419  6:22:57 server id 5  end_log_pos 670376015  Table_map:
`fw4`.`newsletter_campaigns_recipients_type_regular` mapped to number
10591074
#120419  6:22:57 server id 5  end_log_pos 670377033  Delete_rows: table id
10591074
#120419  6:22:57 server id 5  end_log_pos 670378042  Delete_rows: table id
10591074
#120419  6:22:57 server id 5  end_log_pos 670379050  Delete_rows: table id
10591074


#120419  6:22:57 server id 5  end_log_pos 678711799  Delete_rows: table id
10591074
#120419  6:22:57 server id 5  end_log_pos 678712260  Delete_rows: table id
10591074 flags: STMT_END_F
(8,257 rows like that..)



Those are descriptions of the actual ROW events being executed. It 
appears you are trying to delete 8257 rows from the table 10591074 which 
has been mapped to the name 
`fw4`.`newsletter_campaigns_recipients_type_regular`



BINLOG '
geePTxMFXQAAAE8g9ScAAGKboQEAA2Z3NAAsbmV3c2xldHRlcl9jYW1wYWlnbnNfcmVj
aXBpZW50c190eXBlX3JlZ3VsYXIABgMDAw8PDwYQAHwAIAAI
geePTxkF+gMAAEkk9ScAAGKboQAABv/AlWdgIAEAAABnNAUAA01ycwRGVzI0IDkwODcx
ZGIxMDEwOTUyNWM3NTJmODYwYjY4YmNkMjdlwItnYCABZzQFAAJEcgRGVzI0IDYxOTQxZTMw
ZGU2MDNlYWRmOTBmZTMyMGZiODA5OGNhwIFnYCABZzQFAAJNcwRGVzI0IGMxMTIzYWZlNThh
NDczMmQ0ODE1Yzk3ZDUwNmEyMTdhwHdnYCABZzQFAAJNcwRGVzI0IGQ5YmY3YTJjZDAyNzNl
M2Y0MmNmYzI3MDliOTJmNzc0wG1nYCABZzQFAAJNcgRGVzI0IGJmZjY4OTlhNjUwZjdlYmE3
NjY2YzZjMjkyNzJkZGIzwGNnYCABZzQFAAJNcwlFeGVjdXRpdmUgMTE0ZDlmNWY4ZDU2ZGIy
ZmJiNmRiMWY0OTExZmZkNjTAWWdgIAEAAABnNAUAAkRyBEZXMjQgMGMxMGMzMTA3Y2MxMGJhNmE1
ZjhkYzlmMGI1NTM1MDHAT2dgIAEAAABnNAUAAk1zBEZXMjQgNzBkOGFjNmE5MmU3ZDE3MDc5OTEx
NmVmOTE3OTg2OTHARWdgIAEAAABnNAUAAkRyCUV4ZWN1dGl2ZSA4ZjlmMmZiMzc3ZWYwOTFjZDc0
ZWJkNGZmOTdmYzVkZsA7Z2AgAQAAAGc0BQACRHIERlcyNCA2OTQwMGZhNzUyNTg5NmM2Mjc4ZDI1

(1 686 969: 125Meg Rows of Rubbish like that )



This is an actual ROW event. It is a base64 encoded representation of 
the binary information that represents both the values of the original 
row and the values that you want that row to become.




After 4Hours time the Slaves started processing the Replication logs and
gradually catching up with the masters.
No Slave errors btw yet.
Those deletes haven't been processed on slaves and we have now 500 000 rows
difference with the masters.
I'm trying to understand what's caused this issue and what actually
happened here.

 ...

The most common mistake when using ROW or MIXED is the failure to verify 
that every table you want to replicate has a PRIMARY KEY on it. This is 
a mistake because when a ROW event (such as the one documented above) is 
sent to the slave and neither the master's copy nor the slave's copy of 
the table has a PRIMARY KEY on the table, there is no way to easily 
identify which unique row you want replication to change.


Replication solves this problem by scanning your entire table and 
choosing an appropriately matching row to the 'before' image embedded in 
the ROW event.  Multiply the number of rows you are trying to change by 
the number of rows you need to scan and this can quickly become a 
process that takes a long time to complete. A numerical example would 
look like this:


* 50,000 rows in a table without a PRIMARY KEY
* You delete 5000 of those on the master.
* The 5000 deletion events are written to the Binary Log in ROW formatting.

When the slave gets this event, it must process those 5000 row events by 
scanning the full table 5000 times. This means that the slave needs to 
resolve 5000*50,000=250,000,000 (250 million) full-row comparisons in order 
to process this DELETE event.


The lesson here: Always assign a PRIMARY KEY to your tables if you are 
going to use MIXED or ROW values for --binlog-format
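Hedged example (the column names below are invented; use whatever 
combination of columns genuinely identifies one row in your table), applied 
on the master so that it replicates everywhere:

ALTER TABLE newsletter_campaigns_recipients_type_regular
  ADD PRIMARY KEY (campaign_id, recipient_id);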


--
Shawn Green
MySQL Principal

Re: Maser-Slave replication is playing up

2012-04-20 Thread shawn green

Hello All,

On 4/20/2012 7:44 AM, nixofortune wrote:

Thanks,
So the actions should be:
1. Stop slaves
2. On Slaves: SET GLOBAL binlog_format=MIXED
3. Start slaves.

What is restart the replication from scratch with a binary ident copy
(rsync)
Is it to use rsync to rsync mysql data on masters and slaves? and than run
chnge the master to  to start from zero ground?
Many thanks

On 20 April 2012 12:31, Reindl Haraldh.rei...@thelounge.net  wrote:




Am 20.04.2012 13:17, schrieb nixofortune:

So far  I found :
binlog_format different:
Masters (MIXED)
Slaves (STATEMENT)

sync_binlog different on one of the slaves: 1024 against 0 on the rest


binlog format STATEMENT is practically unusable;
there are way too many possible queries which cannot
work this way over the long run

if i were you i would fix the configurations and restart
the replication from scratch with a binary ident copy (rsync)



The --binlog-format setting STATEMENT was not at play in the original 
post. The user is using either MIXED or ROW.


Also, what setting the SLAVES are using only applies to the format of 
the Binary Log created *by the slaves* and has no effect on any log they 
process from their master.


There is nothing visibly wrong with your replication configuration at 
this point. The problem most likely revolves around your table definitions.


--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Maser-Slave replication is playing up

2012-04-20 Thread shawn green

Hello Egor,

On 4/20/2012 8:52 AM, nixofortune wrote:

Hi Shawn,
I have two more questions here.
1. Why haven't those rows been deleted on the slaves?
2. Why were no slow log entries created?
Many thanks for your time and effort.


1) Perhaps the slave has not reached that point in the binary logs? You 
did say they were still behind.  The other thing to check is if the 
master wrote those commands into the binary log (check the master's 
config file for any binlog* filters) or if the slave has been told to 
not execute those commands (look in the slave's config file for any 
replicate* filters)

http://dev.mysql.com/doc/refman/5.5/en/replication-options-binary-log.html#option_mysqld_binlog-do-db
http://dev.mysql.com/doc/refman/5.5/en/replication-options-slave.html#option_mysqld_replicate-do-db

2) The SLAVE SQL thread, by default,  does not contribute to the Slow 
Query Log.  To enable this, use the --log-slow-slave-statements option.

http://dev.mysql.com/doc/refman/5.5/en/slow-query-log.html

Yours,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Questions about mysql-proxy...

2012-04-06 Thread shawn green

On 4/5/2012 3:14 PM, Wes Modes wrote:

No one here has any experience with it?


...

Probably not. If you notice, the version number for MySQL Proxy still 
starts with a 0., which is the universally accepted indicator that it is 
of pre-production quality.  Thanks to the corporate changes of the last 
few years, the sole developer has been too busy doing his primary job to 
really put any more time into the project and no person or group from 
the community has stepped forward to help push the project to a final 
deliverable state.


Now, with that in mind, we do use the very same Proxy code as the 
critical component to the MySQL Enterprise Monitor Agent that is used to 
collect QUery ANalysis data (QUAN).  The difference here is that we 
wrote the script it executes and stress tested it in our labs to prove 
to our satisfaction that it will operate exactly as we wanted it to.


The 'master/slave write/read splitting' sample code on the Proxy site 
has NOT been through such rigorous testing. It was provided merely as a 
sample on which other, more robust, scripts could be constructed and as 
a demonstration of some of the possible uses of MySQL Proxy. You are 
more than encouraged to take this as a starting point, adjust it to 
match your execution environment and coding styles, then test it in your 
lab environment. Once it passes your internal quality review checks, 
then you should have the confidence to push it into production.


In conclusion, I am not at all surprised that the population of people 
on this list who has either written code for or used MySQL Proxy in 
production situations may be very small or none.


Regards,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: HA Scalability w MySQL + SAN + VMWare: Architecture Suggestion Wanted

2012-04-02 Thread shawn green
/reads to master/slaves.


Also the loss of one node of a replication ring is not as easy to 
recover from as simply promoting one slave to become the new master of a 
replication tree (demoting the recovered former-master to become yet 
another slave) as there may be pending events in the relay logs of the 
lost node that have not yet been relayed to the downstream node.


I may not have every answer, but I have seen nearly every kind of 
failure.  Everyone else is encouraged to add their views to the discussion.


--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: HA Scalability w MySQL + SAN + VMWare: Architecture Suggestion Wanted

2012-03-30 Thread shawn green

Hello Wes,

On 3/29/2012 9:23 PM, Wes Modes wrote:

First, thank you in advance for good solid suggestions you can offer. I
suppose someone has already asked this, but perhaps you will view it as
a fun challenge to meet my many criteria with your suggested MySQL
architecture.

I am working at a University on a high-profile database driven project
that we expect to be slammed within the first few months. Since this is
a new project and one that we expect to be popular, we don't know what
kind of usage to expect, but we want to be prepared. Therefore, we are
building in extra capacity.

Our top goals are scalability and high availability, provided we hope
through multiple MySQL nodes and VMWare functionality. I've been
surprised that there are not more MySQL architects trying to meet these
high-level goals using virtualization and shared storage (or at least
they do not seem to be writing about it).

I've looked at replication, multi-mastering, DRBD, clustering,
partitioning, and sharding.

Here's what we got, and some of our constraints:

* We are concerned that One Big Database instance won't be enough to
handle all of the queries, plus it is a single point of failure.
Therefore, multiple nodes are desirable.

* With the primary application that will be using the database, writes
and reads cannot be split off from each other. This limitation alone,
rules out replication, MMM, and a few other solutions.

* We do not expect to be especially write-heavy.

* We have shared storage in the form of an iSCSI SAN. We'd like to
leverage the shared storage, if possible.

* We have VMWare HA which already monitors hosts and brings them up
within minutes elsewhere if we lose a host. So some of the suggested HA
solutions are redundant.

* We expect to have another instance of our system running in the Amazon
cloud for the first few months while the traffic is high, so we may take
advantage of RDS, though an exact duplicate of our local system will
save us development work.

Thanks for any advice you can give.

Wes Modes



As the others have already pointed out, your specifications are rather 
vague and appear to be only hardware-focused. I can state with some 
certainty, through my six years of direct MySQL support experience, that 
the majority of the problems related to performance are NOT hardware 
related.  The majority of the problems I need to resolve are related to 
bad queries, bad table design, and bad application design.


For example, you state:
 * With the primary application that will be using the database, writes
 and reads cannot be split off from each other.

This is already a red flag that your application is not designed for scale.

Also, you say:
 * We are concerned that One Big Database instance won't be enough to
 handle all of the queries, plus it is a single point of failure.
then you say:
 * We have shared storage in the form of an iSCSI SAN. We'd like to
 leverage the shared storage, if possible.

This is another red flag in that your 'shared storage' once again 
becomes a single point of failure.  Also, MySQL instances do not share 
files. Period. Each MySQL instance needs its own, completely 
independent, set of files and folders. Please take the hint that 
Replication should be in your very near future.


Here are some solid suggestions:

a) modularize your code so that WRITES can go to a MASTER server and 
READS can be distributed over a large number of SLAVE servers.


b) modularize your data so that you can easily subdivide subsets of it 
to different master-slave trees. For example, students whose last name 
starts with A-D are assigned to machines in group1, E-K in group2, ... etc.


c) Optimize your queries to avoid subqueries and to take full advantage 
of indexes.


d) Build indexes wisely -  A well-crafted multi-column index can take up 
less space and will be more functional than multiple single-column indexes.


e) Test, test, and retest - Until you can push your system to the 
choking point in the lab, you will have no idea of how much traffic it 
will be able to handle in the field.


Good luck!
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Can't connect as non-root user to database

2012-03-16 Thread shawn green

On 3/16/2012 7:00 AM, Clemens Eisserer wrote:

Hi Rik,


Hm, is the mysql-client library the same as the mysql-server?

Yes.


And does mysql --host=127.0.0.1 --user=someone -p somedb work (or it's actual
IP-address, forcing the TCP/IP connect instead of possible sockets) ?


This is really strange - with -h127.0.0.1 I get the same error:
ERROR 1045 (28000): Access denied for user 'someone'@'localhost'
(using password: YES)

However with -h192.168.1.102 everything works as expected, although I
used 'someone'@'%' everywhere.
Does '%' not include local connections / unix domain sockets?

Thanks, Clemens



On Unix-based systems the alias 'localhost' implies the local Unix 
socket, not a networking port. The important part to notice is that you 
created a user with a host pattern of '%' but the error message said 
@localhost. Because that connection arrived over the local socket, the 
pattern matching applied to '%' (which compares the source address of a 
networked connection to the account's host value) has nothing to work 
with, so the attempt does not resolve to your @'%' account when the 
network was not involved.


There is a logic behind this method of operation. Users with physical 
access to the machine (or remote access through tunneling protocols like 
ssh) are local to the files and processes themselves. The security 
exposure for this type of user means that this is most likely a very 
privileged person and they probably need to be allowed privileges for 
full administrative actions. Therefore a local MySQL user (coming in 
through the local Unix socket) may be assigned very different 
permissions than a user who happens to know the administrative account's 
password but is only allowed to login remotely (via the network). By 
keeping @localhost separate from @host matching patterns, we allow you 
(the DBA) to deny privileged access to any other user that cannot login 
directly from the host machine.
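
As a rough illustration (the account names and passwords here are made
up), the same user name can carry very different privileges depending
on where the connection comes from:

CREATE USER 'appadmin'@'localhost' IDENTIFIED BY 'local-secret';
GRANT ALL PRIVILEGES ON *.* TO 'appadmin'@'localhost';

CREATE USER 'appadmin'@'%' IDENTIFIED BY 'remote-secret';
GRANT SELECT, INSERT, UPDATE ON appdb.* TO 'appadmin'@'%';

The person connecting through the local socket gets full administrative
rights, while anyone presenting the same user name over the network is
limited to the appdb schema.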


Hopefully, this clarifies why your localhost account was unable to login.

Additional reading:
http://dev.mysql.com/doc/refman/5.5/en/connection-access.html

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Can't connect as non-root user to database

2012-03-16 Thread shawn green

On 3/16/2012 2:41 PM, Clemens Eisserer wrote:

Hi Shawn,

I understand the logic behind seperating local and remote users,
postgresql does the same thing in its pg_hba.conf file.
However, what I don't understand is the way this turned out to be such
a huge problem (for me), as it worked already with MySQL-5.1 a few
years ago. I've worked with other DBMs a bit as well, so I am not an
absolute noob.

To be honest I am a bit troubled with your explanation. If the % does
not include localhost, why is it now possible for me to log in as
someone@% after I have deleted that anonymous user Claudio
mentioned (thanks again!).
Furthermore, why does mysql print access denied for
someone@localhost even when I specify -h127.0.0.1?

Thanks a lot for your response, Clemens



Because there is also an entry in your hosts file that equates the 
address 127.0.0.1 to the alias 'localhost'. The MySQL permission check 
looks for the most specific match first. Host matches are more 
specific than user matches, as demonstrated in the examples here:

http://dev.mysql.com/doc/refman/5.5/en/connection-access.html

Once you eliminated the anonymous @localhost user, the system was able 
to check other possible matches, in order of specificity, to see if your 
login attempt qualified for access.


Because the socket-based direct match of someone@localhost failed, the 
system progressed to the network-based match of someone@% where the 
alias was able to resolve your address back from 127.0.0.1 into 
'localhost' which matched your pattern and granted you access.
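
If you want to watch this matching happen, two standard functions are
handy, along with a listing of the candidate accounts:

SELECT USER(), CURRENT_USER();
SELECT user, host FROM mysql.user;

USER() reports what the client presented (name and connection origin),
while CURRENT_USER() reports the account the server actually matched it
to; the second query simply lists the accounts the server had to choose
from.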


Any previous improper behavior in older versions was most likely 
corrected when we fixed several bugs related to the IPv6 loopback 
address. Details are covered in the changelog to version 5.5.3 and the 
specific bug reports linked from within it.

http://dev.mysql.com/doc/refman/5.5/en/news-5-5-3.html

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Strange row counter issues

2012-02-22 Thread Shawn Green (MySQL)

Hello Lay,

On 2/22/2012 07:05, Lay András wrote:

Hi!

I have a table:

CREATE TABLE IF NOT EXISTS `test` (
   `id` int(11) NOT NULL auto_increment,
   `cucc` varchar(255) character set utf8 NOT NULL,
   PRIMARY KEY  (`id`)
) ENGINE=InnoDB  DEFAULT CHARSET=latin1;

INSERT INTO `test` (`id`, `cucc`) VALUES
(1, 'egyszer'),
(2, 'ketszer'),
(3, 'ketszer'),
(4, 'haromszor'),
(5, 'haromszor'),
(6, 'haromszor'),
(7, 'negyszer'),
(8, 'negyszer'),
(9, 'negyszer'),
(10, 'negyszer');

select * from test;

++---+
| id | cucc  |
++---+
|  1 | egyszer   |
|  2 | ketszer   |
|  3 | ketszer   |
|  4 | haromszor |
|  5 | haromszor |
|  6 | haromszor |
|  7 | negyszer  |
|  8 | negyszer  |
|  9 | negyszer  |
| 10 | negyszer  |
++---+
10 rows in set (0.00 sec)

Under 5.0.x version this query works good, the cnt column is right:

set @row=0;select @row:=@row+1 as cnt,cucc,count(id)hany from test group by
cucc order by hany desc;

+--+---+--+
| cnt  | cucc  | hany |
+--+---+--+
|1 | negyszer  |4 |
|2 | haromszor |3 |
|3 | ketszer   |2 |
|4 | egyszer   |1 |
+--+---+--+
4 rows in set (0.00 sec)

Under 5.1.x or 5.5.x the cnt column is bad:

set @row=0;select @row:=@row+1 as cnt,cucc,count(id)hany from test group by
cucc order by hany desc;

+--+---+--+
| cnt  | cucc  | hany |
+--+---+--+
|7 | negyszer  |4 |
|4 | haromszor |3 |
|2 | ketszer   |2 |
|1 | egyszer   |1 |
+--+---+--+
4 rows in set (0.00 sec)

Documentation ( http://dev.mysql.com/doc/refman/5.5/en/user-variables.html )
says this, so not a bug:

As a general rule, you should never assign a value to a user variable and
read the value within the same statement. You might get the results you
expect, but this is not guaranteed. The order of evaluation for expressions
involving user variables is undefined and may change based on the elements
contained within a given statement; in addition, this order is not
guaranteed to be the same between releases of the MySQL Server.

Is there any other solution to emulate row counter, which works with the
above query under 5.1 and 5.5 mysql version?



You need to materialize your sorted results before applying your row 
counter. You can use an automatic temp table or a manual temporary 
table. Here is one way to do this using an automatic temp table:


set @row=0;select @row:=@row+1 as cnt,cucc, hany FROM (SELECT 
cucc,count(id) hany from test group by cucc order by hany desc) as stats;
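
If you prefer the manual temporary table route, here is a minimal
sketch that avoids user variables entirely by letting an AUTO_INCREMENT
column act as the row counter (table and column names follow your
example):

CREATE TEMPORARY TABLE stats (
  cnt INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  cucc VARCHAR(255),
  hany INT
);
INSERT INTO stats (cucc, hany)
  SELECT cucc, COUNT(id) AS hany FROM test GROUP BY cucc ORDER BY hany DESC;
SELECT cnt, cucc, hany FROM stats ORDER BY cnt;
DROP TEMPORARY TABLE stats;

Because a single INSERT ... SELECT inserts rows in the order the SELECT
produces them, the AUTO_INCREMENT values follow the sort order and give
you a stable counter.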


The other option is to create the row counter in your application 
rather than within the database; which way to go depends on how you 
need to use the data. Either way, you must first produce your results 
and only then assign row numbers to them. We changed (and hopefully 
improved) the way the original query is evaluated, which is why your 
row numbering technique no longer works as you expected it to.


Regards,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Upgrade 5.0 - 5.1 - long table names with invalid chars.

2012-02-16 Thread Shawn Green (MySQL)

On 2/15/2012 22:16, Bobb Crosbie wrote:

Hi Folks,

I'm preparing an upgrade of a 5.0.51a database to 5.1.58 (Ubuntu 11.10) -
Some of the table names contain invalid characters which mysql_upgrade
(mysqlcheck) is attempting to escape by renaming the filename.  However I'm
having trouble with some tables with long names.

For instance if I had a table in some_db called:   A table with a really
long name - and some invalid characters

Internally this will be converted to #mysql50#A table with a really long
name - and some invalid characters but it will be truncated to 64
characters: #mysql50#A table with a really long name - and some invalid
char  and I will get errors such as:

 Failed to RENAME TABLE `#mysql50#A table with a really long name - and
some invalid char` TO `A table with a really long name - and some invalid
char`
 Error: Can't find file: './some_db/A table with a really long name
- and some invalid char.frm' (errno: 2)

Any ideas or suggestions ?
I'd prefer to keep with the ubuntu version of mysql if possible. Would also
like to minimize the size of the downtime window.

Would it be best to just dump/drop/re-import the tables/databases ?

Many Thanks,

- bobb



The simple solution is to rename your tables to
a) use shorter names
and b) not use any illegal characters

Are those odd characters and long names really a requirement of your 
design, or are they there just for the developers' convenience?
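
For what it is worth, a sketch of that rename, done on the 5.0 server
before running mysql_upgrade (the new name below is only an example;
keep it at or under 64 characters and free of the problem characters):

RENAME TABLE `A table with a really long name - and some invalid characters`
  TO `a_table_with_a_short_clean_name`;

Once the names are short and clean, mysql_upgrade no longer needs to
escape or truncate anything.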


--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Reading Schema From a MSSQL Dump on a Mac (or Linux)

2012-02-06 Thread Shawn Green (MySQL)

Hello Stan,

On 2/3/2012 20:06, Stan Iverson wrote:

On Fri, Feb 3, 2012 at 11:11 AM, Johan De Meersmanvegiv...@tuxera.bewrote:


- Original Message -

From: Stan Iversoniversons...@gmail.com

I have a Mac and a Linux server and I need to read the table schema
for a database dump from MSSQL. Possible? How?


Heeh. If you're talking about an SQL dump into a textfile, that should be
doable - you might have to mess with some datatypes or so, but not
impossible.

If this is a binary dump, you're going to have to import it into an MS SQL
server, and proceed from there. MySQL connector for ODBC is one route you
could take.



Yes, it's a binary file; however, MySQL Connector for ODBC only works in
Windows and I'm using a Mac. If no other solutions, will try tomorrow on a
friend's PC.
TIA,
Stan



Are you sure that it only works for PC?  If that's true, why do we have 
installation instructions for Macs and other non-Windows systems in the 
manual?


http://dev.mysql.com/doc/refman/5.1/en/connector-odbc-installation.html

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: mysql won't start with service, but starts with mysqld_safe

2012-02-06 Thread Shawn Green (MySQL)

On 2/4/2012 19:57, Larry Martell wrote:

Just installed mysql on centos 6.2. When I try to start it with service I get:

#service mysqld start
MySQL Daemon failed to start.
Starting mysqld:   [FAILED]

Nothing at all is written to the error log.

But if I start it with mysqld_safe it comes up and works fine.

Anyone know what could be going on here?

-larry



If the daemon is attempting to change users during startup, then you 
must be root when you start it.  Otherwise, become the user `mysql` then 
start the daemon (service) under the proper credentials.


http://dev.mysql.com/doc/refman/5.5/en/server-options.html#option_mysqld_user

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: trick trigger

2012-01-11 Thread Shawn Green (MySQL)

Hello John,

On 1/11/2012 11:16, John G. Heim wrote:

I am working on an app to allow a committee to schedule classes. The
members of the committee can all update the database by changing the
time or the instructor for a class. I have to write an app to warn them
when they've scheduled an instructor for 2 classes at the same time or
if they've scheduled any of a large list of classes at the same time.
For example, they shouldn't schedule Calculus 212 at the same time as
Physics 302 because a student might want to take both classes. And
obviously, they shouldn't schedule Professor Higgenbothom to teach both
Calculus 212 and Probability 278 at 10:00 AM on Monday, Wednesday, and
Friday.

The problem isn't actually writing mysql to select the conflicts. The
problem is when and how to run the code. I could put it in a trigger but
say someone assigns Dr. Higgy to teach Calc 212 at 10 AM MWF. They need
to be able to see that he is now scheduled for another class if they
look at Probability 278. Get the problem? An update to one record can
necessitate an update to any number of other records.

I'm just looking for basic suggestions on how you'd deal with this.
Should I attempt to write a trigger that updates both Calc 212 and
Physics 302 when either is changed? Am I going to create an infinite
loop? I am thinking of telling the committee that it can't be done and
they'll have to wait for the list of conflicts to be recalculated by a
background process once an hour or so.

My current database structure is that there is a link table for
conflicts. If Calc 212 is scheduled at the same time as Physics 302,
that is shown by there being 2 records in a conflicts table. The
conflicts table would contain a record with the primary key for Calc
212, the pkey for Physics 302, and a code indicating that it's a course
conflict. There'd also be a record for Physics 302 indicating that it
has a conflict with Calc 212. If Prof. Higgy is also scheduled to teach
Calc 212 and Probability 278 at the same time, that would also create 2
records in the conflicts table. Like this:

calc212 | phys302 | course_conflict
phys302 | calc212 | courseConflict
calc212 | prob278 | instructorConflict
prob278 | calc212 | instructorConflict

Then my web app can do a select for conflicts when displaying Calc 212,
Probabbility 278, or Physics 302. But how to get that data into the
table? I'm thinking of trying to write a trigger so that when a class
record is updated, the trigger deletes the conflicts records for the
class if the id appears in either column 1 or column 2, re-calculate
conflicts, and re-add the conflicts records. But if anybody has basic
suggestions for a completely different approach, I'd like to hear them.



This is all a matter of GUI design and application logic. For example, 
you could force the user to wait for some kind of database error before 
realizing that the data they just entered was invalid or you can 
pre-select conflict lists from the database and block out certain times 
and people as 'already used' before they make their selections. This 
requires your application to check with the database at certain events.


Let's say you want to schedule a class for Higgy to teach Calc 212, well 
there are at least two lists, from your description, that you need to 
know before allowing the user to pick a date and time:


1) the list of all classes that Higgy is already teaching
2) the list of any other classes that might interfere with Calc 212

Some additional lists may also be useful
* Any other Calc 212 sections already scheduled for other professors
* Any 'no classes here' schedule preferences for Higgy
* The list of teaching areas that may be available/unavailable in which 
your Calc 212 may be taught.


These all need to be added to the logic present at the time the 
scheduler wants to make their choices so that they can avoid many 
un-necessary trips to the database for every schedule they want to create.
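
As an illustration only (every table and column name below is
hypothetical), list #1 above, narrowed to the proposed time slot, boils
down to a simple overlap check against the classes already assigned to
the instructor:

SELECT c.class_id, c.title, c.days, c.start_time, c.end_time
FROM classes AS c
WHERE c.instructor_id = 42            -- Prof. Higgy
  AND c.days = 'MWF'                  -- proposed meeting days
  AND c.start_time < '11:00:00'       -- proposed end time
  AND c.end_time   > '10:00:00';      -- proposed start time

Any rows returned are conflicts the GUI can display (or use to grey out
that time slot) before the user ever tries to save.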


Another thing to do is to temporarily block (not with a database-level 
transaction) access to both Higgy and Calc 212 to minimize the chance of 
conflicting with the changes made to the database by someone else also 
trying to enter scheduling information.


Summary:
* Gather as much data as you can before the request leaves the user. 
This frees up the database to handle just the data changes as they need 
to happen. Conflicts can still exist (always assume someone else may 
steal the room, for example) and those may need to be resolved through a 
different process.


* Keep the business logic in your application, leave the data integrity 
rules to the database.


--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Common Pattern for parent-child INSERTs?

2012-01-07 Thread Shawn Green (MySQL)

Hello Jan,

On 1/7/2012 00:58, Jan Steinman wrote:

Okay, I'm seeking enlightenment here.

In my trivial case, one can save storage by not having a record extension for 
people without phones. Big deal.

In my real-world case, I have a contacts database with your typical name, address, phone, email, etc. info. 
Then I have extensions for people who are particular type of contacts that have more information than the 
general case. If I have several thousand records in my contacts database, but only ten in the dairy 
customers database, I'm saving a ton of storage by not having every single record in the 
general-purpose contacts database contain stuff like desired_milk_pickup_day or SET 
dairy_products_of_interest.

But now I have a different extension, Volunteers, with extra fields like special_skills, 
dietary_restrictions, etc. I don't want those fields in the general contact list. And there's another 
extension, Advisory, that holds extra information for contacts who are on our advisory council.

In normalizing databases, I was taught to do exactly what I've done, separate 
out the special cases and put them in a separate table. But as you note, that 
creates a bit of a mess for INSERT while simplifying SELECT.

ON UPDATE CLAUSE does not help on INSERT, does it? I mean, how does it know the 
auto-increment value of the parent record before it's been INSERTed?

It appears that anything I do must be wrapped in a transaction, or there's the 
chance (however unlikely) that something will get in between the INSERT of the 
parent and that of the child.



Once you have inserted the 'parent' row (the one in the Contacts table) 
you know the ID of the parent. This cannot change and no other contacts 
will be given the same ID. You include this ID with the other INSERT 
commands you need for your 'child' rows.


You do have two options to handle rollback scenarios:
1) run with only InnoDB tables and wrap all of the related INSERTs with 
a single transaction


2) use any tables you like and keep track of the auto_increment values 
issued for each row you are INSERTING in your application, too. This 
allows you to implement a manual rollback in the event of some kind of 
problem.


Take, for example, your Volunteers example. This requires at least two 
rows: one main row on the `contacts` table and another on the 
`volunteers` table.


  INSERT `contacts` (name, ...) VALUES ('Bob the Volunteer',...);
  SET @contact_id = LAST_INSERT_ID();
  INSERT `volunteers` (contact_id, skill, diet_restrict, ...) VALUES 
(@contact_id, 'carpentry', 'hates fish', ...);

  SET @volunteer_id = LAST_INSERT_ID();
  ...
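
If you take option 1 above, the same two inserts wrapped in a single
transaction would look something like this (a sketch that assumes both
tables are InnoDB and, for brevity, have only the columns shown):

START TRANSACTION;
  INSERT `contacts` (name) VALUES ('Bob the Volunteer');
  SET @contact_id = LAST_INSERT_ID();
  INSERT `volunteers` (contact_id, skill, diet_restrict) VALUES
    (@contact_id, 'carpentry', 'hates fish');
COMMIT;  -- or issue ROLLBACK instead if any statement above failed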

If you don't want to track the ID values in user variables, you can 
query them and draw them back into application-based variables. If you 
want to track lists of values, you can add them to temporary tables to 
build each list or query them into application-side arrays. At this 
point how you handle those numbers is up to you.


Remember, though, that LAST_INSERT_ID() can only return one value. This 
means that you cannot use it for batch processing reliably unless you 
manually lock the table and guarantee a specific sequence of numbers. 
The other option is to build an association table of (id, name) based on 
the newly-inserted data (or something similar based on some unique 
combination of identifiers in your original data instead of just 'name' 
) so that each of your child rows can be assigned their proper parent id 
values.

http://dev.mysql.com/doc/refman/5.5/en/information-functions.html#function_last-insert-id

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: mysql_secure_installation

2011-12-27 Thread Shawn Green (MySQL)

Hello Ryan,

On 12/18/2011 15:36, Ryan Dewhurst wrote:

Hi,
Does anyone know why what's done in 'mysql_secure_installation' [0]
isnot part of the default mysql installation?
[0] http://dev.mysql.com/doc/refman/5.0/en/mysql-secure-installation.html
Thank you,Ryan Dewhurst
P.S. I also asked this question on the
forums:http://forums.mysql.com/read.php?30,506069,506069#msg-506069



The script simply automates the steps documented in our manual, here:
http://dev.mysql.com/doc/refman/5.0/en/default-privileges.html

If you want to see the script in action:
* repeat a fresh install
* enable the General Query Log
* run the script.

The General Query Log stores a copy of every command sent to a MySQL 
server before the server even parses it. It's there as a diagnostic tool 
and should not be enabled on a production machine unless there is a 
specific need to do so.
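
On 5.1 and later servers the general query log can be toggled at
runtime (a sketch; you need a sufficiently privileged account, and on
5.0 you would start the server with the --log option instead):

SET GLOBAL general_log_file = '/tmp/secure_install.log';
SET GLOBAL general_log = 'ON';
-- run mysql_secure_installation from a shell, then:
SET GLOBAL general_log = 'OFF';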


The steps of 'mysql_secure_installation' are not performed by default 
because many people want to just get to know MySQL before putting it 
into full production. This is most easily performed (especially in a 
classroom setting) with an unsecured installation. Also the steps to 
secure the installation can be leveraged as an excellent teaching tool for:


a) How MySQL accounts are authenticated
b) Where the account information is stored
c) The different levels of authentication supported by MySQL.

For those who don't want to read or learn, or for those who simply want 
to automate their installation, there is the script.


--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: SQL DATA

2011-12-13 Thread Shawn Green (MySQL)

Hello All,

On 12/5/2011 14:20,  wrote:

A procedure MODIFIES SQL DATA if all the modifying involves a temporary table, 
or only READS SQL DATA?

Does it matter whether the temporary table is meant to outlast the 
procedure-call?




Many of you are missing the big picture. This flag (along with most of 
the others you can use to describe a stored procedure) are meant to 
indicate to the replication system whether it needs to log the CALL to 
this procedure and in which format.


Statements that only collect data (and don't even write to temporary 
tables) do not need to be written into the binary log. Any stored 
procedures that change a table, even if it's a temporary table, need to 
be marked as MODIFIES SQL DATA.
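
A small, made-up example of the distinction:

DELIMITER //
CREATE PROCEDURE refresh_report_cache()
MODIFIES SQL DATA        -- this routine changes data (a temporary table counts)
BEGIN
  DROP TEMPORARY TABLE IF EXISTS report_cache;
  CREATE TEMPORARY TABLE report_cache (k VARCHAR(32), v INT);
  INSERT INTO report_cache VALUES ('row_count', 0);
END//

CREATE PROCEDURE read_report_cache()
READS SQL DATA           -- this routine only reads data
BEGIN
  SELECT k, v FROM report_cache;
END//
DELIMITER ;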


The other commenters are correct: The contents of the stored procedure 
are not evaluated to see if you set the descriptive flags correctly. The 
database must trust you, the DBA, to do that properly.


http://dev.mysql.com/doc/refman/5.5/en/replication-sbr-rbr.html
http://dev.mysql.com/doc/refman/5.5/en/replication-rbr-safe-unsafe.html

Regards,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Loading of large amounts of data

2011-12-07 Thread Shawn Green (MySQL)
 with a grain of salt and adjust this 
possible design based on any other factors you did not include in your 
list of requirements. It may even be possible (depending on the size of 
your rows and other factors) that MySQL Cluster might be a better fit 
for your requirements.  I encourage you to engage with Cluster sales or 
any reputable consultant to get an evaluation and their recommendation, 
too. (disclaimer: I am not a cluster guru). I also encourage you to seek 
multiple recommendations. Many different solutions to the same problems 
you describe have been created by many different people. What works in 
my mind may not work in all situations.


Regards,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: [MySQL] innodb_file_per_table / apple workers and logic

2011-11-30 Thread Shawn Green (MySQL)

Hello all,

On 11/30/2011 16:46, Reindl Harald wrote:



Am 30.11.2011 19:13, schrieb Karen Abgarian:

Hi inline there.

On 30.11.2011, at 0:16, Reindl Harald wrote:




Most people do not expect a gas tank to shrink once the
gas is consumed...right?


WHO THE FUCK is comparing computers with a gas tank?

Well, I do.  I even managed to do it without using foul language.


what answer do you expect comparing a database with a gas tank
while the gas tank is the hard-drive? if i take some gas out of
the tank (hard-drive)  i expect that there is space for new one



Actually, the gas tank is a good analogy.

There is limited volume in a vehicle which must contain the tank. In 
this analogy, the vehicle must have space for not just fuel but 
passengers, cargo, engine, transmission, etc.  The fact that the tank 
may grow so large it displaces other items from the vehicle is 
appropriate to the original situation (no room left on disk).



There are a lot of things in this life to be upset about.  Empty gas tanks is 
one thing.
But I would not spill all that frustration on the very first person I meet on 
the net.


my frustration is people like you comparing a database with a gas tank
while not understand that the gas tank is the underlying hard-disk

if you stop make laughable comparison you will not get back frustration



I am sorry if you didn't see the larger picture she was trying to present.



Taking the logical part of what was said above, there existed a database that
possibly was able to save the space by using files_per_table.   Does this 
somehow
mean that there are no other databases in the world?


have i said this?

a default which makes it unable to free no longer used space
is dumb not more and not less



There are expenses to maintaining separate files per table that you do 
not have for the larger, more inclusive tablespaces. Individual 
tablespaces can become so numerous that your system may run out of file 
handles to operate them all, for example.  All of those file names may 
clog your directory/folder system making it much slower to randomly 
access any one file, as another example.


While it is true that recovering unused space may be useful to restore 
disk space, it is also true that allocating and deallocating disk space 
is an expensive action.  It is much more efficient in the long run to 
leave an expanded file in its larger state than it would be to 
constantly be shrinking it to a minimal size. Also, since the user 
required that much space at least once before (by their query or usage 
patterns) it is reasonable to assume that the same working space (or 
more) will be used again in the future.


So this puts the 'design decision' squarely on the side of 'always 
increase, never decrease' in order to minimize the disk allocation costs 
associated with operating the InnoDB storage engine.  There are other 
storage options (MyISAM, Archive, CSV, etc) in the event this behavior 
of InnoDB is more than you want to deal with.
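
For completeness, a sketch of how the option interacts with reclaiming
space (the table name is hypothetical; on 5.5-era servers the variable
is dynamic, on older servers set it in my.cnf and restart instead):

SET GLOBAL innodb_file_per_table = 1;
ALTER TABLE big_table ENGINE=InnoDB;  -- rebuilds big_table into its own .ibd file sized to its current data

Tables rebuilt this way get a compact, per-table file, but the shared
ibdata files themselves still never shrink; that is the design decision
described above.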


The default to NOT use individual tablespaces is related to the need to 
potentially adjust OS-level limits to handle the additional file 
volumes. If that is not a problem for you and your admins, more power to 
you. For desktop users, however, that may not be an option they can use. 
So the default remains at 0 until the support for it becomes much more 
common among Linux user accounts.


Regards,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: cope with deadlock

2011-11-28 Thread Shawn Green (MySQL)

On 11/17/2011 01:41, 王科选 wrote:

hi,
 From this url:
http://dev.mysql.com/doc/refman/5.5/en/innodb-deadlocks.html , mysql
says If you are using locking reads (|SELECT ... FOR UPDATE|
http://dev.mysql.com/doc/refman/5.5/en/select.htmlor|SELECT ... LOCK
IN SHARE MODE|), try using a lower isolation level such as|READ
COMMITTED|
http://dev.mysql.com/doc/refman/5.5/en/set-transaction.html#isolevel_read-committed.

What's the reason? I have read some online material, but still don't get
the point, can anyone explain why?



The reason behind using less locking has to do with the reason for 
deadlocks in the first place.  A deadlock happens when two consumers of 
a resource need access to the parts of that resource that the other 
consumer controls. An absurdly simple example of a deadlock:


* There is a table of 1000 records
* User A starts updating the table in incrementing order (1, 2, 3, ...)
* User B starts updating the table in descending order (1000, 999, 998, ...)
* The two transactions meet somewhere in the middle of the table. 
Because neither A nor B could complete its sequence of changes without 
access to the rows controlled by the other transaction, we have achieved 
a deadlock. One of the transactions will be rolled back to allow the 
other to continue.


Deadlocking cannot be eliminated from any system that shares resources 
in a random-access method among multiple users. There are, however, 
many ways to reduce deadlocking:
* Always access your resources in the same sequence. This means both 
table sequence and row sequence per table.
* Only lock those resources that you absolutely need for your 
transaction. The fewer things you need to lock, the less likely it will 
be that another session will need to use them too.
* Keep your locks for the least time possible. By reducing the duration 
of your locks, you are also reducing the chances that another session 
will need to use those resources at the same time you are using them.
* Use shared locks instead of exclusive locks whenever possible. When it 
comes to transaction isolation in InnoDB, the less isolation you 
require, the more likely you are to generate a shared lock vs an 
exclusive lock.


Regards,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Issue With Subqueries

2011-11-09 Thread Shawn Green (MySQL)

Hi Mike,

On 11/8/2011 20:46, Mike Seda wrote:

All,
Can anyone out there explain the result of the third statement provided
below:

mysql select count(distinct field1) from db1.table1;
++
| count(distinct field1) |
++
| 1063 |
++
1 row in set (0.01 sec)

mysql select count(distinct field1) from db2.table1;
++
| count(distinct field1) |
++
| 3516 |
++
1 row in set (0.03 sec)

mysql select count(distinct field1) from db2.table1 where field1 not in
(select field1 from db1.table1);
++
| count(distinct field1) |
++
| 0 |
++
1 row in set (0.08 sec)

A colleague of mine is stating that the result should be much greater
than 0.

Please let me know what you think.

Thanks In Advance,
Mike



Simple set theory suggests that every field1 value in db2.table1 also 
appears as a field1 value in db1.table1. Note, too, that if 
db1.table1.field1 contains even one NULL, the NOT IN subquery can match 
nothing at all and return zero rows; that is another common cause of 
this result.


Try this:

SELECT t2.field1, t1.field1
FROM db2.table1 AS t2
LEFT JOIN db1.table1 AS t1
  ON t2.field1 = t1.field1
WHERE t1.field1 IS NULL;

How many rows do you get back from that?

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Removing Double Quotes

2011-11-03 Thread Shawn Green (MySQL)

On 11/3/2011 02:29, Adarsh Sharma wrote:

Dear all,

I need to remove the quotes in text columns in a MySQL table. It has
more than 5000 rows.
In some rows the values are quoted with " characters. I want to remove them.

Below is the snapshot :

*ID /URL Country Publication / Description ...and so on*
2474 http://www.times-standard.com/ United States Times-Standard
California 1 2009-10-22 10:34:39 T F 7546609 0 T F T T T
2475 http://www.argentinastar.com/ Argentina Argentina Star 1 -00-00
00:00:00 0 0 0 0 2476 http://www.economist.com/countries/argentina/;
Argentina Economist  The site gives research tools to search
articles by subjects and backgrounds. DD in the date tag is written in
the following format : 3rd for 3 2 -00-00 00:00:00 0 0 0 N 0 2477
http://www.ambito.com/english/; Argentina Ambito.Com The date tage
show on the right side main page 0 -00-00 00:00:00 T 0 0 0 N 0
2570 http://en.apa.az/; Apa 1 -00-00 00:00:00 0 0 0 0 2571
http://www.theazeritimes.com/; Caspian  The Azeri Times last
update on 18 Feb 2011 1 -00-00 00:00:00 0 0 0 0


Please let me know the efficient  easiest way to remove it.



Thanks



The manual is your friend; please don't be afraid to use it. I believe 
the function you are looking for is REPLACE().


http://dev.mysql.com/doc/refman/5.5/en/string-functions.html#function_replace
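
A hedged sketch, with made-up table and column names (run the SELECT
first to confirm the result before applying the UPDATE):

SELECT url, REPLACE(url, '"', '') FROM sources WHERE url LIKE '%"%' LIMIT 10;
UPDATE sources SET url = REPLACE(url, '"', '') WHERE url LIKE '%"%';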

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org


