Re: can I just encrypt tables? what about the app?

2016-02-29 Thread Jesper Wisborg Krogh

Hi Lejeczek,

On 1/03/2016 00:31, lejeczek wrote:

hi everybody

a novice type of question - having php + mysql, can one just encrypt
tables (internally in mysql) and will the php be fine?
If not, would it be easy to re-code the php to work with this new,
internal encryption?


Starting with MySQL 5.7.11, there is transparent data encryption (TDE)
for InnoDB tables. If you use that, it is, as the name suggests,
transparent to PHP. See also:
https://dev.mysql.com/doc/refman/5.7/en/innodb-tablespace-encryption.html
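
As a rough sketch (assuming the keyring_file plugin is available; the table
names below are just placeholders), enabling it looks like this:

  # my.cnf: load the keyring plugin before InnoDB initialises
  [mysqld]
  early-plugin-load=keyring_file.so

  -- then mark individual InnoDB tables as encrypted
  CREATE TABLE app_data (id INT PRIMARY KEY, payload TEXT) ENCRYPTION='Y';
  ALTER TABLE existing_table ENCRYPTION='Y';

The PHP side keeps issuing the same queries as before.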


Best regards,
Jesper Krogh
MySQL Support




Re: dump, drop database then merge/aggregate

2016-02-29 Thread Steven Siebert
Ah, OK. If I understand correctly, within this context every record in the
one table _should_ have a unique identifier.  Please verify this is the
case, though: if, for example, the primary key is an auto-increment, what I'm
going to suggest is not good and Really Bad Things will, not may, happen.

If you want to do this all in MySQL, and IFF the records are ensured to be
*globally unique*, then what I suggested previously would work but isn't
necessary (and is actually dangerous if global record uniqueness is not
definite).  You _could_ do a standard mysqldump (use flags to dump data only,
no schema), and on the importing server it will insert the records; if
there are duplicate records, they will fail. If there is a chance the
records aren't unique, or if you want to be extra super safe (a good idea
anyway), you can add triggers on the ingest server to ensure
uniqueness/capture failures and record them in another table for analysis,
or perhaps even to remediate the data immediately (update the key) and do the insert.
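
Something like this for the dump/import round trip (untested sketch; the
database and table names are placeholders):

  # on the collecting box: data only, no schema
  mysqldump --no-create-info surveydb responses > responses.sql

  # on the aggregating box: duplicate keys make those inserts fail;
  # --force lets the client continue past the errors instead of stopping
  cat responses.sql | mysql --force surveydb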

Now, for me, using triggers or other business-logic-in-database features is
a code smell.  I loathe putting business logic in databases, as it tends to
be non-portable and is hard to troubleshoot for the people behind me who
expect the logic to be in code.  Since you're having to script this
behavior out anyway, if it were me I would dump the data in the table to
CSV or similar using SELECT ... INTO OUTFILE rather than mysqldump, ship the
file, and have a small php script on cron or whatever ingest it, allowing
your business logic for data validation/etc. to be done in code (IMO where
it belongs).
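
Roughly along these lines (untested; paths, table and column names are
placeholders):

  -- on the collecting box
  SELECT * FROM responses
    INTO OUTFILE '/tmp/responses.csv'
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\n';

  -- on the aggregating box: load into a staging table and let the php
  -- script do the validation/merge from there
  LOAD DATA INFILE '/tmp/responses.csv'
    INTO TABLE responses_staging
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\n';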

S



On Mon, Feb 29, 2016 at 12:12 PM, lejeczek  wrote:

> On 29/02/16 16:32, Steven Siebert wrote:
>
>> What level of control do you have on the remote end that is
>> collecting/dumping the data?  Can you specify the command/arguments on how
>> to dump?  Is it possible to turn on binary logging and manually ship the
>> logs rather than shipping the dump, effectively manually doing
>> replication?
>>
> In overview it's a simple php app, a form of a questionnaire that
> collects manual user input; the db backend is similarly simple, just one table.
> Yes, I can run the mysqldump command but nothing else; I do not have
> control over the mysql config or processes.
>
> It's one of those cases where for now it's too late and you are only
> thinking - ouch... that remote box, if compromised, had better hold
> only a minimal set of data.
>
> So I can mysqldump whichever way is best, and ideally I'd have to insert
> without replacing anything, instead aggregating, adding data.
> I think the developers took care of uniqueness of the rows and constructed it
> in conformity with good design practices.
>
> What I'm only guessing at is: when I lock, dump and remove, then insert and
> aggregate, could there be problems with keys? And is there no data loss during
> dump+removal?
>
> thanks for sharing your thoughts.
>
>
>> I agree with others, in general this approach smells like a bad idea.
>> However, updating data from a remote system in batch is quite common,
>> except often it's done at the application level polling things like web
>> services and perhaps some business logic to ensure integrity is
>> maintained.  Attempting to do it within the constructs of the database
>> itself is understandable, but there are risks when not adding that "layer"
>> of logic to ensure state is exactly as you expect it during a merge.
>>
>> At risk of giving you too much rope to hang yourself: if you use mysqldump
>> to dump the database with the --replace flag, you'll convert all
>> INSERT statements to REPLACE, which when you merge will update or insert
>> the record, effectively "merging" the data.  This may be one approach you
>> want to look at, but may not be appropriate depending on your specific
>> situation.
>>
>> S
>>
>>
>>
>> On Mon, Feb 29, 2016 at 11:12 AM, lejeczek  wrote:
>>
>> On 29/02/16 15:42, Gary Smith wrote:
>>>
>>> On 29/02/2016 15:30, lejeczek wrote:

 On 28/02/16 20:50, lejeczek wrote:
>
> fellow users, hopefully you experts too, could help...
>>
>> ...me to understand how and what the best practice should be to dump a
>> database, then drop it and merge the dumps..
>> What I'd like to do is something probably many have done, and I wonder
>> how it's done best.
>> A box will be dumping a database (maybe? tables, if that's better), then
>> dropping (purging) the data, and on a different system that dump will be
>> inserted/aggregated into the same database.
>> It reminds me of a kind of incremental backup, except for the fact that
>> the source data will be dropped/purged on a regular basis, but before a
>> drop, a dump is taken which will later be used to sort of reconstruct that same
>> database.
>>
>> How do you recommend doing it? I'm guessing the trickiest bit might be this
>> reconstruction part, how to merge dumps safely, 

Re: dump, drop database then merge/aggregate

2016-02-29 Thread Steven Siebert
Totally with you, I had to get up and wash my hands after writing such
filth =)

On Mon, Feb 29, 2016 at 12:14 PM, Gary Smith  wrote:

> On 29/02/2016 16:32, Steven Siebert wrote:
>
>>
>> At risk of giving you too much rope to hang yourself: if you use
>> mysqldump to dump the database with the --replace flag, you'll
>> convert all INSERT statements to REPLACE, which when you merge will update
>> or insert the record, effectively "merging" the data.  This may be one
>> approach you want to look at, but may not be appropriate depending on your
>> specific situation.
>>
>> I'd considered mentioning this myself, but this was the root of my
> comment about integrity - if the original database or tables are dropped,
> then the replace command will cause the data to poo all over the original
> dataset. As you mentioned in your (snipped) reply, this can go badly wrong
> in a short space of time without the correct controls in place. Even if
> they are in place, I'd have trouble sleeping at night if this were my
> circus.
>
> Gary
>


Re: dump, drop database then merge/aggregate

2016-02-29 Thread Gary Smith

On 29/02/2016 16:32, Steven Siebert wrote:


At risk of giving you too much rope to hang yourself: if you use
mysqldump to dump the database with the --replace flag, you'll
convert all INSERT statements to REPLACE, which when you merge will
update or insert the record, effectively "merging" the data.  This may
be one approach you want to look at, but may not be appropriate
depending on your specific situation.


I'd considered mentioning this myself, but this was the root of my 
comment about integrity - if the original database or tables are 
dropped, then the replace command will cause the data to poo all over 
the original dataset. As you mentioned in your (snipped) reply, this can 
go badly wrong in a short space of time without the correct controls in 
place. Even if they are in place, I'd have trouble sleeping at night if 
this were my circus.


Gary




Re: dump, drop database then merge/aggregate

2016-02-29 Thread lejeczek

On 29/02/16 16:32, Steven Siebert wrote:

What level of control do you have on the remote end that is
collecting/dumping the data?  Can you specify the command/arguments on how
to dump?  Is it possible to turn on binary logging and manually ship the
logs rather than shipping the dump, effectively manually doing replication?
In overview it's a simple php app, a form of a
questionnaire that collects manual user input; the db backend is
similarly simple, just one table.
Yes, I can run the mysqldump command but nothing else; I do
not have control over the mysql config or processes.


It's one of those cases where for now it's too late and you
are only thinking - ouch... that remote box, if compromised,
had better hold only a minimal set of data.


So I can mysqldump whichever way is best, and ideally I'd have to
insert without replacing anything, instead aggregating,
adding data.
I think the developers took care of uniqueness of the rows and
constructed it in conformity with good design practices.


What I'm only guessing at is: when I lock, dump and remove, then
insert and aggregate, could there be problems with keys? And is there no
data loss during dump+removal?


thanks for sharing your thoughts.


I agree with others, in general this approach smells like a bad idea.
However, updating data from a remote system in batch is quite common,
except often it's done at the application level polling things like web
services and perhaps some business logic to ensure integrity is
maintained.  Attempting to do it within the constructs of the database
itself is understandable, but there are risks when not adding that "layer"
of logic to ensure state is exactly as you expect it during a merge.

At risk of giving you too much rope to hang yourself: if you use mysqldump
to dump the database with the --replace flag, you'll convert all
INSERT statements to REPLACE, which when you merge will update or insert
the record, effectively "merging" the data.  This may be one approach you
want to look at, but may not be appropriate depending on your specific
situation.

S



On Mon, Feb 29, 2016 at 11:12 AM, lejeczek  wrote:


On 29/02/16 15:42, Gary Smith wrote:


On 29/02/2016 15:30, lejeczek wrote:


On 28/02/16 20:50, lejeczek wrote:


fellow users, hopefully you experts too, could help...

...me to understand how and what the best practice should be to dump a
database, then drop it and merge the dumps..
What I'd like to do is something probably many have done, and I wonder
how it's done best.
A box will be dumping a database (maybe? tables, if that's better), then
dropping (purging) the data, and on a different system that dump will be
inserted/aggregated into the same database.
It reminds me of a kind of incremental backup, except for the fact that
the source data will be dropped/purged on a regular basis, but before a drop, a
dump is taken which will later be used to sort of reconstruct that same database.

How do you recommend doing it? I'm guessing the trickiest bit might be this
reconstruction part: how to merge dumps safely, naturally while maintaining
consistency & integrity?
Actual syntax, as code examples usually are, would be best.

many thanks.


I guess dropping tables is not really what I should even consider -

should I just be deleting everything from the tables in order to remove data?
And if I was to use dumps of such a database (where data was first
cleansed, then some data was collected) to merge data again, would it work
and merge that newly collected data with what's already in the database?


This sounds like a remarkably reliable way to ensure no data integrity.
What exactly are you trying to achieve? Would replication be the magic word
you're after?

I realize this all might look rather like a bird fiddling with a worm
instead of a lion going for the quick kill. I replicate wherever I need to and can;
here I have very little control over one end.
On that end with little control there is one simple database, whose data
I'll need to remove on a regular basis; before removing I'll be dumping,
and I need to use those dumps to add, merge, aggregate data into a database
on the other end, like:
today both databases are mirrored/identical
tonight the awkward end will dump, then remove all the data, then collect some
and again, dump then remove
and these dumps should reconstruct the database on the other box.

Pointers on what to pay attention to, and how to test for consistency &
integrity, would be of great help.


Gary











Re: dump, drop database then merge/aggregate

2016-02-29 Thread Steven Siebert
What level of control do you have on the remote end that is
collecting/dumping the data?  Can you specify the command/arguments on how
to dump?  Is it possible to turn on binary logging and manually ship the
logs rather than shipping the dump, effectively manually doing replication?
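
If that route is open to you, manual log shipping would look roughly like
this (sketch only; paths, names and server-id are placeholders):

  # on the collecting box: enable binary logging in my.cnf
  [mysqld]
  log-bin=/var/lib/mysql/binlog
  server-id=2

  # ship the rotated log files, then replay them on the aggregating box
  mysqlbinlog binlog.000042 | mysql centraldb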

I agree with others, in general this approach smells like a bad idea.
However, updating data from a remote system in batch is quite common,
except often it's done at the application level polling things like web
services and perhaps some business logic to ensure integrity is
maintained.  Attempting to do it within the constructs of the database
itself is understandable, but there are risks when not adding that "layer"
of logic to ensure state is exactly as you expect it during a merge.

At risk of giving you too much rope to hang yourself: if you use mysqldump
to dump the database with the --replace flag, you'll convert all
INSERT statements to REPLACE, which when you merge will update or insert
the record, effectively "merging" the data.  This may be one approach you
want to look at, but may not be appropriate depending on your specific
situation.
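
For example (a sketch; database and table names are placeholders):

  # data only, with every INSERT rewritten as REPLACE
  mysqldump --no-create-info --replace remotedb responses > responses.sql

  # on the central box: rows with a matching key get overwritten,
  # new rows get inserted
  cat responses.sql | mysql centraldb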

S



On Mon, Feb 29, 2016 at 11:12 AM, lejeczek  wrote:

> On 29/02/16 15:42, Gary Smith wrote:
>
>> On 29/02/2016 15:30, lejeczek wrote:
>>
>>> On 28/02/16 20:50, lejeczek wrote:
>>>
 fellow users, hopefully you experts too, could help...

...me to understand how and what the best practice should be to dump a
database, then drop it and merge the dumps..
What I'd like to do is something probably many have done, and I wonder
how it's done best.
A box will be dumping a database (maybe? tables, if that's better), then
dropping (purging) the data, and on a different system that dump will be
inserted/aggregated into the same database.
It reminds me of a kind of incremental backup, except for the fact that
the source data will be dropped/purged on a regular basis, but before a drop, a
dump is taken which will later be used to sort of reconstruct that same database.

How do you recommend doing it? I'm guessing the trickiest bit might be this
reconstruction part: how to merge dumps safely, naturally while maintaining
consistency & integrity?
Actual syntax, as code examples usually are, would be best.

 many thanks.


I guess dropping tables is not really what I should even consider -
>>> should I just be deleting everything from the tables in order to remove data?
>>> And if I was to use dumps of such a database (where data was first
>>> cleansed, then some data was collected) to merge data again, would it work
>>> and merge that newly collected data with what's already in the database?
>>>
>> This sounds like a remarkably reliable way to ensure no data integrity.
>> What exactly are you trying to achieve? Would replication be the magic word
>> you're after?
>>
>> I realize this all might look rather like a bird fiddling with a worm
> instead of a lion going for the quick kill. I replicate wherever I need to and can;
> here I have very little control over one end.
> On that end with little control there is one simple database, whose data
> I'll need to remove on a regular basis; before removing I'll be dumping,
> and I need to use those dumps to add, merge, aggregate data into a database
> on the other end, like:
> today both databases are mirrored/identical
> tonight the awkward end will dump, then remove all the data, then collect some
> and again, dump then remove
> and these dumps should reconstruct the database on the other box.
>
> Pointers on what to pay attention to, and how to test for consistency &
> integrity, would be of great help.
>
>
> Gary
>>
>>
>
>
>


Re: dump, drop database then merge/aggregate

2016-02-29 Thread Johan De Meersman
- Original Message -
> From: "lejeczek" 
> Subject: Re: dump, drop database then merge/aggregate
> 
> today both databases are mirrored/identical
> tonight awkward end will dump then remove all the data, then
> collect some and again, dump then remove
> and these dumps should reconstruct the database on the other
> box.

It sounds like a horrible mess, to be honest. It's also pretty hard to 
recommend possible paths without knowing what's inside. Is it an option for you 
to simply import the distinct dumps into different schemas? That way there 
would be no need for merging the data, you just query the particular dataset 
you're interested in.
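
For instance (a sketch; schema and file names are placeholders):

  mysqladmin create survey_20160229
  cat dump_20160229.sql | mysql survey_20160229

Each nightly dump then lands in its own schema and nothing ever gets merged.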


-- 
Unhappiness is discouraged and will be corrected with kitten pictures.




Re: dump, drop database then merge/aggregate

2016-02-29 Thread lejeczek

On 29/02/16 15:42, Gary Smith wrote:

On 29/02/2016 15:30, lejeczek wrote:

On 28/02/16 20:50, lejeczek wrote:

fellow users, hopefully you experts too, could help...

...me to understand how and what the best practice
should be to dump a database, then drop it and merge the
dumps..
What I'd like to do is something probably many have done
and I wonder how it's done best.
A box will be dumping a database (maybe? tables if that's
better), then dropping (purging) the data, and on a
different system that dump will be inserted/aggregated
into the same database.
It reminds me of a kind of incremental backup except for
the fact that the source data will be dropped/purged on a
regular basis, but before a drop, a dump is taken which
will later be used to sort of reconstruct that same database.


How do you recommend doing it? I'm guessing the trickiest
bit might be this reconstruction part: how to merge dumps
safely, naturally while maintaining consistency &
integrity?
Actual syntax, as code examples usually are, would
be best.


many thanks.


I guess dropping tables is not really what I should
even consider - should I just be deleting everything from
the tables in order to remove data?
And if I was to use dumps of such a database (where data
was first cleansed, then some data was collected) to merge
data again, would it work and merge that newly collected
data with what's already in the database?
This sounds like a remarkably reliable way to ensure no 
data integrity. What exactly are you trying to achieve? 
Would replication be the magic word you're after?


I realize this all might look rather like a bird fiddling
with a worm instead of a lion going for the quick kill. I
replicate wherever I need to and can; here I have very little
control over one end.
On that end with little control there is one simple
database, whose data I'll need to remove on a regular
basis; before removing I'll be dumping, and I need to use
those dumps to add, merge, aggregate data into a database on
the other end, like:

today both databases are mirrored/identical
tonight the awkward end will dump, then remove all the data, then
collect some and again, dump then remove
and these dumps should reconstruct the database on the other
box.


Pointers on what to pay attention to, and how to test for
consistency & integrity, would be of great help.



Gary







Re: dump, drop database then merge/aggregate

2016-02-29 Thread Gary Smith

On 29/02/2016 15:30, lejeczek wrote:

On 28/02/16 20:50, lejeczek wrote:

fellow users, hopefully you experts too, could help...

...me to understand how and what the best practice should be to dump a
database, then drop it and merge the dumps..
What I'd like to do is something probably many have done and I wonder
how it's done best.
A box will be dumping a database (maybe? tables if that's better), then
dropping (purging) the data, and on a different system that dump
will be inserted/aggregated into the same database.
It reminds me of a kind of incremental backup, except for the fact that
the source data will be dropped/purged on a regular basis, but before a
drop, a dump is taken which will later be used to sort of reconstruct that
same database.


How do you recommend doing it? I'm guessing the trickiest bit might be this
reconstruction part: how to merge dumps safely, naturally while
maintaining consistency & integrity?

Actual syntax, as code examples usually are, would be best.

many thanks.


I guess dropping tables is not really what I should even consider -
should I just be deleting everything from the tables in order to remove data?
And if I was to use dumps of such a database (where data was first
cleansed, then some data was collected) to merge data again, would it
work and merge that newly collected data with what's already in the
database?
This sounds like a remarkably reliable way to ensure no data integrity. 
What exactly are you trying to achieve? Would replication be the magic 
word you're after?


Gary




Re: dump, drop database then merge/aggregate

2016-02-29 Thread lejeczek

On 28/02/16 20:50, lejeczek wrote:

fellow users, hopefully you experts too, could help...

...me to understand how and what the best practice
should be to dump a database, then drop it and merge the dumps..
What I'd like to do is something probably many have done
and I wonder how it's done best.
A box will be dumping a database (maybe? tables if that's
better), then dropping (purging) the data, and on a
different system that dump will be inserted/aggregated
into the same database.
It reminds me of a kind of incremental backup except for the
fact that the source data will be dropped/purged on a regular
basis, but before a drop, a dump is taken which will later be used
to sort of reconstruct that same database.


How do you recommend doing it? I'm guessing the trickiest bit
might be this reconstruction part: how to merge dumps safely,
naturally while maintaining consistency & integrity?
Actual syntax, as code examples usually are, would be
best.


many thanks.


I guess dropping tables is not really what I should even
consider - should I just be deleting everything from the tables
in order to remove data?
And if I was to use dumps of such a database (where data was
first cleansed, then some data was collected) to merge data
again, would it work and merge that newly collected data with
what's already in the database?






Re: can I just encrypt tables? what about the app?

2016-02-29 Thread lejeczek

On 29/02/16 14:38, Steven Siebert wrote:

Simple answer is no. What are you trying to accomplish?

I was hoping that with this new feature (Google's), where
mysql itself internally uses keys to encrypt/decrypt tables
or tablespaces, I could just secure the data, simply.
Chances are I don't quite get the concept: I believe I have
one table encrypted (running strings on it suggests so), yet I can
still query it and dump it as normal.
I understand it's a kind of database-file encryption,
protection against someone just grabbing the file and trying to use it
somewhere else, am I right?
If the above is the case, then from the php perspective nothing should
be different, it should be transparent, no?


many thanks

S

On Mon, Feb 29, 2016 at 8:31 AM, lejeczek 
> wrote:


hi everybody

a novice type of question - having php + mysql, can
one just encrypt tables (internally in mysql) and will
the php be fine?
If not, would it be easy to re-code the php to work with
this new, internal encryption?

thanks.









Re: can I just encrypt tables? what about the app?

2016-02-29 Thread Steven Siebert
Simple answer is no. What are you trying to accomplish?

S

On Mon, Feb 29, 2016 at 8:31 AM, lejeczek  wrote:

> hi everybody
>
> a novice type of question - having php + mysql, can one just encrypt
> tables (internally in mysql) and will the php be fine?
> If not, would it be easy to re-code the php to work with this new, internal
> encryption?
>
> thanks.
>
>
>


can I just encrypt tables? what about the app?

2016-02-29 Thread lejeczek

hi everybody

a novice type of question - having php + mysql, can one
just encrypt tables (internally in mysql) and will the php be fine?
If not, would it be easy to re-code the php to work with this
new, internal encryption?


thanks.




dump, drop database then merge/aggregate

2016-02-28 Thread lejeczek

fellow users, hopefully you experts too, could help...

...me to understand how and what the best practice
should be to dump a database, then drop it and merge the dumps..
What I'd like to do is something probably many have done and
I wonder how it's done best.
A box will be dumping a database (maybe? tables if that's
better), then dropping (purging) the data, and on a
different system that dump will be inserted/aggregated into
the same database.
It reminds me of a kind of incremental backup, except for the
fact that the source data will be dropped/purged on a regular
basis, but before a drop, a dump is taken which will later be used to
sort of reconstruct that same database.


How do you recommend doing it? I'm guessing the trickiest bit
might be this reconstruction part: how to merge dumps safely,
naturally while maintaining consistency & integrity?

Actual syntax, as code examples usually are, would be best.

many thanks.






Indexes strangeness

2016-02-24 Thread Chris Knipe
Hi All,

Can someone please fill me in as to what I am seeing here... I have two
identical tables, with identical indexes, holding different records.  Both
tables have +- 15m records in them...


mysql> EXPLAIN SELECT ArticleID, DateObtained, DateAccessed, TimesAccessed
FROM IDXa ORDER BY DateAccessed LIMIT 10;
+----+-------------+-------+-------+---------------+-----------------+---------+------+------+-------+
| id | select_type | table | type  | possible_keys | key             | key_len | ref  | rows | Extra |
+----+-------------+-------+-------+---------------+-----------------+---------+------+------+-------+
|  1 | SIMPLE      | IDXa  | index | NULL          | idxDateAccessed | 5       | NULL |   10 | NULL  |
+----+-------------+-------+-------+---------------+-----------------+---------+------+------+-------+
1 row in set (0,00 sec)

mysql> EXPLAIN SELECT ArticleID, DateObtained, DateAccessed, TimesAccessed
FROM IDXb ORDER BY DateAccessed LIMIT 10;
+----+-------------+-------+------+---------------+------+---------+------+----------+----------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows     | Extra          |
+----+-------------+-------+------+---------------+------+---------+------+----------+----------------+
|  1 | SIMPLE      | IDXb  | ALL  | NULL          | NULL | NULL    | NULL | 15004858 | Using filesort |
+----+-------------+-------+------+---------------+------+---------+------+----------+----------------+
1 row in set (0,00 sec)


Tables:



mysql> SHOW CREATE TABLE IDXa\G
*** 1. row ***
   Table: IDXa
Create Table: CREATE TABLE `IDXa` (
  `ArticleID` varchar(32) NOT NULL,
  `DateObtained` datetime NOT NULL,
  `DateAccessed` datetime NOT NULL,
  `TimesAccessed` int(5) unsigned NOT NULL,
  PRIMARY KEY (`ArticleID`),
  KEY `idxDateAccessed` (`DateAccessed`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
1 row in set (0,00 sec)

mysql> SHOW INDEXES FROM IDXa;
+-------+------------+-----------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name        | Seq_in_index | Column_name  | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+-------+------------+-----------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| IDXa  |          0 | PRIMARY         |            1 | ArticleID    | A         |    14086444 | NULL     | NULL   |      | BTREE      |         |               |
| IDXa  |          1 | idxDateAccessed |            1 | DateAccessed | A         |     1408644 | NULL     | NULL   |      | BTREE      |         |               |
+-------+------------+-----------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
2 rows in set (0,00 sec)

mysql> SHOW CREATE TABLE IDXb\G
*** 1. row ***
   Table: IDXb
Create Table: CREATE TABLE `IDXb` (
  `ArticleID` varchar(32) NOT NULL,
  `DateObtained` datetime NOT NULL,
  `DateAccessed` datetime NOT NULL,
  `TimesAccessed` int(5) unsigned NOT NULL,
  PRIMARY KEY (`ArticleID`),
  KEY `idxDateAccessed` (`DateAccessed`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
1 row in set (0,00 sec)

mysql> SHOW INDEXES FROM IDXb;
+-------+------------+-----------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name        | Seq_in_index | Column_name  | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+-------+------------+-----------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| IDXb  |          0 | PRIMARY         |            1 | ArticleID    | A         |    15007345 | NULL     | NULL   |      | BTREE      |         |               |
| IDXb  |          1 | idxDateAccessed |            1 | DateAccessed | A         |     1250612 | NULL     | NULL   |      | BTREE      |         |               |
+-------+------------+-----------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
2 rows in set (0,00 sec)


Thnx.


-- 

Regards,
Chris Knipe


Re: Mysql devel

2016-02-23 Thread Reindl Harald



Am 23.02.2016 um 13:12 schrieb Larry Martell:

On Tue, Feb 23, 2016 at 7:07 AM, Larry Martell  wrote:

On Tue, Feb 23, 2016 at 6:16 AM, Reindl Harald  wrote:


Am 23.02.2016 um 04:47 schrieb Larry Martell:


I am trying to install the mysql devel libs (so I can build MySQLdb).
This is on RHEL 6. It's failing as shown below. Can anyone help with
this?


you are mixing remi and other repos


I did this:

# yum shell --enablerepo=remi

remove php-mysql
install php-mysqlnd
run


Followed by this:

yum install mysql-devel
/usr/local/bin/pip2.7 install MySQL-python

Now the MySQL server is down and will not come up. It fails with this:

Fatal error: mysql.user table is damaged. Please run mysql_upgrade

It was running 5.1 before and now it's running 5.7.

But how can I run mysql_upgrade if I can't start the server?


Actually it was running 5.5 before:

Server version: 5.5.44 MySQL Community Server (GPL) by Remi


all your problems come from carelessly mixing repos and allowing updates
without considering the result - read how to deal with 3rd-party repos
from the very beginning


downgrade and re-consider how you are doing upgrades:
jumping from 5.5 to 5.7 and skipping 5.6 entirely is a bad idea
make sure that you don't mix mysql packages from different repos

and RTFM https://wiki.centos.org/de/AdditionalResources/Repositories/
Les RPM de Remi repository - (See http://rpms.famillecollet.com/) Remi 
Collet maintains latest version of MySQL and PHP (backports of fedora 
RPM). Suggest caution if using this repo due to replacement of core packages


Third Party Repositories
WARNING: These repositories are not provided nor supported by CentOS. 
The CentOS project has no control over these sites. Many have their own 
mailing lists, IRC channels, issue trackers, etc. for support issues 
with their packages.


NOTE: If you are considering using a 3rd Party Repository, then you 
should seriously consider how to prevent unintended 'updates' from these 
side archives from over-writing some core part of CentOS. One approach 
is to only enable these archives from time to time, and generally leave 
them disabled. See: man yum


Another approach is to use the exclude= and includepkgs= options on a 
per sub-archive basis, in the matching .conf file found in 
/etc/yum.repos.d/ See: man yum.conf


The yum Priorities plug-in can prevent a 3rd party repository from 
replacing base packages, or prevent base/updates from replacing a 3rd 
party package.






Re: Mysql devel

2016-02-23 Thread Larry Martell
On Tue, Feb 23, 2016 at 7:07 AM, Larry Martell  wrote:
> On Tue, Feb 23, 2016 at 6:16 AM, Reindl Harald  wrote:
>>
>>
>> Am 23.02.2016 um 04:47 schrieb Larry Martell:
>>>
>>> I am trying to install the mysql devel libs (so I can build MySQLdb).
>>> This is on RHEL 6. It's failing as shown below. Can anyone help with
>>> this?
>>
>>
>> you are mixing remi and other repos
>
> I did this:
>
> # yum shell --enablerepo=remi
>> remove php-mysql
>> install php-mysqlnd
>> run
>
> Followed by this:
>
> yum install mysql-devel
> /usr/local/bin/pip2.7 install MySQL-python
>
> Now the MySQL server is down and will not come up. It fails with this:
>
> Fatal error: mysql.user table is damaged. Please run mysql_upgrade
>
> It was running 5.1 before and now it's running 5.7.
>
> But how can I run mysql_upgrade if I can't start the server?

Actually it was running 5.5 before:

Server version: 5.5.44 MySQL Community Server (GPL) by Remi

>
>>
>>> Package mysql-community-devel.x86_64 0:5.7.11-1.el6 will be installed
>>> for package: mysql-server-5.5.44-1.el6.remi.x86_64
>>
>>
>>




Re: Mysql devel

2016-02-23 Thread Larry Martell
On Tue, Feb 23, 2016 at 6:16 AM, Reindl Harald  wrote:
>
>
> Am 23.02.2016 um 04:47 schrieb Larry Martell:
>>
>> I am trying to install the mysql devel libs (so I can build MySQLdb).
>> This is on RHEL 6. It's failing as shown below. Can anyone help with
>> this?
>
>
> you are mixing remi and other repos

I did this:

# yum shell --enablerepo=remi
> remove php-mysql
> install php-mysqlnd
> run

Followed by this:

yum install mysql-devel
/usr/local/bin/pip2.7 install MySQL-python

Now the MySQL server is down and will not come up. It fails with this:

Fatal error: mysql.user table is damaged. Please run mysql_upgrade

It was running 5.1 before and now it's running 5.7.

But how can I run mysql_upgrade if I can't start the server?

>
>> Package mysql-community-devel.x86_64 0:5.7.11-1.el6 will be installed
>> for package: mysql-server-5.5.44-1.el6.remi.x86_64
>
>
>




Re: Mysql devel

2016-02-23 Thread Reindl Harald



Am 23.02.2016 um 04:47 schrieb Larry Martell:

I am trying to install the mysql devel libs (so I can build MySQLdb).
This is on RHEL 6. It's failing as shown below. Can anyone help with
this?


you are mixing remi and other repos


Package mysql-community-devel.x86_64 0:5.7.11-1.el6 will be installed
for package: mysql-server-5.5.44-1.el6.remi.x86_64







ANN: DAC for MySQL 3.0.3 is out!

2016-02-23 Thread Aleksander Andreev
MicroOLAP Direct Access Components for MySQL and Delphi/C++ Builder (also
known as MySQLDAC) is a Borland Delphi/C++Builder component set for direct
connect to MySQL database server. DAC for MySQL allows you to create
Delphi/C++Builder applications without BDE, ODBC, ADO and without
libmysql.dll.

What’s new in v3.0.3:
This release includes Prepare feature based on the MySQL binary protocol,
packages for work with FastReport, improvements and bug fixes.

Full changelog:
[!] Prepare and UnPrepare methods with MySQL binary protocol added to the
TMySQLTable, TMySQLQuery and TMySQLStoredProc
[!] Packages for integration into FastReport components added
[!] More informative message introduced for TMySSHDatabase connection errors
[!] TMySSHDatabase.AddSSHKeyToSystemCache property added
[!] TMySSHDatabase.ShowPLinkConsole property added
[+] Options.AutoPrepare property added to the TMySQLTable, TMySQLQuery and
TMySQLStoredProc
[*] A little bit of optimization for opening table with specified IndexName
property added
[-]"Error occurs when opening table with an alias and ENUM fields" bug fixed
[-] "TMySQLDataSet returns data in incorrect characterSet if use
FetchOnDemand property" bug fixed
[-] "Filtering in Mobile target platforms not working" bug fixed
[-] "TMySQLTable.GetTableEngine getting incorrect result if two tables of
the same name exist in separate schemas" bug fixed
[-] "Incorrect error message if open TMySQLDataSet with the FetchOnDemand
property" bug fixed
[-] Memory leaks when using FetchOnDemand with RequestLive eliminated
[-] "Error occurs when calling post method in TMySQLDataSet with enabled
FetchOnDemand and RequestLive properties" bug fixed
[-] "Record with only null value NOT NULL field with default value not
inserting" bug fixed
[-] "Raise exception when post null value to the NOT NULL field with
default value" bug fixed

You're welcome to download the DAC for MySQL v3.0.3 right now at:
http://microolap.com/products/connectivity/mysqldac/download/ ,
or login to your private area on our site at
http://microolap.com/my/downloads/

Please don't hesitate to ask any questions or report bugs with our Support
Ticketing system available at http://www.microolap.com/support/

-- 
Aleksandr Andreev
Developer
MicroOLAP Technologies LTD
aleksandr.andr...@gf.microolap.com
http://microolap.com


Mysql devel

2016-02-22 Thread Larry Martell
I am trying to install the mysql devel libs (so I can build MySQLdb).
This is on RHEL 6. It's failing as shown below. Can anyone help with
this?

# yum install mysql-devel
Loaded plugins: dellsysid, product-id, refresh-packagekit, security,
subscription-manager
Setting up Install Process
Package mysql-devel is obsoleted by mysql-community-devel, trying to
install mysql-community-devel-5.7.11-1.el6.x86_64 instead
Resolving Dependencies
--> Running transaction check
---> Package mysql-community-devel.x86_64 0:5.7.11-1.el6 will be installed
--> Processing Dependency: mysql-community-libs(x86-64) = 5.7.11-1.el6
for package: mysql-community-devel-5.7.11-1.el6.x86_64
--> Processing Dependency: libmysqlclient.so.20()(64bit) for package:
mysql-community-devel-5.7.11-1.el6.x86_64
--> Running transaction check
---> Package mysql-community-libs.x86_64 0:5.7.11-1.el6 will be obsoleting
--> Processing Dependency: mysql-community-common(x86-64) =
5.7.11-1.el6 for package: mysql-community-libs-5.7.11-1.el6.x86_64
---> Package mysql-libs.x86_64 0:5.5.44-1.el6.remi will be obsoleted
--> Processing Dependency: libmysqlclient.so.18()(64bit) for package:
php-mysql-5.4.43-1.el6.remi.x86_64
--> Processing Dependency:
libmysqlclient.so.18(libmysqlclient_16)(64bit) for package:
php-mysql-5.4.43-1.el6.remi.x86_64
--> Processing Dependency:
libmysqlclient.so.18(libmysqlclient_18)(64bit) for package:
php-mysql-5.4.43-1.el6.remi.x86_64
--> Processing Dependency: real-mysql-libs(x86-64) = 5.5.44-1.el6.remi
for package: mysql-server-5.5.44-1.el6.remi.x86_64
--> Processing Dependency: real-mysql-libs(x86-64) = 5.5.44-1.el6.remi
for package: mysql-5.5.44-1.el6.remi.x86_64
--> Running transaction check
---> Package mysql.x86_64 0:5.5.44-1.el6.remi will be obsoleted
---> Package mysql-community-client.x86_64 0:5.7.11-1.el6 will be obsoleting
---> Package mysql-community-common.x86_64 0:5.7.11-1.el6 will be installed
---> Package mysql-community-server.x86_64 0:5.7.11-1.el6 will be obsoleting
---> Package mysql-libs.x86_64 0:5.5.44-1.el6.remi will be obsoleted
--> Processing Dependency: libmysqlclient.so.18()(64bit) for package:
php-mysql-5.4.43-1.el6.remi.x86_64
--> Processing Dependency:
libmysqlclient.so.18(libmysqlclient_16)(64bit) for package:
php-mysql-5.4.43-1.el6.remi.x86_64
--> Processing Dependency:
libmysqlclient.so.18(libmysqlclient_18)(64bit) for package:
php-mysql-5.4.43-1.el6.remi.x86_64
---> Package mysql-server.x86_64 0:5.5.44-1.el6.remi will be obsoleted
--> Finished Dependency Resolution
Error: Package: php-mysql-5.4.43-1.el6.remi.x86_64 (@remi)
   Requires: libmysqlclient.so.18(libmysqlclient_16)(64bit)
   Removing: mysql-libs-5.5.44-1.el6.remi.x86_64 (@remi)
   libmysqlclient.so.18(libmysqlclient_16)(64bit)
   Obsoleted By: mysql-community-libs-5.7.11-1.el6.x86_64
(mysql57-community)
   Not found
Error: Package: php-mysql-5.4.43-1.el6.remi.x86_64 (@remi)
   Requires: libmysqlclient.so.18()(64bit)
   Removing: mysql-libs-5.5.44-1.el6.remi.x86_64 (@remi)
   libmysqlclient.so.18()(64bit)
   Obsoleted By: mysql-community-libs-5.7.11-1.el6.x86_64
(mysql57-community)
   Not found
Error: Package: php-mysql-5.4.43-1.el6.remi.x86_64 (@remi)
   Requires: libmysqlclient.so.18(libmysqlclient_18)(64bit)
   Removing: mysql-libs-5.5.44-1.el6.remi.x86_64 (@remi)
   libmysqlclient.so.18(libmysqlclient_18)(64bit)
   Obsoleted By: mysql-community-libs-5.7.11-1.el6.x86_64
(mysql57-community)
   Not found
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest




Re: SAP Customers DB

2016-02-12 Thread Reindl Harald



Am 12.02.2016 um 16:23 schrieb MELISSA White:

Would you be interested in acquiring SAP Users contact information in excel 
sheet for unlimited marketing usage?


creep away, spammer, or your company domain ends up in a sender filter
(envelope and To headers) - who do you think you are that you are in the
position to offer "unlimited marketing usage" to 3rd parties?






RE: Select Earliest Related Row

2016-02-10 Thread Pavel Zimahorau
SELECT class_name,
  GROUP_CONCAT(DISTINCT cl_date
    ORDER BY cl_date ASC SEPARATOR ', '),
  (SELECT MIN(cl_date) FROM CLASS_DATES WHERE item_id = c.item_id AND
   cl_date > NOW())
FROM CLASSES c
JOIN CLASS_DATES cd ON (c.item_id = cd.item_id)
GROUP BY class_name, c.item_id

I did not check it in the DB and it may have some parse errors.

But it should work.
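
Mapped onto your actual table names it would be roughly this (untested
sketch, built only from the columns mentioned in your post):

  SELECT i.item_id, i.class_name,
         MIN(ic.cl_date) AS first_class,
         GROUP_CONCAT(DISTINCT ic.cl_date ORDER BY ic.cl_date ASC
                      SEPARATOR ', ') AS all_dates
  FROM tl_items i
  JOIN tl_items_classes ic ON ic.item_id = i.item_id
  WHERE i.active = 1 AND ic.cl_cancelled IS NULL
  GROUP BY i.item_id, i.class_name
  ORDER BY first_class ASC;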

Best Regards,
Pavel Zimahorau

-Original Message-
From: Don Wieland [mailto:d...@pointmade.net] 
Sent: Tuesday, February 09, 2016 6:57 PM
To: MySql 
Subject: Select Earliest Related Row

I have two tables where I am adding CLASSES and CLASS_DATES for people to
register for.

Table Name = tl_items (Parent)
item_id
class_name


table_name = tl_items_classes (Children)
class_date_id
item_id
cl_date

"tl_items" will have 0 to many "tl_items_classes" rows, using "item_id" as the
KEY field.

I am compiling a SELECT query to search the dates of the classes
(tl_items_classes rows), but I need these two things:

1) Only show the EARLIEST "cl_date" in my returned list (i.e. if a class has
dates for Jan 2,3,4,5,6 2016, only show Jan 2's row)

2) In that displayed EARLIEST Class Date row, have a column that displays the
complete list of related class dates in ASC order, delineated by a COMMA
(GROUP_CONCAT()???).

Here is a query I have started off with which shows all the dates fine. Just
want to fine-tune it.

SELECT ic.*, i.*, DATE_FORMAT(ic.cl_date, "%M %Y") AS mo_label FROM 
tl_items_classes ic 
LEFT JOIN tl_items i ON ic.item_id = i.item_id 
WHERE i.active = 1 AND ic.cl_cancelled IS NULL ORDER BY ic.cl_date ASC;

Any help would be appreciated.

Don Wieland
d...@pointmade.net
http://www.pointmade.net
https://www.facebook.com/pointmade.band








Select Earliest Related Row

2016-02-09 Thread Don Wieland
I have two tables where I am adding CLASSES and CLASS_DATES for people to
register for.

Table Name = tl_items (Parent)
item_id
class_name


table_name = tl_items_classes (Children)
class_date_id
item_id
cl_date

"tl_items" will have 0 to many "tl_items_classes" rows, using "item_id" as the
KEY field.

I am compiling a SELECT query to search the dates of the classes
(tl_items_classes rows), but I need these two things:

1) Only show the EARLIEST "cl_date" in my returned list (i.e. if a class has
dates for Jan 2,3,4,5,6 2016, only show Jan 2's row)

2) In that displayed EARLIEST Class Date row, have a column that displays the
complete list of related class dates in ASC order, delineated by a COMMA
(GROUP_CONCAT()???).

Here is a query I have started off with which shows all the dates fine. Just
want to fine-tune it.

SELECT ic.*, i.*, DATE_FORMAT(ic.cl_date, "%M %Y") AS mo_label FROM 
tl_items_classes ic 
LEFT JOIN tl_items i ON ic.item_id = i.item_id 
WHERE i.active = 1 AND ic.cl_cancelled IS NULL ORDER BY ic.cl_date ASC;

Any help would be appreciated.

Don Wieland
d...@pointmade.net
http://www.pointmade.net
https://www.facebook.com/pointmade.band






MySQL Community Server 5.7.11 has been released

2016-02-05 Thread Bjorn Munch
Dear MySQL users,

MySQL Server 5.7.11, a new version of the popular Open Source
Database Management System, has been released. MySQL 5.7.11 is
recommended for use on production systems.

For an overview of what's new in MySQL 5.7, please see

http://dev.mysql.com/doc/refman/5.7/en/mysql-nutshell.html

For information on installing MySQL 5.7.11 on new servers, please see
the MySQL installation documentation at

  http://dev.mysql.com/doc/refman/5.7/en/installing.html

MySQL Server 5.7.11 is available in source and binary form for a number of
platforms from the "Development Releases" selection of our download
pages at

  http://dev.mysql.com/downloads/mysql/

MySQL Server 5.7.11 is also available from our repository for Linux
platforms, go here for details:

  http://dev.mysql.com/downloads/repo/

Windows packages are available via the Installer for Windows or .ZIP
(no-install) packages for more advanced needs. The point and click
configuration wizards and all MySQL products are available in the
unified Installer for Windows:

  http://dev.mysql.com/downloads/installer/

5.7.11 also comes with a web installer as an alternative to the full
installer.

The web installer doesn't come bundled with any actual products
and instead relies on download-on-demand to fetch only the
products you choose to install. This makes the initial download
much smaller but increases install time as the individual products
will need to be downloaded.

We welcome and appreciate your feedback, bug reports, bug fixes,
patches, etc.:

  http://bugs.mysql.com/report.php

The following section lists the changes in the MySQL 5.7 since
the release of MySQL 5.7.10. It may also be viewed
online at

http://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-11.html

Enjoy!

On behalf of MySQL Release Engineering,
- Bjorn Munch

==
Changes in MySQL 5.7.11 (2016-02-05)


 * Compilation Notes

 * Data Type Notes

 * Installation Notes

 * Packaging Notes

 * Security Notes

 * Functionality Added or Changed

 * Bugs Fixed

   Compilation Notes

 * A value of system is now permitted for the WITH_BOOST
   CMake option. If this option is not set or is set to
   system, it is assumed that the correct version of Boost
   is installed on the compilation host in the standard
   location. In this case, the installed version of Boost is
   used rather than any version included with a MySQL source
   distribution. (Bug #4313)

 * In addition to the mysql-5.7.11.tar.gz source tarball,
   another tarball named mysql-boost-5.7.11.tar.gz is
   provided. The new tarball contains everything in the
   first tarball, but also contains all the required Boost
   header files in a subdirectory named boost. This is for
   the benefit of those who do not have the correct Boost
   version installed and do not wish to or cannot download
   it. To build from this source distribution, add
   -DWITH_BOOST=boost to the CMake command line.

   Data Type Notes

 * Bit functions and operators comprise BIT_COUNT(),
   BIT_AND(), BIT_OR(), BIT_XOR(), &, |, ^, ~, <<, and >>.
   Currently, bit functions and operators require BIGINT
   (64-bit integer) arguments and return BIGINT values, so
   they have a maximum range of 64 bits. Arguments of other
   types are converted to BIGINT and truncation might occur.
   A planned extension for MySQL 5.8 is to permit bit
   functions and operators to take binary string type
   arguments (BINARY, VARBINARY, and the BLOB types),
   enabling them to take arguments and produce return values
   larger than 64 bits. Consequently, bit operations on
   binary arguments in MySQL 5.7 might produce different
   results in MySQL 5.8. To provide advance notice about
   this potential change in behavior, the server now
   produces warnings for bit operations for which binary
   arguments will not be converted to integer in MySQL 5.8.
   These warnings afford an opportunity to rewrite affected
   statements. To explicitly produce MySQL 5.7 behavior in a
   way that will not change after an upgrade to 5.8, cast
   bit-operation binary arguments to convert them to
   integer. For more information and examples, see Bit
   Functions and Operators
   (http://dev.mysql.com/doc/refman/5.7/en/bit-functions.html).

   Installation Notes

 * Previously, mysqld --initialize required the data
   directory to not exist or, if it existed, to be empty.
   Now an existing data directory is permitted to be
   nonempty if every entry either has a name that begins
   with a period (.) or is named using an --ignore-db-dir
   option. (Bug #79250, Bug #22213873)

   Packaging Notes

 * Packaging support for Ubuntu 15.10 was added. (Bug
   #79104, Bug #22147191)

   Security Notes

 * 

MySQL Community Server 5.5.48 has been released

2016-02-05 Thread Gipson Pulla
Dear MySQL users,

MySQL Server 5.5.48 is a new version of the 5.5 production release
of the world's most popular open source database. MySQL 5.5.48 is
recommended for use on production systems.

MySQL 5.5 includes several high-impact enhancements to improve the
performance and scalability of the MySQL Database, taking advantage of
the latest multi-CPU and multi-core hardware and operating systems. In
addition, with release 5.5, InnoDB is now the default storage engine for
the MySQL Database, delivering ACID transactions, referential integrity
and crash recovery by default.

MySQL 5.5 also provides a number of additional enhancements including:

  - Significantly improved performance on Windows, with various
Windows specific features and improvements
  - Higher availability, with new semi-synchronous replication and
Replication Heartbeat
  - Improved usability, with Improved index and table partitioning,
SIGNAL/RESIGNAL support and enhanced diagnostics, including a new
Performance Schema monitoring capability.

For a more complete look at what's new in MySQL 5.5, please see the
following resources:

MySQL 5.5 is GA, Interview with Tomas Ulin:

  http://dev.mysql.com/tech-resources/interviews/thomas-ulin-mysql-55.html

Documentation:

  http://dev.mysql.com/doc/refman/5.5/en/mysql-nutshell.html

If you are running a MySQL production level system, we would like to
direct your attention to MySQL Enterprise Edition, which includes the
most comprehensive set of MySQL production, backup, monitoring,
modeling, development, and administration tools so businesses can
achieve the highest levels of MySQL performance, security and uptime.

  http://mysql.com/products/enterprise/

For information on installing MySQL 5.5.48 on new servers, please see
the MySQL installation documentation at

  http://dev.mysql.com/doc/refman/5.5/en/installing.html

For upgrading from previous MySQL releases, please see the important
upgrade considerations at:

  http://dev.mysql.com/doc/refman/5.5/en/upgrading.html

MySQL Database 5.5.48 is available in source and binary form for a
number of platforms from our download pages at:

  http://dev.mysql.com/downloads/mysql/

The following section lists the changes in the MySQL source code since
the previous released version of MySQL 5.5. It may also be viewed
online at:

  http://dev.mysql.com/doc/relnotes/mysql/5.5/en/news-5-5-48.html

Enjoy!

Changes in MySQL 5.5.48 (2016-02-05)

 * Functionality Added or Changed

 * Bugs Fixed

   Functionality Added or Changed

 * yaSSL was upgraded to version 2.3.9. This upgrade
   corrects an issue in which yaSSL handled only cases of
   zero or one leading zeros for the key agreement instead
   of potentially any number, which in rare cases could
   cause connections to fail when using DHE cipher suites.
   (Bug #22361038)

 * The Valgrind function signature in
   mysql-test/valgrind.supp was upgraded for Valgrind 3.11.
   (Bug #22214867)

   Bugs Fixed

 * Replication: When DML invokes a trigger or a stored
   function that inserts into an AUTO_INCREMENT column, that
   DML has to be marked as an unsafe statement. If the
   tables are locked in the transaction prior to the DML
   statement (for example by using LOCK TABLES), then the
   DML statement was not being marked as an unsafe
   statement. The fix ensures that such DML statements are
   marked correctly as unsafe. (Bug #17047208)

 * Replication: DROP TABLE statements are regenerated by the
   server before being written to the binary log. If a table
   or database name contained a non-regular character, such
   as non-latin characters, the regenerated statement was
   using the wrong name, breaking replication. The fix
   ensures that in such a case the regenerated name is
   correctly converted back to the original character set.
   Also during work on this bug, it was discovered that in
   the rare case that a table or database name contained 64
   characters, the server was throwing an assert(M_TBLLEN <
   128) assertion. The assertion has been corrected to be
   less than or equal 128. (Bug #77249, Bug #21205695)
   References: See also Bug #78036, Bug #22261585, Bug
   #21619371.

 * Data corruption could occur if a stored procedure had a
   variable declared as TEXT or BLOB and data was copied to
   that variable using SELECT ... INTO syntax from a TEXT or
   BLOB column. (Bug #22232332)

 * CREATE TEMPORARY TABLE .. SELECT statements involving BIT
   columns that resulted in a column type redefinition could
   cause a server exit or an improperly created table. (Bug
   #21902059)

 * Added Microsoft Visual Studio 2015 support. Changes
   include using the native (added in VS 2015) timespec
   library if it exists, renamed lfind/lsearch and
   timezone/tzname to avoid redefinition problems, 

MySQL Community Server 5.6.29 has been released

2016-02-05 Thread Hery Ramilison

Dear MySQL users,

MySQL Server 5.6.29, a new version of the popular Open Source
Database Management System, has been released. MySQL 5.6.29 is
recommended for use on production systems.

For an overview of what's new in MySQL 5.6, please see

http://dev.mysql.com/doc/refman/5.6/en/mysql-nutshell.html

 Starting with 5.6.11, Microsoft Windows packages for MySQL 5.6
 are available both as a "full" installer and as a "web" installer.
 The full installer is significantly larger and comes bundled with
 the latest software releases available. This bundle makes it easy
 to download and configure a full server and development suite.

 The web installer doesn't come bundled with any actual products
 and instead relies on download-on-demand to fetch only the
 products you choose to install. This makes the initial download
 much smaller but increases install time as the individual products
 will need to be downloaded.

For information on installing MySQL 5.6.29 on new servers or upgrading
to MySQL 5.6.29 from previous MySQL releases, please see

  http://dev.mysql.com/doc/refman/5.6/en/installing.html

MySQL Server is available in source and binary form for a number of
platforms from our download pages at

  http://dev.mysql.com/downloads/

Not all mirror sites may be up to date at this point in time, so if you
can't find this version on some mirror, please try again later or choose
another download site.

We welcome and appreciate your feedback, bug reports, bug fixes,
patches, etc:

  https://wikis.oracle.com/display/mysql/Contributing

The following section lists the changes in the MySQL 5.6 since
the release of MySQL 5.6.28. It may also be viewed
online at

http://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-29.html

Enjoy!

Changes in MySQL 5.6.29 (2016-02-05)

   Packaging Notes

 * Packaging support for Ubuntu 15.10 was added. (Bug
   #79104, Bug #22147191)

   Security Notes

 * The linked OpenSSL library for the MySQL Commercial
   Server has been updated from version 1.0.1p to version
   1.0.1q. Issues fixed in the new version are described at
   http://www.openssl.org/news/vulnerabilities.html.
   This change does not affect the Oracle-produced MySQL
   Community build of MySQL Server, which uses the yaSSL
   library instead. (Bug #22348181)

   Functionality Added or Changed

 * InnoDB: A new InnoDB configuration option, innodb_tmpdir,
   allows you to configure a separate temporary file
   directory for online ALTER TABLE operations. This option
   was introduced to help avoid tmpdir overflows that could
   occur as a result of large temporary files created during
   online ALTER TABLE operations. innodb_tmpdir is a SESSION
   variable and can be configured dynamically using a SET
   statement. (Bug #19183565)
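
    For example (the directory and table/index names below are
    illustrative only):

      SET SESSION innodb_tmpdir = '/data/innodb_alter_tmp';
      ALTER TABLE big_table ADD INDEX idx_created (created_at),
        ALGORITHM=INPLACE;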

 * yaSSL was upgraded to version 2.3.9. This upgrade
   corrects an issue in which yaSSL handled only cases of
   zero or one leading zeros for the key agreement instead
   of potentially any number, which in rare cases could
   cause connections to fail when using DHE cipher suites.
   (Bug #22361038)

 * The Valgrind function signature in
   mysql-test/valgrind.supp was upgraded for Valgrind 3.11.
   (Bug #22214867)

   Bugs Fixed

 * InnoDB: A small InnoDB buffer pool size with a large
   innodb_stats_persistent_sample_pages setting resulted in
   a Difficult to find free blocks in the buffer pool
   warning. (Bug #22385442)

 * InnoDB: Starting the server with an empty
   innodb_data_home_dir entry in the configuration file
   caused InnoDB to look for the buffer pool file in the
   root directory, resulting in a startup error. (Bug
   #22016556)

 * InnoDB: A full-text query run under high concurrency
   caused a server exit due to an invalid memory access.
   (Bug #21922532)

 * InnoDB: With a large innodb_sort_buffer_size setting,
   adding an index on an empty table performed more slowly
   than expected. (Bug #21762319, Bug #78262)

 * Replication: When DML invokes a trigger or a stored
   function that inserts into an AUTO_INCREMENT column, that
   DML has to be marked as an unsafe statement. If the
   tables are locked in the transaction prior to the DML
   statement (for example by using LOCK TABLES), then the
   DML statement was not being marked as an unsafe
   statement. The fix ensures that such DML statements are
   marked correctly as unsafe. (Bug #17047208)
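
    The scenario described is along these lines (table and trigger
    names are only a sketch); with statement-based binary logging,
    the final INSERT below is the statement that should now be
    flagged as unsafe:

      CREATE TABLE audit_log (id INT AUTO_INCREMENT PRIMARY KEY,
                              note VARCHAR(64));
      CREATE TABLE orders (order_no INT);
      CREATE TRIGGER orders_ai AFTER INSERT ON orders
        FOR EACH ROW INSERT INTO audit_log (note)
                     VALUES ('order inserted');

      LOCK TABLES orders WRITE, audit_log WRITE;
      INSERT INTO orders VALUES (42);  -- was not marked unsafe before the fix
      UNLOCK TABLES;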

 * Replication: As part of the fix for Bug #16290902, when
   writing a DROP TEMPORARY TABLE IF EXISTS query into the
   binary log, the query is no longer preceded by a USE `db`
   statement. Instead the query uses a fully qualified table
   name, for example DROP TEMPORARY TABLE IF EXISTS
   `db`.`t1`;. This changed the application of
   

Query Statistics...

2016-02-04 Thread Chris Knipe
Hi All,

Perhaps a bit of a trivial question, but in terms of query statistics (i.e.
how many SELECT / INSERT / DELETE / etc. queries have been run against the
server)...

When you take an INSERT ... ON DUPLICATE KEY UPDATE ...

Under the two conditions (i.e. either INSERT, or UPDATE if the record
already exists), how is this query logged in the statistics?

When the ON DUPLICATE KEY UPDATE runs (i.e. it's updating a record), is it
still logged as an INSERT query, or is it logged as an UPDATE query?
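
One way to check empirically is to watch the session statement counters
around a test statement (the table t below is hypothetical, with id as the
primary key):

SHOW SESSION STATUS LIKE 'Com_insert';
SHOW SESSION STATUS LIKE 'Com_update';

INSERT INTO t (id, cnt) VALUES (1, 1)
  ON DUPLICATE KEY UPDATE cnt = cnt + 1;  -- run once to insert, again to update

SHOW SESSION STATUS LIKE 'Com_insert';
SHOW SESSION STATUS LIKE 'Com_update';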

Thnx.



-- 

Regards,
Chris Knipe


Re: my.cnf authentication

2016-01-29 Thread Johan De Meersman
- Original Message -
> From: "Harrie Robins" 
> Subject: my.cnf authentication
> 
> mysqldump --defaults-file  dbase > c:\sql\dbase.sql 2>> c:\log.tct

Might just be a typo in your mail, but you'll need to actually pass the 
defaults-file, too: --defaults-file=c:\sql\dump.cnf .

I think there may be another typo somewhere, too, as it seems to think that 
lts-file is the user you're passing. I'm wondering if you haven't accidentally 
put only a single - in front of defaults-file.


> 
> 
> My log shows:
> 
> mysqldump: Got error: 1045: Access denied for user
> 'lts-file=c:\sql\dump.cnf'@'localhost' (using password: NO) when trying to
> connect
> 
> It looks like credentials are not filled in!?
> 
> Regards,
> 
> 
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql

-- 
Unhappiness is discouraged and will be corrected with kitten pictures.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: my.cnf authentication

2016-01-29 Thread Harrie Robins
Thanks for the reply - that was indeed a typo. I resolved this by making these 
changes:

mysqldump --defaults-extra-file="c:\sql\dump.cnf" dname > c:\loc

with my cnf containing:

[client]
user = user
password = pass

Looks like I misplaced the quotes ("") and/or the [client] / [mysqldump] group in the cnf.

Thanks



-Original Message-
From: Johan De Meersman [mailto:vegiv...@tuxera.be] 
Sent: vrijdag 29 januari 2016 15:06
To: Harrie Robins 
Cc: MySql 
Subject: Re: my.cnf authentication

- Original Message -
> From: "Harrie Robins" 
> Subject: my.cnf authentication
> 
> mysqldump --defaults-file  dbase > c:\sql\dbase.sql 2>> c:\log.tct

Might just be a typo in your mail, but you'll need to actually pass the 
defaults-file, too: --defaults-file=c:\sql\dump.cnf .

I think there may be another typo somewhere, too, as it seems to think that 
lts-file is the user you're passing. I'm wondering if you haven't accidentally 
put only a single - in front of defaults-file.


> 
> 
> My log shows:
> 
> mysqldump: Got error: 1045: Access denied for user 
> 'lts-file=c:\sql\dump.cnf'@'localhost' (using password: NO) when 
> trying to connect
> 
> It looks like credentials are not filled in!?
> 
> Regards,
> 
> 
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql

--
Unhappiness is discouraged and will be corrected with kitten pictures.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



my.cnf authentication

2016-01-29 Thread Harrie Robins
I'm running a mysqldump on Windows and I'm mailing the 2>> output. Right now I get 
this annoying 'insecure' warning that pollutes my log, so I figured I'd use 
--defaults-file and set:

In c:\sql\dump.cnf

[mysqldump]
user = myuser
password = pass

my line looks like this

mysqldump --defaults-file  dbase > c:\sql\dbase.sql 2>> c:\log.tct


My log shows:

mysqldump: Got error: 1045: Access denied for user 
'lts-file=c:\sql\dump.cnf'@'localhost' (using password: NO) when trying to 
connect

It looks like credentials are not filled in!?

Regards,



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: using alias in where clause

2016-01-29 Thread shawn l.green



On 1/28/2016 6:30 PM, Larry Martell wrote:

On Thu, Jan 28, 2016 at 5:45 PM, shawn l.green  wrote:



On 1/28/2016 3:32 PM, Larry Martell wrote:


On Thu, Jan 28, 2016 at 2:13 PM, shawn l.green 
wrote:




On 1/28/2016 1:14 PM, Larry Martell wrote:



On Tue, Jan 26, 2016 at 8:40 AM, Hal.sz S.ndor  wrote:



2016/01/25 19:16 ... Larry Martell:




SELECT IFNULL(f_tag_bottom,
IFNULL(f_tag_bottom_major_axis,
 IFNULL(f_tag_bottom_minor_axis,
  IFNULL(f_tag_ch_x_bottom,
   IFNULL(f_tag_ch_y_bottom, NULL) as ftag,
   STDDEV(ch_x_top)





Of course, this isn't your real problem, but you could use COALESCE
instead
of all those IFNULLs (and you don't need the last one):
   SELECT COALESCE(f_tag_bottom, f_tag_bottom_major_axis,
f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom) as ftag,
   STDDEV(ch_x_top)
   

As Johnny Withers points out, you may repeat the expression in the
WHERE-clause:
   WHERE COALESCE(f_tag_bottom, f_tag_bottom_major_axis,
f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom) =
'E-CD7'
If really only one of those is not NULL, it is equivalent to this:
   'E-CD7' IN (f_tag_bottom, f_tag_bottom_major_axis,
f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom)




Many thanks to Hal.sz and Johnny - I had forgotten about coalesce and
I didn't know I could use that in a where clause. This worked great
for the requirement I had, but of course, once that was implemented my
client changed the requirements. Now they want to know which of the 5
f_tag_* columns was matched. Not sure how I'll do that. Probably need
another query.


One option to consider is to add another column to the query with a CASE
similar to this...

SELECT
, ... original fields ...
, CASE
WHEN f_tag_bottom THEN 'f_tag_bottom'
WHEN f_tag_bottom_major_axis THEN 'f_tag_bottom_major_axis'
... repeat for the rest of the fields to test ...
ELSE 'none'
END as match_flag
FROM ...

Technically, the term in the WHERE clause should prevent a 'none' result
but
I put it there to help future-proof the code.



Won't that find the first one of the f_tags that is not null, but not
necessarily the one that was matched by the where clause?



I slightly cheated in my example.

My CASE...END was listing terms in the same order as the COALESCE() function
you were using in the WHERE clause. The cheat was that only a non-null value
could be TRUE. To be more accurate, I should have used
... WHEN f_tag_bottom IS NOT NULL THEN ...
That way you end up with a true boolean check within the CASE decision tree.

As the COALESCE() is testing its terms in the same sequence as the
CASE...END, there should be no difference between the two checks.  But, that
also adds to the maintenance cost of this query. If you should change the
order of the f_tag checks in the COALESCE() function, you would need to
change the CASE...END to the same sequence.


Yes, I see that, but does the case only look at the filtered rows? For
example, lets say there's this data:

row 1: f_tag_bottom = "ABC"
row 2: f_tag_bottom_major_axis = "XYZ"

and my where clause has this:

WHERE COALESCE(f_tag_bottom, f_tag_bottom_major_axis,
f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom) = 'XYZ'

won't the CASE pick up row 1? Whereas I want it to pick up row 2.



Yes it would.

Just modify the check in the case from "IS NOT NULL" to "='XYZ'" to make 
both functions (CASE and COALESCE) find the same field value in the same 
row at the same time.
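
Putting the pieces together, a sketch of the full query (using the column
names and the 'E-CD7' filter from the original post, and assuming, as that
query already does, that sql_mode allows selecting the non-aggregated ftag
and match_flag columns alongside the GROUP BY):

SELECT COALESCE(f_tag_bottom, f_tag_bottom_major_axis,
                f_tag_bottom_minor_axis, f_tag_ch_x_bottom,
                f_tag_ch_y_bottom) AS ftag,
       CASE
         WHEN f_tag_bottom            = 'E-CD7' THEN 'f_tag_bottom'
         WHEN f_tag_bottom_major_axis = 'E-CD7' THEN 'f_tag_bottom_major_axis'
         WHEN f_tag_bottom_minor_axis = 'E-CD7' THEN 'f_tag_bottom_minor_axis'
         WHEN f_tag_ch_x_bottom       = 'E-CD7' THEN 'f_tag_ch_x_bottom'
         WHEN f_tag_ch_y_bottom       = 'E-CD7' THEN 'f_tag_ch_y_bottom'
         ELSE 'none'
       END AS match_flag,
       STDDEV(ch_x_top)
FROM data_cst
WHERE COALESCE(f_tag_bottom, f_tag_bottom_major_axis,
               f_tag_bottom_minor_axis, f_tag_ch_x_bottom,
               f_tag_ch_y_bottom) = 'E-CD7'
GROUP BY wafer_id, lot_id;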


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: using alias in where clause

2016-01-29 Thread Larry Martell
On Fri, Jan 29, 2016 at 11:15 AM, shawn l.green
 wrote:
>
>
> On 1/28/2016 6:30 PM, Larry Martell wrote:
>>
>> On Thu, Jan 28, 2016 at 5:45 PM, shawn l.green 
>> wrote:
>>>
>>>
>>>
>>> On 1/28/2016 3:32 PM, Larry Martell wrote:


 On Thu, Jan 28, 2016 at 2:13 PM, shawn l.green
 
 wrote:
>
>
>
>
> On 1/28/2016 1:14 PM, Larry Martell wrote:
>>
>>
>>
>> On Tue, Jan 26, 2016 at 8:40 AM, Hal.sz S.ndor  wrote:
>>>
>>>
>>>
>>> 2016/01/25 19:16 ... Larry Martell:




 SELECT IFNULL(f_tag_bottom,
 IFNULL(f_tag_bottom_major_axis,
  IFNULL(f_tag_bottom_minor_axis,
   IFNULL(f_tag_ch_x_bottom,
IFNULL(f_tag_ch_y_bottom, NULL) as ftag,
STDDEV(ch_x_top)
 
>>>
>>>
>>>
>>>
>>> Of course, this isn't your real problem, but you could use COALESCE
>>> instead
>>> of all those IFNULLs (and you don't need the last one):
>>>SELECT COALESCE(f_tag_bottom, f_tag_bottom_major_axis,
>>> f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom) as
>>> ftag,
>>>STDDEV(ch_x_top)
>>>
>>>
>>> As Johnny Withers points out, you may repeat the expression in the
>>> WHERE-clause:
>>>WHERE COALESCE(f_tag_bottom, f_tag_bottom_major_axis,
>>> f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom) =
>>> 'E-CD7'
>>> If really only one of those is not NULL, it is equivalent to this:
>>>'E-CD7' IN (f_tag_bottom, f_tag_bottom_major_axis,
>>> f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom)
>>
>>
>>
>>
>> Many thanks to Hal.sz and Johnny - I had forgotten about coalesce and
>> I didn't know I could use that in a where clause. This worked great
>> for the requirement I had, but of course, once that was implemented my
>> client changed the requirements. Now they want to know which of the 5
>> f_tag_* columns was matched. Not sure how I'll do that. Probably need
>> another query.
>>
> One option to consider is to add another column to the query with a
> CASE
> similar to this...
>
> SELECT
> , ... original fields ...
> , CASE
> WHEN f_tag_bottom THEN 'f_tag_bottom'
> WHEN f_tag_bottom_major_axis THEN 'f_tag_bottom_major_axis'
> ... repeat for the rest of the fields to test ...
> ELSE 'none'
> END as match_flag
> FROM ...
>
> Technically, the term in the WHERE clause should prevent a 'none'
> result
> but
> I put it there to help future-proof the code.



 Won't that find the first one of the f_tags that is not null, but not
 necessarily the one that was matched by the where clause?

>>>
>>> I slightly cheated in my example.
>>>
>>> My CASE...END was listing terms in the same order as the COALESCE()
>>> function
>>> you were using in the WHERE clause. The cheat was that only a non-null
>>> value
>>> could be TRUE. To be more accurate, I should have used
>>> ... WHEN f_tag_bottom IS NOT NULL THEN ...
>>> That way you end up with a true boolean check within the CASE decision
>>> tree.
>>>
>>> As the COALESCE() is testing its terms in the same sequence as the
>>> CASE...END, there should be no difference between the two checks.  But,
>>> that
>>> also adds to the maintenance cost of this query. If you should change the
>>> order of the f_tag checks in the COALESCE() function, you would need to
>>> change the CASE...END to the same sequence.
>>
>>
>> Yes, I see that, but does the case only look at the filtered rows? For
>> example, lets say there's this data:
>>
>> row 1: f_tag_bottom = "ABC"
>> row 2: f_tag_bottom_major_axis = "XYZ"
>>
>> and my where clause has this:
>>
>> WHERE COALESCE(f_tag_bottom, f_tag_bottom_major_axis,
>> f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom) = 'XYZ'
>>
>> won't the CASE pick up row 1? Whereas I want it to pick up row 2.
>>
>
> Yes it would.
>
> Just modify the check in the case from "IS NOT NULL" to "='XYZ'" to make
> both functions (CASE and COALESCE) find the same field value in the same row
> at the same time.

Thanks very much Shawn.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: using alias in where clause

2016-01-28 Thread Larry Martell
On Tue, Jan 26, 2016 at 8:40 AM, Hal.sz S.ndor  wrote:
> 2016/01/25 19:16 ... Larry Martell:
>>
>> SELECT IFNULL(f_tag_bottom,
>>  IFNULL(f_tag_bottom_major_axis,
>>   IFNULL(f_tag_bottom_minor_axis,
>>IFNULL(f_tag_ch_x_bottom,
>> IFNULL(f_tag_ch_y_bottom, NULL) as ftag,
>> STDDEV(ch_x_top)
>> 
>
> Of course, this isn't your real problem, but you could use COALESCE instead
> of all those IFNULLs (and you don't need the last one):
> SELECT COALESCE(f_tag_bottom, f_tag_bottom_major_axis,
> f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom) as ftag,
> STDDEV(ch_x_top)
> 
>
> As Johnny Withers points out, you may repeat the expression in the
> WHERE-clause:
> WHERE COALESCE(f_tag_bottom, f_tag_bottom_major_axis,
> f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom) = 'E-CD7'
> If really only one of those is not NULL, it is equivalent to this:
> 'E-CD7' IN (f_tag_bottom, f_tag_bottom_major_axis,
> f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom)

Many thanks to Hal.sz and Johnny - I had forgotten about coalesce and
I didn't know I could use that in a where clause. This worked great
for the requirement I had, but of course, once that was implemented my
client changed the requirements. Now they want to know which of the 5
f_tag_* columns was matched. Not sure how I'll do that. Probably need
another query.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: using alias in where clause

2016-01-28 Thread shawn l.green



On 1/28/2016 1:14 PM, Larry Martell wrote:

On Tue, Jan 26, 2016 at 8:40 AM, Hal.sz S.ndor  wrote:

2016/01/25 19:16 ... Larry Martell:


SELECT IFNULL(f_tag_bottom,
  IFNULL(f_tag_bottom_major_axis,
   IFNULL(f_tag_bottom_minor_axis,
IFNULL(f_tag_ch_x_bottom,
 IFNULL(f_tag_ch_y_bottom, NULL) as ftag,
 STDDEV(ch_x_top)



Of course, this isn't your real problem, but you could use COALESCE instead
of all those IFNULLs (and you don't need the last one):
 SELECT COALESCE(f_tag_bottom, f_tag_bottom_major_axis,
f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom) as ftag,
 STDDEV(ch_x_top)
 

As Johnny Withers points out, you may repeat the expression in the
WHERE-clause:
 WHERE COALESCE(f_tag_bottom, f_tag_bottom_major_axis,
f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom) = 'E-CD7'
If really only one of those is not NULL, it is equivalent to this:
 'E-CD7' IN (f_tag_bottom, f_tag_bottom_major_axis,
f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom)


Many thanks to Hal.sz and Johnny - I had forgotten about coalesce and
I didn't know I could use that in a where clause. This worked great
for the requirement I had, but of course, once that was implemented my
client changed the requirements. Now they want to know which of the 5
f_tag_* columns was matched. Not sure how I'll do that. Probably need
another query.

One option to consider is to add another column to the query with a CASE 
similar to this...


SELECT
, ... original fields ...
, CASE
  WHEN f_tag_bottom THEN 'f_tag_bottom'
  WHEN f_tag_bottom_major_axis THEN 'f_tag_bottom_major_axis'
  ... repeat for the rest of the fields to test ...
  ELSE 'none'
  END as match_flag
FROM ...

Technically, the term in the WHERE clause should prevent a 'none' result 
but I put it there to help future-proof the code.


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: using alias in where clause

2016-01-28 Thread shawn l.green



On 1/28/2016 3:32 PM, Larry Martell wrote:

On Thu, Jan 28, 2016 at 2:13 PM, shawn l.green  wrote:



On 1/28/2016 1:14 PM, Larry Martell wrote:


On Tue, Jan 26, 2016 at 8:40 AM, Hal.sz S.ndor  wrote:


2016/01/25 19:16 ... Larry Martell:



SELECT IFNULL(f_tag_bottom,
   IFNULL(f_tag_bottom_major_axis,
IFNULL(f_tag_bottom_minor_axis,
 IFNULL(f_tag_ch_x_bottom,
  IFNULL(f_tag_ch_y_bottom, NULL) as ftag,
  STDDEV(ch_x_top)




Of course, this isn't your real problem, but you could use COALESCE
instead
of all those IFNULLs (and you don't need the last one):
  SELECT COALESCE(f_tag_bottom, f_tag_bottom_major_axis,
f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom) as ftag,
  STDDEV(ch_x_top)
  

As Johnny Withers points out, you may repeat the expression in the
WHERE-clause:
  WHERE COALESCE(f_tag_bottom, f_tag_bottom_major_axis,
f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom) = 'E-CD7'
If really only one of those is not NULL, it is equivalent to this:
  'E-CD7' IN (f_tag_bottom, f_tag_bottom_major_axis,
f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom)



Many thanks to Hal.sz and Johnny - I had forgotten about coalesce and
I didn't know I could use that in a where clause. This worked great
for the requirement I had, but of course, once that was implemented my
client changed the requirements. Now they want to know which of the 5
f_tag_* columns was matched. Not sure how I'll do that. Probably need
another query.


One option to consider is to add another column to the query with a CASE
similar to this...

SELECT
, ... original fields ...
, CASE
   WHEN f_tag_bottom THEN 'f_tag_bottom'
   WHEN f_tag_bottom_major_axis THEN 'f_tag_bottom_major_axis'
   ... repeat for the rest of the fields to test ...
   ELSE 'none'
   END as match_flag
FROM ...

Technically, the term in the WHERE clause should prevent a 'none' result but
I put it there to help future-proof the code.


Won't that find the first one of the f_tags that is not null, but not
necessarily the one that was matched by the where clause?



I slightly cheated in my example.

My CASE...END was listing terms in the same order as the COALESCE() 
function you were using in the WHERE clause. The cheat was that only a 
non-null value could be TRUE. To be more accurate, I should have used

   ... WHEN f_tag_bottom IS NOT NULL THEN ...
That way you end up with a true boolean check within the CASE decision 
tree.


As the COALESCE() is testing its terms in the same sequence as the 
CASE...END, there should be no difference between the two checks.  But, 
that also adds to the maintenance cost of this query. If you should 
change the order of the f_tag checks in the COALESCE() function, you 
would need to change the CASE...END to the same sequence.


Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: using alias in where clause

2016-01-28 Thread Larry Martell
On Thu, Jan 28, 2016 at 2:13 PM, shawn l.green  wrote:
>
>
> On 1/28/2016 1:14 PM, Larry Martell wrote:
>>
>> On Tue, Jan 26, 2016 at 8:40 AM, Hal.sz S.ndor  wrote:
>>>
>>> 2016/01/25 19:16 ... Larry Martell:


 SELECT IFNULL(f_tag_bottom,
   IFNULL(f_tag_bottom_major_axis,
IFNULL(f_tag_bottom_minor_axis,
 IFNULL(f_tag_ch_x_bottom,
  IFNULL(f_tag_ch_y_bottom, NULL) as ftag,
  STDDEV(ch_x_top)
 
>>>
>>>
>>> Of course, this isn't your real problem, but you could use COALESCE
>>> instead
>>> of all those IFNULLs (and you don't need the last one):
>>>  SELECT COALESCE(f_tag_bottom, f_tag_bottom_major_axis,
>>> f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom) as ftag,
>>>  STDDEV(ch_x_top)
>>>  
>>>
>>> As Johnny Withers points out, you may repeat the expression in the
>>> WHERE-clause:
>>>  WHERE COALESCE(f_tag_bottom, f_tag_bottom_major_axis,
>>> f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom) = 'E-CD7'
>>> If really only one of those is not NULL, it is equivalent to this:
>>>  'E-CD7' IN (f_tag_bottom, f_tag_bottom_major_axis,
>>> f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom)
>>
>>
>> Many thanks to Hal.sz and Johnny - I had forgotten about coalesce and
>> I didn't know I could use that in a where clause. This worked great
>> for the requirement I had, but of course, once that was implemented my
>> client changed the requirements. Now they want to know which of the 5
>> f_tag_* columns was matched. Not sure how I'll do that. Probably need
>> another query.
>>
> One option to consider is to add another column to the query with a CASE
> similar to this...
>
> SELECT
> , ... original fields ...
> , CASE
>   WHEN f_tag_bottom THEN 'f_tag_bottom'
>   WHEN f_tag_bottom_major_axis THEN 'f_tag_bottom_major_axis'
>   ... repeat for the rest of the fields to test ...
>   ELSE 'none'
>   END as match_flag
> FROM ...
>
> Technically, the term in the WHERE clause should prevent a 'none' result but
> I put it there to help future-proof the code.

Won't that find the first one of the f_tags that is not null, but not
necessarily the one that was matched by the where clause?

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: using alias in where clause

2016-01-28 Thread Larry Martell
On Thu, Jan 28, 2016 at 5:45 PM, shawn l.green  wrote:
>
>
> On 1/28/2016 3:32 PM, Larry Martell wrote:
>>
>> On Thu, Jan 28, 2016 at 2:13 PM, shawn l.green 
>> wrote:
>>>
>>>
>>>
>>> On 1/28/2016 1:14 PM, Larry Martell wrote:


 On Tue, Jan 26, 2016 at 8:40 AM, Hal.sz S.ndor  wrote:
>
>
> 2016/01/25 19:16 ... Larry Martell:
>>
>>
>>
>> SELECT IFNULL(f_tag_bottom,
>>IFNULL(f_tag_bottom_major_axis,
>> IFNULL(f_tag_bottom_minor_axis,
>>  IFNULL(f_tag_ch_x_bottom,
>>   IFNULL(f_tag_ch_y_bottom, NULL) as ftag,
>>   STDDEV(ch_x_top)
>> 
>
>
>
> Of course, this isn't your real problem, but you could use COALESCE
> instead
> of all those IFNULLs (and you don't need the last one):
>   SELECT COALESCE(f_tag_bottom, f_tag_bottom_major_axis,
> f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom) as ftag,
>   STDDEV(ch_x_top)
>   
>
> As Johnny Withers points out, you may repeat the expression in the
> WHERE-clause:
>   WHERE COALESCE(f_tag_bottom, f_tag_bottom_major_axis,
> f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom) =
> 'E-CD7'
> If really only one of those is not NULL, it is equivalent to this:
>   'E-CD7' IN (f_tag_bottom, f_tag_bottom_major_axis,
> f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom)



 Many thanks to Hal.sz and Johnny - I had forgotten about coalesce and
 I didn't know I could use that in a where clause. This worked great
 for the requirement I had, but of course, once that was implemented my
 client changed the requirements. Now they want to know which of the 5
 f_tag_* columns was matched. Not sure how I'll do that. Probably need
 another query.

>>> One option to consider is to add another column to the query with a CASE
>>> similar to this...
>>>
>>> SELECT
>>> , ... original fields ...
>>> , CASE
>>>WHEN f_tag_bottom THEN 'f_tag_bottom'
>>>WHEN f_tag_bottom_major_axis THEN 'f_tag_bottom_major_axis'
>>>... repeat for the rest of the fields to test ...
>>>ELSE 'none'
>>>END as match_flag
>>> FROM ...
>>>
>>> Technically, the term in the WHERE clause should prevent a 'none' result
>>> but
>>> I put it there to help future-proof the code.
>>
>>
>> Won't that find the first one of the f_tags that is not null, but not
>> necessarily the one that was matched by the where clause?
>>
>
> I slightly cheated in my example.
>
> My CASE...END was listing terms in the same order as the COALESCE() function
> you were using in the WHERE clause. The cheat was that only a non-null value
> could be TRUE. To be more accurate, I should have used
>... WHEN f_tag_bottom IS NOT NULL THEN ...
> That way you end up with a true boolean check within the CASE decision tree.
>
> As the COALESCE() is testing its terms in the same sequence as the
> CASE...END, there should be no difference between the two checks.  But, that
> also adds to the maintenance cost of this query. If you should change the
> order of the f_tag checks in the COALESCE() function, you would need to
> change the CASE...END to the same sequence.

Yes, I see that, but does the case only look at the filtered rows? For
example, let's say there's this data:

row 1: f_tag_bottom = "ABC"
row 2: f_tag_bottom_major_axis = "XYZ"

and my where clause has this:

WHERE COALESCE(f_tag_bottom, f_tag_bottom_major_axis,
f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom) = 'XYZ'

won't the CASE pick up row 1? Whereas I want it to pick up row 2.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



C++ comment in binary_log_types.h header

2016-01-27 Thread Sebastien FLAESCH

Hi all,

In version 5.7.10, I found a C++ comment in the .h headers:

#endif // __cplusplus

Should be:

#endif /* __cplusplus */

To avoid:

cc -g -fsanitize=address -fno-omit-frame-pointer -W -Wall -pedantic 
-Wmissing-prototypes \
   -Wno-long-long -Werror -fPIC \
   -I/opt3/dbs/mys/5.7/include \
mys.c -o mys_5_7.o
In file included from /opt3/dbs/mys/5.7/include/mysql_com.h:22:0,
 from /opt3/dbs/mys/5.7/include/mysql.h:58,
 from mys.c:41:
/opt3/dbs/mys/5.7/include/binary_log_types.h:68:8: error: C++ style comments 
are not allowed in ISO C90 [-Werror]
 #endif // __cplusplus */


Seb

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: using alias in where clause

2016-01-26 Thread Hal.sz S.ndor

2016/01/25 19:16 ... Larry Martell:

SELECT IFNULL(f_tag_bottom,
 IFNULL(f_tag_bottom_major_axis,
  IFNULL(f_tag_bottom_minor_axis,
   IFNULL(f_tag_ch_x_bottom,
IFNULL(f_tag_ch_y_bottom, NULL) as ftag,
STDDEV(ch_x_top)

Of course, this isn't your real problem, but you could use COALESCE 
instead of all those IFNULLs (and you don't need the last one):
	SELECT COALESCE(f_tag_bottom, f_tag_bottom_major_axis, 
f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom) as ftag,

STDDEV(ch_x_top)


As Johnny Withers points out, you may repeat the expression in the 
WHERE-clause:
	WHERE COALESCE(f_tag_bottom, f_tag_bottom_major_axis, 
f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom) = 'E-CD7'

If really only one of those is not NULL, it is equivalent to this:
	'E-CD7' IN (f_tag_bottom, f_tag_bottom_major_axis, 
f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom)
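
Applied to the query from the original post, the IN form is, as a sketch
(still assuming at most one of the five columns is non-NULL per row):

SELECT 'E-CD7' AS ftag,
       STDDEV(ch_x_top)
FROM data_cst
WHERE 'E-CD7' IN (f_tag_bottom, f_tag_bottom_major_axis,
                  f_tag_bottom_minor_axis, f_tag_ch_x_bottom,
                  f_tag_ch_y_bottom)
GROUP BY wafer_id, lot_id;

Here ftag is just the constant being filtered on, which is all the original
SELECT list needed when exactly one column can match.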


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



bytes in Recv-Q not processed by mysql

2016-01-26 Thread Jatin Davey

Hi All

In my application while writing to DB , i could see the following output 
from netstat for a single connection to the mysqld DB.


***
tcp   18  0 :::127.0.0.1:3306 :::127.0.0.1:43029  
ESTABLISHED 30850/mysqld keepalive (3517.74/0/0)


***

I can see that there are 18 outstanding bytes that have not been read by 
mysqld on this connection. This stays on indefinitely in my system and is 
cleared only when I restart my application. I also see some functional 
issues in my application, and when I hit the functional issue I can see 
the above observation from the netstat -tonp command.


Basically, certain threads get into a BLOCKED state, unable to write to the DB. 
Any help / pointers would be really appreciated.


Thanks
Jatin


Re: using alias in where clause

2016-01-25 Thread Johnny Withers
On Mon, Jan 25, 2016 at 9:32 PM, Larry Martell 
wrote:

> On Mon, Jan 25, 2016 at 8:26 PM, Johnny Withers 
> wrote:
> > You should probably turn this into a UNION and put an index on each
> column:
> >
> > SELECT f_tag_ch_y_bottom AS ftag FROM data_cst WHERE f_tag_ch_y_bottom =
> > 'E-CD7'
> > UNION ALL
> > SELECT f_tag_ch_x_bottom AS ftag FROM data_cst WHERE f_tag_ch_x_bottom =
> > 'E-CD7'
> > UNION ALL
> > SELECT f_tag_bottom_minor_axis AS ftag FROM data_cst WHERE
> > f_tag_bottom_minor_axis = 'E-CD7'
> > UNION ALL
> > SELECT f_tag_bottom_major_axis AS ftag FROM data_cst WHERE
> > f_tag_bottom_major_axis = 'E-CD7'
> > UNION ALL
> > SELECT f_tag_bottom FROM data_cst AS ftag WHERE f_tag_bottom = 'E-CD7'
> > ;
>
> This may work for me, but I need to explore this more tomorrow. I need
> the select to control the rows included in the aggregation. For
> example, the rows where the f_tag_* col that is used does not =
> 'E-CD7' should not be included in the aggregation. Also, I grossly
> simplified the query for this post. In reality I have 15 items in the
> where clause and a having as well.
>

I see, I missed the STDDEV() function you had, perhaps you could add that
column to each SELECT in the union, then wrap the entire union inside
another select:

SELECT ftag, STDDEV(ch_x_top) FROM (
..union stuff here...
)
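
Filled out a little (keeping the grouping columns from the original query,
and assuming at most one branch matches any given row), that outer select
might look like:

SELECT ftag, wafer_id, lot_id, STDDEV(ch_x_top)
FROM (
    SELECT f_tag_bottom AS ftag, wafer_id, lot_id, ch_x_top
    FROM data_cst WHERE f_tag_bottom = 'E-CD7'
    UNION ALL
    SELECT f_tag_bottom_major_axis, wafer_id, lot_id, ch_x_top
    FROM data_cst WHERE f_tag_bottom_major_axis = 'E-CD7'
    UNION ALL
    SELECT f_tag_ch_y_bottom, wafer_id, lot_id, ch_x_top
    FROM data_cst WHERE f_tag_ch_y_bottom = 'E-CD7'
    -- ...one branch per remaining f_tag_* column...
) AS u
GROUP BY ftag, wafer_id, lot_id;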


>
> > Doing this any other way will prevent index usage and a full table scan
> will
> > be required.
>
> Yes, I will be adding indices - right now I am just worried about
> getting the query to work. But this is important as it's a big table.
> So I do appreciate you mentioning it.
>

In reality, you could repeat the IFNULL(...) in the where clause the same
way you have it in the column list. Not the optimal solution but it'd work
for a proof of concept.


>
> > Is there a possibility of more than one column matching? How would you
> > handle that?
>
> I was told only 1 of the 5 will be populated and the other 4 will be
> null. But still, I said I have to code for the case where that is not
> true. So then I was told to use the first one I find that is not null,
> looking in the order I had in my original post.
>

You could also wrap another select around the union to handle more than one
of the columns having a value. You could use the outer select to pick the
one you wanted, something similar to:

SELECT
IFNULL(f_tag_ch_y_bottom, IFNULL(f_tag_ch_x_bottom, IFNULL(..., STDDEV(ch_x_top)
FROM (
SELECT f_tag_ch_y_bottom,NULL as f_tag_ch_x_bottom,NULL AS
f_tag_bottom_minor,..., ch_x_top FROM data_cst WHERE f_tag_ch_y_bottom =
'E-CD7'
UNION ALL
SELECT NULL AS f_tag_ch_y_bottom, f_tag_ch_x_bottom,NULL AS
f_tag_bottom_minor,...,ch_x_top FROM data_cst WHERE f_tag_ch_x_bottom
= 'E-CD7'
UNION
)

And so on for each column/query.


>
>
> > On Mon, Jan 25, 2016 at 6:16 PM, Larry Martell 
> > wrote:
> >>
> >> I know I cannot use an alias in a where clause, but I am trying to
> >> figure out how to achieve what I need.
> >>
> >> If I could have an alias in a where clause my sql would look like this:
> >>
> >> SELECT IFNULL(f_tag_bottom,
> >> IFNULL(f_tag_bottom_major_axis,
> >>  IFNULL(f_tag_bottom_minor_axis,
> >>   IFNULL(f_tag_ch_x_bottom,
> >>IFNULL(f_tag_ch_y_bottom, NULL) as ftag,
> >>STDDEV(ch_x_top)
> >> FROM data_cst
> >> WHERE ftag = 'E-CD7'
> >> GROUP BY wafer_id, lot_id
> >>
> >> But I can't use ftag in the where clause. I can't put it in a having
> >> clause, as that would exclude the already aggregated rows and I want
> >> to filter then before the aggregation. Anyone have any idea how I can
> >> do this?
> >>
> >> --
> >> MySQL General Mailing List
> >> For list archives: http://lists.mysql.com/mysql
> >> To unsubscribe:http://lists.mysql.com/mysql
> >>
> >
> >
> >
> > --
> > -
> > Johnny Withers
> > 601.209.4985
> > joh...@pixelated.net
>



-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: using alias in where clause

2016-01-25 Thread Larry Martell
On Mon, Jan 25, 2016 at 8:26 PM, Johnny Withers  wrote:
> You should probably turn this into a UNION and put an index on each column:
>
> SELECT f_tag_ch_y_bottom AS ftag FROM data_cst WHERE f_tag_ch_y_bottom =
> 'E-CD7'
> UNION ALL
> SELECT f_tag_ch_x_bottom AS ftag FROM data_cst WHERE f_tag_ch_x_bottom =
> 'E-CD7'
> UNION ALL
> SELECT f_tag_bottom_minor_axis AS ftag FROM data_cst WHERE
> f_tag_bottom_minor_axis = 'E-CD7'
> UNION ALL
> SELECT f_tag_bottom_major_axis AS ftag FROM data_cst WHERE
> f_tag_bottom_major_axis = 'E-CD7'
> UNION ALL
> SELECT f_tag_bottom FROM data_cst AS ftag WHERE f_tag_bottom = 'E-CD7'
> ;

This may work for me, but I need to explore this more tomorrow. I need
the select to control the rows included in the aggregation. For
example, the rows where the f_tag_* col that is used does not =
'E-CD7' should not be included in the aggregation. Also, I grossly
simplified the query for this post. In reality I have 15 items in the
where clause and a having as well.

> Doing this any other way will prevent index usage and a full table scan will
> be required.

Yes, I will be adding indices - right now I am just worried about
getting the query to work. But this is important as it's a big table.
So I do appreciate you mentioning it.

> Is there a possibility of more than one column matching? How would you
> handle that?

I was told only 1 of the 5 will be populated and the other 4 will be
null. But still, I said I have to code for the case where that is not
true. So then I was told to use the first one I find that is not null,
looking in the order I had in my original post.


> On Mon, Jan 25, 2016 at 6:16 PM, Larry Martell 
> wrote:
>>
>> I know I cannot use an alias in a where clause, but I am trying to
>> figure out how to achieve what I need.
>>
>> If I could have an alias in a where clause my sql would look like this:
>>
>> SELECT IFNULL(f_tag_bottom,
>> IFNULL(f_tag_bottom_major_axis,
>>  IFNULL(f_tag_bottom_minor_axis,
>>   IFNULL(f_tag_ch_x_bottom,
>>IFNULL(f_tag_ch_y_bottom, NULL) as ftag,
>>STDDEV(ch_x_top)
>> FROM data_cst
>> WHERE ftag = 'E-CD7'
>> GROUP BY wafer_id, lot_id
>>
>> But I can't use ftag in the where clause. I can't put it in a having
>> clause, as that would exclude the already aggregated rows and I want
>> to filter then before the aggregation. Anyone have any idea how I can
>> do this?
>>
>> --
>> MySQL General Mailing List
>> For list archives: http://lists.mysql.com/mysql
>> To unsubscribe:http://lists.mysql.com/mysql
>>
>
>
>
> --
> -
> Johnny Withers
> 601.209.4985
> joh...@pixelated.net

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: using alias in where clause

2016-01-25 Thread Johnny Withers
You should probably turn this into a UNION and put an index on each column:

SELECT f_tag_ch_y_bottom AS ftag FROM data_cst WHERE f_tag_ch_y_bottom =
'E-CD7'
UNION ALL
SELECT f_tag_ch_x_bottom AS ftag FROM data_cst WHERE f_tag_ch_x_bottom =
'E-CD7'
UNION ALL
SELECT f_tag_bottom_minor_axis AS ftag FROM data_cst WHERE
f_tag_bottom_minor_axis = 'E-CD7'
UNION ALL
SELECT f_tag_bottom_major_axis AS ftag FROM data_cst WHERE
f_tag_bottom_major_axis = 'E-CD7'
UNION ALL
SELECT f_tag_bottom FROM data_cst AS ftag WHERE f_tag_bottom = 'E-CD7'
;

Doing this any other way will prevent index usage and a full table scan
will be required.

Is there a possibility of more than one column matching? How would you
handle that?


On Mon, Jan 25, 2016 at 6:16 PM, Larry Martell 
wrote:

> I know I cannot use an alias in a where clause, but I am trying to
> figure out how to achieve what I need.
>
> If I could have an alias in a where clause my sql would look like this:
>
> SELECT IFNULL(f_tag_bottom,
> IFNULL(f_tag_bottom_major_axis,
>  IFNULL(f_tag_bottom_minor_axis,
>   IFNULL(f_tag_ch_x_bottom,
>IFNULL(f_tag_ch_y_bottom, NULL) as ftag,
>STDDEV(ch_x_top)
> FROM data_cst
> WHERE ftag = 'E-CD7'
> GROUP BY wafer_id, lot_id
>
> But I can't use ftag in the where clause. I can't put it in a having
> clause, as that would exclude the already aggregated rows and I want
> to filter then before the aggregation. Anyone have any idea how I can
> do this?
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql
>
>


-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: using alias in where clause

2016-01-25 Thread Reindl Harald



Am 26.01.2016 um 01:16 schrieb Larry Martell:

I know I cannot use an alias in a where clause, but I am trying to
figure out how to achieve what I need.

If I could have an alias in a where clause my sql would look like this:

SELECT IFNULL(f_tag_bottom,
 IFNULL(f_tag_bottom_major_axis,
  IFNULL(f_tag_bottom_minor_axis,
   IFNULL(f_tag_ch_x_bottom,
IFNULL(f_tag_ch_y_bottom, NULL) as ftag,
STDDEV(ch_x_top)
FROM data_cst
WHERE ftag = 'E-CD7'
GROUP BY wafer_id, lot_id

But I can't use ftag in the where clause. I can't put it in a having
clause, as that would exclude the already aggregated rows and I want
to filter then before the aggregation. Anyone have any idea how I can
do this?


why would you want to do that?
it would make the where clause not be able to use proper indexes



signature.asc
Description: OpenPGP digital signature


Re: using alias in where clause

2016-01-25 Thread Rebecca Love
Have you tried using a select case statement for ftag?


> On Jan 25, 2016, at 6:39 PM, Larry Martell  wrote:
> 
> On Mon, Jan 25, 2016 at 7:27 PM, Reindl Harald  wrote:
>> 
>> 
>> Am 26.01.2016 um 01:16 schrieb Larry Martell:
>>> 
>>> I know I cannot use an alias in a where clause, but I am trying to
>>> figure out how to achieve what I need.
>>> 
>>> If I could have an alias in a where clause my sql would look like this:
>>> 
>>> SELECT IFNULL(f_tag_bottom,
>>> IFNULL(f_tag_bottom_major_axis,
>>>  IFNULL(f_tag_bottom_minor_axis,
>>>   IFNULL(f_tag_ch_x_bottom,
>>>IFNULL(f_tag_ch_y_bottom, NULL) as ftag,
>>>STDDEV(ch_x_top)
>>> FROM data_cst
>>> WHERE ftag = 'E-CD7'
>>> GROUP BY wafer_id, lot_id
>>> 
>>> But I can't use ftag in the where clause. I can't put it in a having
>>> clause, as that would exclude the already aggregated rows and I want
>>> to filter then before the aggregation. Anyone have any idea how I can
>>> do this?
>> 
>> 
>> why would you want to do that?
>> it would make the where clause not be able to use proper indexes
> 
> I do not know which of the 5 f_tag_* columns will be populated and I
> need to filter on the one that is populated.
> 
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql
> 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



using alias in where clause

2016-01-25 Thread Larry Martell
I know I cannot use an alias in a where clause, but I am trying to
figure out how to achieve what I need.

If I could have an alias in a where clause my sql would look like this:

SELECT IFNULL(f_tag_bottom,
IFNULL(f_tag_bottom_major_axis,
 IFNULL(f_tag_bottom_minor_axis,
  IFNULL(f_tag_ch_x_bottom,
   IFNULL(f_tag_ch_y_bottom, NULL) as ftag,
   STDDEV(ch_x_top)
FROM data_cst
WHERE ftag = 'E-CD7'
GROUP BY wafer_id, lot_id

But I can't use ftag in the where clause. I can't put it in a having
clause, as that would exclude the already aggregated rows and I want
to filter them before the aggregation. Anyone have any idea how I can
do this?

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: using alias in where clause

2016-01-25 Thread Larry Martell
On Mon, Jan 25, 2016 at 7:27 PM, Reindl Harald  wrote:
>
>
> Am 26.01.2016 um 01:16 schrieb Larry Martell:
>>
>> I know I cannot use an alias in a where clause, but I am trying to
>> figure out how to achieve what I need.
>>
>> If I could have an alias in a where clause my sql would look like this:
>>
>> SELECT IFNULL(f_tag_bottom,
>>  IFNULL(f_tag_bottom_major_axis,
>>   IFNULL(f_tag_bottom_minor_axis,
>>IFNULL(f_tag_ch_x_bottom,
>> IFNULL(f_tag_ch_y_bottom, NULL) as ftag,
>> STDDEV(ch_x_top)
>> FROM data_cst
>> WHERE ftag = 'E-CD7'
>> GROUP BY wafer_id, lot_id
>>
>> But I can't use ftag in the where clause. I can't put it in a having
>> clause, as that would exclude the already aggregated rows and I want
>> to filter then before the aggregation. Anyone have any idea how I can
>> do this?
>
>
> why would you want to do that?
> it would make the where clause not be able to use proper indexes

I do not know which of the 5 f_tag_* columns will be populated and I
need to filter on the one that is populated.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: using alias in where clause

2016-01-25 Thread Larry Martell
On Mon, Jan 25, 2016 at 8:01 PM, Rebecca Love  wrote:
> Have you tried using a select case statement for ftag?

How would that help? Unless I'm missing something, I'd still have to
have a reference to the column alias in the where clause.


>> On Jan 25, 2016, at 6:39 PM, Larry Martell  wrote:
>>
>> On Mon, Jan 25, 2016 at 7:27 PM, Reindl Harald  
>> wrote:
>>>
>>>
>>> Am 26.01.2016 um 01:16 schrieb Larry Martell:

 I know I cannot use an alias in a where clause, but I am trying to
 figure out how to achieve what I need.

 If I could have an alias in a where clause my sql would look like this:

 SELECT IFNULL(f_tag_bottom,
 IFNULL(f_tag_bottom_major_axis,
  IFNULL(f_tag_bottom_minor_axis,
   IFNULL(f_tag_ch_x_bottom,
IFNULL(f_tag_ch_y_bottom, NULL) as ftag,
STDDEV(ch_x_top)
 FROM data_cst
 WHERE ftag = 'E-CD7'
 GROUP BY wafer_id, lot_id

 But I can't use ftag in the where clause. I can't put it in a having
 clause, as that would exclude the already aggregated rows and I want
 to filter then before the aggregation. Anyone have any idea how I can
 do this?
>>>
>>>
>>> why would you want to do that?
>>> it would make the where clause not be able to use proper indexes
>>
>> I do not know which of the 5 f_tag_* columns will be populated and I
>> need to filter on the one that is populated.
>>
>> --
>> MySQL General Mailing List
>> For list archives: http://lists.mysql.com/mysql
>> To unsubscribe:http://lists.mysql.com/mysql
>>
>

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



mysql\innodb_table_stats.ibd. Cannot open tablespace

2016-01-22 Thread Neil Tompkins
Hi,

Hoping someone can help me identify why I keep having to restore my
database.  You can see below, that my machine shut down normally, yet when
I restarted the machine back up, I'm getting the error
'mysql\innodb_table_stats.ibd. Cannot open tablespace' and am having to
delete the current database and restore my backup.

Here is my Error log file.

2016-01-22 18:06:28 2540 [Note] MySQL: Normal shutdown

2016-01-22 18:06:28 2540 [Note] Giving 0 client threads a chance to die
gracefully
2016-01-22 18:06:28 2540 [Note] Event Scheduler: Purging the queue. 0 events
2016-01-22 18:06:28 2540 [Note] Shutting down slave threads
2016-01-22 18:06:28 2540 [Note] Forcefully disconnecting 0 remaining clients
2016-01-22 18:06:28 2540 [Note] Binlog end
2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'partition'
2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'PERFORMANCE_SCHEMA'
2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_SYS_DATAFILES'
2016-01-22 18:06:31 2540 [Note] Shutting down plugin
'INNODB_SYS_TABLESPACES'
2016-01-22 18:06:31 2540 [Note] Shutting down plugin
'INNODB_SYS_FOREIGN_COLS'
2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN'
2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_SYS_FIELDS'
2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_SYS_COLUMNS'
2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_SYS_INDEXES'
2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_SYS_TABLESTATS'
2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_SYS_TABLES'
2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_FT_INDEX_TABLE'
2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_FT_INDEX_CACHE'
2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_FT_CONFIG'
2016-01-22 18:06:31 2540 [Note] Shutting down plugin
'INNODB_FT_BEING_DELETED'
2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_FT_DELETED'
2016-01-22 18:06:31 2540 [Note] Shutting down plugin
'INNODB_FT_DEFAULT_STOPWORD'
2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_METRICS'
2016-01-22 18:06:31 2540 [Note] Shutting down plugin
'INNODB_BUFFER_POOL_STATS'
2016-01-22 18:06:31 2540 [Note] Shutting down plugin
'INNODB_BUFFER_PAGE_LRU'
2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE'
2016-01-22 18:06:31 2540 [Note] Shutting down plugin
'INNODB_CMP_PER_INDEX_RESET'
2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX'
2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET'
2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_CMPMEM'
2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_CMP_RESET'
2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_CMP'
2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'
2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_LOCKS'
2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_TRX'
2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'InnoDB'
2016-01-22 18:06:31 2540 [Note] InnoDB: FTS optimize thread exiting.
2016-01-22 18:06:31 2540 [Note] InnoDB: Starting shutdown...
2016-01-22 19:43:48 2556 [Note] Plugin 'FEDERATED' is disabled.
2016-01-22 19:43:48 2556 [Warning] option 'innodb-autoextend-increment':
unsigned value 67108864 adjusted to 1000
2016-01-22 19:43:48 a0c InnoDB: Warning: Using
innodb_additional_mem_pool_size is DEPRECATED. This option may be removed
in future releases, together with the option innodb_use_sys_malloc and with
the InnoDB's internal memory allocator.
2016-01-22 19:43:48 2556 [Note] InnoDB: The InnoDB memory heap is disabled
2016-01-22 19:43:48 2556 [Note] InnoDB: Mutexes and rw_locks use Windows
interlocked functions
2016-01-22 19:43:48 2556 [Note] InnoDB: Compressed tables use zlib 1.2.3
2016-01-22 19:43:48 2556 [Note] InnoDB: Not using CPU crc32 instructions
2016-01-22 19:43:48 2556 [Note] InnoDB: Initializing buffer pool, size =
1.0G
2016-01-22 19:43:48 2556 [Note] InnoDB: Completed initialization of buffer
pool
2016-01-22 19:43:48 2556 [Note] InnoDB: Highest supported file format is
Barracuda.
2016-01-22 19:43:48 2556 [Note] InnoDB: The log sequence numbers 1600607
and 1600607 in ibdata files do not match the log sequence number 154136116
in the ib_logfiles!
2016-01-22 19:43:48 2556 [Note] InnoDB: Database was not shutdown normally!
2016-01-22 19:43:48 2556 [Note] InnoDB: Starting crash recovery.
2016-01-22 19:43:48 2556 [Note] InnoDB: Reading tablespace information from
the .ibd files...
2016-01-22 19:43:48 2556 [ERROR] InnoDB: Attempted to open a previously
opened tablespace. Previous tablespace mysql/innodb_table_stats uses space
ID: 1 at filepath: .\mysql\innodb_table_stats.ibd. Cannot open tablespace
sportstrader/event which uses space ID: 1 at filepath:
.\sportstrader\event.ibd
InnoDB: Error: could not open single-table tablespace file
.\sportstrader\event.ibd
InnoDB: We do not continue the crash recovery, because the table may 

Re: mysql\innodb_table_stats.ibd. Cannot open tablespace

2016-01-22 Thread Steven Siebert
The error is stating that your innodb log sequence is higher that that of
the actual data files...any chance your data partition is full after your
restore?

On Fri, Jan 22, 2016 at 3:23 PM, Neil Tompkins  wrote:

> Hi,
>
> Hoping someone can help me identify why I keep having to restore my
> database.  You can see below, that my machine shut down normally, yet when
> I restarted the machine back up, I'm getting the error
> 'mysql\innodb_table_stats.ibd. Cannot open tablespace' and am having to
> delete the current database and restore my backup.
>
> Here is my Error log file.
>
> 2016-01-22 18:06:28 2540 [Note] MySQL: Normal shutdown
>
> 2016-01-22 18:06:28 2540 [Note] Giving 0 client threads a chance to die
> gracefully
> 2016-01-22 18:06:28 2540 [Note] Event Scheduler: Purging the queue. 0
> events
> 2016-01-22 18:06:28 2540 [Note] Shutting down slave threads
> 2016-01-22 18:06:28 2540 [Note] Forcefully disconnecting 0 remaining
> clients
> 2016-01-22 18:06:28 2540 [Note] Binlog end
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'partition'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'PERFORMANCE_SCHEMA'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_SYS_DATAFILES'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
> 'INNODB_SYS_TABLESPACES'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
> 'INNODB_SYS_FOREIGN_COLS'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_SYS_FIELDS'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_SYS_COLUMNS'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_SYS_INDEXES'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
> 'INNODB_SYS_TABLESTATS'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_SYS_TABLES'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
> 'INNODB_FT_INDEX_TABLE'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
> 'INNODB_FT_INDEX_CACHE'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_FT_CONFIG'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
> 'INNODB_FT_BEING_DELETED'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_FT_DELETED'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
> 'INNODB_FT_DEFAULT_STOPWORD'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_METRICS'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
> 'INNODB_BUFFER_POOL_STATS'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
> 'INNODB_BUFFER_PAGE_LRU'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
> 'INNODB_CMP_PER_INDEX_RESET'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_CMPMEM'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_CMP_RESET'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_CMP'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_LOCKS'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_TRX'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'InnoDB'
> 2016-01-22 18:06:31 2540 [Note] InnoDB: FTS optimize thread exiting.
> 2016-01-22 18:06:31 2540 [Note] InnoDB: Starting shutdown...
> 2016-01-22 19:43:48 2556 [Note] Plugin 'FEDERATED' is disabled.
> 2016-01-22 19:43:48 2556 [Warning] option 'innodb-autoextend-increment':
> unsigned value 67108864 adjusted to 1000
> 2016-01-22 19:43:48 a0c InnoDB: Warning: Using
> innodb_additional_mem_pool_size is DEPRECATED. This option may be removed
> in future releases, together with the option innodb_use_sys_malloc and with
> the InnoDB's internal memory allocator.
> 2016-01-22 19:43:48 2556 [Note] InnoDB: The InnoDB memory heap is disabled
> 2016-01-22 19:43:48 2556 [Note] InnoDB: Mutexes and rw_locks use Windows
> interlocked functions
> 2016-01-22 19:43:48 2556 [Note] InnoDB: Compressed tables use zlib 1.2.3
> 2016-01-22 19:43:48 2556 [Note] InnoDB: Not using CPU crc32 instructions
> 2016-01-22 19:43:48 2556 [Note] InnoDB: Initializing buffer pool, size =
> 1.0G
> 2016-01-22 19:43:48 2556 [Note] InnoDB: Completed initialization of buffer
> pool
> 2016-01-22 19:43:48 2556 [Note] InnoDB: Highest supported file format is
> Barracuda.
> 2016-01-22 19:43:48 2556 [Note] InnoDB: The log sequence numbers 1600607
> and 1600607 in ibdata files do not match the log sequence number 154136116
> in the ib_logfiles!
> 2016-01-22 19:43:48 2556 [Note] InnoDB: Database was not shutdown normally!
> 2016-01-22 19:43:48 2556 [Note] InnoDB: Starting crash recovery.
> 2016-01-22 19:43:48 2556 [Note] InnoDB: Reading tablespace information from
> the .ibd files...
> 2016-01-22 19:43:48 2556 [ERROR] 

Re: mysql\innodb_table_stats.ibd. Cannot open tablespace

2016-01-22 Thread Lisa Smith
Hi Neil,

When you say you delete the current database, do you mean the database
files only? Are you doing a complete restore?

On Fri, Jan 22, 2016 at 1:23 PM, Neil Tompkins  wrote:

> Hi,
>
> Hoping someone can help me identify why I keep having to restore my
> database.  You can see below, that my machine shut down normally, yet when
> I restarted the machine back up, I'm getting the error
> 'mysql\innodb_table_stats.ibd. Cannot open tablespace' and am having to
> delete the current database and restore my backup.
>
> Here is my Error log file.
>
> 2016-01-22 18:06:28 2540 [Note] MySQL: Normal shutdown
>
> 2016-01-22 18:06:28 2540 [Note] Giving 0 client threads a chance to die
> gracefully
> 2016-01-22 18:06:28 2540 [Note] Event Scheduler: Purging the queue. 0
> events
> 2016-01-22 18:06:28 2540 [Note] Shutting down slave threads
> 2016-01-22 18:06:28 2540 [Note] Forcefully disconnecting 0 remaining
> clients
> 2016-01-22 18:06:28 2540 [Note] Binlog end
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'partition'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'PERFORMANCE_SCHEMA'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_SYS_DATAFILES'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
> 'INNODB_SYS_TABLESPACES'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
> 'INNODB_SYS_FOREIGN_COLS'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_SYS_FIELDS'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_SYS_COLUMNS'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_SYS_INDEXES'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
> 'INNODB_SYS_TABLESTATS'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_SYS_TABLES'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
> 'INNODB_FT_INDEX_TABLE'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
> 'INNODB_FT_INDEX_CACHE'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_FT_CONFIG'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
> 'INNODB_FT_BEING_DELETED'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_FT_DELETED'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
> 'INNODB_FT_DEFAULT_STOPWORD'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_METRICS'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
> 'INNODB_BUFFER_POOL_STATS'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
> 'INNODB_BUFFER_PAGE_LRU'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
> 'INNODB_CMP_PER_INDEX_RESET'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_CMPMEM'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_CMP_RESET'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_CMP'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_LOCKS'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_TRX'
> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'InnoDB'
> 2016-01-22 18:06:31 2540 [Note] InnoDB: FTS optimize thread exiting.
> 2016-01-22 18:06:31 2540 [Note] InnoDB: Starting shutdown...
> 2016-01-22 19:43:48 2556 [Note] Plugin 'FEDERATED' is disabled.
> 2016-01-22 19:43:48 2556 [Warning] option 'innodb-autoextend-increment':
> unsigned value 67108864 adjusted to 1000
> 2016-01-22 19:43:48 a0c InnoDB: Warning: Using
> innodb_additional_mem_pool_size is DEPRECATED. This option may be removed
> in future releases, together with the option innodb_use_sys_malloc and with
> the InnoDB's internal memory allocator.
> 2016-01-22 19:43:48 2556 [Note] InnoDB: The InnoDB memory heap is disabled
> 2016-01-22 19:43:48 2556 [Note] InnoDB: Mutexes and rw_locks use Windows
> interlocked functions
> 2016-01-22 19:43:48 2556 [Note] InnoDB: Compressed tables use zlib 1.2.3
> 2016-01-22 19:43:48 2556 [Note] InnoDB: Not using CPU crc32 instructions
> 2016-01-22 19:43:48 2556 [Note] InnoDB: Initializing buffer pool, size =
> 1.0G
> 2016-01-22 19:43:48 2556 [Note] InnoDB: Completed initialization of buffer
> pool
> 2016-01-22 19:43:48 2556 [Note] InnoDB: Highest supported file format is
> Barracuda.
> 2016-01-22 19:43:48 2556 [Note] InnoDB: The log sequence numbers 1600607
> and 1600607 in ibdata files do not match the log sequence number 154136116
> in the ib_logfiles!
> 2016-01-22 19:43:48 2556 [Note] InnoDB: Database was not shutdown normally!
> 2016-01-22 19:43:48 2556 [Note] InnoDB: Starting crash recovery.
> 2016-01-22 19:43:48 2556 [Note] InnoDB: Reading tablespace information from
> the .ibd files...
> 2016-01-22 19:43:48 2556 [ERROR] InnoDB: Attempted to open a 

Re: mysql\innodb_table_stats.ibd. Cannot open tablespace

2016-01-22 Thread Neil Tompkins
My disk drive has about 3 gb free space.  The sequence of events is

1. Create database
2. Restore the data
3. Use the database, SELECT data etc
4. Shut down PC
5. When I restart PC I get this error


Re: mysql\innodb_table_stats.ibd. Cannot open tablespace

2016-01-22 Thread Neil Tompkins
Sorry, to clarify: I have my database schema, and at the moment, when the
database crashes, I delete the crashed database, create a new database
using the schema, and then restore the data using an import routine I have
written in C# to import data from a CSV file into MySQL tables.

The database works fine, but seems to crash when I shut the PC down.


On Fri, Jan 22, 2016 at 8:47 PM, Lisa Smith  wrote:

> Hi Neil,
>
> When you say you delete the current database, do you mean the database
> files only? Are you doing a complete restore?
>
> On Fri, Jan 22, 2016 at 1:23 PM, Neil Tompkins <
> neil.tompk...@googlemail.com> wrote:
>
>> Hi,
>>
>> Hoping someone can help me identify why I keep having to restore my
>> database.  You can see below, that my machine shut down normally, yet when
>> I restarted the machine back up, I'm getting the error
>> 'mysql\innodb_table_stats.ibd. Cannot open tablespace' and am having to
>> delete the current database and restore my backup.
>>
>> Here is my Error log file.
>>
>> 2016-01-22 18:06:28 2540 [Note] MySQL: Normal shutdown
>>
>> 2016-01-22 18:06:28 2540 [Note] Giving 0 client threads a chance to die
>> gracefully
>> 2016-01-22 18:06:28 2540 [Note] Event Scheduler: Purging the queue. 0
>> events
>> 2016-01-22 18:06:28 2540 [Note] Shutting down slave threads
>> 2016-01-22 18:06:28 2540 [Note] Forcefully disconnecting 0 remaining
>> clients
>> 2016-01-22 18:06:28 2540 [Note] Binlog end
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'partition'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'PERFORMANCE_SCHEMA'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
>> 'INNODB_SYS_DATAFILES'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
>> 'INNODB_SYS_TABLESPACES'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
>> 'INNODB_SYS_FOREIGN_COLS'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_SYS_FIELDS'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_SYS_COLUMNS'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_SYS_INDEXES'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
>> 'INNODB_SYS_TABLESTATS'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_SYS_TABLES'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
>> 'INNODB_FT_INDEX_TABLE'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
>> 'INNODB_FT_INDEX_CACHE'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_FT_CONFIG'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
>> 'INNODB_FT_BEING_DELETED'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_FT_DELETED'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
>> 'INNODB_FT_DEFAULT_STOPWORD'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_METRICS'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
>> 'INNODB_BUFFER_POOL_STATS'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
>> 'INNODB_BUFFER_PAGE_LRU'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
>> 'INNODB_CMP_PER_INDEX_RESET'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
>> 'INNODB_CMP_PER_INDEX'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_CMPMEM'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_CMP_RESET'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_CMP'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_LOCKS'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_TRX'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'InnoDB'
>> 2016-01-22 18:06:31 2540 [Note] InnoDB: FTS optimize thread exiting.
>> 2016-01-22 18:06:31 2540 [Note] InnoDB: Starting shutdown...
>> 2016-01-22 19:43:48 2556 [Note] Plugin 'FEDERATED' is disabled.
>> 2016-01-22 19:43:48 2556 [Warning] option 'innodb-autoextend-increment':
>> unsigned value 67108864 adjusted to 1000
>> 2016-01-22 19:43:48 a0c InnoDB: Warning: Using
>> innodb_additional_mem_pool_size is DEPRECATED. This option may be removed
>> in future releases, together with the option innodb_use_sys_malloc and
>> with
>> the InnoDB's internal memory allocator.
>> 2016-01-22 19:43:48 2556 [Note] InnoDB: The InnoDB memory heap is disabled
>> 2016-01-22 19:43:48 2556 [Note] InnoDB: Mutexes and rw_locks use Windows
>> interlocked functions
>> 2016-01-22 19:43:48 2556 [Note] InnoDB: Compressed tables use zlib 1.2.3
>> 2016-01-22 19:43:48 2556 [Note] InnoDB: Not using CPU crc32 instructions
>> 2016-01-22 19:43:48 2556 [Note] InnoDB: Initializing buffer pool, size =
>> 1.0G
>> 2016-01-22 19:43:48 2556 [Note] InnoDB: Completed initialization of buffer
>> pool
>> 2016-01-22 19:43:48 2556 [Note] 

Re: mysql\innodb_table_stats.ibd. Cannot open tablespace

2016-01-22 Thread Neil Tompkins
One point to add that might be of interest is that the primary key and
foreign key for two of the related tables are VARCHAR(255), and my MySQL
version is '5.6.12-log'.

On Fri, Jan 22, 2016 at 8:47 PM, Lisa Smith  wrote:

> Hi Neil,
>
> When you say you delete the current database, do you mean the database
> files only? Are you doing a complete restore?
>
> On Fri, Jan 22, 2016 at 1:23 PM, Neil Tompkins <
> neil.tompk...@googlemail.com> wrote:
>
>> Hi,
>>
>> Hoping someone can help me identify why I keep having to restore my
>> database.  You can see below, that my machine shut down normally, yet when
>> I restarted the machine back up, I'm getting the error
>> 'mysql\innodb_table_stats.ibd. Cannot open tablespace' and am having to
>> delete the current database and restore my backup.
>>
>> Here is my Error log file.
>>
>> 2016-01-22 18:06:28 2540 [Note] MySQL: Normal shutdown
>>
>> 2016-01-22 18:06:28 2540 [Note] Giving 0 client threads a chance to die
>> gracefully
>> 2016-01-22 18:06:28 2540 [Note] Event Scheduler: Purging the queue. 0
>> events
>> 2016-01-22 18:06:28 2540 [Note] Shutting down slave threads
>> 2016-01-22 18:06:28 2540 [Note] Forcefully disconnecting 0 remaining
>> clients
>> 2016-01-22 18:06:28 2540 [Note] Binlog end
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'partition'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'PERFORMANCE_SCHEMA'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
>> 'INNODB_SYS_DATAFILES'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
>> 'INNODB_SYS_TABLESPACES'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
>> 'INNODB_SYS_FOREIGN_COLS'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_SYS_FIELDS'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_SYS_COLUMNS'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_SYS_INDEXES'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
>> 'INNODB_SYS_TABLESTATS'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_SYS_TABLES'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
>> 'INNODB_FT_INDEX_TABLE'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
>> 'INNODB_FT_INDEX_CACHE'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_FT_CONFIG'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
>> 'INNODB_FT_BEING_DELETED'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_FT_DELETED'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
>> 'INNODB_FT_DEFAULT_STOPWORD'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_METRICS'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
>> 'INNODB_BUFFER_POOL_STATS'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
>> 'INNODB_BUFFER_PAGE_LRU'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
>> 'INNODB_CMP_PER_INDEX_RESET'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin
>> 'INNODB_CMP_PER_INDEX'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_CMPMEM'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_CMP_RESET'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_CMP'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_LOCKS'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'INNODB_TRX'
>> 2016-01-22 18:06:31 2540 [Note] Shutting down plugin 'InnoDB'
>> 2016-01-22 18:06:31 2540 [Note] InnoDB: FTS optimize thread exiting.
>> 2016-01-22 18:06:31 2540 [Note] InnoDB: Starting shutdown...
>> 2016-01-22 19:43:48 2556 [Note] Plugin 'FEDERATED' is disabled.
>> 2016-01-22 19:43:48 2556 [Warning] option 'innodb-autoextend-increment':
>> unsigned value 67108864 adjusted to 1000
>> 2016-01-22 19:43:48 a0c InnoDB: Warning: Using
>> innodb_additional_mem_pool_size is DEPRECATED. This option may be removed
>> in future releases, together with the option innodb_use_sys_malloc and
>> with
>> the InnoDB's internal memory allocator.
>> 2016-01-22 19:43:48 2556 [Note] InnoDB: The InnoDB memory heap is disabled
>> 2016-01-22 19:43:48 2556 [Note] InnoDB: Mutexes and rw_locks use Windows
>> interlocked functions
>> 2016-01-22 19:43:48 2556 [Note] InnoDB: Compressed tables use zlib 1.2.3
>> 2016-01-22 19:43:48 2556 [Note] InnoDB: Not using CPU crc32 instructions
>> 2016-01-22 19:43:48 2556 [Note] InnoDB: Initializing buffer pool, size =
>> 1.0G
>> 2016-01-22 19:43:48 2556 [Note] InnoDB: Completed initialization of buffer
>> pool
>> 2016-01-22 19:43:48 2556 [Note] InnoDB: Highest supported file format is
>> Barracuda.
>> 2016-01-22 19:43:48 2556 [Note] InnoDB: The log sequence numbers 1600607
>> and 1600607 in ibdata files do not match the log sequence 

Re: mysql\innodb_table_stats.ibd. Cannot open tablespace

2016-01-22 Thread shawn l.green

Hello Neil,

On 1/22/2016 3:23 PM, Neil Tompkins wrote:

2016-01-22 19:43:48 2556 [ERROR] InnoDB: Attempted to open a previously
opened tablespace. Previous tablespace mysql/innodb_table_stats uses space
ID: 1 at filepath: .\mysql\innodb_table_stats.ibd. Cannot open tablespace
sportstrader/event which uses space ID: 1 at filepath:
.\sportstrader\event.ibd
InnoDB: Error: could not open single-table tablespace file
.\sportstrader\event.ibd


This is where you need to focus. Something has modified the tablespace 
ID in the event.ibd file (the table `sportstrader`.`event`) so that it 
has the same ID as one of the system tables (the one used to store 
persistent stats for the InnoDB storage engine).


Each tablespace must have its own unique ID value.
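
A quick way to see which space IDs the server currently has registered is 
the INFORMATION_SCHEMA table below (a sketch; the table exists in MySQL 5.6, 
needs the PROCESS privilege, and is of course only queryable once the 
server is running again):

SELECT SPACE, NAME
FROM INFORMATION_SCHEMA.INNODB_SYS_TABLESPACES
ORDER BY SPACE;
-- Two entries sharing the same SPACE value indicate the kind of
-- conflict described here.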

This could have been anything from a bad sector on disk, a stray write 
by some other program, an intentional file-system-level change by some 
nefarious person, a bad memory cell holding the ID value of the 
`event` table's first page that caused a wrong value to be written to 
disk when the table was closed, a bad backup/restore cycle ... the list can 
go on practically forever.  The problem is that the ID value in the .ibd 
file now conflicts with that of another tablespace file.


The resolution is to delete that table then restore it from a known good 
copy (preferably one from a backup).  If necessary, use the content of 
the Binary log to recover changes made to the table since that backup 
was made.


If you don't have a good copy (or any Binary log content) then try this:

1) Make a copy of event.ibd then remove it from that folder
2) Restart mysqld; it will complain about the missing file in the 
Error log. This is expected.

3) Use the techniques here to DROP that table.
http://dev.mysql.com/doc/refman/5.6/en/innodb-troubleshooting-datadict.html
4) Create an empty copy of that table (correct schema, no data)
5) Use ALTER TABLE...DISCARD TABLESPACE then ALTER TABLE...IMPORT 
TABLESPACE to replace the empty .ibd file you have now with the one you 
backed up earlier.
(demonstrated at the bottom of that link I just provided for "orphaned" 
tablespaces)
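
A minimal SQL sketch of steps 4 and 5, assuming the affected table is 
`sportstrader`.`event`, that its original CREATE TABLE definition is known 
(the column list below is only a placeholder), and that the copy of 
event.ibd saved in step 1 is still available:

USE sportstrader;

-- 4) Recreate the table with its original definition
CREATE TABLE event (
  id INT NOT NULL PRIMARY KEY
  -- ... remaining original columns ...
) ENGINE=InnoDB;

-- 5) Discard the freshly created, empty tablespace ...
ALTER TABLE event DISCARD TABLESPACE;

-- ... copy the saved event.ibd back into the sportstrader database
-- directory at the file-system level, then attach it:
ALTER TABLE event IMPORT TABLESPACE;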


Then, when you can, schedule a full hardware check to look for 
potentially failing memory or disk media. That's the most common cause 
for a fault like this.


Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



MySQL Connector/C++ 1.1.7 has been released

2016-01-20 Thread Hery Ramilison

Dear MySQL Users,

A new GA (general availability) version of MySQL Connector/C++ has
been made available: MySQL Connector/C++ 1.1.7 GA. The MySQL
Connector/C++ provides a C++ API for connecting client applications to
the MySQL Server 5.5 or newer.

You can download the production release at:

http://dev.mysql.com/downloads/connector/cpp/1.1.html

MySQL Connector C++ (Commercial) will be available for download on the
My Oracle Support (MOS) website. This release will be available on eDelivery
(OSDC) in next month's upload cycle.

The MySQL driver for C++ offers an easy to use API derived from JDBC
4.0. MySQL Workbench has used it successfully for years.

We have improved the driver since the last GA release. Please see the
documentation and the CHANGES file in the source distribution for a
detailed description of bugs that have been fixed. Bug descriptions are
also listed below.

Enjoy!

==

Changes in MySQL Connector/C++ 1.1.7

   Configuration Notes

 * Binary distributions for this release of MySQL
   Connector/C++ were linked against libmysqlclient from
   MySQL 5.7.10, except for OS X 10.8/10.9, for which
   distributions were linked against MySQL 5.7.9. This
   enables Connector/C++ to take advantage of features
   present in recent client library versions. Some examples:

  + Support for the MySQL JSON data type is available.
Current versions of MySQL Workbench require JSON
support, so to build MySQL Workbench 6.3.5 or higher
from source, it is necessary to use a version of
Connector/C++ at least as recent as 1.1.7.

  + Applications will attempt to establish a secure
(encrypted) connection by default whenever the
server is enabled to support secure connections, and
fall back to an unencrypted connection otherwise.
(This is as described at Configuring MySQL to Use
Secure Connections

(http://dev.mysql.com/doc/refman/5.7/en/using-secure-connections.html).)
To enforce a secure connection, such that an error occurs
if secure connections are not available, applications can
enable the sslEnforce connection option.
   To build Connector/C++ from source, you must use either a
   General Availability version of MySQL 5.7 (5.7.9 or
   higher) or Connector/C 6.1.8 or higher. Set the MYSQL_DIR
   CMake option appropriately at configuration time as
   necessary. (Bug #22351273)

   Security Notes

 * The linked OpenSSL library for Connector/C++ Commercial
   has been updated to version 1.0.1q. Issues fixed in the
   new OpenSSL version are described at
   http://www.openssl.org/news/vulnerabilities.html.
   This change does not affect Oracle-produced MySQL
   Community builds of Connector/C++, which use the yaSSL
   library instead.

   Spatial Data Support

 * The required version of the Boost library for
   Connector/C++ builds has been raised to 1.56.0.

   Bugs Fixed

 * MySQL_Prepared_ResultSet::relative() failed to fetch the
   record due to a missing proxy->fetch() call. (Bug
   #21152054)

 * During Connector/C++ builds, the MySQL Server CXXFLAGS
   and CFLAGS values were used rather than the system
   default values. To specify explicitly to use the server
   values, enable the new USE_SERVER_CXXFLAGS CMake option.
   (Bug #77655, Bug #21391025)

On Behalf of the MySQL/ORACLE RE Team



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



MySQL Cluster 7.2.23 has been released

2016-01-19 Thread Prashant Tekriwal

Dear MySQL Users,

MySQL Cluster is the distributed, shared-nothing variant of MySQL.
This storage engine provides:

  - In-Memory storage - Real-time performance (with optional
checkpointing to disk)
  - Transparent Auto-Sharding - Read & write scalability
  - Active-Active/Multi-Master geographic replication
  - 99.999% High Availability with no single point of failure
and on-line maintenance
  - NoSQL and SQL APIs (including C++, Java, http and Memcached)

MySQL Cluster 7.2.23, has been released and can be downloaded from

http://www.mysql.com/downloads/cluster/

where you will also find Quick Start guides to help you get your
first MySQL Cluster database up and running.

The release notes are available from

http://dev.mysql.com/doc/relnotes/mysql-cluster/7.2/en/index.html

MySQL Cluster enables users to meet the database challenges of next
generation web, cloud, and communications services with uncompromising
scalability, uptime and agility.

More details can be found at

http://www.mysql.com/products/cluster/

Enjoy !



== 


Changes in MySQL Cluster NDB 7.2.23 (5.5.47-ndb-7.2.23) (2016-01-19)

   MySQL Cluster NDB 7.2.23 is a new release of MySQL Cluster,
   incorporating new features in the NDB storage engine, and
   fixing recently discovered bugs in previous MySQL Cluster NDB
   7.2 development releases.

   Obtaining MySQL Cluster NDB 7.2.  MySQL Cluster NDB 7.2
   source code and binaries can be obtained from
http://dev.mysql.com/downloads/cluster/.

   This release also incorporates all bugfixes and changes made
   in previous MySQL Cluster releases, as well as all bugfixes
   and feature changes which were added in mainline MySQL 5.5
   through MySQL 5.5.47 (see Changes in MySQL 5.5.47
   (2015-12-07)
   (http://dev.mysql.com/doc/relnotes/mysql/5.5/en/news-5-5-47.html)).

   Bugs Fixed

 * In debug builds, a WAIT_EVENT while polling caused
   excessive logging to stdout. (Bug #22203672)

 * When executing a schema operation such as CREATE TABLE on
   a MySQL Cluster with multiple SQL nodes, it was possible
   for the SQL node on which the operation was performed to
   time out while waiting for an acknowledgement from the
   others. This could occur when different SQL nodes had
   different settings for --ndb-log-updated-only,
   --ndb-log-update-as-write, or other mysqld options
    affecting binary logging by NDB.
   This happened due to the fact that, in order to
   distribute schema changes between them, all SQL nodes
   subscribe to changes in the ndb_schema system table, and
    that all SQL nodes are made aware of each other's
   subscriptions by subscribing to TE_SUBSCRIBE and
   TE_UNSUBSCRIBE events. The names of events to subscribe
   to are constructed from the table names, adding REPL$ or
   REPLF$ as a prefix. REPLF$ is used when full binary
   logging is specified for the table. The issue described
   previously arose because different values for the options
   mentioned could lead to different events being subscribed
   to by different SQL nodes, meaning that all SQL nodes
   were not necessarily aware of each other, so that the
   code that handled waiting for schema distribution to
   complete did not work as designed.
   To fix this issue, MySQL Cluster now treats the
   ndb_schema table as a special case and enforces full
   binary logging at all times for this table, independent
   of any settings for mysqld binary logging options. (Bug
   #22174287, Bug #79188)

 * When using ndb_mgm STOP -f to force a node shutdown, even
   when this triggered a complete shutdown of the cluster, it
   was possible to lose data if a sufficient number of nodes
   were shut down, triggering a cluster shutdown, and the
   timing was such that SUMA handovers had been made to
   nodes already in the process of shutting down. (Bug
   #17772138)

 * The internal NdbEventBuffer::set_total_buckets() method
   calculated the number of remaining buckets incorrectly.
   This caused any incomplete epoch to be prematurely
   completed when the SUB_START_CONF signal arrived out of
   order. Any events belonging to this epoch arriving later
   were then ignored, and so effectively lost, which
   resulted in schema changes not being distributed
   correctly among SQL nodes. (Bug #79635, Bug #22363510)

 * Schema events were appended to the binary log out of
   order relative to non-schema events. This was caused by
   the fact that the binlog injector did not properly handle
   the case where schema events and non-schema events were
   from different epochs.
   This fix modifies the handling of events from the two
   schema and non-schema event streams such that events are
   now always handled one epoch at a time, 

MySQL Cluster 7.4.9 has been released

2016-01-18 Thread Lars Tangvald


Dear MySQL Users, 

MySQL Cluster is the distributed, shared-nothing variant of MySQL. 
This storage engine provides: 

- In-memory persistent storage - Real-time performance 
- Transparent Auto-Sharding - Read & write scalability 
- Active-Active/Multi-Master geographic replication 
- 99.999% High Availability with no single point of failure 
and on-line maintenance 
- NoSQL and SQL APIs (including C++, Java, http, Memcached 
and JavaScript/Node.js) 

MySQL Cluster 7.4 makes significant advances in performance; 
operational efficiency (such as enhanced reporting and faster restarts 
and upgrades) and conflict detection and resolution for active-active 
replication between MySQL Clusters. 

MySQL Cluster 7.4.9, has been released and can be downloaded from 

http://www.mysql.com/downloads/cluster/ 

where you will also find Quick Start guides to help you get your 
first MySQL Cluster database up and running. 

The release notes are available from 

http://dev.mysql.com/doc/relnotes/mysql-cluster/7.4/en/index.html 

MySQL Cluster enables users to meet the database challenges of next 
generation web, cloud, and communications services with uncompromising 
scalability, uptime and agility. 

More details can be found at 

http://www.mysql.com/products/cluster/ 

Enjoy ! 


Changes in MySQL Cluster NDB 7.4.9 (5.6.28-ndb-7.4.9) (2016-01-18 ) 

MySQL Cluster NDB 7.4.9 is a new release of MySQL Cluster 
7.4, based on MySQL Server 5.6 and including features in 
version 7.4 of the NDB storage engine, as well as fixing 
recently discovered bugs in previous MySQL Cluster releases. 

Obtaining MySQL Cluster NDB 7.4. MySQL Cluster NDB 7.4 
source code and binaries can be obtained from 
http://dev.mysql.com/downloads/cluster/. 

For an overview of changes made in MySQL Cluster NDB 7.4, see 
MySQL Cluster Development in MySQL Cluster NDB 7.4 
(http://dev.mysql.com/doc/refman/5.6/en/mysql-cluster-development-5-6-ndb-7-4.html). 

This release also incorporates all bugfixes and changes made 
in previous MySQL Cluster releases, as well as all bugfixes 
and feature changes which were added in mainline MySQL 5.6 
through MySQL 5.6.28 (see Changes in MySQL 5.6.28 
(2015-12-07) 
(http://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-28.html)). 

Functionality Added or Changed 

* Important Change: Previously, the NDB scheduler always 
optimized for speed against throughput in a predetermined 
manner (this was hard coded); this balance can now be set 
using the SchedulerResponsiveness data node configuration 
parameter. This parameter accepts an integer in the range 
of 0-10 inclusive, with 5 as the default. Higher values 
provide better response times relative to throughput. 
Lower values provide increased throughput, but impose 
longer response times. (Bug #78531, Bug #21889312) 

* Added the tc_time_track_stats table to the ndbinfo 
information database. This table provides time-tracking 
information relating to transactions, key operations, and 
scan operations performed by NDB. (Bug #78533, Bug 
#21889652) 

* Cluster Replication: Normally, RESET SLAVE causes all 
entries to be deleted from the mysql.ndb_apply_status 
table. This release adds the ndb_clear_apply_status 
system variable, which makes it possible to override this 
behavior. This variable is ON by default; setting it to 
OFF keeps RESET SLAVE from purging the ndb_apply_status 
table. (Bug #12630403) 
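
A minimal sketch of how the new variable might be used on a replication 
slave that needs to keep the contents of mysql.ndb_apply_status across a 
RESET SLAVE (assuming the variable is dynamic and settable with SET GLOBAL, 
which is not stated explicitly above): 

-- Keep mysql.ndb_apply_status intact while resetting the slave
SET GLOBAL ndb_clear_apply_status = OFF;
RESET SLAVE;

-- Restore the default behavior afterwards
SET GLOBAL ndb_clear_apply_status = ON;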

Bugs Fixed 

* Important Change: Users can now set the number and length 
of connection timeouts allowed by most NDB programs with 
the --connect-retries and --connect-retry-delay command 
line options introduced for the programs in this release. 
For ndb_mgm, --connect-retries supersedes the existing 
--try-reconnect option. (Bug #57576, Bug #11764714) 

* When executing a schema operation such as CREATE TABLE on 
a MySQL Cluster with multiple SQL nodes, it was possible 
for the SQL node on which the operation was performed to 
time out while waiting for an acknowledgement from the 
others. This could occur when different SQL nodes had 
different settings for --ndb-log-updated-only, 
--ndb-log-update-as-write, or other mysqld options 
affecting binary logging by NDB. 
This happened due to the fact that, in order to 
distribute schema changes between them, all SQL nodes 
subscribe to changes in the ndb_schema system table, and 
that all SQL nodes are made aware of each other's 
subscriptions by subscribing to TE_SUBSCRIBE and 
TE_UNSUBSCRIBE events. The names of events to subscribe 
to are constructed from the table names, adding REPL$ or 
REPLF$ as a prefix. REPLF$ is used when full binary 
logging is specified for the table. The issue described 
previously arose because different values for the options 
mentioned could lead to different events being subscribed 
to by different SQL nodes, meaning that all SQL nodes 
were not necessarily aware of each other, so that the 
code that handled waiting for schema distribution to 
complete did not work as designed. 

Re: Renaming a table that has a trigger, then attempting to drop that trigger results in 'table doesn't exist' error.

2016-01-18 Thread shawn l.green

Hello Michael,

On 1/6/2016 12:51 PM, Michael Vaughan wrote:

If you execute the script below, you will get the following error:
'Error Code: 1146. Table 'testschema.TestRenamed' doesn't exist"

delimiter //

CREATE TABLE Test(
   id  int not null primary key auto_increment,
   name varchar(255)
)//

CREATE TRIGGER TEST_TRIGGER
BEFORE INSERT ON Test
FOR EACH ROW
BEGIN
SET NEW.name = CONCAT(NEW.name, '_X');
END//

RENAME TABLE Test TO TestRenamed//

DROP TRIGGER Test.TEST_TRIGGER//



Are there any workarounds for this?



Thank you for also reporting this as a bug.

http://bugs.mysql.com/bug.php?id=79873

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



MySQL for Excel 1.3.6 has been released

2016-01-11 Thread Gipson Pulla
Dear MySQL users,

The MySQL Windows Experience Team is proud to announce the
release of MySQL for Excel version 1.3.6. This is a maintenance
release for 1.3.x. It can be used for production environments.

MySQL for Excel is an application plug-in enabling data analysts
to very easily access and manipulate MySQL data within Microsoft
Excel. It enables you to directly work with a MySQL database from
within Microsoft Excel so you can easily do tasks such as:

  * Importing MySQL Data into Excel

  * Exporting Excel data directly into MySQL to a new or existing table

  * Editing MySQL data directly within Excel


MySQL for Excel is installed using the MySQL Installer for Windows.

The MySQL Installer comes in 2 versions

- Full (150 MB) which includes a complete set of MySQL products with
  their binaries included in the download.

- Web (1.5 MB - a network install) which will just pull the MySQL for
  Excel over the web and install it when run.

You can download MySQL Installer from our official Downloads page at
http://dev.mysql.com/downloads/installer/

The MySQL for Excel product can also be downloaded by using the product
standalone installer found at this link
http://dev.mysql.com/downloads/windows/excel/

Changes in MySQL for Excel 1.3.6 (2016-01-07)

 * Functionality Added or Changed

 * Bugs Fixed

   Functionality Added or Changed

 * A new Generate an INSERT statement for each data row
   option was added to the Advanced Options dialog for
   Append and Export Data operations. It is disabled by
   default. When unchecked, a single INSERT statement that
   inserts all affected rows is generated as a bulk data
   operation, which performs better than multiple INSERT
   statements. When checked, an INSERT statement is generated
   for each data row being appended or exported in the
   resulting SQL query sent to the MySQL server. Row-by-row
   inserts are slower, but they make it easier to determine
   which row causes an error, since the MySQL server
   processes them row by row (see the sketch after this
   list).

 * Added column properties to the Export Data dialog to
   support the definition of Auto Increment and Default
   Value for new MySQL tables where the data will be
   exported.
   Added a context menu to declare a numeric field to be
   Unsigned or to have Zero-Fill, accessible by
   right-clicking the Data Type combo box.
   Added the TIMESTAMP data type to the full list of valid
   data types, as it was not being shown before. Added help
   tool tips to fields in the Export Data dialog that
   explain in detail how the column properties work in a
   MySQL database.
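
   To illustrate the difference between the two settings, here is a
   sketch of the SQL they might produce for a three-row export into a
   hypothetical table t (the table and column names are made up for
   the example):

   -- Option unchecked (default): one bulk INSERT for all rows
   INSERT INTO t (a, b) VALUES (1, 'x'), (2, 'y'), (3, 'z');

   -- Option checked: one INSERT per data row, so a failing row is
   -- easy to identify
   INSERT INTO t (a, b) VALUES (1, 'x');
   INSERT INTO t (a, b) VALUES (2, 'y');
   INSERT INTO t (a, b) VALUES (3, 'z');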

   Bugs Fixed

 * The Pivot Table feature was updated to work with Excel
   2016. (Bug #22387425)

 * Closing the plugin via the Close button in the panel
   would not toggle the MySQL for Excel checkbox in the Data
   ribbon. (Bug #22374026, Bug #79655)

 * Fixed the content detector for the active Excel cell
   after importing data and after loading database objects
   when a database is opened. Now, the data options are not
   enabled until a table is selected. (Bug #22372915)

 * Append and Export operations would not append/export
   boolean data. (Bug #22138966)

 * The MySQL for Excel Refresh All function would throw an
   unhandled exception. (Bug #22110627, Bug #78941)

 * With Office 2007, data would not auto-save after closing
   Excel. (Bug #22081313, Bug #78910)

 * Numeric data stored in text columns is now correctly
   formatted by Excel as text, and not as the "General"
   format that automatically formats cells as numbers when
   the data is stored in a text object. This fixes the issue
   where text columns were formatted as numbers, which
   stripped leading zeros. (Bug #20905284, Bug #76738)

 * Changed settings for containers and forms to let all UI
   components automatically scale on different DPI settings.
   Also removed the restriction on resizing the MySQL for
   Excel pane, so that if a different DPI setting is used
   and the fonts look too big, the pane can be resized to
   show its contents as well as possible. (Bug #18099454, Bug
   #71389)

Quick links:
MySQL for Excel documentation: http://dev.mysql.com/doc/en/mysql-for-excel.html.
Inside MySQL blog (NEW blog home): http://insidemysql.com/
MySQL on Windows blog (OLD blog home): http://blogs.oracle.com/MySQLOnWindows.
MySQL for Excel forum: http://forums.mysql.com/list.php?172.
MySQL YouTube channel: http://www.youtube.com/user/MySQLChannel.

Enjoy and thanks for the support!
The MySQL on Windows team at Oracle.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: commands out of sync, you can't run this command now'

2016-01-08 Thread Reindl Harald



On 08.01.2016 at 10:14, Nitin Mehta wrote:

Looks like some kind of locking mechanism in the application. Should not be a 
database issue.


not true - this is a native mysql error message!
honestly both of you should have used Google

http://dev.mysql.com/doc/refman/5.7/en/commands-out-of-sync.html

http://stackoverflow.com/questions/15798762/mysqli-commands-out-of-sync-you-cant-run-this-command-now


 On Friday, January 8, 2016 10:47 AM, geetanjali mehra 
 wrote:


  Hi all,

Suddenly I have started getting the below error from our web site:

'commands out of sync, you can't run this command now'

This error is sometimes generated when opening a page, and immediately
after refreshing the page the error is gone.

Can you please help me to troubleshoot this issue, so that the above error
disappears permanently? Is it a database issue or application side issue?






Re: commands out of sync, you can't run this command now'

2016-01-08 Thread Peter Brawley



On 1/8/2016 3:14, Nitin Mehta wrote:

Looks like some kind of locking mechanism in the application. Should not be a 
database issue.

 On Friday, January 8, 2016 10:47 AM, geetanjali mehra 
 wrote:
  


  Hi all,

Suddenly I have started getting the below error from our web site:

'commands out of sync, you can't run this command now'

This error is sometimes generated when opening a page, and immediately
after refreshing the page the error is gone.

Can you please help me to troubleshoot this issue, so that the above error
disappears permanently? Is it a database issue or application side issue?


http://dev.mysql.com/doc/refman/5.7/en/commands-out-of-sync.html

PB

-




Best Regards,
Geetanjali Mehra
Senior Database Administrator






--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



commands out of sync, you can't run this command now'

2016-01-07 Thread geetanjali mehra
Hi all,

Suddenly I have started getting the below error from our web site:

'commands out of sync, you can't run this command now'

This error is sometimes generated when opening a page, and immediately
after refreshing the page the error is gone.

Can you please help me to troubleshoot this issue, so that the above error
disappears permanently? Is it a database issue or application side issue?



Best Regards,
Geetanjali Mehra
Senior Database Administrator


MySQL for Visual Studio 1.2.6 has been released

2016-01-07 Thread Prashant Tekriwal

Dear MySQL users,

The MySQL Windows Experience Team is proud to announce the release
of MySQL for Visual Studio 1.2.6.

This is a maintenance release for 1.2.x. It can be used for
production environments.

MySQL for Visual Studio is a product that includes all of the
Visual Studio integration functionality to create and manage
MySQL databases when developing .NET applications.

MySQL for Visual Studio is installed using the MySQL Installer
for Windows which comes in 2 versions:
   * Full (150 MB) which includes a complete set of MySQL products
 with their binaries included in the downloaded bundle.
   * Web (1.5 MB - a network install) which will just pull MySQL
 for Visual Studio over the web and install it when run.

You can download MySQL Installer from our official Downloads page at
http://dev.mysql.com/downloads/installer/.

MySQL for Visual Studio can also be downloaded by using the product
standalone installer found at
http://dev.mysql.com/downloads/windows/visualstudio/.


Changes in MySQL for Visual Studio 1.2.6 (2016-01-07)

   Bugs Fixed

 * With MySQL for Visual Studio installed, attempting to
   create MSSQL connections would fail and throw an
   exception. (Bug #22122881, Bug #77441)

 * The Tools, Extensions and Updates, Installed, Tools
   window always reported "1.0.0" as the current MySQL for
   Visual Studio version number. (Bug #22114385)

 * Executing Generate Database from Model from an existing
   Entity Framework model would throw an exception related
   to an error with the T4 template file, which then aborted
   the creation of the database. The Connector/Net Entity
   Framework assembly (MySql.Data.Entity.EF6.dll) path was
   fixed in the MySQL T4 template file (SSDLToMySQL.tt).
   (Bug #20234532, Bug #74388)

 * With Visual Studio 2012, generating a database from a
   model would fail and throw an exception. However,
   clicking OK after the error would cause the action to
   succeed. (Bug #18181649)


What Is New In MySQL for Visual Studio 1.2
---
- New MySQL Project Items for creating data views in Windows Forms and
  ASP.NET MVC web applications.

- A new option in the web configuration tool for the ASP.NET Personalization
  Provider (this feature requires MySQL Connector/NET 6.9 or newer).

- A new option in the web configuration tool for the ASP.NET Site Map
  Provider (this feature requires MySQL Connector/NET 6.9 or newer).

- A new option for the MySQLSimpleMembership provider in the web
  configuration tool. (This feature requires MySQL Connector/NET or newer).


MySQL Windows Forms Project Item
---
This Project Item is available on the Add New Item dialog in Visual Studio
when adding a new item to an existing project.

The dialog presented to create the MySQL Windows Forms Project Item
automates the generation of a Windows Form, representing a view for
MySQL data available through an existing Entity Framework model
entity containing a MySQL table or view.

Different view types are available to present the data:
   * Single-column: A form that contains one control for each existing
 column in the table, with navigation controls, and that allows CRUD
 operations. All controls can include validations for numeric and
 DateTime data types.

   * Grid: A form with a data grid view that contains navigation controls.

   * Master-detail: A form with a single control layout for the Parent
 table and a data grid view to navigate through the child table's data.

Supported with C# or Visual Basic language. This feature requires
Connector/NET 6.7.5, 6.8.3 or 6.9.x.

For more details on the features included check the documentation at:
https://dev.mysql.com/doc/connector-net/en/visual-studio-project-items-forms.html


MySQL ASP.NET MVC Project Item

This Project Item is available on the Add New Item dialog in Visual Studio
when adding a new item to an existing project.

The dialog presented to create the MySQL ASP.NET MVC Item automates the
generation of a controller and its corresponding view, representing a view
for MySQL data available through an existing Entity Framework model entity
containing a MySQL table or view. The MVC versions supported by this wizard
are 3 when using Visual Studio 2010 or 2012, and 4 when using Visual Studio
2013 or greater.

The generation of the MVC items is done by creating an Entity Framework data
model either with Entity Framework version 5 or 6 depending on the user's
selection.

Supported with C# or Visual Basic language. This feature requires
Connector/NET 6.7.5, 6.8.3 or 6.9.x.

For more details on the features included check the documentation at:
https://dev.mysql.com/doc/connector-net/en/visual-studio-project-items-mvc.html


New option in web configuration tool for the ASP.NET Personalization 

Renaming a table that has a trigger, then attempting to drop that trigger results in 'table doesn't exist' error.

2016-01-06 Thread Michael Vaughan
If you execute the script below, you will get the following error: 
'Error Code: 1146. Table 'testschema.TestRenamed' doesn't exist"

delimiter //

CREATE TABLE Test(
  id  int not null primary key auto_increment,
  name varchar(255)
)//

CREATE TRIGGER TEST_TRIGGER
BEFORE INSERT ON Test
FOR EACH ROW
BEGIN
   SET NEW.name = CONCAT(NEW.name, '_X');
END//

RENAME TABLE Test TO TestRenamed//

DROP TRIGGER Test.TEST_TRIGGER//



Are there any workarounds for this?
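
(One possible workaround, sketched here rather than taken from a reply:
drop the trigger while the table still has its original name, rename the
table, then recreate the trigger against the new name.)

delimiter //

DROP TRIGGER Test.TEST_TRIGGER//

RENAME TABLE Test TO TestRenamed//

CREATE TRIGGER TEST_TRIGGER
BEFORE INSERT ON TestRenamed
FOR EACH ROW
BEGIN
   SET NEW.name = CONCAT(NEW.name, '_X');
END//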

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: help with query to count rows while excluding certain rows

2016-01-02 Thread Larry Martell
On Fri, Jan 1, 2016 at 9:31 PM, Peter Brawley
 wrote:
> On 1/1/2016 19:24, Larry Martell wrote:
>>
>> On Fri, Jan 1, 2016 at 2:12 PM, Peter Brawley
>>  wrote:
>>>
>>> On 12/31/2015 0:51, Larry Martell wrote:

 I need to count the number of rows in a table that are grouped by a
 list of columns, but I also need to exclude rows that have more than
 some count when grouped by a different set of columns. Conceptually,
 this is not hard, but I am having trouble doing this efficiently.

 My first counting query would be this:

 SELECT count(*)
 FROM cst_rollup
 GROUP BY target_name_id, ep, roiname, recipe_process,
 recipe_product, recipe_layer, f_tag_bottom,
 measname, recipe_id

 But from this count I need to subtract the count of rows that have
 more than 50 rows with a different grouping:

 SELECT count(*)
 FROM cst_rollup
 GROUP BY target_name_id, ep, wafer_id
 HAVING count(*) >= 50

 As you can see, the second query has wafer_id, but the first query does
 not.

 Currently I am doing this in python, and it's slow. In my current
 implementation I have one query, and it selects the columns (i.e.
 doesn't just count), and I have added wafer_id:

 SELECT target_name_id, ep, roiname, recipe_process,
 recipe_product, recipe_layer, f_tag_bottom,
 measname, recipe_id, wafer_id
 FROM cst_rollup

 Then I go through the result set (which can be over 200k rows) and I
 count the number of rows with matching (target_name_id, ep, wafer_id).
 Then I go through the rows again and regroup them without wafer_id,
 but skipping the rows that have more than 50 rows for that row's
 (target_name_id, ep, wafer_id).

 Is this clear to everyone what I am trying to do?
>>>
>>>
>>> If I've understood this correctly, the resultset you wish to aggregate on
>>> is
>>> ...
>>>
>>> select target_name_id, ep, wafer_id
>>> from cst_rollup a
>>> left join (   -- exclude rows for which wafer_id count >= 50
>>>select target_name_id, ep, wafer_id, count(*) n
>>>from cst_rollup
>>>group by target_name_id, ep, wafer_id
>>>having n >= 50
>>> ) b using ( target_name_id, ep, wafer_id )
>>> where b.target_name_id is null ;
>>>
>>> If that's so, you could assemble that resultset in a temp table then run
>>> the
>>> desired aggregate query on it, or you could aggregate on it directly as a
>>> subquery.
>>
>> That query gives:
>>
>> ERROR 1137 (HY000): Can't reopen table: 'a'
>
>
> So, it's a temporary table, and you'll need to make that not so.

Yes, cst_rollup is a temp table. The underlying table is millions of
rows (with 300 columns) so for efficiency a subset of the rows and
columns are selected into the temp table based on some user input.
It's just the rows in the temp table that are of interest for the
current report.

I was able to get this working with a second temp table:

CREATE TEMPORARY TABLE rollup_exclude
SELECT target_name_id, ep, wafer_id, count(*) n
FROM cst_rollup
GROUP BY target_name_id, ep, wafer_id
HAVING n >= 50

And then:

SELECT count(*)
FROM cst_rollup
LEFT JOIN(
SELECT target_name_id, ep, wafer_id
FROM rollup_exclude) b
USING (target_name_id, ep, wafer_id)
WHERE b.target_name_id IS NULL
GROUP by target_name_id, ep, roiname, recipe_process, recipe_product,
recipe_layer, f_tag_bottom, measname, recipe_id

And the rowcount from that query gave me what I needed.

Thanks very much for the help Peter, you gave me a push toward the right path.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: help with query to count rows while excluding certain rows

2016-01-01 Thread Larry Martell
On Fri, Jan 1, 2016 at 2:12 PM, Peter Brawley
 wrote:
> On 12/31/2015 0:51, Larry Martell wrote:
>>
>> I need to count the number of rows in a table that are grouped by a
>> list of columns, but I also need to exclude rows that have more than
>> some count when grouped by a different set of columns. Conceptually,
>> this is not hard, but I am having trouble doing this efficiently.
>>
>> My first counting query would be this:
>>
>> SELECT count(*)
>> FROM cst_rollup
>> GROUP BY target_name_id, ep, roiname, recipe_process,
>> recipe_product, recipe_layer, f_tag_bottom,
>> measname, recipe_id
>>
>> But from this count I need to subtract the count of rows that have
>> more than 50 rows with a different grouping:
>>
>> SELECT count(*)
>> FROM cst_rollup
>> GROUP BY target_name_id, ep, wafer_id
>> HAVING count(*) >= 50
>>
>> As you can see, the second query has wafer_id, but the first query does
>> not.
>>
>> Currently I am doing this in python, and it's slow. In my current
>> implementation I have one query, and it selects the columns (i.e.
>> doesn't just count), and I have added wafer_id:
>>
>> SELECT target_name_id, ep, roiname, recipe_process,
>> recipe_product, recipe_layer, f_tag_bottom,
>> measname, recipe_id, wafer_id
>> FROM cst_rollup
>>
>> Then I go through the result set (which can be over 200k rows) and I
>> count the number of rows with matching (target_name_id, ep, wafer_id).
>> Then I go through the rows again and regroup them without wafer_id,
>> but skipping the rows that have more than 50 rows for that row's
>> (target_name_id, ep, wafer_id).
>>
>> Is this clear to everyone what I am trying to do?
>
>
> If I've understood this correctly, the resultset you wish to aggregate on is
> ...
>
> select target_name_id, ep, wafer_id
> from cst_rollup a
> left join (   -- exclude rows for which wafer_id count >= 50
>   select target_name_id, ep, wafer_id, count(*) n
>   from cst_rollup
>   group by target_name_id, ep, wafer_id
>   having n >= 50
> ) b using ( target_name_id, ep, wafer_id )
> where b.target_name_id is null ;
>
> If that's so, you could assemble that resultset in a temp table then run the
> desired aggregate query on it, or you could aggregate on it directly as a
> subquery.

That query gives:

ERROR 1137 (HY000): Can't reopen table: 'a'

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: help with query to count rows while excluding certain rows

2016-01-01 Thread Peter Brawley

On 12/31/2015 0:51, Larry Martell wrote:

I need to count the number of rows in a table that are grouped by a
list of columns, but I also need to exclude rows that have more than
some count when grouped by a different set of columns. Conceptually,
this is not hard, but I am having trouble doing this efficiently.

My first counting query would be this:

SELECT count(*)
FROM cst_rollup
GROUP BY target_name_id, ep, roiname, recipe_process,
recipe_product, recipe_layer, f_tag_bottom,
measname, recipe_id

But from this count I need to subtract the count of rows that have
more than 50 rows with a different grouping:

SELECT count(*)
FROM cst_rollup
GROUP BY target_name_id, ep, wafer_id
HAVING count(*) >= 50

As you can see, the second query has wafer_id, but the first query does not.

Currently I am doing this in python, and it's slow. In my current
implementation I have one query, and it selects the columns (i.e.
doesn't just count), and I have added wafer_id:

SELECT target_name_id, ep, roiname, recipe_process,
recipe_product, recipe_layer, f_tag_bottom,
measname, recipe_id, wafer_id
FROM cst_rollup

Then I go through the result set (which can be over 200k rows) and I
count the number of rows with matching (target_name_id, ep, wafer_id).
Then I go through the rows again and regroup them without wafer_id,
but skipping the rows that have more than 50 rows for that row's
(target_name_id, ep, wafer_id).

Is this clear to everyone what I am trying to do?


If I've understood this correctly, the resultset you wish to aggregate 
on is ...


select target_name_id, ep, wafer_id
from cst_rollup a
left join (   -- exclude rows for which wafer_id count >= 50
  select target_name_id, ep, wafer_id, count(*) n
  from cst_rollup
  group by target_name_id, ep, wafer_id
  having n >= 50
) b using ( target_name_id, ep, wafer_id )
where b.target_name_id is null ;

If that's so, you could assemble that resultset in a temp table then run 
the desired aggregate query on it, or you could aggregate on it directly 
as a subquery.


PB

-


I'd like to do this all in sql with count because then I do not have
to actually return and parse the data in python.

Can anyone think of a way to do this in sql in a way that will be more
efficient than my current implementation?


Thanks!
-Larry




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



help with query to count rows while excluding certain rows

2015-12-30 Thread Larry Martell
I need to count the number of rows in a table that are grouped by a
list of columns, but I also need to exclude rows that have more than
some count when grouped by a different set of columns. Conceptually,
this is not hard, but I am having trouble doing this efficiently.

My first counting query would be this:

SELECT count(*)
FROM cst_rollup
GROUP BY target_name_id, ep, roiname, recipe_process,
recipe_product, recipe_layer, f_tag_bottom,
measname, recipe_id

But from this count I need to subtract the count of rows that have
more than 50 rows with a different grouping:

SELECT count(*)
FROM cst_rollup
GROUP BY target_name_id, ep, wafer_id
HAVING count(*) >= 50

As you can see, the second query has wafer_id, but the first query does not.

Currently I am doing this in python, and it's slow. In my current
implementation I have one query, and it selects the columns (i.e.
doesn't just count), and I have added wafer_id:

SELECT target_name_id, ep, roiname, recipe_process,
recipe_product, recipe_layer, f_tag_bottom,
measname, recipe_id, wafer_id
FROM cst_rollup

Then I go through the result set (which can be over 200k rows) and I
count the number of rows with matching (target_name_id, ep, wafer_id).
Then I go through the rows again and regroup them without wafer_id,
but skipping the rows that have more than 50 rows for that row's
(target_name_id, ep, wafer_id).

Is this clear to everyone what I am trying to do?

I'd like to do this all in sql with count because then I do not have
to actually return and parse the data in python.

Can anyone think of a way to do this in sql in a way that will be more
efficient than my current implementation?


Thanks!
-Larry

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Start mysqld 5.7.7 with new disk

2015-12-23 Thread Jørn Dahl-Stamnes
Hello

I had to replace the datadisk where mysql had its innodb files.

But after I had added a new disk I'm not able to initialize mysql files any 
more:

# mysqld --initialize --user=mysql
2015-12-23T13:07:08.216472Z 0 [ERROR] Plugin 'InnoDB' init function returned 
error.
2015-12-23T13:07:08.216556Z 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE 
ENGINE failed.
2015-12-23T13:07:08.216577Z 0 [ERROR] Failed to initialize plugins.
2015-12-23T13:07:08.216595Z 0 [ERROR] Aborting

I have created the directories specified in my /etc/my.cnf and changed the group 
and ownership of the directories to mysql:mysql.


Any suggestions how to fix this?

-- 
Jørn Dahl-Stamnes
homepage: http://photo.dahl-stamnes.net/

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Start mysqld 5.7.7 with new disk

2015-12-23 Thread Reindl Harald



On 23.12.2015 at 14:11, Jørn Dahl-Stamnes wrote:

I had to replace the datadisk where mysql had its innodb files.

But after I had added a new disk I'm not able to initialize mysql files any 
more:

# mysqld --initialize --user=mysql
2015-12-23T13:07:08.216472Z 0 [ERROR] Plugin 'InnoDB' init function returned 
error.
2015-12-23T13:07:08.216556Z 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE 
ENGINE failed.
2015-12-23T13:07:08.216577Z 0 [ERROR] Failed to initialize plugins.
2015-12-23T13:07:08.216595Z 0 [ERROR] Aborting

I have created the directories specified in my /etc/my.cnf and changed the group 
and ownership of the directories to mysql:mysql.


man mysql_install_db

NAME
   mysql_install_db - initialize MariaDB data directory

SYNOPSIS
   mysql_install_db [options]

DESCRIPTION
   mysql_install_db initializes the MariaDB data directory and 
creates the system tables that it contains, if they do not exist.


   To invoke mysql_install_db, use the following syntax:

   shell> mysql_install_db [options]

   Because the MariaDB server, mysqld, needs to access the data 
directory when it runs later, you should either run mysql_install_db 
from the same account that will be used for running mysqld or run it
   as root and use the --user option to indicate the user name that 
mysqld will run as. It might be necessary to specify other options such 
as --basedir or --datadir if mysql_install_db does not use
   the correct locations for the installation directory or data 
directory. For example:


   shell> bin/mysql_install_db --user=mysql \
--basedir=/opt/mysql/mysql \
--datadir=/opt/mysql/mysql/data





Re: Start mysqld 5.7.7 with new disk

2015-12-23 Thread Jørn Dahl-Stamnes
On Wednesday, December 23, 2015, Jørn Dahl-Stamnes wrote:
> Hello
> 
> I had to replace the datadisk where mysql had its innodb files.
> 
> But after I had added a new disk I'm not able to initialize mysql files any
> more:
> 
> # mysqld --initialize --user=mysql
> 2015-12-23T13:07:08.216472Z 0 [ERROR] Plugin 'InnoDB' init function
> returned error. 2015-12-23T13:07:08.216556Z 0 [ERROR] Plugin 'InnoDB'
> registration as a STORAGE ENGINE failed. 2015-12-23T13:07:08.216577Z 0
> [ERROR] Failed to initialize plugins. 2015-12-23T13:07:08.216595Z 0
> [ERROR] Aborting

Isn't it typical... after I posted my message I found the error in my 
/etc/my.cnf file. I had 
changed this line:

innodb_data_file_path   = 
ibdata1:60G;ibdata2:60G;ibdata3:60G;ibdata4:60G:;ibdata5:60G:autoextend

Notice the extra colon in "ibdata4:60G:".

After I removed it, it worked.
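
For reference, the corrected line (the same five files, with the stray colon 
after "ibdata4:60G" removed) reads:

innodb_data_file_path = ibdata1:60G;ibdata2:60G;ibdata3:60G;ibdata4:60G;ibdata5:60G:autoextend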

-- 
Jørn Dahl-Stamnes
homepage: http://photo.dahl-stamnes.net/

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



ANN: DAC for MySQL 3.0.2 is out!

2015-12-21 Thread Aleksander Andreev
MicroOLAP Direct Access Components for MySQL and Delphi/C++ Builder (also
known as MySQLDAC) is a Borland Delphi/C++Builder component set for direct
connect to MySQL database server. DAC for MySQL allows you to create
Delphi/C++Builder applications without BDE, ODBC, ADO and without
libmysql.dll.

What’s new in v3.0.2:
This release includes critical fixes for iOS Device x64 target platform and
other bug fixes.

Full changelog:
[!] "Error occurs on connect to the server using password on iOS Device x64
target platform" bug fixed
[!] "Error occurs on open DataSet on iOS Device x64 target platform" bug
fixed
[-] "Incorrect sending BLOB Text field to the sever in Mobile target
platforms" bug fixed
[-] "Empty statement text = #$D#$A after applying ProcessComments" bug fixed
[-] "Not showing GEOMETRY fields and data collision occurs" bug fixed
[-] "AV on connect with old password format in Mobile platforms" bug fixed
[-] "AV occurs in TMySQLQuery on running statement which doesn't return
result" bug fixed

You're welcome to download the DAC for MySQL v3.0.2 right now at:
http://microolap.com/products/connectivity/mysqldac/download/ ,
or login to your private area on our site at
http://microolap.com/my/downloads/

Please don't hesitate to ask any questions or report bugs with our Support
Ticketing system available at http://www.microolap.com/support/

-- 
Aleksandr Andreev
Developer
MicroOLAP Technologies LTD
aleksandr.andr...@gf.microolap.com
http://microolap.com


Announcing release for MySQL 5.5 and 5.6 on CentOS Linux 6 and 7 x86_64 SCL

2015-12-21 Thread Honza Horak
I am pleased to announce the immediate availability of versions 5.5 and 
5.6 of the MySQL on CentOS Linux 6 and 7 x86_64, delivered via a 
Software Collection (SCL) built by the SCLo Special Interest Group 
(https://wiki.centos.org/SpecialInterestGroup/SCLo).


QuickStart
--
You can get started in three easy steps (example of MySQL 5.6):
$ sudo yum install centos-release-scl
$ sudo yum install rh-mysql56
$ scl enable rh-mysql56 bash

At this point you should be able to use MySQL just as a normal 
application. Examples of commands you might run:

$ service rh-mysql56-mysqld start
$ mysql

In order to view the individual components included in this collection, 
including additional subpackages plugins, you can run:

$ sudo yum list rh-mysql56\*

About Software Collections
--
Software Collections give you the power to build, install, and use 
multiple versions of software on the same system, without affecting 
system-wide installed packages. Each collection is delivered as a group 
of RPMs, with the grouping being done using the name of the collection 
as a prefix of all packages that are part of the software collection.


The collection rh-mysql56 delivers version 5.6 of the MySQL server (with 
daemon called rh-mysql56-mysqld) and related client tools. The 
collection mysql55 delivers version 5.5 of the MySQL server (with daemon 
called mysql55-mysqld) and related client tools.


Neither collection includes the client library libmysqlclient.so; client 
applications are advised to use the client library available in the 
mysql-libs package from the base system.


For more on the MySQL, see https://www.mysql.com.

The SCLo SIG in CentOS
--
The Software Collections SIG group is an open community group 
co-ordinating the development of the SCL technology, and helping curate 
a reference set of collections. In addition to the MySQL collections 
being released here, we also build and deliver other databases, web 
servers, and language stacks including multiple versions of MariaDB, 
PostgreSQL, MongoDB, Apache HTTP Server, NodeJS, Ruby, Python and others.


Software Collections SIG release was announced at 
https://lists.centos.org/pipermail/centos-announce/2015-October/021446.html


You can learn more about Software Collections concepts at: 
http://softwarecollections.org
You can find information on the SIG at 
https://wiki.centos.org/SpecialInterestGroup/SCLo ; this includes how to 
get involved and help with the effort.


We meet every second Wednesday at 16:00 UTC in #centos-devel (ref: 
https://www.centos.org/community/calendar), for an informal open forum 
open to anyone who has comments or concerns, or who wants to get started 
with SCLs in CentOS.


Enjoy!

Honza
SCLo SIG member

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: mysql dump global read lock

2015-12-19 Thread Lisa Smith
Artem,

You have --add-locks listed, which writes LOCK TABLES and UNLOCK TABLES
statements around each table in the dump output. --single-transaction is for
InnoDB tables and ensures that they are copied in a consistent state.

I hope this helps!

On Thu, Dec 17, 2015 at 4:24 PM, Artem Kuchin  wrote:

> Hello!
>
> Hereis my mysqldump command line
>
> mysqldump -u root --events --complete-insert --skip-opt
> --single-transaction --add-drop-table --add-locks --create-options
> --disable-keys --
> extended-insert --quick --set-charset --routines --triggers --hex-blob
> DB_NAME
>
> But i see tons of Waiting for global read lock
> in show processlist for many tables in many different databases for all
> modification queries and locks
>
> Why?  As i understood --skip-opt --single-transaction must disable global
> read lock
>
>
> mysql version
> Server version: 5.6.27-log Source distribution
>
> Artem
>
>
>
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql
>
>


ANN: PHP Generator for MySQL 15.12 released

2015-12-17 Thread SQL Maestro Team
Hi!

SQL Maestro Group announces the release of PHP Generator for MySQL
15.12, a powerful GUI frontend for Windows that allows you to generate
feature-rich CRUD web applications for your MySQL database.
http://www.sqlmaestro.com/products/mysql/phpgenerator/

Online demo:
http://demo.sqlmaestro.com/

Top 15 new features:


 1. New modern look and feel.
 2. 100% responsive design.
 3. PHP 7 support.
 4. Top side drop-down menus.
 5. Enhanced Filter Builder.
 6. Multi-column sorting.
 7. Keyboard shortcuts.
 8. A number of new and updated controls.
 9. 18 color themes.
10. Font-based icons.
11. Event management enhancements.
12. Less preprocessor syntax for user-defined styles.
13. HTML filter.
14. New and updated data access drivers.
15. PHP Generator UI improvements.

Full press-release is available at:
http://www.sqlmaestro.com/news/company/php_generator_15_12_released/

Background information:
---
SQL Maestro Group offers complete database admin, development and
management tools for MySQL, SQL Server, PostgreSQL, Oracle, DB2,
SQLite, SQL Anywhere, Firebird and MaxDB providing the highest
performance, scalability and reliability to meet the requirements of
today's database applications.

Sincerely yours,
The SQL Maestro Group Team
http://www.sqlmaestro.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



select contiguous addresses that start on a bit boundary

2015-12-17 Thread John Stile
I need help creating a select that returns 4 records that have
contiguous addresses that start on a bit boundary.

If 4 do not exist, I need a return of zero records.

I would like to do this in one statement and I do not have ownership of
this mysql server, so fancy views, temporary tables, indexing, etc are
outside my permission level.

I am also not the only consumer of this database, so altering it for my
needs could hurt the other consumers.

Below I specify the issue and where I am.

Thank you for your attention.

#
# Create problem set
# - This has non-contiguous addresses
# - This has one status not 0
# - This has contiguous addresses that start before the bit boundary
#
CREATE TABLE addresses ( address BIGINT(20), status INT );
INSERT INTO addresses
VALUES (1001,0),
   (1003,0),
   (1004,0),
   (1005,1),
   (1006,0),
   (1007,0),
   (1008,0),
   (1009,0),
   (1010,0),
   (1011,0),
   (1013,0),
   (1014,0),
   (1015,0),
   (1016,0),
   (1017,0);
#
# This shows the bit boundary, where the start is  (address & 3) = 0
#
select address, (address & 3) as boundary from addresses where address
>0 and status=0 order by address limit 10  ;
+--+--+
| address  | boundary |
+--+--+
| 1001 |1 |
| 1003 |3 |
| 1004 |0 |
| 1006 |2 |
| 1007 |3 |
| 1008 |0 |
| 1009 |1 |
| 1010 |2 |
| 1011 |3 |
| 1013 |1 |
+--+--+
10 rows in set (0.00 sec)
#
# This shows contiguous addresses, but they do not start on the bit boundary
#
select c1.address, (address & 3) as boundary  from addresses c1 where 4
= ( SELECT count(*) FROM addresses c2 WHERE c2.status = 0 and c2.address
BETWEEN c1.address AND (c1.address + 3)  ) limit 10;

+--+--+
| address  | boundary |
+--+--+
| 1006 |2 |
| 1007 |3 |
| 1008 |0 |
| 1013 |1 |
| 1014 |2 |
+--+--+
5 rows in set (0.00 sec)



I can't seem to add my ((address & 3) = 0) condition to the correct location to 
get the desired
result. I don't understand how I can use c1.address in the BETWEEN, and
yet I can't seem to make ((address & 3) = 0) work anywhere.
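
For illustration, one placement that gives the desired result against the
sample table above is to apply the boundary test to the outer row c1 and keep
the correlated count as-is (a sketch only, not run against the real schema):

select c1.address, (c1.address & 3) as boundary
from addresses c1
where (c1.address & 3) = 0
  and c1.status = 0
  and 4 = ( SELECT count(*) FROM addresses c2
            WHERE c2.status = 0
              AND c2.address BETWEEN c1.address AND (c1.address + 3) )
limit 10;

With the sample data this returns only 1008, the start of the 1008-1011 run.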



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Narrow A First Set Of Records

2015-12-17 Thread Don Wieland
Hey gang,

I need the ability to produce this end result:

condition a) All clients who have had at least 2 appointments in the last 24 
months
condition b) but have NO appointments in the last 6 months

I am able to get the clients with >= 2 appointments with this query

SELECT 
c.client_id, 
concat(c.first_name, ' ', c.last_name) AS client_name,
c.`address`, 
c.city, 
c.state, 
c.`zip`, 
c.email , 
max(from_unixtime(a.time_start)) AS last_appt,
count(a.client_id) AS appt_count

FROM tl_clients c
LEFT JOIN tl_appt a ON c.client_id = a.client_id
GROUP BY a.client_id
HAVING max(a.time_start) >= UNIX_TIMESTAMP(DATE_SUB( NOW(), INTERVAL 24 MONTH))
AND last_appt IS NOT NULL AND count(a.client_id) >= 2;

BUT, how do I add the other “condition b” to the query to get an accurate end 
result.

I’d appreciate any help.

Don Wieland
D W   D a t a   C o n c e p t s
~
d...@dwdataconcepts.com
http://www.dwdataconcepts.com
Direct Line - (949) 336-4828
SKYPE - skypename = dwdata

Integrated data solutions to fit your business needs.

Need assistance in dialing in your FileMaker solution? Check out our Developer 
Support Plans:

Basic Developer Support Plan - 3 hours @ $360:
http://www.dwdataconcepts.com/DevSup.php 


Intermediate Developer Support Plan - 10 hours for $960 (2 FREE HOURS - $240 
savings off regular billable rate)
http://www.dwdataconcepts.com/IntDevSup.php 


Premium Developer Support Plan - 20 hours for $1800 ( 5 FREE HOURS - $600 
savings off regular billable rate)
http://www.dwdataconcepts.com/PremDevSup.php 


In all of these plans, we create a support account and credit the account with 
the hours purchased. You can then dictate how and when the hours are used. They 
will not expire until they are used up and at that point you can opt to 
replenish the account, if you desire. When we work on your system or with you, 
we simply deduct the billable time from your account. At a regular interval or 
per your request, a summary of your account status will be emailed to you.

---

Appointment 1.0v9 - Powerful Appointment Scheduling for FileMaker Pro 9 or 
higher
http://www.appointment10.com 

For a quick overview - 
http://www.appointment10.com/Appt10_Promo/Overview.html



Re: select contiguous addresses that start on a bit boundary

2015-12-17 Thread John Stile
I should have said consecutive addresses, rather than contiguous.
I care about a set of consecutive addresses, and there is no guarantee
of record order.

On 12/17/2015 07:35 AM, John Stile wrote:
> I need help creating a select that returns 4 records that have
> contiguous addresses that start on a bit boundary.
>
> If 4 do not exist, I need a return of zero records.
>
> I would like to do this in one statement and I do not have ownership of
> this mysql server, so fancy views, temporary tables, indexing, etc are
> outside my permission level.
>
> I am also not the only consumer of this database, so altering it for my
> needs could hurt the other consumers.
>
> Below I specify the issue and where I am.
>
> Thank you for your attention.
>
> #
> # Create problem set
> # - This has non-contiguous addresses
> # - This has one status not 0
> # - This has contiguous addresses that start before the bit boundary
> #
> CREATE TABLE addresses ( address BIGINT(20), status INT );
> INSERT INTO addresses
> VALUES (1001,0),
>(1003,0),
>(1004,0),
>(1005,1),
>(1006,0),
>(1007,0),
>(1008,0),
>(1009,0),
>(1010,0),
>(1011,0),
>(1013,0),
>(1014,0),
>(1015,0),
>(1016,0),
>(1017,0);
> #
> # This shows the bit boundary, where the start is  (address & 3) = 0
> #
> select address, (address & 3) as boundary from addresses where address
>> 0 and status=0 order by address limit 10  ;
> +--+--+
> | address  | boundary |
> +--+--+
> | 1001 |1 |
> | 1003 |3 |
> | 1004 |0 |
> | 1006 |2 |
> | 1007 |3 |
> | 1008 |0 |
> | 1009 |1 |
> | 1010 |2 |
> | 1011 |3 |
> | 1013 |1 |
> +--+--+
> 10 rows in set (0.00 sec)
> #
> # This shows contiguous addresses, but they do not start on the bit boundary
> #
> select c1.address, (address & 3) as boundary  from addresses c1 where 4
> = ( SELECT count(*) FROM addresses c2 WHERE c2.status = 0 and c2.address
> BETWEEN c1.address AND (c1.address + 3)  ) limit 10;
>
> +--+--+
> | address  | boundary |
> +--+--+
> | 1006 |2 |
> | 1007 |3 |
> | 1008 |0 |
> | 1013 |1 |
> | 1014 |2 |
> +--+--+
> 5 rows in set (0.00 sec)
>
>
>
> I can't seem to add my ((address & 3) = 0) condition to the correct location 
> to get the desired
> result. I don't understand how I can use c1.address in the BETWEEN, and
> yet I can't seem to make ((address & 3) = 0) work anywhere.
>
>
>


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Narrow A First Set Of Records

2015-12-17 Thread Don Wieland
Hey gang,

I need the ability to produce this end result:

condition a) All clients who have had at least 2 appointments in the last 24 
months
condition b) of the set of “condition a” rows, which of those have NO 
appointments in the last 6 months

I am able to get the clients with >= 2 appointments with this query

SELECT 
c.client_id, 
concat(c.first_name, ' ', c.last_name) AS client_name,
c.`address`, 
c.city, 
c.state, 
c.`zip`, 
c.email , 
max(from_unixtime(a.time_start)) AS last_appt,
count(a.client_id) AS appt_count

FROM tl_clients c
LEFT JOIN tl_appt a ON c.client_id = a.client_id
GROUP BY a.client_id
HAVING max(a.time_start) >= UNIX_TIMESTAMP(DATE_SUB( NOW(), INTERVAL 24 MONTH))
AND last_appt IS NOT NULL AND count(a.client_id) >= 2;

BUT, how do I add the other “condition b” to the query to get an accurate end 
result.

I’d appreciate any help.

Don

Re: Narrow A First Set Of Records

2015-12-17 Thread Roy Lyseng

Hi Don,

On 17.12.15 16.14, Don Wieland wrote:

Hey gang,

I need the ability to produce this end result:

condition a) All clients who have had at least 2 appointments in the last 24 
months
condition b) of the set of “condition a” rows, which of those have NO 
appointments in the last 6 months

I am able to get the clients with >= 2 appointments with this query

SELECT
c.client_id,
concat(c.first_name, ' ', c.last_name) AS client_name,
c.`address`,
c.city,
c.state,
c.`zip`,
c.email ,
max(from_unixtime(a.time_start)) AS last_appt,
count(a.client_id) AS appt_count

FROM tl_clients c
LEFT JOIN tl_appt a ON c.client_id = a.client_id
GROUP BY a.client_id
HAVING max(a.time_start) >= UNIX_TIMESTAMP(DATE_SUB( NOW(), INTERVAL 24 MONTH))
AND last_appt IS NOT NULL AND count(a.client_id) >= 2;

BUT, how do I add the other “condition b” to the query to get an accurate end 
result.

I’d appreciate any help.

Don



Can you try this query?

select * FROM (
select client_id, client_name, ...,
   (select count(*) from tl_appt AS a
    WHERE c.client_id = a.client_id AND
          a.time_start >=
          UNIX_TIMESTAMP(DATE_SUB( NOW(), INTERVAL 24 MONTH)) AND
          a.time_start IS NOT NULL) AS twoyear,
   (select count(*) from tl_appt AS a
    WHERE c.client_id = a.client_id AND
          a.time_start >=
          UNIX_TIMESTAMP(DATE_SUB( NOW(), INTERVAL 6 MONTH)) AND
          a.time_start IS NOT NULL) AS halfyear
from tl_clients AS c) AS dt
WHERE twoyear >= 2 AND halfyear = 0;

I have not run it through MySQL, so you may have to fix the syntax a bit...
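
An alternative sketch that folds both counts into a single grouped query
(also untested, and assuming tl_appt.time_start is a Unix timestamp as in the
original query):

SELECT c.client_id,
       COUNT(*) AS appt_24m,
       SUM(a.time_start >= UNIX_TIMESTAMP(DATE_SUB(NOW(), INTERVAL 6 MONTH))) AS appt_6m
FROM tl_clients c
JOIN tl_appt a ON a.client_id = c.client_id
WHERE a.time_start >= UNIX_TIMESTAMP(DATE_SUB(NOW(), INTERVAL 24 MONTH))
GROUP BY c.client_id
HAVING appt_24m >= 2 AND appt_6m = 0;

Every appointment in the last 6 months also falls within the last 24 months,
so appt_6m = 0 is exactly condition b restricted to the rows counted for
condition a.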

Thanks,
Roy

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: select contiguous addresses that start on a bit boundary

2015-12-17 Thread John Stile
I have a solution.

SELECT start_bit_boundary FROM (
  SELECT
min(address) as start_bit_boundary,
status, count(*) as CT
  FROM addresses
  WHERE status = 0
  GROUP BY address >> 2
) AS _INNER
WHERE
  _INNER.CT = 4
ORDER BY start_bit_boundary
LIMIT 0,1;

It returns the first of 4 consecutive addresses.
This works with the following data set.

CREATE TABLE addresses ( address BIGINT(20), status INT );
INSERT INTO addresses
VALUES (1001,0),
   (1003,0),
   (1004,0),
   (1005,1),
   (1006,0),
   (1007,0),
   (1009,0),
   (1010,0),
   (1011,0),
   (1013,0),
   (1008,0),
   (1014,0),
   (1015,0),
   (1016,0),
   (1017,0);

If I want to print all the addresses I could do this:

select * from addresses
where status = 0
AND  address BETWEEN (
  SELECT @b := start_bit_boundary FROM (
SELECT min(address) as start_bit_boundary,status,count(*) as CT
    FROM addresses
WHERE status = 0
GROUP BY address >> 2
  ) AS _INNER
  WHERE _INNER.CT = 4
  ORDER BY start_bit_boundary
  LIMIT 0,1
) AND (@b+3) limit 0,4;

On 12/17/2015 08:14 AM, John Stile wrote:
> I should have said consecutive addresses, rather than contiguous.
> I care about a set of consecutive addresses, and there is no guarantee
> of record order.
>
> On 12/17/2015 07:35 AM, John Stile wrote:
>> I need help creating a select that returns 4 records that have
>> contiguous addresses that start on a bit boundary.
>>
>> If 4 do not exist, I need a return of zero records.
>>
>> I would like to do this in one statement and I do not have ownership of
>> this mysql server, so fancy views, temporary tables, indexing, etc are
>> outside my permission level.
>>
>> I am also not the only consumer of this database, so altering it for my
>> needs could hurt the other consumers.
>>
>> Below I specify the issue and where I am.
>>
>> Thank you for your attention.
>>
>> #
>> # Create problem set
>> # - This has non-contiguous addresses
>> # - This has one status not 0
>> # - This has contiguous addresses that start before the bit boundary
>> #
>> CREATE TABLE addresses ( address BIGINT(20), status INT );
>> INSERT INTO addresses
>> VALUES (1001,0),
>>(1003,0),
>>(1004,0),
>>(1005,1),
>>(1006,0),
>>(1007,0),
>>(1008,0),
>>(1009,0),
>>(1010,0),
>>(1011,0),
>>(1013,0),
>>(1014,0),
>>(1015,0),
>>(1016,0),
>>(1017,0);
>> #
>> # This shows the bit boundary, where the start is  (address & 3) = 0
>> #
>> select address, (address & 3) as boundary from addresses where address
>>> 0 and status=0 order by address limit 10  ;
>> +--+--+
>> | address  | boundary |
>> +--+--+
>> | 1001 |1 |
>> | 1003 |3 |
>> | 1004 |0 |
>> | 1006 |2 |
>> | 1007 |3 |
>> | 1008 |0 |
>> | 1009 |1 |
>> | 1010 |2 |
>> | 1011 |3 |
>> | 1013 |1 |
>> +--+--+
>> 10 rows in set (0.00 sec)
>> #
>> # This shows contiguous addresses, but they do not start on the bit boundary
>> #
>> select c1.address, (address & 3) as boundary  from addresses c1 where 4
>> = ( SELECT count(*) FROM addresses c2 WHERE c2.status = 0 and c2.address
>> BETWEEN c1.address AND (c1.address + 3)  ) limit 10;
>>
>> +--+--+
>> | address  | boundary |
>> +--+--+
>> | 1006 |2 |
>> | 1007 |3 |
>> | 1008 |0 |
>> | 1013 |1 |
>> | 1014 |2 |
>> +--+--+
>> 5 rows in set (0.00 sec)
>>
>>
>>
>> I can't seem to add my ((address & 3) = 0) condition to the correct location 
>> to get the desired
>> result. I don't understand how I can use c1.address in the BETWEEN, and
>> yet I can't seem to make ((address & 3) = 0) work anywhere.
>>
>>
>>
>


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



mysql dump global read lock

2015-12-17 Thread Artem Kuchin

Hello!

Hereis my mysqldump command line

mysqldump -u root --events --complete-insert --skip-opt 
--single-transaction --add-drop-table --add-locks --create-options 
--disable-keys --
extended-insert --quick --set-charset --routines --triggers --hex-blob  
DB_NAME


But I see tons of "Waiting for global read lock" in SHOW PROCESSLIST for many 
tables in many different databases, for all modification queries and locks.


Why? As I understood it, --skip-opt --single-transaction should disable the 
global read lock.



mysql version
Server version: 5.6.27-log Source distribution

Artem





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Upscene releases Database Workbench 5.1.10

2015-12-16 Thread Martijn Tonies (Upscene Productions)
Upscene Productions is proud to announce a new release 
of the popular multi-DBMS development tool:

Database Workbench 5.1.10 

This release adds a few small features and includes Windows Terminal Server 
support.

The full version 5.1.x change log is available here:
http://www.upscene.com/go/?go=tracker=5.1.x=12


Version 5 added numerous new features and improvements to existing tools and 
the new, more consistent user interface is better than ever and works fine 
under Wine on Linux.


Version 5.1 includes SQL Azure and Firebird 3 support, additional data export 
functionality, improved Oracle XML and Object-type support, Diagramming 
enhancements 
and new printing features as well as improvements in other areas.

http://www.upscene.com/news/item/20151215


Database Workbench now comes in multiple editions with different
pricing models, there's always a version that suits you!



For more information, see What's new in Database Workbench 5?
( http://www.upscene.com/database_workbench/whatsnew )


Database Workbench supports MySQL, MariaDB, Firebird, Oracle, MS SQL Server, 
SQL Azure,
SQL Anywhere, NexusDB and InterBase, comes in multiple editions and is licensed 
based on
selectable modules.

It includes tools for database design, database maintenance, testing, data 
transfer,
data import & export, database migration, database compare and numerous other 
tools.


About Database Workbench
Database Workbench is a database developer tool, over 10 years in the making and
is being used by thousands of developers across the globe who have come to rely 
on it
every day. From database design, implementation, to testing and debugging, it 
will aid you 
in your daily database work.

About Upscene Productions
Based in The Netherlands, Europe, this small but dedicated company has been 
providing
database developers with useful tools for over 12 years. Slowly expanding the 
product portfolio
and gaining recognition amongst InterBase and Firebird database developers, 
they now offer
tools for a whole range of database systems, including Oracle and Microsoft SQL 
Server.


MySQL Workbench 6.3.6 GA has been released

2015-12-15 Thread Lars Tangvald

Dear MySQL users,

The MySQL developer tools team announces 6.3.6 as our GA release for
MySQL Workbench 6.3.

For the full list of changes in this revision, visit
http://dev.mysql.com/doc/relnotes/workbench/en/changes-6-3.html

For discussion, join the MySQL Workbench Forums:
http://forums.mysql.com/index.php?152

Download MySQL Workbench 6.3.6 GA now, for Windows, Mac OS X 10.9+,
Oracle Linux 6 and 7, Fedora 22 and Fedora 23, Ubuntu 14.04 and
Ubuntu 15.10 or sources, from:

http://dev.mysql.com/downloads/tools/workbench/

Enjoy!

Changes in MySQL Workbench 6.3.6 (2015-12-15)

   Functionality Added or Changed

 * On OS X, the multiple object selector was disabled in the
   model overview. (Bug #22265928)

 * On Linux, Fedora 23 is now supported. (Bug #22146526, Bug
   #79101)

 * The "mysql_old_password
   (http://dev.mysql.com/doc/refman/5.7/en/old-client.html)"
   protocol is no longer supported by Workbench, as its
   support was removed in MySQL Server 5.7. This affects the
   former "Use old authentication protocol" option from the
   "Advanced" configuration tab. This option has now been
   removed. (Bug #22093548, Bug #78947)

 * Management operations when using an SSH MySQL Server
   connection that authenticated using a password (and not
   SSH keys) would not function. (Bug #22047602)

 * The File, New Query option no longer requires an open
   MySQL connection. (Bug #21879064)

 * The MySQL Firewall tab is now disabled when the Firewall
   user is being created. (Bug #21150265)

   Bugs Fixed

 * Setting the MySQL Firewall mode to "PROTECTING" for a
   user without defined rules would emit an unhandled
   exception. (Bug #22294174)

 * Improvements were made to the Microsoft SQL Server 2008
   migration wizard. (Bug #22271597)

 * On OS X, the MySQL Firewall management option was missing
   from the commercial edition builds. (Bug #22270244)

 * After editing the code of a routine and closing the
   routine editor, the changed routine was duplicated
   instead of applying the changes to the original routine
   object. (Bug #22269199)

 * The JSON Data Field Value Editor functionality was
   improved, and bugs were fixed. Workbench 6.3.5 had known
   limitations. (Bug #22265214, Bug #22337871)

 * On Linux, the MySQL Firewall installation checked the
   path /usr/share/ instead of /usr/share/mysql/. (Bug
   #22263249)

 * For Online Backup, the Fix Grants for MEB function would
   not create the MySQL Enterprise Backup user account. (Bug
   #22258080)

 * The SSL results page listed the "-" character as "_" for
   the SSL certificate file names. (Bug #22243210)

 * Exporting a database from the Data Export wizard did not
   function with remote connections. (Bug #22233795)

 * The libzip library was updated; Workbench can now be
   installed on Ubuntu 15.10. (Bug #22230416, Bug #79320)

 * After initially altering a table using the Alter Table,
   subsequent alterations to the same table in the same
   screen, without having closed the screen, would not
   always complete, as the previous alterations were applied
   again. For example, it might attempt to add a column that
   was already added, or remove a column that was already
   deleted. (Bug #22202325, Bug #79265)

 * Workbench did not recognize identifiers with a leading
   digit if it was preceded by a dot. (Bug #22193758, Bug
   #79247)

 * After creating a table, editing the table in the same tab
   would yield an error after clicking Apply. (Bug
   #22178996, Bug #79199)

 * Clicking Apply a second time while performing an Alter
   Table operation would emit a "No Changes Detected"
   message, despite changes being present. (Bug #22169405,
   Bug #79164)

 * The MySQL connection search field's background and text
   were both white, making it difficult to decipher. (Bug
   #22168697, Bug #79159)

 * On Windows 10, and with resolutions of 1366x768 and
   below, the default table editor window did not show the
   table columns.
   As a workaround, the output pane could be resized or
   hidden. (Bug #22168049, Bug #79160)

 * The "BY" keyword is now supported. (Bug #22151204, Bug
   #79121)

 * The MySQL Enterprise Backup binary can now be configured
   per MySQL connection, to match the correct version needed
   by the connected MySQL server. Before, only one MySQL
   Enterprise Backup binary could be used for all
   connections. (Bug #22143844, Bug #79108)

 * Compiling Workbench now requires Connector/C++ 1.1.7 or
   higher, as this is the first Connector/C++ release that
   supports JSON. (Bug #22141269, Bug #79054)

 * On the Server Logs management tab, the Copy and Copy
   Details options were switched. (Bug 

Global read lock on delete

2015-12-09 Thread Artem Kuchin

Hello!

I am actually using MariaDB, but they do not seem to have any public 
discussion system, and I suppose the engine is basically the same, so the 
problems are probably the same.


Today i setup the server to show locks and notice this:

THREAD_ID  LOCK_MODE                 LOCK_DURATION    LOCK_TYPE             TABLE_SCHEMA  TABLE_NAME
268871     MDL_INTENTION_EXCLUSIVE   MDL_EXPLICIT     Global read lock
270022     MDL_INTENTION_EXCLUSIVE   MDL_STATEMENT    Global read lock
268871     MDL_SHARED_NO_READ_WRITE  MDL_EXPLICIT     Table metadata lock   spru          searchsobjects
268871     MDL_SHARED_NO_READ_WRITE  MDL_EXPLICIT     Table metadata lock   spru          searches
268871     MDL_INTENTION_EXCLUSIVE   MDL_EXPLICIT     Schema metadata lock  spru
270022     MDL_INTENTION_EXCLUSIVE   MDL_TRANSACTION  Schema metadata lock  spru

The threads are
ID: 270022 TIME:185 COMMAND:Query STATE:Waiting for table metadata lock 
USER:spru DB:spru

LOCK TABLES searchsobjects WRITE, searches WRITE

ID: 268871 TIME:3 COMMAND:Query STATE:updating USER:spru DB:spru
DELETE FROM searchsobjects WHERE search_id IN ( 
3680622,3677720,3679348,3679347,3680621,3678106,3678105,3680597,3680596,3680595,3676915,3676914,3676913,36777

19,3677718,3677717,3677716,3676984,3677795,3677794,3677793,3677792,3677796,3677802,3677801,3677800,3677799,3677798,3677797,3680580,3676988,3677791,3680589,36
77790,3677789,3677788,3677787,3677786,3677785,3677784,3677783,3677782,3680575,3677781,3677780,369,368,367,366,365,364,363,362
,361,360,3677769,3677768,3677767,3677766,3677765,3677764,3680619,3680620,3682405,3677763,3677762,3677761,3677760,3677759,3677758,3680601,3677757,3680
627,3680628,3680576,3680577,3680625,3680626,3680624,3680623,3677754,3679280,3679279,3679278,3679277,3679276,3679867,3679890,3680588,3677753,3677064,3677752,3
677751,3677750,3677749,3679608,3679607,3679606,3679605,3680613 )


So, by thread id it seems like the DELETE started first and then the LOCK TABLES 
was issued.
However, I do not understand how the GLOBAL READ LOCK became involved in 
all this. And both the LOCK TABLES and the DELETE requested a global read lock.


All tables are myisam. MariaDB  is 10.0.22 (mysql 5.6 based as i understand)

Artem




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Global read lock on delete

2015-12-09 Thread shawn l.green



On 12/9/2015 9:59 AM, Artem Kuchin wrote:

Hello!

I am actually using MariaDB, but they do not seem to have any public
discussion system
and i suppose that engine is the same basically, so, problems are
probably  the same.

Today i setup the server to show locks and notice this:

|THREAD_ID   LOCK_MODE   LOCK_DURATION   LOCK_TYPE
TABLE_SCHEMATABLE_NAME
268871  MDL_INTENTION_EXCLUSIVE MDL_EXPLICITGlobal read lock
270022  MDL_INTENTION_EXCLUSIVE MDL_STATEMENT   Global read lock
268871  MDL_SHARED_NO_READ_WRITEMDL_EXPLICITTable metadata
lock sprusearchsobjects
268871  MDL_SHARED_NO_READ_WRITEMDL_EXPLICITTable metadata
lock sprusearches
268871  MDL_INTENTION_EXCLUSIVE MDL_EXPLICITSchema metadata lock
spru
270022  MDL_INTENTION_EXCLUSIVE MDL_TRANSACTION Schema metadata lock
spru

The threads are
ID: 270022 TIME:185 COMMAND:Query STATE:Waiting for table metadata lock
USER:spru DB:spru
LOCK TABLES searchsobjects WRITE, searches WRITE

ID: 268871 TIME:3 COMMAND:Query STATE:updating USER:spru DB:spru
DELETE FROM searchsobjects WHERE search_id IN (
3680622,3677720,3679348,3679347,3680621,3678106,3678105,3680597,3680596,3680595,3676915,3676914,3676913,36777

19,3677718,3677717,3677716,3676984,3677795,3677794,3677793,3677792,3677796,3677802,3677801,3677800,3677799,3677798,3677797,3680580,3676988,3677791,3680589,36

77790,3677789,3677788,3677787,3677786,3677785,3677784,3677783,3677782,3680575,3677781,3677780,369,368,367,366,365,364,363,362

,361,360,3677769,3677768,3677767,3677766,3677765,3677764,3680619,3680620,3682405,3677763,3677762,3677761,3677760,3677759,3677758,3680601,3677757,3680

627,3680628,3680576,3680577,3680625,3680626,3680624,3680623,3677754,3679280,3679279,3679278,3679277,3679276,3679867,3679890,3680588,3677753,3677064,3677752,3

677751,3677750,3677749,3679608,3679607,3679606,3679605,3680613 )


So, by thread id it seems like DELETE started first and the LOCK TABLES
was issued.
However, i do not understand how GLOBAL READ LOCK became involved in
this all? And both lock tables and delete requested global read lock.

All tables are myisam. MariaDB  is 10.0.22 (mysql 5.6 based as i
understand)

Artem



You gave the answer in your last statement: "All tables are myisam" .

The MyISAM storage engine is not transactional and it does not do 
row-level locking. All UPDATE and DELETE operations require a full table 
lock to perform and those must wait for all earlier readers or writers 
to exit the table before they can start.  INSERT operations are special 
as you can enable a mode to allow INSERTs to happen only at the end of 
the file and not be blocked while one of the other two operations are in 
progress.


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Global read lock on delete

2015-12-09 Thread shawn l.green



On 12/9/2015 12:06 PM, Artem Kuchin wrote:

On 09.12.2015 19:35, shawn l.green wrote:


 INSERT operations are special as you can enable a mode to allow
INSERTs to happen only at the end of the file and not be blocked while
one of the other two operations are in progress.



Cannot find anything about that. Can you be a little more specific? It
is unrelated to my question, but would be good to know about too.

Artem



It's been a long while since I had to think about the variable 
--concurrent-insert so I blurred the lines just a little. I apologize.



Changing the mode cannot allow concurrent UPDATE or DELETE with an 
INSERT. The UPDATE or DELETE will always ask for a full table lock. It 
only allows for concurrent SELECT and INSERT commands to happen at the 
same time to the same MyISAM table.


http://dev.mysql.com/doc/refman/5.6/en/concurrent-inserts.html
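
A minimal sketch of changing it at runtime (assuming the SUPER privilege; 
1/AUTO is the default, 2/ALWAYS also allows concurrent inserts into tables 
that have holes from deleted rows):

SET GLOBAL concurrent_insert = 2;
SHOW GLOBAL VARIABLES LIKE 'concurrent_insert';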

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Global read lock on delete

2015-12-09 Thread Artem Kuchin

On 09.12.2015 19:35, shawn l.green wrote:



On 12/9/2015 9:59 AM, Artem Kuchin wrote:

Hello!
|THREAD_ID   LOCK_MODE   LOCK_DURATION   LOCK_TYPE
TABLE_SCHEMATABLE_NAME
268871  MDL_INTENTION_EXCLUSIVE MDL_EXPLICITGlobal read lock
270022  MDL_INTENTION_EXCLUSIVE MDL_STATEMENT   Global read lock
268871  MDL_SHARED_NO_READ_WRITEMDL_EXPLICITTable metadata
lock sprusearchsobjects
268871  MDL_SHARED_NO_READ_WRITEMDL_EXPLICITTable metadata
lock sprusearches
268871  MDL_INTENTION_EXCLUSIVE MDL_EXPLICITSchema metadata lock
spru
270022  MDL_INTENTION_EXCLUSIVE MDL_TRANSACTION Schema metadata lock
spru




You gave the answer in your last statement: "All tables are myisam" .

The MyISAM storage engine is not transactional and it does not do 
row-level locking. All UPDATE and DELETE operations require a full 
table lock to perform and those must wait for all earlier readers or 
writers to exit the table before they can start.  INSERT operations 
are special as you can enable a mode to allow INSERTs to happen only 
at the end of the file and not be blocked while one of the other two 
operations are in progress.


TABLE LOCK is okay, i understand that. But i see GLOBAL READ LOCK - not 
table lock, but GLOBAL.

As i understand that it means ALL TABLES IN ALL DATABASES. Why?

Artem

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Global read lock on delete

2015-12-09 Thread shawn l.green



On 12/9/2015 11:59 AM, Artem Kuchin wrote:

On 09.12.2015 19:35, shawn l.green wrote:



On 12/9/2015 9:59 AM, Artem Kuchin wrote:

Hello!
|THREAD_ID   LOCK_MODE   LOCK_DURATION   LOCK_TYPE
TABLE_SCHEMATABLE_NAME
268871  MDL_INTENTION_EXCLUSIVE MDL_EXPLICITGlobal read lock
270022  MDL_INTENTION_EXCLUSIVE MDL_STATEMENT   Global read lock
268871  MDL_SHARED_NO_READ_WRITEMDL_EXPLICITTable metadata
lock sprusearchsobjects
268871  MDL_SHARED_NO_READ_WRITEMDL_EXPLICITTable metadata
lock sprusearches
268871  MDL_INTENTION_EXCLUSIVE MDL_EXPLICITSchema metadata lock
spru
270022  MDL_INTENTION_EXCLUSIVE MDL_TRANSACTION Schema metadata lock
spru




You gave the answer in your last statement: "All tables are myisam" .

The MyISAM storage engine is not transactional and it does not do
row-level locking. All UPDATE and DELETE operations require a full
table lock to perform and those must wait for all earlier readers or
writers to exit the table before they can start.  INSERT operations
are special as you can enable a mode to allow INSERTs to happen only
at the end of the file and not be blocked while one of the other two
operations are in progress.


TABLE LOCK is okay, i understand that. But i see GLOBAL READ LOCK - not
table lock, but GLOBAL.
As i understand that it means ALL TABLES IN ALL DATABASES. Why?

Artem



That is something the official MySQL does not do. You would need to 
research the MariaDB fork's documentation to see why they report it as a 
global lock.


I'm thinking that it might not be a full lock on all tables, just on the 
one, to prevent someone from changing the table's design before the 
queued UPDATE or DELETE could complete. We do that, too. We lock the 
definition while any writer is using the table. That is performed using 
a metadata lock (MDL).


I, personally, have not had any time at all to dig that deeply into how 
the forks differ from the original in terms of lock management. Maybe 
another person on this list will know?


Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Global read lock on delete

2015-12-09 Thread Laurynas Biveinis
Artem -

> |THREAD_ID   LOCK_MODE   LOCK_DURATION   LOCK_TYPE TABLE_SCHEMA
> TABLE_NAME
> 268871  MDL_INTENTION_EXCLUSIVE MDL_EXPLICITGlobal read lock
> 270022  MDL_INTENTION_EXCLUSIVE MDL_STATEMENT   Global read lock
> 268871  MDL_SHARED_NO_READ_WRITEMDL_EXPLICITTable metadata lock
> sprusearchsobjects
> 268871  MDL_SHARED_NO_READ_WRITEMDL_EXPLICITTable metadata lock
> sprusearches
> 268871  MDL_INTENTION_EXCLUSIVE MDL_EXPLICITSchema metadata lockspru
> 270022  MDL_INTENTION_EXCLUSIVE MDL_TRANSACTION Schema metadata lockspur

The global read lock is in MDL_INTENTION_EXCLUSIVE mode; as an
intention lock it is not an actual global read lock in effect, but
rather something to prevent another thread from taking the GRL while
GRL-incompatible statements (DELETE, LOCK TABLES) are still running.
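
For anyone who wants to reproduce the listing above: the column names match
MariaDB's metadata_lock_info plugin, so something along these lines should
show the current MDL state (a sketch, assuming the plugin is available):

INSTALL SONAME 'metadata_lock_info';
SELECT * FROM information_schema.METADATA_LOCK_INFO;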

-- 
Laurynas

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Global read lock on delete

2015-12-09 Thread Artem Kuchin

On 09.12.2015 19:35, shawn l.green wrote:


 INSERT operations are special as you can enable a mode to allow 
INSERTs to happen only at the end of the file and not be blocked while 
one of the other two operations are in progress.




Cannot find anything about that. Can you be a little more specific? It 
is unrelated to my question, but would be good to know about too.


Artem

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



MySQL Community Server 5.7.10 has been released

2015-12-07 Thread Bjorn Munch
Dear MySQL users,

MySQL Server 5.7.10, a new version of the popular Open Source
Database Management System, has been released. MySQL 5.7.10 is
recommended for use on production systems.

For an overview of what's new in MySQL 5.7, please see

http://dev.mysql.com/doc/refman/5.7/en/mysql-nutshell.html

For information on installing MySQL 5.7.10 on new servers, please see
the MySQL installation documentation at

  http://dev.mysql.com/doc/refman/5.7/en/installing.html

MySQL Server 5.7.10 is available in source and binary form for a number of
platforms from the "Development Releases" selection of our download
pages at

  http://dev.mysql.com/downloads/mysql/

MySQL Server 5.7.10 is also available from our repository for Linux
platforms, go here for details:

  http://dev.mysql.com/downloads/repo/

Windows packages are available via the Installer for Windows or .ZIP
(no-install) packages for more advanced needs. The point and click
configuration wizards and all MySQL products are available in the
unified Installer for Windows:

  http://dev.mysql.com/downloads/installer/

5.7.10 also comes with a web installer as an alternative to the full
installer.

The web installer doesn't come bundled with any actual products
and instead relies on download-on-demand to fetch only the
products you choose to install. This makes the initial download
much smaller but increases install time as the individual products
will need to be downloaded.

We welcome and appreciate your feedback, bug reports, bug fixes,
patches, etc.:

  http://bugs.mysql.com/report.php

The following section lists the changes in the MySQL 5.7 since
the release of MySQL 5.7.9. It may also be viewed
online at

http://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-10.html

Enjoy!


==
Changes in MySQL 5.7.10 (2015-12-07)


 * Functionality Added or Changed

 * Security Notes

 * Bugs Fixed

   Functionality Added or Changed

 * InnoDB: Enabling the new
   innodb_background_drop_list_empty debug configuration
   option helps avoid test case failures by delaying table
   creation until the background drop list is empty. (Bug
   #21891560)

 * InnoDB: The innodb_support_xa system variable, which
   enables support for two-phase commit in XA transactions,
   is deprecated. InnoDB support for two-phase commit in XA
   transactions is always enabled as of MySQL 5.7.10.
   Disabling innodb_support_xa is no longer permitted as it
   makes replication unsafe and prevents performance gains
   associated with binary log group commit.

 * MySQL distributions now include lz4_decompress and
   zlib_decompress utilities that can be used to decompress
   mysqlpump output that was compressed using the
   --compress-output=LZ4 and --compress-output=ZLIB options.
   For more information, see lz4_decompress --- Decompress
   mysqlpump LZ4-Compressed Output
   (http://dev.mysql.com/doc/refman/5.7/en/lz4-decompress.html),
   and zlib_decompress --- Decompress mysqlpump
   ZLIB-Compressed Output
   (http://dev.mysql.com/doc/refman/5.7/en/zlib-decompress.html).
   (Bug #21644479)

 * mysql_upgrade now attempts to print more informative
   errors than FATAL ERROR: Upgrade failed. (Bug #21489398)

 * These client programs now support the
   --enable-cleartext-plugin option: mysqlcheck, mysqldump,
   mysqlimport, mysqlshow. This option enables the
   mysql_clear_password cleartext authentication plugin.
   (See The Cleartext Client-Side Authentication Plugin
(http://dev.mysql.com/doc/refman/5.7/en/cleartext-authentication-plugin.html).)
   (Bug #21235226)

 * These functions now produce warnings for invalid input:
   UNHEX(), INET_NTOA(), INET_ATON(), INET6_NTOA(),
   INET6_ATON(). Also, INET_ATON() now returns NULL and
   produces a warning for arguments that contain more than
   four groups of digits. (Bug #78856, Bug #22042027, Bug
   #78884, Bug #22071558)

   Security Notes

 * Previously, MySQL supported TLSv1.0 and TLSv1.1 SSL
   protocols. When compiled with OpenSSL 1.0.1 or higher,
   MySQL now also supports TLSv1.2 protocol to enable a
   higher level of security for SSL connections.

   Because TLSv1.2 requires OpenSSL, this feature is
   available in binary distributions only for MySQL
   Commercial Server, and not for MySQL Community Server
   (which is compiled using yaSSL). To enable TLSv1.2
   support if you build from source, you must set the
   WITH_SSL CMake option to use OpenSSL when configuring
   MySQL.

   Bugs Fixed

 * InnoDB: When attempting to create a cascading foreign key
   constraint on a primary key column used in a virtual
   index, the error message that was returned did not
   include information from
   dict_foreign_has_col_in_v_index(). (Bug 

MySQL Cluster Manager 1.4.0 has been released

2015-12-07 Thread Lars Tangvald

Hello all,

MySQL Cluster Manager 1.4.0, has been released and can be downloaded
from the My Oracle Support (MOS) website. It will also be available
on Oracle Software Delivery Cloud at http://edelivery.oracle.com with
the next monthly update

MySQL Cluster Manager is an optional component of the MySQL Cluster Carrier
Grade Edition, providing a command-line interface that automates common
management tasks, including the following online operations:
 - Configuring and starting MySQL Cluster
 - Upgrades
 - Adding and removing cluster nodes
 - Adding and removing site hosts
 - Configuration changes
 - Backup and restore

MySQL Cluster Manager is a commercial extension to the MySQL family of 
products.

More details can be found at http://www.mysql.com/products/cluster/mcm/

A brief summary of changes in MySQL Cluster Manager version 1.4.0 is 
listed below:


Changes in MySQL Cluster Manager 1.4.0 (2015-12-07)

   This section documents all changes and bug fixes that have
   been applied in MySQL Cluster Manager 1.4.0 since the release
   of MySQL Cluster Manager version 1.3.6.

   Functionality Added or Changed

 * Packaging: MySQL Cluster Manager is now built and shipped
   with GLib-2.44.0, OpenSSL 1.0.1p, and the MySQL 5.6
   client library. (Bug #22202878)

 * Agent: When using the import cluster command, before, if
   an SQL node was started at the command line with options
   outside of a special, pre-defined set, the import would
   fail with the complaint that those options were
   unsupported. Now, import will continue in the situation,
   as long as those options and their values also appear in
   the node's configuration created by MySQL Cluster Manager
   for import. (Bug #21943518)

 * Agent: A warning is now logged (if log-level=warning)
   when a failed process is not restarted because the
   parameter StopOnError has the value "true." (Bug
   #21575241)

 * Agent: Two new options have been introduced for the
   upgrade cluster command: --retry and --nodeid. They,
   together with the --force option, allow a retry after an
   initial attempt to upgrade a cluster has failed. See
   description for upgrade cluster for detail. (Bug
   #20469067, Bug #16932006, Bug #21200698)

 * Client: The get command now returns attributes in the
   same order as the MySQL Cluster ndb_mgmd command does
   when the --print-full-config option is used, with the
   non-data nodes going first, and in increasing order of
   the node ID. (Bug #22202973)

 * Client: A new autotune command has been introduced, which
   tunes a number of parameters of the cluster to optimize
   its performance. (Bug #22202855)

 * Client: The show settings command has a new --hostinfo
   option, with which the command prints out information on
   the host the mcm client is connected to. (Bug #21923561)

 * Client: You can now use the wildcard * (asterisk
   character) to match attribute names in a get command. See
   The get Command
   (http://dev.mysql.com/doc/mysql-cluster-manager/1.4/en/mcm-get.html)
   for examples. (Bug #18069656)

   Bugs Fixed

 * Agent: On Windows platform, after a cluster import, the
   subsequent cluster restart would time out if a
   non-default value of the option pid_file has been
   imported for a mysqld node. (Bug #21943518)
   References: This bug is a regression of Bug #2944.

 * Agent: When a data node could not be restarted after a
   set command because some attributes were set wrongly,
   another set command could not be used to correct the
   attributes, because the set command required the data
   node to be running. With this fix, the second set command
   can now be executed even when the data node is not
   running, as long as the --force option is used. The
   failed node is then restarted, followed by a rolling
   restart of the cluster. (Bug #21943518)

 * Agent: A timeout occurred for a restore cluster command
   when the number of tables in the cluster was huge
   (>1000). It was because a timeout extension was blocked.
   This fix unblocks the extension. (Bug #21393857)

 * Agent: At the initial startup of a large cluster (with
   memory size in the order of 10GB), the process might time
   out while waiting for a data node to start. This fix
   makes the transaction timeout longer for data node
   initiation. (Bug #21355383)

 * Agent: Under some conditions, a check status command
   might report negative node group ID values for processes
   after an add process command was completed. That was
   because the agent was reporting the node group IDs before
   their proper values had arrived, after the creation of
   new node groups. This fix makes the agent wait for the
   correct node group IDs before reporting them. 

MySQL Connector/J 5.1.38 has been released

2015-12-07 Thread Hery Ramilison

Hello all,

MySQL Connector Java 5.1.38, a maintenance release of the production 5.1
branch, has been released. Connector/J is a Type-IV pure-Java JDBC
driver for MySQL. It is suitable for use with MySQL server versions 5.5,
5.6, and 5.7. It is also the first release of MySQL Connector/J to support
the Java Database Connectivity (JDBC) 4.2 API.

MySQL Connector Java is available in source and binary form from the
Connector/J download pages at
http://dev.mysql.com/downloads/connector/j/5.1.html

MySQL Connector Java (Commercial) is available for download on the My
Oracle Support (MOS) website. This release will be available on
eDelivery (OSDC) in next month's upload cycle.

As always, we recommend that you check the "CHANGES" file in the
download archive to be aware of changes in behavior that might affect
your application.

MySQL Connector/J 5.1.38 includes the following general bug fixes and
improvements, also available in more detail on
http://dev.mysql.com/doc/relnotes/connector-j/en/news-5-1-38.html :

Changes in MySQL Connector/J 5.1.38 (2015-12-07)

   Functionality Added or Changed

 * When connecting to a MySQL server 5.7 instance that
   supports TLS, Connector/J now prefers a TLS over a plain
   TCP connection. (Bug #21947042)

 * Two new connection properties, allowSlaveDownConnections
   and readFromMasterWhenNoSlaves, have been introduced for
   configuring replication-aware connections. See
   Configuring Master/Slave Replication with Connector/J
(http://dev.mysql.com/doc/connector-j/en/connector-j-master-slave-replication-connection.html)
   and the entries for the new properties in Driver/Datasource
   Class Names, URL Syntax and Configuration Properties for
   Connector/J 
(http://dev.mysql.com/doc/connector-j/en/connector-j-reference-configuration-properties.html)

   for details. (Bug #21286268)

   Bugs Fixed

 * In a Fabric environment, a ClassCastException resulted
   when a stored procedure was called with an INOUT
   parameter. This was because in this situation, what was
   being cast was a proxied version of the object. This fix
   extracts the underlying object from its proxy, allowing
   it to be cast to the implementing class. (Bug #22096981)

 * getTypeInfo() erroneously returned a PRECISION value of
   "255" instead of "65535" for the data type VARBINARY.
   (Bug #21978216)

 * A deadlock was observed when in the same server group,
   two concurrent threads were using different Fabric
   connections, and one executed a failover procedure while
   the other simultaneously called a method that acquired a
   lock on the underlying replication connection instance
   monitor. This fix revised the locking mechanism of
   replication connections, in order to prevent the observed
   deadlocks. (Bug #21966391, Bug #21934573, Bug #78710)

 * State information of a Fabric group was not updated by
   Connector/J after a local cache's Time to Live (TTL)
   expired, which resulted in the client not recognizing any
   topology changes happening to the group like a master
   failover, a server rejoining, and so on. (Bug #21296840,
   Bug #17910835)

 * Connector/J threw an AbstractMethodError when a JDBC 4
   functionality (for example, createBlob()) was used on a
   replication-aware connection. This has been fixed by
   putting replication connections under the multi-host
   connection proxy structure that Connector/J has been
   using for load-balanced and fail-over connections. (Bug
   #11763419)
   References: See also Bug #11763401.

 * After the initial call of Connection.setReadOnly()
   following the creation of a replication-aware connection,
   subsequent calls of Connection.setReadOnly() could not
   change the nature of the connection. This has been fixed
   by putting replication connections under the multi-host
   connection proxy structure that Connector/J has been
   using for load-balanced and fail-over connections. (Bug
   #11763401)
   References: See also Bug #11763419.

Documentation
--
Online: http://dev.mysql.com/doc/connector-j/en/index.html

Reporting Bugs
---
We welcome and appreciate your feedback and bug reports:
http://bugs.mysql.com/

On Behalf of the MySQL/ORACLE RE Team
Hery Ramilison

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



MySQL Community Server 5.6.28 has been released

2015-12-07 Thread Hery Ramilison

Dear MySQL users,

MySQL Server 5.6.28, a new version of the popular Open Source
Database Management System, has been released. MySQL 5.6.28 is
recommended for use on production systems.

For an overview of what's new in MySQL 5.6, please see

http://dev.mysql.com/doc/refman/5.6/en/mysql-nutshell.html

For information on installing MySQL 5.6.28 on new servers or upgrading
to MySQL 5.6.28 from previous MySQL releases, please see

  http://dev.mysql.com/doc/refman/5.6/en/installing.html

MySQL Server is available in source and binary form for a number of
platforms from our download pages at

  http://dev.mysql.com/downloads/

Not all mirror sites may be up to date at this point in time, so if you
can't find this version on some mirror, please try again later or choose
another download site.

We welcome and appreciate your feedback, bug reports, bug fixes,
patches, etc:

  https://wikis.oracle.com/display/mysql/Contributing

The following section lists the changes in the MySQL 5.6 since
the release of MySQL 5.6.27. It may also be viewed
online at

http://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-28.html

Enjoy!

Changes in MySQL 5.6.28 (2015-12-07):

   Functionality Added or Changed

 * MySQL Server RPM packages now contain a conflict
   indicator for MySQL Connector C, such that an error
   occurs when installing MySQL Server if MySQL Connector C
   is also installed. To install MySQL Server, remove any
   MySQL Connector C packages first. (Bug #21900800)

 * mysql_upgrade now attempts to print more informative
   errors than FATAL ERROR: Upgrade failed. (Bug #21489398)

 * These client programs now support the
   --enable-cleartext-plugin option: mysqlcheck, mysqldump,
   mysqlimport, mysqlshow. This option enables the
   mysql_clear_password cleartext authentication plugin.
   (See The Cleartext Client-Side Authentication Plugin

(http://dev.mysql.com/doc/refman/5.6/en/cleartext-authentication-plugin.html).) 


   (Bug #21235226)

 * Support for building with Solaris Studio 5.13 was added.
   (Bug #21185883)

 * Performance Schema digests in DIGEST_TEXT columns have
   ... appended to the end to indicate when statements
   exceed the maximum statement size and were truncated.
   This is also now done for statement text values in
   SQL_TEXT columns. (Bug #75861, Bug #20519832)
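
    For illustration, the truncation marker on long digests can be
    seen with a query such as this (a minimal sketch; it assumes the
    Performance Schema is enabled):

      $ mysql -u root -p -e "SELECT DIGEST_TEXT
          FROM performance_schema.events_statements_summary_by_digest LIMIT 5;"

    And a sketch of the new client option (the host name, user, and
    output file are placeholders; the option is only needed when the
    account authenticates through a server-side plugin that expects a
    cleartext password, such as PAM):

      $ mysqldump --enable-cleartext-plugin -h db.example.com -u backup -p \
          --all-databases > dump.sql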

   Bugs Fixed

 * InnoDB: InnoDB returned an invalid corruption-related
   error message during an IMPORT TABLESPACE operation. (Bug
   #21838158, Bug #77321)

 * InnoDB: An old version of numactl headers on the build
   host caused a compilation error when building a MySQL
   version that includes NUMA memory policy support. (Bug
   #21785074)

 * InnoDB: An online ALTER TABLE operation caused a server
   exit. (Bug #21640679)

 * InnoDB: A schema mismatch error occurred when importing a
   tablespace that was altered by a DROP INDEX operation on
   the source server. (Bug #21514135, Bug #77659)

 * InnoDB: A duplicate key error that occurred during an
   online DDL operation reported an incorrect key name. (Bug
   #21364096, Bug #77572)

 * InnoDB: An ALTER TABLE operation caused the server to
   exit on disk full. (Bug #21326304, Bug #77497)

 * InnoDB: The system tablespace data file did not extend
   automatically when reaching the file size limit, causing
   startup to fail with a size mismatch error and preventing
   the addition of another system tablespace data file. (Bug
   #21287796, Bug #77128)

 * InnoDB: Altering the letter case of a column introduced
   an inconsistency between the .frm file and the data
   dictionary, resulting in a failed CREATE INDEX operation
   on the altered column. (Bug #20755615)

 * InnoDB: An ALTER TABLE operation that converted a table
   to an InnoDB file-per-table tablespace did not check for
   unknown files with the same name as the destination .ibd
   file, permitting an unknown file of the same name to be
   overwritten. (Bug #19218794, Bug #73225)

 * Replication: Because binlog_error_action=ABORT_SERVER is
   the default in MySQL 5.7.7 and later, it is used for more
   error situations. The abort behavior has been adjusted to
   generate a core dump, to improve troubleshooting. (See the
   example after this list.) (Bug #21486161, Bug #77738)

 * Replication: Fatal errors encountered during flushing or
   synchronizing the binary log were being ignored. Such
   errors are now caught and handled depending on the
   setting of binlog_error_action. (Bug #76795, Bug #68953,
   Bug #20938915, Bug #1407)

 * Internal buffer sizes in resolve_stack_dump were
   increased to accommodate larger symbol space requirements
   for C++ code. (Bug #22071592)

 * MySQL development RPM packages could fail to install if
   MySQL Connector/C development RPM 

QxOrm 1.4.1 released : new documentation (user guide), and boost::serialization becomes optional

2015-12-03 Thread QxOrm contact
Hello,

*QxOrm 1.4.1 has just been released!*


Changes in version QxOrm 1.4.1:
*!!! IMPORTANT NOTE ABOUT THIS VERSION !!!*: it is strongly recommended to
read the QxOrm.pri configuration file of this new version (compilation
options have changed compared to previous versions).
Now, by default, the QxOrm library is much lighter: QxOrm depends only on
*QtCore* and *QtSql* (boost serialization is now optional and not required
by default).
By default, the serialization engine is now based on the Qt *QDataStream*
class (but you can still enable boost serialization by defining the
*_QX_ENABLE_BOOST_SERIALIZATION* compilation option in the QxOrm.pri
configuration file; see the sketch after the list below).
So now, with the default options:
 - QxOrm 1.4.1 is much easier to install because you don't have to deal
with the extra boost serialization dependency;
 - the QxOrm 1.4.1 shared library is 3x smaller than the 1.3.2 version;
 - generated binaries which depend on the QxOrm library are 25% smaller;
 - if you are not using serialization functions in current projects based
on the QxOrm library, then you can define the *_QX_ENABLE_BOOST_SERIALIZATION*
compilation option or leave it out without changing any line of your source
code.


Here are all the other changes in version QxOrm 1.4.1:
 - Improved relationships engine: possibility to select the columns to fetch
using the syntax my_relation { col_1, col_2, etc... };
 - Improved QxTraits module to reduce compilation times and build smaller
binaries;
 - Improved QxOrm website, adding the possibility to search and replacing the
old FAQ with a more organized manual (user guide);
 - New compilation option *_QX_ENABLE_BOOST_SERIALIZATION* to enable the boost
serialization dependency (read the QxOrm.pri configuration file for more
details);
 - New compilation option *_QX_ENABLE_QT_NETWORK* to enable the QxService
module (transfer the persistent layer over the network): read the QxOrm.pri
configuration file for more details;
 - New compilation option *_QX_NO_RTTI* to build the QxOrm library without
C++ RTTI type information;
 - Support for the Qt QDataStream serialization engine (used by default when
the *_QX_ENABLE_BOOST_SERIALIZATION* compilation option is not defined);
 - Improved qx_query class (SQL queries): new method (named *customOperator()*)
which gives the possibility to define a custom operator (for example <@ for
the PostgreSQL ltree type);
 - Fixed a program startup issue due to the 'static initialization order
fiasco' when creating singletons (it was an issue with some compilers during
the shared library link process);
 - New namespace qx::dao::throwable: same functions as the qx::dao namespace,
but they throw a *qx::dao::sql_error* exception when a SQL error occurs
(instead of returning a *QSqlError* instance);
 - Added a *qAssertMsg()* macro to provide a more explicit error message when
throwing an assertion;
 - Included all *.inl files (template implementations) in the QxOrm.pro
project file: QtCreator can now index these *.inl files in its project
treeview;
 - Renamed *QxStringCvt* to *QxConvert*: so if you persist custom types to the
database, you have to rename *QxStringCvt_FromVariant, QxStringCvt_ToVariant*
to *QxConvert_FromVariant, QxConvert_ToVariant*.


You can download the latest version of the *QxOrm library* and the
*QxEntityEditor application* from the QxOrm website.


Regards,


Lionel Marty - QxOrm library


Re: --initialize specified but the data directory has files in it. Aborting.

2015-11-16 Thread Axel Diehl
Hi,

You can check your file limits with:
$ ulimit -Hn
$ ulimit -Sn

The limits are configured in /etc/security/limits.conf

If you google for it, you will find how to set them. After the change, newly
opened shells will have the new parameters.
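
A minimal sketch of the kind of change involved (assuming mysqld runs as the
mysql user and that pam_limits applies to the way the service is started;
run as root):

# echo 'mysql soft nofile 5000' >> /etc/security/limits.conf
# echo 'mysql hard nofile 5000' >> /etc/security/limits.conf

Alternatively, open_files_limit can be raised in the [mysqld] section of
my.cnf, which avoids touching the system-wide limits.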

After changing them, your first error message ("Changed limits:
max_open_files: 1024 (requested 5000)") should be gone, and maybe the next
one too.

Kind regards
Axel



-Original Message-
From: jim Zhou [mailto:jim.jz.z...@gmail.com]
Sent: Friday, 13 November 2015 22:23
To: Reindl Harald
Cc: mysql@lists.mysql.com
Subject: Re: --initialize specified but the data directory has files in it.
Aborting.

[root@deweyods1 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.6 (Santiago)

On Fri, Nov 13, 2015 at 8:56 AM, Reindl Harald wrote:

>
>
> On 13.11.2015 at 17:46, Axel Diehl wrote:
>
>> what kind of OS do you have?
>>
>
> Pretty sure Fedora/CentOS 7 because of "yum" and the package name. If I
> were him, I would just install MariaDB, which is the default MySQL
> implementation these days on most Linux distributions, and hence the
> packages are better maintained.
>
>
> -Original Message-
>> From: jim Zhou [mailto:jim.jz.z...@gmail.com]
>> Sent: Friday, 13 November 2015 17:12
>> To: mysql@lists.mysql.com
>> Subject: --initialize specified but the data directory has files in it.
>> Aborting.
>>
>> Hi,
>>
>> I did "yum install mysql-community-server" and "service mysqld start"
>> I got the error
>>
>> Initializing MySQL database:  2015-11-13T15:54:01.203931Z 0 [Warning] 
>> Changed limits: max_open_files: 1024 (requested 5000) 
>> 2015-11-13T15:54:01.204205Z 0 [Warning] Changed limits: table_open_cache:
>> 431 (requested 2000)
>> 2015-11-13T15:54:01.204397Z 0 [Warning] TIMESTAMP with implicit 
>> DEFAULT value is deprecated. Please use 
>> --explicit_defaults_for_timestamp server option (see documentation for more 
>> details).
>> 2015-11-13T15:54:01.207712Z 0 [ERROR] --initialize specified but the 
>> data directory has files in it. Aborting.
>> 2015-11-13T15:54:01.207751Z 0 [ERROR] Aborting
>>
>> can someone help?
>>
>
>


Re: --initialize specified but the data directory has files in it. Aborting.

2015-11-13 Thread Reindl Harald



On 13.11.2015 at 17:12, jim Zhou wrote:

I did "yum install myswl-community-server" and "service mysqld start"
I got the error

Initializing MySQL database:  2015-11-13T15:54:01.203931Z 0 [Warning]
Changed limits: max_open_files: 1024 (requested 5000)
2015-11-13T15:54:01.204205Z 0 [Warning] Changed limits: table_open_cache:
431 (requested 2000)
2015-11-13T15:54:01.204397Z 0 [Warning] TIMESTAMP with implicit DEFAULT
value is deprecated. Please use --explicit_defaults_for_timestamp server
option (see documentation for more details).
2015-11-13T15:54:01.207712Z 0 [ERROR] --initialize specified but the data
directory has files in it. Aborting.
2015-11-13T15:54:01.207751Z 0 [ERROR] Aborting


This is most likely because the socket file lives by default in
"/var/lib/mysql" and may be created too soon for the task.

Write a bug report, and in the meantime temporarily configure mysqld with
the socket file somewhere else, like /tmp; make sure with "ls -lha
/var/lib/mysql" that the directory is really empty.





