Re: need help from the list admin

2016-04-03 Thread Lentes, Bernd

- On Apr 1, 2016, at 9:56 PM, shawn l.green shawn.l.gr...@oracle.com wrote:

> Correct. MyISAM is not a transactional storage engine. It has no concept
> of COMMIT or ROLLBACK. Changes to it are controlled by a full table lock
> and as soon as the change is complete, the table is unlocked and is
> immediately visible to every other session.
> 
> What the replication system has done is to extend the length of that
> lock until the transaction completes to avoid situations where changes
> appear "out of sequence" to what is recorded in the Binary Log.

>> So when a transaction is rolled back, the inserted data in the MyISAM
>> table remains?
> 
> Correct. All of the changes that could be undone were undone. MyISAM
> changes can't be undone so they stayed in place.
> 

Aah. OK.
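(A minimal sketch of that behavior; table and column names are hypothetical:)

  CREATE TABLE log_myisam (msg VARCHAR(50)) ENGINE=MyISAM;
  CREATE TABLE acct_innodb (balance INT) ENGINE=InnoDB;

  START TRANSACTION;
  INSERT INTO log_myisam VALUES ('attempt #1'); -- MyISAM: permanent at once
  INSERT INTO acct_innodb VALUES (100);         -- InnoDB: still pending
  ROLLBACK;

  SELECT COUNT(*) FROM log_myisam;  -- 1: the MyISAM row survived the rollback
  SELECT COUNT(*) FROM acct_innodb; -- 0: the InnoDB change was undone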

> Try this as an analogy.
> 
> MyISAM tables are like writing directly with ink on paper. If you can
> complete the write, you have changed the row.
> 
> InnoDB tables are like whiteboards. You can undo your pending changes
> before someone else (one of the background threads of the InnoDB engine)
> makes the changes permanent.
> 
> The Binary Log is like a recipe used to rebuild your data in case it
> goes boom. If you start from a backup then repeat the sequence of
> actions as they were recorded in the Binary Log since that backup was
> created, you should wind up with exactly the same data you had before
> the problem.  If there is a problem with that sequence (actions are out
> of order) then rebuilding that data could be a problem.
> 
> Sequence makes a big difference even in less esoteric settings. Try this...
> 
> Start with your phone lying flat on your desk (screen up) pointing
> directly away from you. Roll it 45 degrees to the right. Now lift it
> vertically towards you 90 degrees (maintain the roll).  The phone is now
> pointing straight up but the screen is turned away from you.
> 
> Then try those same actions in reverse order: lift first, then roll it
> to the right. In this case the screen is pointing in your general
> direction but the whole thing is leaning off to one side.

I don't understand the example completely, but I understand
what you want to say: changing the order of statements
may lead to a different result.

Bernd
 




Re: need help from the list admin

2016-04-03 Thread Lentes, Bernd


- On Apr 1, 2016, at 5:45 PM, shawn l.green shawn.l.gr...@oracle.com wrote:

>> Is the mix of MyISAM and InnoDB a problem with Row-Based Logging,
>> with Statement-Based Logging, or with both?
>>
>>
> 
> Both.
> 
>
>>
>> I don't understand the example:
>> Do "begin transaction" and "COMMIT" have any influence on the insert?
>> From what I understand, a MyISAM table does not support transactions,
>> so it should not care about "begin transaction" and "commit".
>> So the insert should be done immediately. The SELECT on the InnoDB table
>> also should not wait, because it's applied without "LOCK IN SHARE MODE".
>> So x rows are added immediately. This is done on the master, written to
>> the log and then replicated to the slave, which also adds x rows.
>> Then connection 2 deletes 8 rows, one of which is from the previous insert.
>> First on the master and then on the slave.
>> I assume that the connections are established in the order they appear here
>> (connection 2 is established after the insert in connection 1).
>> So on both, 8 rows are deleted.
>>
>>
> 
> 
> You said, "This is done on the master, written in the log and then
> replicated to the slave, "
> 
> The INSERT would not appear in the Binary log until after session 1
> commits. Even if session 1 does a rollback, you would still see the
> entire transaction including the ROLLBACK. We have to do it that way to
> preserve the transaction isolation of the InnoDB data.
> 
> Yes, you read the shorthand correctly and in the correct temporal sequence.
>   session1 did two commands.
>   session2 issued one command.
>   session1 did a commit.
> 
> It does not matter if the sessions were created in that order or not.
> Only the sequence in which the commands are executed matters.
> 
> 
>>
>>
>> Independent of the binlog_format?
>> Does commit mean "write now to the binlog"?
>>
> 
> Effectively, it does (for InnoDB-based transactions). InnoDB first
> writes the entire transaction to the Binary Log (it was sitting in the
> Binlog cache up until this point) then it pumps the necessary data into
> the REDO log (for disaster recovery). At that point the transaction is
> considered "committed".  In the case of a rollback, there is nothing to
> log in either location, no permanent changes were made to the data.
> However if the transaction that rolled back contained statements that
> changed MyISAM tables, then the entire transaction (all of the work it
> did) needs to be written into the Binary Log and REDO log just to have
> the very last command be "ROLLBACK".   What that will do is create the
> same sequence of data changes on the slave that happened on the master.
> 
> 
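(As an aside: how durably that two-step write happens at each commit is
tuned by two server options; a minimal my.cnf sketch, with the common
fully-durable values:)

  [mysqld]
  sync_binlog = 1                      # fsync the binary log at every commit
  innodb_flush_log_at_trx_commit = 1   # flush the REDO log at every commit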
In case of a rollback: is the INSERT into the MyISAM table also rolled back?
I think not.


Bernd
 




Re: need help from the list admin

2016-04-01 Thread Reindl Harald



On Apr 1, 2016, at 9:09 PM, Lentes, Bernd wrote:

- On Apr 1, 2016, at 5:45 PM, shawn l.green shawn.l.gr...@oracle.com wrote:

You said, "This is done on the master, written in the log and then
replicated to the slave, "

The INSERT would not appear in the Binary log until after session 1
commits.


So the INSERT respects the transaction (begin transaction ... COMMIT)
although it's a MyISAM table?
Because I read that MyISAM does not care about it:
http://stackoverflow.com/questions/8036005/myisam-engine-transaction-support


and hence you should not mix InnoDB and non-transactional tables; a
MyISAM table is not and will never be part of a transaction






Re: need help from the list admin

2016-04-01 Thread Lentes, Bernd
Sorry for the PM!

- On Apr 1, 2016, at 5:45 PM, shawn l.green shawn.l.gr...@oracle.com wrote:

>>> You would be better served by first converting your MyISAM tables to
>>> InnoDB to stop mixing storage engine behaviors (transactional and
>>> non-transactional) within the scope of a single transaction. But if you
>>> cannot convert them, using MIXED will be a good compromise.
>>
>> Is the mix of MyISAM and InnoDB a problem with Row-Based Logging,
>> with Statement-Based Logging, or with both?
> Both.

Aah! In the beginning I thought it was just a problem with RBL.

>>> Look at this sequence and think what would happen without that "stronger
>>> locking" you mentioned earlier.
>>>
>>> (connection 1)
>>>begin transaction
>>>INSERT myisam_table SELECT ... FROM InnoDB_table WHERE ...
>>>   (connection 2)
>>>   DELETE myisam_table WHERE ...  (this removes one of the rows that
>>>   connection 1 just added)
>> (end of connection 2)
>>> (connection 1)
>>>COMMIT


>> I don't understand the example:
>> Do "begin transaction" and "COMMIT" have any influence on the insert?
>> From what I understand, a MyISAM table does not support transactions,
>> so it should not care about "begin transaction" and "commit".
>> So the insert should be done immediately. The SELECT on the InnoDB table
>> also should not wait, because it's applied without "LOCK IN SHARE MODE".
>> So x rows are added immediately. This is done on the master, written to
>> the log and then replicated to the slave, which also adds x rows.
>> Then connection 2 deletes 8 rows, one of which is from the previous insert.
>> First on the master and then on the slave.
>> I assume that the connections are established in the order they appear here
>> (connection 2 is established after the insert in connection 1).
>> So on both, 8 rows are deleted.
>>
>>
> 
> 
> You said, "This is done on the master, written in the log and then
> replicated to the slave, "
> 
> The INSERT would not appear in the Binary log until after session 1
> commits. 

So the INSERT respects the transaction (begin transaction ... COMMIT)
although it's a MyISAM table?
Because I read that MyISAM does not care about it:
http://stackoverflow.com/questions/8036005/myisam-engine-transaction-support


>> Does commit mean "write now to the binlog"?
>>
> 
> Effectively, it does (for InnoDB-based transactions). InnoDB first
> writes the entire transaction to the Binary Log (it was sitting in the
> Binlog cache up until this point) then it pumps the necessary data into
> the REDO log (for disaster recovery). 

And at what point in that temporal sequence is the data written to the tablespace?

> At that point the transaction is
> considered "committed".  In the case of a rollback, there is nothing to
> log in either location, no permanent changes were made to the data.
> However if the transaction that rolled back contained statements that
> changed MyISAM tables, then the entire transaction (all of the work it
> did) needs to be written into the Binary Log and REDO log just to have
> the very last command be "ROLLBACK".   What that will do is create the
> same sequence of data changes on the slave that happened on the master.

So when a transaction is rolled back, the inserted data in the MyISAM table
remains?

Thanks again.

Bernd

P.S. I tried several times to rename the subject to something like
"Replication - was "need help from the list admin"", but the mail
always bounced back because it was recognized as spam!?!
I just renamed the subject!


Bernd
 




Re: need help from the list admin

2016-04-01 Thread Lentes, Bernd

- On Apr 1, 2016, at 5:52 PM, shawn l.green shawn.l.gr...@oracle.com wrote:

>> Which is true? When the transaction started, or when the first read is
>> performed?
 
> Until you need to establish a snapshot of the data, you don't need
> a snapshot position.
> 
> The transaction physically begins (rows begin to be protected against
> changes by other transactions) with the first read.

OK. But only when the first read is inside a transaction and the isolation
level is REPEATABLE READ is the result "snapshotted" for further queries,
so that they see the same data?
When several SELECTs are issued outside a transaction, do they always get
the current data?
 
> Consider the alternative: If we started protecting data with the START
> TRANSACTION command we would need to protect every row in every table in
> every database.  That is simply not efficient.

YES.

> We protect the pages that contain the rows that are physically required
> by the individual transaction. This is a much smaller locking footprint
> and is much easier to manage.
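(A two-session sketch of that rule; the table name is hypothetical and both
sessions use the default REPEATABLE READ level:)

  -- Session 1                          -- Session 2
  START TRANSACTION;
                                        INSERT INTO t VALUES (42); -- autocommitted
  SELECT COUNT(*) FROM t;  -- first read: snapshot taken now, row 42 visible
                                        INSERT INTO t VALUES (43); -- autocommitted
  SELECT COUNT(*) FROM t;  -- same snapshot: row 43 is NOT visible
  COMMIT;
  SELECT COUNT(*) FROM t;  -- outside the transaction: row 43 visible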

Bernd
 




Re: need help from the list admin

2016-04-01 Thread shawn l.green



On 4/1/2016 10:08 AM, Lentes, Bernd wrote:



- On Apr 1, 2016, at 3:12 PM, Bernd Lentes 
bernd.len...@helmholtz-muenchen.de wrote:

Btw:
I read about isolation levels. REPEATABLE READ is the default for InnoDB.
http://dev.mysql.com/doc/refman/5.5/en/glossary.html#glos_repeatable_read says:

"...so that all queries within a transaction see data from the same snapshot,
that is, the data as it was at the time the transaction started.".

http://dev.mysql.com/doc/refman/5.5/en/glossary.html#glos_consistent_read says:

"With the repeatable read isolation level, the snapshot is based on the time
when the first read operation is performed".

Which is true? When the transaction started, or when the first read is
performed?



Until you need to establish a snapshot of the data, you don't need
a snapshot position.


The transaction physically begins (rows begin to be protected against 
changes by other transactions) with the first read.


Consider the alternative: If we started protecting data with the START 
TRANSACTION command we would need to protect every row in every table in 
every database.  That is simply not efficient.


We protect the pages that contain the rows that are physically required 
by the individual transaction. This is a much smaller locking footprint 
and is much easier to manage.





Bernd




--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: need help from the list admin

2016-04-01 Thread shawn l.green



On 4/1/2016 9:12 AM, Lentes, Bernd wrote:

- On Mar 25, 2016, at 9:54 PM, shawn l.green shawn.l.gr...@oracle.com wrote:



"Unsafe" in that sense refers to the fact that certain commands can
have a different effect when processed from the Binary Log than they did
when they were executed originally on the system that wrote the Binary
Log. This would be true for both a point-in-time recovery situation and
for replication. The topic of unsafe commands is covered rather well on
these pages:
http://dev.mysql.com/doc/refman/5.6/en/replication-rbr-safe-unsafe.html
http://dev.mysql.com/doc/refman/5.6/en/replication-sbr-rbr.html

This is particularly true for commands that may cross transactional
boundaries and change non-transactional tables. The effects of those
commands are apparent immediately to any other user of the server. They
do not rely on the original transaction to complete with a COMMIT. The
workaround we employed was to keep the non-transactional table locked
(to keep others from altering it) until the transaction completes
(COMMIT or ROLLBACK). That way we do our best to make all changes
"permanent" at the same time.



Hi,

Oh my god. The more I read, the more confused I get. I totally underrated
replication.
But I will not give up ;-) And I appreciate your help, Shawn.
What do you mean by the workaround? Does MySQL do this automatically, or does
it have to be done in the app code?



It's inside the server. You don't need to do anything as a user.



You would be better served by first converting your MyISAM tables to
InnoDB to stop mixing storage engine behaviors (transactional and
non-transactional) within the scope of a single transaction. But if you
cannot convert them, using MIXED will be a good compromise.


Is the mix of MyISAM and InnoDB a problem with Row-Based Logging,
with Statement-Based Logging, or with both?




Both.



Look at this sequence and think what would happen without that "stronger
locking" you mentioned earlier.

(connection 1)
   begin transaction
   INSERT myisam_table SELECT ... FROM InnoDB_table WHERE ...
  (connection 2)
  DELETE myisam_table WHERE ...  (this removes one of the rows that
  connection 1 just added)

(end of connection 2)

(connection 1)
   COMMIT

When the slave sees this sequence, it will get the command from
Connection2 first (it completed first so it winds up in the Binary Log).
It removed 8 rows on the master but it would only find 7 on the slave.
Why? The 8th row has not been added to the MyISAM table on the slave
because the transaction that does it hasn't been recorded to the Binary
Log yet.

That's where the stronger locking comes into play. If we had not
blocked connection 2 until connection 1 completed, things would be out of
order, temporally speaking. It's still possible for things to happen out of
sequence on the slave when mixing transactional and non-transactional
tables in the same transaction.
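(To make the failure concrete, a hedged sketch of the statement order the
Binary Log would record without that lock; table and column names are
hypothetical:)

  -- Connection 2 finished first, so its DELETE is logged first:
  DELETE FROM myisam_table WHERE batch = 7;  -- removed 8 rows on the master
  -- Connection 1's transaction is only logged at COMMIT time:
  BEGIN;
  INSERT INTO myisam_table SELECT * FROM innodb_table WHERE batch = 7;
  COMMIT;
  -- A slave replaying this order deletes before it inserts: its DELETE finds
  -- only 7 of the 8 rows, and the INSERT then re-adds the row the master
  -- removed, so master and slave no longer match.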



I don't understand the example:
Do "begin transaction" and "COMMIT" have any influence on the insert?
From what I understand, a MyISAM table does not support transactions,
so it should not care about "begin transaction" and "commit".
So the insert should be done immediately. The SELECT on the InnoDB table
also should not wait, because it's applied without "LOCK IN SHARE MODE".
So x rows are added immediately. This is done on the master, written to the log
and then replicated to the slave, which also adds x rows.
Then connection 2 deletes 8 rows, one of which is from the previous insert.
First on the master and then on the slave.
I assume that the connections are established in the order they appear here
(connection 2 is established after the insert in connection 1).
So on both, 8 rows are deleted.





You said, "This is done on the master, written in the log and then 
replicated to the slave, "


The INSERT would not appear in the Binary log until after session 1 
commits. Even if session 1 does a rollback, you would still see the 
entire transaction including the ROLLBACK. We have to do it that way to 
preserve the transaction isolation of the InnoDB data.


Yes, you read the shorthand correctly and in the correct temporal sequence.
  session1 did two commands.
  session2 issued one command.
  session1 did a commit.

It does not matter if the sessions were created in that order or not.
Only the sequence in which the commands are executed matters.






This takes us to the next point you have...

The doc says: "Due to concurrency issues, a slave can become
inconsistent when a transaction contains updates to both transactional
and nontransactional tables. MySQL tries to preserve causality among
these statements by writing nontransactional statements to the
transaction cache, which is flushed upon commit. However, problems arise
when modifications done to nontransactional tables on behalf of a
transaction become immediately visible to other connections because these
changes may not be written immediately into the binary log. [...]"

Re: need help from the list admin

2016-04-01 Thread Lentes, Bernd


- On Apr 1, 2016, at 3:12 PM, Bernd Lentes 
bernd.len...@helmholtz-muenchen.de wrote:

Btw:
I read about isolation levels. REPEATABLE READ is the default for InnoDB.
http://dev.mysql.com/doc/refman/5.5/en/glossary.html#glos_repeatable_read says:

"...so that all queries within a transaction see data from the same snapshot, 
that is, the data as it was at the time the transaction started.".

http://dev.mysql.com/doc/refman/5.5/en/glossary.html#glos_consistent_read says:

"With the repeatable read isolation level, the snapshot is based on the time 
when the first read operation is performed".

Which is true? When the transaction started, or when the first read is
performed?


Bernd

 




RE: need help from the list admin

2016-04-01 Thread Lentes, Bernd
- On Mar 25, 2016, at 9:54 PM, shawn l.green shawn.l.gr...@oracle.com wrote:


> "Unsafe" in that sense refers to the fact that certain commands can
> have a different effect when processed from the Binary Log than they did
> when they were executed originally on the system that wrote the Binary
> Log. This would be true for both a point-in-time recovery situation and
> for replication. The topic of unsafe commands is covered rather well on
> these pages:
> http://dev.mysql.com/doc/refman/5.6/en/replication-rbr-safe-unsafe.html
> http://dev.mysql.com/doc/refman/5.6/en/replication-sbr-rbr.html
> 
> This is particularly true for commands that may cross transactional
> boundaries and change non-transactional tables. The effects of those
> commands are apparent immediately to any other user of the server. They
> do not rely on the original transaction to complete with a COMMIT. The
> workaround we employed was to keep the non-transactional table locked
> (to keep others from altering it) until the transaction completes
> (COMMIT or ROLLBACK). That way we do our best to make all changes
> "permanent" at the same time.
> 

Hi,

Oh my god. The more I read, the more confused I get. I totally underrated
replication.
But I will not give up ;-) And I appreciate your help, Shawn.
What do you mean by the workaround? Does MySQL do this automatically, or does
it have to be done in the app code?
 
> You would be better served by first converting your MyISAM tables to
> InnoDB to stop mixing storage engine behaviors (transactional and
> non-transactional) within the scope of a single transaction. But if you
> cannot convert them, using MIXED will be a good compromise.

Is the mix of MyISAM and InnoDB a problem with Row-Based Logging,
with Statement-Based Logging, or with both?


> Look at this sequence and think what would happen without that "stronger
> locking" you mentioned earlier.
> 
> (connection 1)
>   begin transaction
>   INSERT myisam_table SELECT ... FROM InnoDB_table WHERE ...
>  (connection 2)
>  DELETE myisam_table WHERE ...  (this removes one of the rows that
>  connection 1 just added)
>  (end of connection 2)
> (connection 1)
>   COMMIT
> 
> When the slave sees this sequence, it will get the command from
> Connection2 first (it completed first so it winds up in the Binary Log).
> It removed 8 rows on the master but it would only find 7 on the slave.
> Why? The 8th row has not been added to the MyISAM table on the slave
> because the transaction that does it hasn't been recorded to the Binary
> Log yet.
> 
> That's where the stronger locking comes into play. If we had not
> blocked connection 2 until connection 1 completed, things would be out of
> order, temporally speaking. It's still possible for things to happen out of
> sequence on the slave when mixing transactional and non-transactional
> tables in the same transaction.
> 

I don't understand the example:
Do "begin transaction" and "COMMIT" have any influence on the insert?
From what I understand, a MyISAM table does not support transactions,
so it should not care about "begin transaction" and "commit".
So the insert should be done immediately. The SELECT on the InnoDB table
also should not wait, because it's applied without "LOCK IN SHARE MODE".
So x rows are added immediately. This is done on the master, written to the log
and then replicated to the slave, which also adds x rows.
Then connection 2 deletes 8 rows, one of which is from the previous insert.
First on the master and then on the slave.
I assume that the connections are established in the order they appear here
(connection 2 is established after the insert in connection 1).
So on both, 8 rows are deleted.



> This takes us to the next point you have...
>> The doc says: "Due to concurrency issues, a slave can become
>> inconsistent when a transaction contains updates to both transactional
>> and nontransactional tables. MySQL tries to preserve causality among
>> these statements by writing nontransactional statements to the
>> transaction cache, which is flushed upon commit. However, problems arise
>> when modifications done to nontransactional tables on behalf of a
>> transaction become immediately visible to other connections because
>> these changes may not be written immediately into the binary log.
>> Beginning with MySQL 5.5.2, the binlog_direct_non_transactional_updates
>> variable offers one possible workaround to this issue. By default, this
>> variable is disabled. Enabling binlog_direct_non_transactional_updates
>> causes updates to nontransactional tables to be written directly to the
>> binary log, rather than to the transaction cache.
>> binlo
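(For reference, this is a dynamic server variable; a minimal sketch of
enabling it, assuming MySQL 5.5.2 or later and statement-based logging:)

  SET GLOBAL binlog_direct_non_transactional_updates = 1;
  -- or only for the current session:
  SET SESSION binlog_direct_non_transactional_updates = 1;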

Re: need help from the list admin

2016-03-30 Thread shawn l.green



On 3/30/2016 1:26 PM, Lentes, Bernd wrote:

- On Mar 30, 2016, at 7:04 PM, Reindl Harald h.rei...@thelounge.net wrote:



So I should use the default (autocommit=1)?


no, you should do what is appropriate for your application

if you don't care whether inserts/updates triggered by, let's say, a
web request are half-written due to a crash or restart, use autocommit


Autocommit means that every statement is committed implicitly. Right?
COMMIT works only in conjunction with InnoDB tables and transactions. That's
what I understand.
I thought that when I make e.g. an insert into an InnoDB table, and that
insert is not completed (due to a crash, restart, whatever),
it is rolled back automatically after the restart. Is that wrong?



it depends:  If the transaction made it into the Binary Log (if it is 
enabled) and the REDO log as "committed", then InnoDB will finish the 
commit (put the actual data in its proper place in the data files) after 
recovery. If not, it will roll back and your data remains as it was.


http://dev.mysql.com/doc/refman/5.6/en/innodb-recovery.html



if you care that all or nothing is written, use transactions;
if you care that way, don't mix non-transactional tables with InnoDB


I'm planning to convert the MyISAM tables to InnoDB.



That will solve many of your data consistency problems (particularly 
those related to how things are recorded in the Binary Log), presuming 
you surround changes that involve multiple commands with transaction 
control commands.


If your sets of data changes only need one command to complete, then the 
overhead of issuing explicit START TRANSACTION and COMMIT commands is 
just going to create work you don't need for your workflow. If you need 
more than one command to make a complete and consistent update to your 
data, then use a transaction. If not, operating in autocommit mode is 
ideal.




Bernd






Re: need help from the list admin

2016-03-30 Thread Reindl Harald



On Mar 30, 2016, at 7:26 PM, Lentes, Bernd wrote:

- On Mar 30, 2016, at 7:04 PM, Reindl Harald h.rei...@thelounge.net wrote:



So I should use the default (autocommit=1)?


no, you should do what is appropriate for your application

if you don't care whether inserts/updates triggered by, let's say, a
web request are half-written due to a crash or restart, use autocommit


Autocommit means that every statement is committed implicitly. Right?
COMMIT works only in conjunction with InnoDB tables and transactions. That's
what I understand.
I thought that when I make e.g. an insert into an InnoDB table, and that
insert is not completed (due to a crash, restart, whatever),
it is rolled back automatically after the restart. Is that wrong?


transactions are not about single queries; transactions are all about
multiple queries when you want them written all-or-nothing


please do some homework and read https://en.wikipedia.org/wiki/ACID 
which is basic knowledge about databases


the crash safety of InnoDB has nothing to do with commits





RE: need help from the list admin

2016-03-30 Thread Lentes, Bernd
- On Mar 30, 2016, at 7:04 PM, Reindl Harald h.rei...@thelounge.net wrote:


>> So I should use the default (autocommit=1)?
> 
> no, you should do what is appropriate for your application
> 
> if you don't care whether inserts/updates triggered by, let's say, a
> web request are half-written due to a crash or restart, use autocommit

Autocommit means that every statement is committed implicitly. Right?
COMMIT works only in conjunction with InnoDB tables and transactions. That's
what I understand.
I thought that when I make e.g. an insert into an InnoDB table, and that
insert is not completed (due to a crash, restart, whatever),
it is rolled back automatically after the restart. Is that wrong?

> 
> if you care that all or nothing is written, use transactions;
> if you care that way, don't mix non-transactional tables with InnoDB

I'm planning to convert the MyISAM tables to InnoDB.
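(For reference, the conversion itself is one ALTER per table; a minimal
sketch, with hypothetical schema and table names:)

  -- Find the tables that still use MyISAM:
  SELECT table_name FROM information_schema.tables
   WHERE table_schema = 'webapp' AND engine = 'MyISAM';

  -- Convert one table (this rebuilds it, so it blocks writes for a while):
  ALTER TABLE webapp.some_table ENGINE=InnoDB;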


Bernd
 




Re: need help from the list admin

2016-03-30 Thread Reindl Harald


On Mar 30, 2016, at 6:56 PM, Lentes, Bernd wrote:

- On Mar 28, 2016, at 9:53 PM, shawn l.green shawn.l.gr...@oracle.com wrote:

I read that the conversion is not difficult. But does the code of our webapp
have to be changed? It's written in PHP and Perl.
What I understand is that inserts/updates/deletes on InnoDB tables have to be
committed. Yes?


No. The server's default is to have --autocommit=1, which means that
there is an implicit commit at the end of every command. You do not need
to state explicitly "COMMIT" every time you want this to happen.

In fact, disabling autocommit has gotten many new users into trouble
because they did not understand the behavior they changed.


So I should use the default (autocommit=1)?


no, you should do what is appropriate for your application

if you don't care whether inserts/updates triggered by, let's say, a
web request are half-written due to a crash or restart, use autocommit


if you care that all or nothing is written, use transactions;
if you care that way, don't mix non-transactional tables with InnoDB





Re: need help from the list admin

2016-03-30 Thread Lentes, Bernd

- On Mar 28, 2016, at 9:53 PM, shawn l.green shawn.l.gr...@oracle.com wrote:

>>
>> I read that the conversion is not difficult. But does the code of our
>> webapp have to be changed? It's written in PHP and Perl.
>> What I understand is that inserts/updates/deletes on InnoDB tables have
>> to be committed. Yes?
> 
> No. The server's default is to have --autocommit=1, which means that
> there is an implicit commit at the end of every command. You do not need
> to state explicitly "COMMIT" every time you want this to happen.
> 
> In fact, disabling autocommit has gotten many new users into trouble
> because they did not understand the behavior they changed.

So I should use the default (autocommit=1)?

 
> Here is a reference from the 5.0 manual to illustrate that this behavior
> has been around for a long time:
> http://dev.mysql.com/doc/refman/5.0/en/sql-syntax-transactions.html

 
Bernd
 




Re: need help from the list admin

2016-03-28 Thread Reindl Harald



On Mar 28, 2016, at 9:36 PM, Lentes, Bernd wrote:

- On Mar 27, 2016, at 2:49 PM, Reindl Harald h.rei...@thelounge.net wrote:


On Mar 27, 2016, at 2:34 PM, Lentes, Bernd wrote:

You would be better served by first converting your MyISAM tables to
InnoDB to stop mixing storage engine behaviors (transactional and
non-transactional) within the scope of a single transaction. But if you
cannot convert them, using MIXED will be a good compromise.


Is this a big problem? Something to take care of? Currently we have a mix.
I will ask the girl who developed it why we have both kinds. I hope I can
convert them


surely - when you have non-transactional tables involved in
updates/inserts you can go and forget using transactions at all, since
an interruption or rollback would not roll back already-written changes in
MyISAM tables

transactions are all about consistency - impossible with a mix of InnoDB
and MyISAM tables


I read that the conversion is not difficult. But does the code of our webapp
have to be changed? It's written in PHP and Perl.
What I understand is that inserts/updates/deletes on InnoDB tables have to be
committed. Yes?


first, please stop placing a space before "?" - it hurts :-)

NO - it only has to be committed if you START A TRANSACTION at the beginning


Does this have to be done in the code? Or can we use the system variable autocommit?


if you are using autocommit you lose any purpose of having transactions


That means that everything is committed immediately? Is this a good solution?


if you don't care about consistency, yes


What does "By default, client connections begin with autocommit set to 1" in
the doc mean?


that nobody starts a transaction automatically, because nobody can
tell when it is finished automatically, and hence you need to tell the
db server



That every client connection established via perl/php is started with
autocommit=1?


surely


And when does the commit happen? When the connection is closed? Is that
helpful?


depends - maybe you should start reading up on what transactions in the
database world are, because then you can answer all of your questions
above on your own






Re: need help from the list admin

2016-03-28 Thread shawn l.green

Hello Bernd,

On 3/28/2016 3:36 PM, Lentes, Bernd wrote:



- On Mar 27, 2016, at 2:49 PM, Reindl Harald h.rei...@thelounge.net wrote:


On Mar 27, 2016, at 2:34 PM, Lentes, Bernd wrote:

You would be better served by first converting your MyISAM tables to
InnoDB to stop mixing storage engine behaviors (transactional and
non-transactional) within the scope of a single transaction. But if you
cannot convert them, using MIXED will be a good compromise.


Is this a big problem? Something to take care of? Currently we have a mix.
I will ask the girl who developed it why we have both kinds. I hope I can
convert them


surely - when you have non-transactional tables involved in
updates/inserts you can go and forget using transactions at all, since
an interruption or rollback would not roll back already-written changes in
MyISAM tables

transactions are all about consistency - impossible with a mix of InnoDB
and MyISAM tables


I read that the conversion is not difficult. But does the code of our webapp
have to be changed? It's written in PHP and Perl.
What I understand is that inserts/updates/deletes on InnoDB tables have to be
committed. Yes?


No. The server's default is to have --autocommit=1, which means that 
there is an implicit commit at the end of every command. You do not need 
to state explicitly "COMMIT" every time you want this to happen.


In fact, disabling autocommit has gotten many new users into trouble 
because they did not understand the behavior they changed.



Does this have to be done in the code? Or can we use the system variable autocommit?


You should not need to change anything.


That means that everything is committed immediately? Is this a good solution?


It is going to behave better than the data you have now. The changes to 
the tables you will convert from MyISAM to InnoDB will not become 
visible to other sessions until after the COMMIT (implicit or explicit) 
completes. For finer-grained control over data visibility, you need to 
understand the broader topic of transaction isolation.



What does "By default, client connections begin with autocommit set to 1" in
the doc mean?


It means that every command is already running in its own private 
mini-transaction. To start a multi-statement transaction you do not need 
to disable autocommit, you simply need to use the START TRANSACTION 
command.


Here is a reference from the 5.0 manual to illustrate that this behavior 
has been around for a long time:

http://dev.mysql.com/doc/refman/5.0/en/sql-syntax-transactions.html


That every client connection established via perl/php is started with
autocommit=1?


It is as long as:

1) the global variable autocommit=1
2) the client does nothing to change its own session variable to 
autocommit=0



And when does the commit happen? When the connection is closed? Is that
helpful?



The commit happens at the end of each command. If you need to contain 
multiple commands within a single transaction, use START TRANSACTION and 
COMMIT.
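(A minimal sketch of the difference; table and column names are
hypothetical:)

  -- With autocommit=1 (the default), each statement commits on its own:
  UPDATE accounts SET balance = balance - 100 WHERE id = 1;  -- committed here

  -- A multi-statement transaction commits both changes together:
  START TRANSACTION;
  UPDATE accounts SET balance = balance - 100 WHERE id = 1;
  UPDATE accounts SET balance = balance + 100 WHERE id = 2;
  COMMIT;  -- neither change was visible to other sessions before this point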





Bernd







Re: need help from the list admin

2016-03-28 Thread Lentes, Bernd


- On Mar 27, 2016, at 2:49 PM, Reindl Harald h.rei...@thelounge.net wrote:

> On Mar 27, 2016, at 2:34 PM, Lentes, Bernd wrote:
>>> You would be better served by first converting your MyISAM tables to
>>> InnoDB to stop mixing storage engine behaviors (transactional and
>>> non-transactional) within the scope of a single transaction. But if you
>>> cannot convert them, using MIXED will be a good compromise.
>>
>> Is this a big problem? Something to take care of? Currently we have a mix.
>> I will ask the girl who developed it why we have both kinds. I hope I can
>> convert them
> 
> surely - when you have non-transactional tables involved in
> updates/inserts you can go and forget using transactions at all since
> an interruption or rollback would not roll back already-written changes in
> MyISAM tables
> 
> transactions are all about consistency - impossible with a mix of InnoDB
> and MyISAM tables

I read that the conversion is not difficult. But does the code of our webapp
have to be changed? It's written in PHP and Perl.
What I understand is that inserts/updates/deletes on InnoDB tables have to be
committed. Yes?
Does this have to be done in the code? Or can we use the system variable
autocommit?
That means that everything is committed immediately? Is this a good solution?
What does "By default, client connections begin with autocommit set to 1" in
the doc mean?
That every client connection established via perl/php is started with
autocommit=1?
And when does the commit happen? When the connection is closed? Is that
helpful?

Bernd

 




Re: need help from the list admin

2016-03-27 Thread Reindl Harald



On Mar 27, 2016, at 2:34 PM, Lentes, Bernd wrote:

You would be better served by first converting your MyISAM tables to
InnoDB to stop mixing storage engine behaviors (transactional and
non-transactional) within the scope of a single transaction. But if you
cannot convert them, using MIXED will be a good compromise.


Is this a big problem? Something to take care of? Currently we have a mix.
I will ask the girl who developed it why we have both kinds. I hope I can
convert them


surely - when you have non-transactional tables involved in 
updates/inserts you can go and forget using transactions at all since 
an interruption or rollback would not roll back already-written changes in
MyISAM tables


transactions are all about consistency - impossible with a mix of InnoDB 
and MyISAM tables






Re: need help from the list admin

2016-03-27 Thread Lentes, Bernd


- On Mar 25, 2016, at 9:54 PM, shawn l.green shawn.l.gr...@oracle.com wrote:

> Hello Bernd,
> 
> Sorry for the delay, I wanted to make sure I had enough time to address
> all of your points.


>> He proposed to have two hosts, and on each is running a MySQL instance
>> as master AND slave. But it's not a "real multi master solution",
>> because pacemaker takes care that the IP for the web app just points to
>> one master. So I don't have the multi-master problems with concurrent
>> inserts (I believe).
> 
> This is wise advice. We (MySQL Support) often recommend exactly the same
> setup:  a master + one(or more) slave(s) using replication to keep the
> slaves in relative sync. I say "relative" because replication is
> asynchronous.
> 
> All writes are directed at the master. Clients that can tolerate the
> natural lag of the replication system can use any available slave for
> read-only queries.
> 

Is semi-synchronous replication a good idea? I think we have just several
hundred inserts per day, so I believe the lag should not be a problem.
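(If you try it, semi-synchronous replication in MySQL 5.5 is a plugin; a
minimal sketch of enabling it, to be adapted to your setup:)

  -- On the master:
  INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
  SET GLOBAL rpl_semi_sync_master_enabled = 1;
  -- On the slave:
  INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
  SET GLOBAL rpl_semi_sync_slave_enabled = 1;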

>> His idea is that host A is master for the slave on host B, and host B is
>> the master for the slave on host A. OK?
>> Let's imagine that the IP to the web app points to host A, inserts are
>> done to the master on host A and replicated to the slave on host B. Now
>> host A has problems, pacemaker redirects the IP to host B, and
>> everything should be fine.
>> What do you think about this setup? Where is the advantage over a
>> "classical Master/Slave Replication"? How should I configure
>> log-slave-updates in this scenario?
> 
> We have a page on that in the manual (with a diagram):
> http://dev.mysql.com/doc/refman/5.6/en/replication-solutions-switch.html
> 

I will read that.

> 
>> Let's imagine I have two hosts again: host A is master, host B is slave.
>> Nothing else. No real or pseudo "Multi-Master". The IP points to host A.
>> Host A has problems, pacemaker recognizes it, promotes B to master and
>> pivots the IP. Everything should be fine. Where is the disadvantage of
>> this setup compared to the "Multi-Master Replication" in the book? The
>> OCF resource agent for mysql should be able to handle the mysql stuff
>> and the RA for the IP pivots the IP.
>>
> 
> Remember to wait for the slave to catch up to the master it lost contact
> with. That way its data is as current as possible. Then redirect your
> clients to the new read-write node in your replication topology.
> 

What if the slave is behind and the master is gone? Then it has no way
either to get up to date or to catch up.


>>
>> The doc says: "For tables using the MYISAM storage engine, a stronger
>> lock is required on the slave for INSERT statements when applying them
>> as row-based events to the binary log than when applying them as
>> statements. This means that concurrent inserts on MyISAM tables are not
>> supported when using row-based replication."
>> What does this mean exactly? Are concurrent inserts into MyISAM tables
>> not possible when using RBL? Or are they unsafe in the sense that they
>> create inconsistencies?
>>
> 
> "Unsafe" in that sense refers to the fact that certain commands can
> have a different effect when processed from the Binary Log than they did
> when they were executed originally on the system that wrote the Binary
> Log. This would be true for both a point-in-time recovery situation and
> for replication. The topic of unsafe commands is covered rather well on
> these pages:
> http://dev.mysql.com/doc/refman/5.6/en/replication-rbr-safe-unsafe.html
> http://dev.mysql.com/doc/refman/5.6/en/replication-sbr-rbr.html

I will read that.
> 
> This is particularly true for commands that may cross transactional
> boundaries and change non-transactional tables. The effects of those
> commands are apparent immediately to any other user of the server. They
> do not rely on the original transaction to complete with a COMMIT. The
> workaround we employed was to keep the non-transactional table locked
> (to keep others from altering it) until the transaction completes
> (COMMIT or ROLLBACK). That way we do our best to make all changes
> "permanent" at the same time.
> 
> 
>> "RBL (Row Based Logging) and synchronization of nontransactional tables.
>> When many rows are affected, the set of changes is split into several
>> events; when the statement commits, all of these events are written to
>> the binary log. When executing on the slave, a table lock is taken on
>> all tables involved, and then
>> the rows are applied in batch mode. (This may or may not be effective,
>> depending on the engine used for the slave's copy of the table.)"
>> What does that mean? Effective? Is it creating inconsistencies? Or
>> just not effective in the sense of slow or inconvenient?
>>
>> Or should I prefer MIXED for binlog_format?
>>
> 
> You would be better served by first converting your MyISAM tables to
> InnoDB to stop mixing storage engine behaviors (transactional and
> non-transactional) within the scope of a single transaction. But if you
> cannot convert them, using MIXED will be a good compromise.

Re: need help from the list admin

2016-03-25 Thread shawn l.green

Hello Bernd,

Sorry for the delay, I wanted to make sure I had enough time to address 
all of your points.


On 3/22/2016 7:07 AM, william drescher wrote:

sent for Bernd, and to see if it works from another sender
--
  Lentes, Bernd wrote:
Hi,

I know that there is a list dedicated to replication, but when you have
a look in the archive it's nearly completely empty. Really not busy.
So I hope it's OK if I ask here.
We have a web app which runs a MySQL DB and dynamic webpages with Perl
and Apache httpd. The webpages serve reading and writing into the db. The db
is important for our own workflow, so I'd like to make it HA. I have
two HP servers and will use SLES 11 SP4 64bit as OS. MySQL is 5.5.47.
For HA I'd like to use pacemaker, which is available in the SLES High
Availability Extension. I have experience in Linux, but I'm not a
database administrator or developer. HA is important for us; we don't
have performance problems.
My first idea was to run the web app and the db in a virtual machine on
the host, and in case of a failure of one host pacemaker would run the VM
on the other host. The VM would be stored on a FC SAN. I stopped following
this idea. I have bought a book about HA: "..." from Oliver Liebel. It's
only available in German, but I can recommend it: it's very detailed and
well explained.
He proposed to have two hosts, with a MySQL instance running on each
as master AND slave. But it's not a "real multi-master solution",
because pacemaker takes care that the IP for the web app just points to
one master. So I don't have the multi-master problems with concurrent
inserts (I believe).


This is wise advice. We (MySQL Support) often recommend exactly the same 
setup:  a master + one(or more) slave(s) using replication to keep the 
slaves in relative sync. I say "relative" because replication is 
asynchronous.


All writes are directed at the master. Clients that can tolerate the 
natural lag of the replication system can use any available slave for 
read-only queries.



His idea is that host A is master for the slave on host B, and host B is
the master for the slave on host A. OK?
Let's imagine that the IP to the web app points to host A, inserts are
done to the master on host A and replicated to the slave on host B. Now
host A has problems, pacemaker redirects the IP to host B, and
everything should be fine.
What do you think about this setup? Where is the advantage over a
"classical Master/Slave Replication"? How should I configure
log-slave-updates in this scenario?


We have a page on that in the manual (with a diagram):
http://dev.mysql.com/doc/refman/5.6/en/replication-solutions-switch.html



Let's imagine I have two hosts again: host A is master, host B is slave.
Nothing else. No real or pseudo "Multi-Master". The IP points to host A.
Host A has problems, pacemaker recognizes it, promotes B to master and
pivots the IP. Everything should be fine. Where is the disadvantage of
this setup compared to the "Multi-Master Replication" in the book? The
OCF resource agent for mysql should be able to handle the mysql stuff
and the RA for the IP pivots the IP.



Remember to wait for the slave to catch up to the master it lost contact 
with. That way its data is as current as possible. Then redirect your 
clients to the new read-write node in your replication topology.
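(A minimal way to check that before redirecting clients; run on the
surviving slave:)

  SHOW SLAVE STATUS\G
  -- Promote only once Slave_SQL_Running = Yes and the slave has applied
  -- everything it received, e.g. Seconds_Behind_Master = 0 and
  -- Exec_Master_Log_Pos has caught up with Read_Master_Log_Pos.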




Now some dedicated questions about replication. I read a lot in the
official documentation, but some things are not clear to me.
In our db we have MyISAM and InnoDB tables.

From what I read I'd prefer row-based replication. The doc says it is the
safest approach. But there still seem to be some problems:

The doc says: "For tables using the MYISAM storage engine, a stronger
lock is required on the slave for INSERT statements when applying them
as row-based events to the binary log than when applying them as
statements. This means that concurrent inserts on MyISAM tables are not
supported when using row-based replication."
What does this mean exactly? Are concurrent inserts into MyISAM tables
not possible when using RBL? Or are they unsafe in the sense that they
create inconsistencies?



"Unsafe" in that sense refers to the fact that certain commands can
have a different effect when processed from the Binary Log than they did
when they were executed originally on the system that wrote the Binary
Log. This would be true for both a point-in-time recovery situation and
for replication. The topic of unsafe commands is covered rather well on
these pages:

http://dev.mysql.com/doc/refman/5.6/en/replication-rbr-safe-unsafe.html
http://dev.mysql.com/doc/refman/5.6/en/replication-sbr-rbr.html

This is particularly true for commands that may cross transactional
boundaries and change non-transactional tables. The effects of those
commands are apparent immediately to any other user of the server. They
do not rely on the original transaction to complete with a COMMIT. The
workaround we employed was to keep the non-transactional table locked
(to keep others from altering it) until the transaction completes
(COMMIT or ROLLBACK).

Re: need help from the list admin

2016-03-23 Thread Lentes, Bernd


- On Mar 23, 2016, at 11:11 AM, william drescher will...@techservsys.com 
wrote:

>>
>> Hi William,
>>
>> thanks for the try. Good idea!
>> Did you change anything?
>>
>>
>> Bernd
> 
> Yes, in the original document there were some characters that
> were put on the screen as Asian pictograph characters. I
> replaced them with periods:
> I have bought a book
> about HA: "..." from Oliver Liebel
> 
> And I found the same characters in your sig and removed them.
> 
> bill
> 
> 
> 


Hi Bill,

Thanks. I will now try to ask again.


Bernd
 




Re: need help from the list admin

2016-03-23 Thread william drescher

On 3/22/2016 7:49 AM, Lentes, Bernd wrote:



- On Mar 22, 2016, at 12:07 PM, william drescher will...@techservsys.com 
wrote:


sent for Bernd, and to see if it works from another sender
--
  Lentes, Bernd wrote:
Hi,

I know that there is a list dedicated to replication, but when
you have a look in the archive it's nearly completely empty. Really
not busy.
So I hope it's OK if I ask here.
We have a web app which runs a MySQL DB and dynamic webpages with
Perl and Apache httpd. The webpages serve reading and writing into
the db. The db is important for our own workflow, so I'd like to
make it HA. I have two HP servers and will use SLES 11 SP4 64bit
as OS. MySQL is 5.5.47. For HA I'd like to use pacemaker, which
is available in the SLES High Availability Extension. I have
experience in Linux, but I'm not a database administrator or
developer. HA is important for us; we don't have performance
problems.
My first idea was to run the web app and the db in a virtual
machine on the host, and in case of a failure of one host
pacemaker would run the VM on the other host. The VM would be stored
on a FC SAN. I stopped following this idea. I have bought a book
about HA: "..." from Oliver Liebel. It's only available in
German, but I can recommend it: it's very detailed and well
explained.
He proposed to have two hosts, with a MySQL instance running on
each as master AND slave. But it's not a "real multi-master
solution", because pacemaker takes care that the IP for the web
app just points to one master. So I don't have the multi-master
problems with concurrent inserts (I believe).
His idea is that host A is master for the slave on host B, and
host B is the master for the slave on host A. OK?
Let's imagine that the IP to the web app points to host A,
inserts are done to the master on host A and replicated to the
slave on host B. Now host A has problems, pacemaker redirects the
IP to host B, and everything should be fine.
What do you think about this setup? Where is the advantage over a
"classical Master/Slave Replication"? How should I configure
log-slave-updates in this scenario?
Let's imagine I have two hosts again: host A is master, host B is
slave. Nothing else. No real or pseudo "Multi-Master". The IP points
to host A. Host A has problems, pacemaker recognizes it, promotes
B to master and pivots the IP. Everything should be fine. Where is
the disadvantage of this setup compared to the "Multi-Master
Replication" in the book? The OCF resource agent for mysql
should be able to handle the mysql stuff and the RA for the IP
pivots the IP.

Now some dedicated questions about replication. I read a lot in the
official documentation, but some things are not clear to me.
In our db we have MyISAM and InnoDB tables.

From what I read I'd prefer row-based replication. The doc says
it is the safest approach. But there still seem to be some problems:

The doc says: "For tables using the MYISAM storage engine, a
stronger lock is required on the slave for INSERT statements when
applying them as row-based events to the binary log than when
applying them as statements. This means that concurrent inserts
on MyISAM tables are not supported when using row-based
replication."
What does this mean exactly? Are concurrent inserts into MyISAM
tables not possible when using RBL? Or are they unsafe in the
sense that they create inconsistencies?

"RBL (Row Based Logging) and synchronization of nontransactional
tables. When many rows are affected, the set of changes is split
into several events; when the statement commits, all of these
events are written to the binary log. When executing on the
slave, a table lock is taken on all tables involved, and then
the rows are applied in batch mode. (This may or may not be
effective, depending on the engine used for the slave's copy of
the table.)"
What does that mean? Effective? Is it creating inconsistencies?
Or just not effective in the sense of slow or inconvenient?

Or should I prefer MIXED for binlog_format?

The doc says: " If a statement is logged by row and the session
that executed the statement has any temporary tables, logging by
row is used for all subsequent statements (except for those
accessing temporary tables) until all temporary tables in use by
that session are dropped.
This is true whether or not any temporary tables are actually
logged. Temporary tables cannot be logged using row-based format;
thus, once row-based logging is used, all subsequent statements
using that table are unsafe. The server approximates this
condition by treating all statements executed during the session
as unsafe until the session no longer holds any temporary tables."
What does that mean ? Unsafe ? Causing inconsistencies ? Problem
with SBL or RBL ?
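
(A hypothetical session under binlog_format = MIXED that illustrates the
window the doc describes; jobs and tmp_batch are placeholder tables:)

CREATE TEMPORARY TABLE tmp_batch (id INT);
UPDATE jobs SET token = UUID() WHERE id = 1;  -- nondeterministic, so it is
                                              -- logged by row; from here on
                                              -- the session counts as unsafe
DROP TEMPORARY TABLE tmp_batch;               -- ... until this point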

The doc says: "Due to concurrency issues, a slave can become
inconsistent when a transaction contains updates to both
transactional and nontransactional tables. MySQL tries to
preserve causality among these statements by writing

Re: need help from the list admin

2016-03-22 Thread william drescher

sent for Bernd, and to see if it works from another sender
--
 Lentes, Bernd wrote:
Hi,

i know that there is a list dedicated to replication, but when 
you have a look in the archive it's nearly completely empty. Really 
not busy.

So i hope it's ok if i ask here.
we have a web app which runs a MySQL DB and dynamic webpages with 
perl and apache httpd. Webpages serve for reading from and writing to 
the db. The db is important for our own work flow, so i'd like to 
make it HA. I have two HP servers and will use SLES 11 SP4 64bit 
as OS. MySQL is 5.5.47. For HA i'd like to use pacemaker, which 
is available in the SLES High Availability Extension. I have 
experience in linux, but i'm neither a database administrator nor a 
developer. HA is important for us, we don't have performance 
problems.
My first idea was to run the web app and the db in a virtual 
machine on the host; in case of a failure of one host, 
pacemaker would run the vm on the other host. The VM would be stored 
on an FC SAN. I stopped pursuing this idea. I bought a book 
about HA: "..." by Oliver Liebel. It's only available in 
German, but i can recommend it; it's very detailed and well 
explained.
He proposed having two hosts, each running a MySQL 
instance as master AND slave. But it's not a "real multi-master 
solution", because pacemaker takes care that the IP for the web 
app always points to just one master. So i don't get the multi-master 
problems with concurrent inserts (i believe).
His idea is that host A is the master for the slave on host B, and 
host B is the master for the slave on host A. OK ?
Let's imagine that the IP to the web app points to host A, 
inserts are done on the master on host A and replicated to the 
slave on host B. Now host A has problems, pacemaker redirects the 
IP to host B, and everything should be fine.
What do you think about this setup ? Where is the advantage over a 
"classical Master/Slave Replication" ? How should i configure 
log-slave-updates in this scenario ?
Let's imagine i have two hosts again: Host A is master, host B is 
slave. Nothing else. No real or pseudo "Multi-Master". IP points 
to host A. Host A has problems, pacemaker recognizes it, promotes 
B to master and pivots the IP. Everything should be fine. Where is 
the disadvantage of this setup compared to the "Multi-Master 
Replication" in the book ? The OCF ressource agent for mysql 
should be able to handle the mysql stuff and the RA for the IP 
pivots the IP.


Now some dedicated questions to replication. I read a lot in the 
official documentation, but some things are not clear to me.

In our db we have MyISAM and InnoDB tables.

From what i read i'd prefer row based replication. The doc says 
it is the safest approach. But there still seem to be some problems:


The doc says: "For tables using the MYISAM storage engine, a 
stronger lock is required on the slave for INSERT statements when 
applying them as row-based events to the binary log than when 
applying them as statements. This means that concurrent inserts 
on MyISAM tables are not supported when using row-based 
replication."
What exactly does this mean ? Concurrent inserts in MyISAM tables 
are not possible when using RBL ? Or unsafe in the sense that they 
create inconsistencies ?


"RBL (Row Based Logging) and synchronization of nontransactional 
tables. When many rows are affected, the set of changes is split 
into several events; when the statement commits, all of these 
events are written to the binary log. When executing on the 
slave, a table lock is taken on all tables involved, and then
the rows are applied in batch mode. (This may or may not be 
effective, depending on the engine used for the slave's copy of 
the table.)"
What does that mean ? Effective ? Is it creating inconsistencies ? 
Or just not effective in the sense of slow or inconvenient ?


Or should i prefer MIXED for binlog_format ?

The doc says: " If a statement is logged by row and the session 
that executed the statement has any temporary tables, logging by 
row is used for all subsequent statements (except for those 
accessing temporary tables) until all temporary tables in use by 
that session are dropped.
This is true whether or not any temporary tables are actually 
logged. Temporary tables cannot be logged using row-based format; 
thus, once row-based logging is used, all subsequent statements 
using that table are unsafe. The server approximates this 
condition by treating all statements executed during the session 
as unsafe until the session no longer holds any temporary tables."
What does that mean ? Unsafe ? Causing inconsistencies ? Problem 
with SBL or RBL ?


The doc says: "Due to concurrency issues, a slave can become 
inconsistent when a transaction contains updates to both 
transactional and nontransactional tables. MySQL tries to 
preserve causality among these statements by writing 
nontransactional statements to the transaction cache, which is 
flushed 

Re: Conditional ORDER BY Clause HELP

2016-03-21 Thread Halász Sándor

2016/03/18 12:54 ... Don Wieland:

Trying to get the correct syntax on this:

ORDER BY
CASE
WHEN tr.Placed = "X" THEN r.Division ASC, 
FIELD(tr.Place,"1","2","3","4","5","6","7","8","R","WD","Exc","E","S"), tr.Score DESC
WHEN tr.Placed != "X" THEN tr.ride_time ASC
END


How does one deal with a CONDITION like this?

That certainly is quite wrong. It is
ORDER BY f1 [ASC/DESC], f2 [ASC/DESC], ...
where each "f" may be a formula. This is valid:

ORDER BY
CASE
WHEN tr.Placed = "X" THEN r.Division
ELSE tr.ride_time
END ASC, 
FIELD(tr.Place,"1","2","3","4","5","6","7","8","R","WD","Exc","E","S"), 
tr.Score DESC


but maybe you don't want that.
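
(If the intent really is a different sort recipe per branch, one common
workaround, sketched here untested, is one CASE key per criterion; rows of
the other branch get NULL from each key and simply sort together:)

ORDER BY
CASE WHEN tr.Placed = "X" THEN r.Division END ASC,
CASE WHEN tr.Placed = "X" THEN
FIELD(tr.Place,"1","2","3","4","5","6","7","8","R","WD","Exc","E","S") END ASC,
CASE WHEN tr.Placed = "X" THEN tr.Score END DESC,
CASE WHEN tr.Placed != "X" THEN tr.ride_time END ASC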




Re: need help from the list admin

2016-03-21 Thread Lentes, Bernd


- On Mar 19, 2016, at 3:28 PM, Reindl Harald h.rei...@thelounge.net wrote:

> Am 19.03.2016 um 15:23 schrieb Reindl Harald:
>>
>>
>> Am 19.03.2016 um 15:17 schrieb Lentes, Bernd:
>>> one further question:
>>> if some of my e-mails get through (like this one) and others don't, it
>>> does not depend on the reputation of our domain or mailserver ? Right ?
>>> So the reason has to be that particular e-Mail?
>>
>> both
>>
>> a spamfilter is typically score based and combines a ton of rules, some
>> add points, some remove points and the decision is made of the summary
>>
>> when you have a bad server reputation you start with a penalty, some
>> other rules hitting and a not well trained bayes makes the rest
>>
>> "How do i have to provide the ip" in case of RBLs?
>> https://en.wikipedia.org/wiki/Reverse_DNS_lookup
> 
> and that your domain doesn't even provide a "~all" SPF policy, if you can't
> or don't want a strict "-all", makes things no better; typically a
> SPF_PASS gives benefits in spamfilter scorings
> 
> Received-SPF: none (helmholtz-muenchen.de: No applicable sender policy
>  available) receiver=amysql-list-wsv01.oracle.com; identity=mailfrom;
>  envelope-from="bernd.len...@helmholtz-muenchen.de";
>  helo=mtaextp1.scidom.de; client-ip=146.107.103.20
> 

OK guys. I asked our computing center to provide an SPF resource record for our 
outgoing mta in the DNS and to take 
care that the ip of our outgoing mta appears on https://www.dnswl.org/ (our 
domain is listed already). I hope they will.
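
(For illustration, a minimal sketch of such a record, assuming 146.107.103.20 
is our only outgoing mta; the actual policy is of course up to the computing 
center:)

helmholtz-muenchen.de.  IN  TXT  "v=spf1 ip4:146.107.103.20 ~all"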

The score of our mta has already risen:


sunhb58820:~ # nslookup 20.103.107.146.score.senderscore.com.
Server: 146.107.1.88
Address: 146.107.1.88#53

Non-authoritative answer:
Name:   20.103.107.146.score.senderscore.com
Address: 127.0.4.76

76 isn't bad.

But nevertheless it must also have something to do with the e-mail itself. I sent it from 
gmx.de and web.de, and both were declined.
My other mails (like this one) arrive.
I already shrank the mail, but that did not help either.
You can have a look at the two mails i tried: 
https://hmgubox.helmholtz-muenchen.de:8001/d/dc1ec4eb38/

I'm thankful for any hint about what else i can do, also regarding the mail itself.


Bernd


 






need help from the list admin

2016-03-20 Thread Lentes, Bernd
Dear list admin,

i need your help. For a week already i have been trying to send an e-Mail to the 
list. I always get it back because it's classified as spam.
The mail is formatted as plain text and includes neither links nor attachments. I 
don't understand why it's classified as spam. Neither our domain nor the ip of 
our outgoing mailserver currently appears on a blacklist, as far as i can see. 
Harald Reindl, a member of the list, already tried to help me, but even he 
couldn't find out why it is rejected.
Can you tell me why it's classified as spam and what i can do so that the mail is 
delivered correctly ?
The mail has the subject "Replication and HA - some basic questions".
I already wrote two mails to "list-ad...@mysql.com" but didn't get an answer.

Thanks.


Bernd

-- 
Bernd Lentes 

Systemadministration 
institute of developmental genetics 
Gebäude 35.34 - Raum 208 
HelmholtzZentrum München 
bernd.len...@helmholtz-muenchen.de 
phone: +49 (0)89 3187 1241 
fax: +49 (0)89 3187 2294 

Wer Visionen hat soll zum Hausarzt gehen 
Helmut Schmidt







Conditional ORDER BY Clause HELP

2016-03-19 Thread Don Wieland
Hi gang,

Trying to get the correct syntax on this:

ORDER BY 
CASE 
WHEN tr.Placed = "X" THEN r.Division ASC, 
FIELD(tr.Place,"1","2","3","4","5","6","7","8","R","WD","Exc","E","S"), 
tr.Score DESC 
WHEN tr.Placed != "X" THEN tr.ride_time ASC 
END


How does one deal with a CONDITION like this?

Thanks in advance for any assistance

Don


Re: need help from the list admin

2016-03-19 Thread Reindl Harald


Am 18.03.2016 um 14:56 schrieb Chris Knipe:

Blah blah blah...

Delivery to the following recipient failed permanently:

  mysql@lists.mysql.com

Technical details of permanent failure:
Your message was rejected by the server for the recipient domain
lists.mysql.com by lists-mx.mysql.com. [137.254.60.71].

The error that the other server returned was:
550 Currently Sending Spam See:
http://www.sorbs.net/lookup.shtml?5.200.22.158

Show me one site, where that IP is, or WAS ever blacklisted?


on sorbs, as you quoted yourself; only idiots block outright on 
"spam.dnsbl.sorbs.net" with response 127.0.0.6, while RBL scoring is a 
different story


51.192.85.209.in-addr.arpa name = mail-qg0-f51.google.com

Name:   51.192.85.209.spam.dnsbl.sorbs.net
Address: 127.0.0.6

Received-SPF: pass (savage.za.org: Sender is authorized to use
 'ckn...@savage.za.org' in 'mfrom' identity (mechanism
 'include:_spf.google.com' matched)) receiver=amysql-list-wsv01.oracle.com;
 identity=mailfrom; envelope-from="ckn...@savage.za.org";
 helo=mail-qg0-f51.google.com; client-ip=209.85.192.51






Re: need help from the list admin

2016-03-19 Thread Chris Knipe
On Fri, Mar 18, 2016 at 3:43 PM, Lentes, Bernd <
bernd.len...@helmholtz-muenchen.de> wrote:

> i need your help. I'm trying to write an e-Mail to the list for already
> one week. I always get it back because it's classified as spam.
>


Ditto.  I've pretty much given up on this list...


Re: need help from the list admin

2016-03-19 Thread Reindl Harald



Am 19.03.2016 um 15:23 schrieb Reindl Harald:



Am 19.03.2016 um 15:17 schrieb Lentes, Bernd:

one further question:
if some of my e-mails get through (like this one) and others don't, it
does not depend on the reputation of our domain or mailserver ? Right ?
So the reason has to be that particular e-Mail?


both

a spamfilter is typically score based and combines a ton of rules, some
add points, some remove points and the decision is made of the summary

when you have a bad server reputation you start with a penalty, some
other rules hitting and a not well trained bayes makes the rest

"How do i have to provide the ip" in case of RBLs?
https://en.wikipedia.org/wiki/Reverse_DNS_lookup


and that your domain doesn't even provide a "~all" SPF policy, if you can't 
or don't want a strict "-all", makes things no better; typically a 
SPF_PASS gives benefits in spamfilter scorings


Received-SPF: none (helmholtz-muenchen.de: No applicable sender policy
 available) receiver=amysql-list-wsv01.oracle.com; identity=mailfrom;
 envelope-from="bernd.len...@helmholtz-muenchen.de";
 helo=mtaextp1.scidom.de; client-ip=146.107.103.20

[harry@srv-rhsoft:~]$ dig TXT helmholtz-muenchen.de @8.8.8.8
; <<>> DiG 9.10.3-P4-RedHat-9.10.3-12.P4.fc23 <<>> TXT 
helmholtz-muenchen.de @8.8.8.8

;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 25126
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

[harry@srv-rhsoft:~]$ dig TXT thelounge.net @8.8.8.8
;; ANSWER SECTION:
thelounge.net.  21599   IN  TXT 
"google-site-verification=XQcET0ij0uOdn8AvlL82t8FoGTthvfPKWRfNjSNTfaM"
thelounge.net.  21599   IN  TXT "v=spf1 a 
ip4:91.118.73.0/24 ip4:95.129.202.170 -all"






Re: need help from the list admin

2016-03-19 Thread Reindl Harald



Am 19.03.2016 um 15:17 schrieb Lentes, Bernd:

one further question:
if some of my e-mails get through (like this one) and others don't, it does not 
depend on the reputation of our domain or mailserver ? Right ?
So the reason has to be that particular e-Mail?


both

a spamfilter is typically score based and combines a ton of rules, some 
add points, some remove points and the decision is made of the summary


when you have a bad server reputation you start with a penalty, some 
other rules hitting and a not well trained bayes makes the rest


"How do i have to provide the ip" in case of RBLs?
https://en.wikipedia.org/wiki/Reverse_DNS_lookup






Re: need help from the list admin

2016-03-19 Thread Reindl Harald



Am 19.03.2016 um 15:08 schrieb Lentes, Bernd:

Ok. I tried again:

pc53200:~ # nslookup 20.103.107.146.score.senderscore.com.
Server: 146.107.8.88
Address: 146.107.8.88#53

Non-authoritative answer:
Name:   20.103.107.146.score.senderscore.com
Address: 127.0.4.62

My result is 127.0.4.62. How can i interpret this result?


simple - it's a reputation score
100 = perfect reputation - whitelist score
0 = no reputation, pure spammer

with 62 you make it through postscreen but end up with a 2.5 point penalty 
in SA, and the fact that 2 days ago you had "senderscore.com LISTED 127.0.4.63" 
and now get worse scores shows that your outgoing server sends spam 
(given that we have full reputation 100 there without any active 
operation, even before i knew about the RBL/DNSWL)


anything below 127.0.4.70 should raise alerts
___

our postscreen-scoring:
score.senderscore.com=127.0.4.[0..20]*2
score.senderscore.com=127.0.4.[0..69]*2
score.senderscore.com=127.0.4.[90..100]*-1
___

our spamassassin scoring:

header   CUST_DNSBL_21 
eval:check_rbl('cust21-lastexternal','score.senderscore.com.','^127\.0\.4\.(1?[0-9]|20)$')

describe CUST_DNSBL_21 score.senderscore.com (senderscore.com High)
score    CUST_DNSBL_21 1.5

header   CUST_DNSBL_25 
eval:check_rbl('cust25-lastexternal','score.senderscore.com.','^127\.0\.4\.(0?[0-6]?[0-9])$')

describe CUST_DNSBL_25 score.senderscore.com (senderscore.com Medium)
score    CUST_DNSBL_25 1.0

header   CUST_DNSWL_2 
eval:check_rbl('cust35-lastexternal','score.senderscore.com.','^127\.0\.4\.(9[0-9]|100)$')

describe CUST_DNSWL_2 score.senderscore.com (Low Trust)
score    CUST_DNSWL_2 -0.1





Re: need help from the list admin

2016-03-19 Thread Lentes, Bernd
Hi,

one further question:
if some of my e-mails get through (like this one) and others don't, it does not 
depend on the reputation of our domain or mailserver ? Right ?
So the reason has to be that particular e-Mail ?


Bernd
 






Re: need help from the list admin

2016-03-19 Thread Lentes, Bernd


- Am 18. Mrz 2016 um 15:34 schrieb Reindl Harald h.rei...@thelounge.net:

> Am 18.03.2016 um 15:25 schrieb Lentes, Bernd:
>>
>> - Am 18. Mrz 2016 um 14:52 schrieb Johan De Meersman vegiv...@tuxera.be:
>>

> 
> as i already told you offlist
> senderscore.com  LISTED  127.0.4.67
> 
> this *is* a bad reputation
> 
> and even worse: you did not manage to get your server on any DNSWL
> 
> [harry@srv-rhsoft:~]$ nslookup 20.103.107.146.score.senderscore.com.
> Server: 127.0.0.1
> Address: 127.0.0.1#53
> Non-authoritative answer:
> Name:   20.103.107.146.score.senderscore.com
> Address: 127.0.4.67

Ok. I tried again:

pc53200:~ # nslookup 20.103.107.146.score.senderscore.com.
Server: 146.107.8.88
Address: 146.107.8.88#53

Non-authoritative answer:
Name:   20.103.107.146.score.senderscore.com
Address: 127.0.4.62

My result is 127.0.4.62. How can i interpret this result ? I was looking on 
senderscore.com to find an explanation, but i have not been successful.
Also i'm redirected to senderscore.org. Is that ok ?
Does that mean my reputation is 62 ? That would be bad. Because if i check the 
ip of our outgoing mailserver (146.107.103.20) in the webinterface, i get a 
reputation of 74, which is not great but hopefully ok.


I also tested sorbs.net:

pc53200:~ # nslookup 20.103.107.146.dnsbl.sorbs.net
Server: 146.107.8.88
Address: 146.107.8.88#53

*** Can't find 20.103.107.146.dnsbl.sorbs.net: No answer

pc53200:~ # nslookup 146.107.103.20.dnsbl.sorbs.net
Server: 146.107.8.88
Address: 146.107.8.88#53

*** Can't find 146.107.103.20.dnsbl.sorbs.net: No answer

(How do i have to provide the ip ?) Our mailserver seems not to appear on 
sorbs.net. Is that a good sign ?


Bernd

 






Re: need help from the list admin

2016-03-19 Thread Reindl Harald



Am 18.03.2016 um 15:25 schrieb Lentes, Bernd:


- Am 18. Mrz 2016 um 14:52 schrieb Johan De Meersman vegiv...@tuxera.be:


and yet, both of those messages made it through :-p

Stick your domain in http://mxtoolbox.com to see if there's any problems that
might be worth solving. If the mailserver classifies you as spam, that's
usually caused by something on your side :-)

- Original Message -

From: "Chris Knipe" <sav...@savage.za.org>
To: "Lentes, Bernd" <bernd.len...@helmholtz-muenchen.de>
Cc: "MySql" <mysql@lists.mysql.com>
Sent: Friday, 18 March, 2016 14:46:26
Subject: Re: need help from the list admin



Ditto.  I've pretty much given up on this list...




Neither our outgoing mailserver 
(http://mxtoolbox.com/SuperTool.aspx?action=blacklist%3a146.107.103.20=toolpage#)
 nor our domain
(http://mxtoolbox.com/SuperTool.aspx?action=blacklist%3ahelmholtz-muenchen.de=toolpage#)
 is listed there.
I checked that before i wrote the e-Mail. If you could help me to point out 
what's wrong on our side i could ask our mail admin to correct it.
Currently i don't have any idea.


as i already told you offlist
senderscore.com  LISTED  127.0.4.67

this *is* a bad reputation

and even worse: you did not manage to get your server on any DNSWL

[harry@srv-rhsoft:~]$ nslookup 20.103.107.146.score.senderscore.com.
Server: 127.0.0.1
Address: 127.0.0.1#53
Non-authoritative answer:
Name:   20.103.107.146.score.senderscore.com
Address: 127.0.4.67
_

compare with 91.118.73.15 (our outgoing server) which has the best 
possible reputation there (treated as whitelist) and is at the same time on 
the "list.dnswl.org" and "hostkarma.junkemailfilter" lists, while either of 
the two would possibly neutralize the BL listing in a scoring system


[harry@srv-rhsoft:~]$ nslookup 15.73.118.91.score.senderscore.com.
Server: 127.0.0.1
Address: 127.0.0.1#53
Non-authoritative answer:
Name:   15.73.118.91.score.senderscore.com
Address: 127.0.4.100





Re: need help from the list admin

2016-03-19 Thread Chris Knipe
Ok :-)



On Fri, Mar 18, 2016 at 4:34 PM, Reindl Harald <h.rei...@thelounge.net>
wrote:

>
>
> Am 18.03.2016 um 15:25 schrieb Lentes, Bernd:
>
>>
>> - Am 18. Mrz 2016 um 14:52 schrieb Johan De Meersman
>> vegiv...@tuxera.be:
>>
>> and yet, both of those messages made it through :-p
>>>
>>> Stick your domain in http://mxtoolbox.com to see if there's any
>>> problems that
>>> might be worth solving. If the mailserver classifies you as spam, that's
>>> usually caused by something on your side :-)
>>>
>>> - Original Message -
>>>
>>>> From: "Chris Knipe" <sav...@savage.za.org>
>>>> To: "Lentes, Bernd" <bernd.len...@helmholtz-muenchen.de>
>>>> Cc: "MySql" <mysql@lists.mysql.com>
>>>> Sent: Friday, 18 March, 2016 14:46:26
>>>> Subject: Re: need help from the list admin
>>>>
>>>
>>> Ditto.  I've pretty much given up on this list...
>>>>
>>>
>>>
>> Neither our outgoing mailserver (
>> http://mxtoolbox.com/SuperTool.aspx?action=blacklist%3a146.107.103.20=toolpage#)
>> nor our domain
>> (
>> http://mxtoolbox.com/SuperTool.aspx?action=blacklist%3ahelmholtz-muenchen.de=toolpage#)
>> is listed there.
>> I checked that before i wrote the e-Mail. If you could help me to point
>> out what's wrong on our side i could ask our mail admin to correct it.
>> Currently i don't have any idea
>>
>
> as i already told you offlist
> senderscore.com  LISTED  127.0.4.67
>
> this *is* a bad reputation
>
> and even worse: you did not manage to get your server on any DNSWL
>
> [harry@srv-rhsoft:~]$ nslookup 20.103.107.146.score.senderscore.com.
> Server: 127.0.0.1
> Address: 127.0.0.1#53
> Non-authoritative answer:
> Name:   20.103.107.146.score.senderscore.com
> Address: 127.0.4.67
> _
>
> compare with 91.118.73.15 (our outgoing server) which has the best
> possible reputation there (treated as whitelist) and is at the same time on
> the "list.dnswl.org" and "hostkarma.junkemailfilter" while either of the two
> would possibly neutralize the BL listing in a scoring system
>
> [harry@srv-rhsoft:~]$ nslookup 15.73.118.91.score.senderscore.com.
> Server: 127.0.0.1
> Address: 127.0.0.1#53
> Non-authoritative answer:
> Name:   15.73.118.91.score.senderscore.com
> Address: 127.0.4.100
>
>


-- 

Regards,
Chris Knipe


Re: need help from the list admin

2016-03-19 Thread Lentes, Bernd

- Am 18. Mrz 2016 um 14:52 schrieb Johan De Meersman vegiv...@tuxera.be:

> and yet, both of those messages made it through :-p
> 
> Stick your domain in http://mxtoolbox.com to see if there's any problems that
> might be worth solving. If the mailserver classifies you as spam, that's
> usually caused by something on your side :-)
> 
> - Original Message -
>> From: "Chris Knipe" <sav...@savage.za.org>
>> To: "Lentes, Bernd" <bernd.len...@helmholtz-muenchen.de>
>> Cc: "MySql" <mysql@lists.mysql.com>
>> Sent: Friday, 18 March, 2016 14:46:26
>> Subject: Re: need help from the list admin
> 
>> Ditto.  I've pretty much given up on this list...
> 
> --
> Unhappiness is discouraged and will be corrected with kitten pictures.

Hi Johan,


Neither our outgoing mailserver 
(http://mxtoolbox.com/SuperTool.aspx?action=blacklist%3a146.107.103.20=toolpage#)
 nor our domain
(http://mxtoolbox.com/SuperTool.aspx?action=blacklist%3ahelmholtz-muenchen.de=toolpage#)
 is listed there.
I checked that before i wrote the e-Mail. If you could help me to point out 
what's wrong on our side i could ask our mail admin to correct it.
Currently i don't have any idea.


Bernd
 






Re: need help from the list admin

2016-03-18 Thread Chris Knipe
Blah blah blah...

Delivery to the following recipient failed permanently:

 mysql@lists.mysql.com

Technical details of permanent failure:
Your message was rejected by the server for the recipient domain
lists.mysql.com by lists-mx.mysql.com. [137.254.60.71].

The error that the other server returned was:
550 Currently Sending Spam See:
http://www.sorbs.net/lookup.shtml?5.200.22.158


Show me one site, where that IP is, or WAS ever blacklisted?


--
Chris.




On Fri, Mar 18, 2016 at 3:52 PM, Johan De Meersman <vegiv...@tuxera.be>
wrote:

>
> and yet, both of those messages made it through :-p
>
> Stick your domain in http://mxtoolbox.com to see if there's any problems
> that might be worth solving. If the mailserver classifies you as spam,
> that's usually caused by something on your side :-)
>
> - Original Message -
> > From: "Chris Knipe" <sav...@savage.za.org>
> > To: "Lentes, Bernd" <bernd.len...@helmholtz-muenchen.de>
> > Cc: "MySql" <mysql@lists.mysql.com>
> > Sent: Friday, 18 March, 2016 14:46:26
> > Subject: Re: need help from the list admin
>
> > Ditto.  I've pretty much given up on this list...
>
> --
> Unhappiness is discouraged and will be corrected with kitten pictures.
>



-- 

Regards,
Chris Knipe


Re: help with query to count rows while excluding certain rows

2016-01-02 Thread Larry Martell
On Fri, Jan 1, 2016 at 9:31 PM, Peter Brawley
<peter.braw...@earthlink.net> wrote:
> On 1/1/2016 19:24, Larry Martell wrote:
>>
>> On Fri, Jan 1, 2016 at 2:12 PM, Peter Brawley
>> <peter.braw...@earthlink.net> wrote:
>>>
>>> On 12/31/2015 0:51, Larry Martell wrote:
>>>>
>>>> I need to count the number of rows in a table that are grouped by a
>>>> list of columns, but I also need to exclude rows that have more than
>>>> some count when grouped by a different set of columns. Conceptually,
>>>> this is not hard, but I am having trouble doing this efficiently.
>>>>
>>>> My first counting query would be this:
>>>>
>>>> SELECT count(*)
>>>> FROM cst_rollup
>>>> GROUP BY target_name_id, ep, roiname, recipe_process,
>>>> recipe_product, recipe_layer, f_tag_bottom,
>>>> measname, recipe_id
>>>>
>>>> But from this count I need to subtract the count of rows that have
>>>> more than 50 rows with a different grouping:
>>>>
>>>> SELECT count(*)
>>>> FROM cst_rollup
>>>> GROUP BY target_name_id, ep, wafer_id
>>>> HAVING count(*) >= 50
>>>>
>>>> As you can see, the second query has wafer_id, but the first query does
>>>> not.
>>>>
>>>> Currently I am doing this in python, and it's slow. In my current
>>>> implementation I have one query, and it selects the columns (i.e.
>>>> doesn't just count), and I have added wafer_id:
>>>>
>>>> SELECT target_name_id, ep, roiname, recipe_process,
>>>> recipe_product, recipe_layer, f_tag_bottom,
>>>> measname, recipe_id, wafer_id
>>>> FROM cst_rollup
>>>>
>>>> Then I go through the result set (which can be over 200k rows) and I
>>>> count the number of rows with matching (target_name_id, ep, wafer_id).
>>>> Then I go through the rows again and regroup them without wafer_id,
>>>> but skipping the rows that have more than 50 rows for that row's
>>>> (target_name_id, ep, wafer_id).
>>>>
>>>> Is this clear to everyone what I am trying to do?
>>>
>>>
>>> If I've understood this correctly, the resultset you wish to aggregate on
>>> is
>>> ...
>>>
>>> select target_name_id, ep, wafer_id
>>> from cst_rollup a
>>> left join (   -- exclude rows for which wafer_id count >= 50
>>>select target_name_id, ep, wafer_id, count(*) n
>>>from cst_rollup
>>>group by target_name_id, ep, wafer_id
>>>having n >= 50
>>> ) b using ( target_name_id, ep, wafer_id )
>>> where b.target_name_id is null ;
>>>
>>> If that's so, you could assemble that resultset in a temp table then run
>>> the
>>> desired aggregate query on it, or you could aggregate on it directly as a
>>> subquery.
>>
>> That query gives:
>>
>> ERROR 1137 (HY000): Can't reopen table: 'a'
>
>
> So, it's a temporary table, and you'll need to make that not so.

Yes, cst_rollup is a temp table. The underlying table is millions of
rows (with 300 columns), so for efficiency a subset of the rows and
columns is selected into the temp table based on some user input.
It's just the rows in the temp table that are of interest for the
current report.

I was able to get this working with a second temp table:

CREATE TEMPORARY TABLE rollup_exclude
SELECT target_name_id, ep, wafer_id, count(*) n
FROM cst_rollup
GROUP BY target_name_id, ep, wafer_id
HAVING n >= 50

And then:

SELECT count(*)
FROM cst_rollup
LEFT JOIN(
SELECT target_name_id, ep, wafer_id
FROM rollup_exclude) b
USING (target_name_id, ep, wafer_id)
WHERE b.target_name_id IS NULL
GROUP by target_name_id, ep, roiname, recipe_process, recipe_product,
recipe_layer, f_tag_bottom, measname, recipe_id

And the rowcount from that query gave me what I needed.
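
(Side note: the derived table around rollup_exclude shouldn't be necessary;
joining the second temp table directly ought to behave the same, e.g.:)

SELECT count(*)
FROM cst_rollup
LEFT JOIN rollup_exclude b USING (target_name_id, ep, wafer_id)
WHERE b.n IS NULL
GROUP BY target_name_id, ep, roiname, recipe_process, recipe_product,
recipe_layer, f_tag_bottom, measname, recipe_id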

Thanks very much for the help Peter, you gave me a push toward the right path.




Re: help with query to count rows while excluding certain rows

2016-01-01 Thread Larry Martell
On Fri, Jan 1, 2016 at 2:12 PM, Peter Brawley
 wrote:
> On 12/31/2015 0:51, Larry Martell wrote:
>>
>> I need to count the number of rows in a table that are grouped by a
>> list of columns, but I also need to exclude rows that have more than
>> some count when grouped by a different set of columns. Conceptually,
>> this is not hard, but I am having trouble doing this efficiently.
>>
>> My first counting query would be this:
>>
>> SELECT count(*)
>> FROM cst_rollup
>> GROUP BY target_name_id, ep, roiname, recipe_process,
>> recipe_product, recipe_layer, f_tag_bottom,
>> measname, recipe_id
>>
>> But from this count I need to subtract the count of rows that have
>> more than 50 rows with a different grouping:
>>
>> SELECT count(*)
>> FROM cst_rollup
>> GROUP BY target_name_id, ep, wafer_id
>> HAVING count(*) >= 50
>>
>> As you can see, the second query has wafer_id, but the first query does
>> not.
>>
>> Currently I am doing this in python, and it's slow. In my current
>> implementation I have one query, and it selects the columns (i.e.
>> doesn't just count), and I have added wafer_id:
>>
>> SELECT target_name_id, ep, roiname, recipe_process,
>> recipe_product, recipe_layer, f_tag_bottom,
>> measname, recipe_id, wafer_id
>> FROM cst_rollup
>>
>> Then I go through the result set (which can be over 200k rows) and I
>> count the number of rows with matching (target_name_id, ep, wafer_id).
>> Then I go through the rows again and regroup them without wafer_id,
>> but skipping the rows that have more than 50 rows for that row's
>> (target_name_id, ep, wafer_id).
>>
>> Is this clear to everyone what I am trying to do?
>
>
> If I've understood this correctly, the resultset you wish to aggregate on is
> ...
>
> select target_name_id, ep, wafer_id
> from cst_rollup a
> left join (   -- exclude rows for which wafer_id count >= 50
>   select target_name_id, ep, wafer_id, count(*) n
>   from cst_rollup
>   group by target_name_id, ep, wafer_id
>   having n >= 50
> ) b using ( target_name_id, ep, wafer_id )
> where b.target_name_id is null ;
>
> If that's so, you could assemble that resultset in a temp table then run the
> desired aggregate query on it, or you could aggregate on it directly as a
> subquery.

That query gives:

ERROR 1137 (HY000): Can't reopen table: 'a'




Re: help with query to count rows while excluding certain rows

2016-01-01 Thread Peter Brawley

On 12/31/2015 0:51, Larry Martell wrote:

I need to count the number of rows in a table that are grouped by a
list of columns, but I also need to exclude rows that have more than
some count when grouped by a different set of columns. Conceptually,
this is not hard, but I am having trouble doing this efficiently.

My first counting query would be this:

SELECT count(*)
FROM cst_rollup
GROUP BY target_name_id, ep, roiname, recipe_process,
recipe_product, recipe_layer, f_tag_bottom,
measname, recipe_id

But from this count I need to subtract the count of rows that have
more than 50 rows with a different grouping:

SELECT count(*)
FROM cst_rollup
GROUP BY target_name_id, ep, wafer_id
HAVING count(*) >= 50

As you can see, the second query has wafer_id, but the first query does not.

Currently I am doing this in python, and it's slow. In my current
implementation I have one query, and it selects the columns (i.e.
doesn't just count), and I have added wafer_id:

SELECT target_name_id, ep, roiname, recipe_process,
recipe_product, recipe_layer, f_tag_bottom,
measname, recipe_id, wafer_id
FROM cst_rollup

Then I go through the result set (which can be over 200k rows) and I
count the number of rows with matching (target_name_id, ep, wafer_id).
Then I go through the rows again and regroup them without wafer_id,
but skipping the rows that have more than 50 rows for that row's
(target_name_id, ep, wafer_id).

Is this clear to everyone what I am trying to do?


If I've understood this correctly, the resultset you wish to aggregate 
on is ...


select target_name_id, ep, wafer_id
from cst_rollup a
left join (   -- exclude rows for which wafer_id count >= 50
  select target_name_id, ep, wafer_id, count(*) n
  from cst_rollup
  group by target_name_id, ep, wafer_id
  having n >= 50
) b using ( target_name_id, ep, wafer_id )
where b.target_name_id is null ;

If that's so, you could assemble that resultset in a temp table then run 
the desired aggregate query on it, or you could aggregate on it directly 
as a subquery.


PB

-


I'd like to do this all in sql with count because then I do not have
to actually return and parse the data in python.

Can anyone think of a way to do this in sql in a way that will be more
efficient than my current implementation?


Thanks!
-Larry







help with query to count rows while excluding certain rows

2015-12-30 Thread Larry Martell
I need to count the number of rows in a table that are grouped by a
list of columns, but I also need to exclude rows that have more than
some count when grouped by a different set of columns. Conceptually,
this is not hard, but I am having trouble doing this efficiently.

My first counting query would be this:

SELECT count(*)
FROM cst_rollup
GROUP BY target_name_id, ep, roiname, recipe_process,
recipe_product, recipe_layer, f_tag_bottom,
measname, recipe_id

But from this count I need to subtract the count of rows that have
more than 50 rows with a different grouping:

SELECT count(*)
FROM cst_rollup
GROUP BY target_name_id, ep, wafer_id
HAVING count(*) >= 50

As you can see, the second query has wafer_id, but the first query does not.

Currently I am doing this in python, and it's slow. In my current
implementation I have one query, and it selects the columns (i.e.
doesn't just count), and I have added wafer_id:

SELECT target_name_id, ep, roiname, recipe_process,
recipe_product, recipe_layer, f_tag_bottom,
measname, recipe_id, wafer_id
FROM cst_rollup

Then I go through the result set (which can be over 200k rows) and I
count the number of rows with matching (target_name_id, ep, wafer_id).
Then I go through the rows again and regroup them without wafer_id,
but skipping the rows that have more than 50 rows for that row's
(target_name_id, ep, wafer_id).

Is this clear to everyone what I am trying to do?

I'd like to do this all in sql with count because then I do not have
to actually return and parse the data in python.

Can anyone think of a way to do this in sql in a way that will be more
efficient than my current implementation?


Thanks!
-Larry




Re: Query Summary Help...

2015-10-24 Thread Mogens Melander

You need to GROUP BY those fields NOT in the aggregate function. Like:

SELECT f.id,f.name,sum(p.price)
FROM fruit f
left join purchase p on f.id = p.fruit
where p.price is not null
group by f.id,f.name;

1, 'Apples', 2
2, 'Grapes', 6.5
4, 'Kiwis', 4

On 2015-10-23 04:15, Don Wieland wrote:

Hi gang,

I have a query:

SELECT
p.pk_ProductID,
p.Description,
i.Quantity

FROM invoice_invoicelines_Product p
JOIN invoice_InvoiceLines i ON p.pk_ProductID = i.fk_ProductID AND
i.fk_InvoiceID IN (1,2,3)

WHERE p.pk_ProductID IN (1,2,3);

It produces a list like the following:

1,Banana,3
2,Orange,1
2,Orange,4
3,Melon,3
3,Melon,3

I want to SUM the i.Quantity per ProductID, but I am unable to get the
scope/syntax correct. I was expecting the following would work:

SELECT
p.pk_ProductID,
p.Description,
SUM(i.Quantity)

FROM invoice_invoicelines_Product p
JOIN invoice_InvoiceLines i ON p.pk_ProductID = i.fk_ProductID AND
i.fk_InvoiceID IN (1,2,3)

WHERE p.pk_ProductID IN (1,2,3)
GROUP BY i.fk_ProductID;

but it is not working.


Little help please. Thanks!


Don Wieland
d...@pointmade.net
http://www.pointmade.net
https://www.facebook.com/pointmade.band


--
Mogens
+66 8701 33224







Re: Query Help...

2015-10-22 Thread shawn l.green



On 10/22/2015 11:48 AM, Don Wieland wrote:



On Oct 20, 2015, at 1:24 PM, shawn l.green  wrote:

Which release of MySQL are you using?


Version 5.5.45-cll


How many rows do you get if you remove the GROUP_CONCAT operator? We don't need 
to see the results. (sometimes it is a good idea to look at the raw, 
unprocessed results)

Is it possible that you are attempting to concat more values than allowed by 
--group-concat-max-len ?


When I did this I realized I was missing a GROUP BY clause

Here is the debugged working version. Thanks guys.

SELECT
ht.*,
CONCAT(o.first_name, " ", o.last_name) AS orphan,
GROUP_CONCAT(DISTINCT hti.rec_code ORDER BY hti.rec_code ASC SEPARATOR ", ") AS 
alloc
FROM hiv_transactions ht
LEFT JOIN tk_orphans o ON ht.orphan_id = o.orphan_id
LEFT JOIN hiv_trans_items hti ON ht.transaction_id = hti.hiv_transaction_id
WHERE ht.donor_id = 730 AND ht.tr_date BETWEEN "2015-01-01 00:00:00" AND "2015-12-31 
23:59:59"
GROUP BY ht.`transaction_id`
ORDER BY ht.tr_date DESC, ht.rec_code ASC;

Don Wieland
d...@pointmade.net
http://www.pointmade.net
https://www.facebook.com/pointmade.band




Thank you for sharing your solution.

Best wishes,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: Query Summary Help...

2015-10-22 Thread Michael Dykman
I'm not at a terminal but have you tried grouping by p.pk_ProductID instead
of i.fk...? It is the actual value you are selecting as well as being on
the primary table in the query.

On Thu, Oct 22, 2015, 5:18 PM Don Wieland <d...@pointmade.net> wrote:

> Hi gang,
>
> I have a query:
>
> SELECT
> p.pk_ProductID,
> p.Description,
> i.Quantity
>
> FROM invoice_invoicelines_Product p
> JOIN invoice_InvoiceLines i ON p.pk_ProductID = i.fk_ProductID AND
> i.fk_InvoiceID IN (1,2,3)
>
> WHERE p.pk_ProductID IN (1,2,3);
>
> It produces a list like the following:
>
> 1,Banana,3
> 2,Orange,1
> 2,Orange,4
> 3,Melon,3
> 3,Melon,3
>
> I want to SUM the i.Quantity per ProductID, but I am unable to get the
> scope/syntax correct. I was expecting the following would work:
>
> SELECT
> p.pk_ProductID,
> p.Description,
> SUM(i.Quantity)
>
> FROM invoice_invoicelines_Product p
> JOIN invoice_InvoiceLines i ON p.pk_ProductID = i.fk_ProductID AND
> i.fk_InvoiceID IN (1,2,3)
>
> WHERE p.pk_ProductID IN (1,2,3)
> GROUP BY i.fk_ProductID;
>
> but it is not working.
>
>
> Little help please. Thanks!
>
>
> Don Wieland
> d...@pointmade.net
> http://www.pointmade.net
> https://www.facebook.com/pointmade.band
>
>
>
>
>


Re: Query Summary Help...

2015-10-22 Thread Don Wieland

> On Oct 22, 2015, at 2:41 PM, Michael Dykman  wrote:
> 
> I'm not at a terminal but have you tried grouping by p.pk_ProductID instead
> of i.fk...? It is the actual value you are selecting as well as being on
> the primary table in the query.

Yeah I tried that - actually the SUM I need is on the JOIN relationship - 
results should be:

1,Banana,3
2,Orange,5
3,Melon,6

Thanks!

Don Wieland
d...@pointmade.net
http://www.pointmade.net
https://www.facebook.com/pointmade.band






Query Summary Help...

2015-10-22 Thread Don Wieland
Hi gang,

I have a query:

SELECT 
p.pk_ProductID, 
p.Description, 
i.Quantity  

FROM invoice_invoicelines_Product p
JOIN invoice_InvoiceLines i ON p.pk_ProductID = i.fk_ProductID AND 
i.fk_InvoiceID IN (1,2,3)  

WHERE p.pk_ProductID IN (1,2,3);

It produces a list like the following:

1,Banana,3
2,Orange,1
2,Orange,4
3,Melon,3
3,Melon,3

I want to SUM the i.Quantity per ProductID, but I am unable to get the 
scope/syntax correct. I was expecting the following would work:

SELECT 
p.pk_ProductID, 
p.Description, 
SUM(i.Quantity)  

FROM invoice_invoicelines_Product p
JOIN invoice_InvoiceLines i ON p.pk_ProductID = i.fk_ProductID AND 
i.fk_InvoiceID IN (1,2,3)  

WHERE p.pk_ProductID IN (1,2,3)
GROUP BY i.fk_ProductID;

but it is not working.


Little help please. Thanks!


Don Wieland
d...@pointmade.net
http://www.pointmade.net
https://www.facebook.com/pointmade.band






Re: Query Summary Help...

2015-10-22 Thread Michael Dykman
One more guess:

Try explicitly aliasing the fields of interest and using the aliases where
MySQL allows them (GROUP BY, ORDER BY, HAVING); WHERE and ON still need the
real column names.

SELECT
p.pk_ProductID as pid,
p.Description as dsc,
SUM(i.Quantity) as totl

FROM invoice_invoicelines_Product p
JOIN invoice_InvoiceLines i ON p.pk_ProductID = i.fk_ProductID

WHERE p.pk_ProductID IN (1,2,3)
AND i.fk_InvoiceID IN (1,2,3)
GROUP BY pid;

Note that I moved the invoiceID clause out of the join condition into the
where filter. The ON clause should only contain expressions of relational
interest.

On Thu, Oct 22, 2015, 6:00 PM Don Wieland  wrote:

>
> > On Oct 22, 2015, at 2:41 PM, Michael Dykman  wrote:
> >
> > I'm not at a terminal but have you tried grouping by p.pk_ProductID
> instead
> > of i.fk...? It is the actual value you are selecting as well as being on
> > the primary table in the query.
>
> Yeah I tried that - actually the SUM I need is on the JOIN relationship -
> results should be:
>
> 1,Banana,3
> 2,Orange,5
> 3,Melon,6
>
> Thanks!
>
> Don Wieland
> d...@pointmade.net
> http://www.pointmade.net
> https://www.facebook.com/pointmade.band
>
>
>
>
>


Re: Query Help...

2015-10-22 Thread Don Wieland

> On Oct 20, 2015, at 1:24 PM, shawn l.green  wrote:
> 
> Which release of MySQL are you using?

Version 5.5.45-cll

> How many rows do you get if you remove the GROUP_CONCAT operator? We don't 
> need to see the results. (sometimes it is a good idea to look at the raw, 
> unprocessed results)
> 
> Is it possible that you are attempting to concat more values than allowed by 
> --group-concat-max-len ?

When I did this I realized I was missing a GROUP BY clause

Here is the debugged working version. Thanks guys. 

SELECT 
ht.*, 
CONCAT(o.first_name, " ", o.last_name) AS orphan, 
GROUP_CONCAT(DISTINCT hti.rec_code ORDER BY hti.rec_code ASC SEPARATOR ", ") AS 
alloc 
FROM hiv_transactions ht 
LEFT JOIN tk_orphans o ON ht.orphan_id = o.orphan_id 
LEFT JOIN hiv_trans_items hti ON ht.transaction_id = hti.hiv_transaction_id 
WHERE ht.donor_id = 730 AND ht.tr_date BETWEEN "2015-01-01 00:00:00" AND 
"2015-12-31 23:59:59" 
GROUP BY ht.`transaction_id`
ORDER BY ht.tr_date DESC, ht.rec_code ASC;

Don Wieland
d...@pointmade.net
http://www.pointmade.net
https://www.facebook.com/pointmade.band






Re: Query Help...

2015-10-20 Thread Peter Brawley

On 2015-10-20 12:54 PM, Don Wieland wrote:

Hi all,

Trying to get a query working:

SELECT
ht.*,
CONCAT(o.first_name, " ", o.last_name) AS orphan,
GROUP_CONCAT(DISTINCT hti.rec_code ORDER BY hti.rec_code ASC SEPARATOR ", ") AS 
alloc

FROM hiv_transactions ht

LEFT JOIN tk_orphans o ON ht.orphan_id = o.orphan_id
LEFT JOIN hiv_trans_items hti ON ht.transaction_id = hti.hiv_transaction_id

WHERE ht.donor_id = 730 AND ht.tr_date BETWEEN "2014-01-01 00:00:00" AND 
"2014-12-31 23:59:59"
ORDER BY ht.tr_date DESC, ht.rec_code ASC;



I am only showing one row of the “hiv_transactions” table when there are 
multiple rows.

On the GROUP_CONCAT I am trying to get a comma-delimited list of the child 
rec_code values with no duplicates

Appreciate any help. Hopefully a small mod ;-)


Group_Concat() is an aggregating function, so you need to Group By the 
column(s) on which you wish to aggregate, and for valid results you need 
to limit the Selected columns to those on which you're aggregating plus 
those columns that have unique values for your aggregating columns.


PB





Don Wieland
D W   D a t a






Query Help...

2015-10-20 Thread Don Wieland
Hi all,

Trying to get a query working:

SELECT 
ht.*, 
CONCAT(o.first_name, " ", o.last_name) AS orphan, 
GROUP_CONCAT(DISTINCT hti.rec_code ORDER BY hti.rec_code ASC SEPARATOR ", ") AS 
alloc 

FROM hiv_transactions ht 

LEFT JOIN tk_orphans o ON ht.orphan_id = o.orphan_id 
LEFT JOIN hiv_trans_items hti ON ht.transaction_id = hti.hiv_transaction_id 

WHERE ht.donor_id = 730 AND ht.tr_date BETWEEN "2014-01-01 
00:00:00" AND "2014-12-31 23:59:59" 
ORDER BY ht.tr_date DESC, ht.rec_code ASC;



I am only showing one row of the “hiv_transactions” table when there are 
multiple rows.

On the GROUP_CONCAT I am trying to get a comma-delimited list of the child 
rec_code values with no duplicates

Appreciate any help. Hopefully a small mod ;-)


Don Wieland
D W   D a t a  

Re: Query Help...

2015-10-20 Thread shawn l.green



On 10/20/2015 1:54 PM, Don Wieland wrote:

Hi all,

Trying to get a query working:

SELECT
ht.*,
CONCAT(o.first_name, " ", o.last_name) AS orphan,
GROUP_CONCAT(DISTINCT hti.rec_code ORDER BY hti.rec_code ASC SEPARATOR ", ") AS 
alloc

FROM hiv_transactions ht

LEFT JOIN tk_orphans o ON ht.orphan_id = o.orphan_id
LEFT JOIN hiv_trans_items hti ON ht.transaction_id = hti.hiv_transaction_id

WHERE ht.donor_id = 730 AND ht.tr_date BETWEEN "2014-01-01 00:00:00" AND "2014-12-31 
23:59:59"
ORDER BY ht.tr_date DESC, ht.rec_code ASC;



I am only showing one row of the “hiv_transactions” table when there are 
multiple rows.

On the GROUP_CONCAT I am trying to get a comma-delimited list of the child 
rec_code values with no duplicates

Appreciate any help. Hopefully a small mod ;-)


Don Wieland



Which release of MySQL are you using?

How many rows do you get if you remove the GROUP_CONCAT operator? We 
don't need to see the results. (sometimes it is a good idea to look at 
the raw, unprocessed results)


Is it possible that you are attempting to concat more values than 
allowed by --group-concat-max-len ?
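
(For reference, a quick way to check the limit and, for the current session,
raise it; 1000000 is just an example value:)

SHOW VARIABLES LIKE 'group_concat_max_len';
SET SESSION group_concat_max_len = 1000000;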


Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: Need a little admin help

2015-09-01 Thread shawn l.green

Hello Steve,

On 8/27/2015 9:11 PM, Steve Matzura wrote:

I have a Wordpress user who is setting up a Website and says he can't
connect to his database. Both I and the Wordpress admin are new to
this, so I've probably done something wrong when I set him up
initially.

Once I connected to SQL as the SQL admin, I used the following
commands to set up the new user's access. In the commands below, I
used my name for the user and database names:

CREATE DATABASE steve_db;
GRANT ALL PRIVILEGES ON steve_db.* TO "steve"@"localhost"
 -> IDENTIFIED BY "steve_pw";
FLUSH PRIVILEGES;

All commands worked successfully.

To figure out what I did wrong, I need to know how to list all SQL
users and what databases they have access to, and if I discover I've
connected a user to a wrong database, how to correct this--do I delete
the user and database and start it all over, or is it easier to modify
wrong things than to replace them? Whatever I do, including deleting
everything, is OK, since the only things I'm doing with SQL at this
time have to do with Postfix, and I certainly know enough not to touch
those.

As always, thanks in advance.



Unless that user is going to terminal into that host server before 
trying to start their MySQL session, the account you created is not 
going to work.  The host pattern "@localhost" only authenticates users 
that appear to be connecting from within the host machine itself.
If this other user is attempting to connect to his database 
from some other location, you will need a different host pattern to 
allow that user to authenticate.


To see what privileges an account has you would use the "SHOW GRANTS FOR 
..." command. Any account can issue a SHOW GRANTS command (without any 
user name or FOR keyword) to see their own privileges.
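
For example, a sketch assuming the user should be able to connect from any host
(narrow the "%" pattern to a specific host or subnet where you can):

GRANT ALL PRIVILEGES ON steve_db.* TO "steve"@"%" IDENTIFIED BY "steve_pw";
SHOW GRANTS FOR "steve"@"%";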



Additional reading:
http://dev.mysql.com/doc/refman/5.6/en/account-names.html
http://dev.mysql.com/doc/refman/5.6/en/show-grants.html


Does that give you the details you need to create a second account with 
the appropriate host pattern?


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Need a little admin help

2015-08-27 Thread Steve Matzura
I have a Wordpress user who is setting up a Website and says he can't
connect to his database. Both I and the Wordpress admin are new to
this, so I've probably done something wrong when I set him up
initially.

Once I connected to SQL as the SQL admin, I used the following
commands to set up the new user's access. In the commands below, I
used my name as for user and database names:

CREATE DATABASE steve_db;
GRANT ALL PRIVILEGES ON steve_db.* TO "steve"@"localhost"
 -> IDENTIFIED BY "steve_pw";
FLUSH PRIVILEGES;

All commands worked successfully.

To figure out what I did wrong, I need to know how to list all SQL
users and what databases they have access to, and if I discover I've
connected a user to a wrong database, how to correct this--do I delete
the user and database and start it all over, or is it easier to modify
wrong things than to replace them? Whatever I do, including deleting
everything, is OK, since the only things I'm doing with SQL at this
time have to do with Postfix, and I certainly know enough not to touch
those.

As always, thanks in advance.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Help with REGEXP

2015-03-22 Thread Jan Steinman
 From: Olivier Nicole olivier.nic...@cs.ait.ac.th
 
 You could look for a tool called The Regex Coach. While it is mainly
 for Windows, it runs very well in Wine. I find it highly useful to debug
 regexps.

On the Mac, look for RegExRx. It lets you paste in text to work on, build a 
regex, and see the result in real time. I also use one simply called 
Patterns, another real-time regex engine. It does some things RegExRx doesn't 
do, and vice-versa.

 Jan Steinman, EcoReality Co-op 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Help with REGEXP

2015-03-19 Thread Paul Halliday
I am trying to pick out a range of IP addresses using REGEXP but
failing miserably :)

The pattern I want to match is:

10.%.224-239.%.%

The regex I have looks like this:

AND INET_NTOA(src_ip) REGEXP '\d{1,3}\\.\d{1,3}\.(22[4-9]|23[0-9])\\.\d{1,3}'

but, go fish. Thoughts?


Thanks!

-- 
Paul Halliday
http://www.pintumbler.org/

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Help with REGEXP

2015-03-19 Thread Olivier Nicole
Paul,

You could look for a tool called The Regex Coach. While it is mainly
for Windows, it runs very well in Wine. I find it highly useful to debug
regexps.

Best regards,

Olivier
-- 

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Help with REGEXP

2015-03-19 Thread Michael Dykman
Trying to pattern match ip addresses is a famous anti-pattern; it's one of
those things where you feel like it should work, but it won't.

Your case, however, is pretty specific. taking advantage of the limited
range (I will assume you only wanted 4 sections of IPv4)

this should come close:

10[.]\d{1,3}[.](224|225|226|227|228|229|23\d)[.]\d{1,3}
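
One caveat: \d is Perl-style syntax, and MySQL's REGEXP (before 8.0) is
POSIX-based and does not understand it, so a sketch of the same test in that
dialect would be:

AND INET_NTOA(src_ip) REGEXP '^10[.][[:digit:]]{1,3}[.](22[4-9]|23[0-9])[.][[:digit:]]{1,3}$'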

On Thu, Mar 19, 2015 at 9:39 AM, Paul Halliday paul.halli...@gmail.com
wrote:

 I am trying to pick out a range of IP addresses using REGEXP but
 failing miserably :)

 The pattern I want to match is:

 10.%.224-239.%.%

 The regex I have looks like this:

 AND INET_NTOA(src_ip) REGEXP
 '\d{1,3}\\.\d{1,3}\.(22[4-9]|23[0-9])\\.\d{1,3}'

 but, go fish. Thoughts?


 Thanks!

 --
 Paul Halliday
 http://www.pintumbler.org/

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




-- 
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.


Re: Help with REGEXP

2015-03-19 Thread Paul Halliday
I don't think it accepts \d, or much of anything else I am used to
putting in expressions :)

This is what I ended up with and it appears to be working:

REGEXP '10.[[:alnum:]]{1,3}.(22[4-9]|23[0-9]).[[:alnum:]]{1,3}'
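
Since src_ip is already stored as a number, a bitwise test would skip INET_NTOA
and the regex entirely -- a sketch, assuming src_ip holds an unsigned 32-bit
IPv4 value:

AND (src_ip >> 24) = 10                        -- first octet is 10
AND ((src_ip >> 8) & 255) BETWEEN 224 AND 239  -- third octet in 224-239

This compares integers instead of generated strings, so it is also cheaper per row.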



On Thu, Mar 19, 2015 at 11:10 AM, Michael Dykman mdyk...@gmail.com wrote:
 Trying to pattern match ip addresses is a famous anti-pattern; it's one of
 those things where you feel like it should work, but it won't.

 Your case, however, is pretty specific. taking advantage of the limited
 range (I will assume you only wanted 4 sections of IPv4)

 this should come close:

 10[.]\d{1,3}[.](224|225|226|227|228|229|23\d)[.]\d{1,3}

 On Thu, Mar 19, 2015 at 9:39 AM, Paul Halliday paul.halli...@gmail.com
 wrote:

 I am trying to pick out a range of IP addresses using REGEXP but
 failing miserably :)

 The pattern I want to match is:

 10.%.224-239.%.%

 The regex I have looks like this:

 AND INET_NTOA(src_ip) REGEXP
 '\d{1,3}\\.\d{1,3}\.(22[4-9]|23[0-9])\\.\d{1,3}'

 but, go fish. Thoughts?


 Thanks!

 --
 Paul Halliday
 http://www.pintumbler.org/

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




 --
  - michael dykman
  - mdyk...@gmail.com

  May the Source be with you.



-- 
Paul Halliday
http://www.pintumbler.org/

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Help improving query performance

2015-02-04 Thread shawn l.green

Hi Larry,

On 2/1/2015 4:49 PM, Larry Martell wrote:

I have 2 queries. One takes 4 hours to run and returns 21 rows, and
the other, which has 1 additional where clause, takes 3 minutes and
returns 20 rows. The main table being selected from is largish
(37,247,884 rows with 282 columns). Caching is off for my testing, so
it's not related to that. To short circuit anyone asking, these
queries are generated by python code, which is why there's an IN
clause with 1 value, as opposed to an =.

Here are the queries and their explains. The significant difference is
that the faster query has Using
intersect(data_cst_bbccbce0,data_cst_fba12377) in the query plan -
those 2 indexes are on the 2 columns in the where clause, so that's
why the second one is faster. But I am wondering what I can do to make
the first one faster.


4 hour query:

SELECT MIN(data_tool.name) as tool,
MIN(data_cst.date_time) start,
MAX(data_cst.date_time) end,
MIN(data_target.name) as target,
MIN(data_lot.name) as lot,
MIN(data_wafer.name) as wafer,
MIN(measname) as measname,
MIN(data_recipe.name) as recipe
FROM data_cst
INNER JOIN data_tool ON data_tool.id = data_cst.tool_id
INNER JOIN data_target ON data_target.id = data_cst.target_name_id
INNER JOIN data_lot ON data_lot.id = data_cst.lot_id
INNER JOIN data_wafer ON data_wafer.id = data_cst.wafer_id
INNER JOIN data_measparams ON data_measparams.id = data_cst.meas_params_name_id
INNER JOIN data_recipe ON data_recipe.id = data_cst.recipe_id
WHERE data_target.id IN (172) AND
   data_cst.date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26 
23:59:59'
GROUP BY wafer_id, data_cst.lot_id, target_name_id



... snipped ...




Faster query:

SELECT MIN(data_tool.name) as tool,
MIN(data_cst.date_time) start,
MAX(data_cst.date_time) end,
MIN(data_target.name) as target,
MIN(data_lot.name) as lot,
MIN(data_wafer.name) as wafer,
MIN(measname) as measname,
MIN(data_recipe.name) as recipe
FROM data_cst
INNER JOIN data_tool ON data_tool.id = data_cst.tool_id
INNER JOIN data_target ON data_target.id = data_cst.target_name_id
INNER JOIN data_lot ON data_lot.id = data_cst.lot_id
INNER JOIN data_wafer ON data_wafer.id = data_cst.wafer_id
INNER JOIN data_measparams ON data_measparams.id = data_cst.meas_params_name_id
INNER JOIN data_recipe ON data_recipe.id = data_cst.recipe_id
WHERE data_target.id IN (172) AND
   data_recipe.id IN (148) AND
   data_cst.date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26 
23:59:59'
GROUP BY wafer_id, data_cst.lot_id, target_name_id


... snip ...


Thanks for taking the time to read this, and for any help or pointers
you can give me.



The biggest difference is the added selectivity generated by the WHERE 
term against the data_recipe table.


Compare the two EXPLAINS, in the faster query you see that data_recipe 
is listed second. This allows the additional term a chance to reduce the 
number of row combinations for the entire query.


To really get at the logic behind how the Optimizer chooses its 
execution plan, get an optimizer trace. Look at the cost estimates for 
each phase being considered.

http://dev.mysql.com/doc/refman/5.6/en/optimizer-trace-table.html
http://dev.mysql.com/doc/internals/en/optimizer-tracing.html
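
A minimal tracing session, as a sketch:

SET optimizer_trace='enabled=on';
-- run the slow SELECT (or a cut-down version of it) here
SELECT TRACE FROM INFORMATION_SCHEMA.OPTIMIZER_TRACE;
SET optimizer_trace='enabled=off';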

Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Help improving query performance

2015-02-04 Thread shawn l.green

Hi Larry,

On 2/4/2015 3:18 PM, Larry Martell wrote:

On Wed, Feb 4, 2015 at 2:56 PM, shawn l.green shawn.l.gr...@oracle.com wrote:

Hi Larry,


On 2/1/2015 4:49 PM, Larry Martell wrote:


I have 2 queries. One takes 4 hours to run and returns 21 rows, and
the other, which has 1 additional where clause, takes 3 minutes and
returns 20 rows. The main table being selected from is largish
(37,247,884 rows with 282 columns). Caching is off for my testing, so
it's not related to that. To short circuit anyone asking, these
queries are generated by python code, which is why there's an IN
clause with 1 value, as opposed to an =.

Here are the queries and their explains. The significant difference is
that the faster query has Using
intersect(data_cst_bbccbce0,data_cst_fba12377) in the query plan -
those 2 indexes are on the 2 columns in the where clause, so that's
why the second one is faster. But I am wondering what I can do to make
the first one faster.


4 hour query:

SELECT MIN(data_tool.name) as tool,
 MIN(data_cst.date_time) start,
 MAX(data_cst.date_time) end,
 MIN(data_target.name) as target,
 MIN(data_lot.name) as lot,
 MIN(data_wafer.name) as wafer,
 MIN(measname) as measname,
 MIN(data_recipe.name) as recipe
FROM data_cst
INNER JOIN data_tool ON data_tool.id = data_cst.tool_id
INNER JOIN data_target ON data_target.id = data_cst.target_name_id
INNER JOIN data_lot ON data_lot.id = data_cst.lot_id
INNER JOIN data_wafer ON data_wafer.id = data_cst.wafer_id
INNER JOIN data_measparams ON data_measparams.id =
data_cst.meas_params_name_id
INNER JOIN data_recipe ON data_recipe.id = data_cst.recipe_id
WHERE data_target.id IN (172) AND
data_cst.date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26
23:59:59'
GROUP BY wafer_id, data_cst.lot_id, target_name_id



... snipped ...




Faster query:

SELECT MIN(data_tool.name) as tool,
 MIN(data_cst.date_time) start,
 MAX(data_cst.date_time) end,
 MIN(data_target.name) as target,
 MIN(data_lot.name) as lot,
 MIN(data_wafer.name) as wafer,
 MIN(measname) as measname,
 MIN(data_recipe.name) as recipe
FROM data_cst
INNER JOIN data_tool ON data_tool.id = data_cst.tool_id
INNER JOIN data_target ON data_target.id = data_cst.target_name_id
INNER JOIN data_lot ON data_lot.id = data_cst.lot_id
INNER JOIN data_wafer ON data_wafer.id = data_cst.wafer_id
INNER JOIN data_measparams ON data_measparams.id =
data_cst.meas_params_name_id
INNER JOIN data_recipe ON data_recipe.id = data_cst.recipe_id
WHERE data_target.id IN (172) AND
data_recipe.id IN (148) AND
data_cst.date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26
23:59:59'
GROUP BY wafer_id, data_cst.lot_id, target_name_id


... snip ...



Thanks for taking the time to read this, and for any help or pointers
you can give me.



The biggest difference is the added selectivity generated by the WHERE term
against the data_recipe table.

Compare the two EXPLAINS, in the faster query you see that data_recipe is
listed second. This allows the additional term a chance to reduce the number
of row combinations for the entire query.

To really get at the logic behind how the Optimizer chooses its execution
plan, get an optimizer trace. Look at the cost estimates for each phase
being considered.
http://dev.mysql.com/doc/refman/5.6/en/optimizer-trace-table.html
http://dev.mysql.com/doc/internals/en/optimizer-tracing.html


Thanks very much Shawn for the reply and the links. I will check those
out and I'm sure I will find them very useful.

Meanwhile I changed the query to select from data_cst using the where
clause into a temp table and then I join the temp table with the other
tables. That has improved the slow query from 4 hours to 10 seconds
(!)



Did you also add an index to the temporary table for the JOIN condition? 
It might make it even faster
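
A sketch of that, assuming a hypothetical temp-table name and the grouping keys
from the query:

CREATE TEMPORARY TABLE tmp_cst
  (INDEX (wafer_id), INDEX (lot_id), INDEX (target_name_id))
SELECT *
FROM data_cst
WHERE target_name_id = 172
  AND date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26 23:59:59';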


Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Help improving query performance

2015-02-04 Thread Larry Martell
On Wed, Feb 4, 2015 at 3:25 PM, shawn l.green shawn.l.gr...@oracle.com wrote:
 Hi Larry,


 On 2/4/2015 3:18 PM, Larry Martell wrote:

 On Wed, Feb 4, 2015 at 2:56 PM, shawn l.green shawn.l.gr...@oracle.com
 wrote:

 Hi Larry,


 On 2/1/2015 4:49 PM, Larry Martell wrote:


 I have 2 queries. One takes 4 hours to run and returns 21 rows, and
 the other, which has 1 additional where clause, takes 3 minutes and
 returns 20 rows. The main table being selected from is largish
 (37,247,884 rows with 282 columns). Caching is off for my testing, so
 it's not related to that. To short circuit anyone asking, these
 queries are generated by python code, which is why there's an IN
 clause with 1 value, as opposed to an =.

 Here are the queries and their explains. The significant difference is
 that the faster query has Using
 intersect(data_cst_bbccbce0,data_cst_fba12377) in the query plan -
 those 2 indexes are on the 2 columns in the where clause, so that's
 why the second one is faster. But I am wondering what I can do to make
 the first one faster.


 4 hour query:

 SELECT MIN(data_tool.name) as tool,
  MIN(data_cst.date_time) start,
  MAX(data_cst.date_time) end,
  MIN(data_target.name) as target,
  MIN(data_lot.name) as lot,
  MIN(data_wafer.name) as wafer,
  MIN(measname) as measname,
  MIN(data_recipe.name) as recipe
 FROM data_cst
 INNER JOIN data_tool ON data_tool.id = data_cst.tool_id
 INNER JOIN data_target ON data_target.id = data_cst.target_name_id
 INNER JOIN data_lot ON data_lot.id = data_cst.lot_id
 INNER JOIN data_wafer ON data_wafer.id = data_cst.wafer_id
 INNER JOIN data_measparams ON data_measparams.id =
 data_cst.meas_params_name_id
 INNER JOIN data_recipe ON data_recipe.id = data_cst.recipe_id
 WHERE data_target.id IN (172) AND
 data_cst.date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26
 23:59:59'
 GROUP BY wafer_id, data_cst.lot_id, target_name_id


 ... snipped ...



 Faster query:

 SELECT MIN(data_tool.name) as tool,
  MIN(data_cst.date_time) start,
  MAX(data_cst.date_time) end,
  MIN(data_target.name) as target,
  MIN(data_lot.name) as lot,
  MIN(data_wafer.name) as wafer,
  MIN(measname) as measname,
  MIN(data_recipe.name) as recipe
 FROM data_cst
 INNER JOIN data_tool ON data_tool.id = data_cst.tool_id
 INNER JOIN data_target ON data_target.id = data_cst.target_name_id
 INNER JOIN data_lot ON data_lot.id = data_cst.lot_id
 INNER JOIN data_wafer ON data_wafer.id = data_cst.wafer_id
 INNER JOIN data_measparams ON data_measparams.id =
 data_cst.meas_params_name_id
 INNER JOIN data_recipe ON data_recipe.id = data_cst.recipe_id
 WHERE data_target.id IN (172) AND
 data_recipe.id IN (148) AND
 data_cst.date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26
 23:59:59'
 GROUP BY wafer_id, data_cst.lot_id, target_name_id

 ... snip ...



 Thanks for taking the time to read this, and for any help or pointers
 you can give me.


 The biggest difference is the added selectivity generated by the WHERE
 term
 against the data_recipe table.

 Compare the two EXPLAINS, in the faster query you see that data_recipe is
 listed second. This allows the additional term a chance to reduce the
 number
 of row combinations for the entire query.

 To really get at the logic behind how the Optimizer chooses its execution
 plan, get an optimizer trace. Look at the cost estimates for each phase
 being considered.
 http://dev.mysql.com/doc/refman/5.6/en/optimizer-trace-table.html
 http://dev.mysql.com/doc/internals/en/optimizer-tracing.html


 Thanks very much Shawn for the reply and the links. I will check those
 out and I'm sure I will find them very useful.

 Meanwhile I changed the query to select from data_cst using the where
 clause into a temp table and then I join the temp table with the other
 tables. That has improved the slow query from 4 hours to 10 seconds
 (!)


 Did you also add an index to the temporary table for the JOIN condition? It
 might make it even faster

No, I didn't. I (and the users) were so shocked and happy with the
massive improvement I moved on to make similar changes to other
queries.

This is a django app, and it's a one-shot deal - i.e. there's just the
one query run and the response is sent back to the browser and that's
the end of the session and the temp table. So I'm thinking it's
probably not worth it.

As an aside this change has messed up all my unit tests - they send
multiple requests, but they're all in the same session. So only the
first succeeds and the next one fails because the temp table already
exists. I haven't figured out how to get it to run each request in its
own session. I guess I'm going to have to drop the temp table after I
join with it, before I send the response back.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Help improving query performance

2015-02-04 Thread Larry Martell
On Wed, Feb 4, 2015 at 2:56 PM, shawn l.green shawn.l.gr...@oracle.com wrote:
 Hi Larry,


 On 2/1/2015 4:49 PM, Larry Martell wrote:

 I have 2 queries. One takes 4 hours to run and returns 21 rows, and
 the other, which has 1 additional where clause, takes 3 minutes and
 returns 20 rows. The main table being selected from is largish
 (37,247,884 rows with 282 columns). Caching is off for my testing, so
 it's not related to that. To short circuit anyone asking, these
 queries are generated by python code, which is why there's an IN
 clause with 1 value, as opposed to an =.

 Here are the queries and their explains. The significant difference is
 that the faster query has Using
 intersect(data_cst_bbccbce0,data_cst_fba12377) in the query plan -
 those 2 indexes are on the 2 columns in the where clause, so that's
 why the second one is faster. But I am wondering what I can do to make
 the first one faster.


 4 hour query:

 SELECT MIN(data_tool.name) as tool,
 MIN(data_cst.date_time) start,
 MAX(data_cst.date_time) end,
 MIN(data_target.name) as target,
 MIN(data_lot.name) as lot,
 MIN(data_wafer.name) as wafer,
 MIN(measname) as measname,
 MIN(data_recipe.name) as recipe
 FROM data_cst
 INNER JOIN data_tool ON data_tool.id = data_cst.tool_id
 INNER JOIN data_target ON data_target.id = data_cst.target_name_id
 INNER JOIN data_lot ON data_lot.id = data_cst.lot_id
 INNER JOIN data_wafer ON data_wafer.id = data_cst.wafer_id
 INNER JOIN data_measparams ON data_measparams.id =
 data_cst.meas_params_name_id
 INNER JOIN data_recipe ON data_recipe.id = data_cst.recipe_id
 WHERE data_target.id IN (172) AND
data_cst.date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26
 23:59:59'
 GROUP BY wafer_id, data_cst.lot_id, target_name_id


 ... snipped ...



 Faster query:

 SELECT MIN(data_tool.name) as tool,
 MIN(data_cst.date_time) start,
 MAX(data_cst.date_time) end,
 MIN(data_target.name) as target,
 MIN(data_lot.name) as lot,
 MIN(data_wafer.name) as wafer,
 MIN(measname) as measname,
 MIN(data_recipe.name) as recipe
 FROM data_cst
 INNER JOIN data_tool ON data_tool.id = data_cst.tool_id
 INNER JOIN data_target ON data_target.id = data_cst.target_name_id
 INNER JOIN data_lot ON data_lot.id = data_cst.lot_id
 INNER JOIN data_wafer ON data_wafer.id = data_cst.wafer_id
 INNER JOIN data_measparams ON data_measparams.id =
 data_cst.meas_params_name_id
 INNER JOIN data_recipe ON data_recipe.id = data_cst.recipe_id
 WHERE data_target.id IN (172) AND
data_recipe.id IN (148) AND
data_cst.date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26
 23:59:59'
 GROUP BY wafer_id, data_cst.lot_id, target_name_id

 ... snip ...


 Thanks for taking the time to read this, and for any help or pointers
 you can give me.


 The biggest difference is the added selectivity generated by the WHERE term
 against the data_recipe table.

 Compare the two EXPLAINS, in the faster query you see that data_recipe is
 listed second. This allows the additional term a chance to reduce the number
 of row combinations for the entire query.

 To really get at the logic behind how the Optimizer chooses its execution
 plan, get an optimizer trace. Look at the cost estimates for each phase
 being considered.
 http://dev.mysql.com/doc/refman/5.6/en/optimizer-trace-table.html
 http://dev.mysql.com/doc/internals/en/optimizer-tracing.html

Thanks very much Shawn for the reply and the links. I will check those
out and I'm sure I will find them very useful.

Meanwhile I changed the query to select from data_cst using the where
clause into a temp table and then I join the temp table with the other
tables. That has improved the slow query from 4 hours to 10 seconds
(!)

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Help improving query performance

2015-02-04 Thread shawn l.green

Hello Larry,

On 2/4/2015 3:37 PM, Larry Martell wrote:

On Wed, Feb 4, 2015 at 3:25 PM, shawn l.green shawn.l.gr...@oracle.com wrote:

Hi Larry,


On 2/4/2015 3:18 PM, Larry Martell wrote:


On Wed, Feb 4, 2015 at 2:56 PM, shawn l.green shawn.l.gr...@oracle.com
wrote:


Hi Larry,


On 2/1/2015 4:49 PM, Larry Martell wrote:



I have 2 queries. One takes 4 hours to run and returns 21 rows, and
the other, which has 1 additional where clause, takes 3 minutes and
returns 20 rows. The main table being selected from is largish
(37,247,884 rows with 282 columns). Caching is off for my testing, so
it's not related to that. To short circuit anyone asking, these
queries are generated by python code, which is why there's an IN
clause with 1 value, as opposed to an =.

Here are the queries and their explains. The significant difference is
that the faster query has Using
intersect(data_cst_bbccbce0,data_cst_fba12377) in the query plan -
those 2 indexes are on the 2 columns in the where clause, so that's
why the second one is faster. But I am wondering what I can do to make
the first one faster.


4 hour query:

SELECT MIN(data_tool.name) as tool,
  MIN(data_cst.date_time) start,
  MAX(data_cst.date_time) end,
  MIN(data_target.name) as target,
  MIN(data_lot.name) as lot,
  MIN(data_wafer.name) as wafer,
  MIN(measname) as measname,
  MIN(data_recipe.name) as recipe
FROM data_cst
INNER JOIN data_tool ON data_tool.id = data_cst.tool_id
INNER JOIN data_target ON data_target.id = data_cst.target_name_id
INNER JOIN data_lot ON data_lot.id = data_cst.lot_id
INNER JOIN data_wafer ON data_wafer.id = data_cst.wafer_id
INNER JOIN data_measparams ON data_measparams.id =
data_cst.meas_params_name_id
INNER JOIN data_recipe ON data_recipe.id = data_cst.recipe_id
WHERE data_target.id IN (172) AND
 data_cst.date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26
23:59:59'
GROUP BY wafer_id, data_cst.lot_id, target_name_id



... snipped ...




Faster query:

SELECT MIN(data_tool.name) as tool,
  MIN(data_cst.date_time) start,
  MAX(data_cst.date_time) end,
  MIN(data_target.name) as target,
  MIN(data_lot.name) as lot,
  MIN(data_wafer.name) as wafer,
  MIN(measname) as measname,
  MIN(data_recipe.name) as recipe
FROM data_cst
INNER JOIN data_tool ON data_tool.id = data_cst.tool_id
INNER JOIN data_target ON data_target.id = data_cst.target_name_id
INNER JOIN data_lot ON data_lot.id = data_cst.lot_id
INNER JOIN data_wafer ON data_wafer.id = data_cst.wafer_id
INNER JOIN data_measparams ON data_measparams.id =
data_cst.meas_params_name_id
INNER JOIN data_recipe ON data_recipe.id = data_cst.recipe_id
WHERE data_target.id IN (172) AND
 data_recipe.id IN (148) AND
 data_cst.date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26
23:59:59'
GROUP BY wafer_id, data_cst.lot_id, target_name_id


... snip ...




Thanks for taking the time to read this, and for any help or pointers
you can give me.



The biggest difference is the added selectivity generated by the WHERE
term
against the data_recipe table.

Compare the two EXPLAINS, in the faster query you see that data_recipe is
listed second. This allows the additional term a chance to reduce the
number
of row combinations for the entire query.

To really get at the logic behind how the Optimizer chooses its execution
plan, get an optimizer trace. Look at the cost estimates for each phase
being considered.
http://dev.mysql.com/doc/refman/5.6/en/optimizer-trace-table.html
http://dev.mysql.com/doc/internals/en/optimizer-tracing.html



Thanks very much Shawn for the reply and the links. I will check those
out and I'm sure I will find them very useful.

Meanwhile I changed the query to select from data_cst using the where
clause into a temp table and then I join the temp table with the other
tables. That has improved the slow query from 4 hours to 10 seconds
(!)



Did you also add an index to the temporary table for the JOIN condition? It
might make it even faster


No, I didn't. I (and the users) were so shocked and happy with the
massive improvement I moved on to make similar changes to other
queries.

This is a django app, and it's a one-shot deal - i.e. there's just the
one query run and the response is sent back to the browser and that's
the end of the session and the temp table. So I'm thinking it's
probably not worth it.

As an aside this change has messed up all my unit tests - they send
multiple requests, but they're all in the same session. So only the
first succeeds and the next one fails because the temp table already
exists. I haven't figured out how to get it to run each request in its
own session. I guess I'm going to have to drop the temp table after I
join with it, before I send the response back.



If...
* it's a MEMORY temp table
* it's always the same table design

Then, you can use DELETE to clear the content (it's faster than DROP and re-CREATE).
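
A sketch of that pattern, with a hypothetical MEMORY temp table reused across
requests:

CREATE TEMPORARY TABLE IF NOT EXISTS tmp_cst (
  wafer_id INT,
  lot_id INT,
  target_name_id INT  -- hypothetical subset of the real column list
) ENGINE=MEMORY;
DELETE FROM tmp_cst;  -- empties the table without dropping it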

Help improving query performance

2015-02-01 Thread Larry Martell
SHOW INDEX FROM data_cst (excerpt; every row below is Non_unique = 1, Index_type = BTREE):

+-------------------+---------------------+-------------+------+
| Key_name          | Column_name         | Cardinality | Null |
+-------------------+---------------------+-------------+------+
| data_cst_fba12377 | recipe_id           |        4788 |      |
| data_cst_3e838ccb | user_name_id        |          42 | YES  |
| data_cst_634020d0 | meas_params_name_id |      775997 | YES  |
| data_cst_b84e5788 | run_name_id         |        5439 | YES  |
| data_cst_2030f483 | image_pr_top_id     |    37247884 | YES  |
| data_cst_798133fb | image_measurer_id   |    37247884 | YES  |
| data_cst_ced012e5 | wafer_map_image_id  |      354741 | YES  |
| data_cst_c90ac9f6 | ler_file_path_id    |    37247884 | YES  |
| data_cst_d3c8ac46 | wf_file_path_id     |    37247884 | YES  |
| data_cst_732917fa | ece_file_path_id    |    37247884 | YES  |
| data_cst_5bb7136a | epe_file_path_id    |    37247884 | YES  |
| data_cst_a8290ba2 | mcd_file_path_id    |    18623942 | YES  |
| data_cst_fc65a7fb | ep                  |      760160 | YES  |
| data_cst_dbe16c2b | roiname             |    12415961 | YES  |
+-------------------+---------------------+-------------+------+

Thanks for taking the time to read this, and for any help or pointers
you can give me.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Help optimize query.

2014-12-01 Thread shawn l.green

Hello Mimko,

Sorry for the late reply. I had a bunch of work to take care of before 
vacation, then there was the vacation itself. :)


On 11/13/2014 2:34 PM, Mimiko wrote:

Hello. I have this table:

  show create table cc_agents_tier_status_log:
CREATE TABLE cc_agents_tier_status_log (
   id int(10) unsigned NOT NULL AUTO_INCREMENT,
   date_log timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
   cc_agent varchar(45) NOT NULL,
   cc_agent_tier_status_id tinyint(3) unsigned NOT NULL,
   cc_queue_id tinyint(3) unsigned NOT NULL,
   cc_agent_id int(10) unsigned NOT NULL,
   cc_agent_phone smallint(5) unsigned NOT NULL,
   cc_agent_domain varchar(45) NOT NULL DEFAULT 'pbx01.apa-canal.md',
   PRIMARY KEY (id),
   KEY IDX_cc_agents_tier_status_log_2 (cc_agent) USING HASH,
   KEY IDX_cc_agents_tier_status_log_3 (date_log),
   KEY FK_cc_agents_tier_status_log_2 (cc_agent_id),
   KEY FK_cc_agents_tier_status_log_3 (cc_queue_id),
   KEY FK_cc_agents_tier_status_log_1 (cc_agent_tier_status_id)
USING BTREE,
   KEY IDX_cc_agents_tier_status_log_7 (id,date_log),
   CONSTRAINT FK_cc_agents_tier_status_log_1 FOREIGN KEY
(cc_agent_tier_status_id) REFERENCES cc_agent_tier_status_chart
(id) ON UPDATE CASCADE,
   CONSTRAINT FK_cc_agents_tier_status_log_2 FOREIGN KEY
(cc_agent_id) REFERENCES apacanal.employee (id) ON UPDATE CASCADE,
   CONSTRAINT FK_cc_agents_tier_status_log_3 FOREIGN KEY
(cc_queue_id) REFERENCES cc_queues (id) ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=23799 DEFAULT CHARSET=ascii

  show index from cc_agents_tier_status_log:
+------------+---------------------------------+--------------+-------------------------+-------------+------------+
| Non_unique | Key_name                        | Seq_in_index | Column_name             | Cardinality | Index_type |
+------------+---------------------------------+--------------+-------------------------+-------------+------------+
|          0 | PRIMARY                         |            1 | id                      |       23999 | BTREE      |
|          1 | IDX_cc_agents_tier_status_log_2 |            1 | cc_agent                |         260 | BTREE      |
|          1 | IDX_cc_agents_tier_status_log_3 |            1 | date_log                |       23999 | BTREE      |
|          1 | FK_cc_agents_tier_status_log_2  |            1 | cc_agent_id             |           2 | BTREE      |
|          1 | FK_cc_agents_tier_status_log_3  |            1 | cc_queue_id             |          14 | BTREE      |
|          1 | FK_cc_agents_tier_status_log_1  |            1 | cc_agent_tier_status_id |           2 | BTREE      |
|          1 | IDX_cc_agents_tier_status_log_7 |            1 | id                      |       23999 | BTREE      |
|          1 | IDX_cc_agents_tier_status_log_7 |            2 | date_log                |       23999 | BTREE      |
+------------+---------------------------------+--------------+-------------------------+-------------+------------+

And the query is:
 set @enddate:=now();
 set @startdate:='2014-11-01';
 set @queue_id:=-1;
select s.theHour as theHour,avg(s.nrAgents) as nrAgents from
(select date(a.theDateHour) as theDate,extract(hour from a.theDateHour)
as theHour,count(c.cc_agent_tier_status_id) as nrAgents
from (

select dh.theDateHour as theDateHour, max(c.date_log) as maxdatelog,c.*
FROM
( select concat(d.thedate,' ',h.theHour,':0:0') as theDateHour
from
( select DATE(DATE_ADD(date(@startdate), INTERVAL @i:=@i+1 DAY) ) as
theDate from (select @i:=-1) as t1
inner join cc_member_queue_end_log b on 1=1 and
b.id<=datediff(@enddate,@startdate)+1 ) as d
left outer join
(SELECT 0 AS theHour UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL
SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION
ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9 UNION ALL SELECT 10
UNION ALL SELECT 11 UNION ALL SELECT 12 UNION ALL SELECT 13 UNION ALL
SELECT 14 UNION ALL SELECT 15 UNION ALL SELECT 16 UNION ALL SELECT 17
UNION ALL SELECT 18 UNION ALL SELECT 19 UNION ALL SELECT 20 UNION ALL
SELECT 21 UNION ALL SELECT 22 UNION ALL SELECT 23) as h
on 1=1 ) AS dh
left outer join
cc_agents_tier_status_log as c
on c.date_log<=dh.theDateHour where (if(@queue_id<0,1,0) or
if(@queue_id=c.cc_queue_id,1,0))
group by dh.theDateHour,c.cc_queue_id,c.cc_agent_id,c.cc_agent_phone


) as a
left outer join cc_agents_tier_status_log as c
on c.date_log=a.maxdatelog and c.cc_queue_id=a.cc_queue_id and
c.cc_agent_id=a.cc_agent_id and c.cc_agent_phone=a.cc_agent_phone and
c.cc_agent_tier_status_id=2
group by a.theDateHour
order by date(a.theDateHour),extract(hour from a.theDateHour))
as s
group by s.theHour
order by s.theHour;


This query takes 20 seconds to populate.

Table cc_agents_tier_status_log contains log entries of agent_id
login/logout per queue per phone. status_id can have value 1 (logged
out) and 2 (login) at date_log datetime.

The resulting table must contain average number of agents logged in at
every hour per startdate to enddate.

Hope for some hints. Thank you.


The first problem is that you are generating a lot of extra rows before 
you actually need them. The only place where you should be faking the 

Re: Help optimize query.

2014-11-15 Thread Mimiko

On 15.11.2014 01:06, Peter Brawley wrote:

Let's see the results of Explain Extended this query,  result of Show
Create Table cc_member_queue_end_log.


cc_member_queue_end_log is not of interest, it is used just as a series 
of numbers. It may be any table with ids.


I've changed the query a bit, which seemed to reduce the select time, but 
not by a lot.


set @enddate:=now();
set @startdate:='2014-11-01';
set @queue_id:=-1;
explain extended select s.theHour as theHour,avg(s.nrAgents) as 
nrAgents from
- (select date(FROM_UNIXTIME(a.theDateHour)) as 
theDate,extract(hour from FROM_UNIXTIME(a.theDateHour)) as 
theHour,count(c.cc_agent_tier_status_id) as nrAgents

- from (
-
- select dh.theDateHour as theDateHour, max(c.date_log) as 
maxdatelog,c.*

- FROM
- ( select UNIX_TIMESTAMP(concat(d.thedate,' ',h.theHour,':0:0')) 
as theDateHour

- from
- ( select DATE(DATE_ADD(date('2014-11-01'), INTERVAL @i:=@i+1 
DAY) ) as theDate from (select @i:=-1) as t1
- inner join cc_agents_tier_status_log b on 1=1 and 
b.id<=datediff(now(),'2014-11-01')+1 ) as d

- straight_join
- (SELECT 0 AS theHour UNION ALL SELECT 1 UNION ALL SELECT 2 UNION 
ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 
UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9 UNION ALL 
SELECT 10 UNION ALL SELECT 11 UNION ALL SELECT 12 UNION ALL SELECT 13 
UNION ALL SELECT 14 UNION ALL SELECT 15 UNION ALL SELECT 16 UNION ALL 
SELECT 17 UNION ALL SELECT 18 UNION ALL SELECT 19 UNION ALL SELECT 20 
UNION ALL SELECT 21 UNION ALL SELECT 22 UNION ALL SELECT 23) as h

- on 1=1 ) AS dh
- straight_join
- cc_agents_tier_status_log as c
- on UNIX_TIMESTAMP(c.date_log)<=dh.theDateHour where 
(if(-1<0,1,0) or if(-1=c.cc_queue_id,1,0))

- group by dh.theDateHour,c.cc_queue_id,c.cc_agent_id,c.cc_agent_phone
-
-
- ) as a
- straight_join cc_agents_tier_status_log as c
- on c.date_log=a.maxdatelog and c.cc_queue_id=a.cc_queue_id and 
c.cc_agent_id=a.cc_agent_id and c.cc_agent_phone=a.cc_agent_phone and 
c.cc_agent_tier_status_id=2

- group by a.theDateHour
- order by date(FROM_UNIXTIME(a.theDateHour)),extract(hour from 
FROM_UNIXTIME(a.theDateHour)))

- as s
- group by s.theHour
- order by s.theHour\G
*** 1. row ***
   id: 1
  select_type: PRIMARY
table: <derived2>
 type: ALL
possible_keys: NULL
  key: NULL
  key_len: NULL
  ref: NULL
 rows: 360
 filtered: 100.00
Extra: Using temporary; Using filesort
*** 2. row ***
   id: 2
  select_type: DERIVED
table: <derived3>
 type: ALL
possible_keys: NULL
  key: NULL
  key_len: NULL
  ref: NULL
 rows: 43560
 filtered: 100.00
Extra: Using temporary; Using filesort
*** 3. row ***
   id: 2
  select_type: DERIVED
table: c
 type: ref
possible_keys: 
IDX_cc_agents_tier_status_log_3,FK_cc_agents_tier_status_log_2,FK_cc_agents_tier_status_log_3,FK_cc_agents_tier_status_log_1

  key: IDX_cc_agents_tier_status_log_3
  key_len: 4
  ref: a.maxdatelog
 rows: 1
 filtered: 100.00
Extra: Using where
*** 4. row ***
   id: 3
  select_type: DERIVED
table: <derived4>
 type: ALL
possible_keys: NULL
  key: NULL
  key_len: NULL
  ref: NULL
 rows: 360
 filtered: 100.00
Extra: Using temporary; Using filesort
*** 5. row ***
   id: 3
  select_type: DERIVED
table: c
 type: ALL
possible_keys: NULL
  key: NULL
  key_len: NULL
  ref: NULL
 rows: 24207
 filtered: 100.00
Extra: Using where; Using join buffer
*** 6. row ***
   id: 4
  select_type: DERIVED
table: <derived5>
 type: ALL
possible_keys: NULL
  key: NULL
  key_len: NULL
  ref: NULL
 rows: 15
 filtered: 100.00
Extra:
*** 7. row ***
   id: 4
  select_type: DERIVED
table: <derived7>
 type: ALL
possible_keys: NULL
  key: NULL
  key_len: NULL
  ref: NULL
 rows: 24
 filtered: 100.00
Extra: Using join buffer
*** 8. row ***
   id: 7
  select_type: DERIVED
table: NULL
 type: NULL
possible_keys: NULL
  key: NULL
  key_len: NULL
  ref: NULL
 rows: NULL
 filtered: NULL
Extra: No tables used
*** 9. row ***
   id: 8
  select_type: UNION
table: NULL
 

Re: Help optimize query.

2014-11-14 Thread Peter Brawley
Let's see the results of Explain Extended this query,  result of Show 
Create Table cc_member_queue_end_log.


PB

-

On 2014-11-13 1:34 PM, Mimiko wrote:

Hello. I have this table:

 show create table cc_agents_tier_status_log:
CREATE TABLE cc_agents_tier_status_log (
  id int(10) unsigned NOT NULL AUTO_INCREMENT,
  date_log timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  cc_agent varchar(45) NOT NULL,
  cc_agent_tier_status_id tinyint(3) unsigned NOT NULL,
  cc_queue_id tinyint(3) unsigned NOT NULL,
  cc_agent_id int(10) unsigned NOT NULL,
  cc_agent_phone smallint(5) unsigned NOT NULL,
  cc_agent_domain varchar(45) NOT NULL DEFAULT 'pbx01.apa-canal.md',
  PRIMARY KEY (id),
  KEY IDX_cc_agents_tier_status_log_2 (cc_agent) USING HASH,
  KEY IDX_cc_agents_tier_status_log_3 (date_log),
  KEY FK_cc_agents_tier_status_log_2 (cc_agent_id),
  KEY FK_cc_agents_tier_status_log_3 (cc_queue_id),
  KEY FK_cc_agents_tier_status_log_1 (cc_agent_tier_status_id) 
USING BTREE,

  KEY IDX_cc_agents_tier_status_log_7 (id,date_log),
  CONSTRAINT FK_cc_agents_tier_status_log_1 FOREIGN KEY 
(cc_agent_tier_status_id) REFERENCES cc_agent_tier_status_chart 
(id) ON UPDATE CASCADE,
  CONSTRAINT FK_cc_agents_tier_status_log_2 FOREIGN KEY 
(cc_agent_id) REFERENCES apacanal.employee (id) ON UPDATE 
CASCADE,
  CONSTRAINT FK_cc_agents_tier_status_log_3 FOREIGN KEY 
(cc_queue_id) REFERENCES cc_queues (id) ON UPDATE CASCADE

) ENGINE=InnoDB AUTO_INCREMENT=23799 DEFAULT CHARSET=ascii

 show index from cc_agents_tier_status_log:
+------------+---------------------------------+--------------+-------------------------+-------------+------------+
| Non_unique | Key_name                        | Seq_in_index | Column_name             | Cardinality | Index_type |
+------------+---------------------------------+--------------+-------------------------+-------------+------------+
|          0 | PRIMARY                         |            1 | id                      |       23999 | BTREE      |
|          1 | IDX_cc_agents_tier_status_log_2 |            1 | cc_agent                |         260 | BTREE      |
|          1 | IDX_cc_agents_tier_status_log_3 |            1 | date_log                |       23999 | BTREE      |
|          1 | FK_cc_agents_tier_status_log_2  |            1 | cc_agent_id             |           2 | BTREE      |
|          1 | FK_cc_agents_tier_status_log_3  |            1 | cc_queue_id             |          14 | BTREE      |
|          1 | FK_cc_agents_tier_status_log_1  |            1 | cc_agent_tier_status_id |           2 | BTREE      |
|          1 | IDX_cc_agents_tier_status_log_7 |            1 | id                      |       23999 | BTREE      |
|          1 | IDX_cc_agents_tier_status_log_7 |            2 | date_log                |       23999 | BTREE      |
+------------+---------------------------------+--------------+-------------------------+-------------+------------+


And the query is:
set @enddate:=now();
set @startdate:='2014-11-01';
set @queue_id:=-1;
select s.theHour as theHour,avg(s.nrAgents) as nrAgents from
(select date(a.theDateHour) as theDate,extract(hour from 
a.theDateHour) as theHour,count(c.cc_agent_tier_status_id) as nrAgents

from (

select dh.theDateHour as theDateHour, max(c.date_log) as maxdatelog,c.*
FROM
( select concat(d.thedate,' ',h.theHour,':0:0') as theDateHour
from
( select DATE(DATE_ADD(date(@startdate), INTERVAL @i:=@i+1 DAY) ) as 
theDate from (select @i:=-1) as t1
inner join cc_member_queue_end_log b on 1=1 and 
b.id<=datediff(@enddate,@startdate)+1 ) as d

left outer join
(SELECT 0 AS theHour UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL 
SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 
UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9 UNION ALL 
SELECT 10 UNION ALL SELECT 11 UNION ALL SELECT 12 UNION ALL SELECT 13 
UNION ALL SELECT 14 UNION ALL SELECT 15 UNION ALL SELECT 16 UNION ALL 
SELECT 17 UNION ALL SELECT 18 UNION ALL SELECT 19 UNION ALL SELECT 20 
UNION ALL SELECT 21 UNION ALL SELECT 22 UNION ALL SELECT 23) as h

on 1=1 ) AS dh
left outer join
cc_agents_tier_status_log as c
on c.date_log<=dh.theDateHour where (if(@queue_id<0,1,0) or 
if(@queue_id=c.cc_queue_id,1,0))

group by dh.theDateHour,c.cc_queue_id,c.cc_agent_id,c.cc_agent_phone


) as a
left outer join cc_agents_tier_status_log as c
on c.date_log=a.maxdatelog and c.cc_queue_id=a.cc_queue_id and 
c.cc_agent_id=a.cc_agent_id and c.cc_agent_phone=a.cc_agent_phone and 
c.cc_agent_tier_status_id=2

group by a.theDateHour
order by date(a.theDateHour),extract(hour from a.theDateHour))
as s
group by s.theHour
order by s.theHour;


This query takes 20 seconds to populate.

Table cc_agents_tier_status_log contains log entries of agent_id 
login/logout per queue per phone. status_id can have value 1 (logged 
out) and 2 (login) at date_log datetime.


The resulting table must contain average number of agents logged in at 
every hour per startdate to enddate.


Hope for some hints. Thank you.



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Help optimize query.

2014-11-13 Thread Mimiko

Hello. I have this table:

 show create table cc_agents_tier_status_log:
CREATE TABLE cc_agents_tier_status_log (
  id int(10) unsigned NOT NULL AUTO_INCREMENT,
  date_log timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  cc_agent varchar(45) NOT NULL,
  cc_agent_tier_status_id tinyint(3) unsigned NOT NULL,
  cc_queue_id tinyint(3) unsigned NOT NULL,
  cc_agent_id int(10) unsigned NOT NULL,
  cc_agent_phone smallint(5) unsigned NOT NULL,
  cc_agent_domain varchar(45) NOT NULL DEFAULT 'pbx01.apa-canal.md',
  PRIMARY KEY (id),
  KEY IDX_cc_agents_tier_status_log_2 (cc_agent) USING HASH,
  KEY IDX_cc_agents_tier_status_log_3 (date_log),
  KEY FK_cc_agents_tier_status_log_2 (cc_agent_id),
  KEY FK_cc_agents_tier_status_log_3 (cc_queue_id),
  KEY FK_cc_agents_tier_status_log_1 (cc_agent_tier_status_id) 
USING BTREE,

  KEY IDX_cc_agents_tier_status_log_7 (id,date_log),
  CONSTRAINT FK_cc_agents_tier_status_log_1 FOREIGN KEY 
(cc_agent_tier_status_id) REFERENCES cc_agent_tier_status_chart 
(id) ON UPDATE CASCADE,
  CONSTRAINT FK_cc_agents_tier_status_log_2 FOREIGN KEY 
(cc_agent_id) REFERENCES apacanal.employee (id) ON UPDATE CASCADE,
  CONSTRAINT FK_cc_agents_tier_status_log_3 FOREIGN KEY 
(cc_queue_id) REFERENCES cc_queues (id) ON UPDATE CASCADE

) ENGINE=InnoDB AUTO_INCREMENT=23799 DEFAULT CHARSET=ascii

 show index from cc_agents_tier_status_log:
+------------+---------------------------------+--------------+-------------------------+-------------+------------+
| Non_unique | Key_name                        | Seq_in_index | Column_name             | Cardinality | Index_type |
+------------+---------------------------------+--------------+-------------------------+-------------+------------+
|          0 | PRIMARY                         |            1 | id                      |       23999 | BTREE      |
|          1 | IDX_cc_agents_tier_status_log_2 |            1 | cc_agent                |         260 | BTREE      |
|          1 | IDX_cc_agents_tier_status_log_3 |            1 | date_log                |       23999 | BTREE      |
|          1 | FK_cc_agents_tier_status_log_2  |            1 | cc_agent_id             |           2 | BTREE      |
|          1 | FK_cc_agents_tier_status_log_3  |            1 | cc_queue_id             |          14 | BTREE      |
|          1 | FK_cc_agents_tier_status_log_1  |            1 | cc_agent_tier_status_id |           2 | BTREE      |
|          1 | IDX_cc_agents_tier_status_log_7 |            1 | id                      |       23999 | BTREE      |
|          1 | IDX_cc_agents_tier_status_log_7 |            2 | date_log                |       23999 | BTREE      |
+------------+---------------------------------+--------------+-------------------------+-------------+------------+


And the query is:
set @enddate:=now();
set @startdate:='2014-11-01';
set @queue_id:=-1;
select s.theHour as theHour,avg(s.nrAgents) as nrAgents from
(select date(a.theDateHour) as theDate,extract(hour from a.theDateHour) 
as theHour,count(c.cc_agent_tier_status_id) as nrAgents

from (

select dh.theDateHour as theDateHour, max(c.date_log) as maxdatelog,c.*
FROM
( select concat(d.thedate,' ',h.theHour,':0:0') as theDateHour
from
( select DATE(DATE_ADD(date(@startdate), INTERVAL @i:=@i+1 DAY) ) as 
theDate from (select @i:=-1) as t1
inner join cc_member_queue_end_log b on 1=1 and 
b.id<=datediff(@enddate,@startdate)+1 ) as d

left outer join
(SELECT 0 AS theHour UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL 
SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION 
ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9 UNION ALL SELECT 10 
UNION ALL SELECT 11 UNION ALL SELECT 12 UNION ALL SELECT 13 UNION ALL 
SELECT 14 UNION ALL SELECT 15 UNION ALL SELECT 16 UNION ALL SELECT 17 
UNION ALL SELECT 18 UNION ALL SELECT 19 UNION ALL SELECT 20 UNION ALL 
SELECT 21 UNION ALL SELECT 22 UNION ALL SELECT 23) as h

on 1=1 ) AS dh
left outer join
cc_agents_tier_status_log as c
on c.date_log<=dh.theDateHour where (if(@queue_id<0,1,0) or 
if(@queue_id=c.cc_queue_id,1,0))

group by dh.theDateHour,c.cc_queue_id,c.cc_agent_id,c.cc_agent_phone


) as a
left outer join cc_agents_tier_status_log as c
on c.date_log=a.maxdatelog and c.cc_queue_id=a.cc_queue_id and 
c.cc_agent_id=a.cc_agent_id and c.cc_agent_phone=a.cc_agent_phone and 
c.cc_agent_tier_status_id=2

group by a.theDateHour
order by date(a.theDateHour),extract(hour from a.theDateHour))
as s
group by s.theHour
order by s.theHour;


This query takes 20 seconds to populate.

Table cc_agents_tier_status_log contains log entries of agent_id 
login/logout per queue per phone. status_id can have value 1 (logged 
out) and 2 (login) at date_log datetime.


The resulting table must contain average number of agents logged in at 
every hour per startdate to enddate.


Hope for some hints. Thank you.
--
Mimiko desu.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Stored Procedure help

2014-07-14 Thread Keith Murphy
I would second what m. dykman says. There is no reason I can think of that
you would even be doing the order by clause.

keith


On Sun, Jul 13, 2014 at 11:16 PM, yoku ts. yoku0...@gmail.com wrote:

 Would you try this?

 CREATE PROCEDURE `reset_sortid` (IN category INT(11))
 BEGIN
 SET @a = 0;
 UPDATE
 documents SET sort_id = (@a := @a + 1)
 WHERE
 document_category = category
 ORDER BY
 sort_id;
 END
 //


 2014-07-14 11:42 GMT+09:00 Don Wieland d...@pointmade.net:

  I am trying to create this stored procedure, but can't understand why my
  editor is choking on it. Little help please:
 
  DELIMITER //
  CREATE PROCEDURE `reset_sortid` (IN category INT(11))
  BEGIN
  DECLARE a INT;
  SET a = 0;
  UPDATE
  documents SET sort_id = (a := a + 1)
  WHERE
  document_category = category
  ORDER BY
  sort_id;
  END
  //
 
 
  Don Wieland
  d...@pointmade.net
  http://www.pointmade.net
  https://www.facebook.com/pointmade.band
 
 
 
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 




-- 



(c) 850-449-1912
(f)  423-930-8646


Re: Stored Procedure help

2014-07-14 Thread Anders Karlsson
The order makes quite a big difference, actually. In this case it 
ensures that the ordering of the values in the sort_id column is 
maintained, even though the numbers are different.

Say this is your data (I have ignored the category thingy for now):
SELECT id, sort_id FROM documents;
+--+-+
| id   | sort_id |
+--+-+
|1 |  12 |
|2 |  13 |
|3 |  11 |
+--+-+
Now if I run this the update without the order by:

UPDATE documents SET sort_id = (@a := @a + 1) WHERE
document_category = category;

The result will be:
SELECT id, sort_id FROM documents;
+--+-+
| id   | sort_id |
+--+-+
|1 |  1  |
|2 |  2  |
|3 |  3  |
+--+-+
Whereas with the order by

UPDATE documents SET sort_id = (@a := @a + 1) WHERE
document_category = category ORDER BY sort_id;

the result would be:
+--+-+
| id   | sort_id |
+--+-+
|1 |  2  |
|2 |  3  |
|3 |  1  |
+--+-+

/Karlsson
Keith Murphy skrev 2014-07-14 15:31:

I would second what m. dykman says. There is no reason I can think of that
you would even be doing the order by clause.

keith


On Sun, Jul 13, 2014 at 11:16 PM, yoku ts. yoku0...@gmail.com wrote:


Would you try this?

CREATE PROCEDURE `reset_sortid` (IN category INT(11))
BEGIN
 SET @a = 0;
 UPDATE
 documents SET sort_id = (@a := @a + 1)
 WHERE
 document_category = category
 ORDER BY
 sort_id;
END
//


2014-07-14 11:42 GMT+09:00 Don Wieland d...@pointmade.net:


I am trying to create this stored procedure, but can't understand why my
editor is choking on it. Little help please:

DELIMITER //
CREATE PROCEDURE `reset_sortid` (IN category INT(11))
BEGIN
 DECLARE a INT;
 SET a = 0;
 UPDATE
 documents SET sort_id = (a := a + 1)
 WHERE
 document_category = category
 ORDER BY
 sort_id;
END
//


Don Wieland
d...@pointmade.net
http://www.pointmade.net
https://www.facebook.com/pointmade.band





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql








--

Anders Karlsson, Senior Sales Engineer
SkySQL | t: +46 708-608-121 | Skype: drdatabase


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Stored Procedure help

2014-07-14 Thread Mogens Melander
Anders,

I didn't see that at first, but now I'd agree. Maybe I should read
up on stored procedures.

On Mon, July 14, 2014 16:25, Anders Karlsson wrote:
 The order makes quite a big difference, actually. In this case it
 ensures that the ordering of the values in the sort_id column is
 maintained, even though the numbers are different.
 Say this is your data (I have ignored the category thingy for now):
 SELECT id, sort_id FROM documents;
 +--+-+
 | id   | sort_id |
 +--+-+
 |1 |  12 |
 |2 |  13 |
 |3 |  11 |
 +--+-+
 Now if I run this the update without the order by:

 UPDATE documents SET sort_id = (@a := @a + 1) WHERE
 document_category = category;

 The result will be:
 SELECT id, sort_id FROM documents;
 +--+-+
 | id   | sort_id |
 +--+-+
 |1 |  1  |
 |2 |  2  |
 |3 |  3  |
 +--+-+
 Whereas with the order by

 UPDATE documents SET sort_id = (@a := @a + 1) WHERE
 document_category = category ORDER BY sort_id;

 the result would be:
 +--+-+
 | id   | sort_id |
 +--+-+
 |1 |  2  |
 |2 |  3  |
 |3 |  1  |
 +--+-+

 /Karlsson
 Keith Murphy skrev 2014-07-14 15:31:
 I would second what m. dykman says. There is no reason I can think of
 that
 you would even be doing the order by clause.

 keith


 On Sun, Jul 13, 2014 at 11:16 PM, yoku ts. yoku0...@gmail.com wrote:

 Would you try this?

 CREATE PROCEDURE `reset_sortid` (IN category INT(11))
 BEGIN
  SET @a = 0;
  UPDATE
  documents SET sort_id = (@a := @a + 1)
  WHERE
  document_category = category
  ORDER BY
  sort_id;
 END
 //


 2014-07-14 11:42 GMT+09:00 Don Wieland d...@pointmade.net:

 I am trying to create this stored procedure, but can't understand why
 my
 editor is choking on it. Little help please:

 DELIMITER //
 CREATE PROCEDURE `reset_sortid` (IN category INT(11))
 BEGIN
  DECLARE a INT;
  SET a = 0;
  UPDATE
  documents SET sort_id = (a := a + 1)
  WHERE
  document_category = category
  ORDER BY
  sort_id;
 END
 //


 Don Wieland
 d...@pointmade.net
 http://www.pointmade.net
 https://www.facebook.com/pointmade.band



 --

 Anders Karlsson, Senior Sales Engineer
 SkySQL | t: +46 708-608-121 | Skype: drdatabase



-- 
Mogens Melander
+66 8701 33224




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Stored Procedure help

2014-07-13 Thread Don Wieland
I am trying to create this stored procedure, but can't understand why my editor 
is choking on it. Little help please:

DELIMITER //
CREATE PROCEDURE `reset_sortid` (IN category INT(11))
BEGIN
DECLARE a INT;
SET a = 0;
UPDATE
documents SET sort_id = (a := a + 1)
WHERE
document_category = category
ORDER BY
sort_id; 
END 
//


Don Wieland
d...@pointmade.net
http://www.pointmade.net
https://www.facebook.com/pointmade.band





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Stored Procedure help

2014-07-13 Thread kitlenv
maybe try 'order by sort_id desc'?


On Mon, Jul 14, 2014 at 12:42 PM, Don Wieland d...@pointmade.net wrote:

 I am trying to create this stored procedure, but can't understand why my
 editor is choking on it. Little help please:

 DELIMITER //
 CREATE PROCEDURE `reset_sortid` (IN category INT(11))
 BEGIN
 DECLARE a INT;
 SET a = 0;
 UPDATE
 documents SET sort_id = (a := a + 1)
 WHERE
 document_category = category
 ORDER BY
 sort_id;
 END
 //


 Don Wieland
 d...@pointmade.net
 http://www.pointmade.net
 https://www.facebook.com/pointmade.band





 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: Stored Procedure help

2014-07-13 Thread Michael Dykman
why do you need the 'order by' in your update at all?  The statement, if
innodb, will certainly be atomic; the order in which they are updated means
nothing.
 On Jul 13, 2014 11:46 PM, kitlenv kitl...@gmail.com wrote:

 maybe try 'order by sort_id desc'?


 On Mon, Jul 14, 2014 at 12:42 PM, Don Wieland d...@pointmade.net wrote:

  I am trying to create this stored procedure, but can't understand why my
  editor is choking on it. Little help please:
 
  DELIMITER //
  CREATE PROCEDURE `reset_sortid` (IN category INT(11))
  BEGIN
  DECLARE a INT;
  SET a = 0;
  UPDATE
  documents SET sort_id = (a := a + 1)
  WHERE
  document_category = category
  ORDER BY
  sort_id;
  END
  //
 
 
  Don Wieland
  d...@pointmade.net
  http://www.pointmade.net
  https://www.facebook.com/pointmade.band
 
 
 
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 



Re: Stored Procedure help

2014-07-13 Thread yoku ts.
Would you try this?

CREATE PROCEDURE `reset_sortid` (IN category INT(11))
BEGIN
SET @a = 0;
UPDATE
documents SET sort_id = (@a := @a + 1)
WHERE
document_category = category
ORDER BY
sort_id;
END
//
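
The likely reason the original choked: the := operator only assigns to user
variables such as @a, not to a DECLAREd local variable, so switching to @a as
above sidesteps that. A usage sketch, with a hypothetical category id:

CALL reset_sortid(7);
SELECT id, sort_id FROM documents WHERE document_category = 7 ORDER BY sort_id;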


2014-07-14 11:42 GMT+09:00 Don Wieland d...@pointmade.net:

 I am trying to create this stored procedure, but can't understand why my
 editor is choking on it. Little help please:

 DELIMITER //
 CREATE PROCEDURE `reset_sortid` (IN category INT(11))
 BEGIN
 DECLARE a INT;
 SET a = 0;
 UPDATE
 documents SET sort_id = (a := a + 1)
 WHERE
 document_category = category
 ORDER BY
 sort_id;
 END
 //


 Don Wieland
 d...@pointmade.net
 http://www.pointmade.net
 https://www.facebook.com/pointmade.band





 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: Help with cleaning up data

2014-03-31 Thread Bob Eby
delete b from icd9x10 a
join icd9x10 b on a.icd9 = b.icd9 and a.id < b.id

...
 CREATE TABLE `ICD9X10` (
 ...
 id   icd9  icd10
 25   29182 F10182
 26   29182 F10282
 ...

Good luck,
Bob
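
A join-delete like this can be previewed by writing it as a SELECT first; a
sketch (matching on both code columns, which treats only identical
(icd9, icd10) pairs as duplicates, a stricter rule than matching on icd9
alone):

SELECT b.id, b.icd9, b.icd10
FROM ICD9X10 a
JOIN ICD9X10 b ON a.icd9 = b.icd9 AND a.icd10 = b.icd10 AND a.id < b.id;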


Re: Help with cleaning up data

2014-03-30 Thread william drescher

On 3/29/2014 2:26 PM, william drescher wrote:

I am given a table: ICD9X10 which is a mapping of ICD9 codes to
ICD10 codes.  Unfortunately the table contains duplicate entries
that I need to remove.

CREATE TABLE `ICD9X10` (
  `id` smallint(6) NOT NULL AUTO_INCREMENT,
  `icd9` char(8) NOT NULL,
  `icd10` char(6) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `icd9` (`icd9`,`id`),
  UNIQUE KEY `icd10` (`icd10`,`id`)
) ENGINE=InnoDB AUTO_INCREMENT=671 DEFAULT CHARSET=ascii

id   icd9  icd10
25   29182 F10182
26   29182 F10282
27   29182 F10982

I just can't think of a way to write a query to delete the
duplicates. Does anyone have a suggestion?

bill





Thanks for all the suggestions.  I learned a lot, which is the 
most important part of the exercise.


bill


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Help with cleaning up data

2014-03-29 Thread william drescher
I am given a table: ICD9X10 which is a mapping of ICD9 codes to 
ICD10 codes.  Unfortunately the table contains duplicate entries 
that I need to remove.


CREATE TABLE `ICD9X10` (
 `id` smallint(6) NOT NULL AUTO_INCREMENT,
 `icd9` char(8) NOT NULL,
 `icd10` char(6) NOT NULL,
 PRIMARY KEY (`id`),
 UNIQUE KEY `icd9` (`icd9`,`id`),
 UNIQUE KEY `icd10` (`icd10`,`id`)
) ENGINE=InnoDB AUTO_INCREMENT=671 DEFAULT CHARSET=ascii

id   icd9  icd10
25   29182 F10182
26   29182 F10282
27   29182 F10982

I just can't think of a way to write a query to delete the 
duplicates. Does anyone have a suggestion?


bill


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Help with cleaning up data

2014-03-29 Thread Fran Garcia
Hi Bill,

How big is your table? It seems to me that you might want to change your
unique keys to something like (icd9, icd10), thus guaranteeing that every
mapping will exist only once in your table. You could create a new table
with that constraint and copy all your data to it:

CREATE TABLE `ICD9X10_2` (
 `id` smallint(6) NOT NULL AUTO_INCREMENT,
 `icd9` char(8) NOT NULL,
 `icd10` char(6) NOT NULL,
 PRIMARY KEY (`id`),
 UNIQUE KEY `icd9_icd10` (`icd9`,`icd10`)
) ENGINE=InnoDB DEFAULT CHARSET=ascii

INSERT IGNORE INTO ICD9X10_2 SELECT * FROM ICD9X10; -- This will skip the
duplicates

-- Once you've checked the new table and it looks fine to you, you can swap
them:
RENAME TABLE ICD9X10 TO ICD9X10_old, ICD9X10_2 TO ICD9X10;


Or, alternatively, you can also directly alter your table by adding that
unique index like this:
ALTER IGNORE TABLE ICD9X10 ADD UNIQUE KEY (ICD9, ICD10);

Hope that helps



2014-03-29 18:26 GMT+00:00 william drescher will...@techservsys.com:

 I am given a table: ICD9X10 which is a mapping of ICD9 codes to ICD10
 codes.  Unfortunately the table contains duplicate entries that I need to
 remove.

 CREATE TABLE `ICD9X10` (
  `id` smallint(6) NOT NULL AUTO_INCREMENT,
  `icd9` char(8) NOT NULL,
  `icd10` char(6) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `icd9` (`icd9`,`id`),
  UNIQUE KEY `icd10` (`icd10`,`id`)
 ) ENGINE=InnoDB AUTO_INCREMENT=671 DEFAULT CHARSET=ascii

 id   icd9  icd10
 25   29182 F10182
 26   29182 F10282
 27   29182 F10982

 I just can't think of a way to write a query to delete the duplicates.
 Does anyone have a suggestion?

 bill


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: Help with cleaning up data

2014-03-29 Thread Carsten Pedersen

On 29-03-2014 19:26, william drescher wrote:

I am given a table: ICD9X10 which is a mapping of ICD9 codes to ICD10
codes.  Unfortunately the table contains duplicate entries that I need
to remove.

...

I just can't think of a way to write a query to delete the duplicates.
Does anyone have a suggestion?


http://bit.ly/1hKCVHi


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Help with cleaning up data

2014-03-29 Thread David Lerer
Bill, here is one approach:

The following query will return the ids that should NOT be deleted:
  Select MIN(id) from icd9x10 group by icd9, icd10

Once you run it and are happy with the results, you can use it as a subquery in a DELETE
statement. Something like:
   Delete from icd9x10 A where A.id not in (Select MIN(B.id) from icd9x10 B
group by B.icd9, B.icd10).

I have not tested it (sorry, it is a weekend here...), but I hope it will lead
you in the right direction.

David.
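
One caveat with the subquery form: MySQL rejects a DELETE whose subquery reads
from the same table it deletes from (error 1093), so the inner query usually
has to be wrapped in a derived table. A sketch of the same idea with that
workaround (column names taken from Bill's CREATE TABLE, likewise untested):

DELETE FROM icd9x10
WHERE id NOT IN (
  SELECT keep_id FROM (
    SELECT MIN(id) AS keep_id
    FROM icd9x10
    GROUP BY icd9, icd10
  ) AS keepers
);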


David Lerer | Director, Database Administration | Interactive | 605 Third 
Avenue, 12th Floor, New York, NY 10158
Direct: (646) 487-6522 | Fax: (646) 487-1569 | dle...@univision.net | 
www.univision.net

-Original Message-
From: william drescher [mailto:will...@techservsys.com]
Sent: Saturday, March 29, 2014 2:26 PM
To: mysql@lists.mysql.com
Subject: Help with cleaning up data

I am given a table: ICD9X10 which is a mapping of ICD9 codes to
ICD10 codes.  Unfortunately the table contains duplicate entries
that I need to remove.

CREATE TABLE `ICD9X10` (
  `id` smallint(6) NOT NULL AUTO_INCREMENT,
  `icd9` char(8) NOT NULL,
  `icd10` char(6) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `icd9` (`icd9`,`id`),
  UNIQUE KEY `icd10` (`icd10`,`id`)
) ENGINE=InnoDB AUTO_INCREMENT=671 DEFAULT CHARSET=ascii

id   icd9  icd10
25   29182 F10182
26   29182 F10282
27   29182 F10982

I just can't think of a way to write a query to delete the
duplicates. Does anyone have a suggestion?

bill


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Pivot Query Help

2013-11-04 Thread Jan Steinman
I'm using MySQL 5.0.92-log.

I'm trying to do a pivot-sort-of-thing. I've tried a few things from the 
O'Reilly SQL Cookbook, but I seem to be having a mental block.

I have a table of farm harvests. Each harvest has a date, quantity, and foreign 
keys into product and harvester tables:

CREATE TABLE s_product_harvest (
 id int(10) unsigned NOT NULL auto_increment,
 `date` datetime NOT NULL COMMENT 'Date and time of harvest.',
 product int(11) unsigned NOT NULL default '53',
 quantity decimal(10,3) NOT NULL default '1.000',
 units 
enum('kilograms','grams','pounds','ounces','liters','each','cords','bales') 
character set utf8 NOT NULL default 'kilograms',
 who1 int(5) unsigned NOT NULL default '2' COMMENT 'Who harvested this 
resource?',
 notes varchar(255) character set utf8 NOT NULL,
 PRIMARY KEY  (id),
 KEY product (product),
 KEY `date` (`date`),
 KEY who1 (who1)
) ENGINE=InnoDB  DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='historical list 
of EcoReality farm products harvested';


What I want is a report with years as columns, and rows of:
first harvest (MIN(date)),
last harvest (MAX(date)),
days of harvest (DATEDIFF(MAX(date), MIN(date))) and
total (SUM(quantity)).

 first/last  2007    2008    2009    ...
 first       Aug 5   Sep 27  Aug 7
 last        Oct 1   Nov 24  Oct 16
 days        57      108     82
 kg          10.17   16.74   6.53

This is my first attempt, and it appears to be giving me a row per year, with 
the first sequential harvest date for each year. I can get the data I want by 
making each one a separate column, but that's ugly and I want them in rows.

SELECT
 'first_last' AS `First/Last`,
 CASE WHEN YEAR(harvest.date)='2007' THEN DATE_FORMAT(harvest.date, '%b %e') 
ELSE 0 END AS '2007',
 CASE WHEN YEAR(harvest.date)='2008' THEN DATE_FORMAT(harvest.date, '%b %e') 
ELSE 0 END AS '2008',
 CASE WHEN YEAR(harvest.date)='2009' THEN DATE_FORMAT(harvest.date, '%b %e') 
ELSE 0 END AS '2009',
 CASE WHEN YEAR(harvest.date)='2010' THEN DATE_FORMAT(harvest.date, '%b %e') 
ELSE 0 END AS '2010',
 CASE WHEN YEAR(harvest.date)='2011' THEN DATE_FORMAT(harvest.date, '%b %e') 
ELSE 0 END AS '2011',
 CASE WHEN YEAR(harvest.date)='2012' THEN DATE_FORMAT(harvest.date, '%b %e') 
ELSE 0 END AS '2012',
 CASE WHEN YEAR(harvest.date)='2013' THEN DATE_FORMAT(harvest.date, '%b %e') 
ELSE 0 END AS '2013',
 CASE WHEN YEAR(harvest.date)='2014' THEN DATE_FORMAT(harvest.date, '%b %e') 
ELSE 0 END AS '2014'
FROM
 s_product_harvest harvest
WHERE harvest.product = 4 /* product ID for tomatoes */
GROUP BY YEAR(harvest.date)

Using an example from SQL Cookbook on page 372, I tried to select from a 
subquery, grouped by a rank, but I kept getting one result row, and I can't 
figure out how to get the literal row headers.

Any ideas?

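One way to get the metrics as rows rather than columns is to flip the shape:
compute each metric in its own SELECT, using conditional aggregation for the
year columns, and stack the rows with UNION ALL. A sketch along those lines
(only three year columns shown, and untested against the real table; extend
the CASE lists for the remaining years):

SELECT 'first' AS `first/last`,
 DATE_FORMAT(MIN(CASE WHEN YEAR(`date`)=2007 THEN `date` END), '%b %e') AS `2007`,
 DATE_FORMAT(MIN(CASE WHEN YEAR(`date`)=2008 THEN `date` END), '%b %e') AS `2008`,
 DATE_FORMAT(MIN(CASE WHEN YEAR(`date`)=2009 THEN `date` END), '%b %e') AS `2009`
FROM s_product_harvest WHERE product = 4
UNION ALL
SELECT 'last',
 DATE_FORMAT(MAX(CASE WHEN YEAR(`date`)=2007 THEN `date` END), '%b %e'),
 DATE_FORMAT(MAX(CASE WHEN YEAR(`date`)=2008 THEN `date` END), '%b %e'),
 DATE_FORMAT(MAX(CASE WHEN YEAR(`date`)=2009 THEN `date` END), '%b %e')
FROM s_product_harvest WHERE product = 4
UNION ALL
SELECT 'days',
 DATEDIFF(MAX(CASE WHEN YEAR(`date`)=2007 THEN `date` END),
          MIN(CASE WHEN YEAR(`date`)=2007 THEN `date` END)),
 DATEDIFF(MAX(CASE WHEN YEAR(`date`)=2008 THEN `date` END),
          MIN(CASE WHEN YEAR(`date`)=2008 THEN `date` END)),
 DATEDIFF(MAX(CASE WHEN YEAR(`date`)=2009 THEN `date` END),
          MIN(CASE WHEN YEAR(`date`)=2009 THEN `date` END))
FROM s_product_harvest WHERE product = 4
UNION ALL
SELECT 'kg',
 SUM(CASE WHEN YEAR(`date`)=2007 THEN quantity ELSE 0 END),
 SUM(CASE WHEN YEAR(`date`)=2008 THEN quantity ELSE 0 END),
 SUM(CASE WHEN YEAR(`date`)=2009 THEN quantity ELSE 0 END)
FROM s_product_harvest WHERE product = 4;
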
 Compared to those on pasteurized milk, children who received raw certified 
milk had better weight gain and greater protection against rachitis. -- Ron 
Schmid
 Jan Steinman, EcoReality Co-op 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Date comparison help

2013-10-22 Thread Michael Stroh
I recently upgraded a local MySQL installation to 5.5.32 and am trying to 
figure out why the following query won't work as expected anymore. I'm just 
trying to compare a set of dates to NOW() but since the upgrade, these don't 
seem to work as expected.

SELECT 
DATE_ADD(STR_TO_DATE('2013-350-00:00:00','%Y-%j-%H:%i:%S'),INTERVAL 2 DAY), 
NOW(), 
DATE_ADD(STR_TO_DATE('2013-350-00:00:00','%Y-%j-%H:%i:%S'),INTERVAL 2 DAY) < NOW()

For instance, when I run it on my system, I get 1 for the third column even 
though comparing the two by eye it should be false.

Cheers,
Michael




Re: Date comparison help

2013-10-22 Thread kitlenv
Hi Michael,

FYI: I'm using 5.6.13 and your query returns 0 for the third column with my
instance.

Cheers,
Sam


On Wed, Oct 23, 2013 at 2:35 AM, Michael Stroh st...@astroh.org wrote:

 I recently upgraded a local MySQL installation to 5.5.32 and am trying to
 figure out why the following query won't work as expected anymore. I'm just
 trying to compare a set of dates to NOW() but since the upgrade, these
 don't seem to work as expected.

 SELECT
 DATE_ADD(STR_TO_DATE('2013-350-00:00:00','%Y-%j-%H:%i:%S'),INTERVAL 2 DAY),
 NOW(),
 DATE_ADD(STR_TO_DATE('2013-350-00:00:00','%Y-%j-%H:%i:%S'),INTERVAL 2
 DAY) < NOW()

 For instance, when I run it on my system, I get 1 for the third column
 even though comparing the two by eye it should be false.

 Cheers,
 Michael





Re: Date comparison help

2013-10-22 Thread Michael Stroh
Thanks Sam.

It turns out that if I put the DATE_ADD call within DATE(), it works as expected.
That is sufficient for my goals, but it would be nice to understand this issue
in case there are other cases that I need to watch out for.

Cheers,
Michael

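For reference, the fix he describes reduces the DATE_ADD result to a plain
date before the comparison; a minimal sketch using the literal from the
original query, which he reports then behaves as expected:

SELECT
DATE(DATE_ADD(STR_TO_DATE('2013-350-00:00:00','%Y-%j-%H:%i:%S'),INTERVAL 2 DAY)) < NOW();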


On Oct 22, 2013, at 6:18 PM, kitlenv kitl...@gmail.com wrote:

 Hi Michael,
 
 FYI: I'm using 5.6.13 and your query returns 0 for the third column with my 
 instance. 
 
 Cheers,
 Sam
 
 
 On Wed, Oct 23, 2013 at 2:35 AM, Michael Stroh st...@astroh.org wrote:
 I recently upgraded a local MySQL installation to 5.5.32 and am trying to 
 figure out why the following query won't work as expected anymore. I'm just 
 trying to compare a set of dates to NOW() but since the upgrade, these don't 
 seem to work as expected.
 
 SELECT
 DATE_ADD(STR_TO_DATE('2013-350-00:00:00','%Y-%j-%H:%i:%S'),INTERVAL 2 DAY),
 NOW(),
 DATE_ADD(STR_TO_DATE('2013-350-00:00:00','%Y-%j-%H:%i:%S'),INTERVAL 2 
 DAY) < NOW()
 
 For instance, when I run it on my system, I get 1 for the third column even 
 though comparing the two by eye it should be false.
 
 Cheers,
 Michael
 
 
 



Re: Date comparison help

2013-10-22 Thread hsv
 2013/10/22 12:20 -0400,  
I recently upgraded a local MySQL installation to 5.5.32 and am trying to 
figure out why the following query won't work as expected anymore. I'm just 
trying to compare a set of dates to NOW() but since the upgrade, these don't 
seem to work as expected.

SELECT 
DATE_ADD(STR_TO_DATE('2013-350-00:00:00','%Y-%j-%H:%i:%S'),INTERVAL 2 DAY), 
NOW(), 
DATE_ADD(STR_TO_DATE('2013-350-00:00:00','%Y-%j-%H:%i:%S'),INTERVAL 2 DAY) < NOW()

For instance, when I run it on my system, I get 1 for the third column even 
though comparing the two by eye it should be false.

Well, show us all three columns.

And with 5.5.8 I get the same third column as you. Has it worked?

And I found that, changed to

SELECT 
DATE_ADD(STR_TO_DATE('2013-350-00:00:00','%Y-%j-%H:%i:%S'),INTERVAL 2 DAY) AS 
A, 
NOW(), 
CAST(DATE_ADD(STR_TO_DATE('2013-350-00:00:00','%Y-%j-%H:%i:%S'),INTERVAL 2 DAY) 
AS DATETIME) < NOW() AS B

it works as hoped for--and it seems a bug to me, but probably an old one. It 
seems to me that the outcome of DATE_ADD is DATE, not DATETIME, and the 
comparison is numeric, with the six trailing 0s dropped. Quote about 
STR_TO_DATE:
It takes a string str and a format string format. STR_TO_DATE() returns a
DATETIME value if the format string contains both date and time parts, or a
DATE or TIME value if the string contains only date or time parts.
How does it really decide which type to return? It is wrong if the decision is
based on whether the hour, minute, and second are all 0 or not.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Lost connection to MySQL server - need help.

2013-10-12 Thread Jørn Dahl-Stamnes
Hello,

I got a strange problem related to a production server. It has been working OK 
for months, but yesterday it started to fail. There are several batch scripts 
using the database in addition to a web application using it.

The php scripts running in batch mode began to get:

mysql_connect(): Lost connection to MySQL server at 'reading initial 
communication packet', system error: 111

I stopped the server and restarted it and everything seems to work OK for 
hours but when the load starts to increase, the errors begin to appear again.

Today I noticed that after I started phpMyAdmin and selected one of the 
databases, phpMyAdmin was hanging and the batch scripts began to fail again. 
Seems like the server does not handle much load anymore.


What's strange is the memory usage. The server is a quad core cpu with 48 Gb 
memory, where 28 Gb is allocated to innodb (we mostly use innodb). But when 
using top command, I noticed this:

VIRT: 33.9g
RES: 9.4g
SWAP: 23g

at this time over 11G memory is free. vm.swappiness is set to 0. I find it 
strange that the server is not able to use physical memory but uses swap 
instead. The amount of cpu time used for swapping is rather high during sql 
queries. The amount of RESident memory does increase over time, but very 
slowly (it can take hours before it increases to 20+ Gb).

[PS: I also got a MySQL server running at a dedicated host at home, where 
it seems to use the memory as I expect it to:

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  SWAP DATA COMMAND
 1462 mysql 20   0 30.0g  27g 3900 S  0.3 87.3   2633:14 844m  29g mysqld
]


I would like to have some suggestions on what I can do to solve this problem.
I have googled it but found nothing that seems to solve my case.

Server:
  OS: Debian 6
  MySQL: 5.1.61-0+squeeze1

my.cnf:
#
# The MySQL database server configuration file.
#

[client]
port= 3306
socket  = /var/run/mysqld/mysqld.sock

# Here is entries for some specific programs
# The following values assume you have at least 32M ram

# This was formerly known as [safe_mysqld].
[mysqld_safe]
socket  = /var/run/mysqld/mysqld.sock
nice= 0

[mysqld]
#
# * Basic Settings
#
user= mysql
pid-file= /var/run/mysqld/mysqld.pid
socket  = /var/run/mysqld/mysqld.sock
port= 3306
basedir = /usr
datadir = /database/mysql
tmpdir  = /tmp
language= /usr/share/mysql/english
skip-external-locking
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address= 127.0.0.1
## All applications use 127.0.0.1 when connecting to the db.

#
# * Fine Tuning
#
#key_buffer = 16M
max_allowed_packet  = 64M
thread_stack= 192K
#thread_cache_size   = 8

# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover = BACKUP
#
# * Query Cache Configuration
#
query_cache_limit   = 1M

#
# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
#   other settings you may need to change.
#server-id  = 1
#log_bin= /var/log/mysql/mysql-bin.log
expire_logs_days= 10
max_binlog_size = 100M
#binlog_do_db   = include_database_name
#binlog_ignore_db   = include_database_name
#
# * InnoDB
#
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!

thread_cache_size = 192
table_cache = 768
## key_buffer = 64M
## sort_buffer_size = 256K
## read_buffer_size = 256K
## read_rnd_buffer_size = 256K
tmp_table_size=32M
max_heap_table_size=32M
query_cache_size=128M
query_cache_type=2

innodb_open_files=1000
innodb_buffer_pool_size = 28G
innodb_additional_mem_pool_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_support_xa = 0
innodb_lock_wait_timeout = 50
## innodb_flush_method=O_DIRECT
innodb_log_files_in_group = 2
## innodb_log_file_size = 128M
innodb_log_buffer_size = 8M
innodb_thread_concurrency = 14
innodb_file_per_table

max_connections = 100
binlog_cache_size   = 1M
sort_buffer_size= 16M
join_buffer_size= 16M
ft_min_word_len = 1
ft_max_word_len = 84
ft_stopword_file= ''
default_table_type  = InnoDB
key_buffer  = 2G
read_buffer_size= 2M
read_rnd_buffer_size= 16M
bulk_insert_buffer_size = 64M
myisam_sort_buffer_size = 128M
myisam_max_sort_file_size   = 10G
myisam_max_extra_sort_file_size = 10G
myisam_repair_threads   = 1
myisam_recover

[mysqldump]
quick
quote-names
max_allowed_packet = 16M

[mysql]
#no-auto-rehash # faster start of mysql but no tab 
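
For scale, a rough worst-case estimate from this my.cnf, assuming all 100
allowed connections allocate their per-session buffers at once: 2G key_buffer
+ 100 x (16M sort + 16M join + 2M read + 16M read_rnd) ~ 5G of session buffers,
on top of the 28G InnoDB buffer pool and 128M query cache - roughly 35G of the
48G before the OS and page cache are counted.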

Re: Lost connection to MySQL server - need help.

2013-10-12 Thread nixofortune

You might want to comment

bind-address= 127.0.0.1

in your my.cnf and restart mysql server.



On 12/10/13 10:49, Jørn Dahl-Stamnes wrote:

Hello,

I got a strange problem related to a production server. It has been working OK
for months, but yesterday it started to fail. There are several batch scripts
using the database in addition to a web application using it.

The php scripts running in batch mode began to get:

mysql_connect(): Lost connection to MySQL server at 'reading initial
communication packet', system error: 111

I stopped the server and restarted it and everything seems to work OK for
hours but when the load starts to increase, the errors begin to appear again.

Today I noticed that after I started phpMyAdmin and selected one of the
databases, phpMyAdmin was hanging and the batch scripts began to fail again.
Seems like the server does not handle much load anymore.


What's strange is the memory usage. The server is a quad core cpu with 48 Gb
memory, where 28 Gb is allocated to innodb (we mostly use innodb). But when
using top command, I noticed this:

VIRT: 33.9g
RES: 9.4g
SWAP: 23g

at this time over 11G memory is free. vm.swappiness is set to 0. I find it
strange that the server is not able to use physical memory but use swap
instead. The amount of cpu time used for swapping is rather high during sql
queries. The amount of RESident memory may increase slowly over time but very
slowly (it can take hours before it increase to 20+ Gb).

[PS: I also got a MySQL server running at a dedicated host at home, where the
it seem to use the memory as I except it to use:

   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  SWAP DATA COMMAND
  1462 mysql 20   0 30.0g  27g 3900 S  0.3 87.3   2633:14 844m  29g mysqld
]


I would like to have some suggestions what I can do to solve this problem.
I have google'd it but found nothing that seem to solve my case.

Server:
   OS: Debian 6
   MySQL: 5.1.61-0+squeeze1

my.cnf:
#
# The MySQL database server configuration file.
#

[client]
port= 3306
socket  = /var/run/mysqld/mysqld.sock

# Here is entries for some specific programs
# The following values assume you have at least 32M ram

# This was formally known as [safe_mysqld].
[mysqld_safe]
socket  = /var/run/mysqld/mysqld.sock
nice= 0

[mysqld]
#
# * Basic Settings
#
user= mysql
pid-file= /var/run/mysqld/mysqld.pid
socket  = /var/run/mysqld/mysqld.sock
port= 3306
basedir = /usr
datadir = /database/mysql
tmpdir  = /tmp
language= /usr/share/mysql/english
skip-external-locking
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address= 127.0.0.1
## All applications use 127.0.0.1 when connectiong to the db.

#
# * Fine Tuning
#
#key_buffer = 16M
max_allowed_packet  = 64M
thread_stack= 192K
#thread_cache_size   = 8

# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover = BACKUP
#
# * Query Cache Configuration
#
query_cache_limit   = 1M

#
# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
#   other settings you may need to change.
#server-id  = 1
#log_bin= /var/log/mysql/mysql-bin.log
expire_logs_days= 10
max_binlog_size = 100M
#binlog_do_db   = include_database_name
#binlog_ignore_db   = include_database_name
#
# * InnoDB
#
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!

thread_cache_size = 192
table_cache = 768
## key_buffer = 64M
## sort_buffer_size = 256K
## read_buffer_size = 256K
## read_rnd_buffer_size = 256K
tmp_table_size=32M
max_heap_table_size=32M
query_cache_size=128M
query_cache_type=2

innodb_open_files=1000
innodb_buffer_pool_size = 28G
innodb_additional_mem_pool_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_support_xa = 0
innodb_lock_wait_timeout = 50
## innodb_flush_method=O_DIRECT
innodb_log_files_in_group = 2
## innodb_log_file_size = 128M
innodb_log_buffer_size = 8M
innodb_thread_concurrency = 14
innodb_file_per_table

max_connections = 100
binlog_cache_size   = 1M
sort_buffer_size= 16M
join_buffer_size= 16M
ft_min_word_len = 1
ft_max_word_len = 84
ft_stopword_file= ''
default_table_type  = InnoDB
key_buffer  = 2G
read_buffer_size= 2M
read_rnd_buffer_size= 16M
bulk_insert_buffer_size = 64M
myisam_sort_buffer_size = 128M
myisam_max_sort_file_size   = 10G
myisam_max_extra_sort_file_size = 10G
myisam_repair_threads   

Re: Lost connection to MySQL server - need help.

2013-10-12 Thread Jørn Dahl-Stamnes
On Saturday 12 October 2013 12:01, nixofortune wrote:
 You might want to comment

 bind-address= 127.0.0.1

 in your my.cnf and restart mysql server.

It does not explain why it works under low load and not under high load.

However, I seem to have found something. When I started phpMyAdmin and 
selected one of the databases, the server went away again and I found this 
in /var/log/syslog:

Oct 12 11:53:33 cebycny mysqld: 131012 11:53:33  InnoDB: Assertion failure in 
thread 140182892447488 in file ../../../storage/innobase/handler/ha_innodb.cc 
line
 8066
Oct 12 11:53:33 cebycny mysqld: InnoDB: Failing assertion: auto_inc > 0
Oct 12 11:53:33 cebycny mysqld: InnoDB: We intentionally generate a memory 
trap.
Oct 12 11:53:33 cebycny mysqld: InnoDB: Submit a detailed bug report to 
http://bugs.mysql.com.
Oct 12 11:53:33 cebycny mysqld: InnoDB: If you get repeated assertion failures 
or crashes, even
Oct 12 11:53:33 cebycny mysqld: InnoDB: immediately after the mysqld startup, 
there may be
Oct 12 11:53:33 cebycny mysqld: InnoDB: corruption in the InnoDB tablespace. 
Please refer to
Oct 12 11:53:33 cebycny mysqld: InnoDB: 
http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html
Oct 12 11:53:33 cebycny mysqld: InnoDB: about forcing recovery.
Oct 12 11:53:33 cebycny mysqld: 09:53:33 UTC - mysqld got signal 6 ;
Oct 12 11:53:33 cebycny mysqld: This could be because you hit a bug. It is 
also possible that this binary
Oct 12 11:53:33 cebycny mysqld: or one of the libraries it was linked against 
is corrupt, improperly built,
Oct 12 11:53:33 cebycny mysqld: or misconfigured. This error can also be 
caused by malfunctioning hardware.
Oct 12 11:53:33 cebycny mysqld: We will try our best to scrape up some info 
that will hopefully help
Oct 12 11:53:33 cebycny mysqld: diagnose the problem, but since we have 
already crashed,
Oct 12 11:53:33 cebycny mysqld: something is definitely wrong and this may 
fail.
Oct 12 11:53:33 cebycny mysqld:
Oct 12 11:53:33 cebycny mysqld: key_buffer_size=2147483648
Oct 12 11:53:33 cebycny mysqld: read_buffer_size=2097152
Oct 12 11:53:33 cebycny mysqld: max_used_connections=8
Oct 12 11:53:33 cebycny mysqld: max_threads=100
Oct 12 11:53:33 cebycny mysqld: thread_count=2
Oct 12 11:53:33 cebycny mysqld: connection_count=2
Oct 12 11:53:33 cebycny mysqld: It is possible that mysqld could use up to
Oct 12 11:53:33 cebycny mysqld: key_buffer_size + (read_buffer_size + 
sort_buffer_size)*max_threads = 3941387 K  bytes of memory
Oct 12 11:53:33 cebycny mysqld: Hope that's ok; if not, decrease some 
variables in the equation.
Oct 12 11:53:33 cebycny mysqld:
Oct 12 11:53:33 cebycny mysqld: Thread pointer: 0x7f7f1bf997c0
Oct 12 11:53:33 cebycny mysqld: Attempting backtrace. You can use the 
following information to find out
Oct 12 11:53:33 cebycny mysqld: where mysqld died. If you see no messages 
after this, something went
Oct 12 11:53:33 cebycny mysqld: terribly wrong...
Oct 12 11:53:33 cebycny mysqld: stack_bottom = 7f7edf81fe88 thread_stack 
0x3
Oct 12 11:53:33 cebycny mysqld: /usr/sbin/mysqld(my_print_stacktrace+0x29) 
[0x7f7edff62b59]
Oct 12 11:53:33 cebycny mysqld: /usr/sbin/mysqld(handle_fatal_signal+0x483) 
[0x7f7edfd774a3]
Oct 12 11:53:33 cebycny mysqld: /lib/libpthread.so.0(+0xeff0) [0x7f7edf4c9ff0]
Oct 12 11:53:33 cebycny mysqld: /lib/libc.so.6(gsignal+0x35) [0x7f7eddf6c1b5]
Oct 12 11:53:33 cebycny mysqld: /lib/libc.so.6(abort+0x180) [0x7f7eddf6efc0]
Oct 12 11:53:33 cebycny 
mysqld: /usr/sbin/mysqld(ha_innobase::innobase_peek_autoinc()+0x8f) 
[0x7f7edfe1fa2f]
Oct 12 11:53:33 cebycny 
mysqld: /usr/sbin/mysqld(ha_innobase::info_low(unsigned int, bool)+0x18f) 
[0x7f7edfe2524f]
Oct 12 11:53:33 cebycny 
mysqld: 
/usr/sbin/mysqld(ha_innobase::update_create_info(st_ha_create_information*)+0x29)
 
[0x7f7edfe256b9]
Oct 12 11:53:33 cebycny mysqld: /usr/sbin/mysqld(+0x49e3dc) [0x7f7edfd953dc]
Oct 12 11:53:33 cebycny mysqld: /usr/sbin/mysqld(mysqld_show_create(THD*, 
TABLE_LIST*)+0x7a8) [0x7f7edfd9d388]
Oct 12 11:53:33 cebycny 
mysqld: /usr/sbin/mysqld(mysql_execute_command(THD*)+0x184a) [0x7f7edfc7cb0a]
Oct 12 11:53:33 cebycny mysqld: /usr/sbin/mysqld(mysql_parse(THD*, char*, 
unsigned int, char const**)+0x3fb) [0x7f7edfc80dbb]
Oct 12 11:53:33 cebycny 
mysqld: /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, 
unsigned int)+0x115a) [0x7f7edfc81f2a]
Oct 12 11:53:33 cebycny mysqld: /usr/sbin/mysqld(do_command(THD*)+0xea) 
[0x7f7edfc8285a]
Oct 12 11:53:33 cebycny mysqld: /usr/sbin/mysqld(handle_one_connection+0x235) 
[0x7f7edfc74435]
Oct 12 11:53:33 cebycny mysqld: /lib/libpthread.so.0(+0x68ca) [0x7f7edf4c18ca]
Oct 12 11:53:33 cebycny mysqld: /lib/libc.so.6(clone+0x6d) [0x7f7ede00992d]
Oct 12 11:53:33 cebycny mysqld:
Oct 12 11:53:33 cebycny mysqld: Trying to get some variables.
Oct 12 11:53:33 cebycny mysqld: Some pointers may be invalid and cause the 
dump to abort.
Oct 12 11:53:33 cebycny mysqld: Query (7f7f1c0dcbc0): SHOW CREATE TABLE 
`calculation`
Oct 12

Re: Lost connection to MySQL server - need help.

2013-10-12 Thread Andrew Moore
Could be a crash related to innodb data dictionary being out of sync. Could
be a bug.

http://bugs.mysql.com/bug.php?id=55277
On 12 Oct 2013 11:21, Jørn Dahl-Stamnes sq...@dahl-stamnes.net wrote:

 On Saturday 12 October 2013 12:01, nixofortune wrote:
  You might want to comment
 
  bind-address= 127.0.0.1
 
  in your my.cnf and restart mysql server.

 It does not explain why it works under low load and not under high load.

 However, I seem to have found something. When I started phpMyAdmin and
 selected one of the databases, the server went away again and I found this
 in /var/log/syslog:

 Oct 12 11:53:33 cebycny mysqld: 131012 11:53:33  InnoDB: Assertion failure
 in
 thread 140182892447488 in file
 ../../../storage/innobase/handler/ha_innodb.cc
 line
  8066
 Oct 12 11:53:33 cebycny mysqld: InnoDB: Failing assertion: auto_inc > 0
 Oct 12 11:53:33 cebycny mysqld: InnoDB: We intentionally generate a memory
 trap.
 Oct 12 11:53:33 cebycny mysqld: InnoDB: Submit a detailed bug report to
 http://bugs.mysql.com.
 ...

Re: Lost connection to MySQL server - need help.

2013-10-12 Thread Jørn Dahl-Stamnes
On Saturday 12 October 2013 13:07, Andrew Moore wrote:
 Could be a crash related to innodb data dictionary being out of sync. Could
 be a bug.

Seems like a bug, yes. However, we had a strange situation yesterday when we 
had several processes in the state copying to tmp table (if I remember the 
exact phrase). After waiting 2 seconds, I restarted the server. It seemed 
to work OK until the backup started.

Perhaps we should restore the database that I suspect causes this, in order to 
rebuild the complete database.

-- 
Jørn Dahl-Stamnes
homepage: http://photo.dahl-stamnes.net/

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Lost connection to MySQL server - need help.

2013-10-12 Thread Reindl Harald


Am 12.10.2013 17:02, schrieb Jørn Dahl-Stamnes:
 On Saturday 12 October 2013 13:07, Andrew Moore wrote:
 Could be a crash related to innodb data dictionary being out of sync. Could
 be a bug.
 
 Seems like a bug, yes. However, we had a strange situation yesterday when we 
 had several processes in the state copying to tmp table (if I remember the 
 exact phrase). After waiting 2 seconds, I restarted the server. It seemed 
 to work OK until the backup started

so someone did OPTIMIZE TABLE on a large table
you do yourself no favour restarting the server at such a moment





Re: Lost connection to MySQL server - need help.

2013-10-12 Thread Jørn Dahl-Stamnes
On Saturday 12 October 2013 17:36, Reindl Harald wrote:
 so someone did OPTIMIZE TABLE on a large table
 you do yourself no favour restarting the server at such a moment

7 hours before the server was shut down, we did an ALTER TABLE to add a primary 
key to a table that is read-only from the web application.

-- 
Jørn Dahl-Stamnes
homepage: http://photo.dahl-stamnes.net/

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Lost connection to MySQL server - need help.

2013-10-12 Thread Reindl Harald


Am 12.10.2013 19:45, schrieb Jørn Dahl-Stamnes:
 On Saturday 12 October 2013 17:36, Reindl Harald wrote:
 so someone did OPTIMIZE TABLE on a large table
 you do yourself no favour restarting the server at such a moment
 
  7 hours before the server was shut down, we did an ALTER TABLE to add a 
 primary 
 key to a table that is read-only from the web application.

which means the table is most likely completely copied
into a temp file, and depending on the table size this
takes time - you killed the ALTER TABLE, I guess





Re: Lost connection to MySQL server - need help.

2013-10-12 Thread Chris McKeever
We had a similar issue a while back - and although it sounds similar, based
on your follow-ups it probably isn't, but I will just toss this out there
anyhow. We were experiencing connection timeouts when load would ramp up.
Doing some digging, we learned that the bandwidth of the firewall between
the servers would get consumed by a large WordPress load - and this in
essence backed up the rest of the requests until they timed out.

We fixed that load issue, which reduced the data passing through, and we have
experienced a significant performance boost in our app, not to mention a
reduction of these timeout issues.




On Sat, Oct 12, 2013 at 12:56 PM, Reindl Harald h.rei...@thelounge.netwrote:



 Am 12.10.2013 19:45, schrieb Jørn Dahl-Stamnes:
  On Saturday 12 October 2013 17:36, Reindl Harald wrote:
  so someone did OPTIMIZE TABLE on a large table
  you do yourself no favour restarting the server at such a moment
 
  7 hours before the server was shut down, we did an ALTER TABLE to add a
 primary
  key to a table that is read-only from the web application.

 which means the table is most likely completely copied
 into a temp file, and depending on the table size this
 takes time - you killed the ALTER TABLE, I guess




Re: Lost connection to MySQL server - need help.

2013-10-12 Thread Reindl Harald
Sounds like a scheduler issue.

Did you try deadline?
http://en.wikipedia.org/wiki/Deadline_scheduler

On Linux systems, pass elevator=deadline as a kernel param.

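A quick way to inspect and flip the scheduler without a reboot, shown for a
hypothetical data disk sda:

cat /sys/block/sda/queue/scheduler    # current scheduler is shown in [brackets]
echo deadline > /sys/block/sda/queue/scheduler

elevator=deadline on the kernel command line then makes it the default at boot.
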
Am 12.10.2013 20:58, schrieb Chris McKeever:
 We had a similar issue a while back - and although it sounds similar, based
 on your follow-ups it probably isn't, but I will just toss this out there
 anyhow. We were experiencing connection timeouts when load would ramp up.
 Doing some digging, we learned that the bandwidth of the firewall between
 the servers would get consumed by a large WordPress load - and this in
 essence backed up the rest of the requests until they timed out.
 
 We fixed that load issue, which reduced the data passing through, and we have
 experienced a significant performance boost in our app, not to mention a
 reduction of these timeout issues.
 
 On Sat, Oct 12, 2013 at 12:56 PM, Reindl Harald h.rei...@thelounge.netwrote:
 


 Am 12.10.2013 19:45, schrieb Jørn Dahl-Stamnes:
 On Saturday 12 October 2013 17:36, Reindl Harald wrote:
 so someone did OPTIMIZE TABLE on a large table
 you do yourself no favour restarting the server at such a moment

 7 hours before the server was shut down, we did an ALTER TABLE to add a
 primary
 key to a table that is read-only from the web application.

 which means the table is most likely completely copied
 into a temp file, and depending on the table size this
 takes time - you killed the ALTER TABLE, I guess






Re: help: innodb database cannot recover

2013-06-21 Thread Peter


boah you *must not* remove ibdata1
it contains the global tablespace even with file_per_table

ib_logfile0 and ib_logfile1 may be removed, but make sure you have
an as-consistent-as-possible backup of the whole datadir

I removed ib_logfile0 and ib_logfile1 and restarted mysql with 
innodb_force_recovery=1;
mysql keeps crashing and restarting:
 

thd: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = (nil) thread_stack 0x3
/usr/libexec/mysqld(my_print_stacktrace+0x2e) [0x84bbbae]
/usr/libexec/mysqld(handle_segfault+0x4bc) [0x81eca1c]
[0xf57fe400]
[0xf57fe416]
/lib/libc.so.6(gsignal+0x51) [0x45a7bb71]
/lib/libc.so.6(abort+0x17a) [0x45a7d44a]
/usr/libexec/mysqld(fil_io+0x377) [0x83ba177]
/usr/libexec/mysqld() [0x83a257b]
/usr/libexec/mysqld(buf_read_page+0x282) [0x83a3132]
/usr/libexec/mysqld(buf_page_get_gen+0x351) [0x839c111]
/usr/libexec/mysqld(btr_cur_search_to_nth_level+0x3c1) [0x838ca31]
/usr/libexec/mysqld(row_search_index_entry+0x79) [0x840d3c9]
/usr/libexec/mysqld() [0x840bf97]
/usr/libexec/mysqld(row_purge_step+0x574) [0x840d1e4]
/usr/libexec/mysqld(que_run_threads+0x535) [0x83fa815]
/usr/libexec/mysqld(trx_purge+0x365) [0x8427e25]
/usr/libexec/mysqld(srv_master_thread+0x75b) [0x842009b]
/lib/libpthread.so.0() [0x45bf09e9]
/lib/libc.so.6(clone+0x5e) [0x45b2dc2e]
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
130620 00:47:21 mysqld_safe Number of processes running now: 0
130620 00:47:21 mysqld_safe mysqld restarted
InnoDB: Error: tablespace size stored in header is 456832 pages, but
InnoDB: the sum of data file sizes is only 262080 pages
InnoDB: Cannot start InnoDB. The tail of the system tablespace is
InnoDB: missing. Have you edited innodb_data_file_path in my.cnf in an
InnoDB: inappropriate way, removing ibdata files from there?
InnoDB: You can set innodb_force_recovery=1 in my.cnf to force
InnoDB: a startup if you are trying to recover a badly corrupt database.
130620  0:47:22 [ERROR] Plugin 'InnoDB' init function returned error.
130620  0:47:22 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.


If I set innodb_force_recovery=4 to restart mysql and then run mysqldump, I 
got the following error:
mysqldump: Got error: 2013: Lost connection to MySQL server during query when 
using LOCK TABLES

It looks like all the data from InnoDB is messed up and gone forever, even though 
the *.frm files are still there.

Peter
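
When the dump dies on LOCK TABLES like this, the usual salvage attempt is to
tell mysqldump not to lock at all; a sketch, assuming innodb_force_recovery
is still set and the output path is a placeholder:

mysqldump --skip-lock-tables --all-databases > /backup/salvage.sql

If one table keeps crashing the server, the next step is dumping database by
database (or table by table) and skipping the one that dies.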
