Re: need help from the list admin

2016-04-03 Thread Lentes, Bernd

- On Apr 1, 2016, at 9:56 PM, shawn l.green shawn.l.gr...@oracle.com wrote:

> Correct. MyISAM is not a transactional storage engine. It has no concept
> of COMMIT or ROLLBACK. Changes to it are controlled by a full table lock
> and as soon as the change is complete, the table is unlocked and is
> immediately visible to every other session.
> 
> What the replication system has done is to extend the length of that
> lock until the transaction completes to avoid situations where changes
> appear "out of sequence" to what is recorded in the Binary Log.

>> So when a transaction is rolled back, the inserted data in the MyISAM table
>> remains?
> 
> Correct. All of the changes that could be undone were undone. MyISAM
> changes can't be undone so they stayed in place.
> 

Aah. OK.
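
A minimal sketch that makes this visible, assuming a scratch database named
test (all names here are placeholders):

   CREATE TABLE test.m (id INT) ENGINE=MyISAM;
   CREATE TABLE test.i (id INT) ENGINE=InnoDB;

   START TRANSACTION;
   INSERT INTO test.m VALUES (1);  -- MyISAM: written to the table at once
   INSERT INTO test.i VALUES (1);  -- InnoDB: pending until COMMIT
   ROLLBACK;                       -- warns that non-transactional changes stay

   SELECT COUNT(*) FROM test.m;    -- 1: the MyISAM row survived the ROLLBACK
   SELECT COUNT(*) FROM test.i;    -- 0: the InnoDB change was undone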

> Try this as an analogy.
> 
> MyISAM tables are like writing directly with ink on paper. If you can
> complete the write, you have changed the row.
> 
> InnoDB tables are like whiteboards. You can undo your pending changes
> before someone else (one of the background threads of the InnoDB engine)
> makes the changes permanent.
> 
> The Binary Log is like a recipe used to rebuild your data in case it
> goes boom. If you start from a backup then repeat the sequence of
> actions as they were recorded in the Binary Log since that backup was
> created, you should wind up with exactly the same data you had before
> the problem.  If there is a problem with that sequence (actions are out
> of order) then rebuilding that data could be a problem.
> 
> Sequence makes a big difference even in less esoteric settings. Try this...
> 
> Start with your phone lying flat on your desk (screen up) pointing
> directly away from you. Roll it 45 degrees to the right. Now lift it
> vertically towards you 90 degrees (maintain the roll).  The phone is now
> pointing straight up but the screen is turned away from you.
> 
> Then try those same actions in reverse order. Lift first, then roll it
> to the right. In this case the screen is pointing in your general
> direction but the whole thing is leaning off to one side.

I don't understand the example completely, but I understand
what you want to say: changing the order of statements
may lead to a different result.

Bernd
 




Re: need help from the list admin

2016-04-03 Thread Lentes, Bernd


- On Apr 1, 2016, at 5:45 PM, shawn l.green shawn.l.gr...@oracle.com wrote:

>> Is the mix of MyISAM and InnoDB a problem with Row-Based Logging or
>> with Statement-Based Logging, or with both?
>>
>>
> 
> Both.
> 
>
>>
>> I don't understand the example:
>> Do "begin transaction" and "COMMIT" have any influence on the insert?
>> From what I understand, a MyISAM table does not support transactions,
>> so it should not care about "begin transaction" and "commit".
>> So the insert should be done immediately. The SELECT on the InnoDB table also
>> should not wait, because it's applied without "LOCK IN SHARE MODE".
>> So x rows are added immediately. This is done on the master, written in the log
>> and then replicated to the slave, which also adds x rows.
>> Then connection 2 deletes 8 rows, one of which is from the previous insert.
>> First on the master and then on the slave.
>> I assume that the connections are established in the order they appear here
>> (connection 2 is established after the insert in connection 1).
>> So on both, 8 rows are deleted.
>>
>>
> 
> 
> You said, "This is done on the master, written in the log and then
> replicated to the slave, "
> 
> The INSERT would not appear in the Binary log until after session 1
> commits. Even if session 1 does a rollback, you would still see the
> entire transaction including the ROLLBACK. We have to do it that way to
> preserve the transaction isolation of the InnoDB data.
> 
> Yes, you read the shorthand correctly and in the correct temporal sequence.
>   session1 did two commands.
>   session2 issued one command.
>   session1 did a commit.
> 
> It does not matter whether the sessions were created in that order or not.
> Only the sequence in which the commands are executed matters.
> 
> 
>>
>>
>> Independent of the binlog_format?
>> Does commit mean "write now to the binlog"?
>>
> 
> Effectively, it does (for InnoDB-based transactions). InnoDB first
> writes the entire transaction to the Binary Log (it was sitting in the
> Binlog cache up until this point) then it pumps the necessary data into
> the REDO log (for disaster recovery). At that point the transaction is
> considered "committed".  In the case of a rollback, there is nothing to
> log in either location, no permanent changes were made to the data.
> However if the transaction that rolled back contained statements that
> changed MyISAM tables, then the entire transaction (all of the work it
> did) needs to be written into the Binary Log and REDO log just to have
> the very last command be "ROLLBACK".   What that will do is create the
> same sequence of data changes on the slave that happened on the master.
> 
> 
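
One way to watch this from the SQL prompt, as a sketch (the binlog file name
is a placeholder for whatever SHOW MASTER STATUS reports on your server):

   SHOW MASTER STATUS;             -- note the current binlog file and position
   -- run the transaction in another session, then:
   SHOW BINLOG EVENTS IN 'mysql-bin.000001';
   -- the whole BEGIN ... ROLLBACK block only shows up here after the
   -- transaction has ended, never statement by statement
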
In case of a rollback: is the INSERT into the MyISAM table also rolled back?
I think not.


Bernd
 




Re: need help from the list admin

2016-04-01 Thread Reindl Harald



On 01.04.2016 at 21:09, Lentes, Bernd wrote:

- On Apr 1, 2016, at 5:45 PM, shawn l.green shawn.l.gr...@oracle.com wrote:

You said, "This is done on the master, written in the log and then
replicated to the slave, "

The INSERT would not appear in the Binary log until after session 1
commits.


So the INSERT respects the transaction (begin transaction ... COMMIT)
even though it's a MyISAM table?
Because I read that MyISAM does not care about transactions:
http://stackoverflow.com/questions/8036005/myisam-engine-transaction-support


and hence you should not mix InnoDB and non-transactional tables; a
MyISAM table is not, and never will be, part of a transaction






Re: need help from the list admin

2016-04-01 Thread Lentes, Bernd
Sorry for the PM!

- On Apr 1, 2016, at 5:45 PM, shawn l.green shawn.l.gr...@oracle.com wrote:

>>> You would be better served by first converting your MyISAM tables to
>>> InnoDB to stop mixing storage engine behaviors (transactional and
>>> non-transactional) within the scope of a single transaction. But if you
>>> cannot convert them, using MIXED will be a good compromise.
>>
>> Is the mix of MyISAM and InnoDB a problem with Row-Based Logging or
>> with Statement-Based Logging, or with both?
> Both.

Ah! At first I thought it was just a problem for RBL.

>>> Look at this sequence and think what would happen without that "stronger
>>> locking" you mentioned earlier.
>>>
>>> (connection 1)
>>>begin transaction
>>>INSERT myisam_table SELECT ... FROM InnoDB_table WHERE ...
>>>   (connection 2)
>>>   DELETE myisam_table WHERE ...  (this removes one of the rows that
>>>   connection 1 just added)
>> (end of connection 2)
>>> (connection 1)
>>>COMMIT


>> I don't understand the example:
>> Do "begin transaction" and "COMMIT" have any influence on the insert?
>> From what I understand, a MyISAM table does not support transactions,
>> so it should not care about "begin transaction" and "commit".
>> So the insert should be done immediately. The SELECT on the InnoDB table also
>> should not wait, because it's applied without "LOCK IN SHARE MODE".
>> So x rows are added immediately. This is done on the master, written in the log
>> and then replicated to the slave, which also adds x rows.
>> Then connection 2 deletes 8 rows, one of which is from the previous insert.
>> First on the master and then on the slave.
>> I assume that the connections are established in the order they appear here
>> (connection 2 is established after the insert in connection 1).
>> So on both, 8 rows are deleted.
>>
>>
> 
> 
> You said, "This is done on the master, written in the log and then
> replicated to the slave, "
> 
> The INSERT would not appear in the Binary log until after session 1
> commits. 

So the INSERT respects the transaction (begin transaction ... COMMIT)
even though it's a MyISAM table?
Because I read that MyISAM does not care about transactions:
http://stackoverflow.com/questions/8036005/myisam-engine-transaction-support


>> Does commit mean "write now to the binlog"?
>>
> 
> Effectively, it does (for InnoDB-based transactions). InnoDB first
> writes the entire transaction to the Binary Log (it was sitting in the
> Binlog cache up until this point) then it pumps the necessary data into
> the REDO log (for disaster recovery). 

And at what point in that sequence is the data written to the tablespace?

> At that point the transaction is
> considered "committed".  In the case of a rollback, there is nothing to
> log in either location, no permanent changes were made to the data.
> However if the transaction that rolled back contained statements that
> changed MyISAM tables, then the entire transaction (all of the work it
> did) needs to be written into the Binary Log and REDO log just to have
> the very last command be "ROLLBACK".   What that will do is create the
> same sequence of data changes on the slave that happened on the master.

So when the transaction is rolled back, the inserted data in the MyISAM table
remains?

Thanks again.

Bernd

P.S. I tried several times to rename the subject to something like
"Replication - was "need help from the list admin"", but the mail
always bounced back because it was recognized as spam!?!
I just renamed the subject!





Re: need help from the list admin

2016-04-01 Thread Lentes, Bernd

- On Apr 1, 2016, at 5:52 PM, shawn l.green shawn.l.gr...@oracle.com wrote:

>> Which is true? When the transaction started, or when the first read is
>> performed?
 
> Until you need to establish a snapshot of the data, you don't need
> a snapshot position.
> 
> The transaction physically begins (rows begin to be protected against
> changes by other transactions) with the first read.

OK. But only when the first read happens inside a transaction and the
isolation level is REPEATABLE READ is the result "snapshotted" for further
queries, so that they see the same data?
And when several SELECTs are issued outside a transaction, do they always
get the current data?
 
> Consider the alternative: If we started protecting data with the START
> TRANSACTION command we would need to protect every row in every table in
> every database.  That is simply not efficient.

YES.

> We protect the pages that contain the rows that are physically required
> by the individual transaction. This is a much smaller locking footprint
> and is much easier to manage.
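
A two-session sketch of that snapshot timing, assuming an InnoDB table t that
already holds a row (id=1, val='old'); the names are placeholders:

   -- session A, at the default REPEATABLE READ level
   START TRANSACTION;               -- no snapshot is taken yet
   SELECT val FROM t WHERE id = 1;  -- first read: snapshot established; 'old'

   -- session B, autocommit
   UPDATE t SET val = 'new' WHERE id = 1;

   -- session A again
   SELECT val FROM t WHERE id = 1;  -- still 'old': same snapshot throughout
   COMMIT;
   SELECT val FROM t WHERE id = 1;  -- outside the transaction: 'new'

(START TRANSACTION WITH CONSISTENT SNAPSHOT would take the snapshot
immediately instead of at the first read.)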

Bernd
 




Re: need help from the list admin

2016-04-01 Thread shawn l.green



On 4/1/2016 10:08 AM, Lentes, Bernd wrote:



- On Apr 1, 2016, at 3:12 PM, Bernd Lentes 
bernd.len...@helmholtz-muenchen.de wrote:

Btw:
i read about isolation levels. REPEATABLE READ is the default for InnoDB.
http://dev.mysql.com/doc/refman/5.5/en/glossary.html#glos_repeatable_read says:

"...so that all queries within a transaction see data from the same snapshot,
that is, the data as it was at the time the transaction started.".

http://dev.mysql.com/doc/refman/5.5/en/glossary.html#glos_consistent_read says:

"With the repeatable read isolation level, the snapshot is based on the time
when the first read operation is performed".

Which is true? When the transaction started, or when the first read is
performed?



Until you need to establish a snapshot of the data, you don't need
a snapshot position.


The transaction physically begins (rows begin to be protected against 
changes by other transactions) with the first read.


Consider the alternative: If we started protecting data with the START 
TRANSACTION command we would need to protect every row in every table in 
every database.  That is simply not efficient.


We protect the pages that contain the rows that are physically required 
by the individual transaction. This is a much smaller locking footprint 
and is much easier to manage.





Bernd







Re: need help from the list admin

2016-04-01 Thread shawn l.green



On 4/1/2016 9:12 AM, Lentes, Bernd wrote:

- On Mar 25, 2016, at 9:54 PM, shawn l.green shawn.l.gr...@oracle.com wrote:



"Unsafe" in that sense replies to the fact that certain commands can
have a different effect when processed from the Binary Log than they did
when they were executed originally on the system that wrote the Binary
Log. This would be true for both a point-in-time recovery situation and
for replication. The topic of unsafe commands is covered rather well on
these pages:
http://dev.mysql.com/doc/refman/5.6/en/replication-rbr-safe-unsafe.html
http://dev.mysql.com/doc/refman/5.6/en/replication-sbr-rbr.html

This is particularly true for commands that may cross transactional
boundaries and change non-transactional tables.  The effect of those
commands are apparent immediately to any other user of the server. They
do not rely on the original transaction to complete with a COMMIT. The
workaround we employed was to keep the non-transactional table locked
(to keep others from altering it) until the transaction completes
(COMMIT or ROLLBACK). That way we do our best to make all changes
"permanent" at the same time.



Hi,

Oh my god. The more I read, the more confused I get. I totally underrated
replication.
But I will not give up ;-) And I appreciate your help, Shawn.
What do you mean by the workaround? Does MySQL do this automatically, or does
it have to be done in the app code?



It's inside the server. You don't need to do anything as a user.



You would be better served by first converting your MyISAM tables to
InnoDB to stop mixing storage engine behaviors (transactional and
non-transactional) within the scope of a single transaction. But if you
cannot convert them, using MIXED will be a good compromise.


Is the mix of MyISAM and InnoDB a problem with Row-Based Logging or
with Statement-Based Logging, or with both?




Both.



Look at this sequence and think what would happen without that "stronger
locking" you mentioned earlier.

(connection 1)
   begin transaction
   INSERT myisam_table SELECT ... FROM InnoDB_table WHERE ...
  (connection 2)
  DELETE myisam_table WHERE ...  (this removes one of the rows that
  connection 1 just added)

(end of connection 2)

(connection 1)
   COMMIT

When the slave sees this sequence, it will get the command from
Connection2 first (it completed first, so it winds up in the Binary Log).
It removed 8 rows on the master, but it would only find 7 on the slave.
Why? The 8th row has not been added to the MyISAM table on the slave,
because the transaction that adds it hasn't been recorded to the Binary
Log yet.

That's why the stronger locking comes into play. If we had not
blocked connection 2 until connection 1 completed, things would be out of
order, temporally speaking. It's still possible for things to happen out of
sequence on the slave when mixing transactional and non-transactional
tables in the same transaction.
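
Spelled out with placeholder table and column names, the sequence above looks
roughly like this; without the extended lock, connection 2's DELETE would
reach the Binary Log before the INSERT that produced its 8th row:

   -- connection 1
   START TRANSACTION;
   INSERT INTO myisam_table (id)
     SELECT id FROM innodb_table WHERE id <= 100;  -- MyISAM rows appear at once

   -- connection 2 (held back by the extended table lock until the COMMIT)
   DELETE FROM myisam_table WHERE id <= 8;

   -- connection 1
   COMMIT;  -- only now is the INSERT written to the Binary Log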



I don't understand the example:
Do "begin transaction" and "COMMIT" have any influence on the insert?
From what I understand, a MyISAM table does not support transactions,
so it should not care about "begin transaction" and "commit".
So the insert should be done immediately. The SELECT on the InnoDB table also
should not wait, because it's applied without "LOCK IN SHARE MODE".
So x rows are added immediately. This is done on the master, written in the log
and then replicated to the slave, which also adds x rows.
Then connection 2 deletes 8 rows, one of which is from the previous insert.
First on the master and then on the slave.
I assume that the connections are established in the order they appear here
(connection 2 is established after the insert in connection 1).
So on both, 8 rows are deleted.





You said, "This is done on the master, written in the log and then 
replicated to the slave, "


The INSERT would not appear in the Binary log until after session 1 
commits. Even if session 1 does a rollback, you would still see the 
entire transaction including the ROLLBACK. We have to do it that way to 
preserve the transaction isolation of the InnoDB data.


Yes, you read the shorthand correctly and in the correct temporal sequence.
  session1 did two commands.
  session2 issued one command.
  session1 did a commit.

It does not matter whether the sessions were created in that order or not.
Only the sequence in which the commands are executed matters.






This takes us to the next point you have...

The doc says: "Due to concurrency issues, a slave can become
inconsistent when a transaction contains updates to both transactional
and nontransactional tables. MySQL tries to preserve causality among
these statements by writing nontransactional statements to the
transaction cache, which is flushed upon commit. However, problems arise
when modifications done to nontransactional tables on behalf of a
transaction become immediately visible to other connections because
these changes may not be written immediately into the binary log."

Re: need help from the list admin

2016-04-01 Thread Lentes, Bernd


- On Apr 1, 2016, at 3:12 PM, Bernd Lentes 
bernd.len...@helmholtz-muenchen.de wrote:

Btw:
i read about isolation levels. REPEATABLE READ is the default for InnoDB.
http://dev.mysql.com/doc/refman/5.5/en/glossary.html#glos_repeatable_read says:

"...so that all queries within a transaction see data from the same snapshot, 
that is, the data as it was at the time the transaction started.".

http://dev.mysql.com/doc/refman/5.5/en/glossary.html#glos_consistent_read says:

"With the repeatable read isolation level, the snapshot is based on the time 
when the first read operation is performed".

Which is true? When the transaction started, or when the first read is
performed?


Bernd

 




RE: need help from the list admin

2016-04-01 Thread Lentes, Bernd
- On Mar 25, 2016, at 9:54 PM, shawn l.green shawn.l.gr...@oracle.com wrote:


> "Unsafe" in that sense replies to the fact that certain commands can
> have a different effect when processed from the Binary Log than they did
> when they were executed originally on the system that wrote the Binary
> Log. This would be true for both a point-in-time recovery situation and
> for replication. The topic of unsafe commands is covered rather well on
> these pages:
> http://dev.mysql.com/doc/refman/5.6/en/replication-rbr-safe-unsafe.html
> http://dev.mysql.com/doc/refman/5.6/en/replication-sbr-rbr.html
> 
> This is particularly true for commands that may cross transactional
> boundaries and change non-transactional tables.  The effect of those
> commands are apparent immediately to any other user of the server. They
> do not rely on the original transaction to complete with a COMMIT. The
> workaround we employed was to keep the non-transactional table locked
> (to keep others from altering it) until the transaction completes
> (COMMIT or ROLLBACK). That way we do our best to make all changes
> "permanent" at the same time.
> 

Hi,

Oh my god. The more I read, the more confused I get. I totally underrated
replication.
But I will not give up ;-) And I appreciate your help, Shawn.
What do you mean by the workaround? Does MySQL do this automatically, or does
it have to be done in the app code?
 
> You would be better served by first converting your MyISAM tables to
> InnoDB to stop mixing storage engine behaviors (transactional and
> non-transactional) within the scope of a single transaction. But if you
> cannot convert them, using MIXED will be a good compromise.

Is the mix of MyISAM and InnoDB a problem with Row-Based Logging or
with Statement-Based Logging, or with both?


> Look at this sequence and think what would happen without that "stronger
> locking" you mentioned earlier.
> 
> (connection 1)
>   begin transaction
>   INSERT myisam_table SELECT ... FROM InnoDB_table WHERE ...
>  (connection 2)
>  DELETE myisam_table WHERE ...  (this removes one of the rows that
>  connection 1 just added)
>   (end of connection 2)
> (connection 1)
>   COMMIT
> 
> When the slave sees this sequence, it will get the command from
> Connection2 first (it completed first, so it winds up in the Binary Log).
> It removed 8 rows on the master, but it would only find 7 on the slave.
> Why? The 8th row has not been added to the MyISAM table on the slave,
> because the transaction that adds it hasn't been recorded to the Binary
> Log yet.
> 
> That's why the stronger locking comes into play. If we had not
> blocked connection 2 until connection 1 completed, things would be out of
> order, temporally speaking. It's still possible for things to happen out of
> sequence on the slave when mixing transactional and non-transactional
> tables in the same transaction.
> 

I don't understand the example:
Do "begin transaction" and "COMMIT" have any influence on the insert?
From what I understand, a MyISAM table does not support transactions,
so it should not care about "begin transaction" and "commit".
So the insert should be done immediately. The SELECT on the InnoDB table also
should not wait, because it's applied without "LOCK IN SHARE MODE".
So x rows are added immediately. This is done on the master, written in the log
and then replicated to the slave, which also adds x rows.
Then connection 2 deletes 8 rows, one of which is from the previous insert.
First on the master and then on the slave.
I assume that the connections are established in the order they appear here
(connection 2 is established after the insert in connection 1).
So on both, 8 rows are deleted.



> This takes us to the next point you have...
>> The doc says: "Due to concurrency issues, a slave can become
>> inconsistent when a transaction contains updates to both transactional
>> and nontransactional tables. MySQL tries to preserve causality among
>> these statements by writing nontransactional statements to the
>> transaction cache, which is flushed upon commit. However, problems arise
>> when modifications done to nontransactional tables on behalf of a
>> transaction become immediately visible to other connections because
>> these changes may not be written immediately into the binary log.
>> Beginning with MySQL 5.5.2, the binlog_direct_non_transactional_updates
>> variable offers one possible workaround to this issue. By default, this
>> variable is disabled. Enabling binlog_direct_non_transactional_updates
>> causes updates to nontransactional tables to be written directly to the
>> binary log, rather than to the transaction cache.
>> binlog_direct_non_transactional_updates works only for statements that
>> are replicated using the statement-based binary logging format; that is,
>> it works only when the value of binlog_format is STATEMENT, or when
>> binlog_format is MIXED and a given statement is being replicated using
>> the statement-based format."
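
For reference, a sketch of the two knobs the quoted passage mentions; both
are dynamic variables in 5.5:

   SHOW VARIABLES LIKE 'binlog_format';       -- STATEMENT, ROW or MIXED
   SET GLOBAL binlog_format = 'MIXED';        -- picked up by new sessions
   SET GLOBAL binlog_direct_non_transactional_updates = ON;
   -- 5.5.2+; only affects statement-based logging, as the doc text says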

Re: need help from the list admin

2016-03-30 Thread shawn l.green



On 3/30/2016 1:26 PM, Lentes, Bernd wrote:

- On Mar 30, 2016, at 7:04 PM, Reindl Harald h.rei...@thelounge.net wrote:



So i should use the default (autocommit=1)?


no, you should do what is appropriate for your application

if you don't care whether inserts/updates triggered by, let's say, a
web request are half written due to a crash or restart, use autocommit


Autocommit means that every statement is committed implicitly. Right?
COMMIT works only in conjunction with InnoDB tables and transactions. That's
what I understand.
I thought that when I make e.g. an insert into an InnoDB table, and that insert
is not completed (due to a crash, restart, whatever),
it is rolled back automatically after the restart. Is that wrong?



it depends: if the transaction made it into the Binary Log (if it is
enabled) and the REDO log as "committed", then InnoDB will finish the
commit (put the actual data in its proper place in the data files) after
recovery. If not, it will roll back and your data remains as it was.


http://dev.mysql.com/doc/refman/5.6/en/innodb-recovery.html



if you care that all or nothing is written, use transactions;
if you care that way, don't mix non-transactional tables with InnoDB


I'm planning to convert the MyISAM tables to InnoDB.



That will solve many of your data consistency problems (particularly 
those related to how things are recorded in the Binary Log), presuming 
you surround changes that involve multiple commands with transaction 
control commands.


If your sets of data changes only need one command to complete, then the 
overhead of issuing explicit START TRANSACTION and COMMIT commands is 
just going to create work you don't need for your workflow. If you need 
more than one command to make a complete and consistent update to your 
data, then use a transaction. If not, operating in autocommit mode is 
ideal.




Bernd






Re: need help from the list admin

2016-03-30 Thread Reindl Harald



On 30.03.2016 at 19:26, Lentes, Bernd wrote:

- On Mar 30, 2016, at 7:04 PM, Reindl Harald h.rei...@thelounge.net wrote:



So i should use the default (autocommit=1)?


no, you should do what is appropriate for your application

if you don't care whether inserts/updates triggered by, let's say, a
web request are half written due to a crash or restart, use autocommit


Autocommit means that every statement is committed implicitly. Right?
COMMIT works only in conjunction with InnoDB tables and transactions. That's
what I understand.
I thought that when I make e.g. an insert into an InnoDB table, and that insert
is not completed (due to a crash, restart, whatever),
it is rolled back automatically after the restart. Is that wrong?


transactions are not about single queries; transactions are all about
multiple queries, when you want them written all-or-nothing


please do some homework and read https://en.wikipedia.org/wiki/ACID 
which is basic knowledge about databases


the crash safety of InnoDB has nothing to do with commits





RE: need help from the list admin

2016-03-30 Thread Lentes, Bernd
- On Mar 30, 2016, at 7:04 PM, Reindl Harald h.rei...@thelounge.net wrote:


>> So I should use the default (autocommit=1)?
> 
> no, you should do what is appropriate for your application
> 
> if you don't care whether inserts/updates triggered by, let's say, a
> web request are half written due to a crash or restart, use autocommit

Autocommit means that every statement is committed implicitly. Right?
COMMIT works only in conjunction with InnoDB tables and transactions. That's
what I understand.
I thought that when I make e.g. an insert into an InnoDB table, and that insert
is not completed (due to a crash, restart, whatever),
it is rolled back automatically after the restart. Is that wrong?

> 
> if you care that all or nothing is written, use transactions;
> if you care that way, don't mix non-transactional tables with InnoDB

I'm planning to convert the MyISAM tables to InnoDB.
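
A minimal sketch of that conversion; the schema and table names are
placeholders:

   -- find the tables that still use MyISAM
   SELECT table_name
     FROM information_schema.tables
    WHERE table_schema = 'webapp_db' AND engine = 'MyISAM';

   -- convert one table (this rebuilds it, so plan a maintenance window)
   ALTER TABLE webapp_db.some_table ENGINE = InnoDB;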


Bernd
 




Re: need help from the list admin

2016-03-30 Thread Reindl Harald


On 30.03.2016 at 18:56, Lentes, Bernd wrote:

- On Mar 28, 2016, at 9:53 PM, shawn l.green shawn.l.gr...@oracle.com wrote:

I read that the conversion is not difficult. But does the code of our webapp
have to be changed? It's written in PHP and Perl.
What I understand is that inserts/updates/deletes on InnoDB tables have to be
committed. Yes?


No. The server's default is to have --autocommit=1, which means that
there is an implicit commit at the end of every command. You do not need
to state explicitly "COMMIT" every time you want this to happen.

In fact, disabling autocommit has gotten many new users into trouble
because they did not understand the behavior they changed.


So I should use the default (autocommit=1)?


no, you should do what is appropriate for your application

if you don't care whether inserts/updates triggered by, let's say, a
web request are half written due to a crash or restart, use autocommit


if you care that all or nothing is written, use transactions;
if you care that way, don't mix non-transactional tables with InnoDB





Re: need help from the list admin

2016-03-30 Thread Lentes, Bernd

- On Mar 28, 2016, at 9:53 PM, shawn l.green shawn.l.gr...@oracle.com wrote:

>>
>> I read that the conversion is not difficult. But does the code of our
>> webapp have to be changed? It's written in PHP and Perl.
>> What I understand is that inserts/updates/deletes on InnoDB tables have to
>> be committed. Yes?
> 
> No. The server's default is to have --autocommit=1, which means that
> there is an implicit commit at the end of every command. You do not need
> to state explicitly "COMMIT" every time you want this to happen.
> 
> In fact, disabling autocommit has gotten many new users into trouble
> because they did not understand the behavior they changed.

So I should use the default (autocommit=1)?

 
> Here is a reference from the 5.0 manual to illustrate that this behavior
> has been around for a long time:
> http://dev.mysql.com/doc/refman/5.0/en/sql-syntax-transactions.html

 
Bernd
 




Re: need help from the list admin

2016-03-28 Thread Reindl Harald



On 28.03.2016 at 21:36, Lentes, Bernd wrote:

- On Mar 27, 2016, at 2:49 PM, Reindl Harald h.rei...@thelounge.net wrote:


On 27.03.2016 at 14:34, Lentes, Bernd wrote:

You would be better served by first converting your MyISAM tables to
InnoDB to stop mixing storage engine behaviors (transactional and
non-transactional) within the scope of a single transaction. But if you
cannot convert them, using MIXED will be a good compromise.


Is this a big problem? Something to take care of? Currently we have a mix.
I will ask the girl who developed it why we have both kinds. I hope I can
convert them.


surely - when you have non-transactional tables involved in
updates/inserts you can forget about using transactions at all, since
an interruption or rollback would not roll back already-written changes in
MyISAM tables

transactions are all about consistency - impossible with a mix of InnoDB
and MyISAM tables


I read that the conversion is not difficult. But does the code of our webapp
have to be changed? It's written in PHP and Perl.
What I understand is that inserts/updates/deletes on InnoDB tables have to be
committed. Yes?


first, please stop placing a space before "?" - it hurts :-)

NO - it only has to be committed if you START A TRANSACTION at the beginning


Does this have to be done in the code? Or can we use the system variable autocommit?


if you are using autocommit you lose any purpose of having transactions


That means that everything is committed immediately? Is this a good solution?


if you don't care about consistency, yes


What does "By default, client connections begin with autocommit set to 1"
in the doc mean?


that nobody starts a transaction automatically, because nobody can
tell when it would finish automatically, and hence you need to tell the
db server



That every client connection established via Perl/PHP is started with
autocommit=1?


surely


And when does the commit happen? When the connection is closed? Is that
helpful?


depends - maybe you should start reading up on what transactions in the
database world are, because then you can answer all of your questions
above on your own






Re: need help from the list admin

2016-03-28 Thread shawn l.green

Hello Bernd,

On 3/28/2016 3:36 PM, Lentes, Bernd wrote:



- On Mar 27, 2016, at 2:49 PM, Reindl Harald h.rei...@thelounge.net wrote:


Am 27.03.2016 um 14:34 schrieb Lentes, Bernd:

You would be better served by first converting your MyISAM tables to
InnoDB to stop mixing storage engine behaviors (transactional and
non-transactional) within the scope of a single transaction. But if you
cannot convert them, using MIXED will be a good compromise.


Is this a big problem? Something to take care of? Currently we have a mix.
I will ask the girl who developed it why we have both kinds. I hope I can
convert them.


surely - when you have non-transactional tables involved in
updates/inserts you can forget about using transactions at all, since
an interruption or rollback would not roll back already-written changes in
MyISAM tables

transactions are all about consistency - impossible with a mix of InnoDB
and MyISAM tables


I read that the conversion is not difficult. But does the code of our webapp
have to be changed? It's written in PHP and Perl.
What I understand is that inserts/updates/deletes on InnoDB tables have to be
committed. Yes?


No. The server's default is to have --autocommit=1, which means that 
there is an implicit commit at the end of every command. You do not need 
to state explicitly "COMMIT" every time you want this to happen.


In fact, disabling autocommit has gotten many new users into trouble 
because they did not understand the behavior they changed.



Does this have to be done in the code? Or can we use the system variable autocommit?


You should need to change nothing.


That means that everything is committed immediately? Is this a good solution?


It is going to behave better than the data you have now. The changes to 
the tables you will convert from MyISAM to InnoDB will not become 
visible to other sessions until after the COMMIT (implicit or explicit) 
completes. For finer-grained control over data visibility, you need to 
understand the broader topic of transaction isolation.
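
For instance (tx_isolation is the 5.5 variable name):

   SELECT @@global.tx_isolation, @@session.tx_isolation;    -- REPEATABLE-READ
   SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;  -- this session only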



What does "By default, client connections begin with autocommit set to 1"
in the doc mean?


It means that every command is already running in its own private 
mini-transaction. To start a multi-statement transaction you do not need 
to disable autocommit, you simply need to use the START TRANSACTION 
command.


Here is a reference from the 5.0 manual to illustrate that this behavior 
has been around for a long time:

http://dev.mysql.com/doc/refman/5.0/en/sql-syntax-transactions.html


That every client connection established via Perl/PHP is started with
autocommit=1?


It is as long as:

1) the global variable autocommit=1
2) the client does nothing to change its own session variable to 
autocommit=0



And when does the commit happen? When the connection is closed? Is that
helpful?



The commit happens at the end of each command. If you need to contain 
multiple commands within a single transaction, use START TRANSACTION and 
COMMIT.
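
A sketch contrasting the two modes, with a placeholder accounts table:

   -- with autocommit=1 (the default) each statement commits on its own:
   UPDATE accounts SET balance = balance - 100 WHERE id = 1;  -- permanent now

   -- a multi-statement transaction, without touching autocommit:
   START TRANSACTION;
   UPDATE accounts SET balance = balance - 100 WHERE id = 1;
   UPDATE accounts SET balance = balance + 100 WHERE id = 2;
   COMMIT;  -- both changes become visible to other sessions together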





Bernd







Re: need help from the list admin

2016-03-28 Thread Lentes, Bernd


- On Mar 27, 2016, at 2:49 PM, Reindl Harald h.rei...@thelounge.net wrote:

> On 27.03.2016 at 14:34, Lentes, Bernd wrote:
>>> You would be better served by first converting your MyISAM tables to
>>> InnoDB to stop mixing storage engine behaviors (transactional and
>>> non-transactional) within the scope of a single transaction. But if you
>>> cannot convert them, using MIXED will be a good compromise.
>>
>> Is this a big problem? Something to take care of? Currently we have a mix.
>> I will ask the girl who developed it why we have both kinds. I hope I can
>> convert them.
> 
> surely - when you have non-transactional tables involved in
> updates/inserts you can forget about using transactions at all, since
> an interruption or rollback would not roll back already-written changes in
> MyISAM tables
> 
> transactions are all about consistency - impossible with a mix of InnoDB
> and MyISAM tables

I read that the conversion is not difficult. But does the code of our webapp
have to be changed? It's written in PHP and Perl.
What I understand is that inserts/updates/deletes on InnoDB tables have to be
committed. Yes?
Does this have to be done in the code? Or can we use the system variable
autocommit?
That means that everything is committed immediately? Is this a good solution?
What does "By default, client connections begin with autocommit set to 1"
in the doc mean?
That every client connection established via Perl/PHP is started with
autocommit=1?
And when does the commit happen? When the connection is closed? Is that
helpful?

Bernd

 




Re: need help from the list admin

2016-03-27 Thread Reindl Harald



On 27.03.2016 at 14:34, Lentes, Bernd wrote:

You would be better served by first converting your MyISAM tables to
InnoDB to stop mixing storage engine behaviors (transactional and
non-transactional) within the scope of a single transaction. But if you
cannot convert them, using MIXED will be a good compromise.


Is this a big problem? Something to take care of? Currently we have a mix.
I will ask the girl who developed it why we have both kinds. I hope I can
convert them.


surely - when you have non-transactional tables involved in
updates/inserts you can forget about using transactions at all, since
an interruption or rollback would not roll back already-written changes in
MyISAM tables


transactions are all about consistency - impossible with a mix of InnoDB 
and MyISAM tables






Re: need help from the list admin

2016-03-27 Thread Lentes, Bernd


- On Mar 25, 2016, at 9:54 PM, shawn l.green shawn.l.gr...@oracle.com wrote:

> Hello Bernd,
> 
> Sorry for the delay, I wanted to make sure I had enough time to address
> all of your points.


>> He proposed to have two hosts, and on each is running a MySQL instance
>> as master AND slave. But it's not a "real multi master solution",
>> because pacemaker takes care that the IP for the web app just points to
>> one master. So i don't have the multi-master problems with concurrent
>> inserts (i believe).
> 
> This is wise advice. We (MySQL Support) often recommend exactly the same
> setup:  a master + one(or more) slave(s) using replication to keep the
> slaves in relative sync. I say "relative" because replication is
> asynchronous.
> 
> All writes are directed at the master. Clients that can tolerate the
> natural lag of the replication system can use any available slave for
> read-only queries.
> 

Is semi-synchronous replication a good idea? I think we have just a few
hundred inserts per day, so I believe the lag should not be a problem.
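
If you want to try it: semi-synchronous replication in 5.5 is plugin-based.
A sketch, assuming the plugin files that ship with the server on Linux:

   -- on the master
   INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
   SET GLOBAL rpl_semi_sync_master_enabled = 1;
   SET GLOBAL rpl_semi_sync_master_timeout = 1000;  -- ms; then fall back to async

   -- on the slave
   INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
   SET GLOBAL rpl_semi_sync_slave_enabled = 1;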

>> His idea is that host A is master for the slave on host B, and host B is
>> the master for the slave on host A. OK ?
>> Let's imagining that the IP to the web app points to host A, inserts are
>> done to the master on host A and replicated to the slave on host B. Now
>> host A has problems, pacemaker redirects the IP to host B, and
>> everything should be fine.
>> What do you think about this setup ? Where is the advantage to a
>> "classical Master/Slave Replication" ? How should i configure
>> log-slave-updates in this scenario ?
> 
> We have a page on that in the manual (with a diagram):
> http://dev.mysql.com/doc/refman/5.6/en/replication-solutions-switch.html
> 

I will read that.

> 
>> Let's imagine i have two hosts again: Host A is master, host B is slave.
>> Nothing else. No real or pseudo "Multi-Master". IP points to host A.
>> Host A has problems, pacemaker recognizes it, promotes B to master and
>> pivot the IP. Everything should be fine. Where is the disadvantage of
>> this setup compared to the "Multi-Master Replication" in the book ? The
>> OCF ressource agent for mysql should be able to handle the mysql stuff
>> and the RA for the IP pivots the IP.
>>
> 
> Remember to wait for the slave to catch up to the master it lost contact
> with. That way its data is as current as possible. Then redirect your
> clients to the new read-write node in your replication topology.
> 

What if the slave is behind and the master is gone? Then it has no way
either to be up to date or to catch up.
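
To judge how far behind a surviving slave is, a sketch (the file name and
position are placeholders for the last known master coordinates):

   SHOW SLAVE STATUS\G
   -- Seconds_Behind_Master = 0 and Read_Master_Log_Pos = Exec_Master_Log_Pos
   -- mean the slave has executed everything it ever received

   -- block (up to 60 s) until the slave has executed to a known position:
   SELECT MASTER_POS_WAIT('mysql-bin.000042', 120, 60);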


>>
>> The doc says: "For tables using the MYISAM storage engine, a stronger
>> lock is required on the slave for INSERT statements when applying them
>> as row-based events to the binary log than when applying them as
>> statements. This means that concurrent inserts on MyISAM tables are not
>> supported when using row-based replication."
>> What does this exactly mean ? Concurrent inserts in MyISAM-tables are
>> not possible if using RBL ? Or unsafe in the meaning they create
>> inconsistencies ?
>>
> 
> "Unsafe" in that sense replies to the fact that certain commands can
> have a different effect when processed from the Binary Log than they did
> when they were executed originally on the system that wrote the Binary
> Log. This would be true for both a point-in-time recovery situation and
> for replication. The topic of unsafe commands is covered rather well on
> these pages:
> http://dev.mysql.com/doc/refman/5.6/en/replication-rbr-safe-unsafe.html
> http://dev.mysql.com/doc/refman/5.6/en/replication-sbr-rbr.html

I will read that.
> 
> This is particularly true for commands that may cross transactional
> boundaries and change non-transactional tables.  The effect of those
> commands are apparent immediately to any other user of the server. They
> do not rely on the original transaction to complete with a COMMIT. The
> workaround we employed was to keep the non-transactional table locked
> (to keep others from altering it) until the transaction completes
> (COMMIT or ROLLBACK). That way we do our best to make all changes
> "permanent" at the same time.
> 
> 
>> "RBL (Row Based Logging) and synchronization of nontransactional tables.
>> When many rows are affected, the set of changes is split into several
>> events; when the statement commits, all of these events are written to
>> the binary log. When executing on the slave, a table lock is taken on
>> all tables involved, and then
>> the rows are applied in batch mode. (This may or may not be effective,
>> depending on the engine used for the slave's copy of the table.)
>> What does that mean? Effective? Is it creating inconsistencies? Or
>> just not effective in the sense of slow or inconvenient?
>>
>> Or should I prefer MIXED for binlog_format?
>>
> 
> You would be better served by first converting your MyISAM tables to
> InnoDB to stop mixing storage engine behaviors (transactional and
> non-transactional) within the scope of a single transaction.

Re: need help from the list admin

2016-03-25 Thread shawn l.green

Hello Bernd,

Sorry for the delay, I wanted to make sure I had enough time to address 
all of your points.


On 3/22/2016 7:07 AM, william drescher wrote:

sent for Bernd, and to see if it works from another sender
--
  Lentes, Bernd wrote:
Hi,

I know that there is a list dedicated to replication, but when you have
a look in the archive it's nearly completely empty. Really not busy.
So I hope it's OK if I ask here.
We have a web app which runs a MySQL DB and dynamic webpages with Perl
and Apache httpd. The webpages serve reading from and writing to the db. The db
is important for our own workflow, so I'd like to make it HA. I have
two HP servers and will use SLES 11 SP4 64bit as OS. MySQL is 5.5.47.
For HA I'd like to use pacemaker, which is available in the SLES High
Availability Extension. I have experience in Linux, but I'm not a
database administrator or developer. HA is important for us; we don't
have performance problems.
My first idea was to run the web app and the db in a virtual machine on
the host; in case of a failure of one host, pacemaker would run the VM
on the other host. The VM would be stored on an FC SAN. I stopped following
this idea. I have bought a book about HA: "..." by Oliver Liebel. It's
only available in German. But I can recommend it; it's very detailed and
well explained.
He proposed to have two hosts, and on each a MySQL instance runs
as master AND slave. But it's not a "real multi-master solution",
because pacemaker takes care that the IP for the web app just points to
one master. So I don't have the multi-master problems with concurrent
inserts (I believe).


This is wise advice. We (MySQL Support) often recommend exactly the same 
setup:  a master + one(or more) slave(s) using replication to keep the 
slaves in relative sync. I say "relative" because replication is 
asynchronous.


All writes are directed at the master. Clients that can tolerate the 
natural lag of the replication system can use any available slave for 
read-only queries.



His idea is that host A is master for the slave on host B, and host B is
the master for the slave on host A. OK?
Let's imagine that the IP to the web app points to host A, inserts are
done to the master on host A and replicated to the slave on host B. Now
host A has problems, pacemaker redirects the IP to host B, and
everything should be fine.
What do you think about this setup? Where is the advantage over a
"classical Master/Slave Replication"? How should I configure
log-slave-updates in this scenario?


We have a page on that in the manual (with a diagram):
http://dev.mysql.com/doc/refman/5.6/en/replication-solutions-switch.html



Let's imagine I have two hosts again: host A is master, host B is slave.
Nothing else. No real or pseudo "Multi-Master". The IP points to host A.
Host A has problems, pacemaker recognizes it, promotes B to master and
pivots the IP. Everything should be fine. Where is the disadvantage of
this setup compared to the "Multi-Master Replication" in the book? The
OCF resource agent for mysql should be able to handle the mysql stuff
and the RA for the IP pivots the IP.



Remember to wait for the slave to catch up to the master it lost contact 
with. That way its data is as current as possible. Then redirect your 
clients to the new read-write node in your replication topology.




Now some dedicated questions to replication. I read a lot in the
official documentation, but some things are not clear to me.
In our db we have MyISAM and InnoDB tables.

From what I read, I'd prefer row-based replication. The doc says it is the
safest approach. But there still seem to be some problems:

The doc says: "For tables using the MYISAM storage engine, a stronger
lock is required on the slave for INSERT statements when applying them
as row-based events to the binary log than when applying them as
statements. This means that concurrent inserts on MyISAM tables are not
supported when using row-based replication."
What does this mean exactly? Are concurrent inserts into MyISAM tables
not possible when using RBL? Or are they unsafe in the sense that they
create inconsistencies?



"Unsafe" in that sense replies to the fact that certain commands can 
have a different effect when processed from the Binary Log than they did 
when they were executed originally on the system that wrote the Binary 
Log. This would be true for both a point-in-time recovery situation and 
for replication. The topic of unsafe commands is covered rather well on 
these pages:

http://dev.mysql.com/doc/refman/5.6/en/replication-rbr-safe-unsafe.html
http://dev.mysql.com/doc/refman/5.6/en/replication-sbr-rbr.html

This is particularly true for commands that may cross transactional 
boundaries and change non-transactional tables.  The effect of those 
commands is apparent immediately to any other user of the server. They 
do not rely on the original transaction to complete with a COMMIT. The 
workaround we employed was to keep the non-transactional table locked
(to keep others from altering it) until the transaction completes.

Re: need help from the list admin

2016-03-23 Thread Lentes, Bernd


- On Mar 23, 2016, at 11:11 AM, william drescher will...@techservsys.com 
wrote:

>>
>> Hi William,
>>
>> thanks for the try. Good idea!
>> Did you change anything?
>>
>>
>> Bernd
> 
> Yes, in the original document there were some characters that
> were put on the screen as Asian pictograph characters.  I
> replaced them with periods:
> I have bought a book
> about HA: "..." from Oliver Liebel
> 
> And I found the same characters in your sig and removed them.
> 
> bill
> 
> 
> 


Hi Bill,

thanks. I will try now to ask again.


Bernd
 




Re: need help from the list admin

2016-03-23 Thread william drescher

On 3/22/2016 7:49 AM, Lentes, Bernd wrote:



- On Mar 22, 2016, at 12:07 PM, william drescher will...@techservsys.com 
wrote:


sent for Bernd, and to see if it works from another sender
--
  Lentes, Bernd wrote:
Hi,

I know that there is a list dedicated to replication, but when
you have a look in the archive it's nearly completely empty. Really
not busy.
So I hope it's OK if I ask here.
we have a web app which runs a MySQL DB and dynamic webpages with
perl and apache httpd. Webpages serve reading and writing into
the db. The db is important for our own work flow, so i'd like to
make it HA. I have two HP servers and will use SLES 11 SP4 64bit
as OS. MySQL is 5.5.47. For HA i'd like to use pacemaker, which
is available in SLES High Availibility Extension. I have
experience in linux, but i'm not a database administrator nor
developer. HA is important for us, we don't have performance
problems.
My first idea was to run the web app and the db in a virtual
machine on the host and in case of a failure of one host
pacemaker would run the vm on the other host. VM would be stored
on a FC SAN. I stopped following this idea. I have bought a book
about HA: "..." from Oliver Liebel. It's only available in
german. But i can recommend it, it's very detailed and well
explained.
He proposed to have two hosts, and on each is running a MySQL
instance as master AND slave. But it's not a "real multi master
solution", because pacemaker takes care that the IP for the web
app just points to one master. So i don't have the multi-master
problems with concurrent inserts (i believe).
His idea is that host A is master for the slave on host B, and
host B is the master for the slave on host A. OK ?
Let's imagine that the IP to the web app points to host A,
inserts are done to the master on host A and replicated to the
slave on host B. Now host A has problems, pacemaker redirects the
IP to host B, and everything should be fine.
What do you think about this setup ? Where is the advantage to a
"classical Master/Slave Replication" ? How should i configure
log-slave-updates in this scenario ?
Let's imagine i have two hosts again: Host A is master, host B is
slave. Nothing else. No real or pseudo "Multi-Master". IP points
to host A. Host A has problems, pacemaker recognizes it, promotes
B to master and pivot the IP. Everything should be fine. Where is
the disadvantage of this setup compared to the "Multi-Master
Replication" in the book ? The OCF ressource agent for mysql
should be able to handle the mysql stuff and the RA for the IP
pivots the IP.

Now some dedicated questions to replication. I read a lot in the
official documentation, but some things are not clear to me.
In our db we have MyISAM and InnoDB tables.

From what I read, I'd prefer row-based replication. The doc says
it is the safest approach. But there still seem to be some problems:

The doc says: "For tables using the MYISAM storage engine, a
stronger lock is required on the slave for INSERT statements when
applying them as row-based events to the binary log than when
applying them as statements. This means that concurrent inserts
on MyISAM tables are not supported when using row-based
replication."
What does this mean exactly? Are concurrent inserts into MyISAM tables
not possible when using RBL? Or are they unsafe in the sense that they
create inconsistencies?

"RBL (Row Based Logging) and synchronization of nontransactional
tables. When many rows are affected, the set of changes is split
into several events; when the statement commits, all of these
events are written to the binary log. When executing on the
slave, a table lock is taken on all tables involved, and then
the rows are applied in batch mode. (This may or may not be
effective, depending on the engine used for the slave抯 copy of
the table.)"
What does that mean ? Effective ? Is it creating inconsistencies
? Or just not effective in the sense of slow or inconvinient ?

Or should i prefer MIXED for binlog_format ?

The doc says: " If a statement is logged by row and the session
that executed the statement has any temporary tables, logging by
row is used for all subsequent statements (except for those
accessing temporary tables) until all temporary tables in use by
that session are dropped.
This is true whether or not any temporary tables are actually
logged. Temporary tables cannot be logged using row-based format;
thus, once row-based logging is used, all subsequent statements
using that table are unsafe. The server approximates this
condition by treating all statements executed during the session
as unsafe until the session no longer holds any temporary tables."
What does that mean ? Unsafe ? Causing inconsistencies ? Problem
with SBL or RBL ?

The doc says: "Due to concurrency issues, a slave can become
inconsistent when a transaction contains updates to both
transactional and nontransactional tables. MySQL tries to
preserve causality among these statements by writing

Re: need help from the list admin

2016-03-22 Thread Lentes, Bernd


- On Mar 22, 2016, at 12:07 PM, william drescher will...@techservsys.com 
wrote:

> sent for Bernd, and to see if it works from another sender
> --
>  Lentes, Bernd wrote:
> Hi,
> 
> i know that there is a list dedicated to replication, but when
> you have a look in the archive it's nearly complete empty. Really
> not busy.
> So i hope it's ok if i ask here.
> we have a web app which runs a MySQL DB and dynamic webpages with
> perl and apache httpd. Webpages serve reading and writing into
> the db. The db is important for our own work flow, so i'd like to
> make it HA. I have two HP servers and will use SLES 11 SP4 64bit
> as OS. MySQL is 5.5.47. For HA i'd like to use pacemaker, which
> is available in SLES High Availibility Extension. I have
> experience in linux, but i'm not a database administrator nor
> developer. HA is important for us, we don't have performance
> problems.
> My first idea was to run the web app and the db in a virtual
> machine on the host and in case of a failure of one host
> pacemaker would run the vm on the other host. VM would be stored
> on a FC SAN. I stopped following this idea. I have bought a book
> about HA: "..." from Oliver Liebel. It's only available in
> german. But i can recommend it, it's very detailed and well
> explained.
> He proposed to have two hosts, and on each is running a MySQL
> instance as master AND slave. But it's not a "real multi master
> solution", because pacemaker takes care that the IP for the web
> app just points to one master. So i don't have the multi-master
> problems with concurrent inserts (i believe).
> His idea is that host A is master for the slave on host B, and
> host B is the master for the slave on host A. OK ?
> Let's imagining that the IP to the web app points to host A,
> inserts are done to the master on host A and replicated to the
> slave on host B. Now host A has problems, pacemaker redirects the
> IP to host B, and everything should be fine.
> What do you think about this setup ? Where is the advantage to a
> "classical Master/Slave Replication" ? How should i configure
> log-slave-updates in this scenario ?
> Let's imagine i have two hosts again: Host A is master, host B is
> slave. Nothing else. No real or pseudo "Multi-Master". IP points
> to host A. Host A has problems, pacemaker recognizes it, promotes
> B to master and pivot the IP. Everything should be fine. Where is
> the disadvantage of this setup compared to the "Multi-Master
> Replication" in the book ? The OCF ressource agent for mysql
> should be able to handle the mysql stuff and the RA for the IP
> pivots the IP.
> 
> Now some dedicated questions to replication. I read a lot in the
> official documentation, but some things are not clear to me.
> In our db we have MyISAM and InnoDB tables.
> 
> From what i read i'd prefer row based replication. The doc says
> is the safest approach. But there seems to be still some problems:
> 
> The doc says: "For tables using the MYISAM storage engine, a
> stronger lock is required on the slave for INSERT statements when
> applying them as row-based events to the binary log than when
> applying them as statements. This means that concurrent inserts
> on MyISAM tables are not supported when using row-based
> replication."
> What does this exactly mean ? Concurrent inserts in MyISAM-tables
> are not possible if using RBL ? Or unsafe in the meaning they
> create inconsistencies ?
> 
> "RBL (Row Based Logging) and synchronization of nontransactional
> tables. When many rows are affected, the set of changes is split
> into several events; when the statement commits, all of these
> events are written to the binary log. When executing on the
> slave, a table lock is taken on all tables involved, and then
> the rows are applied in batch mode. (This may or may not be
> effective, depending on the engine used for the slave抯 copy of
> the table.)"
> What does that mean ? Effective ? Is it creating inconsistencies
> ? Or just not effective in the sense of slow or inconvinient ?
> 
> Or should i prefer MIXED for binlog_format ?
> 
> The doc says: " If a statement is logged by row and the session
> that executed the statement has any temporary tables, logging by
> row is used for all subsequent statements (except for those
> accessing temporary tables) until all temporary tables in use by
> that session are dropped.
> This is true whether or not any temporary tables are actually
> logged. Temporary tables cannot be logged using row-based format;
> thus, once row-based logging is used, all subsequent statements
> using that table are unsafe. The server approximates this
> condition by treating all statements executed during the session
> as unsafe until the session no longer holds any temporary tables."
> What does that mean ? Unsafe ? Causing inconsistencies ? Problem
> with SBL or RBL ?
> 
> The doc says: "Due to concurrency issues, a slave can become
> inconsistent when a transaction 

Re: need help from the list admin

2016-03-22 Thread william drescher

sent for Bernd, and to see if it works from another sender
--
 Lentes, Bernd wrote:
Hi,

i know that there is a list dedicated to replication, but when
you have a look in the archive it's nearly completely empty. Really
not busy.

So i hope it's ok if i ask here.
We have a web app which runs a MySQL DB and dynamic webpages with
perl and apache httpd. The webpages serve reading from and writing into
the db. The db is important for our own work flow, so i'd like to
make it HA. I have two HP servers and will use SLES 11 SP4 64bit
as OS. MySQL is 5.5.47. For HA i'd like to use pacemaker, which
is available in the SLES High Availability Extension. I have
experience in linux, but i'm neither a database administrator nor
a developer. HA is important for us; we don't have performance
problems.
My first idea was to run the web app and the db in a virtual
machine on the host and, in case of a failure of one host,
have pacemaker run the vm on the other host. The VM would be stored
on a FC SAN. I stopped following this idea. I have bought a book
about HA: "..." by Oliver Liebel. It's only available in
German. But i can recommend it; it's very detailed and well
explained.
He proposed having two hosts, each running a MySQL
instance as master AND slave. But it's not a "real multi master
solution", because pacemaker takes care that the IP for the web
app always points to just one master. So i don't have the multi-master
problems with concurrent inserts (i believe).
His idea is that host A is the master for the slave on host B, and
host B is the master for the slave on host A. OK ?
Let's imagine that the IP for the web app points to host A:
inserts are done on the master on host A and replicated to the
slave on host B. Now host A has problems, pacemaker redirects the
IP to host B, and everything should be fine.
What do you think about this setup ? Where is the advantage over a
"classical Master/Slave Replication" ? How should i configure
log-slave-updates in this scenario ?
Let's imagine i have two hosts again: Host A is master, host B is
slave. Nothing else. No real or pseudo "Multi-Master". The IP points
to host A. Host A has problems, pacemaker recognizes it, promotes
B to master and pivots the IP. Everything should be fine. Where is
the disadvantage of this setup compared to the "Multi-Master
Replication" in the book ? The OCF resource agent for mysql
should be able to handle the mysql stuff, and the RA for the IP
pivots the IP.
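
For reference, a minimal sketch of how the circular setup is wired up, run
on host A with the mirror-image statement on host B; the host name, the
replication account and the log coordinates are placeholders:

CHANGE MASTER TO
    MASTER_HOST = 'hostB',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = '...',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS = 4;
START SLAVE;

Each server needs a unique server-id in my.cnf; a slave skips events that
carry its own server-id, which is what keeps the two-host circle from
looping endlessly.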


Now some specific questions about replication. I read a lot in the
official documentation, but some things are not clear to me.

In our db we have MyISAM and InnoDB tables.

From what i read i'd prefer row based replication. The doc says
it is the safest approach. But there still seem to be some problems:


The doc says: "For tables using the MYISAM storage engine, a 
stronger lock is required on the slave for INSERT statements when 
applying them as row-based events to the binary log than when 
applying them as statements. This means that concurrent inserts 
on MyISAM tables are not supported when using row-based 
replication."
What exactly does this mean ? Are concurrent inserts in MyISAM tables
not possible when using RBL ? Or are they unsafe in the sense that they
create inconsistencies ?
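
("Concurrent inserts" is the MyISAM optimization that lets INSERTs append
to the end of a table while SELECTs are still reading it; it is governed by
a server variable:

SHOW GLOBAL VARIABLES LIKE 'concurrent_insert';

Read against the quoted paragraph, which only talks about a stronger lock
on the slave, this sounds like a lost optimization, i.e. the inserts
serialize on the slave rather than create inconsistencies, though the
manual does not spell that out.)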


"RBL (Row Based Logging) and synchronization of nontransactional 
tables. When many rows are affected, the set of changes is split 
into several events; when the statement commits, all of these 
events are written to the binary log. When executing on the 
slave, a table lock is taken on all tables involved, and then
the rows are applied in batch mode. (This may or may not be 
effective, depending on the engine used for the slave's copy of
the table.)"
What does that mean ? Effective ? Does it create inconsistencies ?
Or is it just not effective in the sense of slow or inconvenient ?


Or should i prefer MIXED for binlog_format ?
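
For what it's worth, the format can be inspected and switched at runtime;
a sketch (MIXED here is simply the value under discussion, not a
recommendation):

SHOW GLOBAL VARIABLES LIKE 'binlog_format';
SET GLOBAL binlog_format = 'MIXED';  -- new sessions only; set it in my.cnf to persist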

The doc says: "If a statement is logged by row and the session
that executed the statement has any temporary tables, logging by 
row is used for all subsequent statements (except for those 
accessing temporary tables) until all temporary tables in use by 
that session are dropped.
This is true whether or not any temporary tables are actually 
logged. Temporary tables cannot be logged using row-based format; 
thus, once row-based logging is used, all subsequent statements 
using that table are unsafe. The server approximates this 
condition by treating all statements executed during the session 
as unsafe until the session no longer holds any temporary tables."
What does that mean ? Unsafe ? Causing inconsistencies ? Problem 
with SBL or RBL ?


The doc says: "Due to concurrency issues, a slave can become 
inconsistent when a transaction contains updates to both 
transactional and nontransactional tables. MySQL tries to 
preserve causality among these statements by writing 
nontransactional statements to the transaction cache, which is 
flushed 
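
As a minimal sketch of the failure mode this paragraph describes, with two
hypothetical tables, t_inno (InnoDB) and t_my (MyISAM):

BEGIN;
INSERT INTO t_inno VALUES (1);  -- transactional, can be rolled back
INSERT INTO t_my VALUES (1);    -- nontransactional, applied immediately
ROLLBACK;
-- t_inno is empty again, but t_my keeps its row, and the binary log still
-- has to record the MyISAM change; master and slave can diverge here.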

Re: need help from the list admin

2016-03-21 Thread Lentes, Bernd


- On Mar 19, 2016, at 3:28 PM, Reindl Harald h.rei...@thelounge.net wrote:

> On 19.03.2016 at 15:23, Reindl Harald wrote:
>>
>>
>> On 19.03.2016 at 15:17, Lentes, Bernd wrote:
>>> one further question:
>>> if some of my e-mails get through (like this one) and others don't, it
>>> does not depend on the reputation of our domain or mailserver ? Right ?
>>> So the reason has to be that particular e-Mail?
>>
>> both
>>
>> a spamfilter is typically score based and combines a ton of rules, some
>> add points, some remove points and the decision is made of the summary
>>
>> when you have a bad server reputation you start with a penalty, some
>> other rules hitting and a not well trained bayes makes the rest
>>
>> "How do i have to provide the ip" in case of RBLs?
>> https://en.wikipedia.org/wiki/Reverse_DNS_lookup
> 
> and the fact that your domain doesn't even provide a "~all" SPF policy, if you can't
> or don't want a strict "-all", doesn't make things any better; typically an
> SPF_PASS gives benefits in spamfilter scoring
> 
> Received-SPF: none (helmholtz-muenchen.de: No applicable sender policy
>  available) receiver=amysql-list-wsv01.oracle.com; identity=mailfrom;
>  envelope-from="bernd.len...@helmholtz-muenchen.de";
>  helo=mtaextp1.scidom.de; client-ip=146.107.103.20
> 

OK guys. I asked our computing center to provide an SPF resource record for our
outgoing mta in the DNS and to take
care that the ip of our outgoing mta appears on https://www.dnswl.org/ (our
domain is already listed). I hope they will.

The score of our mta has already risen:


sunhb58820:~ # nslookup 20.103.107.146.score.senderscore.com.
Server: 146.107.1.88
Address:146.107.1.88#53

Non-authoritative answer:
Name:   20.103.107.146.score.senderscore.com
Address: 127.0.4.76
=

76 isn't bad.

But nevertheless it must also have to do with the e-mail itself. I sent it from
gmx.de and web.de; both were declined.
My other mails (like this one) arrive.
I already shrank the mail, but this did not help either.
You can have a look on the two mails i tried: 
https://hmgubox.helmholtz-muenchen.de:8001/d/dc1ec4eb38/

I'm thankful for any hint about what else i can do, also regarding the mail itself.


Bernd


 




Re: need help from the list admin

2016-03-19 Thread Reindl Harald


On 18.03.2016 at 14:56, Chris Knipe wrote:

Blah blah blah...

Delivery to the following recipient failed permanently:

  mysql@lists.mysql.com

Technical details of permanent failure:
Your message was rejected by the server for the recipient domain
lists.mysql.com by lists-mx.mysql.com. [137.254.60.71].

The error that the other server returned was:
550 Currently Sending Spam See:
http://www.sorbs.net/lookup.shtml?5.200.22.158

Show me one site, where that IP is, or WAS ever blacklisted?


on sorbs, as you quoted yourself; only idiots block outright on
"spam.dnsbl.sorbs.net" and response 127.0.0.6, while using it for RBL scoring
is a different story


51.192.85.209.in-addr.arpa name = mail-qg0-f51.google.com

Name:   51.192.85.209.spam.dnsbl.sorbs.net
Address: 127.0.0.6

Received-SPF: pass (savage.za.org: Sender is authorized to use
 'ckn...@savage.za.org' in 'mfrom' identity (mechanism
 'include:_spf.google.com' matched)) receiver=amysql-list-wsv01.oracle.com;
 identity=mailfrom; envelope-from="ckn...@savage.za.org";
 helo=mail-qg0-f51.google.com; client-ip=209.85.192.51






Re: need help from the list admin

2016-03-19 Thread Chris Knipe
On Fri, Mar 18, 2016 at 3:43 PM, Lentes, Bernd <
bernd.len...@helmholtz-muenchen.de> wrote:

> i need your help. I'm trying to write an e-Mail to the list for already
> one week. I always get it back because it's classified as spam.
>


Ditto.  I've pretty much given up on this list...


Re: need help from the list admin

2016-03-19 Thread Reindl Harald



On 19.03.2016 at 15:23, Reindl Harald wrote:



On 19.03.2016 at 15:17, Lentes, Bernd wrote:

one further question:
if some of my e-mails get through (like this one) and others don't, it
does not depend on the reputation of our domain or mailserver ? Right ?
So the reason has to be that particular e-Mail?


both

a spamfilter is typically score based and combines a ton of rules, some
add points, some remove points and the decision is made of the summary

when you have a bad server reputation you start with a penalty, some
other rules hitting and a not well trained bayes makes the rest

"How do i have to provide the ip" in case of RBLs?
https://en.wikipedia.org/wiki/Reverse_DNS_lookup


and the fact that your domain doesn't even provide a "~all" SPF policy, if you can't
or don't want a strict "-all", doesn't make things any better; typically an
SPF_PASS gives benefits in spamfilter scoring


Received-SPF: none (helmholtz-muenchen.de: No applicable sender policy
 available) receiver=amysql-list-wsv01.oracle.com; identity=mailfrom;
 envelope-from="bernd.len...@helmholtz-muenchen.de";
 helo=mtaextp1.scidom.de; client-ip=146.107.103.20

[harry@srv-rhsoft:~]$ dig TXT helmholtz-muenchen.de @8.8.8.8
; <<>> DiG 9.10.3-P4-RedHat-9.10.3-12.P4.fc23 <<>> TXT 
helmholtz-muenchen.de @8.8.8.8

;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 25126
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

[harry@srv-rhsoft:~]$ dig TXT thelounge.net @8.8.8.8
;; ANSWER SECTION:
thelounge.net.  21599   IN  TXT 
"google-site-verification=XQcET0ij0uOdn8AvlL82t8FoGTthvfPKWRfNjSNTfaM"
thelounge.net.  21599   IN  TXT "v=spf1 a 
ip4:91.118.73.0/24 ip4:95.129.202.170 -all"






Re: need help from the list admin

2016-03-19 Thread Reindl Harald



On 19.03.2016 at 15:17, Lentes, Bernd wrote:

one further question:
if some of my e-mails get through (like this one) and others don't, it does not
depend on the reputation of our domain or mailserver ? Right ?
So the reason has to be that particular e-Mail?


both

a spamfilter is typically score based and combines a ton of rules, some 
add points, some remove points and the decision is made of the summary


when you have a bad server reputation you start with a penalty, some 
other rules hitting and a not well trained bayes makes the rest


"How do i have to provide the ip" in case of RBLs?
https://en.wikipedia.org/wiki/Reverse_DNS_lookup






Re: need help from the list admin

2016-03-19 Thread Reindl Harald



On 19.03.2016 at 15:08, Lentes, Bernd wrote:

Ok. I tried again:

pc53200:~ # nslookup 20.103.107.146.score.senderscore.com.
Server: 146.107.8.88
Address:146.107.8.88#53

Non-authoritative answer:
Name:   20.103.107.146.score.senderscore.com
Address: 127.0.4.62

My result is 127.0.4.62. How can i interpret this result?


simple - it's a reputation score
100 = perfect reputation - whitelist score
0 = no reputation, pure spammer

with 62 you make it through postscreen but end up with 2.5 points penalty
in SA, and the fact that two days ago you had "senderscore.com LISTED 127.0.4.63"
and now get worse scores shows that your outgoing server sends spam
(given that we have full reputation 100 there without any active
operation, even before i knew about the RBL/DNSWL)


anything below 127.0.4.70 should raise alerts
___

our postscreen-scoring:
score.senderscore.com=127.0.4.[0..20]*2
score.senderscore.com=127.0.4.[0..69]*2
score.senderscore.com=127.0.4.[90..100]*-1
___

our spamassassin scoring:

header   CUST_DNSBL_21 eval:check_rbl('cust21-lastexternal','score.senderscore.com.','^127\.0\.4\.(1?[0-9]|20)$')
describe CUST_DNSBL_21 score.senderscore.com (senderscore.com High)
score    CUST_DNSBL_21 1.5

header   CUST_DNSBL_25 eval:check_rbl('cust25-lastexternal','score.senderscore.com.','^127\.0\.4\.(0?[0-6]?[0-9])$')
describe CUST_DNSBL_25 score.senderscore.com (senderscore.com Medium)
score    CUST_DNSBL_25 1.0

header   CUST_DNSWL_2 eval:check_rbl('cust35-lastexternal','score.senderscore.com.','^127\.0\.4\.(9[0-9]|100)$')
describe CUST_DNSWL_2 score.senderscore.com (Low Trust)
score    CUST_DNSWL_2 -0.1





Re: need help from the list admin

2016-03-19 Thread Lentes, Bernd
Hi,

one further question:
if some of my e-mails get through (like this one) and others don't, it does not
depend on the reputation of our domain or mailserver ? Right ?
So the reason has to be that particular e-Mail ?


Bernd
 




Re: need help from the list admin

2016-03-19 Thread Lentes, Bernd


- On Mar 18, 2016 at 15:34, Reindl Harald h.rei...@thelounge.net wrote:

> On 18.03.2016 at 15:25, Lentes, Bernd wrote:
>>
>> - On Mar 18, 2016 at 14:52, Johan De Meersman vegiv...@tuxera.be wrote:
>>

> 
> as i already told you offlist
> senderscore.com  LISTED  127.0.4.67
> 
> this *is* a bad reputation
> 
> and even worse: you did not manage to get your server on any DNSWL
> 
> [harry@srv-rhsoft:~]$ nslookup 20.103.107.146.score.senderscore.com.
> Server: 127.0.0.1
> Address:127.0.0.1#53
> Non-authoritative answer:
> Name:   20.103.107.146.score.senderscore.com
> Address: 127.0.4.67

Ok. I tried again:

pc53200:~ # nslookup 20.103.107.146.score.senderscore.com.
Server: 146.107.8.88
Address:146.107.8.88#53

Non-authoritative answer:
Name:   20.103.107.146.score.senderscore.com
Address: 127.0.4.62

My result is 127.0.4.62. How can i interpret this result ? I was looking on
senderscore.com to find an explanation, but i have not been successful.
Also i'm redirected to senderscore.org. Is that ok ?
Does that mean my reputation is 62 ? That would be bad. Because if i check the
ip of our outgoing mailserver (146.107.103.20) in the webinterface, i get a
reputation of 74, which is not great but hopefully ok.


I also tested sorbs.net:

pc53200:~ # nslookup 20.103.107.146.dnsbl.sorbs.net
Server: 146.107.8.88
Address:146.107.8.88#53

*** Can't find 20.103.107.146.dnsbl.sorbs.net: No answer

pc53200:~ # nslookup 146.107.103.20.dnsbl.sorbs.net
Server: 146.107.8.88
Address:146.107.8.88#53

*** Can't find 146.107.103.20.dnsbl.sorbs.net: No answer

(How do i have to provide the ip ?) Our mailserver does not seem to appear on
sorbs.net. Is that a good sign ?


Bernd

 




Re: need help from the list admin

2016-03-19 Thread Reindl Harald



On 18.03.2016 at 15:25, Lentes, Bernd wrote:


- On Mar 18, 2016 at 14:52, Johan De Meersman vegiv...@tuxera.be wrote:


and yet, both of those messages made it through :-p

Stick your domain in http://mxtoolbox.com to see if there's any problems that
might be worth solving. If the mailserver classifies you as spam, that's
usually caused by something on your side :-)

- Original Message -

From: "Chris Knipe" <sav...@savage.za.org>
To: "Lentes, Bernd" <bernd.len...@helmholtz-muenchen.de>
Cc: "MySql" <mysql@lists.mysql.com>
Sent: Friday, 18 March, 2016 14:46:26
Subject: Re: need help from the list admin



Ditto.  I've pretty much given up on this list...




Neither our outgoing mailserver 
(http://mxtoolbox.com/SuperTool.aspx?action=blacklist%3a146.107.103.20=toolpage#)
 nor our domain
(http://mxtoolbox.com/SuperTool.aspx?action=blacklist%3ahelmholtz-muenchen.de=toolpage#)
 is listed there.
I checked that before i wrote the e-Mail. If you could help me to point out 
what's wrong on our side i could ask our mail admin to correct it.
Currently i don't have any idea.


as i already told you offlist
senderscore.com  LISTED  127.0.4.67

this *is* a bad reputation

and even worse: you did not manage to get your server on any DNSWL

[harry@srv-rhsoft:~]$ nslookup 20.103.107.146.score.senderscore.com.
Server: 127.0.0.1
Address:127.0.0.1#53
Non-authoritative answer:
Name:   20.103.107.146.score.senderscore.com
Address: 127.0.4.67
_

compare with 91.118.73.15 (our outgoing server), which has the best
possible reputation there (treated as whitelist) and is at the same time on
"list.dnswl.org" and "hostkarma.junkemailfilter", either of which
would possibly neutralize the BL listing in a scoring system


[harry@srv-rhsoft:~]$ nslookup 15.73.118.91.score.senderscore.com.
Server: 127.0.0.1
Address:127.0.0.1#53
Non-authoritative answer:
Name:   15.73.118.91.score.senderscore.com
Address: 127.0.4.100





Re: need help from the list admin

2016-03-19 Thread Chris Knipe
Ok :-)





-- 

Regards,
Chris Knipe


Re: need help from the list admin

2016-03-19 Thread Lentes, Bernd

- On Mar 18, 2016 at 14:52, Johan De Meersman vegiv...@tuxera.be wrote:

> and yet, both of those messages made it through :-p
> 
> Stick your domain in http://mxtoolbox.com to see if there's any problems that
> might be worth solving. If the mailserver classifies you as spam, that's
> usually caused by something on your side :-)
> 
> - Original Message -
>> From: "Chris Knipe" <sav...@savage.za.org>
>> To: "Lentes, Bernd" <bernd.len...@helmholtz-muenchen.de>
>> Cc: "MySql" <mysql@lists.mysql.com>
>> Sent: Friday, 18 March, 2016 14:46:26
>> Subject: Re: need help from the list admin
> 
>> Ditto.  I've pretty much given up on this list...
> 
> --
> Unhappiness is discouraged and will be corrected with kitten pictures.

Hi Johan,


Neither our outgoing mailserver 
(http://mxtoolbox.com/SuperTool.aspx?action=blacklist%3a146.107.103.20=toolpage#)
 nor our domain
(http://mxtoolbox.com/SuperTool.aspx?action=blacklist%3ahelmholtz-muenchen.de=toolpage#)
 is listed there.
I checked that before i wrote the e-Mail. If you could help me to point out 
what's wrong on our side i could ask our mail admin to correct it.
Currently i don't have any idea.


Bernd
 




Re: need help from the list admin

2016-03-18 Thread Chris Knipe
Blah blah blah...

Delivery to the following recipient failed permanently:

 mysql@lists.mysql.com

Technical details of permanent failure:
Your message was rejected by the server for the recipient domain
lists.mysql.com by lists-mx.mysql.com. [137.254.60.71].

The error that the other server returned was:
550 Currently Sending Spam See:
http://www.sorbs.net/lookup.shtml?5.200.22.158


Show me one site, where that IP is, or WAS ever blacklisted?


--
Chris.




On Fri, Mar 18, 2016 at 3:52 PM, Johan De Meersman <vegiv...@tuxera.be>
wrote:

>
> and yet, both of those messages made it through :-p
>
> Stick your domain in http://mxtoolbox.com to see if there's any problems
> that might be worth solving. If the mailserver classifies you as spam,
> that's usually caused by something on your side :-)
>
> - Original Message -
> > From: "Chris Knipe" <sav...@savage.za.org>
> > To: "Lentes, Bernd" <bernd.len...@helmholtz-muenchen.de>
> > Cc: "MySql" <mysql@lists.mysql.com>
> > Sent: Friday, 18 March, 2016 14:46:26
> > Subject: Re: need help from the list admin
>
> > Ditto.  I've pretty much given up on this list...
>
> --
> Unhappiness is discouraged and will be corrected with kitten pictures.
>



-- 

Regards,
Chris Knipe


Re: Query Help...

2015-10-22 Thread shawn l.green



On 10/22/2015 11:48 AM, Don Wieland wrote:



On Oct 20, 2015, at 1:24 PM, shawn l.green  wrote:

Which release of MySQL are you using?


Version 5.5.45-cll


How many rows do you get if you remove the GROUP_CONCAT operator? We don't need 
to see the results. (sometimes it is a good idea to look at the raw, 
unprocessed results)

Is it possible that you are attempting to concat more values than allowed by 
--group-concat-max-len ?


When I did this I realized I was missing a GROUP BY clause.

Here is the debugged working version. Thanks guys.

SELECT
ht.*,
CONCAT(o.first_name, " ", o.last_name) AS orphan,
GROUP_CONCAT(DISTINCT hti.rec_code ORDER BY hti.rec_code ASC SEPARATOR ", ") AS 
alloc
FROM hiv_transactions ht
LEFT JOIN tk_orphans o ON ht.orphan_id = o.orphan_id
LEFT JOIN hiv_trans_items hti ON ht.transaction_id = hti.hiv_transaction_id
WHERE ht.donor_id = 730 AND ht.tr_date BETWEEN "2015-01-01 00:00:00" AND "2015-12-31 
23:59:59"
GROUP BY ht.`transaction_id`
ORDER BY ht.tr_date DESC, ht.rec_code ASC;

Don Wieland
d...@pointmade.net
http://www.pointmade.net
https://www.facebook.com/pointmade.band




Thank you for sharing your solution.

Best wishes,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: Query Help...

2015-10-22 Thread Don Wieland

> On Oct 20, 2015, at 1:24 PM, shawn l.green  wrote:
> 
> Which release of MySQL are you using?

Version 5.5.45-cll

> How many rows do you get if you remove the GROUP_CONCAT operator? We don't 
> need to see the results. (sometimes it is a good idea to look at the raw, 
> unprocessed results)
> 
> Is it possible that you are attempting to concat more values than allowed by 
> --group-concat-max-len ?

When I did this I realized I was missing a GROUP BY clause.

Here is the debugged working version. Thanks guys.

SELECT 
ht.*, 
CONCAT(o.first_name, " ", o.last_name) AS orphan, 
GROUP_CONCAT(DISTINCT hti.rec_code ORDER BY hti.rec_code ASC SEPARATOR ", ") AS 
alloc 
FROM hiv_transactions ht 
LEFT JOIN tk_orphans o ON ht.orphan_id = o.orphan_id 
LEFT JOIN hiv_trans_items hti ON ht.transaction_id = hti.hiv_transaction_id 
WHERE ht.donor_id = 730 AND ht.tr_date BETWEEN "2015-01-01 00:00:00" AND 
"2015-12-31 23:59:59" 
GROUP BY ht.`transaction_id`
ORDER BY ht.tr_date DESC, ht.rec_code ASC;

Don Wieland
d...@pointmade.net
http://www.pointmade.net
https://www.facebook.com/pointmade.band






Re: Query Help...

2015-10-20 Thread Peter Brawley

On 2015-10-20 12:54 PM, Don Wieland wrote:

Hi all,

Trying to get a query working:

SELECT
ht.*,
CONCAT(o.first_name, " ", o.last_name) AS orphan,
GROUP_CONCAT(DISTINCT hti.rec_code ORDER BY hti.rec_code ASC SEPARATOR ", ") AS 
alloc

FROM hiv_transactions ht

LEFT JOIN tk_orphans o ON ht.orphan_id = o.orphan_id
LEFT JOIN hiv_trans_items hti ON ht.transaction_id = hti.hiv_transaction_id

WHERE ht.donor_id = 730 AND ht.tr_date BETWEEN "2014-01-01 00:00:00" AND
"2014-12-31 23:59:59"
ORDER BY ht.tr_date DESC, ht.rec_code ASC;



I am only getting one row from the “hiv_transactions” table when there are
multiple rows.

On the GROUP_CONCAT I am trying to get a comma-delimited list of the child
rec_code values with no duplicates.

Appreciate any help. Hopefully a small mod ;-)


Group_Concat() is an aggregating function, so you need to Group By the
column(s) on which you wish to aggregate, and for valid results you need
to limit the Selected columns to those on which you're aggregating plus
those columns that have unique values for your aggregating columns.


PB





Don Wieland
D W   D a t a






Re: Query Help...

2015-10-20 Thread shawn l.green



On 10/20/2015 1:54 PM, Don Wieland wrote:

Hi all,

Trying to get a query working:

SELECT
ht.*,
CONCAT(o.first_name, " ", o.last_name) AS orphan,
GROUP_CONCAT(DISTINCT hti.rec_code ORDER BY hti.rec_code ASC SEPARATOR ", ") AS 
alloc

FROM hiv_transactions ht

LEFT JOIN tk_orphans o ON ht.orphan_id = o.orphan_id
LEFT JOIN hiv_trans_items hti ON ht.transaction_id = hti.hiv_transaction_id

WHERE ht.donor_id = 730 AND ht.tr_date BETWEEN "2014-01-01 00:00:00" AND "2014-12-31 
23:59:59"
ORDER BY ht.tr_date DESC, ht.rec_code ASC;



I am only getting one row from the “hiv_transactions” table when there are
multiple rows.

On the GROUP_CONCAT I am trying to get a comma-delimited list of the child
rec_code values with no duplicates.

Appreciate any help. Hopefully a small mod ;-)


Don Wieland



Which release of MySQL are you using?

How many rows do you get if you remove the GROUP_CONCAT operator? We 
don't need to see the results. (sometimes it is a good idea to look at 
the raw, unprocessed results)


Is it possible that you are attempting to concat more values than 
allowed by --group-concat-max-len ?


Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: Design help

2013-04-21 Thread Denis Jedig

Neil,

On 21.04.2013 08:47, Neil Tompkins wrote:


Using joins I can obtain which country each city belongs to.  However,
should I consider putting a foreign key in the CITIES table referencing the
countries_id ?  Or is it sufficient to access using a join ?


It depends. Adding a reference to countries into the cities table 
would break normalization and would require you to maintain the 
correct reference (e.g. through the use of ON UPDATE triggers).


It might be beneficial to do so if you have a high number of 
queries for cities filtering for countries - having a direct 
reference obviously would spare you a JOIN execution and at least 
two index lookups.
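
A sketch of the two access paths, with table and column names assumed from
the thread:

-- normalized: resolve the country through joins
SELECT ci.name, co.name
FROM cities ci
JOIN regions re ON re.region_id = ci.region_id
JOIN countries co ON co.country_id = re.country_id;

-- denormalized: a countries_id column maintained directly in CITIES
SELECT ci.name, co.name
FROM cities ci
JOIN countries co ON co.country_id = ci.countries_id;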


In your current example however, the data set will typically be
small enough (on the order of 1,000 - 10,000 cities) that the query
performance certainly would not be enough of an issue to
justify the denormalization[1].


[1] http://en.wikipedia.org/wiki/Denormalization
--
Denis Jedig




Re: Design help

2013-04-21 Thread Neil Tompkins
Many thanks for your response.  Can you offer any advice with regard to the
usage of country_codes, e.g. gb, and regions, cities etc ?  I've been reading
up on http://en.wikipedia.org/wiki/ISO_3166 etc.  Should I be looking to
use a surrogate key for countries ?  Or the country code, like fr for France
?

Same with regions/states and cities and districts ?


On Sun, Apr 21, 2013 at 9:28 AM, Denis Jedig d...@syneticon.net wrote:

 Neil,

 On 21.04.2013 08:47, Neil Tompkins wrote:

  Using joins I can obtain which country each city belongs to.  However,
 should I consider putting a foreign key in the CITIES table referencing
 the
 countries_id ?  Or is it sufficient to access using a join ?


 It depends. Adding a reference to countries into the cities table would
 break normalization and would require you to maintain the correct reference
 (e.g. through the use of ON UPDATE triggers).

 It might be beneficial to do so if you have a high number of queries for
 cities filtering for countries - having a direct reference obviously would
 spare you a JOIN execution and at least two index lookups.

 In your current example however, the data set will typically be small
 enough (in the order of 1,000 - 10,000 cities) so the query performance
 certainly would not be that much of an issue to justify the
 denormalization[1].

 [1] http://en.wikipedia.org/wiki/Denormalization
 --
 Denis Jedig





RE: Design help

2013-04-21 Thread Ilya Kazakevich
Hello, 

Many thanks for your response.  Can you offer any advice with regards
usage
of country_codes eg gb and regions, cities etc ?  I've been reading up on
http://en.wikipedia.org/wiki/ISO_3166 etc.  Should I be looking to use a
Surrogate key for countries ?  Or the country code like fr for France ?

Same with regions/states and cities and districts ?

I do not think you need a surrogate key for countries. A country code is 2
letters, so if you use CHAR(2) CHARACTER SET ascii you only need 2 bytes for it.
That is fine, and your queries will be easier to read.
You do not need surrogate keys for US states either, but if you are speaking
about states in general you may need a key (many countries around the world
have states).
The same goes for cities: a city name is too big to be used as a primary key,
and there may be many cities with similar names.
Ilya.
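
A sketch of the tables that advice leads to (names assumed):

CREATE TABLE countries (
    code CHAR(2) CHARACTER SET ascii NOT NULL PRIMARY KEY,  -- 'fr', 'gb', ...
    name VARCHAR(100) NOT NULL
);

CREATE TABLE cities (
    city_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,  -- surrogate key
    country_code CHAR(2) CHARACTER SET ascii NOT NULL,         -- natural-key reference
    name VARCHAR(100) NOT NULL
);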







RE: Query Help

2013-04-19 Thread Ilya Kazakevich
Hello Richard, 

the count(*) for each week of 2013 so that I end up with:
http://dev.mysql.com/doc/refman/5.1/en/group-by-functions.html

Ilya.





Re: Query Help

2013-04-19 Thread Larry Martell
On Fri, Apr 19, 2013 at 8:24 AM, Richard Reina gatorre...@gmail.com wrote:
 Hello All,

 Happy Friday! I know how to do the following query:

select count(*) from sales where WEEK(sale_date)=15 AND
 YEAR(sale_date)=2013;

 But can someone tell me how I can do a query that will give me:

 the count(*) for each week of 2013 so that I end up with:

 WEEK | COUNT
 1   | 22
 2   | 31
 3   | 29
 etc...

select WEEK(sale_date), count(*) from sales where YEAR(sale_date)=2013
group by WEEK(sale_date);




Re: Query Help

2013-04-19 Thread Richard Reina
Perfect! Thank you Larry et al.

Have a great weekend.


2013/4/19 Larry Martell larry.mart...@gmail.com

 On Fri, Apr 19, 2013 at 8:24 AM, Richard Reina gatorre...@gmail.com
 wrote:
  Hello All,
 
  Happy Friday! I know how to do the following query:
 
 select count(*) from sales where WEEK(sale_date)=15 AND
  YEAR(sale_date)=2013;
 
  But can someone tell me how I can do a query that will give me:
 
  the count(*) for each week of 2013 so that I end up with:
 
  WEEK | COUNT
  1   | 22
  2   | 31
  3   | 29
  etc...

 select WEEK(sale_date), count(*) from sales where YEAR(sale_date)=2013
 group by WEEK(sale_date);



Re: Please Help. selectcol_arrayref problem

2013-04-02 Thread Richard Reina
I did a GRANT REPLICATION SLAVE ON *.* TO 'user'@'192.168.0.23' IDENTIFIED
BY 'psswd';

on the master. Doesn't *.* mean everything? Why would it just show me two
databases?






2013/4/2 Larry Martell larry.mart...@gmail.com

 On Tue, Apr 2, 2013 at 11:30 AM, Richard Reina gatorre...@gmail.com
 wrote:
  use DBI;
  my $dbh = DBI->connect( "DBI:mysql:rushload;192.168.0.1", $usrr, $passw,
  { RaiseError => 3 } );
  my $dbs = $dbh->selectcol_arrayref("show databases");

  #my $dsn = "dbi:mysql:information_schema:192.168.0.1:3306";
  #my $dbh = DBI->connect($dsn, $usrr, $passw);

  my $dbs = $dbh->selectcol_arrayref('show databases');

  print "@$dbs\n";
 
  When I query the server for a list of databases with the code above it
  returns the name of just two and there are over 10.
 
  Any ideas?

 Permissions - the user you're logging in as probably only has
 permission to see the 2 that are being returned.



Re: Please Help. selectcol_arrayref problem

2013-04-02 Thread Larry Martell
On Tue, Apr 2, 2013 at 11:30 AM, Richard Reina gatorre...@gmail.com wrote:
 use DBI;
 my $dbh = DBI->connect( "DBI:mysql:rushload;192.168.0.1", $usrr, $passw, {
 RaiseError => 3 } );
 my $dbs = $dbh->selectcol_arrayref("show databases");

 #my $dsn = "dbi:mysql:information_schema:192.168.0.1:3306";
 #my $dbh = DBI->connect($dsn, $usrr, $passw);

 my $dbs = $dbh->selectcol_arrayref('show databases');

 print "@$dbs\n";

 When I query the server for a list of databases with the code above it
 returns the name of just two and there are over 10.

 Any ideas?

Permissions - the user you're logging in as probably only has
permission to see the 2 that are being returned.




Re: Please Help. selectcol_arrayref problem

2013-04-02 Thread Stillman, Benjamin
Nope. That's just granting replication privileges so it can read updates
on all tables on all databases. It cannot select anything.

Why are you trying to connect with a replication slave user?
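
A sketch of the distinction; the account is the one from the thread, and
both privileges are standard:

-- lets the account list and read every database:
GRANT SELECT, SHOW DATABASES ON *.* TO 'user'@'192.168.0.23';

-- only lets the account stream the binary log as a replication slave:
GRANT REPLICATION SLAVE ON *.* TO 'user'@'192.168.0.23';

SHOW GRANTS FOR 'user'@'192.168.0.23';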







On 4/2/13 1:47 PM, Richard Reina gatorre...@gmail.com wrote:

I did a GRANT REPLICATION SLAVE ON *.* TO 'user'@'192.168.0.23'
IDENTIFIED
BY 'psswd';

on the master. Doesn't *.* mean everything? Why would it just show me two
databases?






2013/4/2 Larry Martell larry.mart...@gmail.com

 On Tue, Apr 2, 2013 at 11:30 AM, Richard Reina gatorre...@gmail.com
 wrote:
   use DBI;
   my $dbh = DBI->connect( "DBI:mysql:rushload;192.168.0.1", $usrr, $passw,
   { RaiseError => 3 } );
   my $dbs = $dbh->selectcol_arrayref("show databases");

   #my $dsn = "dbi:mysql:information_schema:192.168.0.1:3306";
   #my $dbh = DBI->connect($dsn, $usrr, $passw);

   my $dbs = $dbh->selectcol_arrayref('show databases');

   print "@$dbs\n";
 
  When I query the server for a list of databases with the code above it
  returns the name of just two and there are over 10.
 
  Any ideas?

 Permissions - the user you're logging in as probably only has
 permission to see the 2 that are being returned.









Re: Query help -

2013-03-31 Thread william drescher

On 3/31/2013 7:32 AM, william drescher wrote:

I have a table, schedule:
CREATE TABLE `schedule` (
   `schedule_id` mediumint(9) NOT NULL AUTO_INCREMENT,
   `provider` varchar(15) NOT NULL,
   `apptTime` datetime NOT NULL,
   `location` varchar(10) NOT NULL,
   `duration` smallint(5) unsigned NOT NULL,
   `standing_script` mediumint(9) DEFAULT NULL,
   `appt_status` char(1) NOT NULL,
   `patient_number` mediumint(9) NOT NULL,
   `notify` smallint(6) DEFAULT NULL,
   `comment` varchar(80) DEFAULT NULL,
   `history` varchar(200) DEFAULT NULL,
   `posted` tinyint(1) NOT NULL DEFAULT '0',
   PRIMARY KEY (`schedule_id`),
   UNIQUE KEY `patient` (`patient_number`,`apptTime`,`schedule_id`),
   UNIQUE KEY `provider` (`provider`,`apptTime`,`schedule_id`),
   KEY `standing` (`standing_script`),
   KEY `posted` (`posted`,`provider`,`apptTime`)
) ENGINE=InnoDB  DEFAULT CHARSET=ascii;

all of which can be ignored except for 'provider' and apptTime.

I want to query the database and have the result be only the next
appointment for 'patient_number' with each provider (the doctor).

eg:
2013-04-04 JSmith
2013-04-20 WJones

where the database contains:
2013-04-04 JSmith
2013-04-10 JSmith
2013-04-17 Jsmith
2013-04-20 WJones
2013-04-24 JSmith
etc

I can get a list of future appointments for 1 patient, but can't
figure out how to just get the first for each provider (there
might be 1..5 providers)

Any suggestions will be appreciated.
--bill


This will be a seldom-used query and the schedule database is
relatively small, so overhead is not a big deal.


--bill






Re: Query help - Solved

2013-03-31 Thread william drescher


of course, Group By

bill
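
For the archive, a sketch of that grouped query, using the columns from the
CREATE TABLE above and a placeholder patient number:

SELECT provider, MIN(apptTime) AS next_appt
FROM schedule
WHERE patient_number = 12345
  AND apptTime >= NOW()
GROUP BY provider;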





Re: Need Help Converting Character Sets

2012-10-02 Thread hsv
 2012/09/30 11:07 -0700, Mark Phillips 
The data for this table comes from a web page (charset utf8). I copy/paste word
files into gedit (on linux) and then copy/paste from gedit to a text boxes on 
the web page input form. I had thought I was stripping out all the funky 
characters by using a simple ascii editor like gedit, but obviously not.

After looking at the mysqldump for the table in a hex editor, I discovered I 
have these characters scatter throughout the body and intro columns:
&#8220;
&#8221;
&#8217;
&#8212;
&#8230;
&#8617;

How do you mean this? Is there an instance of the Unicode character
LEFT DOUBLE QUOTATION MARK, or else of the string &#8220;?

In any case, this sounds like not an SQL but a general-programming problem;
furthermore, I suggest that you carefully select a left double quotation mark
or any of the other punctuation characters (RIGHT DOUBLE QUOTATION MARK, RIGHT
SINGLE QUOTATION MARK, EM DASH, HORIZONTAL ELLIPSIS, ...) that, it seems, you
are getting and not liking, and carefully follow your procedure.

It further seems to me that you really do not want that to be a UTF-8 string,
but ASCII, or Latin1.





RE: Need Help Converting Character Sets

2012-10-01 Thread Rick James
(Sorry; these are random notes, not well structured for your questions.)

&#...; -- This is an html entity.  It is plain ascii, but _represents_ (in an
HTML context) some special character.  Some details:
http://htmltimes.com/html-special-characters.php
http://www.dwheeler.com/essays/quotes-in-html.html

That is, &#...; is not a character set issue in MySQL.  Most CHARACTER SETs
(latin1, utf8, etc) will handle them the same.  And they will continue to 
render on an HTML page as the 'funky characters'.  That's HTML doing the deed 
in the browser.

The ALTERs did nothing because there was nothing to do.

I don't know what process converted from '"' to &#8220;.

The utf8 equivalent:
http://www.fileformat.info/info/unicode/char/201c/index.htm
It would be hex e2809c (3 bytes).

The correct way is to get the utf8 encoding from the form, and store that 
in a utf8-declared column in the table.

If you already have utf8 bytes in your latin1 fields, it gets messier.

hex 22 is the 'straight' ascii double quote.  8220 and 8221 are the curved 
double quotes.  A common convention is to translate both curved ones to the 
single straight one.  But that would be work for you to explicitly do.

If you are using PHP, see functions htmlentities() and html_entity_decode().
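
If the cleanup is to be done in SQL instead, a sketch with REPLACE(),
following the curly-to-straight convention above (the entity strings are
the ones from the dump; one UPDATE per entity and per column):

UPDATE Articles SET body = REPLACE(body, '&#8220;', '"');   -- left curly double quote
UPDATE Articles SET body = REPLACE(body, '&#8221;', '"');   -- right curly double quote
UPDATE Articles SET body = REPLACE(body, '&#8217;', '\'');  -- curly apostrophe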



 -Original Message-
 From: Mark Phillips [mailto:m...@phillipsmarketing.biz]
 Sent: Sunday, September 30, 2012 11:08 AM
 To: Rick James
 Cc: h...@tbbs.net; Mysql List
 Subject: Re: Need Help Converting Character Sets
 
 Thanks to all of you for your very helpful suggestions! I was pulled
 off on a different project for a few days, and I am now just getting
 back to this one. Sorry for my absence.
 
 First, some more information:
 
 My table definition:
 CREATE TABLE `Articles` (
 `articleID` int(11) unsigned NOT NULL AUTO_INCREMENT,
 `title` varchar(200) NOT NULL DEFAULT '',
 `author` varchar(200) NOT NULL DEFAULT '',
 `body` text NOT NULL,
 `intro` text NOT NULL,
 `caption` text NOT NULL,
 `credits` text NOT NULL,
 `articleDate` date NOT NULL DEFAULT '0000-00-00',
 `imageTitle` varchar(255) NOT NULL DEFAULT '',
 `imageAltText` varchar(255) NOT NULL DEFAULT '',
 `imageWidth` float NOT NULL DEFAULT '1',
 `imageHeight` float NOT NULL DEFAULT '1',
 `imageFile` varchar(255) NOT NULL DEFAULT '',
 PRIMARY KEY (`articleID`),
 FULLTEXT KEY `search1` (`title`,`author`,`body`,`caption`,`credits`)
 ) ENGINE=MyISAM AUTO_INCREMENT=89 DEFAULT CHARSET=latin1
 
 First problem - CHARSET should be utf8.
 
 The data for this table comes from a web page (charset utf8). I
 copy/paste word files into gedit (on linux) and then copy/paste from
 gedit to a text boxes on the web page input form. I had thought I was
 stripping out all the funky characters by using a simple ascii editor
 like gedit, but obviously not.
 
 After looking at the mysqldump for the table in a hex editor, I
 discovered I have these characters scatter throughout the body and
 intro columns:
 &#8220;
 &#8221;
 &#8217;
 &#8212;
 &#8230;
 &#8617;
 
 I tried converting the columns into utf8 with the command ALTER TABLE
 table_name CONVERT TO CHARACTER SET utf8; but all the special
 characters are still there.
 
 I tried converting to blob and back to utf8, and that didn't change
 anything. (I had to first drop the fulltext key to convert to blob).
 ALTER TABLE t1 CHANGE c1 c1 BLOB;
 ALTER TABLE t1 CHANGE c1 c1 TEXT CHARACTER SET utf8; The above special
 characters are still in the body column when I view the dump file with
 a hex editor.
 
 Is there a way to replace these special characters with the appropriate
 utf8 characters (e.g. &#8220; -> hex 22) within the body and intro text
 columns?
 
 Thanks,
 
 Mark
 
 On Fri, Sep 28, 2012 at 7:59 AM, Rick James rja...@yahoo-inc.com
 wrote:
 
  Thanks for that link!  That's another subtle issue I had not noted.
 
  There are so many combinations, that it is hard to say do this:
  * Incoming bytes are latin1 / utf8 / Microsquish control characters.
  * You do/don't have SET NAMES (or equivalent)
  * The database/table/column is declared latin1/utf8/other.
  * The problem is on ingestion / on retrieval.
 
  The thing mentioned involved 2 steps:
  ALTER TABLE ... MODIFY COLUMN  BINARY (or BLOB);  -- to forget any
  charset knowledge ALTER TABLE ... MODIFY COLUMN  CHARACTER SET ...;
  -- coming from BINARY, this does not check the encoding.
  (sorry, don't have the link handy)
 
   -Original Message-
   From: h...@tbbs.net [mailto:h...@tbbs.net]
   Sent: Thursday, September 27, 2012 2:24 PM
   To: Mark Phillips
   Cc: Mysql List
   Subject: Re: Need Help Converting Character Sets
  
2012/09/24 16:28 -0700, Mark Phillips 
   I have a table, Articles, of news articles (in English) with three
   text columns for the intro, body, and caption. The data came from a
   web page, and the content was cut and pasted from other sources. I
   am finding that there are some non utf-8 characters in these three
   text columns. I would like to (1) convert these text fields

Re: Need Help Converting Character Sets

2012-09-30 Thread Mark Phillips
Thanks to all of you for your very helpful suggestions! I was pulled off on
a different project for a few days, and I am now just getting back to this
one. Sorry for my absence.

First, some more information:

My table definition:
CREATE TABLE `Articles` (
`articleID` int(11) unsigned NOT NULL AUTO_INCREMENT,
`title` varchar(200) NOT NULL DEFAULT '',
`author` varchar(200) NOT NULL DEFAULT '',
`body` text NOT NULL,
`intro` text NOT NULL,
`caption` text NOT NULL,
`credits` text NOT NULL,
`articleDate` date NOT NULL DEFAULT '0000-00-00',
`imageTitle` varchar(255) NOT NULL DEFAULT '',
`imageAltText` varchar(255) NOT NULL DEFAULT '',
`imageWidth` float NOT NULL DEFAULT '1',
`imageHeight` float NOT NULL DEFAULT '1',
`imageFile` varchar(255) NOT NULL DEFAULT '',
PRIMARY KEY (`articleID`),
FULLTEXT KEY `search1` (`title`,`author`,`body`,`caption`,`credits`)
) ENGINE=MyISAM AUTO_INCREMENT=89 DEFAULT CHARSET=latin1

First problem - CHARSET should be utf8.

The data for this table comes from a web page (charset utf8). I copy/paste
word files into gedit (on linux) and then copy/paste from gedit to a text
boxes on the web page input form. I had thought I was stripping out all the
funky characters by using a simple ascii editor like gedit, but obviously
not.

After looking at the mysqldump for the table in a hex editor, I discovered
I have these characters scatter throughout the body and intro columns:
&#8220;
&#8221;
&#8217;
&#8212;
&#8230;
&#8617;

I tried converting the columns into utf8 with the command
ALTER TABLE table_name CONVERT TO CHARACTER SET utf8;
but all the special characters are still there.

I tried converting to blob and back to utf8, and that didn't change
anything. (I had to first drop the fulltext key to convert to blob).
ALTER TABLE t1 CHANGE c1 c1 BLOB;
ALTER TABLE t1 CHANGE c1 c1 TEXT CHARACTER SET utf8;
The above special characters are still in the body column when I view the
dump file with a hex editor.

Is there a way to replace these special characters with the appropriate
utf8 characters (e.g. &#8220; -> hex 22) within the body and intro text
columns?

Thanks,

Mark

On Fri, Sep 28, 2012 at 7:59 AM, Rick James rja...@yahoo-inc.com wrote:

 Thanks for that link!  That's another subtle issue I had not noted.

 There are so many combinations, that it is hard to say do this:
 * Incoming bytes are latin1 / utf8 / Microsquish control characters.
 * You do/don't have SET NAMES (or equivalent)
 * The database/table/column is declared latin1/utf8/other.
 * The problem is on ingestion / on retrieval.

 The thing mentioned involved 2 steps:
 ALTER TABLE ... MODIFY COLUMN  BINARY (or BLOB);  -- to forget any charset
 knowledge
 ALTER TABLE ... MODIFY COLUMN  CHARACTER SET ...;  -- coming from BINARY,
 this does not check the encoding.
 (sorry, don't have the link handy)

  -Original Message-
  From: h...@tbbs.net [mailto:h...@tbbs.net]
  Sent: Thursday, September 27, 2012 2:24 PM
  To: Mark Phillips
  Cc: Mysql List
  Subject: Re: Need Help Converting Character Sets
 
   2012/09/24 16:28 -0700, Mark Phillips 
  I have a table, Articles, of news articles (in English) with three text
  columns for the intro, body, and caption. The data came from a web
  page, and the content was cut and pasted from other sources. I am
  finding that there are some non utf-8 characters in these three text
  columns. I would like to (1) convert these text fields to be strict
  utf-8 and then (2) fix the input page to keep all new submissions utf-
  8.
 
  (1) For the first step, fixing the current database, I tried:
 
  update Articles set body = CONVERT(body USING ASCII);
 
  However, when I checked one of the articles I found an apostrophe had
  been converted into a question mark. (FWIW, the apostrophe was one of
  those offending non utf-8 characters):
 
  Before conversion: I stepped into the observatory’s control room ...
 
  After conversion: I stepped into the observatory?s control room...
 
  Is there a better way to accomplish my first goal, without reading each
  article and manually making the changes?
  
  I do not remember where on the MySQL website this is, but there was an
  article about converting from character sets in version 4 to those in
  version 5, when UTF-8 first was supported. It sounds to me that maybe
  the tricks shown there would be useful to you, since, in effect,
  MySQL was fooled into accepting as UTF-8 that which was not.
  Conversion to a binary string was mentioned.
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql




RE: Need Help Converting Character Sets

2012-09-28 Thread Rick James
Thanks for that link!  That's another subtle issue I had not noted.

There are so many combinations that it is hard to say "do this":
* Incoming bytes are latin1 / utf8 / Microsquish control characters.
* You do/don't have SET NAMES (or equivalent)
* The database/table/column is declared latin1/utf8/other.
* The problem is on ingestion / on retrieval.

The thing mentioned involved 2 steps:
ALTER TABLE ... MODIFY COLUMN  BINARY (or BLOB);  -- to forget any charset 
knowledge
ALTER TABLE ... MODIFY COLUMN  CHARACTER SET ...;  -- coming from BINARY, this 
does not check the encoding.
(sorry, don't have the link handy)

 -Original Message-
 From: h...@tbbs.net [mailto:h...@tbbs.net]
 Sent: Thursday, September 27, 2012 2:24 PM
 To: Mark Phillips
 Cc: Mysql List
 Subject: Re: Need Help Converting Character Sets
 
  2012/09/24 16:28 -0700, Mark Phillips 
 I have a table, Articles, of news articles (in English) with three text
 columns for the intro, body, and caption. The data came from a web
 page, and the content was cut and pasted from other sources. I am
 finding that there are some non utf-8 characters in these three text
 columns. I would like to (1) convert these text fields to be strict
 utf-8 and then (2) fix the input page to keep all new submissions utf-
 8.
 
 (1) For the first step, fixing the current database, I tried:
 
 update Articles set body = CONVERT(body USING ASCII);
 
 However, when I checked one of the articles I found an apostrophe had
 been converted into a question mark. (FWIW, the apostrophe was one of
 those offending non utf-8 characters):
 
 Before conversion: I stepped into the observatory’s control room ...
 
 After conversion: I stepped into the observatory?s control room...
 
 Is there a better way to accomplish my first goal, without reading each
 article and manually making the changes?
 
 I do not remember where on the MySQL website this is, but there was an
 article about converting from character sets in version 4 to those in
 version 5, when UTF-8 first was supported. It sounds to me that maybe
 the tricks shown there would be useful to you, since, in effect,
 MySQL was fooled into accepting as UTF-8 that which was not.
 Conversion to a binary string was mentioned.
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Need Help Converting Character Sets

2012-09-27 Thread hsv
 2012/09/24 16:28 -0700, Mark Phillips 
I have a table, Articles, of news articles (in English) with three text
columns for the intro, body, and caption. The data came from a web page,
and the content was cut and pasted from other sources. I am finding that
there are some non utf-8 characters in these three text columns. I would
like to (1) convert these text fields to be strict utf-8 and then (2) fix
the input page to keep all new submissions utf-8.

(1) For the first step, fixing the current database, I tried:

update Articles set body = CONVERT(body USING ASCII);

However, when I checked one of the articles I found an apostrophe had been
converted into a question mark. (FWIW, the apostrophe was one of those
offending non utf-8 characters):

Before conversion: I stepped into the observatory’s control room ...

After conversion: I stepped into the observatory?s control room...

Is there a better way to accomplish my first goal, without reading each
article and manually making the changes? 

I do not remember where on the MySQL website this is, but there was an article 
about converting from character sets in version 4 to those in version 5, when 
UTF-8 first was supported. It sounds to me that maybe the tricks shown there 
would be useful to you, since, in effect, MySQL was fooled into
accepting as UTF-8 that which was not. Conversion to a binary string was
mentioned.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Need Help Converting Character Sets

2012-09-27 Thread Derek Downey
To go along with what Rick is saying, this link might help you: 
http://dba.stackexchange.com/questions/10467/how-to-convert-control-characters-in-mysql-from-latin1-to-utf-8

I remember doing a bunch of converting HEX() control characters (such as an 
apostrophe copied from a Word document) before attempting the SET NAMES.

Derek Downey


On Sep 24, 2012, at 7:53 PM, Rick James wrote:

 If you have a mixture of encodings, you are in deep doodoo.
 
 This page describes some debugging techniques and some issues:
   http://mysql.rjweb.org/doc.php/charcoll
 
 That apostrophe might be MicroSquish's smart quote.
 
 Can you provide SELECT HEX(the_field) FROM... ?  We (or the above page) might 
 be able to interpret the character.
 
 To prevent future char set issues, you must know what encoding the source is. 
  Then, with SET NAMES (etc), you tell mysqld that the bytes you have in hand 
 are encoded that way.  mysqld will then convert those bytes to the character 
 set declared for the column they go in.  (Presumably, all the text columns 
 will be declared utf8 or utf8mb4.)
 
 -Original Message-
 From: Mark Phillips [mailto:m...@phillipsmarketing.biz]
 Sent: Monday, September 24, 2012 4:28 PM
 To: Mysql List
 Subject: Need Help Converting Character Sets
 
 I have a table, Articles, of news articles (in English) with three text
 columns for the intro, body, and caption. The data came from a web
 page, and the content was cut and pasted from other sources. I am
 finding that there are some non utf-8 characters in these three text
 columns. I would like to (1) convert these text fields to be strict
 utf-8 and then (2) fix the input page to keep all new submissions utf-
 8.
 
  (1) For the first step, fixing the current database, I tried:
 
 update Articles set body = CONVERT(body USING ASCII);
 
 However, when I checked one of the articles I found an apostrophe had
 been converted into a question mark. (FWIW, the apostrophe was one of
 those offending non utf-8 characters):
 
  Before conversion: I stepped into the observatory’s control room ...
 
 After conversion: I stepped into the observatory?s control room...
 
 Is there a better way to accomplish my first goal, without reading each
 article and manually making the changes?
 
 (2) For the second goal, insuring that all future articles are utf-8,
 do I need to change the table structure or the insert query to insure I
 get the correct utf-8 characters into the database?
 
 Thanks,
 
 Mark
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql
 



RE: Need Help Converting Character Sets

2012-09-24 Thread Rick James
If you have a mixture of encodings, you are in deep doodoo.

This page describes some debugging techniques and some issues:
   http://mysql.rjweb.org/doc.php/charcoll

That apostrophe might be MicroSquish's smart quote.

Can you provide SELECT HEX(the_field) FROM... ?  We (or the above page) might 
be able to interpret the character.
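A concrete form of that diagnostic, using the Articles table from the
original post (LEFT() just keeps the output readable):

SELECT articleID, HEX(LEFT(body, 40))
FROM Articles
LIMIT 5;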

To prevent future char set issues, you must know what encoding the source is.  
Then, with SET NAMES (etc), you tell mysqld that the bytes you have in hand are 
encoded that way.  mysqld will then convert those bytes to the character set 
declared for the column they go in.  (Presumably, all the text columns will be 
declared utf8 or utf8mb4.)
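On the ingestion side, that amounts to declaring the connection's encoding
before inserting. A sketch, assuming the form posts UTF-8 and the columns
have been converted to utf8 (the literal values are placeholders):

SET NAMES utf8;  -- the bytes this client sends and expects are utf8
INSERT INTO Articles (title, body, intro) VALUES ('...', '...', '...');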

 -Original Message-
 From: Mark Phillips [mailto:m...@phillipsmarketing.biz]
 Sent: Monday, September 24, 2012 4:28 PM
 To: Mysql List
 Subject: Need Help Converting Character Sets
 
 I have a table, Articles, of news articles (in English) with three text
 columns for the intro, body, and caption. The data came from a web
 page, and the content was cut and pasted from other sources. I am
 finding that there are some non utf-8 characters in these three text
 columns. I would like to (1) convert these text fields to be strict
 utf-8 and then (2) fix the input page to keep all new submissions utf-
 8.
 
  (1) For the first step, fixing the current database, I tried:
 
 update Articles set body = CONVERT(body USING ASCII);
 
 However, when I checked one of the articles I found an apostrophe had
 been converted into a question mark. (FWIW, the apostrophe was one of
 those offending non utf-8 characters):
 
  Before conversion: I stepped into the observatory’s control room ...
 
 After conversion: I stepped into the observatory?s control room...
 
 Is there a better way to accomplish my first goal, without reading each
 article and manually making the changes?
 
 (2) For the second goal, insuring that all future articles are utf-8,
 do I need to change the table structure or the insert query to insure I
 get the correct utf-8 characters into the database?
 
 Thanks,
 
 Mark

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: query help

2012-09-13 Thread Stillman, Benjamin
I think this will get you there:

SELECT LEAD, COUNT(*) FROM ORDERS GROUP BY LEAD;

It'll give you something more like:

| LEAD | COUNT(*) |
| F    |       44 |
| S    |      122 |
| R    |       32 |



-Original Message-
From: Richard Reina [mailto:gatorre...@gmail.com]
Sent: Thursday, September 13, 2012 9:51 AM
To: mysql@lists.mysql.com
Subject: query help

I have a table like this:

|ORDERS|
| ID  | DATE    | QNT | LEAD |
| 342 | 8-12-12 | 32  | F    |
| 345 | 8-15-12 | 12  | S    |
| 349 | 8-16-12 | 9   | R    |

I am looking for a way to query it with counts by the LEAD column, in order to 
tell what the number of each lead type is, so that I get something like this:

F_LEADS  S_LEADS  R_LEADS
     44      122       32

Is this possible?  If so can anyone help with syntax?

Thanks,

Richard

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql




Notice: This communication may contain privileged and/or confidential 
information. If you are not the intended recipient, please notify the sender by 
email, and immediately delete the message and any attachments without copying 
or disclosing them. LBI may, for any reason, intercept, access, use, and 
disclose any information that is communicated by or through, or which is stored 
on, its networks, applications, services, and devices.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: query help

2012-09-13 Thread Rick James
Transposing is ugly in SQL.  It is better done in some other language (PHP, 
Java, ...)

If you must do it in SQL, search for
pivot Peter Brawley
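For the fixed set of three LEAD values, one common pivot is conditional
aggregation; a sketch that relies on MySQL treating a boolean expression as
0 or 1 (on much newer MySQL versions LEAD is a reserved word and would need
backticks):

SELECT SUM(LEAD = 'F') AS F_LEADS,
       SUM(LEAD = 'S') AS S_LEADS,
       SUM(LEAD = 'R') AS R_LEADS
FROM ORDERS;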

 -Original Message-
 From: Stillman, Benjamin [mailto:bstill...@limitedbrands.com]
 Sent: Thursday, September 13, 2012 7:09 AM
 To: 'Richard Reina'; mysql@lists.mysql.com
 Subject: RE: query help
 
 I think this will get you there:
 
 SELECT LEAD, COUNT(*) FROM ORDERS GROUP BY LEAD;
 
 It'll give you something more like:
 
 | LEAD | COUNT(*) |
 | F    |       44 |
 | S    |      122 |
 | R    |       32 |
 
 
 
 -Original Message-
 From: Richard Reina [mailto:gatorre...@gmail.com]
 Sent: Thursday, September 13, 2012 9:51 AM
 To: mysql@lists.mysql.com
 Subject: query help
 
 I have a table like this:
 
 |ORDERS|
 | ID  | DATE    | QNT | LEAD |
 | 342 | 8-12-12 | 32  | F    |
 | 345 | 8-15-12 | 12  | S    |
 | 349 | 8-16-12 | 9   | R    |
 
 I am looking for a way to query it with counts by the LEAD column, in
 order to tell what the number of each lead type is, so that I get
 something like this:
 
 F_LEADS  S_LEADS  R_LEADS
      44      122       32
 
 Is this possible?  If so can anyone help with syntax?
 
 Thanks,
 
 Richard
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql
 
 
 
 
 Notice: This communication may contain privileged and/or confidential
 information. If you are not the intended recipient, please notify the
 sender by email, and immediately delete the message and any attachments
 without copying or disclosing them. LBI may, for any reason, intercept,
 access, use, and disclose any information that is communicated by or
 through, or which is stored on, its networks, applications, services,
 and devices.
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: query help

2012-08-07 Thread hsv
 2012/07/31 15:02 -0700, Haluk Karamete 
So, in a case like this

shop.orders.32442
shop.orders.82000
shop.orders.34442

It would be the record whose source_recid  is shop.orders.82000. Why? Cause
82000 happens to be the largest integer.

Now, if they are always 5-digit-long integers, try ending with
ORDER BY SUBSTRING_INDEX(source_recid, '.', -1)
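If the trailing IDs are not all the same length, plain string order will
mislead; casting the suffix to a number avoids that. A sketch using the
poster's placeholder names:

SELECT source_recid
FROM mytable
WHERE source_recid LIKE 'shop.orders.%'
ORDER BY CAST(SUBSTRING_INDEX(source_recid, '.', -1) AS UNSIGNED) DESC
LIMIT 1;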


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: query help

2012-08-01 Thread Rick James
Might need some type coercion:

SELECT SUBSTRING(recid, 13, column size) AS numbers FROM table
 ORDER BY SUBSTRING(recid, 13, column size)+0 DESC

 -Original Message-
 From: Paul Halliday [mailto:paul.halli...@gmail.com]
 Sent: Tuesday, July 31, 2012 3:27 PM
 To: Haluk Karamete
 Cc: MySQL
 Subject: Re: query help
 
 On Tue, Jul 31, 2012 at 7:02 PM, Haluk Karamete
 halukkaram...@gmail.com wrote:
  I've got a text field called source_recid. It stores half string half
  number like strings in it.
 
  Example
 
  shop.orders.32442
 
  the syntax is DATABASENAME.TABLENAME.RECID
 
  My goal is to scan this col and find out the biggest RECID ( the
  integer) in it.
 
  So, in a case like this
 
  shop.orders.32442
  shop.orders.82000
  shop.orders.34442
 
  It would be the record whose source_recid  is shop.orders.82000. Why?
  Cause
  82000 happens to be the largest integer.
 
  What SQL statement would get me that record?
 
  One option to this is to create a new column ( the_ids ) and move all
  the integers in it and then run something like this
 
  select source_recid from mytable where source_recid like
 'shop.orders.%'
  order by the_ids DESC LIMIT 1
 
  Is there a way to pull this off without going thru this step?
 
 Would substring work?
 
 SELECT SUBSTRING(recid, 13, column size) AS numbers FROM table ORDER
 BY numbers DESC
 
 
 --
 Paul Halliday
 http://www.pintumbler.org/
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: query help

2012-08-01 Thread Vikas Shukla
Hi,

Use LIMIT 1 to limit the output to a single record.

Regards,

Vikas Shukla


On Wed, Aug 1, 2012 at 3:56 AM, Paul Halliday paul.halli...@gmail.com wrote:

  On Tue, Jul 31, 2012 at 7:02 PM, Haluk Karamete halukkaram...@gmail.com
 wrote:
  I've got a text field called source_recid. It stores half string half
  number like strings in it.
 
  Example
 
  shop.orders.32442
 
  the syntax is DATABASENAME.TABLENAME.RECID
 
  My goal is to scan this col and find out the biggest RECID ( the
  integer) in it.
 
  So, in a case like this
 
  shop.orders.32442
  shop.orders.82000
  shop.orders.34442
 
  It would be the record whose source_recid  is shop.orders.82000. Why?
 Cause
  82000 happens to be the largest integer.
 
  What SQL statement would get me that record?
 
  One option to this is to create a new column ( the_ids ) and move all the
  integers in it and then run something like this
 
  select source_recid from mytable where source_recid like 'shop.orders.%'
  order by the_ids DESC LIMIT 1
 
  Is there a way to pull this off without going thru this step?

 Would substring work?

 SELECT SUBSTRING(recid, 13, column size) AS numbers FROM table ORDER
 BY numbers DESC


 --
 Paul Halliday
 http://www.pintumbler.org/

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: query help

2012-07-31 Thread Paul Halliday
On Tue, Jul 31, 2012 at 7:02 PM, Haluk Karamete halukkaram...@gmail.com wrote:
 I've got a text field called source_recid. It stores half string half
 number like strings in it.

 Example

 shop.orders.32442

 the syntax is DATABASENAME.TABLENAME.RECID

 My goal is to scan this col and find out the biggest RECID ( the
 integer) in it.

 So, in a case like this

 shop.orders.32442
 shop.orders.82000
 shop.orders.34442

 It would be the record whose source_recid  is shop.orders.82000. Why? Cause
 82000 happens to be the largest integer.

 What SQL statement would get me that record?

 One option to this is to create a new column ( the_ids ) and move all the
 integers in it and then run something like this

 select source_recid from mytable where source_recid like 'shop.orders.%'
 order by the_ids DESC LIMIT 1

 Is there a way to pull this off without going thru this step?

Would substring work?

SELECT SUBSTRING(recid, 13, column size) AS numbers FROM table ORDER
BY numbers DESC


-- 
Paul Halliday
http://www.pintumbler.org/

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: NoSQL help

2012-06-14 Thread Manivannan S.
I tried with the MyISAM engine also, but it is also taking more time to generate 
the report. In my database I have 8 InnoDB tables, and I am joining 4 of those 
tables to get the report.

I am keeping 60 days of records because the user will try to generate reports 
over those 60 days at second, minute, hourly, weekly and monthly granularity.

From: Ananda Kumar [mailto:anan...@gmail.com]
Sent: Thursday, June 14, 2012 12:32 AM
To: Rick James
Cc: Johan De Meersman; Manivannan S.; mysql@lists.mysql.com
Subject: Re: NoSQL help

Did you try with MyISAM tables?
They are supposed to be good for reporting requirements.
On Wed, Jun 13, 2012 at 11:52 PM, Rick James rja...@yahoo-inc.com wrote:
I'll second Johan's comments.

Count the disk hits!

One minor change:  Don't store averages in the summary table; instead store the 
SUM().  That lets you get the mathematically correct AVERAGE over any time 
range via
SUM(sum_foo) / SUM(count_foo)

Switching between MySQL and Mongo requires rewriting _all_ of the relevant code.

<opinion> NoSQL will be no better than MySQL for 150GB. </opinion>  Count the 
disk hits!

I recently built a system that topped out at 350GB (90 days' data).  It 
involved hourly ingestion of a few GB of data and a variety of reports.  The 
prototype showed that most reports would take about an hour to run.  Not good.  
The final product, with summary tables, lets the reports be run on-demand and 
online and each takes only a few seconds.  By careful use of MEMORY tables, 
LOAD DATA, etc, the ingestion takes 5 minutes (each hour) for the raw data and 
2 minutes (total) for the 7 summary tables.  PARTITIONing was vital for the 
design.  Once an hour a new partition is populated; once a day, 24 hourly 
partitions are rolled into a new daily partition and the 90-day old partition 
is DROPped.


 -Original Message-
 From: Johan De Meersman [mailto:vegiv...@tuxera.be]
 Sent: Wednesday, June 13, 2012 6:20 AM
 To: Manivannan S.
 Cc: mysql@lists.mysql.com
 Subject: Re: NoSQL help


 - Original Message -
  From: Manivannan S. manivanna...@spanservices.com
 
  Hi all,
 
  [lots of data]
  [slow reports]
  [wooo NoSQL magic]

 Not that I want to discourage you, but my standard first question is
 why do you think NoSQL (let alone any specific product) is the right
 solution? :-)

 Don't get me wrong, it might be; but from what little I now know about
 your environment, it sounds like applying some data warehousing
 techniques might suffice - and being the cynical dinosaur that I am, I
 have a healthy reluctance about welding new technology onto a stable
 environment.

 To speed up reporting (and note that these techniques are often applied
 even when implementing NoSQL solutions, too) it is usually a good first
 step to set up a process of data summarization.

 Basically, you pre-calculate averages, medians, groupings, whatever you
 need for your reports; and your job also saves the last record IDs it's
 processed; then on the next run, you only read the new records and
 update your summary tables to incorporate the new data.

 Suppose I have a table like this:

 ID | Val
 ---+----
  1 |   1
  2 |   7
  3 |   5
  4 |  13

 I want to report the average on a daily basis, and calculating that
 over those rows is unbearably slow because I'm running the process on a
 wristwatch from 1860 :-)

 So I get a summary table, calculate (1+7+5+13)/4 = 6.5 and that then
 gets a record saying this:

 Avg | elementCount | lastSeen
 ----+--------------+---------
 6.5 |            4 |        4

 Now, over the course of the day, the elements 4, 17 and 2 get added
 with sequential row numbers. Instead of calculating
 (1+7+5+13+4+17+2)/7, which would be slow; I can substitute the already
 summarized data by Avg*elementCount. Thus, I calculate (6.5*4 +
 4+17+2)/7 = 7, which is a lot faster, and my summary table now looks
 like this:

 Avg | elementCount | lastSeen
 ----+--------------+---------
   7 |            7 |        7

 This is of course a stupid example, but it saves you a lot of time if
 you already have the summary of several thousand elements and only need
 to update it for a handful. Similar tricks are possible for a lot of
 typical reporting stuff - you don't need to re-calculate data for past
 months over and over again, for instance - and that's what makes your
 reports run fast.


 Just my 2 cents :-)
 /johan

 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql

DISCLAIMER: This email message and all attachments are confidential and may 
contain information that is privileged, confidential or exempt from disclosure 
under applicable law.  If you are not the intended recipient, you

Re: NoSQL help

2012-06-14 Thread Ananda Kumar
Can you share the SQL, explain plan, indexes etc., and the
SHOW FULL PROCESSLIST output while the SQL is running?

On Thu, Jun 14, 2012 at 3:03 PM, Manivannan S. 
manivanna...@spanservices.com wrote:

 I tried with myisam engine also. But it also taking more time to generate
 the report. In my database I am having 8 innodb tables and at the same time
 I am joining 4 tables to get the report.

 I am maintaining 60days records because the user will try to generate the
 report out of 60 days in terms of second, minute, hourly, weekly and
 Monthly report also.

 From: Ananda Kumar [mailto:anan...@gmail.com]
 Sent: Thursday, June 14, 2012 12:32 AM
 To: Rick James
 Cc: Johan De Meersman; Manivannan S.; mysql@lists.mysql.com
 Subject: Re: NoSQL help

 Did you try with myisam tables.
 They are supposed to be good for reporting requirement
 On Wed, Jun 13, 2012 at 11:52 PM, Rick James rja...@yahoo-inc.com wrote:
 I'll second Johan's comments.

 Count the disk hits!

 One minor change:  Don't store averages in the summary table; instead
 store the SUM().  That lets you get the mathematically correct AVERAGE over
 any time range via
 SUM(sum_foo) / SUM(count_foo)

 Switching between MySQL and Mongo requires rewriting _all_ of the relevant
 code.

 <opinion> NoSQL will be no better than MySQL for 150GB. </opinion>  Count
 the disk hits!

 I recently built a system that topped out at 350GB (90 days' data).  It
 involved hourly ingestion of a few GB of data and a variety of reports.
  The prototype showed that most reports would take about an hour to run.
  Not good.  The final product, with summary tables, lets the reports be run
 on-demand and online and each takes only a few seconds.  By careful use of
 MEMORY tables, LOAD DATA, etc, the ingestion takes 5 minutes (each hour)
 for the raw data and 2 minutes (total) for the 7 summary tables.
  PARTITIONing was vital for the design.  Once an hour a new partition is
 populated; once a day, 24 hourly partitions are rolled into a new daily
 partition and the 90-day old partition is DROPped.


  -Original Message-
  From: Johan De Meersman [mailto:vegiv...@tuxera.be]
  Sent: Wednesday, June 13, 2012 6:20 AM
  To: Manivannan S.
  Cc: mysql@lists.mysql.com
  Subject: Re: NoSQL help


  - Original Message -
   From: Manivannan S. manivanna...@spanservices.com
  
   Hi all,
  
   [lots of data]
   [slow reports]
   [wooo NoSQL magic]
 
  Not that I want to discourage you, but my standard first question is
  why do you think NoSQL (let alone any specific product) is the right
  solution? :-)
 
  Don't get me wrong, it might be; but from what little I now know about
  your environment, it sounds like applying some data warehousing
  techniques might suffice - and being the cynical dinosaur that I am, I
  have a healthy reluctance about welding new technology onto a stable
  environment.
 
  To speed up reporting (and note that these techniques are often applied
  even when implementing NoSQL solutions, too) it is usually a good first
  step to set up a process of data summarization.
 
  Basically, you pre-calculate averages, medians, groupings, whatever you
  need for your reports; and your job also saves the last record IDs it's
  processed; then on the next run, you only read the new records and
  update your summary tables to incorporate the new data.
 
  Suppose I have a table like this:
 
  ID | Val
  ---+----
   1 |   1
   2 |   7
   3 |   5
   4 |  13
 
  I want to report the average on a daily basis, and calculating that
  over those rows is unbearably slow because I'm running the process on a
  wristwatch from 1860 :-)
 
  So I get a summary table, calculate (1+7+5+13)/4 = 6.5 and that then
  gets a record saying this:
 
  Avg | elementCount | lastSeen
  ----+--------------+---------
  6.5 |            4 |        4
 
  Now, over the course of the day, the elements 4, 17 and 2 get added
  with sequential row numbers. Instead of calculating
  (1+7+5+13+4+17+2)/7, which would be slow; I can substitute the already
  summarized data by Avg*elementCount. Thus, I calculate (6.5*4 +
  4+17+2)/7 = 7, which is a lot faster, and my summary table now looks
  like this:
 
  Avg | elementCount | lastSeen
  ----+--------------+---------
    7 |            7 |        7
 
  This is of course a stupid example, but it saves you a lot of time if
  you already have the summary of several thousand elements and only need
  to update it for a handful. Similar tricks are possible for a lot of
  typical reporting stuff - you don't need to re-calculate data for past
  months over and over again, for instance - and that's what makes your
  reports run fast.
 
 
  Just my 2 cents :-)
  /johan
 
  --
  Bier met grenadyn
  Is als mosterd by den wyn
  Sy die't drinkt, is eene kwezel
  Hy die't drinkt, is ras een ezel
 
  --
  MySQL General Mailing List
  For list archives

RE: NoSQL help

2012-06-14 Thread Manivannan S.
id | select_type | table | type | possible_keys                                                           | key                    | key_len | ref                          | rows   | Extra
 1 | SIMPLE      | ibf   | ALL  | ibf_MsgId                                                               | \N                     | \N      | \N                           | 160944 |
 1 | SIMPLE      | pl    | ref  | idx_unique_key_ib_msg,index_message_id,index_message_processing_status | idx_unique_key_ib_msg  | 180     | reports.ibf.Message_Id,const | 1      | Using where; Using index
 1 | SIMPLE      | tl    | ref  | idx_unique_key_ib_text,index_message_id                                | idx_unique_key_ib_text | 153     | reports.pl.Message_Id        | 1      |
 1 | SIMPLE      | xl    | ref  | idx_unique_key_ib_xml,index_message_id                                 | idx_unique_key_ib_xml  | 153     | reports.pl.Message_Id        | 1      | Using where


This is the execution plan for 1.5 million records.



From: Ananda Kumar [mailto:anan...@gmail.com]
Sent: Thursday, June 14, 2012 3:33 PM
To: Manivannan S.
Cc: mysql@lists.mysql.com
Subject: Re: NoSQL help

can u share the sql, explain plan, indexes etc,
show full processlist out put when the sql's are running
On Thu, Jun 14, 2012 at 3:03 PM, Manivannan S. manivanna...@spanservices.com wrote:
I tried with myisam engine also. But it also taking more time to generate the 
report. In my database I am having 8 innodb tables and at the same time I am 
joining 4 tables to get the report.

I am maintaining 60days records because the user will try to generate the 
report out of 60 days in terms of second, minute, hourly, weekly and Monthly 
report also.

From: Ananda Kumar [mailto:anan...@gmail.com]
Sent: Thursday, June 14, 2012 12:32 AM
To: Rick James
Cc: Johan De Meersman; Manivannan S.; mysql@lists.mysql.com
Subject: Re: NoSQL help

Did you try with myisam tables.
They are supposed to be good for reporting requirement
On Wed, Jun 13, 2012 at 11:52 PM, Rick James rja...@yahoo-inc.com wrote:
I'll second Johan's comments.

Count the disk hits!

One minor change:  Don't store averages in the summary table; instead store the 
SUM().  That lets you get the mathematically correct AVERAGE over any time 
range via
SUM(sum_foo) / SUM(count_foo)

Switching between MySQL and Mongo requires rewriting _all_ of the relevant code.

<opinion> NoSQL will be no better than MySQL for 150GB. </opinion>  Count the 
disk hits!

I recently built a system that topped out at 350GB (90 days' data).  It 
involved hourly ingestion of a few GB of data and a variety of reports.  The 
prototype showed that most reports would take about an hour to run.  Not good.  
The final product, with summary tables, lets the reports be run on-demand and 
online and each takes only a few seconds.  By careful use of MEMORY tables, 
LOAD DATA, etc, the ingestion takes 5 minutes (each hour) for the raw data and 
2 minutes (total) for the 7 summary tables.  PARTITIONing was vital for the 
design.  Once an hour a new partition is populated; once a day, 24 hourly 
partitions are rolled into a new daily partition and the 90-day old partition 
is DROPped.


 -Original Message-
 From: Johan De Meersman [mailto:vegiv...@tuxera.be]
 Sent: Wednesday, June 13, 2012 6:20 AM
 To: Manivannan S.
 Cc: mysql@lists.mysql.com
 Subject: Re: NoSQL help


 - Original Message -
  From: Manivannan S. manivanna...@spanservices.com
 
  Hi all,
 
  [lots of data]
  [slow reports]
  [wooo NoSQL magic]

 Not that I want to discourage you, but my standard first question is
 why do you think NoSQL (let alone any specific product) is the right
 solution? :-)

 Don't get me wrong, it might be; but from what little I now know about
 your environment, it sounds like applying some data warehousing
 techniques might suffice - and being the cynical dinosaur that I am, I
 have a healthy reluctance about welding new technology onto a stable
 environment.

 To speed up reporting (and note that these techniques are often applied
 even when implementing NoSQL solutions, too) it is usually a good first
 step to set up a process of data summarization.

 Basically, you pre-calculate averages, medians, groupings, whatever you
 need for your reports; and your job also saves the last record IDs it's
 processed; then on the next run, you only read the new records and
 update your summary tables to incorporate the new data.

 Suppose I have a table like this:

 ID | Val
 ---+----
  1 |   1
  2 |   7
  3 |   5
  4 |  13

 I want to report the average on a daily basis, and calculating that
 over those rows is unbearably slow because I'm running the process on a
 wristwatch from 1860 :-)

 So I get a summary table, calculate (1+7+5+13)/4 = 6.5 and that then
 gets a record saying this:

 Avg | elementCount | lastSeen

RE: NoSQL help

2012-06-14 Thread Manivannan S.
id | select_type | table | type | possible_keys                                                           | key                    | key_len | ref                          | rows   | Extra
 1 | SIMPLE      | ibf   | ALL  | ibf_MsgId                                                               | \N                     | \N      | \N                           | 160944 |
 1 | SIMPLE      | pl    | ref  | idx_unique_key_ib_msg,index_message_id,index_message_processing_status | idx_unique_key_ib_msg  | 180     | reports.ibf.Message_Id,const | 1      | Using where; Using index
 1 | SIMPLE      | tl    | ref  | idx_unique_key_ib_text,index_message_id                                | idx_unique_key_ib_text | 153     | reports.pl.Message_Id        | 1      |
 1 | SIMPLE      | xl    | ref  | idx_unique_key_ib_xml,index_message_id                                 | idx_unique_key_ib_xml  | 153     | reports.pl.Message_Id        | 1      | Using where


Sorry for the previous mail. This is my execution plan for 1.5 million records.

From: Ananda Kumar [mailto:anan...@gmail.com]
Sent: Thursday, June 14, 2012 3:33 PM
To: Manivannan S.
Cc: mysql@lists.mysql.com
Subject: Re: NoSQL help

can u share the sql, explain plan, indexes etc,
show full processlist out put when the sql's are running
On Thu, Jun 14, 2012 at 3:03 PM, Manivannan S. manivanna...@spanservices.com wrote:
I tried with myisam engine also. But it also taking more time to generate the 
report. In my database I am having 8 innodb tables and at the same time I am 
joining 4 tables to get the report.

I am maintaining 60days records because the user will try to generate the 
report out of 60 days in terms of second, minute, hourly, weekly and Monthly 
report also.

From: Ananda Kumar [mailto:anan...@gmail.com]
Sent: Thursday, June 14, 2012 12:32 AM
To: Rick James
Cc: Johan De Meersman; Manivannan S.; mysql@lists.mysql.com
Subject: Re: NoSQL help

Did you try with myisam tables.
They are supposed to be good for reporting requirement
On Wed, Jun 13, 2012 at 11:52 PM, Rick James rja...@yahoo-inc.com wrote:
I'll second Johan's comments.

Count the disk hits!

One minor change:  Don't store averages in the summary table; instead store the 
SUM().  That lets you get the mathematically correct AVERAGE over any time 
range via
SUM(sum_foo) / SUM(count_foo)

Switching between MySQL and Mongo requires rewriting _all_ of the relevant code.

<opinion> NoSQL will be no better than MySQL for 150GB. </opinion>  Count the 
disk hits!

I recently built a system that topped out at 350GB (90 days' data).  It 
involved hourly ingestion of a few GB of data and a variety of reports.  The 
prototype showed that most reports would take about an hour to run.  Not good.  
The final product, with summary tables, lets the reports be run on-demand and 
online and each takes only a few seconds.  By careful use of MEMORY tables, 
LOAD DATA, etc, the ingestion takes 5 minutes (each hour) for the raw data and 
2 minutes (total) for the 7 summary tables.  PARTITIONing was vital for the 
design.  Once an hour a new partition is populated; once a day, 24 hourly 
partitions are rolled into a new daily partition and the 90-day old partition 
is DROPped.


 -Original Message-
 From: Johan De Meersman [mailto:vegiv...@tuxera.be]
 Sent: Wednesday, June 13, 2012 6:20 AM
 To: Manivannan S.
 Cc: mysql@lists.mysql.com
 Subject: Re: NoSQL help


 - Original Message -
  From: Manivannan S. manivanna...@spanservices.com
 
  Hi all,
 
  [lots of data]
  [slow reports]
  [wooo NoSQL magic]

 Not that I want to discourage you, but my standard first question is
 why do you think NoSQL (let alone any specific product) is the right
 solution? :-)

 Don't get me wrong, it might be; but from what little I now know about
 your environment, it sounds like applying some data warehousing
 techniques might suffice - and being the cynical dinosaur that I am, I
 have a healthy reluctance about welding new technology onto a stable
 environment.

 To speed up reporting (and note that these techniques are often applied
 even when implementing NoSQL solutions, too) it is usually a good first
 step to set up a process of data summarization.

 Basically, you pre-calculate averages, medians, groupings, whatever you
 need for your reports; and your job also saves the last record IDs it's
 processed; then on the next run, you only read the new records and
 update your summary tables to incorporate the new data.

 Suppose I have a table like this:

  ID | Val
  ---+----
   1 |   1
   2 |   7
   3 |   5
   4 |  13

 I want to report the average on a daily basis, and calculating that
 over those rows is unbearably slow because I'm running the process on a
 wristwatch from 1860 :-)

 So I get a summary table, calculate (1+7+5+13)/4 = 6.5 and that then
 gets a record saying this:

  Avg | elementCount | lastSeen
  ----+--------------+---------
  6.5 |            4 |        4

Re: NoSQL help

2012-06-14 Thread Ananda Kumar
As seen below,
Full table scan is happening on table ibf.
Can you share the indexes on this table and also the complete SQL?

On Thu, Jun 14, 2012 at 4:17 PM, Manivannan S. 
manivanna...@spanservices.com wrote:

 id | select_type | table | type | possible_keys                                                           | key                    | key_len | ref                          | rows   | Extra
  1 | SIMPLE      | ibf   | ALL  | ibf_MsgId                                                               | \N                     | \N      | \N                           | 160944 |
  1 | SIMPLE      | pl    | ref  | idx_unique_key_ib_msg,index_message_id,index_message_processing_status | idx_unique_key_ib_msg  | 180     | reports.ibf.Message_Id,const | 1      | Using where; Using index
  1 | SIMPLE      | tl    | ref  | idx_unique_key_ib_text,index_message_id                                | idx_unique_key_ib_text | 153     | reports.pl.Message_Id        | 1      |
  1 | SIMPLE      | xl    | ref  | idx_unique_key_ib_xml,index_message_id                                 | idx_unique_key_ib_xml  | 153     | reports.pl.Message_Id        | 1      | Using where

 Sorry for the previous mail. This is my execution plan for 1.5 million
 records.

 From: Ananda Kumar [mailto:anan...@gmail.com]
 Sent: Thursday, June 14, 2012 3:33 PM

 To: Manivannan S.
 Cc: mysql@lists.mysql.com
 Subject: Re: NoSQL help

 can u share the sql, explain plan, indexes etc,

 show full processlist out put when the sql's are running

 On Thu, Jun 14, 2012 at 3:03 PM, Manivannan S. 
 manivanna...@spanservices.com wrote:

 I tried with myisam engine also. But it also taking more time to generate
 the report. In my database I am having 8 innodb tables and at the same time
 I am joining 4 tables to get the report.

 I am maintaining 60days records because the user will try to generate the
 report out of 60 days in terms of second, minute, hourly, weekly and
 Monthly report also.

 From: Ananda Kumar [mailto:anan...@gmail.com]
 Sent: Thursday, June 14, 2012 12:32 AM
 To: Rick James
 Cc: Johan De Meersman; Manivannan S.; mysql@lists.mysql.com
 Subject: Re: NoSQL help


 Did you try with myisam tables.
 They are supposed to be good for reporting requirement

 On Wed, Jun 13, 2012 at 11:52 PM, Rick James rja...@yahoo-inc.com wrote:
 I'll second Johan's comments.

 Count the disk hits!

 One minor change:  Don't store averages in the summary table; instead
 store the SUM().  That lets you get the mathematically correct AVERAGE over
 any time range via
 SUM(sum_foo) / SUM(count_foo)

 Switching between MySQL and Mongo requires rewriting _all_ of the relevant
 code.

 <opinion> NoSQL will be no better than MySQL for 150GB. </opinion>  Count
 the disk hits!

 I recently built a system that topped out at 350GB (90 days' data).  It
 involved hourly ingestion of a few GB of data and a variety of reports.
  The prototype showed that most reports would take about an hour to run.
  Not good.  The final product, with summary tables, lets the reports be run
 on-demand and online and each takes only a few seconds.  By careful use of
 MEMORY tables, LOAD DATA, etc, the ingestion takes 5 minutes (each hour)
 for the raw data and 2 minutes (total) for the 7 summary tables.
  PARTITIONing was vital for the design.  Once an hour a new partition is
 populated; once a day, 24 hourly partitions are rolled into a new daily
 partition and the 90-day old partition is DROPped.


  -Original Message-

  From: Johan De Meersman [mailto:vegiv...@tuxera.be]
  Sent: Wednesday, June 13, 2012 6:20 AM
  To: Manivannan S.

  Cc: mysql@lists.mysql.com
  Subject: Re: NoSQL help


  - Original Message -

   From: Manivannan S. manivanna...@spanservices.com
  
   Hi all,
  
   [lots of data]
   [slow reports]
   [wooo NoSQL magic]
 
  Not that I want to discourage you, but my standard first question is
  why do you think NoSQL (let alone any specific product) is the right
  solution? :-)
 
  Don't get me wrong, it might be; but from what little I now know about
  your environment, it sounds like applying some data warehousing
  techniques might suffice - and being the cynical dinosaur that I am, I
  have a healthy reluctance about welding new technology onto a stable
  environment.
 
  To speed up reporting (and note that these techniques are often applied
  even when implementing NoSQL solutions, too) it is usually a good first
  step to set up a process of data summarization.
 
  Basically, you pre-calculate averages, medians, groupings, whatever you
  need for your reports; and your job also saves the last record IDs it's
  processed; then on the next run, you only read the new records and
  update your summary tables to incorporate the new data.
 
  Suppose I have a table like this:
 
  ID | Val
  ---+----
   1 |   1
   2 |   7
   3 |   5
   4 |  13
 
  I want to report the average on a daily basis, and calculating that
  over those

RE: NoSQL help

2012-06-14 Thread Rick James
SHOW CREATE TABLE
SHOW TABLE STATUS
EXPLAIN SELECT ... \G


 -Original Message-
 From: Ananda Kumar [mailto:anan...@gmail.com]
 Sent: Thursday, June 14, 2012 4:04 AM
 To: Manivannan S.
 Cc: mysql@lists.mysql.com
 Subject: Re: NoSQL help
 
 As seen below,
 Full table scan is happening on table ibf.
 Can share the indexes on this table and also the complete sql
 
 On Thu, Jun 14, 2012 at 4:17 PM, Manivannan S. 
 manivanna...@spanservices.com wrote:
 
  id | select_type | table | type | possible_keys                                                           | key                    | key_len | ref                          | rows   | Extra
   1 | SIMPLE      | ibf   | ALL  | ibf_MsgId                                                               | \N                     | \N      | \N                           | 160944 |
   1 | SIMPLE      | pl    | ref  | idx_unique_key_ib_msg,index_message_id,index_message_processing_status | idx_unique_key_ib_msg  | 180     | reports.ibf.Message_Id,const | 1      | Using where; Using index
   1 | SIMPLE      | tl    | ref  | idx_unique_key_ib_text,index_message_id                                | idx_unique_key_ib_text | 153     | reports.pl.Message_Id        | 1      |
   1 | SIMPLE      | xl    | ref  | idx_unique_key_ib_xml,index_message_id                                 | idx_unique_key_ib_xml  | 153     | reports.pl.Message_Id        | 1      | Using where
 
   Sorry for the previous mail. This is my execution plan for 1.5 million
   records.
 
   From: Ananda Kumar [mailto:anan...@gmail.com]
   Sent: Thursday, June 14, 2012 3:33 PM

   To: Manivannan S.
   Cc: mysql@lists.mysql.com
   Subject: Re: NoSQL help
 
  can u share the sql, explain plan, indexes etc,
 
  show full processlist out put when the sql's are running
 
  On Thu, Jun 14, 2012 at 3:03 PM, Manivannan S. 
  manivanna...@spanservices.com wrote:
 
  I tried with myisam engine also. But it also taking more time to
  generate the report. In my database I am having 8 innodb tables and
 at
  the same time I am joining 4 tables to get the report.
 
  I am maintaining 60days records because the user will try to generate
  the report out of 60 days in terms of second, minute, hourly, weekly
  and Monthly report also.
 
  From: Ananda Kumar [mailto:anan...@gmail.com]
  Sent: Thursday, June 14, 2012 12:32 AM
  To: Rick James
  Cc: Johan De Meersman; Manivannan S.; mysql@lists.mysql.com
  Subject: Re: NoSQL help
 
 
  Did you try with myisam tables.
  They are supposed to be good for reporting requirement
 
   On Wed, Jun 13, 2012 at 11:52 PM, Rick James rja...@yahoo-inc.com wrote:
  I'll second Johan's comments.
 
  Count the disk hits!
 
  One minor change:  Don't store averages in the summary table; instead
  store the SUM().  That lets you get the mathematically correct
 AVERAGE
  over any time range via
  SUM(sum_foo) / SUM(count_foo)
 
  Switching between MySQL and Mongo requires rewriting _all_ of the
  relevant code.
 
   <opinion> NoSQL will be no better than MySQL for 150GB. </opinion>
   Count the disk hits!
 
  I recently built a system that topped out at 350GB (90 days' data).
  It involved hourly ingestion of a few GB of data and a variety of
 reports.
   The prototype showed that most reports would take about an hour to
 run.
   Not good.  The final product, with summary tables, lets the reports
  be run on-demand and online and each takes only a few seconds.  By
  careful use of MEMORY tables, LOAD DATA, etc, the ingestion takes 5
  minutes (each hour) for the raw data and 2 minutes (total) for the 7
 summary tables.
   PARTITIONing was vital for the design.  Once an hour a new partition
  is populated; once a day, 24 hourly partitions are rolled into a new
  daily partition and the 90-day old partition is DROPped.
 
 
   -Original Message-
 
    From: Johan De Meersman [mailto:vegiv...@tuxera.be]
   Sent: Wednesday, June 13, 2012 6:20 AM
   To: Manivannan S.
 
    Cc: mysql@lists.mysql.com
   Subject: Re: NoSQL help
  
  
   - Original Message -
 
 From: Manivannan S. manivanna...@spanservices.com
   
Hi all,
   
[lots of data]
[slow reports]
[wooo NoSQL magic]
  
   Not that I want to discourage you, but my standard first question
 is
   why do you think NoSQL (let alone any specific product) is the
   right solution? :-)
  
   Don't get me wrong, it might be; but from what little I now know
   about your environment, it sounds like applying some data
   warehousing techniques might suffice - and being the cynical
   dinosaur that I am, I have a healthy reluctance about welding new
   technology onto a stable environment.
  
   To speed up reporting (and note that these techniques are often
   applied even when implementing NoSQL solutions, too) it is usually
 a
   good first step to set up a process of data

RE: NoSQL help

2012-06-14 Thread Rick James
 the report out of 60 days in terms of second, minute, hourly, weekly and 
 Monthly report also

1-second reports??  Wouldn't that have millions of rows?  Is there anything 
useful to summarize?

For Second and Minute over narrow ranges, you might do fine reading the raw 
data.

For hourly reports an hourly summary table might be 'right'.

For daily, weekly, and monthly, have a daily summary table.  There would be one 
row per widget per day.
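A minimal sketch of such a daily summary table (all names here are
hypothetical), storing SUM() and COUNT() so that averages over any range
stay mathematically correct:

CREATE TABLE daily_summary (
  widget_id INT UNSIGNED NOT NULL,
  `day`     DATE NOT NULL,
  sum_val   BIGINT NOT NULL,
  cnt_val   INT UNSIGNED NOT NULL,
  PRIMARY KEY (widget_id, `day`)
) ENGINE=InnoDB;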


 -Original Message-
 From: Manivannan S. [mailto:manivanna...@spanservices.com]
 Sent: Thursday, June 14, 2012 2:33 AM
 To: mysql@lists.mysql.com
 Subject: RE: NoSQL help
 
 I tried with myisam engine also. But it also taking more time to
 generate the report. In my database I am having 8 innodb tables and at
 the same time I am joining 4 tables to get the report.
 
 I am maintaining 60days records because the user will try to generate
 the report out of 60 days in terms of second, minute, hourly, weekly
 and Monthly report also.
 
 From: Ananda Kumar [mailto:anan...@gmail.com]
 Sent: Thursday, June 14, 2012 12:32 AM
 To: Rick James
 Cc: Johan De Meersman; Manivannan S.; mysql@lists.mysql.com
 Subject: Re: NoSQL help
 
  Did you try with MyISAM tables?
  They are supposed to be good for reporting requirements.
  On Wed, Jun 13, 2012 at 11:52 PM, Rick James rja...@yahoo-inc.com wrote:
 I'll second Johan's comments.
 
 Count the disk hits!
 
 One minor change:  Don't store averages in the summary table; instead
 store the SUM().  That lets you get the mathematically correct AVERAGE
 over any time range via
 SUM(sum_foo) / SUM(count_foo)
 
 Switching between MySQL and Mongo requires rewriting _all_ of the
 relevant code.
 
  <opinion> NoSQL will be no better than MySQL for 150GB. </opinion>
  Count the disk hits!
 
 I recently built a system that topped out at 350GB (90 days' data).  It
 involved hourly ingestion of a few GB of data and a variety of
 reports.  The prototype showed that most reports would take about an
 hour to run.  Not good.  The final product, with summary tables, lets
 the reports be run on-demand and online and each takes only a few
 seconds.  By careful use of MEMORY tables, LOAD DATA, etc, the
 ingestion takes 5 minutes (each hour) for the raw data and 2 minutes
 (total) for the 7 summary tables.  PARTITIONing was vital for the
 design.  Once an hour a new partition is populated; once a day, 24
 hourly partitions are rolled into a new daily partition and the 90-day
 old partition is DROPped.
 
 
  -Original Message-
   From: Johan De Meersman [mailto:vegiv...@tuxera.be]
   Sent: Wednesday, June 13, 2012 6:20 AM
   To: Manivannan S.
   Cc: mysql@lists.mysql.com
   Subject: Re: NoSQL help


   - Original Message -
    From: Manivannan S. manivanna...@spanservices.com
   
  
   Hi all,
  
   [lots of data]
   [slow reports]
   [wooo NoSQL magic]
 
  Not that I want to discourage you, but my standard first question is
  why do you think NoSQL (let alone any specific product) is the right
  solution? :-)
 
  Don't get me wrong, it might be; but from what little I now know
 about
  your environment, it sounds like applying some data warehousing
  techniques might suffice - and being the cynical dinosaur that I am,
 I
  have a healthy reluctance about welding new technology onto a stable
  environment.
 
  To speed up reporting (and note that these techniques are often
  applied even when implementing NoSQL solutions, too) it is usually a
  good first step to set up a process of data summarization.
 
  Basically, you pre-calculate averages, medians, groupings, whatever
  you need for your reports; and your job also saves the last record
 IDs
  it's processed; then on the next run, you only read the new records
  and update your summary tables to incorporate the new data.
 
  Suppose I have a table like this:
 
   ID | Val
   ---+----
    1 |   1
    2 |   7
    3 |   5
    4 |  13
 
  I want to report the average on a daily basis, and calculating that
  over those rows is unbearably slow because I'm running the process on
  a wristwatch from 1860 :-)
 
  So I get a summary table, calculate (1+7+5+13)/4 = 6.5 and that then
  gets a record saying this:
 
   Avg | elementCount | lastSeen
   ----+--------------+---------
   6.5 |            4 |        4
 
  Now, over the course of the day, the elements 4, 17 and 2 get added
  with sequential row numbers. Instead of calculating
  (1+7+5+13+4+17+2)/7, which would be slow; I can substitute the
 already
  summarized data by Avg*elementCount. Thus, I calculate (6.5*4 +
  4+17+2)/7 = 7, which is a lot faster, and my summary table now looks
  like this:
 
   Avg | elementCount | lastSeen
   ----+--------------+---------
     7 |            7 |        7
 
  This is of course a stupid example, but it saves you a lot of time if
  you already have the summary of several thousand elements and only

Re: NoSQL help

2012-06-13 Thread Johan De Meersman

- Original Message -
 From: Manivannan S. manivanna...@spanservices.com
 
 Hi all,
 
 [lots of data]
 [slow reports]
 [wooo NoSQL magic]

Not that I want to discourage you, but my standard first question is why do 
you think NoSQL (let alone any specific product) is the right solution? :-)

Don't get me wrong, it might be; but from what little I now know about your 
environment, it sounds like applying some data warehousing techniques might 
suffice - and being the cynical dinosaur that I am, I have a healthy reluctance 
about welding new technology onto a stable environment.

To speed up reporting (and note that these techniques are often applied even 
when implementing NoSQL solutions, too) it is usually a good first step to set 
up a process of data summarization.

Basically, you pre-calculate averages, medians, groupings, whatever you need 
for your reports; and your job also saves the last record IDs it's processed; 
then on the next run, you only read the new records and update your summary 
tables to incorporate the new data.

Suppose I have a table like this:

ID | Val
---+----
 1 |   1
 2 |   7
 3 |   5
 4 |  13

I want to report the average on a daily basis, and calculating that over those 
rows is unbearably slow because I'm running the process on a wristwatch from 
1860 :-)

So I get a summary table, calculate (1+7+5+13)/4 = 6.5 and that then gets a 
record saying this:

Avg | elementCount | lastSeen
----+--------------+---------
6.5 |            4 |        4

Now, over the course of the day, the elements 4, 17 and 2 get added with 
sequential row numbers. Instead of calculating (1+7+5+13+4+17+2)/7, which would 
be slow, I can substitute the already summarized data by Avg*elementCount. 
Thus, I calculate (6.5*4 + 4+17+2)/7 = 7, which is a lot faster, and my summary 
table now looks like this:

Avg | elementCount | lastSeen
----+--------------+---------
  7 |            7 |        7

This is of course a stupid example, but it saves you a lot of time if you 
already have the summary of several thousand elements and only need to update 
it for a handful. Similar tricks are possible for a lot of typical reporting 
stuff - you don't need to re-calculate data for past months over and over 
again, for instance - and that's what makes your reports run fast.


Just my 2 cents :-)
/johan

-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: NoSQL help

2012-06-13 Thread mos

At 07:27 AM 6/13/2012, Manivannan S. wrote:

Hi all,

I am using MySQL 5.1, and I am inserting 5GB of 
data every two days into my database. I am trying to generate a 
report by processing the data available in my database. 
Our clients are planning to keep the records for 60 days, and that 
will cross 150GB of data. To generate a report I have to use all 
of this accumulated 150GB of data. I have done all kinds of 
optimizations in my procedure and I have tuned my MySQL server 
parameters as well. But with MySQL, getting the reports for this 
amount of data within a short time is not possible.


I have seen the concept of NoSQL and I am planning 
to apply this NoSQL concept to my database.


Does anyone have experience with NoSQL, especially 
MongoDB technology, and how to use it?


Thanks in advance.

Regards,
Manivannan S



You didn't say what the problem was when you tried to produce a 
report on this data.


1) Did the sorting take too long?
2) Did traversing the data take too long?
3) Were the report tables locked by other processes?
4) Was it using too many resources, like memory, CPU, or disk?
5) Did joining tables take too long?

You may want to look into HANDLER. I've used it often when I needed 
to traverse very large MyISAM tables. HANDLER requires no physical 
sorting of the table because it traverses the table using an index. 
It also ignores any locks on the table (which you may or may not 
like), but the HANDLER will start traversing the table immediately. 
It will solve problems 1, 3, and 4, but not #2, because traversing a large 
table with an index will be slower than if the table were already 
sorted. One nice thing about HANDLER is that it uses virtually no 
additional memory regardless of table size, and very little disk 
activity, because there is no sorting. You can run it any time and it 
won't degrade other MySQL processes. I don't think HANDLER will join 
tables together; I have used it only to traverse a single table.
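For illustration, a bare-bones HANDLER walk could look like this (a sketch 
only; big_log and idx_id are invented names):

HANDLER big_log OPEN;

-- Position at the start of the index, then walk it in batches:
HANDLER big_log READ idx_id FIRST LIMIT 1000;
HANDLER big_log READ idx_id NEXT LIMIT 1000;   -- repeat until no rows come back

HANDLER big_log CLOSE;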


One other tip: when loading the data into the table, LOAD DATA INFILE 
will be much faster on an empty table, so recreate your tables from 
scratch before loading the data. Also, build all of the indexes after 
the data has been loaded, using one ALTER TABLE command, and if 
possible, reduce the number of unique indexes in the table.
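A sketch of that load pattern (file, table, and column names are invented):

DROP TABLE IF EXISTS report_raw;
CREATE TABLE report_raw (
  id         INT UNSIGNED NOT NULL,
  metric     INT NOT NULL,
  created_at DATETIME NOT NULL
) ENGINE=MyISAM;

LOAD DATA INFILE '/tmp/report_raw.csv'
INTO TABLE report_raw
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';

-- Build all the indexes in one pass, after the load:
ALTER TABLE report_raw
  ADD INDEX idx_created (created_at),
  ADD INDEX idx_metric (metric);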


http://dev.mysql.com/doc/refman/5.5/en/handler.html

Mike 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: NoSQL help

2012-06-13 Thread Rick James
I'll second Johan's comments.

Count the disk hits!

One minor change:  Don't store averages in the summary table; instead store the 
SUM().  That lets you get the mathematically correct AVERAGE over any time 
range via
SUM(sum_foo) / SUM(count_foo)
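For instance (a sketch; summary_daily and its columns are invented):

CREATE TABLE summary_daily (
  day       DATE NOT NULL PRIMARY KEY,
  sum_foo   BIGINT NOT NULL,
  count_foo INT UNSIGNED NOT NULL
);

-- Mathematically correct average over any range, from the summaries alone:
SELECT SUM(sum_foo) / SUM(count_foo) AS avg_foo
FROM summary_daily
WHERE day BETWEEN '2012-05-01' AND '2012-05-31';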

Switching between MySQL and Mongo requires rewriting _all_ of the relevant code.

<opinion>NoSQL will be no better than MySQL for 150GB.</opinion>  Count the 
disk hits!

I recently built a system that topped out at 350GB (90 days' data).  It 
involved hourly ingestion of a few GB of data and a variety of reports.  The 
prototype showed that most reports would take about an hour to run.  Not good.  
The final product, with summary tables, lets the reports be run on-demand and 
online and each takes only a few seconds.  By careful use of MEMORY tables, 
LOAD DATA, etc, the ingestion takes 5 minutes (each hour) for the raw data and 
2 minutes (total) for the 7 summary tables.  PARTITIONing was vital for the 
design.  Once an hour a new partition is populated; once a day, 24 hourly 
partitions are rolled into a new daily partition, and the 90-day-old partition 
is DROPped.
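The rolling-partition part of such a design could be sketched like this (all 
names and ranges invented; RANGE partitioning needs MySQL 5.1 or later):

CREATE TABLE raw_events (
  hour_id INT UNSIGNED NOT NULL,   -- e.g. 2012061422 = 2012-06-14, hour 22
  metric  BIGINT NOT NULL
)
PARTITION BY RANGE (hour_id) (
  PARTITION p2012061422 VALUES LESS THAN (2012061423),
  PARTITION p2012061423 VALUES LESS THAN (2012061424)
);

-- Once an hour: open a partition for the next hour's data.
ALTER TABLE raw_events ADD PARTITION
  (PARTITION p2012061500 VALUES LESS THAN (2012061501));

-- Once a day: roll that day's hourly partitions into one daily partition.
ALTER TABLE raw_events REORGANIZE PARTITION p2012061422, p2012061423
  INTO (PARTITION p20120614 VALUES LESS THAN (2012061424));

-- And DROP whichever daily partition has aged past the 90-day window, e.g.:
-- ALTER TABLE raw_events DROP PARTITION p20120316;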





Re: NoSQL help

2012-06-13 Thread Ananda Kumar
Did you try MyISAM tables?
They are supposed to be good for reporting requirements.





Re: Need help for performance tuning with Mysql

2012-05-30 Thread Prabhat Kumar
Check performance with these scripts:

http://mysqltuner.pl/mysqltuner.pl - variable tuning
http://hackmysql.com/mysqlidxchk - unused indexes

Thanks,
Prabhat


Re: Architectural Help

2012-05-30 Thread Prabhat Kumar
Hi Anupam,

You can achieve this with circular replication. Circular replication between 4 
nodes works fine, but I'm not sure how practical it is with a 50-node ring.

Each database node will be a data center:
Node 1 - Data Center 1 - Node 2 - Data Center 2 - Node 3 - Data Center 3 - 
Node 4 - Data Center 4 - ... - Node 50 - Data Center 50.

So basically it will be a multi-master replication system.

Failover: e.g., if data center 20 has a network issue, Node 19 replicates from 
Node 21, bypassing Node 20 -- this needs some scripting.

http://onlamp.com/onlamp/2006/04/20/advanced-mysql-replication.html
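For what it's worth, the per-node my.cnf for such a ring usually needs 
something like the following (a sketch; the values shown are invented for node 
20 of a 50-node ring, and the failover re-pointing itself would be done from a 
script with CHANGE MASTER TO):

[mysqld]
server-id                = 20    # must be unique on every node
log-bin                  = mysql-bin
log-slave-updates                # so changes travel on around the ring
auto_increment_increment = 50    # one slot per node in a 50-node ring...
auto_increment_offset    = 20    # ...so AUTO_INCREMENT keys never collide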

Thank You.





-- 
Best Regards,

Prabhat

RE: Architectural Help

2012-05-30 Thread Rick James
Caution!!  If any one of them dies and cannot be recovered, you will have a 
mess to fix.


Re: Query help...

2012-05-28 Thread hsv
 2012/05/28 08:03 -0700, Don Wieland 
Any assistance would be appreciated. Thanks!


Maybe something like this:


SELECT usie.client_id, first_name, last_name, COUNT(anie.client_id)

FROM
(SELECT client_id, first_name, last_name, time_start
FROM tl_appt
JOIN tl_rooms USING(room_id)
JOIN tl_clients USING(client_id)
JOIN tl_users USING(user_id)

WHERE
appt_id IS NOT NULL AND
time_start between '1293858000' and '1325393999' AND
location_id = '1' and
appt_status_id IN (3) and
user_id IN (506)

GROUP BY user_id, client_id) AS usie

LEFT JOIN

(SELECT client_id, time_start
FROM tl_appt
JOIN tl_rooms USING(room_id)
JOIN tl_clients USING(client_id)

WHERE
appt_id IS NOT NULL AND
location_id = '1' and
appt_status_id IN (3)) AS anie

ON usie.client_id = anie.client_id AND usie.time_start > anie.time_start
GROUP BY usie.client_id




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Need help for performance tuning with Mysql

2012-05-24 Thread Yu Watanabe
Alex

Thank you for the advice.

We will probably put an index (key) on both columns.

Thanks,
Yu

Alex Schaft wrote:
On 2012/05/24 07:37, Alex Schaft wrote:
 You are selecting a record based on the value of data_id and
 thold_enabled, but don't have an index on either? Add an index for both.
 If data_id is unique, then you would only need an index on that.

 Alex


On second thought, an index on thold_enabled alone won't mean much, I think, so
either leave it off or create an index on data_id plus thold_enabled.
Someone more knowledgeable may correct me.

Alex






-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Need help for performance tuning with Mysql

2012-05-24 Thread Yu Watanabe
Johnny

Thank you for the reply.

Second, make sure your InnoDB buffer pool is allocated as much RAM as
possible. I'd even go as far as adding another 8 GB of RAM to the
server. The buffer pool setting is going to give you the best
performance increase.

  The problem is mainly with the MyISAM engine.

Also, what kind of hard disks do you have the data files on? Raid? No raid?

  The server has no RAID.

Thanks,
Yu

Johnny Withers wrote:
I don't see any attachments.

First, I would upgrade to 5.5 as 5.0 is very old. The upgrade process
is painless.

Second, make sure your InnoDB buffer pool is allocated as much RAM as
possible. I'd even go as far as adding another 8 GB of RAM to the
server. The buffer pool setting is going to give you the best
performance increase.
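As a rough sketch only (the right figure depends on what else runs on the
box; on a dedicated 8 GB InnoDB server something in this region is common):

[mysqld]
innodb_buffer_pool_size = 5G   # most of the RAM not needed by anything else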

Also, what kind of hard disks do you have the data files on? Raid? No raid?

Sent from my iPad

On May 22, 2012, at 9:08 PM, Yu Watanabe yu.watan...@jp.fujitsu.com wrote:

 Hello all.

 I would like to ask for advice with performance tuning with MySQL.

 Following are some data for my server.

 CPU: Xeon(TM) 2.8GHz (2CPUs - 8core total)
 Memory : 8GB
 OS : RHEL 4.4 x86_64
 MySQL  : MySQL 5.0.50sp1-enterprise

 Attached file
 # my.cnf.txt  : my.cnf information
 # mysqlext_20120522131034.log : variable and status information from 
 mysqladmin

 I have 2 databases working under high load.

 I wanted to speed up my SELECT and UPDATE queries, not by
 optimizing the queries themselves but by tuning my.cnf.

 I have referred to following site,
 http://dev.mysql.com/doc/refman/5.0/en/server-status-variables.html

 and read High Performance MySQL, 2nd edition,
 and increased the following values,

 table_cache
 thread_cache_size
 tmp_table_size
 max_heap_table_size

 but it made little difference.

 According to the ps and sar results:

 *1 PS result
 Date   Time  CPU%  RSS VSZ
 2012/5/22  21:00:39  109   294752  540028

 *2 SAR
 Average CPU user 25%
sys  5%
io   3%

 I assume that MySQL could do more work, but currently it does not.

 I am considering offloading 1 high-load database to a
 separate process, making MySQL run as multiple processes.

 It would be a great help if people in this forum could give
 us advice on the tuning.

 Best Regards,
 Yu Watanabe






-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Architectural Help

2012-05-24 Thread Ananda Kumar
Is the central database server just ONE server, to which all of your 50 data
center apps connect?



RE: Architectural Help

2012-05-24 Thread Nigel Wood
A few questions:
Which is more of a problem: network outages, network capacity, or query 
latency?
When you say near real-time, do you need a transactionally consistent view on 
all 49 servers, or can some lag be tolerated?
Can any one of the 49 local servers potentially update/delete the same rows or 
data?
Is there any natural segmentation point within the data? 
Do the data centers have diverse networks, so that connections to some data 
centers may remain up when others fail? 
In the event that a local data centre is totally isolated from the others, 
what data should it be allowed to update?
Do your applications produce/examine large data sets, querying by secondary 
keys or using full-text search?
Are you in a position to modify the applications?
__
From: Anupam Karmarkar [sb_akarmar...@yahoo.com]
Sent: Thursday, May 24, 2012 10:17 AM
To: mysql@lists.mysql.com
Subject: Architectural Help

Hi All,


I need architectural help for our requirement,


We have nearly 50 data centres throughout different cities. Applications in 
these data centres currently connect to a central database server, but there 
are connectivity and network-fluctuation issues between the data centres. So 
we came up with a solution: each data centre should have a local database 
server that keeps syncing with the other servers, so that the application 
doesn't fail. User data can be updated on any server and should be reflected 
on every server.  The application consists of write/read/delete operations.


Currently the central server takes about 1 million writes per day.


Only 1/1000 of the data needs to be distributed across servers; the rest needs 
to stay on the central server.


How can we achieve this? The solution needs near-real-time data while 
accepting network lags.


Solution

Collect all changes from the other 49 servers into 1 central server (how can 
we collect the data?)


The 49 servers keep updating their local databases from the central server 
(this can be done using replication).



--Anupam

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Architectural Help

2012-05-24 Thread Anupam Karmarkar
Dear Nigel,

Thanks for the reply. 


See my comments below


--Anupam



 From: Nigel Wood nw...@plus.net
To: Anupam Karmarkar sb_akarmar...@yahoo.com; mysql@lists.mysql.com 
mysql@lists.mysql.com 
Sent: Thursday, 24 May 2012 3:35 PM
Subject: RE: Architectural Help
 
A few questions:
Which is more of a problem: network outages, network capacity, or query 
latency?
Network outages are the problem between the data centres.
When you say near real-time, do you need a transactionally consistent view on 
all 49 servers, or can some lag be tolerated?
Some lag can be tolerated, considering the network outages. 
Can any one of the 49 local servers potentially update/delete the same rows or 
data?
Yes, the central server or a local server can update the same row, but not at 
the same time. A little of the data is updated in the central location; the 
rest is updated mostly locally. One application writes centrally and the 
others write locally; the local and central apps share some data.
Is there any natural segmentation point within the data? 
No.
Do the data centers have diverse networks, so that connections to some data 
centers may remain up when others fail? 
Yes, we have a diverse network, so connectivity will be available in the 
other data centers.
In the event that a local data centre is totally isolated from the others, 
what data should it be allowed to update?
The local application should always write to the local database; central-
application updates will not be available locally.
Do your applications produce/examine large data sets, querying by secondary 
keys or using full-text search?
We don't have text search, and we don't query large data sets.
Are you in a position to modify the applications?
No; micro changes are OK.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/mysql

RE: Need help for performance tuning with Mysql

2012-05-24 Thread Rick James
Thanks.  I got tired of answering the same questions about buffer_pool and 
key_buffer over and over on forums.mysql.com; now I just point people at that 
page.

INT(1) and INT(50) are identical -- and take 4 bytes.  See TINYINT, BIGINT, 
etc.  Also, UNSIGNED is probably wanted in more places than you have it.

555KB is not very big.  But a table scan (as indicated by the EXPLAIN) costs 
something.

select  * -- Is this what you are really fetching?  If not, we can 
discuss a covering index.
from  thold_data
where  thold_enabled='on' -- 2-valued flag?  Not likely to be useful in an 
index, esp. not by itself
  AND  data_id = 91633;   -- Probably the best bet.

Recommend:
INDEX(data_id)  -- or UNIQUE, if it is unique
INDEX(data_id, thold_enabled)  -- or the opposite order; this probably would 
not be noticeably better.
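Applied to the table above, that might look like (index names invented; use
UNIQUE only if data_id really is unique):

ALTER TABLE thold_data
  ADD INDEX idx_data_enabled (data_id, thold_enabled);
-- or, if a single-column key turns out to be enough:
-- ALTER TABLE thold_data ADD INDEX idx_data_id (data_id);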

`notify_default` enum('on','off') default NULL
Did you really mean to have 3 values (on, off, NULL)?




RE: Architectural Help

2012-05-24 Thread Rick James
I deal with dozens of systems, all doing cross-country or cross-ocean 
replication.  The only viable approach (that we have deployed in production) is
* Dual-master, single writer -- That is, all 49 clients write to one machine
* The two masters are geographically separate.
* Failover involves switching to the 'other' master.  It is _not_ fully 
automated.
* Slaves, as needed, scattered around the world -- This provides read scaling.

1M writes per day -- that is an average of 12/sec.  Yawn.
Replication delay -- you probably will almost never see any.
Network outages -- one of many things that can cause trouble.  The sooner you 
write the data _and_ copy it to a _remote_ site, the sooner you are immune to 
floods, tornadoes, cyclones, power outages, motherboard death, etc.
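For reference, the manual failover step on each slave boils down to
re-pointing it at the surviving master (a sketch only; the host, log file, and
position are invented -- in practice they come from your failover runbook):

STOP SLAVE;
CHANGE MASTER TO
  MASTER_HOST     = 'master-b.example.com',
  MASTER_LOG_FILE = 'mysql-bin.000123',
  MASTER_LOG_POS  = 4;
START SLAVE;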


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Need help for performance tuning with Mysql

2012-05-23 Thread Andrew Moore
Yu,

The upgrade to 5.5 that Johnny advises should NOT be your first action. If
MySQL is mis-configured on 5.0 it will likely be misconfigured on 5.1 and
5.5. Test your application thoroughly on the new version before heeding
that advice. Read the change logs and known bugs. Running the upgrade might
seem painless, but if you have some legacy feature in place then things will
not work how you may expect them to.

Review your needs and see if a switch to the InnoDB storage engine will give
you any performance gain. The locking differences alone might make this
worthwhile. TEST it.

You did not state your data and index size. You will benefit from having
enough RAM so that your 'working' data set fits in memory. This isn't
possible/practical for large data, but if you have a 5G dataset and 8G of
available memory you might not need to rush out and spend money.

If you're heavily using MyISAM, review and tune the MyISAM-related buffers.
If you are working mostly with InnoDB, tune those variables. Measure, change,
measure again. It might be an iterative process, but you will learn lots
along the way.
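For example (a sketch only; the right sizes depend on index volume and
workload):

[mysqld]
key_buffer_size = 2G   # MyISAM caches only index blocks here; table data
                       # relies on the OS filesystem cache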

Good luck.

Andy

On Wed, May 23, 2012 at 5:44 AM, Tsubasa Tanaka yoku0...@gmail.com wrote:

 Hello, Yu-san,
 (Apologies for my shaky English.)

 Can I assume that you have already tweaked the indexes on the tables?
 If not yet, please create appropriate indexes.

 MyISAM caches only indexes, not data.
 I take these approaches to decrease disk seeks:
  1) Create more indexes on the tables, if the tables aren't updated quite
 often, forcibly including data columns in the indexes.
   This makes INSERT and UPDATE slower, and it is a dirty idea, I think.
 (I end up including frequently SELECTed columns in the index. However,
 this doesn't work when the table is usually queried with SELECT * FROM,
 and it is quite an inelegant trick.)
  2) Tune the filesystem and disk-drive parameters for the datadir.
   A MyISAM table's data is cached only in the filesystem cache.
   But I regret that I don't have much knowledge about filesystems.

 Sorry I can't be of more help.

 regards,


 ts. tanaka//

 2012/5/23 Yu Watanabe yu.watan...@jp.fujitsu.com:
  Hello Tsubasa.
 
  Thank you for the reply.
 
  Our heavily loaded DBs are on both InnoDB and MyISAM,
  especially MyISAM.
 
  I will consider tuning innodb_buffer_pool_size as well.
 
  Do you know any tips for tuning disk access for MyISAM?
 
  Thanks,
  Yu
 
  Tsubasa Tanaka wrote:
 Hello,
 
It seems your mysqld isn't using enough memory.
 
 Date   Time  CPU%  RSS VSZ
 2012/5/22  21:00:39  109   294752  540028
 
If your mysqld uses InnoDB often,
edit innodb_buffer_pool_size in your my.cnf.
 
 
 http://dev.mysql.com/doc/refman/5.1/en/innodb-parameters.html#sysvar_innodb_buffer_pool_size
 
 
 table_cache
 thread_cache_size
 tmp_table_size
 max_heap_table_size
 
 but made not much difference.
 
Those settings only help with large SQL results, I think.
If you haven't confirmed that the problem is caused by large results,
you should try other approaches too.
 
 regards,
 
 
 ts. tanaka//
 
 2012/5/23 Yu Watanabe yu.watan...@jp.fujitsu.com:
 Also, the following is the free command result.

              total       used       free     shared    buffers     cached
 Mem:       8162380    7843676     318704          0      95632    5970892
 -/+ buffers/cache:    1777152    6385228
 Swap:      8032492      23560    8008932
 
  Thanks,
  Yu
 
 

Re: Need help for performance tuning with Mysql

2012-05-23 Thread Ananda Kumar
Hi,
However much tuning you do in my.cnf, it will not help much if you do not
tune your SQL.

Your first priority should be tuning the SQL, which will give you good
performance even with decent memory allocations and other settings.

regards
anandkl


RE: Need help for performance tuning with Mysql

2012-05-23 Thread Rick James
100% CPU -- A slow query.  Tuning will not help.  Period.

1. There are only a few things worth tuning -- see 
http://mysql.rjweb.org/doc.php/memory (they don't include the ones you tried)

2. Instead, INDEXes and schema design must be studied.  Please provide:
SHOW CREATE TABLE
SHOW TABLE SIZE
EXPLAIN SELECT ...
 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Need help for performance tuning with Mysql

2012-05-23 Thread Yu Watanabe
Rick

Thank you for the reply.

1. There are only a few things worth tuning -- see 
http://mysql.rjweb.org/doc.php/memory (they don't include the ones you tried)


  The page is really cool. It's very simple and easy to understand.

2. Instead INDEXes and schema design must be studied.  Please provide:
SHOW CREATE TABLE

  | thold_data | CREATE TABLE `thold_data` (
`id` int(11) NOT NULL auto_increment,
`rra_id` int(11) NOT NULL default '0',
`data_id` int(11) NOT NULL default '0',
`thold_hi` varchar(100) default NULL,
`thold_low` varchar(100) default NULL,
`thold_fail_trigger` int(10) unsigned default NULL,
`thold_fail_count` int(11) NOT NULL default '0',
`thold_alert` int(1) NOT NULL default '0',
`thold_enabled` enum('on','off') NOT NULL default 'on',
`bl_enabled` enum('on','off') NOT NULL default 'off',
`bl_ref_time` int(50) unsigned default NULL,
`bl_ref_time_range` int(10) unsigned default NULL,
`bl_pct_down` int(10) unsigned default NULL,
`bl_pct_up` int(10) unsigned default NULL,
`bl_fail_trigger` int(10) unsigned default NULL,
`bl_fail_count` int(11) unsigned default NULL,
`bl_alert` int(2) NOT NULL default '0',
`lastread` varchar(100) default NULL,
`oldvalue` varchar(100) NOT NULL default '',
`repeat_alert` int(10) unsigned default NULL,
`notify_default` enum('on','off') default NULL,
`notify_extra` varchar(255) default NULL,
`host_id` int(10) default NULL,
`syslog_priority` int(2) default '3',
`cdef` int(11) NOT NULL default '0',
PRIMARY KEY  (`id`),
KEY `rra_id` (`rra_id`)
  ) ENGINE=MyISAM AUTO_INCREMENT=69641 DEFAULT CHARSET=latin1 |

SHOW TABLE SIZE

  You must be mentioning about the show table status

mysql> show table status where name = 'thold_data'\G
*************************** 1. row ***************************
           Name: thold_data
         Engine: MyISAM
        Version: 10
     Row_format: Dynamic
           Rows: 6161
 Avg_row_length: 90
    Data_length: 555128
Max_data_length: 281474976710655
   Index_length: 140288
      Data_free: 0
 Auto_increment: 70258
    Create_time: 2012-05-24 10:41:47
    Update_time: 2012-05-24 10:47:19
     Check_time: 2012-05-24 10:41:47
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options: 
        Comment: 
1 row in set (0.00 sec)

EXPLAIN SELECT

  I have seen the following select query in the slow query log.
  I saw update queries as well.

mysql> explain select * from thold_data where thold_enabled='on' AND data_id = 91633;
+----+-------------+------------+------+---------------+------+---------+------+------+-------------+
| id | select_type | table      | type | possible_keys | key  | key_len | ref  | rows | Extra       |
+----+-------------+------------+------+---------------+------+---------+------+------+-------------+
|  1 | SIMPLE      | thold_data | ALL  | NULL          | NULL | NULL    | NULL | 6161 | Using where |
+----+-------------+------------+------+---------------+------+---------+------+------+-------------+
1 row in set (0.06 sec)

If cache size tuning is not an option,
do you think the following actions would be a choice to speed up the
queries a little bit more?

1. Splitting the high-load database off into a separate mysqld instance
(multiple mysqld processes)
2. Moving the .MYD, .MYI and .frm files to a RAM disk (/dev/shm)

Thanks,
Yu

Rick James wrote:
100% CPU -- A slow query.  Tuning will not help.  Period.

1. There are only a few things worth tuning -- see 
http://mysql.rjweb.org/doc.php/memory (they don't include the ones you tried)

2. Instead INDEXes and schema design must be studied.  Please provide:
SHOW CREATE TABLE
SHOW TABLE SIZE
EXPLAIN SELECT ...
 

 -Original Message-
 From: Yu Watanabe [mailto:yu.watan...@jp.fujitsu.com]
 Sent: Tuesday, May 22, 2012 7:07 PM
 To: mysql@lists.mysql.com
 Subject: Need help for performance tuning with Mysql
 
 Hello all.
 
 I would like to ask for advice with performance tuning with MySQL.
 
 Following are some data for my server.
 
 CPU: Xeon(TM) 2.8GHz (2CPUs - 8core total)
 Memory : 8GB
 OS : RHEL 4.4 x86_64
 MySQL  : MySQL 

Re: Need help for performance tuning with Mysql

2012-05-23 Thread Alex Schaft
On 2012/05/24 04:10, Yu Watanabe wrote:
 2. Instead INDEXes and schema design must be studied.  Please provide:
 SHOW CREATE TABLE
   | thold_data | CREATE TABLE `thold_data` (
 `id` int(11) NOT NULL auto_increment,
 `rra_id` int(11) NOT NULL default '0',
 `data_id` int(11) NOT NULL default '0',
 `thold_hi` varchar(100) default NULL,
 `thold_low` varchar(100) default NULL,
 `thold_fail_trigger` int(10) unsigned default NULL,
 `thold_fail_count` int(11) NOT NULL default '0',
 `thold_alert` int(1) NOT NULL default '0',
 `thold_enabled` enum('on','off') NOT NULL default 'on',
 `bl_enabled` enum('on','off') NOT NULL default 'off',
 `bl_ref_time` int(50) unsigned default NULL,
 `bl_ref_time_range` int(10) unsigned default NULL,
 `bl_pct_down` int(10) unsigned default NULL,
 `bl_pct_up` int(10) unsigned default NULL,
 `bl_fail_trigger` int(10) unsigned default NULL,
 `bl_fail_count` int(11) unsigned default NULL,
 `bl_alert` int(2) NOT NULL default '0',
 `lastread` varchar(100) default NULL,
 `oldvalue` varchar(100) NOT NULL default '',
 `repeat_alert` int(10) unsigned default NULL,
 `notify_default` enum('on','off') default NULL,
 `notify_extra` varchar(255) default NULL,
 `host_id` int(10) default NULL,
 `syslog_priority` int(2) default '3',
 `cdef` int(11) NOT NULL default '0',
 PRIMARY KEY  (`id`),
 KEY `rra_id` (`rra_id`)
   ) ENGINE=MyISAM AUTO_INCREMENT=69641 DEFAULT CHARSET=latin1 |
 EXPLAIN SELECT
   I have seen the following select query in the slow query log.
   I saw update queries as well.

 mysql> explain select * from thold_data where thold_enabled='on' AND data_id = 91633;
 +----+-------------+------------+------+---------------+------+---------+------+------+-------------+
 | id | select_type | table      | type | possible_keys | key  | key_len | ref  | rows | Extra       |
 +----+-------------+------------+------+---------------+------+---------+------+------+-------------+
 |  1 | SIMPLE      | thold_data | ALL  | NULL          | NULL | NULL    | NULL | 6161 | Using where |
 +----+-------------+------------+------+---------------+------+---------+------+------+-------------+
 1 row in set (0.06 sec)

 If cache size tuning is not an option,
 do you think the following actions would be a choice to speed up the
 queries a little bit more?
You are selecting a record based on the value of data_id and
thold_enabled, but don't have an index on either? Add an index for both.
If data_id is unique, then you would only need an index on that.
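As a rough sketch, assuming the thold_data definition posted above
(the index names here are made up):

ALTER TABLE thold_data ADD INDEX idx_data_id (data_id);
-- or, if data_id really is unique per row:
ALTER TABLE thold_data ADD UNIQUE INDEX uq_data_id (data_id);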

Alex

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Need help for performance tuning with Mysql

2012-05-23 Thread Alex Schaft
On 2012/05/24 07:37, Alex Schaft wrote:
 You are selecting a record based on the value of data_id and
 thold_enabled, but don't have an index on either? Add an index for both.
 If data_id is unique, then you would only need an index on that.

 Alex


On second thought, an index on thold_enabled won't mean much I think, so
either leave it off or create an index on data_id plus thold_enabled.
Someone more knowledgeable may correct me.
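A minimal sketch of that composite index, assuming the schema from the
original post (index name invented):

ALTER TABLE thold_data ADD INDEX idx_data_enabled (data_id, thold_enabled);
-- data_id goes first because it is the selective column;
-- thold_enabled ('on'/'off') filters out almost nothing on its own.

With it in place, re-running the EXPLAIN above should show type=ref and
a rows estimate far below 6161.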

Alex


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Need help for performance tuning with Mysql

2012-05-22 Thread Yu Watanabe
Also following is the free command result.

             total       used       free     shared    buffers     cached
Mem:       8162380    7843676     318704          0      95632    5970892
-/+ buffers/cache:    1777152    6385228
Swap:      8032492      23560    8008932

Thanks,
Yu


Yu Watanabe wrote:
Hello all.

I would like to ask for advice with performance tuning with MySQL.

Following are some data for my server.

CPU: Xeon(TM) 2.8GHz (2CPUs - 8core total)
Memory : 8GB
OS : RHEL 4.4 x86_64
MySQL  : MySQL 5.0.50sp1-enterprise

Attached file
# my.cnf.txt  : my.cnf information
# mysqlext_20120522131034.log : variable and status information from mysqladmin

I have 2 databases working under high load.

I wanted to speed up my select and update queries not by
optimizing the queries themselves but by tuning my.cnf.

I have referred to the following site,
http://dev.mysql.com/doc/refman/5.0/en/server-status-variables.html

and read High Performance MySQL, 2nd edition,
and increased the following values, 

table_cache
thread_cache_size
tmp_table_size
max_heap_table_size

but they made little difference.

According to the ps and sar results:

*1 PS result
Date   Time  CPU%  RSS VSZ   
2012/5/22  21:00:39  109   294752  540028

*2 SAR
Average CPU user 25%
sys  5%
io   3%

I assume that MySQL could do more work, but currently it does not.

I am considering offloading one high-load database to a
separate process and making MySQL run as multiple processes.

It would be a great help if people in this forum could give
us advice on the tuning.

Best Regards,
Yu Watanabe

__


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Need help for performance tuning with Mysql

2012-05-22 Thread Tsubasa Tanaka
Hello,

It seems your mysqld doesn't use enough memory.

Date   Time  CPU%  RSS VSZ
2012/5/22  21:00:39  109   294752  540028

If your mysqld uses InnoDB often,
raise innodb_buffer_pool_size in your my.cnf.

http://dev.mysql.com/doc/refman/5.1/en/innodb-parameters.html#sysvar_innodb_buffer_pool_size
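A hedged way to sanity-check the current pool from the client; the 4G
below is a made-up starting point for a dedicated 8 GB box, not a
recommendation:

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages%';
-- If Innodb_buffer_pool_pages_free stays near zero while
-- Innodb_buffer_pool_reads keeps growing, the pool is too small;
-- then raise it in my.cnf and restart, e.g.:
--   innodb_buffer_pool_size = 4G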


table_cache
thread_cache_size
tmp_table_size
max_heap_table_size

but they made little difference.

Those settings are a solution only for queries with large results, I think.
If you don't see evidence that large results are the problem,
you should try other approaches too.

regards,


ts. tanaka//

2012/5/23 Yu Watanabe yu.watan...@jp.fujitsu.com:
 Also following is the free command result.

              total       used       free     shared    buffers     cached
 Mem:       8162380    7843676     318704          0      95632    5970892
 -/+ buffers/cache:    1777152    6385228
 Swap:      8032492      23560    8008932

 Thanks,
 Yu


 Yu Watanabe wrote:
Hello all.

I would like to ask for advice with performance tuning with MySQL.

Following are some data for my server.

CPU: Xeon(TM) 2.8GHz (2CPUs - 8core total)
Memory : 8GB
OS : RHEL 4.4 x86_64
MySQL  : MySQL 5.0.50sp1-enterprise

Attached file
# my.cnf.txt  : my.cnf information
# mysqlext_20120522131034.log : variable and status information from 
mysqladmin

I have 2 databases working under high load.

I wanted to speed up my select and update queries not by
optimizing the queries themselves but by tuning my.cnf.

I have referred to the following site,
http://dev.mysql.com/doc/refman/5.0/en/server-status-variables.html

and read High Performance MySQL, 2nd edition,
and increased the following values,

table_cache
thread_cache_size
tmp_table_size
max_heap_table_size

but they made little difference.

According to the ps and sar results:

*1 PS result
Date   Time  CPU%  RSS VSZ
2012/5/22  21:00:39  109   294752  540028

*2 SAR
Average CPU user 25%
sys  5%
io   3%

I assume that MySQL could do more work, but currently it does not.

I am considering offloading one high-load database to a
separate process and making MySQL run as multiple processes.

It would be a great help if people in this forum could give
us advice on the tuning.

Best Regards,
Yu Watanabe

__


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Need help for performance tuning with Mysql

2012-05-22 Thread Johnny Withers
I don't see any attachments.

First, I would upgrade to 5.5 as 5.0 is very old. The upgrade process
is painless.

Second, make sure your InnoDB buffer pool is allocated as much RAM as
possible. I'd even go as far as adding another 8 GB of RAM to the
server. The buffer pool setting is going to give you the best
performance increase.

Also, what kind of hard disks do you have the data files on? Raid? No raid?
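One rough way to judge whether the pool is already big enough is the
read-miss ratio from the standard status counters:

SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';
-- miss ratio = reads / read_requests; much above ~1% usually means the
-- working set does not fit in the buffer pool.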

Sent from my iPad

On May 22, 2012, at 9:08 PM, Yu Watanabe yu.watan...@jp.fujitsu.com wrote:

 Hello all.

 I would like to ask for advice with performance tuning with MySQL.

 Following are some data for my server.

 CPU: Xeon(TM) 2.8GHz (2CPUs - 8core total)
 Memory : 8GB
 OS : RHEL 4.4 x86_64
 MySQL  : MySQL 5.0.50sp1-enterprise

 Attached file
 # my.cnf.txt  : my.cnf information
 # mysqlext_20120522131034.log : variable and status information from 
 mysqladmin

 I have 2 databases working under high load.

 I wanted to speed up my select and update queries not by
 optimizing the queries themselves but by tuning my.cnf.

 I have referred to the following site,
 http://dev.mysql.com/doc/refman/5.0/en/server-status-variables.html

 and read High Performance MySQL, 2nd edition,
 and increased the following values,

 table_cache
 thread_cache_size
 tmp_table_size
 max_heap_table_size

 but they made little difference.

 According to the ps and sar results:

 *1 PS result
 Date   Time  CPU%  RSS VSZ
 2012/5/22  21:00:39  109   294752  540028

 *2 SAR
 Average CPU user 25%
sys  5%
io   3%

 I assume that MySQL could do more work, but currently it does not.

 I am considering offloading one high-load database to a
 separate process and making MySQL run as multiple processes.

 It would be a great help if people in this forum could give
 us advice on the tuning.

 Best Regards,
 Yu Watanabe


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Need help for performance tuning with Mysql

2012-05-22 Thread Yu Watanabe
Hello Tsubasa.

Thank you for the reply.

Our high-load DBs use both InnoDB and MyISAM,
especially MyISAM.

I will consider the tuning of innodb_buffer_pool_size as well.

Do you know any tips for tuning disk access for MyISAM?

Thanks,
Yu

Tsubasa Tanaka wrote:
Hello,

It seems your mysqld doesn't use enough memory.

Date   Time  CPU%  RSS VSZ
2012/5/22  21:00:39  109   294752  540028

If your mysqld uses InnoDB often,
raise innodb_buffer_pool_size in your my.cnf.

http://dev.mysql.com/doc/refman/5.1/en/innodb-parameters.html#sysvar_innodb_buffer_pool_size


table_cache
thread_cache_size
tmp_table_size
max_heap_table_size

but they made little difference.

Those settings are a solution only for queries with large results, I think.
If you don't see evidence that large results are the problem,
you should try other approaches too.

regards,


ts. tanaka//

2012/5/23 Yu Watanabe yu.watan...@jp.fujitsu.com:
 Also following is the free command result.

              total       used       free     shared    buffers     cached
 Mem:       8162380    7843676     318704          0      95632    5970892
 -/+ buffers/cache:    1777152    6385228
 Swap:      8032492      23560    8008932

 Thanks,
 Yu


 Yu Watanabe wrote:
Hello all.

I would like to ask for advice with performance tuning with MySQL.

Following are some data for my server.

CPU: Xeon(TM) 2.8GHz (2CPUs - 8core total)
Memory : 8GB
OS : RHEL 4.4 x86_64
MySQL  : MySQL 5.0.50sp1-enterprise

Attached file
# my.cnf.txt  : my.cnf information
# mysqlext_20120522131034.log : variable and status information from 
mysqladmin

I have 2 databases working under high load.

I wanted to speed up my select and update queries not by
optimizing the queries themselves but by tuning my.cnf.

I have referred to the following site,
http://dev.mysql.com/doc/refman/5.0/en/server-status-variables.html

and read High Performance MySQL, 2nd edition,
and increased the following values,

table_cache
thread_cache_size
tmp_table_size
max_heap_table_size

but they made little difference.

According to the ps and sar results:

*1 PS result
Date   Time  CPU%  RSS VSZ
2012/5/22  21:00:39  109   294752  540028

*2 SAR
Average CPU user 25%
sys  5%
io   3%

I assume that MySQL could do more work, but currently it does not.

I am considering offloading one high-load database to a
separate process and making MySQL run as multiple processes.

It would be a great help if people in this forum could give
us advice on the tuning.

Best Regards,
Yu Watanabe

__


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Need help for performance tuning with Mysql

2012-05-22 Thread Tsubasa Tanaka
Hello, Yu-san,
(Apologies for my shaky English)

Can I assume that you have already tweaked the indexes on the tables?
If not yet, please create appropriate indexes first.

MyISAM caches only indexes, not data.
The ways I would take to decrease disk seeks (see the sketch after this list):
 1) Create more indexes on the tables, if the tables aren't updated too often,
   forcibly including data columns in the indexes.
   This makes inserts and updates slower, and it is a dirty idea, I think.
   (That is, include frequently-SELECTed columns in the index itself.
   It doesn't help when rows are usually fetched with SELECT * FROM,
   and it is quite an inelegant trick.)
 2) Tune the filesystem and disk drive parameters for the datadir.
   MyISAM table data is cached only by the filesystem cache.
   But I regret that I don't have much knowledge of filesystems.
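A sketch of idea 1) in SQL, reusing the thold_data table from earlier
in the thread as a stand-in (the index name is invented), together with
the key cache counters that show whether the indexes fit in memory:

-- Covering index: this SELECT list can be answered from the index
-- alone, so the .MYD data file is never touched.
ALTER TABLE thold_data
  ADD INDEX idx_cover (data_id, thold_enabled, thold_hi, thold_low);

SHOW GLOBAL STATUS LIKE 'Key_read%';
-- Key_reads / Key_read_requests well above ~1% suggests that
-- key_buffer_size in my.cnf is too small for the index working set.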

Sorry I can't be of more help.

regards,


ts. tanaka//

2012/5/23 Yu Watanabe yu.watan...@jp.fujitsu.com:
 Hello Tsubasa.

 Thank you for the reply.

 Our high-load DBs use both InnoDB and MyISAM,
 especially MyISAM.

 I will consider the tuning of innodb_buffer_pool_size as well.

 Do you know any tips for tuning disk access for MyISAM?

 Thanks,
 Yu

 Tsubasa Tanaka wrote:
Hello,

It seems your mysqld doesn't use enough memory.

Date   Time  CPU%  RSS VSZ
2012/5/22  21:00:39  109   294752  540028

If your mysqld uses InnoDB often,
raise innodb_buffer_pool_size in your my.cnf.

http://dev.mysql.com/doc/refman/5.1/en/innodb-parameters.html#sysvar_innodb_buffer_pool_size


table_cache
thread_cache_size
tmp_table_size
max_heap_table_size

but they made little difference.

Those settings are a solution only for queries with large results, I think.
If you don't see evidence that large results are the problem,
you should try other approaches too.

regards,


ts. tanaka//

2012/5/23 Yu Watanabe yu.watan...@jp.fujitsu.com:
 Also following is the free command result.

              total       used       free     shared    buffers     cached
 Mem:       8162380    7843676     318704          0      95632    5970892
 -/+ buffers/cache:    1777152    6385228
 Swap:      8032492      23560    8008932

 Thanks,
 Yu


 Yu Watanabe wrote:
Hello all.

I would like to ask for advice with performance tuning with MySQL.

Following are some data for my server.

CPU: Xeon(TM) 2.8GHz (2CPUs - 8core total)
Memory : 8GB
OS : RHEL 4.4 x86_64
MySQL  : MySQL 5.0.50sp1-enterprise

Attached file
# my.cnf.txt  : my.cnf information
# mysqlext_20120522131034.log : variable and status information from 
mysqladmin

I have 2 databases working under high load.

I wanted to speed up my select and update queries not by
optimizing the queries themselves but by tuning my.cnf.

I have referred to the following site,
http://dev.mysql.com/doc/refman/5.0/en/server-status-variables.html

and read High Performance MySQL, 2nd edition,
and increased the following values,

table_cache
thread_cache_size
tmp_table_size
max_heap_table_size

but they made little difference.

According to the ps and sar results:

*1 PS result
Date   Time  CPU%  RSS VSZ
2012/5/22  21:00:39  109   294752  540028

*2 SAR
Average CPU user 25%
sys  5%
io   3%

I assume that MySQL could do more work, but currently it does not.

I am considering offloading one high-load database to a
separate process and making MySQL run as multiple processes.

It would be a great help if people in this forum could give
us advice on the tuning.

Best Regards,
Yu Watanabe

__


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql




 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Query help,,,

2012-05-17 Thread Rick James
For (1), ponder:

Group-by trick example: Find the most populous city in each state:

SELECT  state, city, population, COUNT(*) AS num_cities
FROM
  ( SELECT  state, city, population
FROM  us
ORDER BY  state, population DESC ) p
GROUP BY  state
ORDER BY  state;
+---+-+++
| state | city| population | num_cities |
+---+-+++
| AK| Anchorage   | 276263 | 16 |
| AL| Birmingham  | 231621 | 58 |
| AR| Little Rock | 184217 | 40 |
| AZ| Phoenix |1428509 | 51 |
| CA| Los Angeles |3877129 |447 |
...

 -Original Message-
 From: Don Wieland [mailto:d...@pointmade.net]
 Sent: Thursday, May 17, 2012 7:37 AM
 To: mysql@lists.mysql.com
 Subject: Query help,,,
 
 Hi folks,
 
 I am trying to compile a query that does statistics on appointments
 based on specific criteria. Here is my starting query:
 
 SELECT
  u.user_id,
  c.client_id,
  c.first_name,
  c.last_name,
  a.time_start AS stime,
  FROM_UNIXTIME(a.time_start,'%Y-%m-%d') AS formatted
 
   FROM tl_appt a
LEFT JOIN tl_users u ON a.user_id = u.user_id
LEFT JOIN tl_clients c ON a.client_id = c.client_id
LEFT JOIN tl_rooms r on a.room_id = r.room_id
 
 WHERE a.appt_id IS NOT NULL
 AND FROM_UNIXTIME(a.time_start,'%Y-%m-%d') between '2011-05-01' and '2011-12-31'
 and r.location_id = '2' and a.user_id IN (14)
 ORDER BY u.last_name, u.first_name, c.last_name, c.first_name
 
 This will return a set of rows where a client may have MORE THEN ONE
 appointment. From this set I need to narrow more:
 
 1) Only display the first appointment PER Client. (there will be no
 duplicate client_id)
 
 Then once I have that set of rows established, I need to query for two
 more result:
 
 1) Show New Customers = those where the COUNT of appointments (in the
 entire tl_appt table) LESS THAN the stime = 0
 
 2) Show FORMER Customers = those where the COUNT of appointments (in
 the entire tl_appt table) LESS THAN the stime > 0
 
 I am sure I need a multiple select query, but I am having a hard time
 wrapping my head around it.
 
 Thanks for any feedback.
 
 Don
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Query help,,,

2012-05-17 Thread Peter Brawley

On 2012-05-17 9:37 AM, Don Wieland wrote:

Hi folks,

I am trying to compile a query that does statistics on appointments 
based on specific criteria. Here is my starting query:


SELECT
u.user_id,
c.client_id,
c.first_name,
c.last_name,
a.time_start AS stime,
FROM_UNIXTIME(a.time_start,'%Y-%m-%d') AS formatted

 FROM tl_appt a
  LEFT JOIN tl_users u ON a.user_id = u.user_id
  LEFT JOIN tl_clients c ON a.client_id = c.client_id
  LEFT JOIN tl_rooms r on a.room_id = r.room_id

  WHERE a.appt_id IS NOT NULL AND 
FROM_UNIXTIME(a.time_start,'%Y-%m-%d') between '2011-05-01' and 
'2011-12-31' and r.location_id = '2' and a.user_id IN (14) ORDER BY 
u.last_name, u.first_name, c.last_name, c.first_name


This will return a set of rows where a client may have MORE THEN ONE 
appointment. From this set I need to narrow more:


1) Only display the first appointment PER Client. (there will be no 
duplicate client_id)


Then once I have that set of rows established, I need to query for two 
more result:


1) Show New Customers = those where the COUNT of appointments (in the 
entire tl_appt table) LESS THAN the stime = 0


2) Show FORMER Customers = those where the COUNT of appointments (in 
the entire tl_appt table) LESS THAN the stime > 0


I am sure I need a multiple select query, but I am having a hard time 
wrapping my head around it.


Thanks for any feedback.


Conceptually the hard bit might be narrowing to the first appt per 
client. For various approaches to this task see Within-group 
aggregates at http://www.artfulsoftware.com/infotree/queries.php.


If new & former clients are to be retrieved from first appts only, you
might want to save the result of the within-group aggregates query to a
result table and query that. If the whole process has to happen often,
consider developing a wee cube, or just a denormalised result table that
can be invoked whenever needed.
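As one minimal sketch of a within-group aggregate for the first appt
per client (untested, using only the tl_appt columns from the original
query):

SELECT a.client_id, a.time_start AS stime
FROM tl_appt a
JOIN ( SELECT client_id, MIN(time_start) AS min_start
       FROM tl_appt
       GROUP BY client_id ) m
  ON m.client_id = a.client_id
 AND m.min_start = a.time_start;

New vs. former customers then fall out of counting each client's
appointments earlier than that stime (0 = new, more than 0 = former).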


PB

-



Don



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Group_Concat help...

2012-03-19 Thread Mihail Manolov
Try this

SELECT `user_id`, `login_ip`,
COUNT(`id`) AS 'connections'
FROM `mysql_test`
WHERE `login_datetime` BETWEEN '2012-03-19 00:00:00' AND '2012-03-19 23:59:59'
GROUP BY `user_id`, `login_ip`
HAVING COUNT(`id`) > 2
ORDER BY COUNT(`id`) DESC 
LIMIT 0, 15;

On Mar 19, 2012, at 12:06 PM, Steven Staples wrote:

 SELECT `user_id`, GROUP_CONCAT(DISTINCT `login_ip`) AS 'login_ips',
 COUNT(`id`) AS 'connections'
 FROM `mysql_test`
 WHERE `login_datetime` BETWEEN '2012-03-19 00:00:00' AND '2012-03-19 23:59:59'
 GROUP BY `user_id`
 HAVING COUNT(`id`) > 2
 ORDER BY COUNT(`id`) DESC 
 LIMIT 0, 15;


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql


