MySQL password issue

2012-10-25 Thread Tim Thorburn

Hi All,

Today I ran into an interesting problem with my MySQL installation.  
I'll start off with the usual suspects: this is my development laptop 
running Windows 7 Ultimate 64-bit; it's a fresh install, fully updated 
from Windows Update.  I downloaded the Windows Installer version of 
MySQL, which is MySQL 5.5.28 Community Server.  I ran the setup choosing 
the Developer install, set a root password, then created two users for 
Backup and Replication.


The install completed successfully, and I was able to start MySQL 
Workbench, logging in as root - once.  I made a user in Server 
Administration and then imported a .sql backup file.  I then went to SQL 
Development to review the result of the restore.  I wasn't able to view 
the imported tables; Workbench seemed stuck on Retrieving Tables.  This 
database drives a website I'm working on, and I was able to load the 
site in a browser on my laptop - again, only once.


Next, I went to add another user, and then tried to import another .sql 
backup file.  This time, when I clicked Start Import, Workbench prompted 
me to enter my root password again.  Odd, I thought, as I'd originally 
told Workbench to save the password in its vault.  Unfortunately, 
Workbench won't accept the root password.  I receive the following 
message in a popup window:


Cannot Connect to Database Server
Your connection attempt failed for user 'root' from your host to server 
at localhost:3306:

Access denied for user 'root'@'localhost' (using password: YES)

The MySQL service is still running; I've tried stopping, starting, and 
just restarting the service.  I've tried rebooting.  All with the same 
result.  What could have happened to this install to make MySQL no 
longer accept my password?  I have already tried uninstalling via the 
MySQL Installer, confirming that C:\ProgramData\MySQL was removed, and 
then shutting Windows down completely before trying to reinstall.  The 
same thing happens - I'm able to log in once, then several minutes 
later it will no longer accept a password.  This is a fresh install 
again, so there's not a lot of pain in uninstalling and reinstalling.


Any thoughts on why this is happening, or more importantly, how to 
correct the problem?


TIA,
-Tim


RE: MySQL password issue

2012-10-25 Thread Nitin Mehta
Hi Tim,

It looks like your .sql backup file has changed the password for the root 
user, and the reason the change persists across reinstalls is probably that 
your data directory is outside the install directory, so it isn't removed 
when you uninstall.

How to correct the problem:

Stop the service, start the server with the option '--skip-grant-tables', 
log in as the root user, and change the password from inside mysql.

Hope that helps.
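
For reference, a minimal sketch of that recovery on Windows, assuming the 
MySQL 5.5 mysql.user layout; the service name and the new password below 
are placeholders:

-- 1) Stop the service (e.g. "net stop MySQL" - the exact service name
--    depends on your install).
-- 2) Start the server with grant checking disabled, e.g.:
--      mysqld --skip-grant-tables --skip-networking
-- 3) Connect with "mysql -u root" (no password is required now) and run:
UPDATE mysql.user SET Password = PASSWORD('NewPassword') WHERE User = 'root';
FLUSH PRIVILEGES;
-- 4) Shut the server down and start the Windows service normally again.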




Re: MySQL password issue

2012-10-25 Thread Tim Thorburn
Not sure how the .sql backup file would have done that, but it seems to 
be working now.  I'll just step away slowly and carry on.  Thanks for 
the help!



Replication Question

2012-10-25 Thread Sabika M
I have replication setup in the following way:

A -> B -> C

I am making updates to server A. I want to stop all my updates and point
them to server C. After I start writing to server C, can I use the CHANGE
MASTER statement to make C the master of A (taking B out of the topology)
and then set up peer-to-peer replication between A and C, without taking a
new backup of C to initialize the replication from C to A?

I guess what I am really asking is: if the data is the same, is a backup
required for initialization?
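
For what it's worth, a minimal sketch of that repoint, assuming writes to
A have stopped, B and C are fully caught up, and you capture C's binlog
coordinates before new writes begin on C; the host, user, password, and
log file/position values below are placeholders:

-- on C (the new write master): note where its binary log is
SHOW MASTER STATUS;

-- on A: start replicating from C at those coordinates
CHANGE MASTER TO
    MASTER_HOST     = 'server-c',
    MASTER_USER     = 'repl',
    MASTER_PASSWORD = 'repl_password',
    MASTER_LOG_FILE = 'mysql-bin.000123',
    MASTER_LOG_POS  = 4;
START SLAVE;

-- on C: repoint it from B to A the same way (using the output of SHOW
-- MASTER STATUS on A) to complete the A <-> C pair.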


Re: Schemas for storing and reporting on hashtags

2012-10-25 Thread Hank
Dehua,

Thanks. You are correct.  Perhaps I was just over-thinking it.

-Hank
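
For completeness, a rough sketch of the two lookups described further down
the thread (all the hashtags a user has used, then every comment carrying
any of those hashtags), using the xyz table from the quoted messages; the
comments table and the user id 42 are assumed:

-- hashtags this user has used
SELECT DISTINCT hashtag FROM xyz WHERE user_id = 42;

-- comments tagged with any hashtag this user has used
SELECT DISTINCT c.*
  FROM xyz AS mine
  JOIN xyz AS other USING (hashtag)
  JOIN comments AS c ON c.comment_id = other.comment_id
 WHERE mine.user_id = 42;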


On Fri, Oct 19, 2012 at 9:48 PM, Dehua Yang meflyingf...@gmail.com wrote:
 Hi Hank

 I just can think like this table

 CREATE TABLE xyz (
 hashtag VARCHAR(...) NOT NULL,
 comment_id ... NOT NULL,
 user_id bigint unsigned not null,
 PRIMARY KEY (hashtag, comment_id),
 INDEX(comment_id, hashtag),
 index idx_user_id(user_id)
 ) ENGINE = InnoDB;


 One user wants to check all his comments:
 select * from xyz where user_id=x;

 Actually, I think your case is very classic.

 Hope that you can tell me how you would resolve this problem.



 On Sat, Oct 20, 2012 at 3:26 AM, Hank hes...@gmail.com wrote:

 It's actually much more complicated than a simple many-to-many
 relationship.  For instance, users may enter many hashtags in many
 comments, and a user might want to see all the hashtags they've used,
 and then find all the comments with those hashtags.

 I'm not trying to re-create the specification or build it here in
 discussion.  I'm looking to see if anyone has it done already, and I
 can review their fleshed-out design.  I've been a database architect
 for 20+ years, so I know what I'm doing.  I'm not asking for people to
 show me what to do.  Like I said, I could sit down and design it
 myself pretty quickly, but I would like to see what other people have
 *actually done* to solve the problem before.

 -Hank


 On Fri, Oct 19, 2012 at 2:42 PM, Rick James rja...@yahoo-inc.com wrote:
  Many-to-many?  That is, can a comment have many different hashtags?  And
  a hashtag can be associated with many comments?
 
  Best practice for many-to-many:
  CREATE TABLE xyz (
  hashtag VARCHAR(...) NOT NULL,
  comment_id ... NOT NULL,
  PRIMARY KEY (hashtag, comment_id),
  INDEX(comment_id, hashtag)
  ) ENGINE = InnoDB;
 
  One might want to normalize the hashtags, but it does not seem
  warranted in this situation.
 
  -Original Message-
  From: Hank [mailto:hes...@gmail.com]
  Sent: Friday, October 19, 2012 8:58 AM
  To: MySql
  Subject: Schemas for storing and reporting on hashtags
 
  Are there any established best practices or schemas for incorporating
  twitter-like hashtags into a database?
 
  Let's say I have a blog with a commenting system, and I want to allow
  people to add hashtags to the comments.
 
  I could certainly create one on my own (it's not that difficult), but
  I'd like to see what other people have done in terms of storage and
  features.
 
  I'm also looking for a solid basic implementation, not something overly
  complex.
 
  Thanks,
 
  -Hank
 
  (query, mysql)
 





 --
 Gtalk : meflyingf...@gmail.com
 Skype name : meflyingfish
 Twitter: http://twitter.com/whitepoplar





MySQL Monitor and Percona

2012-10-25 Thread Sabika M
We are in the process of switching to the Percona binaries of MySQL.  I am
using Percona Server 5.5.27 and monitoring the MySQL server with MySQL
Enterprise Monitor.  It starts up fine, but after a while I end up with
MySQL Monitor connections stacking up until the server becomes pretty much
useless.  This happens only on servers running Percona.  Has anyone else
seen this issue, or heard of it?  Anything helps!


Help with purging old logs for each customer ID

2012-10-25 Thread Daevid Vincent
I have a customer log table that is starting to fill up rapidly (we have
hundreds of thousands of users, but many are transient - they use the
service for a few months, or use the free trial and quit, etc.)

CREATE TABLE `customers_log` (
  `customer_log_id` bigint(20) unsigned NOT NULL auto_increment,
  `customer_id` int(10) unsigned default '0',
  `created_on` timestamp NOT NULL default CURRENT_TIMESTAMP
               on update CURRENT_TIMESTAMP,
  `type` enum('View','Action','Admin','Search','Login','Logout',
              'Access','General','API'),
  `source` enum('web','mobile','system'),
  `body` text,
  PRIMARY KEY  (`customer_log_id`),
  KEY `created_on` (`created_on`),
  KEY `customers_id` (`customer_id`)
) ENGINE=InnoDB

What I'd like to do now is make a 'rolling log', in that I want to DELETE
any entries older than 90 days for EACH `customer_id`.

I'm not sure how to do that in a query.  I'd rather not iterate over each
customer_id if I can help it.







RE: Help with purging old logs for each customer ID

2012-10-25 Thread Rick James
Off hand, I would iterate over the PRIMARY KEY, looking at a thousand rows at a 
time, DELETEing any that need to be purged.  I would use a Perl or PHP loop, or 
write a stored procedure.  More discussion of huge deletes (which this 
_could_ be):
http://mysql.rjweb.org/doc.php/deletebig
(PARTITIONing does not apply in your case, as I understand it.)

I like the loop:

SELECT @z := customer_log_id FROM customers_log LIMIT 1000, 1;
DELETE FROM customers_log
WHERE customer_log_id >= @left_off AND customer_log_id < @z
  AND created_on < NOW() - INTERVAL 90 DAY;
sleep a few seconds (to be a nice guy)

Plus code to take care of iterating and terminating.

That loop could be done continually.

It seems that customer_id is irrelevant??  
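
For what it's worth, a rough sketch of that chunked delete wrapped in a
stored procedure, untested, using the simple 90-days-from-now rule as
above; the procedure name, chunk size, and sleep time are arbitrary:

DELIMITER //
CREATE PROCEDURE purge_customers_log()
BEGIN
  DECLARE left_off BIGINT UNSIGNED DEFAULT 0;
  DECLARE z        BIGINT UNSIGNED;
  purge_loop: LOOP
    -- find the id 1000 rows past where we left off
    SET z = NULL;
    SELECT customer_log_id INTO z
      FROM customers_log
     WHERE customer_log_id >= left_off
     ORDER BY customer_log_id
     LIMIT 1000, 1;
    IF z IS NULL THEN
      -- fewer than 1000 rows remain; purge the tail and stop
      DELETE FROM customers_log
       WHERE customer_log_id >= left_off
         AND created_on < NOW() - INTERVAL 90 DAY;
      LEAVE purge_loop;
    END IF;
    DELETE FROM customers_log
     WHERE customer_log_id >= left_off
       AND customer_log_id < z
       AND created_on < NOW() - INTERVAL 90 DAY;
    SET left_off = z;
    DO SLEEP(2);   -- be a nice guy
  END LOOP;
END//
DELIMITER ;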




RE: Replication Question

2012-10-25 Thread Rick James
Sounds right.
Be sure there are no writes during certain critical times, and that
replication is caught up.

If you want to verify the sameness, see Percona's pt-table-checksum.





RE: Help with purging old logs for each customer ID

2012-10-25 Thread Daevid Vincent
Well, the customer_id is relevant in that I want the last 90 days relative
to each customer.

customer_id = 123 might have logs from jan - mar
customer_id = 444 might have logs from feb - may

So it's a rolling log of THEIR last 90 days - from their most recent log
entry, back 90 days from there.

Does that make more sense?

I guess I was trying to avoid looping over every customer ID and computing
it per customer if I could help it.  I thought that by using a GROUP BY or
something I could group all the logs for a given customer and then trim
them that way.

But maybe brute force is the way to go?




RE: Help with purging old logs for each customer ID

2012-10-25 Thread Rick James
If the 90 days is back from MAX(created_on) for a given customer...
   INDEX(customer_id, created_on)
will probably be needed.  And that should replace KEY `customers_id` 
(`customer_id`).

Maybe...

DELETE FROM customers_log AS a
WHERE a.customer_log_id >= @left_off AND a.customer_log_id < @z
  AND a.created_on <
     ( SELECT MAX(created_on) FROM customers_log
         WHERE customer_id = a.customer_id
     ) - INTERVAL 90 DAY;
(Since this has the subquery, I would do only 100 at a time, not 1000)

Or...
CREATE TEMPORARY TABLE tmp
SELECT customer_id, MAX(created_on) - INTERVAL 90 DAY  AS cutoff
FROM customers_log
GROUP BY customer_id;
DELETE a FROM customers_log AS a
   JOIN tmp  ON a.customer_id = tmp.customer_id
WHERE a.customer_log_id >= @left_off AND a.customer_log_id < @z
  AND a.created_on < tmp.cutoff;

If you have millions of rows, a delete without some kind of loop is asking for 
trouble.

Or...
Turning things around to base it on customers...
Loop through customer_ids (yeah, you did not want to do this)

SELECT @id := 0;
StartLoop:
SELECT @id := customer_id FROM customers_log
    WHERE customer_id > @id ORDER BY customer_id LIMIT 1;
if @id is NULL, exit
DELETE FROM customers_log AS a
WHERE a.customer_id = @id
  AND a.created_on <
     ( SELECT MAX(created_on) FROM customers_log
         WHERE customer_id = @id
     ) - INTERVAL 90 DAY;
EndLoop.

Since there is no rush for the purging, there is little need to optimize it 
other than to keep it from interfering with other queries.  To that end, the 
compound index I propose is important.
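
Spelled out against the CREATE TABLE above, that index change would look
something like this (the new index name is arbitrary):

ALTER TABLE customers_log
  DROP KEY `customers_id`,
  ADD KEY `customer_created` (customer_id, created_on);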


RE: MySQL Monitor and Percona

2012-10-25 Thread Rick James
What does SHOW FULL PROCESSLIST say?

What values do you have for
max_connections
wait_timeout (GLOBAL version)




Re: MySQL Monitor and Percona

2012-10-25 Thread Sabika M
I don't have the full output anymore.  It had a few hundred of these.  I
had issues with a DROP DATABASE that hung, which made me run SHOW
PROCESSLIST.  Sadly, I did not check the processlist beforehand.

| 101189 | monitor | localhost:42585 | NULL | Query | 4079 | NULL      | SHOW /*!5 ENGINE */ INNODB STATUS | 0 | 0 | 1 |
| 101194 | monitor | localhost:42589 | NULL | Query | 4069 | NULL      | SHOW /*!5 ENGINE */ INNODB STATUS | 0 | 0 | 1 |
| 101204 | monitor | localhost:42599 | NULL | Query | 4049 | executing | SHOW /*!5 GLOBAL */ STATUS        | 0 | 0 | 1 |
| 101214 | monitor | localhost:42608 | NULL | Query | 4029 | executing | SHOW /*!5 GLOBAL */ STATUS        | 0 | 0 | 1 |

wait_timeout    | 28800
max_connections | 5000

Since I sent this email, we have checked network connectivity and fixed a
typo in the hostname.





RE: Help with purging old logs for each customer ID

2012-10-25 Thread Daevid Vincent
Thank you Rick! You're a super freak! ;-p

This gets me in the vicinity of where I'm trying to go and I learned a few
new tricks with the StartLoop: stuff too! Neat!

d.


RE: MySQL Monitor and Percona

2012-10-25 Thread Rick James
A guess -- MEM is issuing that on a timed basis, but the query is running 
longer than the time period.  Can you slow MEM down?
Meanwhile, file a bug report.  You paid too much money for this kind of junior 
mistake.
