Re: MySQL 5.5.33 History list not purging?

2014-09-11 Thread Brad Heller
So I've done some more digging on this.


   1. Upgraded to MySQL 5.5.37
   2. Made sure we didn't have any old/long-running transactions--nothing
   more than a few seconds.
   3. Did a dump/reload into a new DB and started with an empty history
   list. A few days later, we're back up over 3 million entries in the list.

We're currently writing about 50MB/s to that machine. Is it possible the
purge thread just...can't keep up for some reason? How can I get better
visibility into how quickly the purge thread is working versus how many undo
log entries are being added to the history list?
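
For reference, the only place I've found to watch this on 5.5 is the
TRANSACTIONS section of SHOW ENGINE INNODB STATUS; a rough sketch (the numbers
are made up, and the layout is assumed from the stock 5.5 output):

$ mysql -e "SHOW ENGINE INNODB STATUS\G" | grep -A 1 "Purge done"
Purge done for trx's n:o < 8A3F2C undo n:o < 0
History list length 3120482

Sampling those two lines over time should show whether purge is keeping pace
with the undo entries coming in.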

Thanks,


*Brad Heller *| Director of Engineering | Cloudability.com | 541-231-1514 |
Skype: brad.heller | @bradhe <http://www.twitter.com/bradhe> | @cloudability
<http://www.twitter.com/cloudability>

We're hiring! https://cloudability.com/jobs
<http://www.cloudability.com/jobs>


On Sun, Sep 7, 2014 at 2:04 AM, Jesper Wisborg Krogh wrote:

> Hi Brad,
>
> > -Original Message-
> > From: Brad Heller [mailto:b...@cloudability.com]
> > Sent: Sunday, 7 September 2014 03:07
> > To: MySQL General List
> > Subject: MySQL 5.5.33 History list not purging?
> >
> > For some reason, the history list isn't purging on one of my masters. This is
> > causing all kinds of weird issues/behavior with reads. Here's the last 8 or so
> > hours of history list length:
> >
> > http://i.imgur.com/Q4DEeVi.png
> >
>
> I would start looking for an old transaction. You can use SHOW ENGINE
> INNODB STATUS or the information_schema.INNODB_TRX table to look for that.
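>
> For example, something like this (a sketch; I've trimmed the column list
> down to the useful ones):
>
> mysql> SELECT trx_id, trx_started, trx_mysql_thread_id, trx_query
>     ->   FROM information_schema.INNODB_TRX
>     ->  ORDER BY trx_started
>     ->  LIMIT 5;
>
> Any row with a trx_started more than a few seconds old is a candidate for
> holding back the purge.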
>
> Best regards,
> Jesper Krogh
> MySQL Support
>
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql
>
>


Re: MySQL 5.5.33 History list not purging?

2014-09-06 Thread Brad Heller
Good point. We'll try that out.

I noticed our ibdata file is gigantic now, likely due to the ALTER TABLE
migration we ran. What's the relationship here, do you think?

*Brad Heller *| Director of Engineering | Cloudability.com | 541-231-1514 |
Skype: brad.heller | @bradhe <http://www.twitter.com/bradhe> | @cloudability
<http://www.twitter.com/cloudability>

We're hiring! https://cloudability.com/jobs
<http://www.cloudability.com/jobs>


On Sat, Sep 6, 2014 at 10:12 AM, Reindl Harald wrote:

>
> On 06.09.2014 at 19:07, Brad Heller wrote:
> > For some reason, the history list isn't purging on one of my masters. This
> > is causing all kinds of weird issues/behavior with reads
>
> so why not just try updating first?
>
> 5.5.39 is recent and contains *a lot* of bug fixes.
> Look at the changelogs - there are tons of replication-related
> bug fixes with each minor update.
>
>


MySQL 5.5.33 History list not purging?

2014-09-06 Thread Brad Heller
For some reason, the history list isn't purging on one of my masters. This
is causing all kinds of weird issues/behavior with reads. Here's the last 8
or so hours of history list length:

http://i.imgur.com/Q4DEeVi.png

Here are my innodb%purge% settings.

mysql> show variables where Variable_name like 'innodb%purge%'\G
*************************** 1. row ***************************
Variable_name: innodb_max_purge_lag
        Value: 100
*************************** 2. row ***************************
Variable_name: innodb_purge_batch_size
        Value: 100
*************************** 3. row ***************************
Variable_name: innodb_purge_threads
        Value: 1
3 rows in set (0.09 sec)

This machine is doing about 500-100 TPS. The problem manifested right after I
ran an ALTER TABLE ... ADD COLUMN on a massive table (600 million records,
100GB of data).

What other things can I look at to figure out how to increase bandwidth for
the purge thread?
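
For reference, these are the only purge-related knobs I can find in 5.5, so a
my.cnf sketch looks roughly like this (values are illustrative, not what we
actually run):

[mysqld]
innodb_purge_threads    = 1    # 0 = purge in the master thread, 1 = dedicated purge thread (the 5.5 maximum)
innodb_purge_batch_size = 300  # undo log pages purged per batch
innodb_max_purge_lag    = 0    # 0 = never throttle DML to let purge catch up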

Thanks,

*Brad Heller *| Director of Engineering | Cloudability.com | 541-231-1514 |
Skype: brad.heller | @bradhe <http://www.twitter.com/bradhe> | @cloudability
<http://www.twitter.com/cloudability>

We're hiring! https://cloudability.com/jobs
<http://www.cloudability.com/jobs>


Re: SHOW CREATE TABLE suddenly slow on InnoDB?

2014-03-17 Thread Brad Heller
Hey Morgan,

We actually only have about 60 tables in that database. I've tried
increasing the cache and open tables limits and get the same behavior.

mysql> select @@table_definition_cache, @@table_open_cache, @@innodb_file_per_table, @@innodb_open_files;
+--------------------------+--------------------+-------------------------+---------------------+
| @@table_definition_cache | @@table_open_cache | @@innodb_file_per_table | @@innodb_open_files |
+--------------------------+--------------------+-------------------------+---------------------+
|                     4096 |               3000 |                       1 |                 300 |
+--------------------------+--------------------+-------------------------+---------------------+
1 row in set (0.10 sec)

A few other tests I've tried:

1. Stand up a new machine, dump just the schema into it, and run the test. It
performs flawlessly, so it's probably just this machine/snapshot.
2. Stand up a snapshot of my existing machine, truncate the tables,
optimize the truncated tables, and run the test. I get the bad behavior!

Correct me if I'm wrong, but it would appear that something is fundamentally
broken in this machine's InnoDB ibdata file/data dictionary? All the
contention comes out of the dictionary, but I'd expect the optimize to
rewrite the dictionary entries...
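
To be explicit about what I mean by "the optimize" (a sketch; my_table is a
placeholder name):

mysql> OPTIMIZE TABLE my_table;

On InnoDB that reports "Table does not support optimize, doing recreate +
analyze instead", i.e. a full table rebuild, which is why I expected the
dictionary entries to be refreshed along the way.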



*Brad Heller *| Director of Engineering | Cloudability.com | 541-231-1514 |
Skype: brad.heller | @bradhe <http://www.twitter.com/bradhe> |
@cloudability<http://www.twitter.com/cloudability>

We're hiring! https://cloudability.com/jobs<http://www.cloudability.com/jobs>


On Mon, Mar 17, 2014 at 11:55 AM, Morgan Tocker wrote:

> Hi Brad,
>
> > That sounds right. Here's the process list (scrubbed) and the show engine
> > innodb status. Notice that the SHOW CREATE TABLE statements aren't all for
> > the same table; it just got cleaned up that way.
>
> It shouldn't matter whether they are for the same or different tables - in 5.5
> there is one table open cache "instance" - so only one thread can be opening or
> closing tables at a time.
>
> In 5.6 this is configurable to reduce contention:
>
> http://dev.mysql.com/doc/refman/5.6/en/server-system-variables.html#sysvar_table_open_cache_instances
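>
> A minimal sketch of what that looks like in my.cnf on 5.6 (the value is just
> an illustration, not a recommendation):
>
> [mysqld]
> table_open_cache_instances = 8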
>
> > https://gist.github.com/bradhe/c9f00eaf93ac588b8339
> >
> > We have the defaults for table_definition_cache and table_open_cache (400
> > each).
>
> I am going to guess and say that you may have a sharded environment with a
> large number of tables?
>
> Another solution that may work is to increase these caches.  In most
> cases it will work fine, but MPB has also blogged about an exception where
> you can get negative scalability (so many cache misses that the cache can't
> work effectively):
>
> http://www.mysqlperformanceblog.com/2009/11/16/table_cache-negative-scalability/
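>
> If you do go that route, a my.cnf sketch of the relevant settings (sizes are
> illustrative only):
>
> [mysqld]
> table_open_cache       = 4000
> table_definition_cache = 4000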
>
> I'm not sure how up-to-date the edge case issue is.  But hopefully this
> gives you some starting points.
>
> (Others, feel free to chime in!)
>
> - Morgan


Re: SHOW CREATE TABLE suddenly slow on InnoDB?

2014-03-17 Thread Brad Heller
Hey Morgan,

That sounds right. Here's the process list (scrubbed) and the show engine
innodb status. Notice that the SHOW CREATE TABLE statements aren't all for the
same table; it just got cleaned up that way.

https://gist.github.com/bradhe/c9f00eaf93ac588b8339

We have the defaults for table_definition_cache and table_open_cache (400
each).

*Brad Heller *| Director of Engineering | Cloudability.com | 541-231-1514 |
Skype: brad.heller | @bradhe <http://www.twitter.com/bradhe> |
@cloudability<http://www.twitter.com/cloudability>

We're hiring! https://cloudability.com/jobs<http://www.cloudability.com/jobs>


On Mon, Mar 17, 2014 at 5:55 AM, Morgan Tocker wrote:

> Hi Brad,
>
> > I'm trying to figure out how InnoDB executes a SHOW CREATE TABLE query so I
> > can figure out what could possibly have made them suddenly slow down?
> >
> > mysql> SHOW CREATE TABLE `my_table`;
> > ...
> > 1 row in set (37.48 sec)
> >
> > We tend to execute many of these statements concurrently, but it's never
> > been a problem until recently. I upgraded the IO subsystem, and our
> > statistics indicate that it's not maxing out IO (at least IOPS).
> >
> > This is problematic because the ORM we're using uses that to figure out the
> > structure of our DB...
>
> I am going to guess that it could be contention trying to open the table.
>  i.e. when you run into this issue and run SHOW PROCESSLIST, does it show
> any threads waiting in state "Opening tables"?
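>
> For example, something like this (a sketch using information_schema, which
> should be available on 5.5):
>
> mysql> SELECT id, time, state, info
>     ->   FROM information_schema.PROCESSLIST
>     ->  WHERE state = 'Opening tables';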
>
> (If you can paste a sanitized SHOW PROCESSLIST this helps a lot.)
>
> - Morgan


Re: SHOW CREATE TABLE suddenly slow on InnoDB?

2014-03-17 Thread Brad Heller
Hey Andrew,

I'm on 5.5.27. Good thought. Just flipped that setting off and I'm getting the
same results. It pretty clearly seems to be InnoDB: if I create a HEAP
table, I don't get this behavior.
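
For the record, the flip itself was just the dynamic setting, roughly:

mysql> SET GLOBAL innodb_stats_on_metadata = 0;

(and = 1 to turn it back on).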

FWIW, I have (and always have had) innodb_file_per_table enabled, but my
shared tablespace file is still gigantic (56GB)?

*Brad Heller *| Director of Engineering | Cloudability.com | 541-231-1514 |
Skype: brad.heller | @bradhe <http://www.twitter.com/bradhe> |
@cloudability<http://www.twitter.com/cloudability>

We're hiring! https://cloudability.com/jobs<http://www.cloudability.com/jobs>


On Mon, Mar 17, 2014 at 1:07 AM, Andrew Moore  wrote:

> Hey Brad. What version are you using? My immediate thought is to check if
> innodb_stats_on_metadata is off. If it is on, switch off and check your
> timings again.
>
> Regards
> On 17 Mar 2014 04:40, "Brad Heller"  wrote:
>
>> Hey all,
>>
>> I'm trying to figure out how InnoDB executes a SHOW CREATE TABLE query so I
>> can figure out what could possibly have made them suddenly slow down?
>>
>> mysql> SHOW CREATE TABLE `my_table`;
>> ...
>> 1 row in set (37.48 sec)
>>
>> We tend to execute many of these statements concurrently, but it's never
>> been a problem until recently. I upgraded the IO subsystem, and our
>> statistics indicate that it's not maxing out IO (at least IOPS).
>>
>> This is problematic because the ORM we're using uses that to figure out the
>> structure of our DB...
>>
>> *Brad Heller *| Director of Engineering | Cloudability.com | 541-231-1514|
>> Skype: brad.heller | @bradhe <http://www.twitter.com/bradhe> |
>> @cloudability<http://www.twitter.com/cloudability>
>>
>> We're hiring! https://cloudability.com/jobs<
>> http://www.cloudability.com/jobs>
>>
>


SHOW CREATE TABLE suddenly slow on InnoDB?

2014-03-16 Thread Brad Heller
Hey all,

I'm trying to figure out how InnoDB executes a SHOW CREATE TABLE query so I
can figure out what could possibly have made them suddenly slow down?

mysql> SHOW CREATE TABLE `my_table`;
...
1 row in set (37.48 sec)

We tend to execute many of these statements concurrently, but it's never
been a problem until recently. I upgraded the IO subsystem, and our
statistics indicate that it's not maxing out IO (at least IOPS).

This is problematic because the ORM we're using uses that to figure out the
structure of our DB...

*Brad Heller *| Director of Engineering | Cloudability.com | 541-231-1514 |
Skype: brad.heller | @bradhe <http://www.twitter.com/bradhe> |
@cloudability<http://www.twitter.com/cloudability>

We're hiring! https://cloudability.com/jobs<http://www.cloudability.com/jobs>


Re: Concurrent read performance problems

2013-08-11 Thread Brad Heller
Johan, your suggestion to tweak max_heap_table_size and tmp_table_size
fixed the issue. Bumping them both to 512MB got our performance back
on par. I had come up with a way to avoid the contention using a complex set
of temp tables, but its performance was abysmal.

By reverting to the more straightforward query with the subselect, as well
as tweaking max_heap_table_size and tmp_table_size, I saw no resource
contention causing slowdowns, and got a 12x performance boost. Thanks
for your help!
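
For anyone who finds this thread later, the change amounted to roughly this
(a sketch; 512MB is just the value that worked for us):

[mysqld]
max_heap_table_size = 512M
tmp_table_size      = 512M

or, without a restart:

mysql> SET GLOBAL max_heap_table_size = 536870912;
mysql> SET GLOBAL tmp_table_size      = 536870912;

Note that both are also session-scoped, so existing connections keep their old
values until they reconnect.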

*Brad Heller *| Engineering Lead | Cloudability.com | 541-231-1514 | Skype:
brad.heller | @bradhe <http://www.twitter.com/bradhe> |
@cloudability<http://www.twitter.com/cloudability>


On Sun, Aug 11, 2013 at 1:32 PM, Johan De Meersman wrote:

> True, which is why I said I suspected a file-based sort :-) At one million
> rows, that seems to be an accurate guess, too. Still on the phone, though,
> and in bed. I'll read the thread better tomorrow, but you might get some
> benefit from cutting out the subselect if that's possible.
>
> If you have plenty of memory, have a look at the tmp_table_size and
> max_heap_table_size variables, too; those decide when an in-memory temporary
> table spills to disk.
>
>
> Johnny Withers  wrote:
>>
>> Just because it says filesort doesn't mean it'll create a file on disk.
>> Table schema and the full query would be helpful here too.
>>
>> http://www.mysqlperformanceblog.com/2009/03/05/what-does-using-filesort-mean-in-mysql/
>> On Aug 11, 2013 1:28 PM, "Brad Heller"  wrote:
>>
>>>  Yes sorry, here's the explain. It was taken from MariaDB 5.5.32. Looks like
>>>  there is a lot of filesort goin' on here. Also note that I'm only using the
>>>  first two fields of the covering index (intentionally).
>>>
>>> +----+-------------+------------+-------+------------------------------------+----------------------------+---------+-------------------+---------+------------------------------------------------------------+
>>> | id | select_type | table      | type  | possible_keys                      | key                        | key_len | ref               | rows    | Extra                                                      |
>>> +----+-------------+------------+-------+------------------------------------+----------------------------+---------+-------------------+---------+------------------------------------------------------------+
>>> |  1 | PRIMARY     | <derived2> | ALL   | NULL                               | NULL                       | NULL    | NULL              | 1004685 | Using temporary; Using filesort                            |
>>> |  2 | DERIVED     | accounts   | range | PRIMARY,unique_account_identifiers | unique_account_identifiers | 257     | NULL              |       3 | Using where; Using index; Using temporary; Using filesort  |
>>> |  2 | DERIVED     | facts      | ref   | covering                           | covering                   | 4       | facts.accounts.id |  334895 | Using where                                                |
>>> +----+-------------+------------+-------+------------------------------------+----------------------------+---------+-------------------+---------+------------------------------------------------------------+
>>>
>>>
>>>  *Brad Heller *| Engineering Lead | Cloudability.com | 541-231-1514 |
>>>  Skype:
>>>  brad.heller | @bradhe <http://www.twitter.com/bradhe> |
>>>  @cloudability<http://www.twitter.com/cloudability>
>>>
>>>
>>>  On Sun, Aug 11, 2013 at 8:45 AM, Johan De Meersman wrote:
>>>
>>>>  On my phone now, but it smells of file-based sorting, making disk access
>>>>  the bottleneck. Can you provide the explain?
>>>>
>>>>
>>>>  Brad Heller  wrote:
>>>>
>>>>>
>>>>>  Hey list, first time posting here so apologies if this is the wrong forum
>>>>>  for this but I'm really out of options on how to solve this problem!
>>>>>
>>>>>  *Short version:*
>>>>>

Re: Concurrent read performance problems

2013-08-11 Thread Brad Heller
Yes sorry, here's the explain. It was taken from MariaDB 5.5.32. Looks like
there is a lot of filesort goin' on here. Also note that I'm only using the
first two fields of the covering index (intentionally).

+----+-------------+------------+-------+------------------------------------+----------------------------+---------+-------------------+---------+------------------------------------------------------------+
| id | select_type | table      | type  | possible_keys                      | key                        | key_len | ref               | rows    | Extra                                                      |
+----+-------------+------------+-------+------------------------------------+----------------------------+---------+-------------------+---------+------------------------------------------------------------+
|  1 | PRIMARY     | <derived2> | ALL   | NULL                               | NULL                       | NULL    | NULL              | 1004685 | Using temporary; Using filesort                            |
|  2 | DERIVED     | accounts   | range | PRIMARY,unique_account_identifiers | unique_account_identifiers | 257     | NULL              |       3 | Using where; Using index; Using temporary; Using filesort  |
|  2 | DERIVED     | facts      | ref   | covering                           | covering                   | 4       | facts.accounts.id |  334895 | Using where                                                |
+----+-------------+------------+-------+------------------------------------+----------------------------+---------+-------------------+---------+------------------------------------------------------------+


*Brad Heller *| Engineering Lead | Cloudability.com | 541-231-1514 | Skype:
brad.heller | @bradhe <http://www.twitter.com/bradhe> |
@cloudability<http://www.twitter.com/cloudability>


On Sun, Aug 11, 2013 at 8:45 AM, Johan De Meersman wrote:

> On my phone now, but it smells of file-based sorting, making disk access
> the bottleneck. Can you provide the explain?
>
>
> Brad Heller  wrote:
>>
>> Hey list, first time posting here so apologies if this is the wrong forum
>> for this but I'm really out of options on how to solve this problem!
>>
>> *Short version:*
>>
>>
>> 1. High concurrent reads, performing the same well-indexed query type to
>> the same two tables.
>> 2. No additional traffic at all--just reads from these two tables. No
>> writes anywhere.
>> 3. Very fast (sub-second) when server is only servicing one request at a
>> time.
>> 4. Very slow (30s+) when server is servicing 2-3 requests at a time.
>>
>> Questions:
>>
>> 1. What resource is under contention for servicing read-only queries if you
>> have a large buffer pool and a plenty-big thread cache?
>> 2. What parameters can I tune to increase concurrent reads to these two
>> tables?
>>
>> *Long version:*
>>
>>
>> I've got a MySQL server that has only about 50 connections open to it at
>> any given time. It basically only has one OLAP query type being run against
>> it that amounts to something like this:
>>
>> SELECT (3 fields with 1 count) FROM (SELECT (3 fields with 1 aggregate)
>> FROM table INNER JOIN ... WHERE ... GROUP BY ...) dt GROUP BY ...;
>>
>> These queries are well indexed and run very well individually--sub-second,
>> usually even faster. When I run a few of these queries simultaneously (2-3
>> on my laptop, 6-7 on our production boxes) performance grinds to a halt:
>> Consistently about 30 seconds to service a query.
>>
>> Ideally, I'd like to be able to run 50+ of these queries concurrently.
>>
>> I've tried MySQL 5.5.27 and MySQL 5.6.13 and get the same results on both
>> machines. I've tried tweaking the following my.cnf parameters to be higher:
>>
>> thread_concurrency = 20
>> thread_cache_size = 340
>> innodb_buffer_pool_size=2G
>>
>> A few other my.cnf parameters that I have set:
>>
>> innodb_file_format=Barracuda
>> innodb_file_format_max=Barracuda
>> innodb_file_per_table=1
>> skip-external-locking
>> innodb_log_files_in_group=2
>> innodb_log_file_size=2000M
>> max_allowed_packet=64M
>>
>> Thanks in advance,
>>
>> Brad Heller
>>
>>
> --
> Sent from Kaiten Mail. Please excuse my brevity.
>


Concurrent read performance problems

2013-08-10 Thread Brad Heller
Hey list, first time posting here so apologies if this is the wrong forum
for this but I'm really out of options on how to solve this problem!

*Short version:*

1. High concurrent reads, performing the same well-indexed query type to
the same two tables.
2. No additional traffic at all--just reads from these two tables. No
writes anywhere.
3. Very fast (sub-second) when server is only servicing one request at a
time.
4. Very slow (30s+) when server is servicing 2-3 requests at a time.

Questions:

1. What resource is under contention for servicing read-only queries if you
have a large buffer pool and a plenty-big thread cache?
2. What parameters can I tune to increase concurrent reads to these two
tables?

*Long version:*

I've got a MySQL server that has only about 50 connections open to it at
any given time. It basically only has one OLAP query type being run against
it that amounts to something like this:

SELECT (3 fields with 1 count) FROM (SELECT (3 fields with 1 aggregate)
FROM table INNER JOIN ... WHERE ... GROUP BY ...) dt GROUP BY ...;

These queries are well indexed and run very well individually--sub-second,
usually even faster. When I run a few of these queries simultaneously (2-3
on my laptop, 6-7 on our production boxes) performance grinds to a halt:
Consistently about 30 seconds to service a query.
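
One thing I've been watching while reproducing this (a sketch; I'm assuming
the slowdown could be implicit temporary tables spilling to disk):

mysql> SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';

Comparing Created_tmp_disk_tables to Created_tmp_tables before and after a
burst of these queries should show whether the derived-table results are
going to disk.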

Ideally, I'd like to be able to run 50+ of these queries concurrently.

I've tried MySQL 5.5.27 and MySQL 5.6.13 and get the same results on both
machines. I've tried tweaking the following my.cnf parameters to be higher:

thread_concurrency = 20
thread_cache_size = 340
innodb_buffer_pool_size=2G

A few other my.cnf parameters that I have set:

innodb_file_format=Barracuda
innodb_file_format_max=Barracuda
innodb_file_per_table=1
skip-external-locking
innodb_log_files_in_group=2
innodb_log_file_size=2000M
max_allowed_packet=64M

Thanks in advance,

Brad Heller