Please provide SHOW CREATE TABLE and SHOW TABLE STATUS for each table.
It smells like there is an inconsistency between the datatype of
facts.accounts.id and that of the column it is JOINing to.

Also provide the full SELECT.
How much RAM do you have?
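
In the meantime, a quick way to check for such a mismatch (sketch only --
the facts.account_id column name below is a guess based on the EXPLAIN
earlier in this thread) is to compare the joined columns in
information_schema:

```sql
-- Mismatched types (e.g. INT vs VARCHAR) or, for string columns,
-- mismatched collations will defeat the index on the JOIN column.
SELECT TABLE_NAME, COLUMN_NAME, COLUMN_TYPE, COLLATION_NAME
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = DATABASE()
  AND ((TABLE_NAME = 'accounts' AND COLUMN_NAME = 'id')
    OR (TABLE_NAME = 'facts'    AND COLUMN_NAME = 'account_id'));
```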

> -----Original Message-----
> From: Johan De Meersman [mailto:vegiv...@tuxera.be]
> Sent: Sunday, August 11, 2013 2:16 PM
> To: Brad Heller
> Cc: Johnny Withers; MySQL General List
> Subject: Re: Concurrent read performance problems
> 
> Good to hear. A word of warning, though: make sure you don't have more
> connections allocating those buffers than your machine can handle memory-
> wise, or you'll start swapping and performance will REALLY go down the
> drain.
> 
> A query/index based solution would still be preferred. Could you for
> instance materialize that subselect and periodically refresh it? Other
> tricks may also be available.
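> 
> A minimal sketch of that materialization (the table, column, and event
> names here are invented for illustration; MySQL's event scheduler must
> be enabled with SET GLOBAL event_scheduler = ON):
> 
> ```sql
> -- One-time: precompute the inner GROUP BY into a real table.
> CREATE TABLE facts_summary (
>   account_id INT NOT NULL,
>   dimension  VARCHAR(64) NOT NULL,
>   total      BIGINT,
>   PRIMARY KEY (account_id, dimension)
> );
> INSERT INTO facts_summary
> SELECT account_id, dimension, SUM(metric)
> FROM facts GROUP BY account_id, dimension;
> 
> -- Periodic refresh, so readers never pay for the aggregation:
> CREATE EVENT refresh_facts_summary
> ON SCHEDULE EVERY 5 MINUTE
> DO REPLACE INTO facts_summary
>    SELECT account_id, dimension, SUM(metric)
>    FROM facts GROUP BY account_id, dimension;
> ```
> 
> The outer query would then read from facts_summary directly instead of
> rebuilding the derived table on every request.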
> 
> Brad Heller <b...@cloudability.com> wrote:
> >Johan, your suggestion to tweak max_heap_table_size and tmp_table_size
> >fixed the issue. Bumping them both to 512MB got our performance back
> >on par. I came up with a way to avoid the contention using a complex
> >set of temp tables, but performance was abysmal.
> >
> >By reverting to the more straightforward query with the subselect, as
> >well as tweaking max_heap_table_size and tmp_table_size, I saw no
> >resource contention causing slowdowns, and got a 12x performance
> >boost. Thanks for your help!
> >
> >*Brad Heller* | Engineering Lead | Cloudability.com | 541-231-1514 |
> >Skype: brad.heller | @bradhe <http://www.twitter.com/bradhe> |
> >@cloudability <http://www.twitter.com/cloudability>
> >
> >
> >On Sun, Aug 11, 2013 at 1:32 PM, Johan De Meersman
> ><vegiv...@tuxera.be> wrote:
> >
> >> True, which is why I said I suspected file-based sort :-) At one
> >> million rows, that seems to be an accurate guess, too. Still on the
> >> phone, though, and in bed. I'll read the thread better tomorrow, but
> >> you might get some benefit from cutting out the subselect if that's
> >> possible.
> >>
> >> If you have plenty of memory, have a look at the tmp_table_size and
> >> max_heap_table_size variables, too; those decide when the sort goes
> >> to disk.
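> >>
> >> A quick way to test that (sketch; the 512M value is just an example,
> >> and the effective in-memory limit is the lower of the two variables)
> >> is to raise them for a single session before touching my.cnf:
> >>
> >> ```sql
> >> -- Session-scoped: affects only this connection.
> >> SET SESSION tmp_table_size      = 512*1024*1024;
> >> SET SESSION max_heap_table_size = 512*1024*1024;
> >>
> >> -- Re-run the query, then check whether the temp table still spilled:
> >> SHOW SESSION STATUS LIKE 'Created_tmp%';
> >> ```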
> >>
> >>
> >> Johnny Withers <joh...@pixelated.net> wrote:
> >>>
> >>> Just because it says filesort doesn't mean it'll create a file on
> >>> disk. Table schema and full query would be helpful here too:
> >>> http://www.mysqlperformanceblog.com/2009/03/05/what-does-using-filesort-mean-in-mysql/
> >>>
> >>> On Aug 11, 2013 1:28 PM, "Brad Heller" <b...@cloudability.com> wrote:
> >>>
> >>>> Yes sorry, here's the explain. It was taken from MariaDB 5.5.32.
> >>>> Looks like there is a lot of filesort goin' on here. Also note that
> >>>> I'm only using the first two fields of the covering index
> >>>> (intentionally).
> >>>>
> >>>>
> >>>> +----+-------------+------------+-------+------------------------------------+----------------------------+---------+-------------------+---------+-----------------------------------------------------------+
> >>>> | id | select_type | table      | type  | possible_keys                      | key                        | key_len | ref               | rows    | Extra                                                     |
> >>>> +----+-------------+------------+-------+------------------------------------+----------------------------+---------+-------------------+---------+-----------------------------------------------------------+
> >>>> |  1 | PRIMARY     | <derived2> | ALL   | NULL                               | NULL                       | NULL    | NULL              | 1004685 | Using temporary; Using filesort                           |
> >>>> |  2 | DERIVED     | accounts   | range | PRIMARY,unique_account_identifiers | unique_account_identifiers | 257     | NULL              |       3 | Using where; Using index; Using temporary; Using filesort |
> >>>> |  2 | DERIVED     | facts      | ref   | covering                           | covering                   | 4       | facts.accounts.id |  334895 | Using where                                               |
> >>>> +----+-------------+------------+-------+------------------------------------+----------------------------+---------+-------------------+---------+-----------------------------------------------------------+
> >>>>
> >>>>
> >>>> *Brad Heller* | Engineering Lead | Cloudability.com | 541-231-1514 |
> >>>> Skype: brad.heller | @bradhe <http://www.twitter.com/bradhe> |
> >>>> @cloudability <http://www.twitter.com/cloudability>
> >>>>
> >>>>
> >>>> On Sun, Aug 11, 2013 at 8:45 AM, Johan De Meersman
> >>>> <vegiv...@tuxera.be> wrote:
> >>>>
> >>>>> On my phone now, but it smells of file-based sorting, making disk
> >>>>> access the bottleneck. Can you provide the explain?
> >>>>>
> >>>>>
> >>>>>  Brad Heller <b...@cloudability.com> wrote:
> >>>>>
> >>>>>>
> >>>>>> Hey list, first time posting here so apologies if this is the
> >>>>>> wrong forum for this, but I'm really out of options on how to
> >>>>>> solve this problem!
> >>>>>>
> >>>>>> *Short version:*
> >>>>>>
> >>>>>> 1. High concurrent reads, performing the same well-indexed query
> >>>>>> type to the same two tables.
> >>>>>> 2. No additional traffic at all--just reads from these two
> >>>>>> tables. No writes anywhere.
> >>>>>> 3. Very fast (sub-second) when server is only servicing one
> >>>>>> request at a time.
> >>>>>> 4. Very slow (30s+) when server is servicing 2-3 requests at a
> >>>>>> time.
> >>>>>>
> >>>>>> Questions:
> >>>>>>
> >>>>>> 1. What resource is under contention for servicing read-only
> >>>>>> queries if you have a large buffer pool and a plenty-big thread
> >>>>>> cache?
> >>>>>> 2. What parameters can I tune to increase concurrent reads to
> >>>>>> these two tables?
> >>>>>>
> >>>>>> *Long version:*
> >>>>>>
> >>>>>> I've got a MySQL server that has only about 50 connections open
> >>>>>> to it at any given time. It basically only has one OLAP query
> >>>>>> type being run against it that amounts to something like this:
> >>>>>>
> >>>>>> SELECT (3 fields with 1 count) FROM (SELECT (3 fields with 1
> >>>>>> aggregate) FROM table INNER JOIN ... WHERE ... GROUP BY ...) dt
> >>>>>> GROUP BY ...;
> >>>>>>
> >>>>>> These queries are well indexed and run very well
> >>>>>> individually--sub-second, usually even faster. When I run a few
> >>>>>> of these queries simultaneously (2-3 on my laptop, 6-7 on our
> >>>>>> production boxes) performance grinds to a halt: consistently
> >>>>>> about 30 seconds to service a query.
> >>>>>>
> >>>>>> Ideally, I'd like to be able to run 50+ of these queries
> >>>>>> concurrently.
> >>>>>>
> >>>>>> I've tried MySQL 5.5.27 and MySQL 5.6.13 and get the same
> >>>>>> results on both machines. I've tried tweaking the following
> >>>>>> my.cnf parameters to be higher:
> >>>>>>
> >>>>>>  thread_concurrency = 20
> >>>>>>  thread_cache_size = 340
> >>>>>>  innodb_buffer_pool_size=2G
> >>>>>>
> >>>>>>  A few other my.cnf parameters that I have set:
> >>>>>>
> >>>>>>  innodb_file_format=Barracuda
> >>>>>>  innodb_file_format_max=Barracuda
> >>>>>>  innodb_file_per_table=1
> >>>>>>  skip-external-locking
> >>>>>>  innodb_log_files_in_group=2
> >>>>>>  innodb_log_file_size=2000M
> >>>>>>  max_allowed_packet=64M
> >>>>>>
> >>>>>>  Thanks in advance,
> >>>>>>
> >>>>>>  Brad Heller
> >>>>>
> >>>>>
> >>>>>
> >>>>>  --
> >>>>>  Sent from Kaiten Mail. Please excuse my brevity.
> >>>>
> >>>>
> >>>>
> >>>>
> >> --
> >> Sent from Kaiten Mail. Please excuse my brevity.
> >>
> 
> --
> Sent from Kaiten Mail. Please excuse my brevity.
