ck up over 3mil in the list.
We're currently writing about 50MB/s to that machine. Is it possible the
purge thread just...can't keep up for some reason? How can I get better
visibility into how quickly the purge thread is working vs. how many undo
entries are being put in the list?
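For the visibility question, one common approach is to sample the "History list length" line from SHOW ENGINE INNODB STATUS over time: a steadily climbing number means the purge thread is falling behind the rate of new undo entries. A minimal parsing sketch (the helper name and sample text are illustrative, not from the thread):

```python
import re

def history_list_length(status_text: str) -> int:
    """Extract the purge backlog ("History list length") from the
    text output of SHOW ENGINE INNODB STATUS."""
    m = re.search(r"History list length (\d+)", status_text)
    if m is None:
        raise ValueError("History list length not found in status output")
    return int(m.group(1))

# Abbreviated example of the TRANSACTIONS section of the status output:
sample = """
------------
TRANSACTIONS
------------
Trx id counter 123456
Purge done for trx's n:o < 123000 undo n:o < 0
History list length 3000000
"""

print(history_list_length(sample))  # → 3000000
```

Graphing this value every few seconds alongside write throughput makes "can the purge thread keep up?" directly observable.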
Good point. We'll try that out.
I noticed our ibdata file is gigantic now, likely due to the alter table
migration we ran. What's the relationship here, do you think?
*Brad Heller *| Director of Engineering | Cloudability.com | 541-231-1514 |
Skype: brad.heller | @bradhe <http://www.twitter.com/bradhe> | @cloudability
<http://www.twitter.com/cloudability>
things can I look at to figure out how to increase bandwidth for
the purge thread?
Thanks,
We're hiring! https://cloudability.com/jobs
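On increasing purge bandwidth: the relevant knobs in this era of MySQL are the purge-thread count and the purge-lag throttle. A hedged my.cnf sketch (variable names are from the MySQL 5.5/5.6 manuals; check them against your actual server version, and note the values below are illustrative, not tuned):

```ini
# Sketch only - verify against your MySQL version's documentation.
innodb_purge_threads=1         # 5.5: 0 or 1 (dedicated purge thread); 5.6+ allows more
innodb_max_purge_lag=1000000   # throttle incoming DML once the history list exceeds this
```

innodb_max_purge_lag trades write latency for a bounded undo backlog, which may be the right trade at a sustained 50MB/s.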
ally broken this machine's InnoDB ibdata file/data dictionary? All
the contention comes out of the dictionary, but I'd expect the optimize to
re-write the dictionary entries...
defaults for table_definition_cache and table_open_cache (400
each).
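If the 400-entry defaults are too small for the number of tables here, raising both caches is the usual first step. An illustrative my.cnf sketch (sizes are placeholders, not recommendations; confirm the caches are actually the bottleneck by watching Opened_tables and Opened_table_definitions in SHOW GLOBAL STATUS before and after):

```ini
# Illustrative sizes only; if Opened_tables keeps climbing under steady
# load, the open-table cache is churning and these are too small.
table_definition_cache=2000
table_open_cache=2000
```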
file is still gigantic (56GB)?
's never
been a problem until recently. I upgraded the IO subsystem, and our
statistics indicate that it's not maxing out IO (at least IOPS).
This is problematic because the ORM we're using relies on that to figure out
the structure of our DB...
-forward query with the subselect, as well
as tweaking max_heap_table_size and tmp_table_size, I saw no resource
contention causing slowdowns, and a 12x performance boost. Thanks
for your help!
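One note on the tmp_table_size/max_heap_table_size tweak: MySQL caps an in-memory temporary table at the smaller of the two values, so they generally need to be raised together. An illustrative fragment (sizes are placeholders):

```ini
# An internal temp table spills to disk once it exceeds
# min(tmp_table_size, max_heap_table_size), so raise both in tandem.
tmp_table_size=256M
max_heap_table_size=256M
```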
[EXPLAIN output truncated in the archive; only "rows: 5, Extra: Using where" survives]
e=1
skip-external-locking
innodb_log_files_in_group=2
innodb_log_file_size=2000M
max_allowed_packet=64M
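For reference, the redo-log settings in the fragment above imply the total circular redo capacity below (simple arithmetic, nothing version-specific):

```python
# Total redo log capacity = innodb_log_files_in_group * innodb_log_file_size.
files_in_group = 2     # innodb_log_files_in_group=2
file_size_mb = 2000    # innodb_log_file_size=2000M

total_redo_mb = files_in_group * file_size_mb
print(total_redo_mb)  # → 4000
```

4000M of redo is generous for most workloads of this era; the trade-off is longer crash recovery when the logs are large.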
Thanks in advance,
Brad Heller