In terms of the number of processes, that is the maximum ever seen. The
servers are monitored 24/7, so they are constantly under the hawk eye of
both staff and monitoring systems.

The other memory values I was referring to are things like the open-table
cache and other buffers, as you assumed, yes. They use the InnoDB engine
for their databases.
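
For reference, a quick way to dump the settings in question is something
like the following (just a sketch; it assumes a local mysql client with
suitable credentials):

    # Dump the cache/buffer settings being discussed
    mysql -e "SHOW GLOBAL VARIABLES WHERE Variable_name IN
      ('innodb_buffer_pool_size','table_open_cache','key_buffer_size',
       'sort_buffer_size','join_buffer_size','tmp_table_size','max_connections');"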

In terms of setting up the cronjobs using ps, this has been done using
several methods, and it always comes down to only one process using all
the memory, and that is the mysql server.

CPU usage is also fairly high sometimes; however, this was found to happen
when they run large updates or reports.

The biggest issue at the moment is mysql taking all physical memory until
nothing is left, then it starts swapping as well until that is completely
used up.
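
To put numbers on that, a simple loop like this (purely an illustration)
makes the creep visible over time:

    # Log memory and swap usage every 30 seconds (values in MB)
    while true; do
        date '+%F %T'
        free -m | grep -E '^(Mem|Swap)'
        sleep 30
    done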

On 07 Sep 2016 15:47, "Johan De Meersman" <vegiv...@tuxera.be> wrote:


----- Original Message -----
> From: "Machiel Richards" <machiel.richa...@gmail.com>
> Subject: MySql Swapping issues
>
>    I had a look at the settings and the innodb buffer pool on one server
> is set to about 70% of the physical memory and the others to about 30% of
> physical memory.

Not unreasonable, especially given the memory sizes you give.
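
If you want a quick sanity check (just a sketch, assuming a local client
and credentials), compare the pool size against physical RAM:

    # Buffer pool size in GB vs. physical RAM in GB
    mysql -N -e "SELECT ROUND(@@innodb_buffer_pool_size/1024/1024/1024,1) AS bp_gb;"
    free -g | awk '/^Mem:/ {print $2 " GB physical RAM"}'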


>    All other buffer- and memory-related parameters are set to fairly low
> values, as they were recently decreased with no difference in the symptoms.

Good, although 'fairly low' is vague :-) For most purposes, there's no need
to tune the specialised buffers at all. I assume you mean myisam key cache
and the like.


>     In terms of servers and queries, the smallest server has 64GB of
> physical memory and the biggest server has 128GB of physical memory, and
> the biggest server's database size is about 600GB odd.

That's a large difference, but not necessarily a problem, as long as your
active dataset fits in the buffer pool.
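
One rough indicator of that (a sketch, not a definitive test): if
Innodb_buffer_pool_reads keeps climbing relative to
Innodb_buffer_pool_read_requests, the active set is not fitting:

    # Disk reads vs. logical read requests for the buffer pool
    mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';"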


>     I had a look at the processes running and there are at most 38
> processes running, including the replication processes.

Is that what you see whenever you look, or is it from a trending tool like
Munin or Cacti?

The former can be very deceiving, especially with connect-select-quit
applications like PHP sites. I strongly recommend setting up proper
trending if you don't have it, so you can see what's going on when you're
not looking, too - and compare to past activity.

Personally I use Munin; the standard plugins in there are a good base, but
there's a very good one at https://github.com/kjellm/munin-mysql. I have my
own fork of that, too, which contains a number of extra graphs that depend
on considerable modifications of the main module.

Cacti is pretty much just as good (and iirc the kjellm plugin is actually
based on a cacti plugin); I just prefer the way munin is managed.


>    I do not see any other hardware-related issues, and the swappiness
> setting has been configured to 1.

For DB-only servers (and really, production servers in general) I generally
opt to not have any swap at all. Once you start using it, it's a slow death
struggle anyway; better off to just have it die immediately and fix the
configuration.
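
For completeness, checking and removing it is trivial (remember to also
take the swap entry out of /etc/fstab afterwards):

    # Current swappiness setting
    cat /proc/sys/vm/swappiness
    # Turn off all swap devices immediately
    swapoff -a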


>    Any ideas , links, advice, etc... will be appreciated.

Memory creep is often hard to diagnose; set up a simple cronjob that runs
ps, sorts by memory use, and outputs the top lines to a log every half hour
or so. You can then do some sed/awk/gnuplot magic on that to see which
process keeps growing.
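
Something along these lines would do as a starting point (the path and
interval are just examples):

    # Crontab entry: every 30 minutes, log a timestamp plus the ten largest
    # processes by resident memory
    */30 * * * * (date; ps aux --sort=-rss | head -n 11) >> /var/log/mem-top.log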

If it turns out that it actually *is* the mysql server, that may be a
memory leak, but it could just as well be a maintenance schedule somewhere
that suddenly bursts a couple of dozen connections, exhausting server
memory. Pretty hard to tell you more without telemetry :-)
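
A related back-of-the-envelope check (a simplification, since MySQL rarely
allocates all of these at once) is the theoretical worst case if every
allowed connection used its per-session buffers simultaneously:

    # Global buffers + max_connections * per-connection buffers, in GB
    mysql -N -e "SELECT ROUND((@@innodb_buffer_pool_size + @@key_buffer_size
        + @@max_connections * (@@sort_buffer_size + @@join_buffer_size
        + @@read_buffer_size + @@read_rnd_buffer_size + @@tmp_table_size))
        /1024/1024/1024, 1) AS worst_case_gb;"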


--
Unhappiness is discouraged and will be corrected with kitten pictures.
