Update on this one.
I upped table_open_cache to 30,000 to see what impact this would
have. I would have expected the memory footprint of the MariaDB
process to grow to about 30GB, but it has stayed at ~27GB (as reported
by btop). The 30GB figure came from this SQL calculation:
SELECT @@innodb_buffer_pool_size + @@key_buffer_size + @@query_cache_size +
@@tmp_table_size + @@table_open_cache * @@open_files_limit +
(@@sort_buffer_size + @@read_buffer_size + @@read_rnd_buffer_size +
@@join_buffer_size + @@thread_stack + @@binlog_cache_size) * @@max_connections
AS MaxMemoryEstimate;
That query returns 32082607104 bytes (29.88GiB if my maths is right).
The variable shows the new value in the DB when I query it, but is this
behaviour expected? Is it being limited by some other server/OS limit? If the
memory usage remains the same I can increase it without having to add
more RAM - I just don't want it to suddenly spike, run out of available
resources and come up against the OOM Killer again.
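For reference, the quick check I'm using to see how much of the enlarged table
cache is actually occupied is along the lines of the queries below. They are my
own sketch, and my reading that the cache only consumes memory as table
instances are actually opened - rather than being pre-allocated for the full
configured size - is an assumption on my part, but it would explain why the
footprint hasn't jumped straight to the estimate:

-- how many table instances are currently open vs the configured limits
SHOW GLOBAL STATUS LIKE 'Open_tables';
SHOW GLOBAL STATUS LIKE 'Opened_tables';
SELECT @@table_open_cache, @@table_definition_cache;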
Kind regards
Derick
On 21/03/2025 15:41, Gordan Bobic wrote:
On Fri, 21 Mar 2025 at 17:36, Derick Turner <[email protected]> wrote:
On 21/03/2025 14:32, Gordan Bobic via discuss wrote:
On Fri, 21 Mar 2025 at 16:19, Derick Turner <[email protected]> wrote:
Thanks Gordan,
That's a good shout. The current setting for table_open_cache is the default
of 2000. I ran a test to see how much the Opened_tables value grows in the
space of a minute, and that returned 3426, so I should definitely look at
increasing that value.
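(For completeness, the check I ran was roughly the following - a
reconstruction rather than the exact statements I typed:)

-- snapshot Opened_tables, wait a minute, then take the difference
SELECT VARIABLE_VALUE INTO @opened_before
  FROM information_schema.GLOBAL_STATUS
 WHERE VARIABLE_NAME = 'Opened_tables';
SELECT SLEEP(60);
SELECT VARIABLE_VALUE - @opened_before AS opened_per_minute
  FROM information_schema.GLOBAL_STATUS
 WHERE VARIABLE_NAME = 'Opened_tables';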
table_definition_cache is currently 103784
That sounds very high, unless you genuinely have > 100K tables across your
databases (which is itself quite a lot).
Database servers have 221 databases defined and, yes, 108080 ibd files.
That is on the high side, but you are probably OK unless that grows by
more than about 20% in the near future.
Is there a rubric for working out what these should be set to?
Ideally table_definition_cache should be set to the number of tables
you have, up to about 128K. Maybe 256K at a push. Beyond that you will
start to run into file handle limits even if you set LimitNOFILE to 1M
in your service script.
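As a quick sanity check, something like the below lines the actual table count
up against those settings (the exact query is just an off-the-cuff sketch on
my part):

-- number of base tables vs the cache sizes and the file handle limit
SELECT COUNT(*) AS base_tables FROM information_schema.TABLES WHERE TABLE_TYPE = 'BASE TABLE';
SELECT @@table_definition_cache, @@table_open_cache, @@open_files_limit;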
table_open_cache is less straightforward. Ideally you should size it so
that it never gets maxed out - usually at least equal to
table_definition_cache, often a multiple of it. If you are running into file
handle limits, then you can assess whether it is passably adequate by
checking the table cache hit rate and making sure it doesn't drop
below 99%.
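Something along these lines will give you that hit rate - the
Table_open_cache_hits/misses status counters are what I have in mind, and the
exact query is just a sketch:

-- table open cache hit rate as a percentage; aim to keep this above 99%
SELECT 100 * hits.VARIABLE_VALUE / (hits.VARIABLE_VALUE + misses.VARIABLE_VALUE)
         AS table_cache_hit_rate_pct
  FROM information_schema.GLOBAL_STATUS hits
  JOIN information_schema.GLOBAL_STATUS misses
    ON hits.VARIABLE_NAME = 'Table_open_cache_hits'
   AND misses.VARIABLE_NAME = 'Table_open_cache_misses';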
OK - sounds like I need to do some tuning calculations then, or look at the
overall infrastructure and split the databases off onto separate sets of
clusters (if we are getting into diminishing returns territory).
You are probably OK for now unless you plan to add thousands more
tables in the near future.
Decent monitoring tools like Percona's PMM or Shattered Silicon's SSM
have graphs for that. SSM also has a tuning advisor dashboard for the
most important clearly determinable tunables.
Thanks for that - I'll look at getting SSM set up.
See here: https://shatteredsilicon.net/downloads/
Feel free to drop me a message off list, happy to talk you through it.
--
Derick Turner - He/Him
_______________________________________________
discuss mailing list -- [email protected]
To unsubscribe send an email to [email protected]