We had another event today.
Everything went from fine (99.9% open table cache hit rate, InnoDB buffer
pool healthy at its 22GB size) to a 15% open table cache hit rate with
0 file opens and 3.11 misses, and an InnoDB buffer pool size of 475MB.
The graphs on SSM were interesting (and are where I got that
information).
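(For anyone cross-checking: the hit percentages SSM plots should correspond
to the standard table cache counters; a sketch of the calculation, assuming
the stock MariaDB status variable names:)
```
-- table_open_cache hit rate from the standard status counters;
-- a sketch only, using Table_open_cache_hits / Table_open_cache_misses
SELECT 100 * h.VARIABLE_VALUE / (h.VARIABLE_VALUE + m.VARIABLE_VALUE)
           AS table_open_cache_hit_pct
  FROM information_schema.GLOBAL_STATUS h
  JOIN information_schema.GLOBAL_STATUS m
 WHERE h.VARIABLE_NAME = 'Table_open_cache_hits'
   AND m.VARIABLE_NAME = 'Table_open_cache_misses';
```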
The only unusual entry in the error log was:
2025-03-27 17:37:56 3194063 [Warning] InnoDB: A long wait (152 seconds)
was observed for dict_sys.latch
(17:35 was when SSM showed everything nose-diving.)
The wait time kept growing over the next few minutes until:
2025-03-27 17:41:17 3193777 [Warning] InnoDB: A long wait (354 seconds)
was observed for dict_sys.latch
I'd already switched our webservers away from the stricken DB server, but
everything freed up on its own after that last error log entry.
What would be causing the dict_sys.latch issue? What can be done to fix it?
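(For reference, the usual first things to capture during such a stall,
before failing over, are the running statements and the InnoDB monitor
output; dict_sys.latch holders tend to be DDL, ANALYZE, or statistics
threads, so they usually show up in one of these:)
```
-- worth capturing while the stall is still in progress
SHOW FULL PROCESSLIST;
SHOW ENGINE INNODB STATUS\G
```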
Kind regards
Derick
On 24/03/2025 18:20, Gordan Bobic via discuss wrote:
On Mon, 24 Mar 2025 at 20:10, Derick Turner <[email protected]> wrote:
On 24/03/2025 18:00, Gordan Bobic wrote:
On Mon, 24 Mar 2025, 19:54 Derick Turner, <[email protected]> wrote:
Update on this one.
I upped table_open_cache to 30,000 to see what impact this would have. I
would have expected the memory footprint of the MariaDB process to grow to
about 30GB, but it has stayed at ~27GB (as reported by btop). That
expectation came from this SQL calculation:
SELECT @@innodb_buffer_pool_size + @@key_buffer_size + @@query_cache_size +
@@tmp_table_size + @@table_open_cache * @@open_files_limit +
(@@sort_buffer_size + @@read_buffer_size + @@read_rnd_buffer_size +
@@join_buffer_size + @@thread_stack + @@binlog_cache_size) * @@max_connections
AS MaxMemoryEstimate;
That makes no sense: a file handle is about 1KB, so 30K of table_open_cache
won't add more than ~30MB.
Ah - that is excellent news. I was worried that a large value would
absolutely kill the memory. So there's plenty of headroom to increase it a
long way.
Value for that returns 32082607104 (29.88GB if my maths is right).
There is a lot wrong with that equation.
Anything quick and dirty you would recommend instead? (And yes, I still
need to set up the proper monitoring tool :)
https://shatteredsilicon.net/mariadb-performance-tuning-and-mysql-performance-tuning/
```
SELECT (
    (
        @@binlog_file_cache_size +
        @@innodb_buffer_pool_size +
        @@innodb_log_buffer_size +
        @@key_buffer_size +
        @@query_cache_size +
        ( @@max_connections *
            ( @@binlog_cache_size +
              @@binlog_stmt_cache_size +
              @@bulk_insert_buffer_size +
              @@join_buffer_size +
              @@net_buffer_length +
              @@read_buffer_size +
              @@read_rnd_buffer_size +
              @@sort_buffer_size +
              @@thread_stack
            )
        ) +
        ( @@slave_parallel_threads * @@slave_parallel_max_queued ) +
        ( @@open_files_limit * 1024 )
    ) / 1024 / 1024 / 1024
) AS max_memory_GB;
```
All file handles have to fit within open_files_limit, so there is
no need to add table_open_cache, max_connections,
table_definition_cache, etc., separately.
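(A quick way to sanity-check that bound against reality, using the standard
SHOW commands: Open_files counts handles currently held, and Opened_tables
shows how much the table cache is churning:)
```
-- compare the configured ceiling with actual handle usage
SHOW GLOBAL VARIABLES LIKE 'open_files_limit';
SHOW GLOBAL STATUS LIKE 'Open_files';
SHOW GLOBAL STATUS LIKE 'Opened_tables';
```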
tmp_table_size is per temporary table, and there is no hard limit on
how many of those could be created by a query or how many can be
active in a session, so there is no reasonable way to account for it
mechanistically.
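(What you can do instead is watch how many temporary tables are actually
being created, and how many spill to disk, via the standard counters:)
```
-- observed temp table behaviour rather than a theoretical bound
SHOW GLOBAL STATUS LIKE 'Created_tmp_tables';
SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';
```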
max_allowed_packet could in theory be reached by each connection if you
are sending queries large enough to max it out, but that is absurdly
unlikely unless all of your threads are importing mysqldump-ed tables
with massive multi-row INSERTs.
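(For completeness, that theoretical worst case is just the following; in
practice per-connection packet buffers start at net_buffer_length and only
grow on demand:)
```
-- absurd worst case: every connection holding a maximal packet at once
SELECT @@max_connections * @@max_allowed_packet / 1024 / 1024 / 1024
       AS worst_case_packet_GB;
```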
--
Derick Turner - He/Him