On Mon, 24 Mar 2025 at 20:10, Derick Turner <[email protected]> wrote:
>
> On 24/03/2025 18:00, Gordan Bobic wrote:
>
>
>
> On Mon, 24 Mar 2025, 19:54 Derick Turner, <[email protected]> wrote:
>>
>> Update on this one.
>>
>> I upped table_open_cache to 30,000 to see what impact this would have.
>> I would have expected the memory footprint of the MariaDB process to grow
>> to about 30GB, but it has stayed at ~27GB (as reported by btop). This was
>> from the SQL calculation:
>>
>> SELECT @@innodb_buffer_pool_size + @@key_buffer_size + @@query_cache_size + 
>> @@tmp_table_size + @@table_open_cache * @@open_files_limit + 
>> (@@sort_buffer_size + @@read_buffer_size + @@read_rnd_buffer_size + 
>> @@join_buffer_size + @@thread_stack + @@binlog_cache_size) * 
>> @@max_connections AS MaxMemoryEstimate;
>
>
> That makes no sense: a file handle is about 1KB, so a table_open_cache of
> 30K won't add more than ~30MB.
>
> Ah - that is excellent news.  I worried that a large value would absolutely 
> kill the memory.  So, plenty of space to increase this a long way.
>>
>> Value for that returns 32082607104 (29.88 GB if my maths is right).
>
>
> There is a lot wrong with that equation.
>
> Anything quick and dirty you would recommend instead? (And yes, I still
> need to set up the proper monitoring tool :)

https://shatteredsilicon.net/mariadb-performance-tuning-and-mysql-performance-tuning/

```
SELECT (
           (
               @@binlog_file_cache_size +
               @@innodb_buffer_pool_size +
               @@innodb_log_buffer_size +
               @@key_buffer_size +
               @@query_cache_size +
               ( @@max_connections *
                   ( @@binlog_cache_size +
                     @@binlog_stmt_cache_size +
                     @@bulk_insert_buffer_size +
                     @@join_buffer_size +
                     @@net_buffer_length +
                     @@read_buffer_size +
                     @@read_rnd_buffer_size +
                     @@sort_buffer_size +
                     @@thread_stack
                   )
               ) +
               ( @@slave_parallel_threads *
                 ( @@slave_parallel_max_queued )
               ) +
               ( @@open_files_limit * 1024 )
           ) / 1024 / 1024 / 1024) AS max_memory_GB;
```

All file handles have to fit within the open_files_limit, so there is
no need to add table_open_cache, max_connections,
table_definition_cache, etc, separately.
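If you want to sanity-check the arithmetic outside the server, the same formula is easy to mirror in a few lines of Python. The values below are placeholder assumptions, not recommendations; pull the real ones from SHOW GLOBAL VARIABLES on your own server.

```python
# Rough MariaDB max-memory estimate, mirroring the SQL query above.
# All values are illustrative assumptions; substitute your server's
# actual settings from SHOW GLOBAL VARIABLES.
GB = 1024 ** 3

globals_bytes = {
    "binlog_file_cache_size": 16384,
    "innodb_buffer_pool_size": 24 * GB,
    "innodb_log_buffer_size": 16 * 1024 ** 2,
    "key_buffer_size": 128 * 1024 ** 2,
    "query_cache_size": 0,
}
per_connection_bytes = {
    "binlog_cache_size": 32768,
    "binlog_stmt_cache_size": 32768,
    "bulk_insert_buffer_size": 8 * 1024 ** 2,
    "join_buffer_size": 256 * 1024,
    "net_buffer_length": 16384,
    "read_buffer_size": 128 * 1024,
    "read_rnd_buffer_size": 256 * 1024,
    "sort_buffer_size": 2 * 1024 ** 2,
    "thread_stack": 292 * 1024,
}
max_connections = 500
slave_parallel_threads = 0
slave_parallel_max_queued = 128 * 1024
open_files_limit = 32768  # each file handle costs roughly 1 KiB

total = (
    sum(globals_bytes.values())
    + max_connections * sum(per_connection_bytes.values())
    + slave_parallel_threads * slave_parallel_max_queued
    + open_files_limit * 1024
)
print(f"max memory estimate: {total / GB:.2f} GB")
```

With these sample numbers the estimate lands just under 30GB, dominated by the buffer pool and the per-connection buffers; the open_files_limit term contributes only ~32MB, which is why a big table_open_cache barely moves the needle.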

tmp_table_size is a per-temporary-table limit, and there is no hard cap on
how many temporary tables a single query can create or how many can be
active in a session, so there is no reasonable way to account for it
mechanistically.

max_allowed_packet could in theory be reached by each connection if you
are sending queries large enough to max it out, but that is absurdly
unlikely unless all of your threads are importing mysqldump-ed tables
with massive multi-row INSERTs.
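To see why that term is left out, here is the worst-case arithmetic; the max_allowed_packet and max_connections values are illustrative assumptions, not anyone's actual config.

```python
# Worst case: every connection simultaneously grows its packet buffer
# to max_allowed_packet. Both values below are assumptions.
max_allowed_packet = 64 * 1024 ** 2   # 64 MiB
max_connections = 500

worst_case = max_allowed_packet * max_connections
print(f"theoretical packet-buffer ceiling: {worst_case / 1024**3:.2f} GiB")
# In practice each connection holds only net_buffer_length (~16 KiB)
# until a large packet forces the buffer to grow, so real usage is a
# tiny fraction of this ceiling.
```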


-- 
Gordan Bobic
Database Specialist, Shattered Silicon Ltd.
https://shatteredsilicon.net
_______________________________________________
discuss mailing list -- [email protected]
To unsubscribe send an email to [email protected]
