Ondřej,

   By the way, have you ever considered using Redis as an in-memory
   cache database? I’ve been thinking about offloading some of the TTL
   expiry and cache management to Redis.
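
   To make that concrete, what I have in mind is letting Redis own the
   per-entry expiry, roughly along these lines with the redis-py client
   (the host, key naming and TTL values here are just illustrative, not
   anything we actually run):

   import redis

   r = redis.Redis(host="localhost", port=6379, decode_responses=True)

   def cache_answer(qname, rdata, ttl):
       # Redis expires the key on its own after `ttl` seconds (the EX option).
       r.set(f"dns:{qname}", rdata, ex=ttl)

   def lookup(qname):
       # Returns None once Redis has expired the entry.
       return r.get(f"dns:{qname}")

   cache_answer("www.example.com./A", "192.0.2.10", ttl=300)
   print(lookup("www.example.com./A"))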


   In some customer environments, the query volume is extremely high —
   we’re using Mellanox CX-6 25G interfaces, which already handle a lot
   of offloading and fair IRQ distribution at the NIC level — so I
   wonder if you ever ran into performance limitations with Redis under
   similar loads, or decided against it for architectural reasons.

   Just curious....

Thank you

Carlos Horowicz
Planisys


On 02/07/2025 06:53, Ondřej Surý wrote:
On 2. 7. 2025, at 0:14, OwN-3m-All <own3m...@gmail.com> wrote:

> I wonder if other memory issues users are complaining about are related.

I don’t know. You were the first one to actually provide a reproducer and a
usable test case. Despite your exaggeration about “countless” reports, there
were actually not that many of them.

> How many zones can a BIND instance realistically handle?

Internally, we are testing BIND 9 with 1M small zones and it works just fine.

What happened was that 9.20 introduced a new database backend called QP, which 
replaced the venerable custom red-black tree implementation we had. The side 
effect of that was a 12K memory-chunk overhead per zone. Under normal 
conditions this would not manifest, because the 12K would get filled with zone 
data; but for an almost empty zone the chunk stays mostly empty, and that is 
what blew up the memory requirements.
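
(For scale: with the 1M small zones from the test above, a fixed 12K chunk per 
zone works out to roughly 1,000,000 × 12 KB ≈ 12 GB of chunk space in the worst 
case where the chunks stay empty, which is why a fleet of tiny zones is hit so 
hard.)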

BIND 9.22 will contain an optimization that gradually increases the memory 
chunk size, which allows “auto tuning” for small zones, large zones, and the 
cache alike.
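
Conceptually, the growth policy is like an allocator that starts each zone with 
a small chunk and grows it geometrically up to a cap as data is added. A toy 
Python sketch of that idea (the sizes and the doubling factor are made up for 
illustration, not the actual BIND 9.22 values):

MIN_CHUNK = 1 * 1024   # hypothetical starting size, not the real value
MAX_CHUNK = 12 * 1024  # hypothetical cap, comparable to the old fixed 12K chunk

def next_chunk_size(current):
    # Double on each growth step, but never beyond the cap.
    return min(current * 2, MAX_CHUNK)

size = MIN_CHUNK
sizes = [size]
while size < MAX_CHUNK:
    size = next_chunk_size(size)
    sizes.append(size)

# An almost-empty zone never grows past the small first chunk, while a large
# zone or the cache quickly reaches the cap, so one policy serves both.
print(sizes)  # [1024, 2048, 4096, 8192, 12288]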

Ondrej
--
Ondřej Surý — ISC (He/Him)

My working hours and your working hours may be different. Please do not feel 
obligated to reply outside your normal working hours.
-- 
Visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe from 
this list

ISC funds the development of this software with paid support subscriptions. 
Contact us at https://www.isc.org/contact/ for more information.

