Hi,

On 2018-02-07 19:28:29 +0300, Arthur Zakirov wrote:
> +     {
> +             {"max_shared_dictionaries_size", PGC_POSTMASTER, RESOURCES_MEM,
> +                     gettext_noop("Sets the maximum size of all text search dictionaries loaded into shared memory."),
> +                     gettext_noop("Currently controls only loading of Ispell dictionaries. "
> +                                              "If total size of simultaneously loaded dictionaries "
> +                                              "reaches the maximum allowed size then a new dictionary "
> +                                              "will be loaded into local memory of a backend."),
> +                     GUC_UNIT_KB,
> +             },
> +             &max_shared_dictionaries_size,
> +             100 * 1024, 0, MAX_KILOBYTES,
> +             NULL, NULL, NULL
> +     },

So this uses shared memory, allocated at server start?  That doesn't
seem right. Wouldn't it make more sense to have a
'num_shared_dictionaries' GUC, and then allocate them with dsm? Or even
better, not have any such limit and use a dshash table to point to the
individual loaded tables?
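
Roughly what I have in mind, as an untested sketch: a dshash table in
DSA memory keyed by the dictionary's OID. Entry layout, function names
and the tranche id below are all made up, just to illustrate the shape:

#include "postgres.h"
#include "lib/dshash.h"
#include "storage/lwlock.h"
#include "utils/dsa.h"

/* One entry per loaded dictionary, keyed by its pg_ts_dict OID. */
typedef struct SharedDictEntry
{
	Oid			dictid;			/* hash key */
	dsa_pointer body;			/* loaded dictionary in DSA memory */
} SharedDictEntry;

static const dshash_parameters dict_table_params = {
	sizeof(Oid),
	sizeof(SharedDictEntry),
	dshash_memcmp,
	dshash_memhash,
	LWTRANCHE_FIRST_USER_DEFINED	/* placeholder tranche id */
};

/*
 * Find a dictionary in the shared table, inserting a placeholder
 * entry if it isn't there yet.  The entry comes back exclusively
 * locked; if !*found the caller loads the dictionary into DSA memory
 * and fills in body, then calls dshash_release_lock().
 */
static SharedDictEntry *
lookup_shared_dict(dshash_table *dicts, Oid dictid, bool *found)
{
	SharedDictEntry *entry;

	entry = dshash_find_or_insert(dicts, &dictid, found);
	if (!*found)
		entry->body = InvalidDsaPointer;
	return entry;
}

That way there's no fixed limit and no postmaster-time allocation; the
table (created with dshash_create() over a dsa_area) just grows as
dictionaries get loaded.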

Is there any chance we can instead convert dictionaries into a form
we can just mmap() into memory?  That'd scale a lot higher and more
dynamically?
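
I.e. something along these lines, assuming the dictionary were
preprocessed into a flat, pointer-free on-disk format (the path,
format and function name here are hypothetical):

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/*
 * Map a preprocessed dictionary file read-only.  Every backend that
 * maps the same file shares the kernel's page cache pages, so memory
 * use stays bounded by one copy regardless of the number of backends.
 */
static void *
map_dictionary(const char *path, size_t *size)
{
	int			fd;
	struct stat st;
	void	   *base;

	fd = open(path, O_RDONLY);
	if (fd < 0)
		return NULL;
	if (fstat(fd, &st) < 0)
	{
		close(fd);
		return NULL;
	}
	base = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
	close(fd);					/* the mapping survives the close */
	if (base == MAP_FAILED)
		return NULL;
	*size = st.st_size;
	return base;
}

The pages would get faulted in on demand and evicted under memory
pressure, rather than pinned in shared memory for the server's
lifetime.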

Regards,

Andres
