Re: [HACKERS] patch: preload dictionary new version

2010-07-20 Thread Itagaki Takahiro
2010/7/14 Pavel Stehule pavel.steh...@gmail.com:
 This patch is a significantly reduced version of the original patch. It
 doesn't propose a simple allocator - it just eliminates the high memory
 usage of the ispell dictionary.

I don't think introducing new methods is a good idea. If you want a
simple allocator, the MemoryContextMethods layer seems better to me.

The original purpose of the patch -- preloading dictionaries --
also needs to be redesigned around the precompiler approach.
I'll move the proposal to Returned with Feedback status.

-- 
Itagaki Takahiro

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] patch: preload dictionary new version

2010-07-20 Thread Pavel Stehule
Hello

2010/7/20 Itagaki Takahiro itagaki.takah...@gmail.com:
 2010/7/14 Pavel Stehule pavel.steh...@gmail.com:
 This patch is a significantly reduced version of the original patch. It
 doesn't propose a simple allocator - it just eliminates the high memory
 usage of the ispell dictionary.

 I don't think introducing new methods is a good idea. If you want a
 simple allocator, the MemoryContextMethods layer seems better to me.


can you explain it, please?

 The original purpose of the patch -- preloading dictionaries --
 also needs to be redesigned around the precompiler approach.
 I'll move the proposal to Returned with Feedback status.


ok.

thank you very much

Pavel Stehule

 --
 Itagaki Takahiro




Re: [HACKERS] patch: preload dictionary new version

2010-07-14 Thread Pavel Stehule
Hello

This patch is a significantly reduced version of the original patch. It
doesn't propose a simple allocator - it just eliminates the high memory
usage of the ispell dictionary.

Without this patch the ispell dictionary takes 55MB in the tsearch2
context and 27MB in the temp context. With this patch it takes only 25MB
in the tsearch2 context and 19MB in the temp context - measured for the
Czech dictionary with UTF8 encoding. The patch is a little bit ugly -
that was why I proposed a simple allocator - but it reduces memory usage
to about 53% of the original, and startup improves from 620 to 560 ms,
~10% faster. A little bit strange is the repeated load time - it goes
down from 18ms to 5ms.

Regards

Pavel Stehule


lessmem.diff
Description: Binary data



Re: [HACKERS] patch: preload dictionary new version

2010-07-12 Thread Pavel Stehule
2010/7/12 Tom Lane t...@sss.pgh.pa.us:
 Itagaki Takahiro itagaki.takah...@gmail.com writes:
 2010/7/8 Tom Lane t...@sss.pgh.pa.us:
 For example, the dictionary-load code could automatically execute
 the precompile step if it observed that the precompiled copy of the
 dictionary was missing or had an older file timestamp than the source.

I am not sure, but the dictionary could also be recompiled when the
tsearch code itself is updated (e.g. in a minor release).


 There might be a problem with an automatic precompiler -- where should we
 save the result? OS users of postgres servers don't have write permission
 to $PGSHARE in normal cases. Instead, we could store the precompiled
 result in $PGDATA/pg_dict_cache or so.

 Yeah.  Actually we'd *have* to do something like that because $PGSHARE
 should contain only architecture-independent files, while the
 precompiled files would presumably have dependencies on endianness etc.


It is a file, and it can get broken - so we have to check its
consistency. There also has to be some registry of the dictionaries in
the cache - how will a precompiled file be identified? We cannot use
the dictionary name alone, because a cache entry is a combination of a
dictionary and stop words. Do we have to think about filename length
limits here? Would some generated name plus a new system table be
better?

Regards

Pavel Stehule




                        regards, tom lane




Re: [HACKERS] patch: preload dictionary new version

2010-07-11 Thread Itagaki Takahiro
2010/7/8 Tom Lane t...@sss.pgh.pa.us:
 For example, the dictionary-load code could automatically execute
 the precompile step if it observed that the precompiled copy of the
 dictionary was missing or had an older file timestamp than the source.

There might be a problem with an automatic precompiler -- where should we
save the result? OS users of postgres servers don't have write permission
to $PGSHARE in normal cases. Instead, we could store the precompiled
result in $PGDATA/pg_dict_cache or so.

 I like the idea of a precompiler step mainly because it still gives you
 most of the benefits of the patch on platforms without mmap.

I also like the precompiler solution. I think the most important benefit
of this approach is that we don't need to declare dictionaries to be preloaded
in configuration files; we can always use mmap() for all dictionary files.

-- 
Takahiro Itagaki



Re: [HACKERS] patch: preload dictionary new version

2010-07-11 Thread Tom Lane
Itagaki Takahiro itagaki.takah...@gmail.com writes:
 2010/7/8 Tom Lane t...@sss.pgh.pa.us:
 For example, the dictionary-load code could automatically execute
 the precompile step if it observed that the precompiled copy of the
 dictionary was missing or had an older file timestamp than the source.

 There might be a problem with an automatic precompiler -- where should we
 save the result? OS users of postgres servers don't have write permission
 to $PGSHARE in normal cases. Instead, we could store the precompiled
 result in $PGDATA/pg_dict_cache or so.

Yeah.  Actually we'd *have* to do something like that because $PGSHARE
should contain only architecture-independent files, while the
precompiled files would presumably have dependencies on endianness etc.

regards, tom lane



Re: [HACKERS] patch: preload dictionary new version

2010-07-09 Thread Pavel Stehule
2010/7/8 Tom Lane t...@sss.pgh.pa.us:
 Pavel Stehule pavel.steh...@gmail.com writes:
 2010/7/8 Robert Haas robertmh...@gmail.com:
 A precompiler can give you all the same memory management benefits.

 I use mmap(), and with mmap a precompiler is not necessary. The
 dictionary is loaded only once - in the original ispell format. I
 think it is much simpler for administration - just copy the ispell
 files. There are no possible problems with binary incompatibility, you
 don't need to solve serialisation and deserialisation, and you don't
 need to copy the TSearch ispell parser code into a client application -
 we would probably still have to support non-compiled ispell
 dictionaries. Using a precompiler raises new questions for upgrades!

 You're inventing a bunch of straw men to attack.  There's no reason that
 a precompiler approach would have to put any new requirements on the
 user.  For example, the dictionary-load code could automatically execute
 the precompile step if it observed that the precompiled copy of the
 dictionary was missing or had an older file timestamp than the source.

Uff - just safely triggering the precompiler needs a lot of low-level
code - but maybe I see it wrong; I don't work directly with files
inside pg. I just can't see it as a simple solution.


 I like the idea of a precompiler step mainly because it still gives you
 most of the benefits of the patch on platforms without mmap.  (Instead
 of mmap'ing, just open and read() the precompiled file.)  In particular,
 you would still have a creditable improvement for Windows users without
 writing any Windows-specific code.


Loading circa 10 MB takes circa 30 ms on my machine - that is better
than 90 ms, but it isn't a big win.


 I think we can divide this problem into three parts

 a) simple allocator - it can be used not only for TSearch dictionaries.

 I think that's a waste of time, frankly.  There aren't enough potential
 use cases.

 b) sharing the data - it is important for large dictionaries

 Useful but not really essential.

 c) preloading - it decreases the load time of the first TSearch query

 This is the part that is the make-or-break benefit of the patch.
 You need a solution that cuts load time even when mmap isn't
 available.


I am not sure such a thing exists, or whether it is necessary. Probably
the main problem is with the Czech language - we have a few specialities.
For the Czech environment the UNIX and Windows platforms are the most
important; I have no information about Postgres with full text search
being used on other platforms here. So the solution probably doesn't
need to be in core. I am now thinking about some pgfoundry project -
something like "ispell dictionary preload".

I can send just a simplified version without preloading and sharing,
solving only the memory issue - I think there are no differing opinions
on that part.

best regards

Pavel Stehule

                        regards, tom lane




Re: [HACKERS] patch: preload dictionary new version

2010-07-08 Thread Pavel Stehule
Hello

2010/7/8 Takahiro Itagaki itagaki.takah...@oss.ntt.co.jp:

 Pavel Stehule pavel.steh...@gmail.com wrote:

 this version has enhanced AllocSet allocator - it can use a  mmap API.

 I reviewed your patch and will report some comments. However, I don't have
 test cases for the patch because there are no large dictionaries in the
 default postgres installation. I'd like to ask you to supply test data
 for the patch.

you can use a Czech dictionary - please, download it from
http://www.pgsql.cz/data/czech.tar.gz

CREATE TEXT SEARCH DICTIONARY cspell
   (template=ispell, dictfile = czech, afffile=czech, stopwords=czech);
CREATE TEXT SEARCH CONFIGURATION cs (copy=english);
ALTER TEXT SEARCH CONFIGURATION cs
   ALTER MAPPING FOR word, asciiword WITH cspell, simple;

postgres=# select * from ts_debug('cs','Příliš žluťoučký kůň se napil žluté vody');
   alias   |    description    |   token   |  dictionaries   | dictionary |   lexemes
-----------+-------------------+-----------+-----------------+------------+-------------
 word      | Word, all letters | Příliš    | {cspell,simple} | cspell     | {příliš}
 blank     | Space symbols     |           | {}              |            |
 word      | Word, all letters | žluťoučký | {cspell,simple} | cspell     | {žluťoučký}
 blank     | Space symbols     |           | {}              |            |
 word      | Word, all letters | kůň       | {cspell,simple} | cspell     | {kůň}
 blank     | Space symbols     |           | {}              |            |
 asciiword | Word, all ASCII   | se        | {cspell,simple} | cspell     | {}
 blank     | Space symbols     |           | {}              |            |
 asciiword | Word, all ASCII   | napil     | {cspell,simple} | cspell     | {napít}
 blank     | Space symbols     |           | {}              |            |
 word      | Word, all letters | žluté     | {cspell,simple} | cspell     | {žlutý}
 blank     | Space symbols     |           | {}              |            |
 asciiword | Word, all ASCII   | vody      | {cspell,simple} | cspell     | {voda}



 This patch allocates memory with non-file-based mmap() to preload text search
 dictionary files at the server start. Note that dict files are not mmap'ed
 directly in the patch; mmap() is used for reallocatable shared memory.

 The dictionary loader is also modified a bit to use simple_alloc() instead
 of palloc() for long-lived cache. It can reduce calls of AllocSetAlloc(),
 that have some overheads to support pfree(). Since the cache is never
 released, simple_alloc() seems to have better performance than palloc().
 Note that the optimization will also work for non-preloaded dicts.

It produces a little bit better speed, but mainly a significant memory
reduction - palloc allocation is expensive because it adds 4 bytes (8
bytes) of overhead to every allocation, and that is a problem for the
thousands of small blocks a TSearch ispell dictionary uses. On 64 bit
the overhead is horrible.


 === Questions ===
 - How do backends share the dict cache? You might expect the postmaster's
  catalog to be inherited by backends via fork(), but we don't use fork()
  on Windows.


I thought about some variants:
a) using shared memory - but it needs a bigger shared memory
reservation, maybe some GUC - this variant was refused in earlier
discussion.
b) using mmap on Unix and the CreateFileMapping API on Windows - but
that is a little bit of a problem for me. I don't have development
tools for MS Windows, and I don't understand the MS Win platform :(

Magnus, can you give some tip?

Without MS Windows we wouldn't need to solve shared memory and could
use only fork. If we want to cover MS Win too, then we have to count
on some shared-memory-based solution only. But that has more
possibilities - a shared dictionary could be loaded at runtime too.

 - Why are SQL functions dpreloaddict_init() and dpreloaddict_lexize()
  defined but not used?

It is used, if I remember well. It uses the ispell dictionary API. The
usage is simplified - you parametrize the preload dictionary and then
use the preloaded dictionary, not some specific dictionary. This has
advantages and a disadvantage: + very simple configuration, + no shared
dictionary manager is necessary, - only one preloaded dictionary can be
used.



 === Design ===
 - You added 3 custom parameters (dict_preload.dictfile/afffile/stopwords),
  but I think text search configuration names is better than file names.
  However, it requires system catalog access but we cannot access any
  catalog at the moment of preloading. If config-name-based setting is
  difficult, we need to write docs about where we can get the dict names
  to be preloaded instead. (from \dFd+ ?)


Yes, that is a fair argument - it is not possible to access this data
at preload time. I would like to support preloading (and possibly
sharing of session-loaded dictionaries), because it ensures a constant
time for TSearch queries every time. Yes, some documentation and some
enhancement of the dictionary list info could be the solution.


Re: [HACKERS] patch: preload dictionary new version

2010-07-08 Thread Pavel Stehule
Hello

I found the page http://www.genesys-e.org/jwalter//mix4win.htm, which has
a section "Emulation of mmap/munmap". Could that be a solution?

Regards

Pavel Stehule


Re: [HACKERS] patch: preload dictionary new version

2010-07-08 Thread Robert Haas
On Wed, Jul 7, 2010 at 10:50 PM, Takahiro Itagaki
itagaki.takah...@oss.ntt.co.jp wrote:
 This patch allocates memory with non-file-based mmap() to preload text search
 dictionary files at the server start. Note that dict files are not mmap'ed
 directly in the patch; mmap() is used for reallocatable shared memory.

I thought someone (Tom?) had previously proposed the idea of writing a
dictionary precompiler that would produce a file which could then be
mmap()'d into the backend.  Has any thought been given to that
approach?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company



Re: [HACKERS] patch: preload dictionary new version

2010-07-08 Thread Pavel Stehule
2010/7/8 Robert Haas robertmh...@gmail.com:
 On Wed, Jul 7, 2010 at 10:50 PM, Takahiro Itagaki
 itagaki.takah...@oss.ntt.co.jp wrote:
 This patch allocates memory with non-file-based mmap() to preload text search
 dictionary files at the server start. Note that dict files are not mmap'ed
 directly in the patch; mmap() is used for reallocatable shared memory.

 I thought someone (Tom?) had previously proposed the idea of writing a
 dictionary precompiler that would produce a file which could then be
 mmap()'d into the backend.  Has any thought been given to that
 approach?

The precompiler can save only some time related to parsing, but that
isn't the main issue. Without the simple allocation the data from the
dictionary take about 55 MB, with the simple allocation about 10 MB. If
you allow 100 sessions, then these data can be repeated 100x in memory -
about 1 GB (for the Czech dictionary). I think that memory can be used
better.

Minimally you have to read those 10 MB from disc - maybe from the file
cache - which takes some time too - but it will be significantly
better than now.

Regards
Pavel Stehule


 --
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise Postgres Company




Re: [HACKERS] patch: preload dictionary new version

2010-07-08 Thread Robert Haas
On Thu, Jul 8, 2010 at 7:03 AM, Pavel Stehule pavel.steh...@gmail.com wrote:
 2010/7/8 Robert Haas robertmh...@gmail.com:
 On Wed, Jul 7, 2010 at 10:50 PM, Takahiro Itagaki
 itagaki.takah...@oss.ntt.co.jp wrote:
  This patch allocates memory with non-file-based mmap() to preload text search
  dictionary files at the server start. Note that dict files are not mmap'ed
 directly in the patch; mmap() is used for reallocatable shared memory.

 I thought someone (Tom?) had proposed idea previously of writing a
 dictionary precompiler that would produce a file which could then be
 mmap()'d into the backend.  Has any thought been given to that
 approach?

 The precompiler can save only some time related to parsing, but that
 isn't the main issue. Without the simple allocation the data from the
 dictionary take about 55 MB, with the simple allocation about 10 MB. If
 you allow 100 sessions, then these data can be repeated 100x in memory -
 about 1 GB (for the Czech dictionary). I think that memory can be used
 better.

A precompiler can give you all the same memory management benefits.

 Minimally you have to read those 10 MB from disc - maybe from the file
 cache - which takes some time too - but it will be significantly
 better than now.

If you use mmap(), you don't need to do anything of the sort.  And the
EXEC_BACKEND case doesn't require as many gymnastics, either.  And the
variable can be PGC_SIGHUP or even PGC_USERSET instead of
PGC_POSTMASTER.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company



Re: [HACKERS] patch: preload dictionary new version

2010-07-08 Thread Pavel Stehule
2010/7/8 Robert Haas robertmh...@gmail.com:
 On Thu, Jul 8, 2010 at 7:03 AM, Pavel Stehule pavel.steh...@gmail.com wrote:
 2010/7/8 Robert Haas robertmh...@gmail.com:
 On Wed, Jul 7, 2010 at 10:50 PM, Takahiro Itagaki
 itagaki.takah...@oss.ntt.co.jp wrote:
  This patch allocates memory with non-file-based mmap() to preload text search
  dictionary files at the server start. Note that dict files are not mmap'ed
 directly in the patch; mmap() is used for reallocatable shared memory.

  I thought someone (Tom?) had previously proposed the idea of writing a
 dictionary precompiler that would produce a file which could then be
 mmap()'d into the backend.  Has any thought been given to that
 approach?

  The precompiler can save only some time related to parsing, but that
  isn't the main issue. Without the simple allocation the data from the
  dictionary take about 55 MB, with the simple allocation about 10 MB. If
  you allow 100 sessions, then these data can be repeated 100x in memory -
  about 1 GB (for the Czech dictionary). I think that memory can be used
  better.

 A precompiler can give you all the same memory management benefits.

 Minimally you have to read those 10 MB from disc - maybe from the file
 cache - which takes some time too - but it will be significantly
 better than now.

 If you use mmap(), you don't need to do anything of the sort.  And the
 EXEC_BACKEND case doesn't require as many gymnastics, either.  And the
 variable can be PGC_SIGHUP or even PGC_USERSET instead of
 PGC_POSTMASTER.

I use mmap(), and with mmap a precompiler is not necessary. The
dictionary is loaded only once - in the original ispell format. I
think it is much simpler for administration - just copy the ispell
files. There are no possible problems with binary incompatibility, you
don't need to solve serialisation and deserialisation, and you don't
need to copy the TSearch ispell parser code into a client application -
we would probably still have to support non-compiled ispell
dictionaries. Using a precompiler raises new questions for upgrades!

The real problem is finding some equivalent API on MS Windows, where
mmap doesn't exist.

I think we can divide this problem into three parts

a) simple allocator - it can be used not only for TSearch dictionaries.
b) sharing the data - it is important for large dictionaries
c) preloading - it decreases the load time of the first TSearch query

Regards

Pavel Stehule




 --
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise Postgres Company




Re: [HACKERS] patch: preload dictionary new version

2010-07-08 Thread Tom Lane
Pavel Stehule pavel.steh...@gmail.com writes:
 2010/7/8 Robert Haas robertmh...@gmail.com:
 A precompiler can give you all the same memory management benefits.

 I use mmap(), and with mmap a precompiler is not necessary. The
 dictionary is loaded only once - in the original ispell format. I
 think it is much simpler for administration - just copy the ispell
 files. There are no possible problems with binary incompatibility, you
 don't need to solve serialisation and deserialisation, and you don't
 need to copy the TSearch ispell parser code into a client application -
 we would probably still have to support non-compiled ispell
 dictionaries. Using a precompiler raises new questions for upgrades!

You're inventing a bunch of straw men to attack.  There's no reason that
a precompiler approach would have to put any new requirements on the
user.  For example, the dictionary-load code could automatically execute
the precompile step if it observed that the precompiled copy of the
dictionary was missing or had an older file timestamp than the source.

I like the idea of a precompiler step mainly because it still gives you
most of the benefits of the patch on platforms without mmap.  (Instead
of mmap'ing, just open and read() the precompiled file.)  In particular,
you would still have a creditable improvement for Windows users without
writing any Windows-specific code.

 I think we can divide this problem into three parts

 a) simple allocator - it can be used not only for TSearch dictionaries.

I think that's a waste of time, frankly.  There aren't enough potential
use cases.

 b) sharing the data - it is important for large dictionaries

Useful but not really essential.

 c) preloading - it decreases the load time of the first TSearch query

This is the part that is the make-or-break benefit of the patch.
You need a solution that cuts load time even when mmap isn't
available.

regards, tom lane



Re: [HACKERS] patch: preload dictionary new version

2010-07-07 Thread Takahiro Itagaki

Pavel Stehule pavel.steh...@gmail.com wrote:

 this version has enhanced AllocSet allocator - it can use a  mmap API.

I reviewed your patch and will report some comments. However, I don't have
test cases for the patch because there are no large dictionaries in the
default postgres installation. I'd like to ask you to supply test data
for the patch.

This patch allocates memory with non-file-based mmap() to preload text search
dictionary files at the server start. Note that dict files are not mmap'ed
directly in the patch; mmap() is used for reallocatable shared memory.

The dictionary loader is also modified a bit to use simple_alloc() instead
of palloc() for long-lived cache. It can reduce calls of AllocSetAlloc(),
that have some overheads to support pfree(). Since the cache is never
released, simple_alloc() seems to have better performance than palloc().
Note that the optimization will also work for non-preloaded dicts.

=== Questions ===
- How do backends share the dict cache? You might expect the postmaster's
  catalog to be inherited by backends via fork(), but we don't use fork()
  on Windows.

- Why are SQL functions dpreloaddict_init() and dpreloaddict_lexize()
  defined but not used?

=== Design ===
- You added 3 custom parameters (dict_preload.dictfile/afffile/stopwords),
  but I think text search configuration names are better than file names.
  However, that requires system catalog access, and we cannot access any
  catalog at the moment of preloading. If a config-name-based setting is
  difficult, we need to write docs about where users can get the dict names
  to be preloaded instead. (from \dFd+ ?)

- Do we need to support multiple preloaded dicts? I think dict_preload.*
  should accept a list of items to be loaded. GUC_LIST_INPUT would help.

- The server doesn't start when I add dict_preload to
  shared_preload_libraries without setting any custom parameters:
    FATAL:  missing AffFile parameter
  But the server should start with no effect, or print a WARNING
  that no dicts were preloaded, in such a case.

- We could replace simple_alloc() with a new MemoryContextMethods
  implementation that doesn't support pfree() but has better performance.
  It doesn't look ideal to me to implement simple_alloc() on top of palloc().

=== Implementation ===
I'm sure that your patch is WIP, but I'll note some issues just in case.

- We need Makefile for contrib/dict_preload.

- mmap() is not always portable. We should check the availability
  in configure, and also have an alternative implementation for Win32.


Regards,
---
Takahiro Itagaki
NTT Open Source Software Center


