I'm working on a tool that wants to construct analyzers 'at arm's length'
-- a bit like from a Solr schema -- so that multiple dueling analyzers
could live in their own class loaders at the same time. I want to define
a simple configuration for char filters, a tokenizer, and token filters.
So it would be convenient if there were a tokenizer factory at the Lucene
level, as there is a token filter factory. I can use Solr easily enough
for now, but I'd consider it cleaner if I could define this entirely at
the Lucene level.
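
To make the idea concrete, here is a minimal sketch of the kind of
name-keyed factory registry I have in mind, using only the JDK. All of
the names here (TokenizerFactory, REGISTRY, "whitespace") are
illustrative, not existing Lucene or Solr API; a real version would
produce Lucene Tokenizer instances rather than plain string-splitting
functions.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch: a tokenizer factory registry keyed by name, so an
// analyzer can be assembled from a simple configuration without the
// caller ever referencing a concrete tokenizer class.
public class AnalyzerConfigSketch {

    // A factory takes configuration args and produces a tokenizing function.
    interface TokenizerFactory {
        Function<String, List<String>> create(Map<String, String> args);
    }

    static final Map<String, TokenizerFactory> REGISTRY = new HashMap<>();
    static {
        // "whitespace" tokenizer: split on runs of whitespace.
        REGISTRY.put("whitespace", args ->
            text -> Arrays.asList(text.trim().split("\\s+")));
    }

    public static void main(String[] args) {
        // The configuration names the tokenizer; the registry resolves it,
        // which is what lets each analyzer live behind its own class loader.
        Function<String, List<String>> tokenize =
            REGISTRY.get("whitespace").create(Map.of());
        System.out.println(tokenize.apply("hello dueling analyzers"));
        // prints [hello, dueling, analyzers]
    }
}
```

A char filter chain and token filter chain would hang off the same kind
of registry, each stage configured by its own name-plus-args entry.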
