OK, so here I go again making a public idiot of myself. Could it be that
the tokenizer factory is 'relatively recent', as in since 4.1?

On Mon, Oct 28, 2013 at 7:39 AM, Benson Margulies <ben...@basistech.com> wrote:

> I'm working on a tool that wants to construct analyzers 'at arm's length' --
> a bit like from a Solr schema -- so that multiple dueling analyzers could
> be in their own class loaders at one time. I want to just define a simple
> configuration of char filters, a tokenizer, and token filters. So it would
> be, well, convenient if there were a tokenizer factory at the Lucene level,
> as there is a token filter factory. I can use Solr easily enough for now,
> but I'd consider it cleaner if I could define this entirely at the Lucene
> level.
>
>
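For concreteness, here is a rough sketch of the kind of factory-driven
Analyzer the quoted request describes, written against the 4.x
analysis-common factories (TokenizerFactory, TokenFilterFactory,
CharFilterFactory). The class name ConfiguredAnalyzer, the argument layout,
and the exact forName signatures are assumptions for illustration -- the
lookup methods and constructor arguments have shifted between 4.x releases --
so treat this as a sketch, not a drop-in implementation:

import java.io.Reader;
import java.util.HashMap;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.util.CharFilterFactory;
import org.apache.lucene.analysis.util.TokenFilterFactory;
import org.apache.lucene.analysis.util.TokenizerFactory;

// Hypothetical sketch: an Analyzer assembled from analysis factories looked
// up by their SPI names, roughly what a simple configuration of char
// filters, a tokenizer, and token filters could drive.
public class ConfiguredAnalyzer extends Analyzer {

  private final CharFilterFactory[] charFilterFactories;
  private final TokenizerFactory tokenizerFactory;
  private final TokenFilterFactory[] tokenFilterFactories;

  public ConfiguredAnalyzer(String[] charFilters, String tokenizer, String[] tokenFilters) {
    charFilterFactories = new CharFilterFactory[charFilters.length];
    for (int i = 0; i < charFilters.length; i++) {
      charFilterFactories[i] = CharFilterFactory.forName(charFilters[i], args());
    }
    tokenizerFactory = TokenizerFactory.forName(tokenizer, args());
    tokenFilterFactories = new TokenFilterFactory[tokenFilters.length];
    for (int i = 0; i < tokenFilters.length; i++) {
      tokenFilterFactories[i] = TokenFilterFactory.forName(tokenFilters[i], args());
    }
  }

  // Many 4.x factories refuse to build without a luceneMatchVersion
  // argument; "4.5" here is just a placeholder for whatever you target.
  private static HashMap<String, String> args() {
    HashMap<String, String> m = new HashMap<String, String>();
    m.put("luceneMatchVersion", "4.5");
    return m;
  }

  @Override
  protected Reader initReader(String fieldName, Reader reader) {
    // Char filters wrap the raw Reader, in configuration order.
    for (CharFilterFactory f : charFilterFactories) {
      reader = f.create(reader);
    }
    return reader;
  }

  @Override
  protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
    // Tokenizer first, then the token filter chain on top of it.
    Tokenizer source = tokenizerFactory.create(reader);
    TokenStream stream = source;
    for (TokenFilterFactory f : tokenFilterFactories) {
      stream = f.create(stream);
    }
    return new TokenStreamComponents(source, stream);
  }
}

Something like new ConfiguredAnalyzer(new String[] {"htmlstrip"}, "standard",
new String[] {"lowercase"}) would then correspond to a three-line
configuration; the SPI names here follow the usual convention of the factory
class name minus its suffix, lowercased.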
