This approach comes to mind. You could model your semantic tags as
tokens and index them at the same positions as the words or phrases to
which they apply. This is particularly easy if you can integrate your
taggers with your Analyzer. You would probably want to create one or
more new Query subclasses to facilitate certain types of matching,
making it easy to associate terms/phrases with different tags (e.g.,
OverlappingQuery). This approach would support generating queries
that are tag-dependent, but would not directly help with using tags in a
ranking algorithm for tag-independent queries. As an off-hand thought,
you might be able to extend the idea to support this by naming your tags
something like TERM_TAG where TERM is the term they apply to (best if
the character used for '_' cannot occur in any term). Then something
like a TaggedTermQuery could easily find the tags relevant to a term in
the query and iterate their docs/positions in parallel with those of the
term (rougly equilvaent to OverlappingQuery(term, PrefixQuery(term_*))).

Top-of-mind thoughts,

Chuck


eks dev wrote on 06/01/2006 12:10 AM:
> We have faced the following use case:
>
> In order to optimize performance and, more importantly, the quality of 
> search results, we are forced to attach more attributes to particular words 
> (Terms). Generic attributes like TF and IDF are useful for modelling our 
> "similarity" only up to a point. 
>
> Examples:
> 1. Is a Term a first or last name? (E.g., we have a comprehensive list of 
> such words.) This enables us to make smarter (faster and better) queries in 
> case someone has multiple first names; it influences ranking...
> 2. The agreement weight and disagreement weight of some words are modelled 
> differently. 
> 3. Semantic classes of words influence ranking (whether something is a verb 
> or a noun changes search strategy and ranking radically).
>
> On top of that, we can afford to load all terms into memory, in order to 
> allow fast string distance calculations and some limited pattern matching 
> using some strange tries. 
>
> Today, we solve these things by implementing totally redundant data 
> structures that keep some kind of map Term->ValuesObject, which is redundant 
> to Lucene's lexicon storage. Instead of "one access gets all", we access 
> terms twice, via two different access paths: once using our dictionary and 
> a second time implicitly via a Query or so... So we introduce 
> performance/memory penalties. (Please do not forget, we need to access a 
> copy of the analyzed document in order to attach the "additional info" to 
> Terms.)
>
> I guess we are not the only ones to face such a case, as an increase in 
> precision beyond TF/IDF can only be achieved by introducing some "domain 
> semantics" where available. For this, "attaching" domain-specific info to a 
> Term would be a perfect solution. Also, enabling flexible implementations of 
> lexicon access could give us some flexibility (e.g., the implementation in 
> mg4j goes in that direction).
>
> Could somebody imagine a 2.x version of Lucene having some interface, with a 
> clear contract, that we could implement to plug in our own lexicon access? 
>
> Or even better, some hints on how I can do it today :)
>
>
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
>
>   
