Hello Grant,

There are two ways to implement this: one is payloads, and the other is
multiple tokens at the same position.
Each of them can be useful; let me explain the way I think they can be used.
Payloads: every token carries extra information that can be used during
processing. For example, if I can add the part of speech, then I can develop
tokenizers that take the POS into account (I can generate bigrams like Noun
Adjective or Noun prep Noun, or I can have a better stopwords algorithm...).
A sketch of how the POS could be attached is below.
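
A minimal sketch, assuming the current Lucene attribute API; the class name
and lookupPos() are hypothetical stand-ins for whatever POS tagger is used:

import java.io.IOException;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.PayloadAttribute;
import org.apache.lucene.util.BytesRef;

// Attaches a part-of-speech tag to every token as a payload.
public final class PosPayloadFilter extends TokenFilter {
  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
  private final PayloadAttribute payloadAtt = addAttribute(PayloadAttribute.class);

  public PosPayloadFilter(TokenStream input) {
    super(input);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (!input.incrementToken()) {
      return false;
    }
    // lookupPos() is hypothetical; a real filter would call a POS tagger here.
    payloadAtt.setPayload(new BytesRef(lookupPos(termAtt.toString())));
    return true;
  }

  private String lookupPos(String term) {
    return "noun"; // placeholder
  }
}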

Multiple tokens in one position: if I can have different tokens at the same
place, I can index different kinds of information, like "was #verb _be", so I
can search for "you _be #adjective" to find all the sentences that talk about
"you", for example "you were clever", "you are tall", ... (a query sketch is
below).


I have not understood how the DelimitedPayloadTokenFilterFactory is supposed
to work in Solr; what is the input format?

So I was thinking of generating an XML where, for each token, a single string
is generated, like "was#verb#be", and then a token filter splits each
whitespace-separated string by "#" (in this case into three words), adds the
marker character that allows searching for the right semantic info, and gives
them all the same position increment. Of course, the full processing chain
must be aware of this; a sketch of such a filter follows.
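
A minimal sketch, under the assumptions above ("#" as the input delimiter,
"#" marking the POS and "_" marking the lemma); the class name and marker
conventions are mine, this is not an existing Solr filter:

import java.io.IOException;
import java.util.ArrayDeque;
import java.util.Deque;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;

// Splits tokens like "was#verb#be" into "was", "#verb" and "_be",
// emitting the annotations at the same position as the surface form.
public final class HashSplitTokenFilter extends TokenFilter {
  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
  private final PositionIncrementAttribute posIncrAtt =
      addAttribute(PositionIncrementAttribute.class);
  private final Deque<String> pending = new ArrayDeque<>();
  private State savedState;

  public HashSplitTokenFilter(TokenStream input) {
    super(input);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (!pending.isEmpty()) {
      restoreState(savedState);            // reuse the word's offsets
      termAtt.setEmpty().append(pending.pop());
      posIncrAtt.setPositionIncrement(0);  // stack at the same position
      return true;
    }
    if (!input.incrementToken()) {
      return false;
    }
    String[] parts = termAtt.toString().split("#");
    if (parts.length == 3) {               // "word#pos#lemma"
      termAtt.setEmpty().append(parts[0]); // surface form: "was"
      pending.push("_" + parts[2]);        // lemma: "_be"
      pending.push("#" + parts[1]);        // POS: "#verb"
      savedState = captureState();
    }
    return true;
  }

  @Override
  public void reset() throws IOException {
    super.reset();
    pending.clear();
    savedState = null;
  }
}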
But I must also think about multiword tokens.


Grant Ingersoll wrote:
> 
> 
> On Jul 20, 2009, at 6:43 AM, JCodina wrote:
> 
>> D: Break things down. The CAS would only produce XML that Solr can
>> process. Then different Tokenizers can be used to deal with the data in
>> the CAS. The main point is that the XML has the doc and field labels of
>> Solr.
> 
> I just committed the DelimitedPayloadTokenFilterFactory, I suspect  
> this is along the lines of what you are thinking, but I haven't done  
> all that much with UIMA.
> 
> I also suspect the Tee/Sink capabilities of Lucene could be helpful,  
> but they aren't available in Solr yet.
> 
> 
> 
> 
>> E: The set of capabilities to process the XML is defined in XML, similar
>> to Lucas to define the output, and in the Solr schema to define how this
>> is processed.
>>
>>
>> I want to use it in order to index something that is common, but I can't
>> get any tool to do that with Solr: indexing a word and coding at the same
>> position the syntactic and semantic information. I know that in Lucene
>> this is evolving and it will be possible to include metadata, but for the
>> moment
> 
> What does Lucas do with Lucene?  Is it putting multiple tokens at the  
> same position or using Payloads?
> 
> --------------------------
> Grant Ingersoll
> http://www.lucidimagination.com/
> 
> Search the Lucene ecosystem (Lucene/Solr/Nutch/Mahout/Tika/Droids)  
> using Solr/Lucene:
> http://www.lucidimagination.com/search
> 
> 
> 
