I was trying to run the Analyzer stub generation at 
http://wiki.apache.org/solr/CommitterInfo#head-c2ba467b48dcfd17c59d09a2ad70f4c4fffb4ce8

and it resulted in:
stub-factories:
     [exec] /Volumes/User/grantingersoll/projects/lucene/solr/solr-clean/src/java/org/apache/solr/analysis/CharStreamAwareCJKTokenizer.java
     [exec] /Volumes/User/grantingersoll/projects/lucene/solr/solr-clean/src/java/org/apache/solr/analysis/CharStreamAwareWhitespaceTokenizer.java
     [exec] /Volumes/User/grantingersoll/projects/lucene/java/lucene-clean/contrib/analyzers/src/java/org/apache/lucene/analysis/ar/ArabicLetterTokenizer.java
     [exec] /Volumes/User/grantingersoll/projects/lucene/java/lucene-clean/contrib/analyzers/src/java/org/apache/lucene/analysis/ar/ArabicNormalizationFilter.java
     [exec] can't stub ArabicNormalizationFilter
     [exec] /Volumes/User/grantingersoll/projects/lucene/java/lucene-clean/contrib/analyzers/src/java/org/apache/lucene/analysis/ar/ArabicStemFilter.java
     [exec] can't stub ArabicStemFilter
     [exec] /Volumes/User/grantingersoll/projects/lucene/java/lucene-clean/contrib/analyzers/src/java/org/apache/lucene/analysis/sinks/DateRecognizerSinkTokenizer.java
     [exec] /Volumes/User/grantingersoll/projects/lucene/java/lucene-clean/contrib/analyzers/src/java/org/apache/lucene/analysis/sinks/TokenRangeSinkTokenizer.java
     [exec] /Volumes/User/grantingersoll/projects/lucene/java/lucene-clean/contrib/analyzers/src/java/org/apache/lucene/analysis/sinks/TokenTypeSinkTokenizer.java
     [exec] /Volumes/User/grantingersoll/projects/lucene/java/lucene-clean/src/java/org/apache/lucene/analysis/SinkTokenizer.java
     [exec] /Volumes/User/grantingersoll/projects/lucene/java/lucene-clean/src/java/org/apache/lucene/analysis/TeeTokenFilter.java
     [exec] Can't find java files for...
     [exec] org.apache.lucene.analysis.sinks.TokenRangeSinkTokenizer
     [exec] org.apache.lucene.analysis.SinkTokenizer
     [exec] org.apache.lucene.analysis.ar.ArabicNormalizationFilter
     [exec] org.apache.lucene.analysis.ar.ArabicStemFilter
     [exec] org.apache.solr.analysis.CharStreamAwareWhitespaceTokenizer
     [exec] org.apache.solr.analysis.CharStreamAwareCJKTokenizer
     [exec] org.apache.lucene.analysis.sinks.TokenTypeSinkTokenizer
     [exec] org.apache.lucene.analysis.sinks.DateRecognizerSinkTokenizer


I don't see much in the output explaining why, so perhaps Hoss can lend some insight.

I'm mostly interested in generating the Arabic ones, but I guess I will do them by hand.

Thanks,
Grant
