Re: BlockJoinQuery is missing in lucene-4.1.0 .

2014-01-30 Thread Michael McCandless
You are passing ScoreMode.NONE right now, when you create the
ToParentBlockJoinQuery; have a look at the javadocs for the other
options?

You could normalize all scores by the maxScore, if you must produce a
percentage?

Mike McCandless

http://blog.mikemccandless.com
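
A minimal sketch of that suggestion, reusing childQuery, parentfilter and
indexsearcher from the code quoted below (ScoreMode.Avg is only one of the
non-None options; the usual org.apache.lucene.search.join and
org.apache.lucene.search.grouping imports are assumed):

    // Use a scoring ScoreMode so child matches produce a score, then
    // normalize by the largest observed score to report a percentage.
    ScoreMode scoremode = ScoreMode.Avg;
    ToParentBlockJoinQuery productitemQuery =
        new ToParentBlockJoinQuery(childQuery, parentfilter, scoremode);
    ToParentBlockJoinCollector c = new ToParentBlockJoinCollector(
        Sort.RELEVANCE, 10 /* numHits */, true /* trackScores */, true /* trackMaxScore */);
    indexsearcher.search(productitemQuery, c);
    TopGroups<Integer> hits =
        c.getTopGroups(productitemQuery, Sort.RELEVANCE, 0, 10, 0, true);
    if (hits != null) {
      // Find the largest child-hit score seen, then print each hit as a percentage of it.
      float max = 0f;
      for (GroupDocs<Integer> group : hits.groups) {
        for (ScoreDoc sd : group.scoreDocs) {
          max = Math.max(max, sd.score);
        }
      }
      for (GroupDocs<Integer> group : hits.groups) {
        for (ScoreDoc sd : group.scoreDocs) {
          System.out.println("doc=" + sd.doc + " -> " + (100f * sd.score / max) + "%");
        }
      }
    }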


On Thu, Jan 30, 2014 at 7:31 AM, Priyanka Tufchi
 wrote:
> Hello Mike
>
> Thanks for the reply. Now we are able to get TopGroups, but we are
> still not able to get the score. However, we did get the matched parent
> and child through the following code. We need the score for ranking in
> our application.
>
>   Document childDoc = indexsearcher.doc(group.scoreDocs[0].doc);
>   Document parentDoc = indexsearcher.doc(group.groupValue);
>
> And the second question is: how does Lucene score? We have used it
> before, but the score is not between 0 and 1 (for example 16.0 or 1.4).
> If I want to express it as a percentage, how would that work when the
> score looks like this?
>
> On Thu, Jan 30, 2014 at 3:27 AM, Michael McCandless
>  wrote:
>> You should not use TextField.TYPE_STORED to index your docType field:
>> that field type runs the analyzer.  I'm not sure that matters in your
>> case, but that's deadly in general.  Use StringField instead (it
>> indexes the provided text as a single token).
>>
>> Likewise for color, size fields.
>>
>> Try running your parentQuery as a search itself and confirm you see
>> totalHits=2?  This will tell you if something is wrong in how you
>> indexed the docType field, or created the parent query/filter.
>>
>> Also try running your childQuery separately and confirm you get
>> non-zero totalHits.
>>
>> Hmm, this code never calls indexsearcher.search on the query?
>>
>> Mike McCandless
>>
>> http://blog.mikemccandless.com
>>
>>
>> On Thu, Jan 30, 2014 at 5:41 AM, Priyanka Tufchi
>>  wrote:
>>> Hello Michael,
>>>
>>> Following is the code. This is the sample with which we are trying to
>>> get the hits. Please guide us.
>>>
>>> 
>>>
>>> public void newTry() throws IOException
>>> {
>>> StandardAnalyzer analyzer = new StandardAnalyzer
>>>
>>> (Version.LUCENE_41);
>>> // 1. create the index
>>> Directory index = new RAMDirectory();
>>> IndexWriterConfig config = new IndexWriterConfig
>>>
>>> (Version.LUCENE_41,
>>> analyzer);
>>>
>>> IndexWriter w = new IndexWriter(index, config);
>>> List documents=new ArrayList();
>>> documents.add(createProductItem("red", "s", "999"));
>>> documents.add(createProductItem("red", "m", "1000"));
>>> documents.add(createProductItem("red", "l", "2000"));
>>> documents.add(createProduct("Polo Shirt", ".Made Of 100%
>>>
>>> cotton"));
>>> w.addDocuments(documents);
>>> documents.clear();
>>> documents.add(createProductItem("light blue", "s", "1000"));
>>> documents.add(createProductItem("blue", "s", "1900"));
>>> documents.add(createProductItem("dark blue", "s", "1999"));
>>> documents.add(createProductItem("light blue", "m", "2000"));
>>> documents.add(createProductItem("blue", "m", "2090"));
>>> documents.add(createProductItem("dark blue", "m", "2099"));
>>> documents.add(createProduct("white color", "...stripe pattern"));
>>> w.addDocuments(documents);
>>> IndexReader indexreader=DirectoryReader.open(w, false);
>>> IndexSearcher indexsearcher=new IndexSearcher(indexreader);
>>> Query parentQuery= new TermQuery(new Term("docType", "product"));
>>> Filter parentfilter=new CachingWrapperFilter(new QueryWrapperFilter
>>>
>>> (parentQuery));
>>> BooleanQuery mainQuery=new BooleanQuery();
>>> Query childQuery=new TermQuery(new Term("size","m"));
>>> mainQuery.add(childQuery,Occur.SHOULD);
>>> ScoreMode scoremode=ScoreMode.None;
>>> ToParentBlockJoinQuery productitemQuery=new ToParentBlockJoinQuery
>>>
>>> (mainQuery, parentfilter,
>>> scoremode);
>>> BooleanQuery query = new BooleanQuery();
>>>  query.add(new TermQuery(new Term("name", "white color")), Occur.MUST);
>>>  query.add(productitemQuery, Occur.MUST);
>>> ToParentBlockJoinCollector c = new ToParentBlockJoinCollector(
>>>Sort.RELEVANCE, // sort
>>>10, // numHits
>>>true,   //
>>>
>>> trackScores
>>>false   //
>>>
>>> trackMaxScore
>>>);
>>> TopGroups hits =
>>> c.getTopGroups(
>>> productitemQuery,
>>>Sort.RELEVANCE,
>>>0,   // offset
>>>10,  // maxDocsPerGroup
>>>0,   // withinGroupOffset
>>>true // fillSortFields
>>>  );
>>> System.out.println("abc");
>>>
>>>
>>> }
>>> private static Document createProduct(String name,String description)
>>> {
>>> Document document=new Document();
>>> document.add(new org.apache.lucene.document.Field("name", name,
>>>
>>> org.apache.lucene.document.TextField.TYPE_STORED));
>>> document.add(new org.apache.lucene.document.Field("docType",
>>>
>>> "product", org.apache.lucene.document.TextField.TYPE_STORED));
>>> document.add(new org.apache.lucene.document.Field("description",
>>>
>>> description, org.apache.lucene.document.TextField.TYPE_STORED));
>>> return docum

RE: LUCENE-5388 AbstractMethodError

2014-01-30 Thread Markus Jelsma
Hi Uwe,

You're right. Although using the analysis package won't hurt the index, this 
case is evidence that it's a bad thing, especially if no backport is made. I'll 
port my code to use the updated API of 5.0.

Thanks guys,
Markus
 
-Original message-
> From:Uwe Schindler 
> Sent: Thursday 30th January 2014 14:48
> To: java-user@lucene.apache.org
> Subject: RE: LUCENE-5388 AbstractMethodError
> 
> Hi Markus,
> 
> Lucene trunk has a backwards incompatible API, so Analyzers compiled with 
> Lucene 4.6 cannot be used on Lucene trunk. The change in the Analysis API 
> will not be backported, because this would cause the same problem for users 
> updating from Lucene 4.6 to Lucene 4.7. In Lucene 4, we try to keep as much 
> backwards compatibility as possible. If your Analyzers are correct, you can
> use them in almost any 4.x version, but no longer with Lucene trunk.
> 
> If you want to use Lucene trunk, you have to recompile your Analyzers with
> trunk, too, and change their code to use the newest API (otherwise they won't
> compile). Analyzers are not the only change; Lucene trunk has many more
> changes affecting other parts of Lucene, like the StoredDocument/IndexDocument
> difference, so mixing those JARs is completely impossible. These other
> changes may not have affected you until now, because Analyzers do not use the
> Lucene Document and Index APIs, but it's still the wrong thing to do.
> 
> Uwe
> 
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
> 
> 
> > -Original Message-
> > From: Markus Jelsma [mailto:markus.jel...@openindex.io]
> > Sent: Thursday, January 30, 2014 12:52 PM
> > To: java-user@lucene.apache.org
> > Subject: RE: LUCENE-5388 AbstractMethodError
> > 
> > Hi Uwe,
> > 
> > The bug occurred only after LUCENE-5388 was committed to trunk; it looks
> > like it's the changes to Analyzer and friends. The full stack trace is not
> > much more helpful:
> > 
> > java.lang.AbstractMethodError
> > at 
> > org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:140)
> > at
> > io.openindex.lucene.analysis.util.QueryDigest.unigrams(QueryDigest.java:19
> > 6)
> > at
> > io.openindex.lucene.analysis.util.QueryDigest.calculate(QueryDigest.java:13
> > 5)
> > at
> > io.openindex.solr.handler.QueryDigestRequestHandler.handleRequestBody(
> > QueryDigestRequestHandler.java:56)
> > at
> > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandl
> > erBase.java:135)
> > at org.apache.solr.core.SolrCore.execute(SolrCore.java:1915)
> > at
> > org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:785
> > )
> > at
> > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418
> > )
> > at
> > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:203
> > )
> > at
> > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandle
> > r.java:1419)
> > at
> > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> > at
> > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:
> > 137)
> > at
> > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> > at
> > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.ja
> > va:231)
> > at
> > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.j
> > ava:1075)
> > at
> > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> > at
> > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.jav
> > a:193)
> > at
> > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.ja
> > va:1009)
> > at
> > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:
> > 135)
> > at
> > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHa
> > ndlerCollection.java:255)
> > at
> > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.
> > java:154)
> > at
> > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.ja
> > va:116)
> > at org.eclipse.jetty.server.Server.handle(Server.java:368)
> > at
> > org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHtt
> > pConnection.java:489)
> > at
> > org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHtt
> > pConnection.java:53)
> > at
> > org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractH
> > ttpConnection.java:942)
> > at
> > org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerCo
> > mplete(AbstractHttpConnection.java:1004)
> > at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
> > at 
> > org.eclipse.jetty.http.HttpParser.par

RE: LUCENE-5388 AbstractMethodError

2014-01-30 Thread Uwe Schindler
Hi Markus,

Lucene trunk has a backwards incompatible API, so Analyzers compiled with 
Lucene 4.6 cannot be used on Lucene trunk. The change in the Analysis API will 
not be backported, because this would cause the same problem for users updating 
from Lucene 4.6 to Lucene 4.7. In Lucene 4, we try to keep as much backwards 
compatibility as possible. If your Analyzers are correct, you can use them in
almost any 4.x version, but no longer with Lucene trunk.

If you want to use Lucene trunk, you have to recompile your Analyzers with
trunk, too, and change their code to use the newest API (otherwise they won't
compile). Analyzers are not the only change; Lucene trunk has many more changes
affecting other parts of Lucene, like the StoredDocument/IndexDocument
difference, so mixing those JARs is completely impossible. These other changes
may not have affected you until now, because Analyzers do not use the Lucene
Document and Index APIs, but it's still the wrong thing to do.

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de
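
For reference, a rough sketch (not code from this thread) of the signature
change behind the AbstractMethodError. Only one of the two classes below
compiles against any given Lucene version, which is exactly the point: an
Analyzer compiled against the 4.x method does not override the trunk abstract
method, so the JVM fails when Analyzer.tokenStream() calls it.

    import java.io.Reader;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.Tokenizer;
    import org.apache.lucene.analysis.core.KeywordTokenizer;

    // Lucene 4.x: the consumer's Reader is handed to createComponents.
    final class MyAnalyzer4x extends Analyzer {
      @Override
      protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
        Tokenizer source = new KeywordTokenizer(reader);
        return new TokenStreamComponents(source);
      }
    }

    // Lucene trunk (5.0) after LUCENE-5388: no Reader parameter; tokenStream()
    // pushes the Reader into the Tokenizer later via setReader().
    final class MyAnalyzerTrunk extends Analyzer {
      @Override
      protected TokenStreamComponents createComponents(String fieldName) {
        Tokenizer source = new KeywordTokenizer();
        return new TokenStreamComponents(source);
      }
    }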


> -Original Message-
> From: Markus Jelsma [mailto:markus.jel...@openindex.io]
> Sent: Thursday, January 30, 2014 12:52 PM
> To: java-user@lucene.apache.org
> Subject: RE: LUCENE-5388 AbstractMethodError
> 
> Hi Uwe,
> 
> The bug occurred only after LUCENE-5388 was committed to trunk; it looks like
> it's the changes to Analyzer and friends. The full stack trace is not much
> more helpful:
> 
> java.lang.AbstractMethodError
> at org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:140)
> at
> io.openindex.lucene.analysis.util.QueryDigest.unigrams(QueryDigest.java:19
> 6)
> at
> io.openindex.lucene.analysis.util.QueryDigest.calculate(QueryDigest.java:13
> 5)
> at
> io.openindex.solr.handler.QueryDigestRequestHandler.handleRequestBody(
> QueryDigestRequestHandler.java:56)
> at
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandl
> erBase.java:135)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1915)
> at
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:785
> )
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418
> )
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:203
> )
> at
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandle
> r.java:1419)
> at
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:
> 137)
> at
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.ja
> va:231)
> at
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.j
> ava:1075)
> at
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.jav
> a:193)
> at
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.ja
> va:1009)
> at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:
> 135)
> at
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHa
> ndlerCollection.java:255)
> at
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.
> java:154)
> at
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.ja
> va:116)
> at org.eclipse.jetty.server.Server.handle(Server.java:368)
> at
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHtt
> pConnection.java:489)
> at
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHtt
> pConnection.java:53)
> at
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractH
> ttpConnection.java:942)
> at
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerCo
> mplete(AbstractHttpConnection.java:1004)
> at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
> at 
> org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
> at
> org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnec
> tion.java:72)
> at
> org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(Socke
> tConnector.java:264)
> at
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.j
> ava:608)
> at
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.ja
> va:543)
> at java.lang.Thread.run(Thread.java:724)
> 
> Here's what happens at the consumer code and where the exception begins:
> TokenStream stream = analyzer.tokenStream(null, new
> StringReader(inp

Re: LUCENE-5388 AbstractMethodError

2014-01-30 Thread Benson Margulies
If you are sensitive to things being committed to trunk, that suggests that
you are building your own jars and using the trunk. Are you perfectly sure
that you have built, and are using, a consistent set of jars? It looks as
if you've got some trunk-y stuff and some 4.6.1 stuff.
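
One quick way to check (a hedged suggestion, not from the thread) is to print
the code source of a core Lucene class at runtime and see which JAR it was
actually loaded from:

    // Prints something like .../lucene-core-4.6.1.jar or a trunk snapshot jar,
    // which makes a mixed classpath easy to spot.
    System.out.println(org.apache.lucene.analysis.Analyzer.class
        .getProtectionDomain().getCodeSource().getLocation());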



On Thu, Jan 30, 2014 at 6:51 AM, Markus Jelsma
wrote:

> Hi Uwe,
>
> The bug occurred only after LUCENE-5388 was committed to trunk; it looks like
> it's the changes to Analyzer and friends. The full stack trace is not much
> more helpful:
>
> java.lang.AbstractMethodError
> at
> org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:140)
> at
> io.openindex.lucene.analysis.util.QueryDigest.unigrams(QueryDigest.java:196)
> at
> io.openindex.lucene.analysis.util.QueryDigest.calculate(QueryDigest.java:135)
> at
> io.openindex.solr.handler.QueryDigestRequestHandler.handleRequestBody(QueryDigestRequestHandler.java:56)
> at
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1915)
> at
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:785)
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:203)
> at
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> at
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> at
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
> at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> at
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> at
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
> at
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
> at org.eclipse.jetty.server.Server.handle(Server.java:368)
> at
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
> at
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
> at
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
> at
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
> at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
> at
> org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
> at
> org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
> at
> org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
> at
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
> at
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
> at java.lang.Thread.run(Thread.java:724)
>
> Here's what happens at the consumer code and where the exception begins:
> TokenStream stream = analyzer.tokenStream(null, new StringReader(input));
>
> We test trunk with our custom stuff as well, but all our custom stuff is
> nicely built with Maven against the most recent release of Solr and/or
> Lucene. If that stays a problem we may have to build stuff against
> branch_4x instead.
>
> Thanks,
> Markus
>
> -Original message-
> > From:Uwe Schindler 
> > Sent: Thursday 30th January 2014 11:18
> > To: java-user@lucene.apache.org
> > Subject: RE: LUCENE-5388 AbstractMethodError
> >
> > Hi,
> >
> > Can you please post your complete stack trace? I have no idea what
> LUCENE-5388 has to do with that error?
> >
> > Please make sure that all your Analyzers and all of your Solr
> installation only use *one set* of Lucene/Solr JAR files from *one*
> version. Mixing Lucene/Solr JARs and mixing with Factories compiled against
> older versions does not work. You have to keep all in sync, and then all
> should be fine.
> >
> > Uwe
> >
> > -
> > Uwe Schindler
> > H.-H.-Meier-Allee 63, D-28213 Bremen
> > http://www.thetaphi.de
> > eMail: u...@thetaphi.de
> >
> >
> > > -Origin

Re: BlockJoinQuery is missing in lucene-4.1.0 .

2014-01-30 Thread Priyanka Tufchi
Hello Mike

Thanks for the reply. Now we are able to get TopGroups, but we are still
not able to get the score. However, we did get the matched parent and
child through the following code. We need the score for ranking in our
application.

  Document childDoc = indexsearcher.doc(group.scoreDocs[0].doc);
  Document parentDoc = indexsearcher.doc(group.groupValue);

And the second question is: how does Lucene score? We have used it
before, but the score is not between 0 and 1 (for example 16.0 or 1.4).
If I want to express it as a percentage, how would that work when the
score looks like this?

On Thu, Jan 30, 2014 at 3:27 AM, Michael McCandless
 wrote:
> You should not use TextField.TYPE_STORED to index your docType field:
> that field type runs the analyzer.  I'm not sure that matters in your
> case, but that's deadly in general.  Use StringField instead (it
> indexes the provided text as a single token).
>
> Likewise for color, size fields.
>
> Try running your parentQuery as a search itself and confirm you see
> totalHits=2?  This will tell you if something is wrong in how you
> indexed the docType field, or created the parent query/filter.
>
> Also try running your childQuery separately and confirm you get
> non-zero totalHits.
>
> Hmm, this code never calls indexsearcher.search on the query?
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Thu, Jan 30, 2014 at 5:41 AM, Priyanka Tufchi
>  wrote:
>> Hello Michael,
>>
>> Following is the code. This is the sample with which we are trying to
>> get the hits. Please guide us.
>>
>> 
>>
>> public void newTry() throws IOException
>> {
>> StandardAnalyzer analyzer = new StandardAnalyzer
>>
>> (Version.LUCENE_41);
>> // 1. create the index
>> Directory index = new RAMDirectory();
>> IndexWriterConfig config = new IndexWriterConfig
>>
>> (Version.LUCENE_41,
>> analyzer);
>>
>> IndexWriter w = new IndexWriter(index, config);
>> List documents=new ArrayList();
>> documents.add(createProductItem("red", "s", "999"));
>> documents.add(createProductItem("red", "m", "1000"));
>> documents.add(createProductItem("red", "l", "2000"));
>> documents.add(createProduct("Polo Shirt", ".Made Of 100%
>>
>> cotton"));
>> w.addDocuments(documents);
>> documents.clear();
>> documents.add(createProductItem("light blue", "s", "1000"));
>> documents.add(createProductItem("blue", "s", "1900"));
>> documents.add(createProductItem("dark blue", "s", "1999"));
>> documents.add(createProductItem("light blue", "m", "2000"));
>> documents.add(createProductItem("blue", "m", "2090"));
>> documents.add(createProductItem("dark blue", "m", "2099"));
>> documents.add(createProduct("white color", "...stripe pattern"));
>> w.addDocuments(documents);
>> IndexReader indexreader=DirectoryReader.open(w, false);
>> IndexSearcher indexsearcher=new IndexSearcher(indexreader);
>> Query parentQuery= new TermQuery(new Term("docType", "product"));
>> Filter parentfilter=new CachingWrapperFilter(new QueryWrapperFilter
>>
>> (parentQuery));
>> BooleanQuery mainQuery=new BooleanQuery();
>> Query childQuery=new TermQuery(new Term("size","m"));
>> mainQuery.add(childQuery,Occur.SHOULD);
>> ScoreMode scoremode=ScoreMode.None;
>> ToParentBlockJoinQuery productitemQuery=new ToParentBlockJoinQuery
>>
>> (mainQuery, parentfilter,
>> scoremode);
>> BooleanQuery query = new BooleanQuery();
>>  query.add(new TermQuery(new Term("name", "white color")), Occur.MUST);
>>  query.add(productitemQuery, Occur.MUST);
>> ToParentBlockJoinCollector c = new ToParentBlockJoinCollector(
>>Sort.RELEVANCE, // sort
>>10, // numHits
>>true,   //
>>
>> trackScores
>>false   //
>>
>> trackMaxScore
>>);
>> TopGroups hits =
>> c.getTopGroups(
>> productitemQuery,
>>Sort.RELEVANCE,
>>0,   // offset
>>10,  // maxDocsPerGroup
>>0,   // withinGroupOffset
>>true // fillSortFields
>>  );
>> System.out.println("abc");
>>
>>
>> }
>> private static Document createProduct(String name,String description)
>> {
>> Document document=new Document();
>> document.add(new org.apache.lucene.document.Field("name", name,
>>
>> org.apache.lucene.document.TextField.TYPE_STORED));
>> document.add(new org.apache.lucene.document.Field("docType",
>>
>> "product", org.apache.lucene.document.TextField.TYPE_STORED));
>> document.add(new org.apache.lucene.document.Field("description",
>>
>> description, org.apache.lucene.document.TextField.TYPE_STORED));
>> return document;
>> }
>> private static Document createProductItem(String color,String size,String
>>
>> price)
>> {
>> Document document=new Document();
>> document.add(new org.apache.lucene.document.Field("color", color,
>>
>> org.apache.lucene.document.TextField.TYPE_STORED));
>> document.add(new org.apache.lucene.document.Field("size", size,
>>
>> org.apache.lucene.document.TextField.TYPE_STORED));
>> document.add(new org.apache.lucene.document.Field("price", price,
>>
>> or

RE: LUCENE-5388 AbstractMethodError

2014-01-30 Thread Markus Jelsma
Hi Uwe,

The bug occurred only after LUCENE-5388 was committed to trunk; it looks like
it's the changes to Analyzer and friends. The full stack trace is not much more
helpful:

java.lang.AbstractMethodError
at org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:140)
at 
io.openindex.lucene.analysis.util.QueryDigest.unigrams(QueryDigest.java:196)
at 
io.openindex.lucene.analysis.util.QueryDigest.calculate(QueryDigest.java:135)
at 
io.openindex.solr.handler.QueryDigestRequestHandler.handleRequestBody(QueryDigestRequestHandler.java:56)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1915)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:785)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:203)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at 
org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at 
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:724)

Here's what happens at the consumer code and where the exception begins:
TokenStream stream = analyzer.tokenStream(null, new StringReader(input));

We test trunk with our custom stuff as well, but all our custom stuff is nicely 
built with Maven against the most recent release of Solr and/or Lucene. If that 
stays a problem we may have to build stuff against branch_4x instead.

Thanks,
Markus
 
-Original message-
> From:Uwe Schindler 
> Sent: Thursday 30th January 2014 11:18
> To: java-user@lucene.apache.org
> Subject: RE: LUCENE-5388 AbstractMethodError
> 
> Hi,
> 
> Can you please post your complete stack trace? I have no idea what 
> LUCENE-5388 has to do with that error?
> 
> Please make sure that all your Analyzers and all of your Solr installation 
> only use *one set* of Lucene/Solr JAR files from *one* version. Mixing
> Lucene/Solr JARs and mixing with Factories compiled against older versions 
> does not work. You have to keep all in sync, and then all should be fine.
> 
> Uwe
> 
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
> 
> 
> > -Original Message-
> > From: Markus Jelsma [mailto:markus.jel...@openindex.io]
> > Sent: Thursday, January 30, 2014 10:50 AM
> > To: java-user@lucene.apache.org
> > Subject: LUCENE-5388 AbstractMethodError
> > 
> > Hi,
> > 
> > Apologies for cross-posting; I got no response on the Solr list.
> > 
> > We have a development environment running trunk but have custom
> > analyzers and token filters built on 4.6.1. Now the constructors have
> > changed somewhat and stuff breaks. Here's a consumer 

Re: BlockJoinQuery is missing in lucene-4.1.0 .

2014-01-30 Thread Michael McCandless
You should not use TextField.TYPE_STORED to index your docType field:
that field type runs the analyzer.  I'm not sure that matters in your
case, but that's deadly in general.  Use StringField instead (it
indexes the provided text as a single token).

Likewise for color, size fields.

Try running your parentQuery as a search itself and confirm you see
totalHits=2?  This will tell you if something is wrong in how you
indexed the docType field, or created the parent query/filter.

Also try running your childQuery separately and confirm you get
non-zero totalHits.

Hmm, this code never calls indexsearcher.search on the query?

Mike McCandless

http://blog.mikemccandless.com
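
A small sketch of the two suggestions above (field names follow the code
quoted below; this is not code from the thread, and it assumes the usual
org.apache.lucene.document and org.apache.lucene.search imports):

    // Index exact-match metadata with StringField (one un-analyzed token),
    // and keep TextField for free text that should go through the analyzer.
    Document doc = new Document();
    doc.add(new StringField("docType", "product", Field.Store.YES));
    doc.add(new StringField("size", "m", Field.Store.YES));
    doc.add(new TextField("description", "Made of 100% cotton", Field.Store.YES));

    // Sanity check: the parent query on its own should see totalHits=2.
    TopDocs parents = indexsearcher.search(
        new TermQuery(new Term("docType", "product")), 10);
    System.out.println("parent totalHits=" + parents.totalHits);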


On Thu, Jan 30, 2014 at 5:41 AM, Priyanka Tufchi
 wrote:
> Hello Michael,
>
> Following is the code. This is the sample with which we are trying to
> get the hits. Please guide us.
>
> 
>
> public void newTry() throws IOException
> {
> StandardAnalyzer analyzer = new StandardAnalyzer
>
> (Version.LUCENE_41);
> // 1. create the index
> Directory index = new RAMDirectory();
> IndexWriterConfig config = new IndexWriterConfig
>
> (Version.LUCENE_41,
> analyzer);
>
> IndexWriter w = new IndexWriter(index, config);
> List documents=new ArrayList();
> documents.add(createProductItem("red", "s", "999"));
> documents.add(createProductItem("red", "m", "1000"));
> documents.add(createProductItem("red", "l", "2000"));
> documents.add(createProduct("Polo Shirt", ".Made Of 100%
>
> cotton"));
> w.addDocuments(documents);
> documents.clear();
> documents.add(createProductItem("light blue", "s", "1000"));
> documents.add(createProductItem("blue", "s", "1900"));
> documents.add(createProductItem("dark blue", "s", "1999"));
> documents.add(createProductItem("light blue", "m", "2000"));
> documents.add(createProductItem("blue", "m", "2090"));
> documents.add(createProductItem("dark blue", "m", "2099"));
> documents.add(createProduct("white color", "...stripe pattern"));
> w.addDocuments(documents);
> IndexReader indexreader=DirectoryReader.open(w, false);
> IndexSearcher indexsearcher=new IndexSearcher(indexreader);
> Query parentQuery= new TermQuery(new Term("docType", "product"));
> Filter parentfilter=new CachingWrapperFilter(new QueryWrapperFilter
>
> (parentQuery));
> BooleanQuery mainQuery=new BooleanQuery();
> Query childQuery=new TermQuery(new Term("size","m"));
> mainQuery.add(childQuery,Occur.SHOULD);
> ScoreMode scoremode=ScoreMode.None;
> ToParentBlockJoinQuery productitemQuery=new ToParentBlockJoinQuery
>
> (mainQuery, parentfilter,
> scoremode);
> BooleanQuery query = new BooleanQuery();
>  query.add(new TermQuery(new Term("name", "white color")), Occur.MUST);
>  query.add(productitemQuery, Occur.MUST);
> ToParentBlockJoinCollector c = new ToParentBlockJoinCollector(
>Sort.RELEVANCE, // sort
>10, // numHits
>true,   //
>
> trackScores
>false   //
>
> trackMaxScore
>);
> TopGroups hits =
> c.getTopGroups(
> productitemQuery,
>Sort.RELEVANCE,
>0,   // offset
>10,  // maxDocsPerGroup
>0,   // withinGroupOffset
>true // fillSortFields
>  );
> System.out.println("abc");
>
>
> }
> private static Document createProduct(String name,String description)
> {
> Document document=new Document();
> document.add(new org.apache.lucene.document.Field("name", name,
>
> org.apache.lucene.document.TextField.TYPE_STORED));
> document.add(new org.apache.lucene.document.Field("docType",
>
> "product", org.apache.lucene.document.TextField.TYPE_STORED));
> document.add(new org.apache.lucene.document.Field("description",
>
> description, org.apache.lucene.document.TextField.TYPE_STORED));
> return document;
> }
> private static Document createProductItem(String color,String size,String
>
> price)
> {
> Document document=new Document();
> document.add(new org.apache.lucene.document.Field("color", color,
>
> org.apache.lucene.document.TextField.TYPE_STORED));
> document.add(new org.apache.lucene.document.Field("size", size,
>
> org.apache.lucene.document.TextField.TYPE_STORED));
> document.add(new org.apache.lucene.document.Field("price", price,
>
> org.apache.lucene.document.TextField.TYPE_STORED));
> return document;
> }
>
> On Thu, Jan 30, 2014 at 2:24 AM, Michael McCandless
>  wrote:
>> Hi,
>>
>> It looks like the mailing list stripped the attachment; can you try
>> inlining the code into your email (is it brief?).
>>
>> Also, have a look at the unit-test for ToParentBJQ and compare how it
>> runs the query with your code?
>>
>> Mike McCandless
>>
>> http://blog.mikemccandless.com
>>
>>
>> On Thu, Jan 30, 2014 at 5:15 AM, Priyanka Tufchi
>>  wrote:
>>> Hello Michael,
>>>
>>> We tried the sample code, but the value of "hits" we are getting is
>>> null. We tried to search on the net, but no proper sample example is
>>> given which can help us understand. We attached our code with mail
>>> .please it would be great if you can give a 

Re: NRT index readers and new commits

2014-01-30 Thread Michael McCandless
Lucene absolutely relies on this behavior, that most filesystems support.

I.e., if you have an open file, and someone else deletes the file
behind it, your open file will continue to work until you close it,
and then it's "really" deleted.  ("delete on last close")

Unix achieves this by allowing the deletion of the directory entry
(but the file bytes / inode still remain allocated on disk).  Windows
achieves it by refusing to delete still-open files.

But some filesystems, e.g. NFS, do not do this when the two operations
are on separate clients (this results in the stale NFS file handle
problem), which is why you must use a custom IndexDeletionPolicy if your
index is shared via NFS.  Separately, such an approach usually results in
poor search performance ...

In your case, since you're using NRT, NFS is a non-issue: even if you
did have your index on NFS, the NFS client handles "delete on last
close" for all operations on that single client.

Mike McCandless

http://blog.mikemccandless.com
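
A minimal sketch of the pattern under discussion (assuming the ReaderManager
Vitaly mentions; the writer variable and the surrounding setup are
illustrative, not from the thread):

    // The point-in-time NRT reader keeps working even if a later commit
    // makes the writer delete segment files the reader still references;
    // "delete on last close" keeps the bytes alive until release().
    ReaderManager manager = new ReaderManager(writer, true /* applyAllDeletes */);
    DirectoryReader reader = manager.acquire();
    try {
      // ... long-lived searches against this snapshot, while other threads
      // keep adding documents and calling writer.commit() ...
    } finally {
      manager.release(reader);  // last close: the old files become deletable
    }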


On Thu, Jan 30, 2014 at 3:03 AM, Vitaly Funstein  wrote:
> Suppose I have an IndexReader instance obtained with this API:
>
> DirectoryReader.open(IndexWriter, boolean);
>
> (I actually use a ReaderManager in front of it, but that's beside the
> point).
>
> There is no manual commit happening prior to this call. Now, I would like
> to keep this reader around until no longer needed, i.e. until the app is
> done with the data this reader will see. Since the index is live, there may
> be new data added after the reader is returned, of course - followed by one
> or more commits at arbitrary points in time.
>
> The question is - what will happen to the flushed segment files this reader
> is backed by, when there is a later commit on the writer that is tied to
> the NRT reader? If I'm reading the code correctly, IndexFileDeleter will
> detect reference counts to those old segment files reaching 0 and try to
> delete them. This is because IndexWriter.getReader(boolean) doesn't
> checkpoint with the IndexFileDeleter associated with it, which would
> increase ref counts on the managed files. Also, actually closing this
> reader appears to provide no feedback to the backing writer; it only
> closes the data stream(s) but doesn't seem to release used segment files to
> the deleter...
>
> Does all this just rely on the fact that most OSes allow deletion of a file
> that is still open (Windows being a notable exception)? It seems the whole
> IndexWriter.deletePendingFiles() API exists to work around the situation
> when it's not allowed... And is it valid to assume it's a safe thing to do,
> even when the OS supports it?

-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org



Re: Need Help In code

2014-01-30 Thread Priyanka Tufchi
Hello Mike

We tried the following code, but it is returning null:


TopGroups hits = c.getTopGroups(
    productitemQuery,
    Sort.RELEVANCE,
    0,    // offset
    10,   // maxDocsPerGroup
    0,    // withinGroupOffset
    true  // fillSortFields
);






On Thu, Jan 30, 2014 at 2:35 AM, Michael McCandless
 wrote:
> After indexsearcher.search you should call c.getTopGroups?  See the
> TestBlockJoin.java example...
>
> Can you boil this down to a runnable test case, i.e. include
> createProductItem/createProduct sources, etc.
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Thu, Jan 30, 2014 at 2:20 AM, Priyanka Tufchi
>  wrote:
>> Hello
>>
>> This is the sample code of BlockJoinQuery we tried.
>> Issues:
>> 1) Don't know how to get hits and score from it.
>> 2) This code is not giving output.
>>
>> I have attached the code for easy view
>>
>>
>> StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_41);
>> // 1. create the index
>> Directory index = new RAMDirectory();
>> IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_41,
>> analyzer);
>>
>> IndexWriter w = new IndexWriter(index, config);
>> List documents=new ArrayList();
>> documents.add(createProductItem("red", "s", "999"));
>> documents.add(createProductItem("red", "m", "1000"));
>> documents.add(createProductItem("red", "l", "2000"));
>> documents.add(createProduct("...Polo Shirt", ".Made Of 100% cotton"));
>> w.addDocuments(documents);
>> documents.clear();
>> documents.add(createProductItem("light blue", "s", "1000"));
>> documents.add(createProductItem("blue", "s", "1900"));
>> documents.add(createProductItem("dark blue", "s", "1999"));
>> documents.add(createProductItem("light blue", "m", "2000"));
>> documents.add(createProductItem("blue", "m", "2090"));
>> documents.add(createProductItem("dark blue", "m", "2099"));
>> documents.add(createProduct(".white color", "...stripe pattern"));
>> w.addDocuments(documents);
>> IndexReader indexreader=DirectoryReader.open(w, false);
>> IndexSearcher indexsearcher=new IndexSearcher(indexreader);
>> Query parentQuery= new TermQuery(new Term("doctype", "product"));
>> Filter parentfilter=new CachingWrapperFilter(new
>> QueryWrapperFilter(parentQuery));
>> Query childQuery=new TermQuery(new Term("size","m"));
>> ScoreMode scoremode=ScoreMode.Max;
>> //String Query="blue AND l";
>> BooleanQuery mainQuery=new BooleanQuery();
>> //need to check this parameters
>> mainQuery.add(childQuery,Occur.MUST);
>> //mainQuery.add(query, occur);
>> ToParentBlockJoinQuery productitemQuery=new
>> ToParentBlockJoinQuery(mainQuery, parentfilter,
>> scoremode);
>> ToParentBlockJoinCollector c = new ToParentBlockJoinCollector(
>>     Sort.RELEVANCE, // sort
>>     10,             // numHits
>>     true,           // trackScores
>>     false           // trackMaxScore
>>     );
>>
>> indexsearcher.search(productitemQuery,c);
>>
>>
>>
>> -
>> To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: java-user-h...@lucene.apache.org
>
> -
> To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
> For additional commands, e-mail: java-user-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org



Re: BlockJoinQuery is missing in lucene-4.1.0 .

2014-01-30 Thread Priyanka Tufchi
Hello Michael,

Following is the code. This is the sample with which we are trying to
get the hits. Please guide us.



public void newTry() throws IOException
{
    StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_41);
    // 1. create the index
    Directory index = new RAMDirectory();
    IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_41, analyzer);

    IndexWriter w = new IndexWriter(index, config);
    List documents = new ArrayList();
    documents.add(createProductItem("red", "s", "999"));
    documents.add(createProductItem("red", "m", "1000"));
    documents.add(createProductItem("red", "l", "2000"));
    documents.add(createProduct("Polo Shirt", ".Made Of 100% cotton"));
    w.addDocuments(documents);
    documents.clear();
    documents.add(createProductItem("light blue", "s", "1000"));
    documents.add(createProductItem("blue", "s", "1900"));
    documents.add(createProductItem("dark blue", "s", "1999"));
    documents.add(createProductItem("light blue", "m", "2000"));
    documents.add(createProductItem("blue", "m", "2090"));
    documents.add(createProductItem("dark blue", "m", "2099"));
    documents.add(createProduct("white color", "...stripe pattern"));
    w.addDocuments(documents);
    IndexReader indexreader = DirectoryReader.open(w, false);
    IndexSearcher indexsearcher = new IndexSearcher(indexreader);
    Query parentQuery = new TermQuery(new Term("docType", "product"));
    Filter parentfilter = new CachingWrapperFilter(new QueryWrapperFilter(parentQuery));
    BooleanQuery mainQuery = new BooleanQuery();
    Query childQuery = new TermQuery(new Term("size", "m"));
    mainQuery.add(childQuery, Occur.SHOULD);
    ScoreMode scoremode = ScoreMode.None;
    ToParentBlockJoinQuery productitemQuery =
        new ToParentBlockJoinQuery(mainQuery, parentfilter, scoremode);
    BooleanQuery query = new BooleanQuery();
    query.add(new TermQuery(new Term("name", "white color")), Occur.MUST);
    query.add(productitemQuery, Occur.MUST);
    ToParentBlockJoinCollector c = new ToParentBlockJoinCollector(
        Sort.RELEVANCE, // sort
        10,             // numHits
        true,           // trackScores
        false           // trackMaxScore
    );
    TopGroups hits = c.getTopGroups(
        productitemQuery,
        Sort.RELEVANCE,
        0,    // offset
        10,   // maxDocsPerGroup
        0,    // withinGroupOffset
        true  // fillSortFields
    );
    System.out.println("abc");
}

private static Document createProduct(String name, String description)
{
    Document document = new Document();
    document.add(new org.apache.lucene.document.Field("name", name,
        org.apache.lucene.document.TextField.TYPE_STORED));
    document.add(new org.apache.lucene.document.Field("docType", "product",
        org.apache.lucene.document.TextField.TYPE_STORED));
    document.add(new org.apache.lucene.document.Field("description", description,
        org.apache.lucene.document.TextField.TYPE_STORED));
    return document;
}

private static Document createProductItem(String color, String size, String price)
{
    Document document = new Document();
    document.add(new org.apache.lucene.document.Field("color", color,
        org.apache.lucene.document.TextField.TYPE_STORED));
    document.add(new org.apache.lucene.document.Field("size", size,
        org.apache.lucene.document.TextField.TYPE_STORED));
    document.add(new org.apache.lucene.document.Field("price", price,
        org.apache.lucene.document.TextField.TYPE_STORED));
    return document;
}

On Thu, Jan 30, 2014 at 2:24 AM, Michael McCandless
 wrote:
> Hi,
>
> It looks like the mailing list stripped the attachment; can you try
> inlining the code into your email (is it brief?).
>
> Also, have a look at the unit-test for ToParentBJQ and compare how it
> runs the query with your code?
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Thu, Jan 30, 2014 at 5:15 AM, Priyanka Tufchi
>  wrote:
>> Hello Michael,
>>
>> We tried the sample of code but the value of   "hits"   we are getting
>> is null. We tried to search on net but no proper sample example given
>> which can help us to understand. We attached our code with mail
>> .please it would be great if you can give a look to our code.
>>
>> Thanks.
>>
>> On Wed, Jan 29, 2014 at 3:15 AM, Priyanka Tufchi
>>  wrote:
>>> Hello Michael,
>>>
>>> In the example given in your blog, the following line gives an error:
>>>
>>>   searcher.search(query, c);
>>>
>>> Should it be converted to use an IndexSearcher?
>>>
>>> There is no explanation given for adding documents in the blog example.
>>> Can you please lend a helping hand?
>>>
>>> Thanks.
>>>
>>>
>>>
>>> On Wed, Jan 29, 2014 at 2:59 AM, Michael McCandless
>>>  wrote:
 Actually, the blog post should still apply: just insert ToParent to
 rename things.

 ToChildBlockJoinQuery is the same idea, but it joins in the reverse
 direction, so e.g. if your index has CDs (parent docs) and individual
 songs on those CDs (child docs), you can take a parent-level
 constraint (e.g. maybe price < $12) and join it down to child level
 constraints (maybe a query against the song's title) and then see
 i

Re: Need Help In code

2014-01-30 Thread Michael McCandless
After indexsearcher.search you should call c.getTopGroups?  See the
TestBlockJoin.java example...

Can you boil this down to a runnable test case, i.e. include
createProductItem/createProduct sources, etc.

Mike McCandless

http://blog.mikemccandless.com
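
In other words (a hedged sketch using the variable names from the quoted code
below):

    // Run the collector first, then ask it for the joined groups.
    indexsearcher.search(productitemQuery, c);
    TopGroups<Integer> hits = c.getTopGroups(
        productitemQuery, Sort.RELEVANCE,
        0,    // offset
        10,   // maxDocsPerGroup
        0,    // withinGroupOffset
        true  // fillSortFields
    );
    if (hits == null) {
      // no parent group matched -- test the parent and child queries separately
    }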


On Thu, Jan 30, 2014 at 2:20 AM, Priyanka Tufchi
 wrote:
> Hello
>
> This is the sample code of BlockJoinQuery we tried.
> Issues:
> 1) Don't know how to get hits and score from it.
> 2) This code is not giving output.
>
> I have attached the code for easy view
>
>
> StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_41);
> // 1. create the index
> Directory index = new RAMDirectory();
> IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_41,
> analyzer);
>
> IndexWriter w = new IndexWriter(index, config);
> List documents=new ArrayList();
> documents.add(createProductItem("red", "s", "999"));
> documents.add(createProductItem("red", "m", "1000"));
> documents.add(createProductItem("red", "l", "2000"));
> documents.add(createProduct("...Polo Shirt", ".Made Of 100% cotton"));
> w.addDocuments(documents);
> documents.clear();
> documents.add(createProductItem("light blue", "s", "1000"));
> documents.add(createProductItem("blue", "s", "1900"));
> documents.add(createProductItem("dark blue", "s", "1999"));
> documents.add(createProductItem("light blue", "m", "2000"));
> documents.add(createProductItem("blue", "m", "2090"));
> documents.add(createProductItem("dark blue", "m", "2099"));
> documents.add(createProduct(".white color", "...stripe pattern"));
> w.addDocuments(documents);
> IndexReader indexreader=DirectoryReader.open(w, false);
> IndexSearcher indexsearcher=new IndexSearcher(indexreader);
> Query parentQuery= new TermQuery(new Term("doctype", "product"));
> Filter parentfilter=new CachingWrapperFilter(new
> QueryWrapperFilter(parentQuery));
> Query childQuery=new TermQuery(new Term("size","m"));
> ScoreMode scoremode=ScoreMode.Max;
> //String Query="blue AND l";
> BooleanQuery mainQuery=new BooleanQuery();
> //need to check this parameters
> mainQuery.add(childQuery,Occur.MUST);
> //mainQuery.add(query, occur);
> ToParentBlockJoinQuery productitemQuery=new
> ToParentBlockJoinQuery(mainQuery, parentfilter,
> scoremode);
> ToParentBlockJoinCollector c = new ToParentBlockJoinCollector(
>     Sort.RELEVANCE, // sort
>     10,             // numHits
>     true,           // trackScores
>     false           // trackMaxScore
>     );
>
> indexsearcher.search(productitemQuery,c);
>
>
>
> -
> To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
> For additional commands, e-mail: java-user-h...@lucene.apache.org

-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org



Re: BlockJoinQuery is missing in lucene-4.1.0 .

2014-01-30 Thread Michael McCandless
Hi,

It looks like the mailing list stripped the attachment; can you try
inlining the code into your email (is it brief?).

Also, have a look at the unit-test for ToParentBJQ and compare how it
runs the query with your code?

Mike McCandless

http://blog.mikemccandless.com


On Thu, Jan 30, 2014 at 5:15 AM, Priyanka Tufchi
 wrote:
> Hello Michael,
>
> We tried the sample code, but the value of "hits" we are getting is
> null. We tried to search on the net, but no proper sample example is
> given which can help us understand. We attached our code to the mail;
> it would be great if you could take a look at it.
>
> Thanks.
>
> On Wed, Jan 29, 2014 at 3:15 AM, Priyanka Tufchi
>  wrote:
>> Hello Michael,
>>
>> In the example given in your blog, the following line gives an error:
>>
>>   searcher.search(query, c);
>>
>> Should it be converted to use an IndexSearcher?
>>
>> There is no explanation given for adding documents in the blog example.
>> Can you please lend a helping hand?
>>
>> Thanks.
>>
>>
>>
>> On Wed, Jan 29, 2014 at 2:59 AM, Michael McCandless
>>  wrote:
>>> Actually, the blog post should still apply: just insert ToParent to
>>> rename things.
>>>
>>> ToChildBlockJoinQuery is the same idea, but it joins in the reverse
>>> direction, so e.g. if your index has CDs (parent docs) and individual
>>> songs on those CDs (child docs), you can take a parent-level
>>> constraint (e.g. maybe price < $12) and join it down to child level
>>> constraints (maybe a query against the song's title) and then see
>>> individual songs (not albums) as the returned hits.
>>>
>>> Mike McCandless
>>>
>>> http://blog.mikemccandless.com
>>>
>>>
>>> On Wed, Jan 29, 2014 at 5:44 AM, Priyanka Tufchi
>>>  wrote:
 Hello Michael,

 Can I get a code snippet for these new classes (ToParentBlockJoinQuery,
 ToChildBlockJoinQuery)
 and how to use them?

 thanks

 On Wed, Jan 29, 2014 at 2:22 AM, Michael McCandless
  wrote:
> Sorry, BlockJoinQuery was split into two separate classes:
> ToParentBlockJoinQuery, ToChildBlockJoinQuery.
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Wed, Jan 29, 2014 at 4:32 AM, Priyanka Tufchi
>  wrote:
>> Subject: BlockJoinQuery is missing in lucene-4.1.0 .
>> To: java-user@lucene.apache.org
>>
>>
>> Hello ,
>>
>> I want to search relational content which has a parent-child
>> relationship. I am following the link below:
>>
>> http://blog.mikemccandless.com/2012/01/searching-relational-content-with.html
>>
>> but got problem in following line
>>
>>  BlockJoinQuery skuJoinQuery = new BlockJoinQuery(
>> skuQuery,
>> shirts,  ScoreMode.None);
>>
>> Where is the "BlockJoinQuery" class in lucene-4.1.0? Has it been
>> removed, or is there some alternative way in this version?
>>
>> Thanks in advance.
>>
>> -
>> To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: java-user-h...@lucene.apache.org
>>
>
> -
> To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
> For additional commands, e-mail: java-user-h...@lucene.apache.org
>

 -
 To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
 For additional commands, e-mail: java-user-h...@lucene.apache.org

>>>
>>> -
>>> To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: java-user-h...@lucene.apache.org
>>>
>
>
>
> -
> To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
> For additional commands, e-mail: java-user-h...@lucene.apache.org

-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org



RE: LUCENE-5388 AbstractMethodError

2014-01-30 Thread Uwe Schindler
Hi,

Can you please post your complete stack trace? I have no idea what LUCENE-5388 
has to do with that error?

Please make sure that all your Analyzers and all of your Solr installation only
use *one set* of Lucene/Solr JAR files from *one* version. Mixing Lucene/Solr
JARs and mixing with Factories compiled against older versions does not work. 
You have to keep all in sync, and then all should be fine.

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


> -Original Message-
> From: Markus Jelsma [mailto:markus.jel...@openindex.io]
> Sent: Thursday, January 30, 2014 10:50 AM
> To: java-user@lucene.apache.org
> Subject: LUCENE-5388 AbstractMethodError
> 
> Hi,
> 
> Apologies for cross-posting; I got no response on the Solr list.
> 
> We have a development environment running trunk but have custom
> analyzers and token filters built on 4.6.1. Now the constructors have changed
> somewhat and stuff breaks. Here's a consumer trying to get a TokenStream
> from an Analyzer object doing TokenStream stream =
> analyzer.tokenStream(null, new StringReader(input)); throwing:
> 
> Caused by: java.lang.AbstractMethodError
>   at
> org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:140)
> 
> Changing the constructors won't work either because on 4.x we must
> override that specific method: analyzer is not abstract and does not override
> abstract method createComponents(String,Reader) in Analyzer :)
> 
> So, any hints on how to deal with this thing? Wait for 4.x backport of 5388, 
> or
> do something clever like <...> fill in the blanks.
> 
> Many thanks,
> Markus
> 
> -
> To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
> For additional commands, e-mail: java-user-h...@lucene.apache.org


-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org



Re: BlockJoinQuery is missing in lucene-4.1.0 .

2014-01-30 Thread Priyanka Tufchi
Hello Michael,

We tried the sample code, but the value of "hits" we are getting is
null. We tried to search on the net, but no proper sample example is
given which can help us understand. We attached our code to the mail;
it would be great if you could take a look at it.

Thanks.

On Wed, Jan 29, 2014 at 3:15 AM, Priyanka Tufchi
 wrote:
> Hello Michael,
>
> In the example given in your blog, the following line gives an error:
>
>   searcher.search(query, c);
>
> Should it be converted to use an IndexSearcher?
>
> There is no explanation given for adding documents in the blog example.
> Can you please lend a helping hand?
>
> Thanks.
>
>
>
> On Wed, Jan 29, 2014 at 2:59 AM, Michael McCandless
>  wrote:
>> Actually, the blog post should still apply: just insert ToParent to
>> rename things.
>>
>> ToChildBlockJoinQuery is the same idea, but it joins in the reverse
>> direction, so e.g. if your index has CDs (parent docs) and individual
>> songs on those CDs (child docs), you can take a parent-level
>> constraint (e.g. maybe price < $12) and join it down to child level
>> constraints (maybe a query against the song's title) and then see
>> individual songs (not albums) as the returned hits.
>>
>> Mike McCandless
>>
>> http://blog.mikemccandless.com
>>
>>
>> On Wed, Jan 29, 2014 at 5:44 AM, Priyanka Tufchi
>>  wrote:
>>> Hello Michael,
>>>
>>> Can I get a code snippet for these new classes (ToParentBlockJoinQuery,
>>> ToChildBlockJoinQuery)
>>> and how to use them?
>>>
>>> thanks
>>>
>>> On Wed, Jan 29, 2014 at 2:22 AM, Michael McCandless
>>>  wrote:
 Sorry, BlockJoinQuery was split into two separate classes:
 ToParentBlockJoinQuery, ToChildBlockJoinQuery.

 Mike McCandless

 http://blog.mikemccandless.com


 On Wed, Jan 29, 2014 at 4:32 AM, Priyanka Tufchi
  wrote:
> Subject: BlockJoinQuery is missing in lucene-4.1.0 .
> To: java-user@lucene.apache.org
>
>
> Hello ,
>
> I want to search relational content which has a parent-child
> relationship. I am following the link below:
>
> http://blog.mikemccandless.com/2012/01/searching-relational-content-with.html
>
> but got problem in following line
>
>  BlockJoinQuery skuJoinQuery = new BlockJoinQuery(
> skuQuery,
> shirts,  ScoreMode.None);
>
> Where is the "BlockJoinQuery" class in lucene-4.1.0? Has it been
> removed, or is there some alternative way in this version?
>
> Thanks in advance.
>
> -
> To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
> For additional commands, e-mail: java-user-h...@lucene.apache.org
>

 -
 To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
 For additional commands, e-mail: java-user-h...@lucene.apache.org

>>>
>>> -
>>> To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: java-user-h...@lucene.apache.org
>>>
>>
>> -
>> To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: java-user-h...@lucene.apache.org
>>


-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org

LUCENE-5388 AbstractMethodError

2014-01-30 Thread Markus Jelsma
Hi,

Apologies for cross-posting; I got no response on the Solr list.

We have a development environment running trunk but have custom analyzers and
token filters built on 4.6.1. Now the constructors have changed somewhat and
stuff breaks. Here's a consumer trying to get a TokenStream from an Analyzer 
object doing TokenStream stream = analyzer.tokenStream(null, new 
StringReader(input)); throwing:

Caused by: java.lang.AbstractMethodError
at org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:140)

Changing the constructors won't work either because on 4.x we must override 
that specific method: analyzer is not abstract and does not override abstract 
method createComponents(String,Reader) in Analyzer :)

So, any hints on how to deal with this thing? Wait for 4.x backport of 5388, or 
do something clever like <...> fill in the blanks.

Many thanks,
Markus

-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org



NRT index readers and new commits

2014-01-30 Thread Vitaly Funstein
Suppose I have an IndexReader instance obtained with this API:

DirectoryReader.open(IndexWriter, boolean);

(I actually use a ReaderManager in front of it, but that's beside the
point).

There is no manual commit happening prior to this call. Now, I would like
to keep this reader around until no longer needed, i.e. until the app is
done with the data this reader will see. Since the index is live, there may
be new data added after the reader is returned, of course - followed by one
or more commits at arbitrary points in time.

The question is - what will happen to the flushed segment files this reader
is backed by, when there is a later commit on the writer that is tied to
the NRT reader? If I'm reading the code correctly, IndexFileDeleter will
detect reference counts to those old segment files reaching 0 and try to
delete them. This is because IndexWriter.getReader(boolean) doesn't
checkpoint with the IndexFileDeleter associated with it, which would
increase ref counts on the managed files. Also, actually closing this
reader appears to provide no feedback to the backing writer; it only
closes the data stream(s) but doesn't seem to release used segment files to
the deleter...

Does all this just rely on the fact that most OSes allow deletion of a file
that is still open (Windows being a notable exception)? It seems the whole
IndexWriter.deletePendingFiles() API exists to work around the situation
when it's not allowed... And is it valid to assume it's a safe thing to do,
even when the OS supports it?