Re: IndexWriter has closed

2019-04-01 Thread Aroop Ganguly
Hi Edwin

Yes, we did not seem to have hit any filesystem upper bounds.
I have not been able to reproduce this since that date.
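
For anyone hitting the same Input/output error on a commit, a quick way to rule
out disk exhaustion is to check both blocks and inodes on the Solr data
partition (the path below is only an example; point it at your data directory):

df -h /var/solr/data
df -i /var/solr/data   # inodes can run out even when free blocks remain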

> On Apr 1, 2019, at 7:28 PM, Zheng Lin Edwin Yeo  wrote:
> 
> Have you checked whether there is enough space on your disk to index all the
> documents?
> 
> Regards,
> Edwin
> 
> On Fri, 29 Mar 2019 at 15:16, Aroop Ganguly  wrote:
> 
>> Trying again .. Any idea why this might happen?
>> 
>> 
>>> On Mar 27, 2019, at 10:43 PM, Aroop Ganguly 
>> wrote:
>>> 
>>> Hi Everyone
>>> 
>>> My indexing jobs are failing with “this IndexWriter has closed” errors.
>>> This is a Solr 7.5 setup, with an NRT index.
>>> 
>>> In deeper logs I see some of these exceptions.
>>> Any idea what could have caused this?
>>> 
>>> o.a.s.s.HttpSolrCall null:org.apache.solr.common.SolrException:
>> java.io.IOException: Input/output error
>>>  at
>> org.apache.solr.update.TransactionLog.writeCommit(TransactionLog.java:477)
>>>  at org.apache.solr.update.UpdateLog.postCommit(UpdateLog.java:833)
>>>  at org.apache.solr.update.UpdateLog.preCommit(UpdateLog.java:817)
>>>  at
>> org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:669)
>>>  at
>> org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:93)
>>>  at
>> org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:68)
>>>  at
>> org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalCommit(DistributedUpdateProcessor.java:1959)
>>>  at
>> org.apache.solr.update.processor.DistributedUpdateProcessor.processCommit(DistributedUpdateProcessor.java:1935)
>>>  at
>> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:160)
>>>  at
>> org.apache.solr.handler.RequestHandlerUtils.handleCommit(RequestHandlerUtils.java:69)
>>>  at
>> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:62)
>>>  at
>> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
>>>  at org.apache.solr.core.SolrCore.execute(SolrCore.java:2541)
>>>  at
>> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709)
>>>  at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515)
>>>  at
>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)
>>>  at
>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)
>>>  at
>> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)
>>>  at
>> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
>>>  at
>> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
>>>  at
>> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>>>  at
>> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>>>  at
>> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
>>>  at
>> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
>>>  at
>> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
>>>  at
>> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)
>>>  at
>> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
>>>  at
>> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
>>>  at
>> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
>>>  at
>> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
>>>  at
>> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)
>>>  at
>> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
>>>  at
>> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)
>>>  at
>> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
>>>  at
>> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>>>  at
>> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>>>  at
>> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>>>  at org.eclipse.jetty.server.Server.handle(Server.java:531)
>>>  at
>> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)
>>>  at
>> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)
>>>  at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)
>>>  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)
>>>  at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)

Re: Solr 7.5 - Indexing Failing due to "IndexWriter is Closed"

2019-04-01 Thread Aroop Ganguly
It turns out the cause was multiple indexing jobs writing into the index
simultaneously, which one can imagine putting heavy JVM load on certain
replicas.
Once this was found and only one job was run at a time, things were back to normal.

Your comment that the stack trace has no correlation to the cause seems right!

> On Apr 1, 2019, at 5:32 PM, Shawn Heisey  wrote:
> 
> On 4/1/2019 5:40 PM, Aroop Ganguly wrote:
>> Thanks Shawn, for the initial response.
>> Digging into it a bit, I was wondering if we’d care to read the innermost
>> stack.
>> From the innermost stack it seems to be telling us something about what
>> triggered it?
>> Of course, the system could have been overloaded as well, but is the
>> exception telling us something, or is it of no use to consider this stack?
> 
> The stacktrace on OOME is rarely useful.  The memory allocation where the 
> error is thrown probably has absolutely no connection to the part of the 
> program where major amounts of memory are being used.  It could be ANY memory 
> allocation that actually causes the error.
> 
> Thanks,
> Shawn



Re: A working example to play with Naive Bayes classifier

2019-04-01 Thread koponk
Hi, I have a problem implementing Solr classification.

this is my schema:

[schema XML stripped by the mailing-list archive]
and this is my solrconfig:

[solrconfig XML stripped by the archive; the surviving fragments are "classi",
"pagetext_mlt", "knn_tags", "prebayes_tags", and "bayes"]
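
For reference, a minimal sketch of what such a chain usually looks like in
solrconfig.xml, reconstructed from the fragments above (the exact attribute
placement is an assumption, not the poster's actual config):

<updateRequestProcessorChain name="classi">
  <!-- assumed roles: input text field, training class field,
       predicted class field, and the "bayes" algorithm -->
  <processor class="solr.ClassificationUpdateProcessorFactory">
    <str name="inputFields">pagetext_mlt</str>
    <str name="classField">knn_tags</str>
    <str name="predictedClassField">prebayes_tags</str>
    <str name="algorithm">bayes</str>
  </processor>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>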

But this is not working. Steps:
1. insert document A with pagetext_mlt="something A" and knn_tags="aaa"
2. insert document B with pagetext_mlt="something B" and knn_tags="bbb"
3. insert document C with pagetext_mlt="something B" and knn_tags=null

But the prebayes_tags field is always empty (I can't see it even though I
stored the field). Is there something I missed?
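
For completeness, a sketch of how those three inserts might look over HTTP,
assuming a core named "demo" and that the chain is selected per request via
update.chain (core and chain names are placeholders):

# index two labelled training documents, then one unlabelled document
curl 'http://localhost:8983/solr/demo/update?update.chain=classi&commit=true' \
  -H 'Content-Type: application/json' -d '[
  {"id": "A", "pagetext_mlt": "something A", "knn_tags": "aaa"},
  {"id": "B", "pagetext_mlt": "something B", "knn_tags": "bbb"},
  {"id": "C", "pagetext_mlt": "something B"}
]'

One common gotcha: the classifier can only learn from documents that are
already committed and visible to a searcher, so if A, B and C arrive in a
single batch, C is classified against an index that does not yet contain the
two training documents.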

Thanks,


Alessandro Benedetti wrote
> But how big is your index? Are you expecting Solr to automatically
> classify your documents without any knowledge base?
> Please attach an example of your schema.
> There was a reason I asked you :)
> It seems related to the fact that we get no tokens from the text analysis.
> 
> Cheers
> 
> On Fri, Jul 15, 2016 at 12:11 PM, Tomas Ramanauskas <Tomas.Ramanauskas@...> wrote:
> 
>> Hi, Allesandro,
>>
>> sorry for the delay. What do you mean?
>>
>>
>> As I mentioned earlier, I followed a super simple set of steps.
>>
>> 1. Download Solr
>> 2. Configure classification
>> 3. Create some documents using curl over HTTP.
>>
>>
>> Is it difficult to reproduce the steps / problem?
>>
>>
>> Tomas
>>
>>
>>
>> > On 23 Jun 2016, at 16:42, Alessandro Benedetti <benedetti.alex85@...> wrote:
>> >
>> > Can you give an example of your schema, and can you run a simple query
>> > for your index? I'm curious to see how the input fields are analyzed.
>> >
>> > Cheers
>> >
>> > On Wed, Jun 22, 2016 at 6:05 PM, Alessandro Benedetti <benedetti.alex85@...> wrote:
>> >
>> >> This is better!  At least the classifier is invoked!
>> >> How many docs in the index have the class assigned?
>> >> Take a look at the stacktrace and you should find the cause!
>> >> I am now on mobile; I will check the code tomorrow!
>> >> Cheers
>> >> On 22 Jun 2016 5:26 pm, "Tomas Ramanauskas" <Tomas.Ramanauskas@...> wrote:
>> >>
>> >>>
>> >>> I also tried with this config (adding **):
>> >>>
>> >>> [config XML stripped by the archive; the surviving fragment is
>> >>> "classification"]
>> >>> And I get the error:
>> >>>
>> >>>
>> >>>
>> >>> $ curl http://localhost:8983/solr/demo/update -d '
>> >>> [
>> >>> {"id" : "book15",
>> >>> "title_t":["The Way of Kings"],
>> >>> "author_s":"Brandon Sanderson",
>> >>> "cat_s": null,
>> >>> "pubyear_i":2010,
>> >>> "ISBN_s":"978-0-7653-2635-5"
>> >>> }
>> >>> ]'
>> >>>
>> {"responseHeader":{"status":500,"QTime":29},"error":{"trace":"java.lang.NullPointerException\n\tat
>> >>>
>> org.apache.lucene.classification.document.SimpleNaiveBayesDocumentClassifier.getTokenArray(SimpleNaiveBayesDocumentClassifier.java:202)\n\tat
>> >>>
>> org.apache.lucene.classification.document.SimpleNaiveBayesDocumentClassifier.analyzeSeedDocument(SimpleNaiveBayesDocumentClassifier.java:162)\n\tat
>> >>>
>> org.apache.lucene.classification.document.SimpleNaiveBayesDocumentClassifier.assignNormClasses(SimpleNaiveBayesDocumentClassifier.java:121)\n\tat
>> >>>
>> org.apache.lucene.classification.document.SimpleNaiveBayesDocumentClassifier.assignClass(SimpleNaiveBayesDocumentClassifier.java:81)\n\tat
>> >>>
>> org.apache.solr.update.processor.ClassificationUpdateProcessor.processAdd(ClassificationUpdateProcessor.java:94)\n\tat
>> >>>
>> org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.handleAdds(JsonLoader.java:474)\n\tat
>> >>>
>> org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.processUpdate(JsonLoader.java:138)\n\tat
>> >>>
>> org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.load(JsonLoader.java:114)\n\tat
>> >>>
>> org.apache.solr.handler.loader.JsonLoader.load(JsonLoader.java:77)\n\tat
>> >>>
>> org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)\n\tat
>> >>>
>> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:69)\n\tat
>> >>>
>> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)\n\tat
>> >>> org.apache.solr.core.SolrCore.execute(SolrCore.java:2036)\n\tat
>> >>>
>> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:657)\n\tat
>> >>>
>> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:464)\n\tat
>> >>>
>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)\n\tat
>> >>>
>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)\n\tat
>> >>>
>> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)\n\tat
>> >>>
>> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)\n\tat
>> >>>
>> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>> >>>
>> 

Re: IndexWriter has closed

2019-04-01 Thread Zheng Lin Edwin Yeo
Have you checked whether there is enough space on your disk to index all the
documents?

Regards,
Edwin

On Fri, 29 Mar 2019 at 15:16, Aroop Ganguly  wrote:

> Trying again .. Any idea why this might happen?
>
>
> > On Mar 27, 2019, at 10:43 PM, Aroop Ganguly 
> wrote:
> >
> > Hi Everyone
> >
> > My indexing jobs are failing with “this IndexWriter has closed” errors.
> > This is a Solr 7.5 setup, with an NRT index.
> >
> > In deeper logs I see some of these exceptions.
> > Any idea what could have caused this?
> >
> > o.a.s.s.HttpSolrCall null:org.apache.solr.common.SolrException:
> java.io.IOException: Input/output error
> >   at
> org.apache.solr.update.TransactionLog.writeCommit(TransactionLog.java:477)
> >   at org.apache.solr.update.UpdateLog.postCommit(UpdateLog.java:833)
> >   at org.apache.solr.update.UpdateLog.preCommit(UpdateLog.java:817)
> >   at
> org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:669)
> >   at
> org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:93)
> >   at
> org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:68)
> >   at
> org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalCommit(DistributedUpdateProcessor.java:1959)
> >   at
> org.apache.solr.update.processor.DistributedUpdateProcessor.processCommit(DistributedUpdateProcessor.java:1935)
> >   at
> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:160)
> >   at
> org.apache.solr.handler.RequestHandlerUtils.handleCommit(RequestHandlerUtils.java:69)
> >   at
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:62)
> >   at
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
> >   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2541)
> >   at
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709)
> >   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515)
> >   at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)
> >   at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)
> >   at
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)
> >   at
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
> >   at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> >   at
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> >   at
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> >   at
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
> >   at
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
> >   at
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
> >   at
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)
> >   at
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
> >   at
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
> >   at
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
> >   at
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
> >   at
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)
> >   at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
> >   at
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)
> >   at
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
> >   at
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> >   at
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
> >   at
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> >   at org.eclipse.jetty.server.Server.handle(Server.java:531)
> >   at
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)
> >   at
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)
> >   at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)
> >   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)
> >   at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)
> >   at
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
> >   at
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)
> >   

Re: problems with indexing documents

2019-04-01 Thread Zheng Lin Edwin Yeo
Hi Bill,

Previously, did you index the date in the same format as you are using now,
or in the Solr format of "YYYY-MM-DDTHH:MM:SSZ"?

Regards,
Edwin


On Tue, 2 Apr 2019 at 00:32, Bill Tantzen  wrote:

> In a legacy application using Solr 4.1 and solrj, I have always been
> able to add documents with TrieDateField types using java.util.Date
> objects, for instance,
>
> doc.addField ( "date", new java.util.Date() );
>
> Having recently upgraded to Solr 7.7 and updated my schema to
> leverage DatePointField as my type, that code no longer works; it
> throws an exception with an error like:
>
> Invalid Date String: 'Sun Jul 31 19:00:00 CDT 2016'
>
> I understand that this String is not what solr expects, but in lieu of
> formatting the correct String, is there no longer a way to pass in a
> simple Date object?  Was there some kind of implicit conversion taking
> place earlier that is no longer happening?
>
> In fact, in some of the example code that comes with the Solr
> distribution (SolrExampleTests.java), document timestamp fields are
> added using the same addField call I am attempting to use, so I am
> very confused.
>
> Thanks for any advice!
>
> Regards,
> Bill
>


Re: Solr 7.5 - Indexing Failing due to "IndexWriter is Closed"

2019-04-01 Thread Shawn Heisey

On 4/1/2019 5:40 PM, Aroop Ganguly wrote:

Thanks Shawn, for the initial response.
Digging into it a bit, I was wondering if we’d care to read the innermost stack.

From the innermost stack it seems to be telling us something about what
triggered it?
Of course, the system could have been overloaded as well, but is the exception
telling us something, or is it of no use to consider this stack?


The stacktrace on OOME is rarely useful.  The memory allocation where 
the error is thrown probably has absolutely no connection to the part of 
the program where major amounts of memory are being used.  It could be 
ANY memory allocation that actually causes the error.


Thanks,
Shawn


Re: Solr 7.5 - Indexing Failing due to "IndexWriter is Closed"

2019-04-01 Thread Aroop Ganguly
Thanks Shawn, for the initial response.
Digging into it a bit, I was wondering if we’d care to read the innermost stack.

From the innermost stack it seems to be telling us something about what
triggered it?
Of course, the system could have been overloaded as well, but is the exception
telling us something, or is it of no use to consider this stack?


Caused by: java.lang.OutOfMemoryError: Java heap space
at 
org.apache.lucene.index.FieldInfos$Builder.getOrAdd(FieldInfos.java:413)
at 
org.apache.lucene.index.DefaultIndexingChain.getOrAddField(DefaultIndexingChain.java:650)
at 
org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:428)
at 
org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:394)
at 
org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:251)
at 
org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:494)
at 
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1609)
at 
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1601)
at 
org.apache.solr.update.DirectUpdateHandler2.updateDocOrDocValues(DirectUpdateHandler2.java:964)
at 
org.apache.solr.update.DirectUpdateHandler2.doNormalUpdate(DirectUpdateHandler2.java:341)
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:288)
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:235)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:67)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:970)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1186)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:653)
at 
org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:103)
at 
org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:98)
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:188)
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:144)
at 
org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:311)
at 
org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:256)
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:130)
at 
org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:276)
at 
org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:256)
at 
org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:178)
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:195)
at 
org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:109)
at 
org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:55)
at 
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)
at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)



> On Apr 1, 2019, at 4:06 PM, Shawn Heisey  wrote:
> 
> On 4/1/2019 4:44 PM, Aroop Ganguly wrote:
>> I am facing this issue again. The stack mentions a heap space issue.
>> Are the document sizes too big?
>> Not sure what I should be doing here; on the Solr admin UI I do not see the
>> JVM heap being anywhere close to full.
>> Any advice on this is greatly welcome.
> 
> 
> 
>> Caused by: java.lang.OutOfMemoryError: Java heap space
> 
> Java ran out of heap space.  This means that for what that process is being 
> asked to do, its heap is too small.  Solr needs more memory than it is 
> allowed to use.
> 
> There are exactly two things you can do.
> 
> 1) Increase the heap size.
> 2) Change something so that less heap is required.
> 
> The second option is not always possible.
> 
> https://wiki.apache.org/solr/SolrPerformanceProblems#Java_Heap
> 
> Program operation is completely unpredictable when OOME strikes.  This is why 
> Solr is configured to self-destruct on OutOfMemoryError when it is running on 
> a non-Windows operating system.  We'd like the same thing to happen for 
> Windows, but don't have that capability yet.
> 
> Thanks,
> Shawn



Re: Solr 7.5 - Indexing Failing due to "IndexWriter is Closed"

2019-04-01 Thread Erick Erickson
Please follow the instructions here: 
http://lucene.apache.org/solr/community.html#mailing-lists-irc
You must use the _exact_ same e-mail address as you used to subscribe.

If the initial try doesn't work and following the suggestions at the "problems" 
link doesn't work for you, let us know. But note you need to show us the 
_entire_ return header to allow anyone to diagnose the problem.

Best,
Erick

> On Apr 1, 2019, at 3:55 PM, Ashwin Tandel  wrote:
> 
> Please unsubscribe me.
> 
> Thanks in Advance,
> Ashwin
> 
> On Mon, Apr 1, 2019 at 5:54 PM Aroop Ganguly  wrote:
>> 
>> Hi Group
>> 
>> I am facing this issue again. The stack mentions a heap space issue.
>> 
>> Are the document sizes too big?
>> 
>> Not sure what I should be doing here; on the Solr admin UI I do not see the
>> JVM heap being anywhere close to full.
>> Any advice on this is greatly welcome.
>> 
>> 
>> Full Stack trace:
>> 
>> 2019-04-01 22:13:54.833 ERROR (qtp484199463-773)
>> o.a.s.s.HttpSolrCall null:org.apache.solr.common.SolrException: Server error 
>> writing document id C9C280C4-B3B7-4BEE-9EA5-C4925F5092D9 to the index
>>at 
>> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:240)
>>at 
>> org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:67)
>>at 
>> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>>at 
>> org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:970)
>>at 
>> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1186)
>>at 
>> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:653)
>>at 
>> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:103)
>>at 
>> org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:98)
>>at 
>> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:188)
>>at 
>> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:144)
>>at 
>> org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:311)
>>at 
>> org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:256)
>>at 
>> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:130)
>>at 
>> org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:276)
>>at 
>> org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:256)
>>at 
>> org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:178)
>>at 
>> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:195)
>>at 
>> org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:109)
>>at 
>> org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:55)
>>at 
>> org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)
>>at 
>> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
>>at 
>> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
>>at org.apache.solr.core.SolrCore.execute(SolrCore.java:2541)
>>at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709)
>>at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515)
>>at 
>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)
>>at 
>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)
>>at 
>> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)
>>at 
>> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
>>at 
>> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
>>at 
>> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>>at 
>> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>>at 
>> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
>>at 
>> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
>>at 
>> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
>>at 
>> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)
>>at 
>> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
>>at 
>> 

Re: Solr 7.5 - Indexing Failing due to "IndexWriter is Closed"

2019-04-01 Thread Shawn Heisey

On 4/1/2019 4:44 PM, Aroop Ganguly wrote:

I am facing this issue again. The stack mentions a heap space issue.

Are the document sizes too big?

Not sure what I should be doing here; on the Solr admin UI I do not see the
JVM heap being anywhere close to full.
Any advice on this is greatly welcome.





Caused by: java.lang.OutOfMemoryError: Java heap space


Java ran out of heap space.  This means that for what that process is 
being asked to do, its heap is too small.  Solr needs more memory than 
it is allowed to use.


There are exactly two things you can do.

1) Increase the heap size.
2) Change something so that less heap is required.

The second option is not always possible.

https://wiki.apache.org/solr/SolrPerformanceProblems#Java_Heap

Program operation is completely unpredictable when OOME strikes.  This 
is why Solr is configured to self-destruct on OutOfMemoryError when it 
is running on a non-Windows operating system.  We'd like the same thing 
to happen for Windows, but don't have that capability yet.


Thanks,
Shawn
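
For option 1, a minimal sketch on a standard install (the 4g figure is only an
example; size it to your hardware and leave room for the OS disk cache):

# solr/bin/solr.in.sh (solr.in.cmd on Windows), then restart Solr
SOLR_HEAP="4g"

# equivalent one-off form when starting by hand
bin/solr start -m 4g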


Solr 7.5 - Indexing Failing due to "IndexWriter is Closed"

2019-04-01 Thread Aroop Ganguly



Hi Group 

I am facing this issue again. The stack mentions a heap space issue.

Are the document sizes too big?

Not sure what I should be doing here; on the Solr admin UI I do not see the
JVM heap being anywhere close to full.
Any advice on this is greatly welcome.


Full Stack trace:

2019-04-01 22:13:54.833 ERROR (qtp484199463-773) 
 o.a.s.s.HttpSolrCall null:org.apache.solr.common.SolrException: Server error 
writing document id C9C280C4-B3B7-4BEE-9EA5-C4925F5092D9 to the index
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:240)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:67)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:970)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1186)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:653)
at 
org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:103)
at 
org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:98)
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:188)
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:144)
at 
org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:311)
at 
org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:256)
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:130)
at 
org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:276)
at 
org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:256)
at 
org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:178)
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:195)
at 
org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:109)
at 
org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:55)
at 
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)
at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2541)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at 

Re: Solr 7.5 - Indexing Failing due to "IndexWriter is Closed"

2019-04-01 Thread Ashwin Tandel
Please unsubscribe me.

Thanks in Advance,
Ashwin

On Mon, Apr 1, 2019 at 5:54 PM Aroop Ganguly  wrote:
>
> Hi Group
>
> I am facing this issue again. The stack mentions a heap space issue.
>
> Are the document sizes too big?
>
> Not sure what I should be doing here; on the Solr admin UI I do not see the
> JVM heap being anywhere close to full.
> Any advice on this is greatly welcome.
>
>
> Full Stack trace:
>
> 2019-04-01 22:13:54.833 ERROR (qtp484199463-773)
>  o.a.s.s.HttpSolrCall null:org.apache.solr.common.SolrException: Server error 
> writing document id C9C280C4-B3B7-4BEE-9EA5-C4925F5092D9 to the index
> at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:240)
> at 
> org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:67)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:970)
> at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1186)
> at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:653)
> at 
> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:103)
> at 
> org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:98)
> at 
> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:188)
> at 
> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:144)
> at 
> org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:311)
> at 
> org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:256)
> at 
> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:130)
> at 
> org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:276)
> at 
> org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:256)
> at 
> org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:178)
> at 
> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:195)
> at 
> org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:109)
> at 
> org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:55)
> at 
> org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)
> at 
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2541)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
> at 
> 

Solr 7.5 - Indexing Failing due to "IndexWriter is Closed"

2019-04-01 Thread Aroop Ganguly
Hi Group 

I am facing this issue again. The stack mentions a heap space issue.

Are the document sizes too big?

Not sure what I should be doing here; on the Solr admin UI I do not see the
JVM heap being anywhere close to full.
Any advice on this is greatly welcome.


Full Stack trace:

2019-04-01 22:13:54.833 ERROR (qtp484199463-773) 
 o.a.s.s.HttpSolrCall null:org.apache.solr.common.SolrException: Server error 
writing document id C9C280C4-B3B7-4BEE-9EA5-C4925F5092D9 to the index
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:240)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:67)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:970)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1186)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:653)
at 
org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:103)
at 
org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:98)
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:188)
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:144)
at 
org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:311)
at 
org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:256)
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:130)
at 
org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:276)
at 
org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:256)
at 
org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:178)
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:195)
at 
org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:109)
at 
org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:55)
at 
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)
at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2541)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at 

ERROR: Error CREATEing SolrCore 'nutch': Unable to create core [nutch] Caused by: invalid boolean value:

2019-04-01 Thread vishal.thumm...@its.ny.gov
Hi,

I am getting the following error while creating a core in Solr 8.0.0

"ERROR: Error CREATEing SolrCore 'nutch': Unable to create core [nutch]
Caused by: invalid boolean value:"

Command I am using to create the core:
#/opt/solr/bin/solr create -c nutch -d /opt/solr/server/solr/nutch/conf/
-force

I am pretty new to Solr. I am trying to integrate Nutch 1.5 with Solr
8.0.0, and while creating the core the error above comes up.

I get the following output when running 
#/opt/nutch/bin/nutch solrindex http://127.0.0.1:8983/solr/nutch/
/opt/nutch/crawl/crawldb /opt/nutch/crawl/linkdb /opt/nutch/crawl/segments/*

The input path at linkdb is not a segment... skipping
Segment dir is complete: /opt/nutch/crawl/segments/20190329112840.
Indexer: starting at 2019-04-01 16:34:31
Indexer: deleting gone documents: false
Indexer: URL filtering: false
Indexer: URL normalizing: false
No exchange was configured. The documents will be routed to all index
writers.
No IndexWriters activated - check your configuration

Indexer: number of documents indexed, deleted, or skipped:
Indexer: finished at 2019-04-01 16:34:33, elapsed: 00:00:01

I believe this is coming up because no cores have been created in Solr.

Any help on this issue is highly appreciated.
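
For what it's worth, in recent Nutch 1.x releases the "No IndexWriters
activated" message usually means the Solr writer is not enabled. A hedged
sketch of the two places to check (values are examples from memory; verify
them against the conf files shipped with your Nutch version):

<!-- conf/nutch-site.xml: plugin.includes must contain indexer-solr -->
<property>
  <name>plugin.includes</name>
  <value>protocol-http|urlfilter-regex|parse-(html|tika)|index-(basic|anchor)|indexer-solr</value>
</property>

<!-- conf/index-writers.xml: point the Solr writer at the core -->
<writer id="indexer_solr_1" class="org.apache.nutch.indexwriter.solr.SolrIndexWriter">
  <parameters>
    <param name="type" value="http"/>
    <param name="url" value="http://127.0.0.1:8983/solr/nutch"/>
    <param name="commitSize" value="1000"/>
  </parameters>
</writer>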






RE: IRA or IRA the Person

2019-04-01 Thread Moyer, Brett
Wow, thank you Trey, great information! We are a Fusion client; it works well
for us, and we are leveraging the Signals Boosting. We were thinking omitNorms
might be of help here, i.e. turning norms off. The PERSON document always ranks
#1 because it is a tiny document with very short fields. I'll take a closer
look at what you sent. Thank you!
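
For context, a minimal sketch of disabling norms on a field in the schema,
which removes the length-normalization boost that lets very short documents
score artificially high (field and type names here are placeholders):

<!-- managed-schema: omitNorms="true" turns off length normalization -->
<field name="page_title" type="text_general" indexed="true" stored="true"
       omitNorms="true"/>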

Brett Moyer
Manager, Sr. Technical Lead | TFS Technology
  Public Production Support
  Digital Search & Discovery

8625 Andrew Carnegie Blvd | 4th floor
Charlotte, NC 28263
Tel: 704.988.4508
Fax: 704.988.4907
bmo...@tiaa.org 


-Original Message-
From: Trey Grainger [mailto:solrt...@gmail.com] 
Sent: Monday, April 01, 2019 1:15 PM
To: solr-user@lucene.apache.org
Subject: Re: IRA or IRA the Person


Hi Brett,

There are a couple of angles you can take here. If you are only concerned
about this specific term or a small number of other known terms like "IRA"
and want to spot fix it, you can use something like the query elevation
component in Solr (
https://lucene.apache.org/solr/guide/7_7/the-query-elevation-component.html)
to explicitly include or exclude documents.

Otherwise, if you are looking for a more data-driven approach to solving
this, you can leverage the aggregate click-streams for your users across
all of the searches on your platform to boost documents higher that are
more popular for any given search. We do this in our commercial product
(Lucidworks Fusion) through our Signals Boosting feature, but you could
implement something similar yourself with some work, as the general
architecture is fairly well-documented here:
https://doc.lucidworks.com/fusion-ai/4.2/user-guide/signals/index.html

If you do not have long-lived content OR your do not have sufficient
signals history, you could alternatively use something like Solr's Semantic
Knowledge Graph to automatically find term vectors that are the most
related to your terms within your content. In that case, if the "individual
retirement account" meaning is more common across your documents, you'd
probably end up with terms more related to that which could be used to do
data-driven boosts on your query to that concept (instead of the person, in
this case).

I gave a presentation at Activate ("the Search & AI Conference") last year
on some of the more data-driven approaches to parsing and understanding the
meaning of terms within queries, that included things like disambiguation
(similar to what you're doing here) and some additional approaches
leveraging a combination of query log mining, the semantic knowledge graph,
and the Solr Text Tagger. If you start handling these use cases in a more
systematic and data-driven way, you might want to check out some of the
techniques I mentioned there: Video:
https://www.youtube.com/watch?v=4fMZnunTRF8 | Slides:
https://www.slideshare.net/treygrainger/how-to-build-a-semantic-search-system


All the best,

Trey Grainger
Chief Algorithms Officer @ Lucidworks


On Mon, Apr 1, 2019 at 11:45 AM Moyer, Brett  wrote:

> Hello,
>
> Looking for ideas on how to determine intent and drive results to
> a person result or an article result. We are a financial institution: we
> have IRAs (Individual Retirement Accounts), and we have a page that talks
> about an advisor, IRA Black.
>
> Our users are in a bad habit of only using single terms for
> search. A very common search term is "ira". The PERSON page ranks higher
> than the article on IRAs. With essentially no information from the user,
> what are some ways we can detect this and rank differently? Thanks!
>
> Brett Moyer


Re: Why can't we get multiple payloadFields from suggester?

2019-04-01 Thread akhilendrajha
Did you find a way to get multiple payloadFields?





Re: IRA or IRA the Person

2019-04-01 Thread Trey Grainger
Hi Brett,

There are a couple of angles you can take here. If you are only concerned
about this specific term or a small number of other known terms like "IRA"
and want to spot fix it, you can use something like the query elevation
component in Solr (
https://lucene.apache.org/solr/guide/7_7/the-query-elevation-component.html)
to explicitly include or exclude documents.
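
A minimal sketch of that spot fix, assuming an elevate.xml next to
solrconfig.xml and placeholder document ids:

<!-- elevate.xml: pin the IRA article for the query "ira" -->
<elevate>
  <query text="ira">
    <doc id="article-ira-overview"/>               <!-- hypothetical article id -->
    <doc id="person-ira-black" exclude="true"/>    <!-- optionally hide the person page -->
  </query>
</elevate>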

Otherwise, if you are looking for a more data-driven approach to solving
this, you can leverage the aggregate click-streams for your users across
all of the searches on your platform to boost documents higher that are
more popular for any given search. We do this in our commercial product
(Lucidworks Fusion) through our Signals Boosting feature, but you could
implement something similar yourself with some work, as the general
architecture is fairly well-documented here:
https://doc.lucidworks.com/fusion-ai/4.2/user-guide/signals/index.html

If you do not have long-lived content OR your do not have sufficient
signals history, you could alternatively use something like Solr's Semantic
Knowledge Graph to automatically find term vectors that are the most
related to your terms within your content. In that case, if the "individual
retirement account" meaning is more common across your documents, you'd
probably end up with terms more related to that which could be used to do
data-driven boosts on your query to that concept (instead of the person, in
this case).

I gave a presentation at Activate ("the Search & AI Conference") last year
on some of the more data-driven approaches to parsing and understanding the
meaning of terms within queries, that included things like disambiguation
(similar to what you're doing here) and some additional approaches
leveraging a combination of query log mining, the semantic knowledge graph,
and the Solr Text Tagger. If you start handling these use cases in a more
systematic and data-driven way, you might want to check out some of the
techniques I mentioned there: Video:
https://www.youtube.com/watch?v=4fMZnunTRF8 | Slides:
https://www.slideshare.net/treygrainger/how-to-build-a-semantic-search-system


All the best,

Trey Grainger
Chief Algorithms Officer @ Lucidworks


On Mon, Apr 1, 2019 at 11:45 AM Moyer, Brett  wrote:

> Hello,
>
> Looking for ideas on how to determine intent and drive results to
> a person result or an article result. We are a financial institution: we
> have IRAs (Individual Retirement Accounts), and we have a page that talks
> about an advisor, IRA Black.
>
> Our users are in a bad habit of only using single terms for
> search. A very common search term is "ira". The PERSON page ranks higher
> than the article on IRAs. With essentially no information from the user,
> what are some ways we can detect this and rank differently? Thanks!
>
> Brett Moyer


problems with indexing documents

2019-04-01 Thread Bill Tantzen
In a legacy application using Solr 4.1 and solrj, I have always been
able to add documents with TrieDateField types using java.util.Date
objects, for instance,

doc.addField ( "date", new java.util.Date() );

Having recently upgraded to Solr 7.7 and updated my schema to
leverage DatePointField as my type, that code no longer works; it
throws an exception with an error like:

Invalid Date String: 'Sun Jul 31 19:00:00 CDT 2016'

I understand that this String is not what solr expects, but in lieu of
formatting the correct String, is there no longer a way to pass in a
simple Date object?  Was there some kind of implicit conversion taking
place earlier that is no longer happening?

In fact, in some of the example code that comes with the Solr
distribution (SolrExampleTests.java), document timestamp fields are
added using the same addField call I am attempting to use, so I am
very confused.

Thanks for any advice!

Regards,
Bill
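
One way to sidestep the parsing error is to format the Date explicitly before
adding it. A minimal sketch, assuming Java 8+ and SolrJ (ISO_INSTANT produces
the ISO-8601 UTC form Solr's date fields expect):

import java.time.format.DateTimeFormatter;
import java.util.Date;
import org.apache.solr.common.SolrInputDocument;

public class DateFieldExample {
    public static void main(String[] args) {
        SolrInputDocument doc = new SolrInputDocument();
        // e.g. "2016-08-01T00:00:00.123Z" rather than Date.toString()'s
        // "Sun Jul 31 19:00:00 CDT 2016", which DatePointField rejects
        String iso = DateTimeFormatter.ISO_INSTANT.format(new Date().toInstant());
        doc.addField("date", iso);
    }
}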


Re: Solr 8.0.0 + IndexUpgrader

2019-04-01 Thread Erick Erickson
Minor nit. For IndexUpgraderTool and optimize to be identical, you have to 
specify maxSegments=1 on optimize. 

As of LUCENE-7976, optimize respects the max segment size and does _not_
necessarily rewrite segments that have no deleted documents, especially if
they’re near 5G, which is the default max segment size.

Which nit doesn’t matter in this case of course…

Best,
Erick
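
A sketch of the optimize request in question, assuming a core named "mycore"
on a local node:

# rewrites every segment with the current codec -- equivalent to
# IndexUpgrader, but the index stays usable while it runs
curl 'http://localhost:8983/solr/mycore/update?optimize=true&maxSegments=1'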



> On Apr 1, 2019, at 9:03 AM, Shawn Heisey  wrote:
> 
> On 4/1/2019 9:47 AM, Herbert Hackelsberger wrote:
>> So, am I correct:
>> - When using the IndexUpgrader, it will make the index usable in the current
>> version, without all new features.
>> - Using the IndexUpgrader again in the future, on the next major version,
>> will again result in this error situation.
> 
> That is correct.
> 
> If the "new features" are not related to the index format, then you will have 
> full access to them even with an older index.
> 
> The Lucene IndexUpgrader function does a forceMerge on the index, down to one 
> segment.  Solr calls that operation "optimize".
> 
> There's really no need to use IndexUpgrader.  Solr will directly use an index 
> from one major version back with no trouble.  If you ask Solr to optimize the 
> index down to one segment, that is an identical operation to IndexUpgrader, 
> with the difference that you can still access the index while it is happening.
> 
> Thanks,
> Shawn



AW: AW: Solr 8.0.0 + IndexUpgrader

2019-04-01 Thread Herbert Hackelsberger
Many Thanks!

-----Original Message-----
From: Shawn Heisey
Sent: Monday, April 1, 2019 18:03
To: solr-user@lucene.apache.org
Subject: Re: AW: Solr 8.0.0 + IndexUpgrader

On 4/1/2019 9:47 AM, Herbert Hackelsberger wrote:
> So, am I correct:
> - When using the IndexUpgrader, it will make the index usable in the current
> version, without all new features.
> - Using the IndexUpgrader again in the future, on the next major version,
> will again result in this error situation.

That is correct.

If the "new features" are not related to the index format, then you will have 
full access to them even with an older index.

The Lucene IndexUpgrader function does a forceMerge on the index, down to one 
segment.  Solr calls that operation "optimize".

There's really no need to use IndexUpgrader.  Solr will directly use an index 
from one major version back with no trouble.  If you ask Solr to optimize the 
index down to one segment, that is an identical operation to IndexUpgrader, 
with the difference that you can still access the index while it is happening.

Thanks,
Shawn


Re: AW: Solr 8.0.0 + IndexUpgrader

2019-04-01 Thread Shawn Heisey

On 4/1/2019 9:47 AM, Herbert Hackelsberger wrote:

So, am I correct:
- When using the IndexUpgrader, it will make the index usable in the current
version, without all new features.
- Using the IndexUpgrader again in the future, on the next major version, will
again result in this error situation.


That is correct.

If the "new features" are not related to the index format, then you will 
have full access to them even with an older index.


The Lucene IndexUpgrader function does a forceMerge on the index, down 
to one segment.  Solr calls that operation "optimize".


There's really no need to use IndexUpgrader.  Solr will directly use an 
index from one major version back with no trouble.  If you ask Solr to 
optimize the index down to one segment, that is an identical operation 
to IndexUpgrader, with the difference that you can still access the 
index while it is happening.


Thanks,
Shawn


Index Upgrader Documentation - Problem under Microsoft Windows

2019-04-01 Thread Herbert Hackelsberger
Hi all,

I just wanted to inform you that when I followed the ref guide at
https://lucene.apache.org/solr/guide/7_7/indexupgrader-tool.html
I had problems running the command on my Windows system.

Instead of
java -cp lucene-core-7.7.0.jar:lucene-backward-codecs-7.7.0.jar 
org.apache.lucene.index.IndexUpgrader [-delete-prior-commits] [-verbose] 
/path/to/index

I had to write
java -cp lucene-core-7.7.0.jar;lucene-backward-codecs-7.7.0.jar 
org.apache.lucene.index.IndexUpgrader [-delete-prior-commits] [-verbose] 
/path/to/index

(use a ; instead of a : to separate classpath entries)

I found this information after researching the official Java documentation:
https://docs.oracle.com/javase/8/docs/technotes/tools/windows/classpath.html#BEHJBHCD

I don't know if that's specific to Windows systems, but others might be happy 
if a note about this difference on MS Windows were added to the ref guide.


AW: Solr 8.0.0 + IndexUpgrader

2019-04-01 Thread Herbert Hackelsberger
Thanks for the fast response!

I used the IndexUpgrader to upgrade from 6.x to 7.7.1, and afterwards from 
7.7.1 to 8.0.0:

java -cp lucene-core-7.7.1.jar;lucene-backward-codecs-7.7.1.jar 
org.apache.lucene.index.IndexUpgrader C:\solr\server\solr\syneris\data\index\
java -cp lucene-core-8.0.0.jar;lucene-backward-codecs-8.0.0.jar 
org.apache.lucene.index.IndexUpgrader C:\solr\server\solr\syneris\data\index\

So, am I correct:
- When using the IndexUpgrader, it will make the index usable in the current 
version, though without all new features.
- Using the IndexUpgrader again on the next major version will result in this 
error situation again.

Best Regards


-----Original Message-----
From: Erick Erickson  
Sent: Monday, April 1, 2019 17:33
To: solr-user@lucene.apache.org
Subject: Re: Solr 8.0.0 + IndexUpgrader

As of Lucene 6, a marker is written into each segment, and when segments are 
merged the lowest marker is preserved. If any marker from Lucene version X-2 is 
found, you will see this error.

This has been a source of considerable confusion. The guarantee of “one major 
revision backwards compatibility” has always actually meant that in a case like 
yours, say from Lucene 5x -> 7x, you wouldn’t get a failure, but you _would_ 
get subtle errors.

From Robert Muir:
“...Because it is a lossy index and does not retain all of the user's data, its 
not possible to safely migrate some things automagically…"

IndexUpgraderTool does not actually change this restriction. All it does is 
ensure that all segments were written by the current version. It cannot 
recreate data that’s not there in the first place.

Your only choice at this point is to fully re-index.

Best,
Erick

> On Apr 1, 2019, at 8:19 AM, Herbert Hackelsberger  wrote:
> 
> Hi,
> 
> I tried to upgrade my test index from Solr 7.7.1 to Solr 8.0.0.
> The file segments_4h7 already contains the string Lucene70.
> I upgraded before with this command:
> 
> java -cp lucene-core-7.7.1.jar;lucene-backward-codecs-7.7.1.jar 
> org.apache.lucene.index.IndexUpgrader 
> C:\solr\server\solr\syneris\data\index\
> 
> Everything went successfully; when I start Solr via solr.cmd start, no errors 
> are logged.
> Now, to upgrade to Solr 8, I tried to upgrade the index with 
> the following command:
> 
> java -cp lucene-core-8.0.0.jar;lucene-backward-codecs-8.0.0.jar 
> org.apache.lucene.index.IndexUpgrader 
> C:\solr\server\solr\syneris\data\index\
> 
> But I always get an exception:
> 
> Exception in thread "main" 
> org.apache.lucene.index.IndexFormatTooOldException: Format version is not 
> supported (resource 
> BufferedChecksumIndexInput(MMapIndexInput(path="C:\solr\server\solr\syneris\data\index\segments_4h7"))):
>  This index was initially created with Lucene 6.x while the current version 
> is 8.0.0 and Lucene only supports reading the current and previous major 
> versions.. This version of Lucene only supports indexes created with release 
> 7.0 and later.
>at 
> org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:318)
>at 
> org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:289)
>at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:432)
>at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:429)
>at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:680)
>at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:632)
>at 
> org.apache.lucene.index.SegmentInfos.readLatestCommit(SegmentInfos.java:434)
>at 
> org.apache.lucene.index.DirectoryReader.listCommits(DirectoryReader.java:260)
>at 
> org.apache.lucene.index.IndexUpgrader.upgrade(IndexUpgrader.java:158)
>at 
> org.apache.lucene.index.IndexUpgrader.main(IndexUpgrader.java:78)
> 
> Any ideas, without performing a full reindex?
> 
> 
> Kind regards
> 
> Herbert Hackelsberger
> Customer Support / Quality Assurance
> __
> TECHNODAT Technische Datenverarbeitung GmbH Jakob-Haringer-Straße 6
> 5020  Salzburg / Austria
> 
> T  | +43 (0)662 2282-141
> F  | +43 (0)662 2282-9
> E  | h...@technodat.at
> W | www.technodat.at
> 
> Legal form: GmbH; registered office: Salzburg
> Commercial register: Landesgericht Salzburg, FN 64072z; DVR: 0481831; 
> VAT no. ATU33826508
> 



IRA or IRA the Person

2019-04-01 Thread Moyer, Brett
Hello,

Looking for ideas on how to determine intent and drive results to either a 
person result or an article result. We are a financial institution: we have 
IRAs (Individual Retirement Accounts), and we also have a page that talks about 
an Advisor, IRA Black.

Our users are in the bad habit of using only single terms for search. A very 
common search term is "ira". The PERSON page ranks higher than the article on 
IRAs. With essentially no information from the user, what are some ways we can 
detect intent and rank differently? Thanks!

Brett Moyer
*
This e-mail may contain confidential or privileged information.
If you are not the intended recipient, please notify the sender immediately and 
then delete it.

TIAA
*


Re: Solr 8.0.0 + IndexUpgrader

2019-04-01 Thread Erick Erickson
As of Lucene 6, a marker is written into each segment, and when segments are 
merged the lowest marker is preserved. If any marker from Lucene version X-2 is 
found, you will see this error.

This has been a source of considerable confusion. The guarantee of “one major 
revision backwards compatibility” has always actually meant that in a case like 
yours, say from Lucene 5x -> 7x, you wouldn’t get a failure, but you _would_ 
get subtle errors.

From Robert Muir:
“...Because it is a lossy index and does not retain all of the user's data, its 
not possible to safely migrate some things automagically…"

IndexUpgraderTool does not actually change this restriction. All it does is 
ensure that all segments were written by the current version. It cannot 
recreate data that’s not there in the first place.

Your only choice at this point is to fully re-index.
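
If you want to verify which version stamp an index actually carries, you can 
read it straight from the latest commit (a minimal sketch, assuming the 
matching lucene-core jar is on the classpath; the class name is just for 
illustration):

import java.nio.file.Paths;
import org.apache.lucene.index.SegmentInfos;
import org.apache.lucene.store.FSDirectory;

public class PrintIndexVersion {
  public static void main(String[] args) throws Exception {
    // args[0] is the index directory, e.g. C:\solr\server\solr\syneris\data\index
    try (FSDirectory dir = FSDirectory.open(Paths.get(args[0]))) {
      SegmentInfos infos = SegmentInfos.readLatestCommit(dir);
      // Major version of Lucene that first created this index (the marker above).
      System.out.println("created with major version: " + infos.getIndexCreatedVersionMajor());
      // Oldest Lucene version among the current segments (may be null if unknown).
      System.out.println("oldest segment version: " + infos.getMinSegmentLuceneVersion());
    }
  }
}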

Best,
Erick

> On Apr 1, 2019, at 8:19 AM, Herbert Hackelsberger  wrote:
> 
> Hi,
> 
> I tried to upgrade my test index from Solr 7.7.1 to Solr 8.0.0.
> The file segments_4h7 already contains the string Lucene70.
> I upgraded before with this command:
> 
> java -cp lucene-core-7.7.1.jar;lucene-backward-codecs-7.7.1.jar 
> org.apache.lucene.index.IndexUpgrader C:\solr\server\solr\syneris\data\index\
> 
> Everything went successfully; when I start Solr via solr.cmd start, no errors 
> are logged.
> Now, to upgrade to Solr 8, I tried to upgrade the index with 
> the following command:
> 
> java -cp lucene-core-8.0.0.jar;lucene-backward-codecs-8.0.0.jar 
> org.apache.lucene.index.IndexUpgrader C:\solr\server\solr\syneris\data\index\
> 
> But I always get an exception:
> 
> Exception in thread "main" 
> org.apache.lucene.index.IndexFormatTooOldException: Format version is not 
> supported (resource 
> BufferedChecksumIndexInput(MMapIndexInput(path="C:\solr\server\solr\syneris\data\index\segments_4h7"))):
>  This index was initially created with Lucene 6.x while the current version 
> is 8.0.0 and Lucene only supports reading the current and previous major 
> versions.. This version of Lucene only supports indexes created with release 
> 7.0 and later.
>at 
> org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:318)
>at 
> org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:289)
>at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:432)
>at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:429)
>at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:680)
>at 
> org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:632)
>at 
> org.apache.lucene.index.SegmentInfos.readLatestCommit(SegmentInfos.java:434)
>at 
> org.apache.lucene.index.DirectoryReader.listCommits(DirectoryReader.java:260)
>at 
> org.apache.lucene.index.IndexUpgrader.upgrade(IndexUpgrader.java:158)
>at org.apache.lucene.index.IndexUpgrader.main(IndexUpgrader.java:78)
> 
> Any ideas, without performing a full reindex?
> 
> 
> Kind regards
> 
> Herbert Hackelsberger
> Customer Support / Quality Assurance
> __
> TECHNODAT Technische Datenverarbeitung GmbH
> Jakob-Haringer-Straße 6
> 5020  Salzburg / Austria
> 
> T  | +43 (0)662 2282-141
> F  | +43 (0)662 2282-9
> E  | h...@technodat.at
> W | www.technodat.at
> 
> Legal form: GmbH; registered office: Salzburg
> Commercial register: Landesgericht Salzburg
> FN 64072z; DVR: 0481831; VAT no. ATU33826508
> 



Re: Solr 8.0.0 + IndexUpgrader

2019-04-01 Thread Shawn Heisey

On 4/1/2019 9:19 AM, Herbert Hackelsberger wrote:

I tried to upgrade my test index from Solr 7.7.1 to Solr 8.0.0.
The file segments_4h7 already contains the string Lucene70.
I upgraded before with this command:

java -cp lucene-core-7.7.1.jar;lucene-backward-codecs-7.7.1.jar 
org.apache.lucene.index.IndexUpgrader C:\solr\server\solr\syneris\data\index\

Everything went successfully; when I start Solr via solr.cmd start, no errors 
are logged.
Now, to upgrade to Solr 8, I tried to upgrade the index with 
the following command:

java -cp lucene-core-8.0.0.jar;lucene-backward-codecs-8.0.0.jar 
org.apache.lucene.index.IndexUpgrader C:\solr\server\solr\syneris\data\index\


Upgrading through two or more major versions is not supported.  If the 
index has ever been touched by version 6.6.x or older, then 8.x will not 
be able to read that index, even if it is upgraded to 7.x first.


Reindexing from scratch is the only option.  In my opinion, all indexes 
should be rebuilt from scratch when upgrading, even when the new version 
can read the old format.


Thanks,
Shawn


Solr 8.0.0 + IndexUpgrader

2019-04-01 Thread Herbert Hackelsberger
Hi,

I tried to upgrade my test index from Solr 7.7.1 to Solr 8.0.0.
The file segments_4h7 already contains the string Lucene70.
I upgraded before with this command:

java -cp lucene-core-7.7.1.jar;lucene-backward-codecs-7.7.1.jar 
org.apache.lucene.index.IndexUpgrader C:\solr\server\solr\syneris\data\index\

Everything went successfully; when I start Solr via solr.cmd start, no errors 
are logged.
Now, to upgrade to Solr 8, I tried to upgrade the index with 
the following command:

java -cp lucene-core-8.0.0.jar;lucene-backward-codecs-8.0.0.jar 
org.apache.lucene.index.IndexUpgrader C:\solr\server\solr\syneris\data\index\

But I always get an exception:

Exception in thread "main" org.apache.lucene.index.IndexFormatTooOldException: 
Format version is not supported (resource 
BufferedChecksumIndexInput(MMapIndexInput(path="C:\solr\server\solr\syneris\data\index\segments_4h7"))):
 This index was initially created with Lucene 6.x while the current version is 
8.0.0 and Lucene only supports reading the current and previous major 
versions.. This version of Lucene only supports indexes created with release 
7.0 and later.
at 
org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:318)
at 
org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:289)
at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:432)
at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:429)
at 
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:680)
at 
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:632)
at 
org.apache.lucene.index.SegmentInfos.readLatestCommit(SegmentInfos.java:434)
at 
org.apache.lucene.index.DirectoryReader.listCommits(DirectoryReader.java:260)
at org.apache.lucene.index.IndexUpgrader.upgrade(IndexUpgrader.java:158)
at org.apache.lucene.index.IndexUpgrader.main(IndexUpgrader.java:78)

Any ideas, without performing a full reindex?


Kind regards

Herbert Hackelsberger
Customer Support / Quality Assurance
__
TECHNODAT Technische Datenverarbeitung GmbH
Jakob-Haringer-Straße 6
5020  Salzburg / Austria

T  | +43 (0)662 2282-141
F  | +43 (0)662 2282-9
E  | h...@technodat.at
W | www.technodat.at

Legal form: GmbH; registered office: Salzburg
Commercial register: Landesgericht Salzburg
FN 64072z; DVR: 0481831; VAT no. ATU33826508



Re: unable to find valid certification path to requested target

2019-04-01 Thread Branham, Jeremy (Experis)
Hi Joseph –
I don’t think this is a Solr issue. It sounds like your http crawling process 
doesn’t trust the cert that Solr is using.
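
If that's the case, one common fix is to export the Solr certificate and import 
it into the truststore the crawler's JVM uses (a sketch, reusing the alias and 
passwords from your commands below; the cacerts path varies by JVM):

keytool -exportcert -alias aliasname -keystore solr-ssl.keystore.pfx -storetype PKCS12 -storepass password -file solr-ssl.crt
keytool -importcert -alias solr-ssl -file solr-ssl.crt -keystore "%JAVA_HOME%\jre\lib\security\cacerts" -storepass changeit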

Looks like you’re on the right track here – [I stumbled onto your post at 
Github]
https://github.com/Norconex/collector-http/issues/581

 
Jeremy Branham
jb...@allstate.com

On 3/31/19, 9:26 PM, "JTytler"  wrote:

I have created a keystore file and have enabled SSL on my solr server using
the following  procedures:
 
1) Created a PKCS#12 file using the command:
keytool -genkey -alias aliasname -keystore /solr-ssl.keystore.pfx -storetype
PKCS12 -keyalg RSA -storepass password -ext
SAN=dns:localhost,dns:solr-devapp01.devt1.restOfDomain -validity 730
-keysize 2048
 
2) Imported the PKCS#12 keystore file into the Trusted Root Certification Authorities store
 
3) Copied the PKCS#12 file solr-ssl.keystore.pfx to the Solr /server/etc folder
 
4) Modified solr.in.cmd file with the following:
 
set SOLR_SSL_ENABLED=true
set SOLR_SSL_KEY_STORE=etc/solr-ssl.keystore.pfx
set SOLR_SSL_KEY_STORE_PASSWORD=secret
set SOLR_SSL_TRUST_STORE=etc/solr-ssl.keystore.pfx
set SOLR_SSL_TRUST_STORE_PASSWORD=secret
 
set SOLR_SSL_NEED_CLIENT_AUTH=false
set SOLR_SSL_WANT_CLIENT_AUTH=false
set SOLR_SSL_KEY_STORE_TYPE=PKCS12
set SOLR_SSL_TRUST_STORE_TYPE=PKCS12
 
 
I can access the Solr admin at https://localhost:8983/solr and can also
crawl websites using Norconex httpcrawler.   However, after the documents
are crawled, I am unable to commit the crawled documents into the Solr
index.   I get the error "unable to find valid certification path to
requested target".  

I would appreciate it if someone could help me with this, as this is the first
time I am trying to set up SSL/TLS.




--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html




Re: Documentation for Apache Solr 8.0.0?

2019-04-01 Thread Jason Gerlowski
The Solr Reference Guide (of which the online documentation is a part)
gets built and released separately from the Solr distribution itself.
The Solr community tries to keep the code and documentation releases
as close together as we can, but the releases require work and are
done on a volunteer basis.  No one has volunteered for the 8.0.0
reference-guide release yet, but I suspect a volunteer will come
forward soon.

In the meantime though, there is documentation for Solr 8.0.0
available.  Solr's documentation is included alongside the code.  You
can check out Solr and build the documentation yourself by moving to
"solr/solr-ref-guide" and running the command "ant clean default" from
that directory.  This will build the same HTML pages you're used to
seeing at lucene.apache.org/solr/guide, and you can open the local
copies in your browser and browse them as you normally would.
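
Concretely, the steps look something like this (assuming you want the 8.0
release branch, branch_8_0):

git clone https://github.com/apache/lucene-solr.git
cd lucene-solr
git checkout branch_8_0
cd solr/solr-ref-guide
ant clean default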

Alternatively, the Solr mirror on Github does its best to preview the
documentation.  It doesn't display perfectly, but it might help tide you
over until the official documentation is available, if
you're unwilling or unable to build the documentation site locally:
https://github.com/apache/lucene-solr/blob/branch_8_0/solr/solr-ref-guide/src/index.adoc

Hope that helps,

Jason

On Mon, Apr 1, 2019 at 7:34 AM Yoann Moulin  wrote:
>
> Hello,
>
> I’m looking for the documentation for the latest release of Solr (8.0), but it 
> looks like it’s not online yet.
>
> https://lucene.apache.org/solr/news.html
>
> http://lucene.apache.org/solr/guide/
>
> Do you know when it will be available?
>
> Best regards.
>
> --
> Yoann Moulin
> EPFL IC-IT


Documentation for Apache Solr 8.0.0?

2019-04-01 Thread Yoann Moulin
Hello,

I’m looking for the documentation for the latest release of Solr (8.0), but it 
looks like it’s not online yet.

https://lucene.apache.org/solr/news.html

http://lucene.apache.org/solr/guide/

Do you know when it will be available?

Best regards.

-- 
Yoann Moulin
EPFL IC-IT