Re: Client Connection Resets during Solr API Calls

2019-04-05 Thread bban954
Resolved this when I noticed that the ELB health checks were able to access
the admin UI. The issue was that port 8983 was indeed blocked from the SSH
machine, as that machine was located over a VPN tunnel into AWS. Our
deployed applications had no issue accessing the test collection as they
resided purely in AWS.
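
For anyone who hits the same symptom, a minimal way to isolate the network path is to
run the identical probe from a host inside the VPC and from the machine on the far side
of the VPN. The IP, port and collection below are the ones from this thread; the in-VPC
host is hypothetical:

# From an EC2 instance in the same VPC -- expected to succeed:
curl -sv "http://10.131.200.233:8983/solr/test/admin/ping" -o /dev/null
# From the workstation over the VPN -- if this one resets while the in-VPC call works,
# the problem is the VPN/firewall path, not Solr:
nc -vz 10.131.200.233 8983
curl -sv "http://10.131.200.233:8983/solr/test/admin/ping" -o /dev/null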



--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: high cpu threads (solr 7.5)

2019-04-05 Thread Hari Nakka
Thank you. We are planning to upgrade to JDK 11.
Is Solr 7.5 fully compatible with OpenJDK 11?


On Thu, Apr 4, 2019 at 9:58 AM Erick Erickson 
wrote:

> It hasn’t been addressed by any Java 8 releases that I know of.
>
> See: https://issues.apache.org/jira/browse/SOLR-13349
>
> The work-around in Solr is trivial, see the patch so it’d be simple to
> patch/compile on your own.
>
> It will be released in a Solr 7.7.2 and Solr 8.1 or later, neither of
> which have been released yet.
>
> Or move to Java 9 or later.
>
> > On Apr 3, 2019, at 4:39 PM, Hari Nakka  wrote:
> >
> > We are noticing high CPU utilization on the threads below. Looks like a
> > known issue (https://github.com/netty/netty/issues/327).
> >
> > But not sure if this has been addressed in any of the 1.8 releases.
> >
> > Can anyone help with this?
> >
> >
> > Version: solr cloud 7.5
> >
> > OS: CentOS 7
> >
> > JDK: Oracle JDK 1.8.0_191
> >
> >
> >
> >
> >
> > "qtp574568002-3821728" #3821728 prio=5 os_prio=0 tid=0x7f4f20018000
> > nid=0x4996 runnable [0x7f51fc6d8000]
> >
> >   java.lang.Thread.State: RUNNABLE
> >
> >at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
> >
> >at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
> >
> >at
> sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
> >
> >at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
> >
> >- locked <0x00064cded430> (a sun.nio.ch.Util$3)
> >
> >- locked <0x00064cded418> (a
> > java.util.Collections$UnmodifiableSet)
> >
> >- locked <0x00064cdf6e38> (a sun.nio.ch.EPollSelectorImpl)
> >
> >at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
> >
> >at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
> >
> >at
> > org.eclipse.jetty.io
> .ManagedSelector$SelectorProducer.select(ManagedSelector.java:396)
> >
> >at
> > org.eclipse.jetty.io
> .ManagedSelector$SelectorProducer.produce(ManagedSelector.java:333)
> >
> >at
> >
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:357)
> >
> >at
> >
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:181)
> >
> >at
> >
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
> >
> >at
> >
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)
> >
> >at
> >
> org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)
> >
> >at
> >
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:762)
> >
> >at
> >
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:680)
> >
> >at java.lang.Thread.run(Thread.java:748)
>
>
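
For anyone wanting to confirm the same pattern on their own nodes, a hedged quick check
(it assumes a Linux host where Solr was started via start.jar and that jstack from the
running JDK is on the PATH):

# Find the Solr process and dump its threads:
SOLR_PID=$(pgrep -f start.jar | head -1)
jstack "$SOLR_PID" > /tmp/solr-threads.txt
# Count selector threads parked in epollWait -- being in epollWait is normal; the bug
# shows up when these same qtp/selector threads also burn CPU in 'top -H -p $SOLR_PID':
grep -c 'EPollArrayWrapper.epollWait' /tmp/solr-threads.txt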


Client Connection Resets during Solr API Calls

2019-04-05 Thread bban954
I've recently stood up a SolrCloud 7.7.1 cluster on AWS EC2 instances, with a
dedicated ZooKeeper ensemble (3.4.13). It has the basic trappings of a 'test'
collection: I can access the Solr admin UI through an SSH tunnel and run the
following API calls successfully (from a Solr node in the cluster):

curl http://localhost:8983/solr/test/admin/ping
{
  "responseHeader":{
    "zkConnected":true,
    "status":0,
    "QTime":0,
    "params":{
      "q":"{!lucene}*:*",
      "distrib":"false",
      "df":"_text_",
      "rows":"10",
      "echoParams":"all"}},
  "status":"OK"}

curl http://localhost:8983/solr/test/select?q=*:*
{
  "responseHeader":{
    "zkConnected":true,
    "status":0,
    "QTime":10,
    "params":{
      "q":"*:*"}},
  "response":{"numFound":0,"start":0,"maxScore":0.0,"docs":[]
  }}

However, attempting the same API calls from a remote client results in
connection resets. It does not appear to be a firewall issue, as neither
netcat nor SSH has any trouble. The connection is made over private IP
addresses within the network, from the same machine I use to SSH into a Solr
EC2 instance (10.131.200.233 as an example).

nc -vz 10.131.200.233 8983
found 0 associations
found 1 connections:
 1: flags=82
outif gpd0
src 172.16.253.5 port 50830
dst 10.131.200.233 port 8983
rank info not available
TCP aux info available

Connection to 10.131.200.233 port 8983 [tcp/*] succeeded!

curl http://10.131.200.233:8983/solr/test/admin/ping
curl: (56) Recv failure: Connection reset by peer
curl http://10.131.200.233:8983/solr/test/select?q=*:*
curl: (56) Recv failure: Connection reset by peer

I am unable to access the Solr Admin UI without an SSH tunnel, contrary to
the doc at
https://lucene.apache.org/solr/guide/7_7/aws-solrcloud-tutorial.html. My
internet searches have resulted in a fair bit of confusion, but it seems
like Solr denies anything other than localhost as a security feature. In our
use case we have a number of client applications already deployed on a fleet
of other EC2 instances and would like to give them API search capabilities
against this up-and-coming SolrCloud cluster. I was thinking of just putting
an AWS ALB/ELB in front of the Solr nodes, but the primary concern is simply
getting remote queries working in the first place.
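
One hedged sanity check worth running on the Solr node itself before adding a load
balancer: confirm which interface Jetty is actually listening on (commands assume a
Linux EC2 instance; the IP is the one from this thread):

sudo ss -ltnp | grep 8983
# If the local address is 127.0.0.1:8983, remote clients can never connect; if it is
# 0.0.0.0:8983 or *:8983, the resets are happening somewhere on the network path
# (security groups, NACLs, or the VPN), not in Solr.
curl -sv "http://10.131.200.233:8983/solr/test/admin/ping" -o /dev/null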




--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: Solr LTR model Performance Issues

2019-04-05 Thread Kamal Kishore Aggarwal
Hi,

Any update on this?
Is this model running in multi-threaded mode, or is there any scope to do
this? Please let me know.

Regards
Kamal

On Sat, Mar 23, 2019 at 10:35 AM Kamal Kishore Aggarwal <
kkroyal@gmail.com> wrote:

> HI Jörn Franke,
>
> Thanks for the quick reply.
>
> I have performed the jmeter load testing on one of the server for Linear
> vs Multipleadditive tree model. We are using lucidworks fusion.
> There is some business logic in the query pipeline followed by main solr
> ltr query. This is the total time taken by query pipeline.
> Below are the response time:
>
> Threads  Ramp-up  Loops  Model                Requests  Avg Resp Time (ms): Iter 1 / Iter 2 / Iter 3
> 10       1        10     Linear Model         100       2038 / 1998 / 1975
> 25       1        10     Linear Model         250       4329 / 3961 / 3726
> 10       1        10     MultiAdditive Model  100       12721 / 12631 / 12567
> 25       1        10     MultiAdditive Model  250       27924 / 31420 / 30758
>
> # of docs: 500K and index size is 10 GB.
>
> As of now, I did not checked the CPU or memory usage, but did not observed
> any errors during jmeter load test.
>
> Let me know if any other information is required.
>
> Regards
> Kamal
>
>
>
> On Fri, Mar 22, 2019 at 5:13 PM Jörn Franke  wrote:
>
>> Can you share the time needed of the two models? How many documents? What
>> is your loading pipeline? Have you observed cpu/memory?
>>
>> > On 22.03.2019, at 12:01, Kamal Kishore Aggarwal <
>> kkroyal@gmail.com>:
>> >
>> > Hi,
>> >
>> > I am trying to use LTR with solr 6.6.2.There are different types of
>> model
>> > like Linear Model, Multiple Additive Trees Model and Neural Network
>> Model.
>> >
>> > I have tried using Linear & Multiadditive model and compared the
>> > performance of results. There is a major difference in response time
>> > between the 2 models. I am observing that Multiadditive model is taking
>> way
>> > higher time than linear model.
>> >
>> > Is there a way we can improve the performance here.
>> >
>> > Note: The size of Multiadditive model is 136 MB.
>> >
>> > Regards
>> > Kamal Kishore
>> >
>>
>


Re: Solr 6.6 and OpenJDK11

2019-04-05 Thread Jan Høydahl
The page below is just a draft of some wording that will go into the Reference
Guide in the next version.
You can stick with Java8 for Solr 7.x and 8.x as well, but in most cases Java11 
will work just fine.

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> On 5 Apr 2019, at 15:04, e_bri...@videotron.ca wrote:
> 
> There is a lack of consensus about Java 11 support. We have been recommended 
> to stick to Java 8 even on Solr 7.X. Is the page bellow the 'official' 
> position?
> 
> Eric.
> 
> On 05/04/19 03:23, Jan Høydahl wrote:
>> 
>> Solr7 is the first Solr version that has been proved to work with JDK9+
>> So you better stick with Java8. Solr 7/8 will work with JDK11, and Solr 9 
>> will likely require it.
>> Much more details to be found here: 
>> https://wiki.apache.org/solr/SolrJavaVersions
>> 
>> --
>> Jan Høydahl, search solution architect
>> Cominvent AS - www.cominvent.com
>> 
>>> On 5 Apr 2019, at 05:46, solrnoobie wrote:
>>> 
>>> So we are having some production issues with solr 6.6 with OpenJDK 11. There
>>> are a lot of heap errors (ours was set to 10gig on a 16 gig instance) and we
>>> never encountered this until we upgraded from Oracle JDK 8 to OpenJDK 11.
>>> 
>>> So is it advisable to keep it at openjdk 11 or should we downgrade to
>>> OpenJDK 8?
>>> 
>>> 
>>> 
>>> --
>>> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
>> 
>> 
> 
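
A quick, hedged way to confirm which JVM a Solr node is actually running before and
after such an upgrade (host and port below are placeholders; the endpoint is Solr's
system info API):

curl -s "http://localhost:8983/solr/admin/info/system?wt=json&indent=on" | grep -A 3 '"jvm"'
# The "jvm" section reports the exact vendor and version the running Solr process uses,
# which is more reliable than checking 'java -version' on the box.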



Re: Control Solr spellcheck functionality to provide suggestions for correct word

2019-04-05 Thread Rohan Kasat
Hi Rashi,

Can you share your spellcheck configuration? It will be easier to check with
the configuration in hand.

Regards,
Rohan Kasat

On Fri, Apr 5, 2019 at 10:29 AM rashi gandhi 
wrote:

> HI,
>
> I am working on Solr spellcheck feature, and I am using index based
> spellcheck dictionary as a source for spellcheck suggestions.
> I observed that collated results returned by spellcheck component, provide
> the suggestions for misspelled words, however also provide suggestions for
> correctly spelled word in query.
>
> For example,
>  misspelled query - root priviladge to user
>
> *collated results (even suggestion includes the same) *-
> root privilege to user, room privilege to user, root privilege to users,
> rest privilege to user, root privilege to used
>
> It corrected word 'privilege' which was misspelled, however also provided
> suggestions for 'root' or 'user', which were already correct.
>
> is there a way , we can tell Solr not to provide suggestions for correct
> word, when using spellcheck feature.
>
> Please provide pointers.
>
-- 

*Regards, Rohan Kasat*


Control Solr spellcheck functionality to provide suggestions for correct word

2019-04-05 Thread rashi gandhi
HI,

I am working on the Solr spellcheck feature, and I am using an index-based
spellcheck dictionary as the source for spellcheck suggestions.
I observed that the collated results returned by the spellcheck component
provide suggestions for misspelled words, but also provide suggestions for
correctly spelled words in the query.

For example,
 misspelled query - root priviladge to user

*collated results (the suggestions include the same)* -
root privilege to user, room privilege to user, root privilege to users,
rest privilege to user, root privilege to used

It corrected the word 'privilege', which was misspelled, but also provided
suggestions for 'root' and 'user', which were already correct.

Is there a way we can tell Solr not to provide suggestions for correct
words when using the spellcheck feature?

Please provide pointers.
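
Which knob applies depends on the spellchecker implementation in the config (hence the
request for it above), but as a hedged sketch assuming the DirectSolrSpellChecker and a
placeholder collection/handler name, the relevant request parameters look like this:

# spellcheck.alternativeTermCount=0 asks for no alternatives for terms that already
# exist in the index; spellcheck.maxResultsForSuggest=0 suppresses suggestions entirely
# when the original query already returns hits.
curl -G "http://localhost:8983/solr/mycollection/spell" \
  --data-urlencode "q=root priviladge to user" \
  --data-urlencode "spellcheck=true" \
  --data-urlencode "spellcheck.collate=true" \
  --data-urlencode "spellcheck.alternativeTermCount=0" \
  --data-urlencode "spellcheck.maxResultsForSuggest=0"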


Re: Model type does not exist MultipleAdditiveTreesModel

2019-04-05 Thread Kamal Kishore Aggarwal
Hi Roee,

It looks like the error is due to the blank "features" value in the JSON:

"name" : "my",
   "features":[],
   "params" : {

I have observed that Solr LTR often returns the generic error 'Model type
does not exist', but the actual problem later turns out to be an issue with
the JSON. Just wanted to share my experience.

Regards
Kamal

On Thu, May 31, 2018 at 4:07 PM Roee T  wrote:

> Hi all,
> I'm trying to upload the most simple model to solr 7.3.1 and i get an
> error:
>
> the model:
>
> {
>"class" : "org.apache.solr.ltr.model.MultipleAdditiveTreesModel",
>"name" : "my",
>"features":[],
>"params" : {
>"trees" : [
>{
>"weight" : 1,
>"root" : {
>"value" : -10
>}} ]}}
>
> The error:
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   "root-error-class","java.lang.IllegalArgumentException"],
> "msg":"org.apache.solr.ltr.model.ModelException: Model type does not
> exist org.apache.solr.ltr.model.MultipleAdditiveTreesModel",
> "code":400}}
>
>
> I inserted the configurations to solrconfig.xml like
> <lib dir="..." regex=".*\.jar" />
> and started solr using -Dsolr.ltr.enabled=true
>
> please help me
> Thanks you all ;)
>
>
>
> --
> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
>
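
If the empty "features" list is indeed the culprit, here is a hedged sketch of a working
upload (collection name "techproducts" and the originalScore feature are placeholders;
the endpoints are the standard LTR feature-store/model-store URLs):

# Register at least one feature first...
curl -XPUT 'http://localhost:8983/solr/techproducts/schema/feature-store' \
  -H 'Content-type:application/json' --data-binary '[
  {"name":"originalScore","class":"org.apache.solr.ltr.feature.OriginalScoreFeature","params":{}}
]'
# ...then reference it from the model instead of passing "features":[]
curl -XPUT 'http://localhost:8983/solr/techproducts/schema/model-store' \
  -H 'Content-type:application/json' --data-binary '{
  "class":"org.apache.solr.ltr.model.MultipleAdditiveTreesModel",
  "name":"my",
  "features":[{"name":"originalScore"}],
  "params":{"trees":[{"weight":1,"root":{"value":-10}}]}
}'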


Re: Document IDs instead of count for facets?

2019-04-05 Thread Jan Moszczynski
This is old. Still worth replying, as sometimes somebody might stumble across 
the same problem. 

Well, I achieved this rather simply by using facet.pivot. If you define
facet.pivot as "n_cellered_diseaseExact,id" you will get back a data structure
containing a pivot list of all diseases, with each disease in turn containing a
list of the ids of the documents matching it. You can control the maximum length
of this id sublist with the facet.limit parameter; it is worth setting it to a
number larger than the length of any list you expect. In my case 200 was
sufficient.

So the solution is:

facet.pivot=n_cellered_diseaseExact,id
facet.limit=200
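
As a hedged example of the full request (host and collection name are placeholders; the
field name is the one from this thread):

curl -G "http://localhost:8983/solr/mycollection/select" \
  --data-urlencode "q=*:*" \
  --data-urlencode "rows=0" \
  --data-urlencode "facet=true" \
  --data-urlencode "facet.pivot=n_cellered_diseaseExact,id" \
  --data-urlencode "facet.limit=200"
# Each entry under facet_pivot then carries a nested "pivot" list whose values are the
# document ids matching that disease.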

Good luck.

Re: DocValues or stored fields to enable atomic updates

2019-04-05 Thread Emir Arnautović
Hi Andreas,
Stored values are compressed, so they should take less disk space. I am thinking
that doc values might perform better when it comes to executing atomic updates.

HTH,
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/



> On 5 Apr 2019, at 12:54, Andreas Hubold  wrote:
> 
> Hi,
> 
> I have a question on schema design: If a single-valued StrField is just used 
> for filtering results by exact value (indexed=true) and its value isn't 
> needed in the search result and not for sorting, faceting or highlighting - 
> should I use docValues=true or stored=true to enable atomic updates? Or even 
> both? I understand that either docValues or stored fields are needed for 
> atomic updates but which of the two would perform better / consume less 
> resources in this scenario?
> 
> Thank you.
> 
> Best regards,
> Andreas
> 
> 
> 
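
A hedged sketch of what this looks like in practice (collection, field and document ids
are placeholders; the Schema API and atomic-update syntax are standard):

# Define the filter-only StrField with docValues instead of stored:
curl -X POST -H 'Content-type:application/json' \
  "http://localhost:8983/solr/mycollection/schema" --data-binary '{
  "add-field": {"name":"category_code","type":"string",
                "indexed":true,"stored":false,"docValues":true}
}'
# An atomic update on some other field still works, because Solr rebuilds the whole
# document from stored and docValues fields behind the scenes:
curl -X POST -H 'Content-type:application/json' \
  "http://localhost:8983/solr/mycollection/update?commit=true" --data-binary '[
  {"id":"doc1","title":{"set":"New title"}}
]'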



Re : Re: Solr 6.6 and OpenJDK11

2019-04-05 Thread e_briere
There is a lack of consensus about Java 11 support. We have been recommended to
stick to Java 8 even on Solr 7.X. Is the page below the 'official' position?

Eric.

On 05/04/19 03:23, Jan Høydahl wrote:
> 
> Solr7 is the first Solr version that has been proved to work with JDK9+
> So you better stick with Java8. Solr 7/8 will work with JDK11, and Solr 9 
> will likely require it.
> Much more details to be found here: 
> https://wiki.apache.org/solr/SolrJavaVersions
> 
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
> 
> > On 5 Apr 2019, at 05:46, solrnoobie wrote:
> > 
> > So we are having some production issues with solr 6.6 with OpenJDK 11. There
> > are a lot of heap errors (ours was set to 10gig on a 16 gig instance) and we
> > never encountered this until we upgraded from Oracle JDK 8 to OpenJDK 11.
> > 
> > So is it advisable to keep it at openjdk 11 or should we downgrade to
> > OpenJDK 8?
> > 
> > 
> > 
> > --
> > Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
> 
> 



DocValues or stored fields to enable atomic updates

2019-04-05 Thread Andreas Hubold

Hi,

I have a question on schema design: If a single-valued StrField is just 
used for filtering results by exact value (indexed=true) and its value 
isn't needed in the search result and not for sorting, faceting or 
highlighting - should I use docValues=true or stored=true to enable 
atomic updates? Or even both? I understand that either docValues or 
stored fields are needed for atomic updates but which of the two would 
perform better / consume less resources in this scenario?


Thank you.

Best regards,
Andreas





Re: Windows SSL.Keystore and Windows TrustStore requires an empty PKCS#12 Key Store

2019-04-05 Thread Jan Høydahl
Hi,

Thanks for your proposal. I think it warrants a new JIRA issue as a feature 
request.
Patches to both code and documentation are highly welcome!

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> On 5 Apr 2019, at 10:53, Herbert Hackelsberger wrote:
> 
> Hi,
> 
> I managed to get Windows-MY (SSL Personal Store) and Windows-ROOT (Root CA 
> Store) with Solr 8.0.0 to work.
> How?
> 
> I enabled the following in solr.in.cmd:
> 
> set SOLR_SSL_CHECK_PEER_NAME=true
> set SOLR_SSL_ENABLED=true
> set SOLR_SSL_KEY_STORE=NONE
> set SOLR_SSL_KEY_STORE_PASSWORD=
> set SOLR_SSL_TRUST_STORE=NONE
> set SOLR_SSL_TRUST_STORE_PASSWORD=
> set SOLR_SSL_NEED_CLIENT_AUTH=true
> set SOLR_SSL_WANT_CLIENT_AUTH=false
> set SOLR_SSL_KEY_STORE_TYPE=Windows-MY
> set SOLR_SSL_TRUST_STORE_TYPE=Windows-ROOT
> 
> A also edited solr.cmd in the following way:
> set "SOLR_SSL_OPTS= -Djavax.net.ssl.keyStoreProvider=SunMSCAPI 
> -Djavax.net.ssl.trustStoreProvider=SunMSCAPI"
> 
> But there is one problem:
> The Microsoft Key Store is not a file based Keystore.
> 
> What happens:
> SOLR logs a missing KEYSTORE File "NONE"
> 
> The official documentation at
> https://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/JSSERefGuide.html
> tells me:
> 
> * javax.net.ssl.keyStore system property.
> Note that the value NONE may be specified. This setting is appropriate if the 
> keystore is not file-based (for example, it resides in a hardware token)
> 
> The same is valid for trustStore.
> 
> So my workaround here is to place an empty PKCS#12 keystore File called 
> "NONE" in the \server directory, where start.jar resides.
> Solr 4.4 was happy with just an empty 0 byte NONE file.
> 
> It seems to me, that currently only file based key stores are working without 
> manual workarounds.
> A proper solution would be very nice for other so it can be easily configured.
> 
> When I specify null, Solr requires the keystore file to be called null.
> And if not password specified at all, you won't get it to work.
> 
> The Solr Reference Guide also lacks information here.
> 
> 
> The solution would be in the code to specify null when loading the keystore 
> file, and password also null.
> I found that while searching:
> 
> https://stackoverflow.com/questions/13697934/windows-keystores-and-certificates/29534497
> 
> 
> Other software also seems to have problems with this:
> https://github.com/gradle/gradle/issues/6584
> 
> 
> It would be great to see better integration of the Windows keystore I the 
> future, as it was very difficulty to analyze find out, when you start from 
> zero.
> 



Windows SSL.Keystore and Windows TrustStore requires an empty PKCS#12 Key Store

2019-04-05 Thread Herbert Hackelsberger
Hi,

I managed to get Windows-MY (SSL Personal Store) and Windows-ROOT (Root CA 
Store) with Solr 8.0.0 to work.
How?

I enabled the following in solr.in.cmd:

set SOLR_SSL_CHECK_PEER_NAME=true
set SOLR_SSL_ENABLED=true
set SOLR_SSL_KEY_STORE=NONE
set SOLR_SSL_KEY_STORE_PASSWORD=
set SOLR_SSL_TRUST_STORE=NONE
set SOLR_SSL_TRUST_STORE_PASSWORD=
set SOLR_SSL_NEED_CLIENT_AUTH=true
set SOLR_SSL_WANT_CLIENT_AUTH=false
set SOLR_SSL_KEY_STORE_TYPE=Windows-MY
set SOLR_SSL_TRUST_STORE_TYPE=Windows-ROOT

I also edited solr.cmd in the following way:
set "SOLR_SSL_OPTS= -Djavax.net.ssl.keyStoreProvider=SunMSCAPI 
-Djavax.net.ssl.trustStoreProvider=SunMSCAPI"

But there is one problem:
The Microsoft key store is not a file-based keystore.

What happens:
Solr logs a missing keystore file "NONE".

The official documentation at
https://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/JSSERefGuide.html
tells me:

* javax.net.ssl.keyStore system property.
Note that the value NONE may be specified. This setting is appropriate if the 
keystore is not file-based (for example, it resides in a hardware token)

The same is valid for trustStore.

So my workaround here is to place an empty PKCS#12 keystore file called "NONE"
in the \server directory, where start.jar resides.
Solr 4.4 was happy with just an empty 0-byte NONE file.

It seems to me that currently only file-based key stores work without manual
workarounds.
A proper solution would be very nice for others so it can be easily configured.

When I specify null, Solr requires the keystore file to be called null.
And if no password is specified at all, you won't get it to work.

The Solr Reference Guide also lacks information here.


The solution would be for the code to pass null when loading the keystore
file, and null for the password as well.
I found that while searching:

https://stackoverflow.com/questions/13697934/windows-keystores-and-certificates/29534497


Other software also seems to have problems with this:
https://github.com/gradle/gradle/issues/6584


It would be great to see better integration of the Windows keystore in the
future, as it was very difficult to analyze and figure out when you start from
zero.
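
A hedged way to produce that placeholder store rather than relying on a 0-byte file
(shown Unix-style; the keytool invocations are identical on Windows, and the alias,
password and DN are arbitrary): generate a throwaway key pair into a PKCS#12 file and
then delete the entry so the store ends up empty.

keytool -genkeypair -alias placeholder -keyalg RSA -storetype PKCS12 -keystore empty.p12 -storepass changeit -keypass changeit -dname "CN=placeholder"
keytool -delete -alias placeholder -storetype PKCS12 -keystore empty.p12 -storepass changeit
# Copy the now-empty store to a file literally named NONE in the \server directory,
# next to start.jar:
cp empty.p12 NONE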



[ANNOUNCE] Apache Solr 6.6.6 released

2019-04-05 Thread Ishan Chattopadhyaya
5 April 2019, Apache Solr™ 6.6.6 available

The Lucene PMC is pleased to announce the release of Apache Solr 6.6.6.

Solr is the popular, blazing fast, open source NoSQL search platform from the
Apache Lucene project. Its major features include powerful full-text search,
hit highlighting, faceted search and analytics, rich document parsing,
geospatial search, extensive REST APIs as well as parallel SQL. Solr is
enterprise grade, secure and highly scalable, providing fault tolerant
distributed search and indexing, and powers the search and navigation features
of many of the world's largest internet sites.

This release contains the following important bug fixes:

 * Fix memory leak (upon collection reload or ZooKeeper session
expiry) in ZkIndexSchemaReader.
 * Fix for Rule-based Authorization skipping authorization if the querying
node does not host the collection
 * (CVE-2017-3164) Make it possible to configure a host whitelist for
distributed search

The release is available for immediate download at:

https://www-us.apache.org/dist/lucene/solr/6.6.6/

Please read CHANGES.txt for a detailed list of changes:

  https://lucene.apache.org/solr/6_6_6/changes/Changes.html

Please report any feedback to the mailing lists
(http://lucene.apache.org/solr/discussion.html)

Note: The Apache Software Foundation uses an extensive mirroring
network for distributing releases. It is possible that the mirror you
are using may not have replicated the release yet. If that is the
case, please try another mirror. This also goes for Maven access.


Re: solr tika extraction video creation date problem (hours ahead)

2019-04-05 Thread Alexandre Rafalovitch
Well, Tika uses different libraries to extract different formats, so maybe
there is a bug. I would just get a standalone Tika (of a version matching the
one in Solr) and see what the output for two sample files is. Then I would
check with the latest Tika, just in case.

I would also use some non-Tika way to check what the dates are, just
in case the date is wrong during encoding rather than during indexing.
A low-probability chance, but just covering all the bases.

Regards,
   Alex.
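
A hedged sketch of that comparison (the tika-app version and file names are
placeholders; the tika-app version should match the Tika bundled with your Solr):

# Print only the metadata Tika extracts, for one video and one image:
java -jar tika-app-1.18.jar -m sample-video.mp4
java -jar tika-app-1.18.jar -m sample-image.jpg
# Non-Tika cross-check of the embedded timestamps, e.g. with exiftool:
exiftool -time:all -G1 sample-video.mp4 sample-image.jpg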

On Fri, 5 Apr 2019 at 01:39, whisere  wrote:
>
> Thanks Alex. The problem is image creation date is correct, but the video
> creation date is wrong (hours behind), if I set the time_zone I think the
> image creation date will be wrong then. wonder what the difference between
> image and video extraction in tika.
>
>
>
> --
> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: Solr 6.6 and OpenJDK11

2019-04-05 Thread Jan Høydahl
Solr 7 is the first Solr version that has been proven to work with JDK 9+,
so for Solr 6.6 you'd better stick with Java 8. Solr 7/8 will work with JDK 11,
and Solr 9 will likely require it.
Much more details to be found here: 
https://wiki.apache.org/solr/SolrJavaVersions

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> On 5 Apr 2019, at 05:46, solrnoobie wrote:
> 
> So we are having some production issues with solr 6.6 with OpenJDK 11. There
> are a lot of heap errors (ours was set to 10gig on a 16 gig instance) and we
> never encountered this until we upgraded from Oracle JDK 8 to OpenJDK 11.
> 
> So is it advisable to keep it at openjdk 11 or should we downgrade to
> OpenJDK 8?
> 
> 
> 
> --
> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html