Is it possible to use the Lucene Query Builder? Is there any API to create boolean queries?

2019-11-27 Thread email
I am trying to reproduce the following query (built with the Lucene query builder) in Solr:


 

import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.FuzzyQuery;

BooleanQuery.Builder main = new BooleanQuery.Builder();

Term t1 = new Term("f1","term");
Term t2 = new Term("f1","second");
Term t3 = new Term("f1","another");

BooleanQuery.Builder q1 = new BooleanQuery.Builder();
q1.add(new FuzzyQuery(t1,2), BooleanClause.Occur.SHOULD);
q1.add(new FuzzyQuery(t2,2), BooleanClause.Occur.SHOULD);
q1.add(new FuzzyQuery(t3,2), BooleanClause.Occur.SHOULD);
q1.setMinimumNumberShouldMatch(2);

Term t4 = new Term("f1","anothert");
Term t5 = new Term("f1","anothert2");
Term t6 = new Term("f1","anothert3");

BooleanQuery.Builder q2 = new BooleanQuery.Builder();
q2.add(new FuzzyQuery(t4,2), BooleanClause.Occur.SHOULD);
q2.add(new FuzzyQuery(t5,2), BooleanClause.Occur.SHOULD);
q2.add(new FuzzyQuery(t6,2), BooleanClause.Occur.SHOULD);
q2.setMinimumNumberShouldMatch(2);


main.add(q1.build(),BooleanClause.Occur.SHOULD);
main.add(q2.build(),BooleanClause.Occur.SHOULD);
main.setMinimumNumberShouldMatch(1);

System.out.println(main.build());
// Prints: (((f1:term~2 f1:second~2 f1:another~2)~2) ((f1:anothert~2 f1:anothert2~2 f1:anothert3~2)~2))~1
// --> not a valid Solr query

 

 

In a few words: ( q1 OR q2 ), where q1 and q2 are sets of different terms. I'd
like to do a fuzzy search, but I also need a minimum number of terms to match.

The best I was able to create was something like this:

SolrQuery query = new SolrQuery();
query.set("fl", "term");
query.set("q", "term~1 term2~2 term3~2");
query.set("mm",2);

System.out.println(query);

 

I was unable to find any example that would let me build this type of query
in a single Solr request.

 

Is it possible to use the Lucene query builder with Solr? Is there any way
to create Boolean queries with Solr? Do I need to build the query as a
string? If so, how do I set the mm parameter in a string query?
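A note on the SolrQuery snippet above: mm is an (e)dismax parameter, so it only takes effect with defType=edismax or inside an {!edismax} subquery. One way to express ( q1 OR q2 ) with a per-group minimum-match in a single request is to nest {!edismax} subqueries via the _query_ hook. Below is a minimal sketch that only assembles the query string in plain Java (no SolrJ); the helper name is made up, and the field f1 and mm=2 simply mirror the Lucene example:

```java
public class NestedMmQuery {

    // Build one {!edismax} subquery: fuzzy SHOULD clauses with their own mm.
    // Hypothetical helper; escaping of quotes inside v='...' is omitted.
    static String fuzzyGroup(String field, int mm, String... terms) {
        StringBuilder clause = new StringBuilder();
        for (String t : terms) {
            if (clause.length() > 0) clause.append(' ');
            clause.append(t).append("~2");
        }
        return "_query_:\"{!edismax qf=" + field + " mm=" + mm
                + " v='" + clause + "'}\"";
    }

    public static void main(String[] args) {
        // ( q1 OR q2 ): each group requires at least 2 of its 3 fuzzy terms.
        String q = fuzzyGroup("f1", 2, "term", "second", "another")
                + " OR "
                + fuzzyGroup("f1", 2, "anothert", "anothert2", "anothert3");
        System.out.println(q);
        // Pass q as the main query, e.g. query.set("q", q) with SolrJ.
    }
}
```

The resulting string can be sent with the standard lucene query parser as the top level, since each group carries its own parser and mm locally.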

 

Thank you



Re: A Last Message to the Solr Users

2019-11-27 Thread Mark Miller
Now one company thinks I’m after them because they were the main source of
the jokes.

Companies is not a typo.

If you are using Solr to make or save tons of money or run your business
and you employ developers, please include yourself in this list.

You are taking, and in my opinion Solr is going down. It’s all against your
own interest even.

I know of enough people that want to solve this now that it’s likely only
a matter of time before they fix the situation - you never know though.
Things change, people get new jobs, jobs change. It will take at least 3-6
months to make things reasonable even with a good group banding together.

But if you are extracting value from this project and have Solr developers
- id like to think you have enough of a stake in this to think about
changing the approach everyone has been taking. It’s not working, and the
longer it goes on, the harder it’s getting to fix things.


-- 
- Mark

http://about.me/markrmiller


Re: Backup v.s. Snapshot API for Solr

2019-11-27 Thread Kayak28
Hello, Mr. Paras and Community Members:

>Once you have created a snapshot with
>CREATESNAPSHOT, you can restore the snapshot with same replication restore
>command, right?

Is it?

As far as I know, CREATESNAPSHOT is an action that creates a file named
snapshot_N under the data/snapshot_metadata directory.

snapshot_N is a binary file that contains the path to the index and the
commit name you defined, like below.


> solr-snapshots
> commit1:/home/vagrant/solr-7.4.0/server/solr/core1/data/index/
>
commit1 is the commit name I chose, and /home/vagrant/ is the path to my
index.

My question is: given only the path to the index and the commit name, how can
a snapshot restore my index?
Or is the CREATESNAPSHOT command only available in SolrCloud mode?
(My setup is standalone.)

I should try with SolrCloud mode later. Then I will execute CREATESNAPSHOT
and the restore command, and see whether I can really restore my index in
SolrCloud mode. So far I have only played with standalone mode.
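For what it's worth, the reference guide seems to tie these pieces together: the commit name recorded by CREATESNAPSHOT can be handed to the replication handler's backup command via its commitName parameter, and the resulting backup can then be restored by name. Below is a sketch that only assembles the URLs in plain Java; the core name techproducts and the backup name are placeholders, and the exact parameter wiring should be verified against the guide:

```java
public class SnapshotUrls {
    static final String BASE = "http://localhost:8983/solr";

    // CoreAdmin call that records a named commit point for a core.
    static String createSnapshot(String core, String commitName) {
        return BASE + "/admin/cores?action=CREATESNAPSHOT"
                + "&core=" + core + "&commitName=" + commitName;
    }

    // Replication-handler backup of exactly that commit point.
    static String backupSnapshot(String core, String backupName, String commitName) {
        return BASE + "/" + core + "/replication?command=backup"
                + "&name=" + backupName + "&commitName=" + commitName;
    }

    // Restore the named backup later.
    static String restoreBackup(String core, String backupName) {
        return BASE + "/" + core + "/replication?command=restore"
                + "&name=" + backupName;
    }

    public static void main(String[] args) {
        System.out.println(createSnapshot("techproducts", "commit1"));
        System.out.println(backupSnapshot("techproducts", "snap1", "commit1"));
        System.out.println(restoreBackup("techproducts", "snap1"));
    }
}
```

So the snapshot itself is not restorable directly; it pins a commit point that a subsequent backup can capture, and it is that backup that gets restored.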


On Tue, 26 Nov 2019 at 19:26, Paras Lehana  wrote:

> Hi Kaya,
>
> Sorry that I still cannot understand. Once you have created a snapshot with
> CREATESNAPSHOT, you can restore the snapshot with same replication restore
> command, right?
>
> How can I use a "snapshot", which is generated by  CREATESNAPSHOT API?
>
>
> You just used the name to restore the backup. Please try to explain what is
> your use case and what do you want to achieve. Maybe I'm not able to
> understand your query so I'll appreciate if someone else helps.
>
> On Mon, 25 Nov 2019 at 12:21, Kayak28  wrote:
>
> > Hello, Mr. Paras:
> >
> > Thank you for your response, and I apologize for confusing you.
> >
> > Actually, I can do restore by /replication hander.
> > What I did not get the idea is, how to use the following URLs, which are
> > from the "Making And Restoring Backups" section of the Solr Reference
> > Guide.
> >
> > 1.
> >
> http://localhost:8983/solr/admin/cores?action=CREATESNAPSHOT&core=techproducts&commitName=commit1
> > 2.
> >
> http://localhost:8983/solr/admin/cores?action=LISTSNAPSHOTS&core=techproducts&commitName=commit1
> > 3.
> >
> http://localhost:8983/solr/admin/cores?action=DELETESNAPSHOT&core=techproducts&commitName=commit1
> >
> >
> > It seems like a "snapshot" made by the CREATESNAPSHOT API holds only the
> > path to the index and the commit name.
> >
> > How can I use a "snapshot", which is generated by  CREATESNAPSHOT API?
> >
> >
> > Sincerely,
> > Kaya Ota
> >
>
>
> --
> --
> Regards,
>
> *Paras Lehana* [65871]
> Development Engineer, Auto-Suggest,
> IndiaMART Intermesh Ltd.
>
> 8th Floor, Tower A, Advant-Navis Business Park, Sector 142,
> Noida, UP, IN - 201303
>
> Mob.: +91-9560911996
> Work: 01203916600 | Extn:  *8173*
>


Re: A Last Message to the Solr Users

2019-11-27 Thread Walter Underwood
I’m a big fan of master/slave Solr. Super robust and trivial to scale out.

Solr Cloud has been useful for managing sharding and replicas, but less robust 
than I would like. Also less robust than my managers would like. It has gotten 
a bad reputation, only partially undeserved.

I’m also not especially impressed with the Solr community’s openness to changes 
from outside of the committers. At some point, I’ll do the fourth port of a 
small enhancement to edismax. I’d love to run stock Solr, but I can’t. Fuzzy 
search got 100X faster in 4.x, but it isn’t available in edismax, even though 
it makes a huge difference for misspelled queries.

Want to make a change that radically improves basic search? Implement IDF for 
phrases. Infoseek used that to beat Google in relevance for years. The patent 
has expired, so go for it.

Also, how about a free text parser so we don’t have to rip out all the Lucene 
syntax in queries?

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Nov 27, 2019, at 8:37 AM, Mark Miller  wrote:
> 
> If SolrCloud worked well I’d still agree both options are very valid
> depending on your use case. As it is, I’m embarrassed that people give me
> any credit for this. I’m here to try and delight users and I have failed in
> that. I tried to put a lot of my own time to address things outside of
> working on my job of integrating Hadoop and upgrading Solr 4 instances for
> years. But I couldn’t convince anyone of what was necessary to address what
> has been happening, and my paid job has always been doing other things
> since 2012.
> 
> On Wed, Nov 27, 2019 at 6:23 PM David Hastings 
> wrote:
> 
>> Personally I found nothing in solr cloud worth changing from standalone
>> for, and just added more complications, more servers, and required becoming
>> an expert/knowledgeable in zoo keeper, id rather spend my time developing
>> than becoming a systems administrator
>> 
>> On Wed, Nov 27, 2019 at 3:45 AM Mark Miller  wrote:
>> 
>>> This is your cue to come and make your jokes with your name attached.
>>> I’m
>>> sure the Solr users will appreciate them more than I do. I can’t laugh at
>>> this situation because I take production code seriously.
>>> 
>>> --
>>> - Mark
>>> 
>>> http://about.me/markrmiller
>>> 
>> --
> - Mark
> 
> http://about.me/markrmiller



Convert javabin to json

2019-11-27 Thread Wei
Hi,

Is there a reliable way to convert Solr's javabin response to JSON format?
We use the SolrJ client with wt=javabin, but want to convert the received
javabin response to JSON before passing it to the client.  We don't want to
use wt=json, as javabin is more efficient.  We tried the noggit JSONUtil:

https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/noggit/JSONUtil.java

but it seems it is not able to convert parts of the query response, such as
facets.  Are there any other options available?

Thanks,
Wei
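One stdlib-only workaround sketch: NamedList has an asMap(int) view (at least in recent SolrJ versions; worth verifying for your release), and once the response is reduced to plain Maps, Lists and primitives, a small recursive writer can emit JSON. This is an assumption-laden sketch, not a supported SolrJ path, and it deliberately skips control-character escaping:

```java
import java.util.Map;

public class JsonWriter {
    /** Serialize plain Maps/Lists/primitives (e.g. NamedList.asMap output) to JSON. */
    public static String toJson(Object o) {
        StringBuilder sb = new StringBuilder();
        write(o, sb);
        return sb.toString();
    }

    private static void write(Object o, StringBuilder sb) {
        if (o == null) {
            sb.append("null");
        } else if (o instanceof Map) {
            sb.append('{');
            boolean first = true;
            for (Map.Entry<?, ?> e : ((Map<?, ?>) o).entrySet()) {
                if (!first) sb.append(',');
                first = false;
                writeString(String.valueOf(e.getKey()), sb);
                sb.append(':');
                write(e.getValue(), sb);
            }
            sb.append('}');
        } else if (o instanceof Iterable) {
            sb.append('[');
            boolean first = true;
            for (Object v : (Iterable<?>) o) {
                if (!first) sb.append(',');
                first = false;
                write(v, sb);
            }
            sb.append(']');
        } else if (o instanceof Number || o instanceof Boolean) {
            sb.append(o);
        } else {
            // Everything else falls back to a quoted string.
            writeString(o.toString(), sb);
        }
    }

    private static void writeString(String s, StringBuilder sb) {
        sb.append('"');
        for (char c : s.toCharArray()) {
            if (c == '"' || c == '\\') sb.append('\\');
            sb.append(c);
        }
        sb.append('"');
    }
}
```

Whether asMap flattens every facet structure faithfully is exactly the open question here, so this would need testing against a real faceted response.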


Re: Prevent Solr overwriting documents

2019-11-27 Thread Walter Underwood
That would be “do-not-overwrite”.

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Nov 27, 2019, at 4:38 PM, Walter Underwood  wrote:
> 
> Even if that works, it is evil as something to leave in a client codebase. 
> Maybe a do-no-overwrite flag would be useful.
> 
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
> 
>> On Nov 27, 2019, at 3:24 PM, Alexandre Rafalovitch  
>> wrote:
>> 
>> How about Optimistic Concurrency with _version_ set to negative value?
>> 
>> You could inject that extra value in URP chain if need be.
>> 
>> Regards,
>>   Alex
>> 
>> On Wed, Nov 27, 2019, 5:41 PM Aaron Hoffer,  wrote:
>> 
>>> We want to prevent Solr from overwriting an existing document if document's
>>> ID already exists in the core.
>>> 
>>> This unit test fails because the update/overwrite is permitted:
>>> 
>>> public void testUpdateProhibited() {
>>> final Index index = baseInstance();
>>> indexRepository.save(index);
>>> Index index0 = indexRepository.findById(INDEX_ID).get();
>>> index0.setContents("AAA");
>>> indexRepository.save(index0);
>>> Index index1 = indexRepository.findById(INDEX_ID).get();
>>> assertThat(index, equalTo(index1));
>>> }
>>> 
>>> 
>>> The failure is:
>>> Expected: 
>>> but: was >> ...>
>>> 
>>> What do I need to do prevent the second save from overwriting the existing
>>> document?
>>> 
>>> I commented out the updateHandler in the solr config file, to no effect.
>>> We are using Spring Data with Solr 8.1.
>>> In the core's schema, id is defined as unique  like this:
>>> id
>>> 
> 



Re: Prevent Solr overwriting documents

2019-11-27 Thread Walter Underwood
Even if that works, it is evil as something to leave in a client codebase. 
Maybe a do-no-overwrite flag would be useful.

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Nov 27, 2019, at 3:24 PM, Alexandre Rafalovitch  wrote:
> 
> How about Optimistic Concurrency with _version_ set to negative value?
> 
> You could inject that extra value in URP chain if need be.
> 
> Regards,
>Alex
> 
> On Wed, Nov 27, 2019, 5:41 PM Aaron Hoffer,  wrote:
> 
>> We want to prevent Solr from overwriting an existing document if document's
>> ID already exists in the core.
>> 
>> This unit test fails because the update/overwrite is permitted:
>> 
>> public void testUpdateProhibited() {
>>  final Index index = baseInstance();
>>  indexRepository.save(index);
>>  Index index0 = indexRepository.findById(INDEX_ID).get();
>>  index0.setContents("AAA");
>>  indexRepository.save(index0);
>>  Index index1 = indexRepository.findById(INDEX_ID).get();
>>  assertThat(index, equalTo(index1));
>> }
>> 
>> 
>> The failure is:
>> Expected: 
>> but: was > ...>
>> 
>> What do I need to do prevent the second save from overwriting the existing
>> document?
>> 
>> I commented out the updateHandler in the solr config file, to no effect.
>> We are using Spring Data with Solr 8.1.
>> In the core's schema, id is defined as unique  like this:
>> id
>> 



Re: Prevent Solr overwriting documents

2019-11-27 Thread Alexandre Rafalovitch
Oops. And the link...
https://lucene.apache.org/solr/guide/6_6/updating-parts-of-documents.html#UpdatingPartsofDocuments-OptimisticConcurrency

On Wed, Nov 27, 2019, 6:24 PM Alexandre Rafalovitch, 
wrote:

> How about Optimistic Concurrency with _version_ set to negative value?
>
> You could inject that extra value in URP chain if need be.
>
> Regards,
> Alex
>
> On Wed, Nov 27, 2019, 5:41 PM Aaron Hoffer,  wrote:
>
>> We want to prevent Solr from overwriting an existing document if
>> document's
>> ID already exists in the core.
>>
>> This unit test fails because the update/overwrite is permitted:
>>
>> public void testUpdateProhibited() {
>>   final Index index = baseInstance();
>>   indexRepository.save(index);
>>   Index index0 = indexRepository.findById(INDEX_ID).get();
>>   index0.setContents("AAA");
>>   indexRepository.save(index0);
>>   Index index1 = indexRepository.findById(INDEX_ID).get();
>>   assertThat(index, equalTo(index1));
>> }
>>
>>
>> The failure is:
>> Expected: 
>> but: was > ...>
>>
>> What do I need to do prevent the second save from overwriting the existing
>> document?
>>
>> I commented out the updateHandler in the solr config file, to no effect.
>> We are using Spring Data with Solr 8.1.
>> In the core's schema, id is defined as unique  like this:
>> id
>>
>


Re: Prevent Solr overwriting documents

2019-11-27 Thread Alexandre Rafalovitch
How about Optimistic Concurrency with _version_ set to negative value?

You could inject that extra value in URP chain if need be.

Regards,
Alex

On Wed, Nov 27, 2019, 5:41 PM Aaron Hoffer,  wrote:

> We want to prevent Solr from overwriting an existing document if document's
> ID already exists in the core.
>
> This unit test fails because the update/overwrite is permitted:
>
> public void testUpdateProhibited() {
>   final Index index = baseInstance();
>   indexRepository.save(index);
>   Index index0 = indexRepository.findById(INDEX_ID).get();
>   index0.setContents("AAA");
>   indexRepository.save(index0);
>   Index index1 = indexRepository.findById(INDEX_ID).get();
>   assertThat(index, equalTo(index1));
> }
>
>
> The failure is:
> Expected: 
> but: was  ...>
>
> What do I need to do prevent the second save from overwriting the existing
> document?
>
> I commented out the updateHandler in the solr config file, to no effect.
> We are using Spring Data with Solr 8.1.
> In the core's schema, id is defined as unique  like this:
> id
>


Prevent Solr overwriting documents

2019-11-27 Thread Aaron Hoffer
We want to prevent Solr from overwriting an existing document if the
document's ID already exists in the core.

This unit test fails because the update/overwrite is permitted:

public void testUpdateProhibited() {
  final Index index = baseInstance();
  indexRepository.save(index);
  Index index0 = indexRepository.findById(INDEX_ID).get();
  index0.setContents("AAA");
  indexRepository.save(index0);
  Index index1 = indexRepository.findById(INDEX_ID).get();
  assertThat(index, equalTo(index1));
}


The failure is:
Expected: 
but: was 

What do I need to do to prevent the second save from overwriting the existing
document?

I commented out the updateHandler in the solr config file, to no effect.
We are using Spring Data with Solr 8.1.
In the core's schema, id is defined as unique, like this:
<uniqueKey>id</uniqueKey>
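Alexandre's suggestion in the replies above (optimistic concurrency) maps to this directly: per the ref guide, sending a document whose _version_ field is a negative value such as -1 asserts that the document must not already exist, so Solr rejects the add with a version conflict when the id is taken. A minimal sketch that just builds the raw JSON body for /update (plain Java; the field names mirror the test above, and the helper itself is hypothetical):

```java
public class InsertOnlyDoc {

    // Build a raw JSON body for /update. _version_ = -1 invokes Solr's
    // optimistic concurrency: the add is rejected with a version-conflict
    // error if a document with this id already exists.
    // Hypothetical helper; no JSON escaping of the inputs is done here.
    static String insertOnlyBody(String id, String contents) {
        return "[{\"id\":\"" + id + "\","
                + "\"contents\":\"" + contents + "\","
                + "\"_version_\":-1}]";
    }

    public static void main(String[] args) {
        // POST this to http://localhost:8983/solr/<core>/update
        // with Content-Type: application/json (core name is yours).
        System.out.println(insertOnlyBody("doc-1", "AAA"));
    }
}
```

With SolrJ or Spring Data the same effect should come from adding a _version_ field with value -1 to the document before saving; the second save in the test would then fail instead of silently overwriting.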


Re: Solr master issue : IndexNotFoundException

2019-11-27 Thread Shawn Heisey

On 11/27/2019 6:28 AM, Akreeti Agarwal wrote:

Caused by: org.apache.lucene.index.IndexNotFoundException: no segments* file 
found in 
LockValidatingDirectoryWrapper(NRTCachingDirectory(MMapDirectory@/solr-m/server/solr/sitecore_web_index/data/index
 lockFactory=org.apache.lucene.store.NativeFSLockFactory@5c6c24fd; 
maxCacheMB=48.0 maxMergeSizeMB=4.0))


The index looks like it's corrupt.  It's missing at least one of its files.

If you have deleted the index, then you must also delete the index 
directory itself.  If the index directory exists and is empty, Lucene 
will be unable to open it.


Thanks,
Shawn


Re: problem using Http2SolrClient with solr 8.3.0

2019-11-27 Thread Jörn Franke
Which JDK version? In this setting I would recommend JDK 11.

> Am 27.11.2019 um 22:00 schrieb Odysci :
> 
> Hi,
> I have a solr cloud setup using solr 8.3 and SolrJj, which works fine using
> the HttpSolrClient as well as the CloudSolrClient. I use 2 solr nodes with
> 3 Zookeeper nodes.
> Recently I configured my machines to handle ssl, http/2 and then I tried
> using in my java code the Http2SolrClient supported by SolrJ 8.3.0, but I
> got the following error at run time upon instantiating the Http2SolrClient
> object:
> 
> Has anyone seen this problem?
> Thanks
> Reinaldo
> ===
> 
> Oops: NoClassDefFoundError
> Unexpected error : Unexpected Error, caused by exception
> NoClassDefFoundError: org/eclipse/jetty/client/api/Request
> 
> play.exceptions.UnexpectedException: Unexpected Error
> at play.jobs.Job.onException(Job.java:180)
> at play.jobs.Job.call(Job.java:250)
> at Invocation.Job(Play!)
> Caused by: java.lang.NoClassDefFoundError:
> org/eclipse/jetty/client/api/Request
> at
> org.apache.solr.client.solrj.impl.Http2SolrClient$AsyncTracker.(Http2SolrClient.java:789)
> at
> org.apache.solr.client.solrj.impl.Http2SolrClient.(Http2SolrClient.java:131)
> at
> org.apache.solr.client.solrj.impl.Http2SolrClient$Builder.build(Http2SolrClient.java:833)
> ... more
> Caused by: java.lang.ClassNotFoundException:
> org.eclipse.jetty.client.api.Request
> at
> java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
> at
> java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
> at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
> ... 16 more
> ==


Re: problem using Http2SolrClient with solr 8.3.0

2019-11-27 Thread Houston Putman
Are you overriding the Jetty version in your application using SolrJ?

On Wed, Nov 27, 2019 at 4:00 PM Odysci  wrote:

> Hi,
> I have a solr cloud setup using solr 8.3 and SolrJj, which works fine using
> the HttpSolrClient as well as the CloudSolrClient. I use 2 solr nodes with
> 3 Zookeeper nodes.
> Recently I configured my machines to handle ssl, http/2 and then I tried
> using in my java code the Http2SolrClient supported by SolrJ 8.3.0, but I
> got the following error at run time upon instantiating the Http2SolrClient
> object:
>
> Has anyone seen this problem?
> Thanks
> Reinaldo
> ===
>
> Oops: NoClassDefFoundError
> Unexpected error : Unexpected Error, caused by exception
> NoClassDefFoundError: org/eclipse/jetty/client/api/Request
>
> play.exceptions.UnexpectedException: Unexpected Error
> at play.jobs.Job.onException(Job.java:180)
> at play.jobs.Job.call(Job.java:250)
> at Invocation.Job(Play!)
> Caused by: java.lang.NoClassDefFoundError:
> org/eclipse/jetty/client/api/Request
> at
>
> org.apache.solr.client.solrj.impl.Http2SolrClient$AsyncTracker.(Http2SolrClient.java:789)
> at
>
> org.apache.solr.client.solrj.impl.Http2SolrClient.(Http2SolrClient.java:131)
> at
>
> org.apache.solr.client.solrj.impl.Http2SolrClient$Builder.build(Http2SolrClient.java:833)
> ... more
> Caused by: java.lang.ClassNotFoundException:
> org.eclipse.jetty.client.api.Request
> at
>
> java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
> at
>
> java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
> at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
> ... 16 more
> ==
>


problem using Http2SolrClient with solr 8.3.0

2019-11-27 Thread Odysci
Hi,
I have a SolrCloud setup using Solr 8.3 and SolrJ, which works fine using
the HttpSolrClient as well as the CloudSolrClient. I use 2 Solr nodes with
3 ZooKeeper nodes.
Recently I configured my machines to handle SSL and HTTP/2, and then I tried
using the Http2SolrClient supported by SolrJ 8.3.0 in my Java code, but I
got the following error at run time upon instantiating the Http2SolrClient
object:

Has anyone seen this problem?
Thanks
Reinaldo
===

Oops: NoClassDefFoundError
Unexpected error : Unexpected Error, caused by exception
NoClassDefFoundError: org/eclipse/jetty/client/api/Request

play.exceptions.UnexpectedException: Unexpected Error
at play.jobs.Job.onException(Job.java:180)
at play.jobs.Job.call(Job.java:250)
at Invocation.Job(Play!)
Caused by: java.lang.NoClassDefFoundError:
org/eclipse/jetty/client/api/Request
at
org.apache.solr.client.solrj.impl.Http2SolrClient$AsyncTracker.<init>(Http2SolrClient.java:789)
at
org.apache.solr.client.solrj.impl.Http2SolrClient.<init>(Http2SolrClient.java:131)
at
org.apache.solr.client.solrj.impl.Http2SolrClient$Builder.build(Http2SolrClient.java:833)
... more
Caused by: java.lang.ClassNotFoundException:
org.eclipse.jetty.client.api.Request
at
java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at
java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
... 16 more
==
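The missing class here lives in Jetty's client jar: if the application (a Play app, judging by the trace) pulls in SolrJ without its transitive Jetty dependencies, or pins an incompatible Jetty version, org.eclipse.jetty.client.api.Request is absent at runtime and Http2SolrClient's constructor fails exactly like this. A hedged sketch of the kind of Maven dependency that would need to be on the classpath; the version property is a placeholder that must be set to whatever Jetty line SolrJ 8.3.0 ships with:

```xml
<!-- Sketch: ensure Jetty's HTTP client is on the runtime classpath.
     ${jetty.version} is a placeholder; match it to SolrJ 8.3.0's Jetty. -->
<dependency>
  <groupId>org.eclipse.jetty</groupId>
  <artifactId>jetty-client</artifactId>
  <version>${jetty.version}</version>
</dependency>
```

Checking the dependency tree (mvn dependency:tree or the Gradle equivalent) for excluded or overridden org.eclipse.jetty artifacts is the quickest way to confirm this, which is also what Houston's question about overriding the Jetty version is getting at.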


Re: A Last Message to the Solr Users

2019-11-27 Thread Mark Miller
If SolrCloud worked well I’d still agree both options are very valid
depending on your use case. As it is, I’m embarrassed that people give me
any credit for this. I’m here to try and delight users and I have failed in
that. I tried to put a lot of my own time to address things outside of
working on my job of integrating Hadoop and upgrading Solr 4 instances for
years. But I couldn’t convince anyone of what was necessary to address what
has been happening, and my paid job has always been doing other things
since 2012.

On Wed, Nov 27, 2019 at 6:23 PM David Hastings 
wrote:

> Personally I found nothing in solr cloud worth changing from standalone
> for, and just added more complications, more servers, and required becoming
> an expert/knowledgeable in zoo keeper, id rather spend my time developing
> than becoming a systems administrator
>
> On Wed, Nov 27, 2019 at 3:45 AM Mark Miller  wrote:
>
>> This is your cue to come and make your jokes with your name attached.
>> I’m
>> sure the Solr users will appreciate them more than I do. I can’t laugh at
>> this situation because I take production code seriously.
>>
>> --
>> - Mark
>>
>> http://about.me/markrmiller
>>
> --
- Mark

http://about.me/markrmiller


Re: A Last Message to the Solr Users

2019-11-27 Thread David Hastings
Personally I found nothing in SolrCloud worth changing from standalone for;
it just added more complications and more servers, and required becoming
an expert in ZooKeeper. I’d rather spend my time developing than becoming
a systems administrator.

On Wed, Nov 27, 2019 at 3:45 AM Mark Miller  wrote:

> This is your cue to come and make your jokes with your name attached. I’m
> sure the Solr users will appreciate them more than I do. I can’t laugh at
> this situation because I take production code seriously.
>
> --
> - Mark
>
> http://about.me/markrmiller
>


Re: Solr master issue : IndexNotFoundException

2019-11-27 Thread Atita Arora
Did you happen to check the permissions? Give it a try with 777; maybe
the user running Solr does not have permission to access the index dir.

On Wed, Nov 27, 2019 at 3:45 PM Akreeti Agarwal  wrote:

> Hi,
>
> I removed the write.lock file from the index and then restarted the solr
> server, but still the same issue was observed.
>
> Thanks & Regards,
> Akreeti Agarwal
> (M) +91-8318686601
>
> -Original Message-
> From: Atita Arora 
> Sent: Wednesday, November 27, 2019 7:21 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Solr master issue : IndexNotFoundException
>
> It seems to be either the permission problem or maybe because of the
> write.lock file not removed due to process kill.
>
> Did you happen to check this one ?
>
> https://lucene.472066.n3.nabble.com/SolrCore-collection1-is-not-available-due-to-init-failure-td4094869.html
>
> On Wed, Nov 27, 2019 at 2:28 PM Akreeti Agarwal  wrote:
>
> > Hi All,
> >
> > I am getting these two errors after restarting my solr master server:
> >
> > null:org.apache.solr.common.SolrException: SolrCore 'sitecore_web_index'
> > is not available due to init failure: Error opening new searcher
> >
> > Caused by: org.apache.lucene.index.IndexNotFoundException: no
> > segments* file found in
> > LockValidatingDirectoryWrapper(NRTCachingDirectory(MMapDirectory@/solr
> > -m/server/solr/sitecore_web_index/data/index
> > lockFactory=org.apache.lucene.store.NativeFSLockFactory@5c6c24fd;
> > maxCacheMB=48.0 maxMergeSizeMB=4.0))
> >
> > Please help to resolve this.
> >
> > Thanks & Regards,
> > Akreeti Agarwal
> > (M) +91-8318686601
> >
> > ::DISCLAIMER::
> > 
> > The contents of this e-mail and any attachment(s) are confidential and
> > intended for the named recipient(s) only. E-mail transmission is not
> > guaranteed to be secure or error-free as information could be
> > intercepted, corrupted, lost, destroyed, arrive late or incomplete, or
> > may contain viruses in transmission. The e mail and its contents (with
> > or without referred errors) shall therefore not attach any liability
> > on the originator or HCL or its affiliates. Views or opinions, if any,
> > presented in this email are solely those of the author and may not
> > necessarily reflect the views or opinions of HCL or its affiliates.
> > Any form of reproduction, dissemination, copying, disclosure,
> > modification, distribution and / or publication of this message
> > without the prior written consent of authorized representative of HCL
> > is strictly prohibited. If you have received this email in error
> > please delete it and notify the sender immediately. Before opening any
> > email and/or attachments, please check them for viruses and other
> defects.
> > 
> >
>


RE: Solr master issue : IndexNotFoundException

2019-11-27 Thread Akreeti Agarwal
Hi,

I removed the write.lock file from the index and then restarted the Solr
server, but the same issue was still observed.

Thanks & Regards,
Akreeti Agarwal
(M) +91-8318686601

-Original Message-
From: Atita Arora  
Sent: Wednesday, November 27, 2019 7:21 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr master issue : IndexNotFoundException

It seems to be either the permission problem or maybe because of the write.lock 
file not removed due to process kill.

Did you happen to check this one ?
https://lucene.472066.n3.nabble.com/SolrCore-collection1-is-not-available-due-to-init-failure-td4094869.html

On Wed, Nov 27, 2019 at 2:28 PM Akreeti Agarwal  wrote:

> Hi All,
>
> I am getting these two errors after restarting my solr master server:
>
> null:org.apache.solr.common.SolrException: SolrCore 'sitecore_web_index'
> is not available due to init failure: Error opening new searcher
>
> Caused by: org.apache.lucene.index.IndexNotFoundException: no 
> segments* file found in 
> LockValidatingDirectoryWrapper(NRTCachingDirectory(MMapDirectory@/solr
> -m/server/solr/sitecore_web_index/data/index
> lockFactory=org.apache.lucene.store.NativeFSLockFactory@5c6c24fd;
> maxCacheMB=48.0 maxMergeSizeMB=4.0))
>
> Please help to resolve this.
>
> Thanks & Regards,
> Akreeti Agarwal
> (M) +91-8318686601
>


Re: Solr master issue : IndexNotFoundException

2019-11-27 Thread Atita Arora
It seems to be either a permissions problem, or the write.lock file was not
removed because the process was killed.

Did you happen to check this one ?
https://lucene.472066.n3.nabble.com/SolrCore-collection1-is-not-available-due-to-init-failure-td4094869.html

On Wed, Nov 27, 2019 at 2:28 PM Akreeti Agarwal  wrote:

> Hi All,
>
> I am getting these two errors after restarting my solr master server:
>
> null:org.apache.solr.common.SolrException: SolrCore 'sitecore_web_index'
> is not available due to init failure: Error opening new searcher
>
> Caused by: org.apache.lucene.index.IndexNotFoundException: no segments*
> file found in
> LockValidatingDirectoryWrapper(NRTCachingDirectory(MMapDirectory@/solr-m/server/solr/sitecore_web_index/data/index
> lockFactory=org.apache.lucene.store.NativeFSLockFactory@5c6c24fd;
> maxCacheMB=48.0 maxMergeSizeMB=4.0))
>
> Please help to resolve this.
>
> Thanks & Regards,
> Akreeti Agarwal
> (M) +91-8318686601
>


Solr master issue : IndexNotFoundException

2019-11-27 Thread Akreeti Agarwal
Hi All,

I am getting these two errors after restarting my solr master server:

null:org.apache.solr.common.SolrException: SolrCore 'sitecore_web_index' is not 
available due to init failure: Error opening new searcher

Caused by: org.apache.lucene.index.IndexNotFoundException: no segments* file 
found in 
LockValidatingDirectoryWrapper(NRTCachingDirectory(MMapDirectory@/solr-m/server/solr/sitecore_web_index/data/index
 lockFactory=org.apache.lucene.store.NativeFSLockFactory@5c6c24fd; 
maxCacheMB=48.0 maxMergeSizeMB=4.0))

Please help to resolve this.

Thanks & Regards,
Akreeti Agarwal
(M) +91-8318686601




Re: Icelandic support in Solr

2019-11-27 Thread Andrzej Białecki
If I’m not mistaken, Hunspell supports Icelandic (see 
https://cgit.freedesktop.org/libreoffice/dictionaries/tree/is), and Lucene’s 
HunspellStemFilter should be able to use these dictionaries.
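A field type wired up with the Hunspell filter might look like the sketch below. The dictionary and affix file names (`is.dic`, `is.aff`) are assumptions; use whatever the libreoffice dictionary package actually ships, copied into the core's conf directory:

```xml
<!-- Sketch: Icelandic analysis via Hunspell dictionaries.
     File names are assumptions; match the files you download. -->
<fieldType name="text_is" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.HunspellStemFilterFactory"
            dictionary="is.dic" affix="is.aff" ignoreCase="true"/>
  </analyzer>
</fieldType>
```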

> On 27 Nov 2019, at 10:10, Charlie Hull  wrote:
> 
> On 26/11/2019 16:35, Mikhail Ibraheem wrote:
>> Hi, does Solr support Icelandic out of the box? If not, can you 
>> please let me know how to add that with custom analyzers?
>> Thanks
> 
> The Snowball stemmer project which is used by Solr 
> (https://snowballstem.org/algorithms/ - co-created by Martin Porter, author 
> of the famous stemmer) doesn't support Icelandic unfortunately. I can't find 
> any other stemmers that you could use in Solr.
> 
> Basis Technology offer various commercial software for language processing 
> that can work with Solr and other engines, not sure if they support Icelandic.
> 
> So, not very positive I'm afraid: you could look into creating your own 
> stemmer using Snowball, or some heuristic approaches, but you'd need a good 
> grasp of the structure of the language.
> 
> 
> Best
> 
> 
> Charlie
> 
> 
> -- 
> Charlie Hull
> Flax - Open Source Enterprise Search
> 
> tel/fax: +44 (0)8700 118334
> mobile:  +44 (0)7767 825828
> web: www.flax.co.uk
> 



Re: Some newby questions ...

2019-11-27 Thread Christian Dannemann
Hmmm... I tried that as well, but it doesn't pick up the security.json
settings.

I run this instance on a computer that is on the internet, so just changing
the port is asking for trouble.

Looks like nobody knows how to import imap data 

Best regards,

Christian

On Tue, 26 Nov 2019 at 23:55, Shawn Heisey  wrote:

> On 11/26/2019 2:17 PM, Christian Dannemann wrote:
> > Issue 1: I want to secure my server with basic authentication (that's why
> > I'm running on port 10539 at the moment, but that's not security ...
> >
> > I've put a file security.json in
> > /opt/solr/server/solr/configsets/_default/conf, but that doesn't do
> > anything.
>
> Your solr home appears to be /var/solr/data ... that is where you need
> to place the security.json file for a setup that is not running SolrCloud.
>
>
> https://lucene.apache.org/solr/guide/8_3/authentication-and-authorization-plugins.html#in-standalone-mode
>
> If you install your Solr server in a network location where it cannot be
> reached by people you cannot trust, there is usually no need for
> security measures like authentication.
>
> > Issue 2: I would like to index a lot of emails that reside on a local
> imap
> > server (dovecot).
>
> I've got no idea how to use the dataimport imap capability.
>
> Thanks,
> Shawn
>


-- 
Christian Dannemann
Managing Director
Merus Software Ltd
http://merus.eu
DDI:+44 1453 708610
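For reference, a minimal `security.json` enabling Basic Auth, placed in the solr home (`/var/solr/data` in this setup) for standalone mode. This is the stock example from the Solr Reference Guide; the hash corresponds to user `solr` / password `SolrRocks`, which you should change immediately:

```json
{
  "authentication": {
    "blockUnknown": true,
    "class": "solr.BasicAuthPlugin",
    "credentials": {
      "solr": "IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="
    }
  },
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "permissions": [{"name": "security-edit", "role": "admin"}],
    "user-role": {"solr": "admin"}
  }
}
```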


Re: Production Issue: cannot connect to solr server suddenly

2019-11-27 Thread Vignan Malyala
Sure!

Error:
Failed to connect to server at '
http://127.0.0.1:8983/solr/my_core/update/?commit=true
',
are you sure that URL is correct? Checking it in a browser might help:
HTTPConnectionPool(host='127.0.0.1', port=8983): Max retries exceeded with
url: /solr/my_core/update/?commit=true (Caused by
NewConnectionError(': Failed to establish a new connection: [Errno 111]
Connection refused',))

I'm using python with pysolr to connect to Solr Instance.
On my production server this error appears intermittently in my Python
logs when the application connects to Solr to index or to search. The
Solr admin UI works fine at the same time, yet at certain moments the
connection is refused, so my users cannot index documents or search.
Please help with this issue.

Should I increase the number of threads, and if so, how? Should I
increase physical memory? What is the solution?
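An intermittent "Connection refused" from a local Solr usually means the JVM was not listening at that instant (restart, crash, OOM kill) rather than a thread shortage, so check Solr's own logs for those timestamps. On the client side, wrapping the pysolr call in a retry-with-backoff helper papers over brief outages. A minimal stdlib sketch of the pattern; `flaky_update` is a hypothetical stand-in for the real pysolr `add`/`search` call:

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn(), retrying with exponential backoff on ConnectionError."""
    for i in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if i == attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** i))

# Simulate a backend that refuses twice, then succeeds.
calls = {"n": 0}
def flaky_update():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("Connection refused")
    return "ok"

print(with_retries(flaky_update))  # prints "ok" after two retries
```

In real code the wrapped callable would be something like `lambda: solr.add(docs)` on a `pysolr.Solr` instance, and the exceptions caught would include the pysolr/requests connection errors as well.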

On Tue, Nov 26, 2019 at 3:51 PM Paras Lehana 
wrote:

> Hi Sai,
>
> Please elaborate. What language is the code written in? Why is there
> google.com in the query?
>
> Max retries exceeded with url
>
>
> This happens when you make too many requests on a server than allowed.
> Check with server at solradmin in case you have DoS or related policy
> preventing this.
>
> On Mon, 25 Nov 2019 at 16:39, Vignan Malyala  wrote:
>
> > I don't get this error always. At certain times, I get this error with my
> > Solr suddenly.
> > However, If I check my Solr url, it will be working but. When I want to
> > update via code, it will not work.
> > Please help me out with this.
> >
> > ERROR:
> > *Failed to connect to server at
> > 'http://127.0.0.1:8983/solr/my_core/update/?commit=true
> > <
> >
> https://www.google.com/url?q=http://solradmin:Red8891@127.0.0.1:8983/solr/tenant_311/update/?commit%3Dtrue=D=hangouts=1574765671451000=AFQjCNGE326wW7hZNwLUH2dEw8scCTyEXw
> > >',
> > are you sure that URL is correct? Checking it in a browser might help:
> > HTTPConnectionPool(host='127.0.0.1', port=8983): Max retries exceeded
> with
> > url: /solr/my_core/update/?commit=true (Caused by
> > NewConnectionError(' > 0x7efd7be78a98>: Failed to establish a new connection: [Errno 111]
> > Connection refused',))*
> >
> >
> >
> >
> > Regards,
> > Sai Vignan
> >
>
>
> --
> --
> Regards,
>
> *Paras Lehana* [65871]
> Development Engineer, Auto-Suggest,
> IndiaMART Intermesh Ltd.
>
> 8th Floor, Tower A, Advant-Navis Business Park, Sector 142,
> Noida, UP, IN - 201303
>
> Mob.: +91-9560911996
> Work: 01203916600 | Extn:  *8173*
>
> --
> IMPORTANT:
> NEVER share your IndiaMART OTP/ Password with anyone.
>


Re: Custom sort function

2019-11-27 Thread Sripra deep
My exact requirement is: I add a new sort field for each set of documents,
so the number of fields grows to around 10k
(custom_sort1, custom_sort2, etc.), one per group, where each group
collects a set of documents. Since a document can belong to more than one
group, I cannot use a single per-group field.
So I tried a single field per document in the format
"custom_field:group1=1$group2=3". At query time I filter all the group1
documents, parse this field, and return the respective sort number.
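The per-document encoding described above can be prototyped outside Solr first. A minimal parser for the assumed `group1=1$group2=3` format (names and format taken from the message; the helper itself is illustrative):

```python
def parse_group_ranks(encoded: str) -> dict:
    """Parse 'group1=1$group2=3' into {group_name: rank}."""
    ranks = {}
    for pair in encoded.split("$"):
        key, sep, value = pair.partition("=")
        if sep:  # skip malformed or empty pairs
            ranks[key] = float(value)
    return ranks

print(parse_group_ranks("group1=1$group2=3"))  # {'group1': 1.0, 'group2': 3.0}
```

Doing this string parsing per document per query is the likely hot spot; precomputing the rank into a docValues field per group, or caching parsed values per segment, avoids it.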

My solr version is *7.1.0*.

*This is my custom Class:*

import java.io.IOException;
import java.util.Map;

import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.queries.function.FunctionValues;
import org.apache.lucene.queries.function.ValueSource;
import org.apache.solr.common.SolrException;

public class CustomSortValueSource extends ValueSource {

    protected final boolean isasc;
    protected final String s1;
    protected final ValueSource source;

    // Constructor renamed to match the class ("CustomSort" would not compile).
    public CustomSortValueSource(ValueSource source, String s1, String order) {
        if (source == null || s1 == null || order == null) {
            throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
                    "One or more inputs missing");
        }
        this.s1 = s1;
        this.source = source;
        this.isasc = "asc".equals(order);
    }

    @Override
    public FunctionValues getValues(Map context, LeafReaderContext reader)
            throws IOException {
        final FunctionValues vals = source.getValues(context, reader);
        return new FunctionValues() {
            @Override
            public float floatVal(int doc) {
                // Float.MIN_VALUE is the smallest *positive* float;
                // -Float.MAX_VALUE is the correct "sorts last" default.
                float toRet = isasc ? Float.MAX_VALUE : -Float.MAX_VALUE;
                /*
                 * My custom logic in reading a string array (via vals) and
                 * returning a value out of some condition checks.
                 */
                return toRet;
            }

            @Override
            public String toString(int doc) {
                // FunctionValues is abstract on toString(int doc).
                return description() + '=' + floatVal(doc);
            }
        };
    }

    // ValueSource also requires description/equals/hashCode:
    @Override
    public String description() {
        return "customsort(" + source.description() + "," + s1 + ")";
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof CustomSortValueSource
                && source.equals(((CustomSortValueSource) o).source)
                && s1.equals(((CustomSortValueSource) o).s1);
    }

    @Override
    public int hashCode() {
        return 31 * source.hashCode() + s1.hashCode();
    }
}

There is nothing happening in the constructor other than assigning.

Thanks,
Sripradeep P


On Wed, Nov 27, 2019 at 2:30 PM Jörn Franke  wrote:

> Maybe can you do it as part of the loading to calculate the score?
> Which Solr version are you using?
> Are you doing some heavily lifting into the constructor or your filter?
>
> Am 27.11.2019 um 09:34 schrieb Sripra deep :
>
> 
> Hi Jörn Franke,
>
>   I modified the custom function to just return a constant value as 1.0
> for all the docs and ran the load again, the latency is worst like more
> than 20sec. The filter I am using will fetch 15k documents (so this
> function is called 15k times). And if I don't call this function in my
> query then latency is less than 10ms.
>
> So it looks like some fundamental issue with this approach and I believe
> its been used in many business scenarios. Looking out for what's going
>  wrong.
>
> Thanks,
> Sripradeep P
>
>
> On Wed, Nov 27, 2019 at 1:00 PM Jörn Franke  wrote:
>
>> And have you tried how fast it is if you don’t do anything in this
>> method?
>>
>> > Am 27.11.2019 um 07:52 schrieb Sripra deep > >:
>> >
>> > Hi Team,
>> >  I wrote a custom sort function that will read the field value and parse
>> > and returns a float value that will be used for sorting. this field is
>> > indexed, stored and docvalues enabled. my latency increases drastically
>> > from 10ms when the filter loads 50 documents and 200ms when it loads 250
>> > documents and 1sec when it loads 1000 documents.
>> >
>> > This is my function:
>> >
>> > @Override
>> > public FunctionValues getValues(Map context, LeafReaderContext reader)
>> > throws IOException {
>> > final FunctionValues vals = source.getValues(context, reader);
>> > return new FunctionValues() {
>> >public float floatVal(int doc) {
>> >float toRet = isasc ? Float.MAX_VALUE : Float.MIN_VALUE;
>> >/*
>> >My custom logic in reading a string array and returning a value
>> out
>> > of some condition checks
>> >*/
>> >}
>> >}
>> > }
>> >
>> > Can you suggest me some suggestions on this ?
>> >
>> > Thanks,
>> > Sripradeep P
>>
>


Re: Icelandic support in Solr

2019-11-27 Thread Charlie Hull

On 26/11/2019 16:35, Mikhail Ibraheem wrote:

Hi, does Solr support Icelandic out of the box? If not, can you please 
let me know how to add that with custom analyzers?
Thanks


The Snowball stemmer project which is used by Solr 
(https://snowballstem.org/algorithms/ - co-created by Martin Porter, 
author of the famous stemmer) doesn't support Icelandic unfortunately. I 
can't find any other stemmers that you could use in Solr.


Basis Technology offer various commercial software for language 
processing that can work with Solr and other engines, not sure if they 
support Icelandic.


So, not very positive I'm afraid: you could look into creating your own 
stemmer using Snowball, or some heuristic approaches, but you'd need a 
good grasp of the structure of the language.



Best


Charlie


--
Charlie Hull
Flax - Open Source Enterprise Search

tel/fax: +44 (0)8700 118334
mobile:  +44 (0)7767 825828
web: www.flax.co.uk



Re: Custom sort function

2019-11-27 Thread Jörn Franke
Maybe can you do it as part of the loading to calculate the score?
Which Solr version are you using? 
Are you doing some heavily lifting into the constructor or your filter?

> Am 27.11.2019 um 09:34 schrieb Sripra deep :
> 
> 
> Hi Jörn Franke,
> 
>   I modified the custom function to just return a constant value as 1.0 for 
> all the docs and ran the load again, the latency is worst like more than 
> 20sec. The filter I am using will fetch 15k documents (so this function is 
> called 15k times). And if I don't call this function in my query then latency 
> is less than 10ms. 
> 
> So it looks like some fundamental issue with this approach and I believe its 
> been used in many business scenarios. Looking out for what's going wrong. 
> 
> Thanks,
> Sripradeep P
> 
> 
>> On Wed, Nov 27, 2019 at 1:00 PM Jörn Franke  wrote:
>> And have you tried how fast it is if you don’t do anything in this method? 
>> 
>> > Am 27.11.2019 um 07:52 schrieb Sripra deep :
>> > 
>> > Hi Team,
>> >  I wrote a custom sort function that will read the field value and parse
>> > and returns a float value that will be used for sorting. this field is
>> > indexed, stored and docvalues enabled. my latency increases drastically
>> > from 10ms when the filter loads 50 documents and 200ms when it loads 250
>> > documents and 1sec when it loads 1000 documents.
>> > 
>> > This is my function:
>> > 
>> > @Override
>> > public FunctionValues getValues(Map context, LeafReaderContext reader)
>> > throws IOException {
>> > final FunctionValues vals = source.getValues(context, reader);
>> > return new FunctionValues() {
>> >public float floatVal(int doc) {
>> >float toRet = isasc ? Float.MAX_VALUE : Float.MIN_VALUE;
>> >/*
>> >My custom logic in reading a string array and returning a value out
>> > of some condition checks
>> >*/
>> >}
>> >}
>> > }
>> > 
>> > Can you suggest me some suggestions on this ?
>> > 
>> > Thanks,
>> > Sripradeep P


Re: A Last Message to the Solr Users

2019-11-27 Thread Mark Miller
This is your cue to come and make your jokes with your name attached. I’m
sure the Solr users will appreciate them more than I do. I can’t laugh at
this situation because I take production code seriously.

-- 
- Mark

http://about.me/markrmiller


Re: Custom sort function

2019-11-27 Thread Sripra deep
Hi Jörn Franke,

  I modified the custom function to just return a constant value of 1.0 for
all the docs and ran the load again; the latency is far worse, more than
20 sec. The filter I am using fetches 15k documents (so this function is
called 15k times), and if I don't call this function in my query the
latency is less than 10ms.

So it looks like there is some fundamental issue with this approach, and I
believe it's been used in many business scenarios. Looking out for what's
going wrong.

Thanks,
Sripradeep P


On Wed, Nov 27, 2019 at 1:00 PM Jörn Franke  wrote:

> And have you tried how fast it is if you don’t do anything in this method?
>
> > Am 27.11.2019 um 07:52 schrieb Sripra deep  >:
> >
> > Hi Team,
> >  I wrote a custom sort function that will read the field value and parse
> > and returns a float value that will be used for sorting. this field is
> > indexed, stored and docvalues enabled. my latency increases drastically
> > from 10ms when the filter loads 50 documents and 200ms when it loads 250
> > documents and 1sec when it loads 1000 documents.
> >
> > This is my function:
> >
> > @Override
> > public FunctionValues getValues(Map context, LeafReaderContext reader)
> > throws IOException {
> > final FunctionValues vals = source.getValues(context, reader);
> > return new FunctionValues() {
> >public float floatVal(int doc) {
> >float toRet = isasc ? Float.MAX_VALUE : Float.MIN_VALUE;
> >/*
> >My custom logic in reading a string array and returning a value
> out
> > of some condition checks
> >*/
> >}
> >}
> > }
> >
> > Can you suggest me some suggestions on this ?
> >
> > Thanks,
> > Sripradeep P
>


Re: A Last Message to the Solr Users

2019-11-27 Thread Mark Miller
And if you are a developer, enjoy that Gradle build! It was the highlight
of my year.

On Wed, Nov 27, 2019 at 10:00 AM Mark Miller  wrote:

> If you have a SolrCloud installation that is somehow working for you,
> personally I would never upgrade. The software is getting progressively
> more unstable every release.
>
>
> I wrote most of the core of SolrCloud in a prototype fashion many, many
> years ago. Only Yonik’s isolated work is solid and most of my work still
> stands as it was. This situation has me abandoning that project so that
> people understand I won’t stand by garbage work.
>
> Given that no one seems to understand what is happening in SolrCloud under
> the covers or how it was intended to work, their best bet is to start
> rewriting. Until they do this, I recommend you do not upgrade from an
> install that is working for your needs. A new feature will not be worth the
> headaches.
>
>
> Some of the other committers, who certainly do not understand the scope of
> the problem or my code (they would have touched it a bit if they did) would
> prefer to laugh or form a defensive posture than fix the situation. Wait
> them out. The project will collapse or get better. If I ran a production
> instance of SolrCloud, I would wait to see which happens first before
> embracing any update.
>
>
> At this point, the best way to use Solr is as it’s always been - avoid
> SolrCloud and setup your own system in standalone mode. If I had to build a
> new Solr install today, this is what I would do.
>
>
> In my opinion, the companies that have been claiming to back Solr and
> SolrCloud have been negligent, and all of the users are paying the price.
> It hasn’t been my job to work on it in any real fashion since 2012. I’m
> sorry I couldn’t help improve the situation for you.
>
>
> Take it for what it’s worth. To some, not much I’m sure.
>
>
> Mark Miller
> --
> - Mark
>
> http://about.me/markrmiller
>
-- 
- Mark

http://about.me/markrmiller


A Last Message to the Solr Users

2019-11-27 Thread Mark Miller
If you have a SolrCloud installation that is somehow working for you,
personally I would never upgrade. The software is getting progressively
more unstable every release.


I wrote most of the core of SolrCloud in a prototype fashion many, many
years ago. Only Yonik’s isolated work is solid and most of my work still
stands as it was. This situation has me abandoning that project so that
people understand I won’t stand by garbage work.

Given that no one seems to understand what is happening in SolrCloud under
the covers or how it was intended to work, their best bet is to start
rewriting. Until they do this, I recommend you do not upgrade from an
install that is working for your needs. A new feature will not be worth the
headaches.


Some of the other committers, who certainly do not understand the scope of
the problem or my code (they would have touched it a bit if they did) would
prefer to laugh or form a defensive posture than fix the situation. Wait
them out. The project will collapse or get better. If I ran a production
instance of SolrCloud, I would wait to see which happens first before
embracing any update.


At this point, the best way to use Solr is as it’s always been - avoid
SolrCloud and setup your own system in standalone mode. If I had to build a
new Solr install today, this is what I would do.


In my opinion, the companies that have been claiming to back Solr and
SolrCloud have been negligent, and all of the users are paying the price.
It hasn’t been my job to work on it in any real fashion since 2012. I’m
sorry I couldn’t help improve the situation for you.


Take it for what it’s worth. To some, not much I’m sure.


Mark Miller
-- 
- Mark

http://about.me/markrmiller