TLOG+Pull replicas

2019-02-02 Thread Omar Tamer
Hi,

I have a collection with 2 TLOG and 6 PULL replicas. When one of the PULL
nodes dies and is replaced with a new instance, the replacement comes up as an
NRT replica in the collection (the desired behavior is for it to be replaced by
a PULL replica).

The collection is being created by:

/admin/collections?action=CREATE&name=blog&numShards=1&tlogReplicas=2&collection.configName=blog&pullReplicas=6&autoAddReplicas=true

"cluster-policy": [{"replica": "#ALL","type": "PULL","ip_3": "20"},{"replica
": "#ALL","type": "TLOG","ip_3": "16"},{"replica": "1","node": "#EACH"}],
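For reference, here is a sketch of a CREATE call that passes the replica-type counts directly; the URL below is an assumption reconstructed from the description above, not the poster's exact command:

```shell
# Sketch: create a collection with 2 TLOG and 6 PULL replicas per shard.
# Host, port, and parameter values are illustrative assumptions.
SOLR="http://localhost:8983/solr"
CREATE_URL="$SOLR/admin/collections?action=CREATE&name=blog&numShards=1&tlogReplicas=2&pullReplicas=6&autoAddReplicas=true"
echo "$CREATE_URL"
# curl "$CREATE_URL"   # run against a live cluster
```

The point is that the replica counts per type go to the CREATE call itself, in addition to whatever cluster policy is in place.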

thanks


RE: [EXTERNAL] Re: [CDCR]Unable to locate core

2019-02-02 Thread Timothy Springsteen
Thank you for the reply. Sorry I did not include more information in the first 
post.

Maybe there's some confusion here on my end. Both the target and source
clusters are running in cloud mode, so I think you're correct that it is a
different issue. It looks like replication from the source leader to the target
leader is successful, but the target leader is then unsuccessful in replicating
to its followers.

The "unable to locate core" message is originally coming from the target 
cluster.
Here are the logs being generated from the source for reference:
2019-02-02 20:10:19.551 INFO  
(cdcr-bootstrap-status-81-thread-1-processing-n:sourcehost001.com:30100_solr 
x:testcollection_shard3_replica_n10 c:testcollection s:shard3 r:core_node12) 
[c:testcollection s:shard3 r:core_node12 x:testcollection_shard3_replica_n10] 
o.a.s.h.CdcrReplicatorManager CDCR bootstrap successful in 3 seconds
2019-02-02 20:10:19.564 INFO  
(cdcr-bootstrap-status-81-thread-1-processing-n:sourcehost001.com:30100_solr 
x:testcollection_shard3_replica_n10 c:testcollection s:shard3 r:core_node12) 
[c:testcollection s:shard3 r:core_node12 x:testcollection_shard3_replica_n10] 
o.a.s.h.CdcrReplicatorManager Create new update log reader for target 
testcollection with checkpoint 1624389130873995265 @ testcollection:shard3
2019-02-02 20:10:19.568 ERROR 
(cdcr-bootstrap-status-81-thread-1-processing-n:sourcehost001.com:30100_solr 
x:testcollection_shard3_replica_n10 c:testcollection s:shard3 r:core_node12) 
[c:testcollection s:shard3 r:core_node12 x:testcollection_shard3_replica_n10] 
o.a.s.h.CdcrReplicatorManager Unable to bootstrap the target collection 
testcollection shard: shard3
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://targethost001.com:30100/solr: Unable to locate core 
testcollection_shard2_replica_n4
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
 ~[solr-solrj-7.5.0.jar:7.5.0 b5bf70b7e32d7ddd9742cc821d471c5fabd4e3df - jimczi 
- 2018-09-18 13:07:58]
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
 ~[solr-solrj-7.5.0.jar:7.5.0 b5bf70b7e32d7ddd9742cc821d471c5fabd4e3df - jimczi 
- 2018-09-18 13:07:58]
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
 ~[solr-solrj-7.5.0.jar:7.5.0 b5bf70b7e32d7ddd9742cc821d471c5fabd4e3df - jimczi 
- 2018-09-18 13:07:58]
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
 ~[solr-solrj-7.5.0.jar:7.5.0 b5bf70b7e32d7ddd9742cc821d471c5fabd4e3df - jimczi 
- 2018-09-18 13:07:58]
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
 ~[solr-solrj-7.5.0.jar:7.5.0 b5bf70b7e32d7ddd9742cc821d471c5fabd4e3df - jimczi 
- 2018-09-18 13:07:58]
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107)
 ~[solr-solrj-7.5.0.jar:7.5.0 b5bf70b7e32d7ddd9742cc821d471c5fabd4e3df - jimczi 
- 2018-09-18 13:07:58]
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
 ~[solr-solrj-7.5.0.jar:7.5.0 b5bf70b7e32d7ddd9742cc821d471c5fabd4e3df - jimczi 
- 2018-09-18 13:07:58]
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
 ~[solr-solrj-7.5.0.jar:7.5.0 b5bf70b7e32d7ddd9742cc821d471c5fabd4e3df - jimczi 
- 2018-09-18 13:07:58]
at 
org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219) 
~[solr-solrj-7.5.0.jar:7.5.0 b5bf70b7e32d7ddd9742cc821d471c5fabd4e3df - jimczi 
- 2018-09-18 13:07:58]
at 
org.apache.solr.handler.CdcrReplicatorManager.sendRequestRecoveryToFollower(CdcrReplicatorManager.java:439)
 ~[solr-core-7.5.0.jar:7.5.0 b5bf70b7e32d7ddd9742cc821d471c5fabd4e3df - jimczi 
- 2018-09-18 13:07:55]
at 
org.apache.solr.handler.CdcrReplicatorManager.sendRequestRecoveryToFollowers(CdcrReplicatorManager.java:428)
 ~[solr-core-7.5.0.jar:7.5.0 b5bf70b7e32d7ddd9742cc821d471c5fabd4e3df - jimczi 
- 2018-09-18 13:07:55]
at 
org.apache.solr.handler.CdcrReplicatorManager.access$300(CdcrReplicatorManager.java:63)
 ~[solr-core-7.5.0.jar:7.5.0 b5bf70b7e32d7ddd9742cc821d471c5fabd4e3df - jimczi 
- 2018-09-18 13:07:55]
at 
org.apache.solr.handler.CdcrReplicatorManager$BootstrapStatusRunnable.run(CdcrReplicatorManager.java:306)
 ~[solr-core-7.5.0.jar:7.5.0 b5bf70b7e32d7ddd9742cc821d471c5fabd4e3df - jimczi 
- 2018-09-18 13:07:55]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[?:1.8.0_192]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
~[?:1.8.0_192]
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
 ~[solr-solrj-7.5.0.jar:7.5.0 b5bf70b7e32d7ddd9742cc821d471c5fabd4e3df - jimczi 
- 2018-09-18 13:07:58]

Re: Alternative for DIH

2019-02-02 Thread Nish Karve
If you absolutely want to use Kafka after trying other mechanisms, I would
suggest Kafka Connect. Jeremy Custenborder has a good Kafka connector that
acts as a sink to Solr. You can define your own Avro schemas on the Kafka
topic that adhere to your Solr schema, which gives you that degree of control.

We have used the Lucidworks Spark connector to index 500 million documents
(around 70 fields per document) into Solr within 4 hours. It is a very good
choice if you want to sync data from a DB to Solr. Add an interim step using
an ETL tool such as Ab Initio to perform the basic joins on your tables and
extract the data as CSV for the Spark connector. All the hard work of opening
and managing connections with Solr is done in the connector. Please note that
this connector indexes data to a live Solr cluster, unlike offline indexing
with MapReduce.

Thanks
Nishant

On Thu, Jan 31, 2019, 5:15 AM Srinivas Kashyap wrote:
> Hello,
>
> As we all know DIH is single threaded and has its own issues while
> indexing.
>
> Got to know that we can write our own API's to pull data from DB and push
> it into solr. One such I heard was Apache Kafka being used for the purpose.
>
> Can any of you send me the links and guides to use apache kafka to pull
> data from DB and push into solr?
>
> If there are any other alternatives please suggest.
>
> Thanks and Regards,
> Srinivas Kashyap
> 
> DISCLAIMER:
> E-mails and attachments from Bamboo Rose, LLC are confidential.
> If you are not the intended recipient, please notify the sender
> immediately by replying to the e-mail, and then delete it without making
> copies or using it in any way.
> No representation is made that this email or any attachments are free of
> viruses. Virus scanning is recommended and is the responsibility of the
> recipient.
>


Re: Alternative for DIH

2019-02-02 Thread Erick Erickson
Depending on how complicated you need this to be, you can just write
your own in SolrJ, see:

https://lucidworks.com/2012/02/14/indexing-with-solrj/
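Whatever client you end up writing, the core of a DIH replacement is batching rows out of the DB and POSTing them to the update handler. A minimal sketch, assuming a local Solr and made-up collection and field names:

```shell
# Sketch: push a batch of JSON documents to Solr's update handler.
# Host, collection name, and fields are illustrative assumptions.
SOLR="http://localhost:8983/solr/mycollection"
BATCH='[{"id":"1","title_s":"first doc"},{"id":"2","title_s":"second doc"}]'
echo "$BATCH"
# curl "$SOLR/update?commit=true" -H 'Content-Type: application/json' -d "$BATCH"
```

In a real loop you would stream rows from the DB, accumulate a few thousand documents per batch, and commit once at the end rather than per batch.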

You haven't said a lot about the characteristics of your situation.
Are you talking 1B rows from the DB? 1M? What is the pain point?
Because until one gets to massive amounts of data, 9 times out of 10
poor indexing performance is the result of the DB query being
used executing very slowly.

Before jumping to a solution, it'd be good to know
1> why you're dissatisfied with DIH, i.e. what is the problem you're seeing
2> some information about your situation, size of DB, how fast DIH
works now etc.

This latter is important, 'cause it's a totally different question if,
say, your problem
statement is
"it takes 8 hours to import 1,000,000,000 rows and the docs are 1M long"
.vs.
"it takes 8 hours to import 100,000 rows that are 1K each".

Until there are answers to questions like that it's not clear at all you even
_have_ a problem that's solvable by any of the suggestions so far.

Best,
Erick

On Thu, Jan 31, 2019 at 12:34 PM Alexandre Rafalovitch
 wrote:
>
> Apache NiFi may also be something of interest: https://nifi.apache.org/
>
> Regards,
>Alex.
>
> On Thu, 31 Jan 2019 at 11:15, Mikhail Khludnev  wrote:
> >
> > Hello,
> >
> > I did this deck some time ago. It might be useful for choosing one.
> > https://docs.google.com/presentation/d/e/2PACX-1vQzi3QOZAwLh_t3zs1gH9EGCB2HKUgiN3WJRGHpULyA-GleCrQ41dIOINa18h_XG64BX5D_ZG6jKmXL/pub?start=false&loop=false&delayms=3000
> > Note, as far as I understand Lucidworks' answer to this is Spark.
> >
> >
> > On Thu, Jan 31, 2019 at 2:15 PM Srinivas Kashyap 
> > wrote:
> >
> > > Hello,
> > >
> > > As we all know DIH is single threaded and has its own issues while
> > > indexing.
> > >
> > > Got to know that we can write our own API's to pull data from DB and push
> > > it into solr. One such I heard was Apache Kafka being used for the 
> > > purpose.
> > >
> > > Can any of you send me the links and guides to use apache kafka to pull
> > > data from DB and push into solr?
> > >
> > > If there are any other alternatives please suggest.
> > >
> > > Thanks and Regards,
> > > Srinivas Kashyap
> > > 
> > > DISCLAIMER:
> > > E-mails and attachments from Bamboo Rose, LLC are confidential.
> > > If you are not the intended recipient, please notify the sender
> > > immediately by replying to the e-mail, and then delete it without making
> > > copies or using it in any way.
> > > No representation is made that this email or any attachments are free of
> > > viruses. Virus scanning is recommended and is the responsibility of the
> > > recipient.
> > >
> >
> >
> > --
> > Sincerely yours
> > Mikhail Khludnev


Re: Solr Size Limitation upto 32 kb limitation

2019-02-02 Thread Erick Erickson
Clean your data. The whole point of that limitation is that it makes
no sense to try to index a 32K field for _searching_ anyway. By
"index" here, I mean having a _token_ in your index that you can search.
Storing the content away in a stored="true" field is another matter.
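In schema terms, one way to apply this is to keep the large raw content in a stored-only field and search a tokenized copy. A sketch of what such definitions might look like; the field and type names are assumptions, not from the original thread:

```shell
# Sketch of schema.xml field definitions (names are illustrative assumptions).
# Raw content: stored for retrieval, never indexed as one huge term.
RAW_FIELD='<field name="body_raw" type="string" indexed="false" stored="true"/>'
# Tokenized copy for searching: individual tokens stay far below the 32K term limit.
TXT_FIELD='<field name="body_txt" type="text_general" indexed="true" stored="false"/>'
# copyField operates on the incoming document value, so the source field
# does not need to be indexed itself.
COPY_RULE='<copyField source="body_raw" dest="body_txt"/>'
printf '%s\n%s\n%s\n' "$RAW_FIELD" "$TXT_FIELD" "$COPY_RULE"
```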

Best,
Erick

On Thu, Jan 31, 2019 at 11:33 PM Walter Underwood  wrote:
>
> Solr is not a database. It won’t store arbitrary length data. Put the file 
> content in a database and put the key in Solr.
>
> I’m dropping the CC to d...@lucene.apache.org, because this does not belong 
> on that list.
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>
> > On Jan 31, 2019, at 8:56 PM, Kranthi Kumar K wrote:
> >
> > Hi Team,
> >
> > Thanks for your suggestions that you've posted, but none of them have fixed 
> > our issue. Could you please provide us your valuable suggestions to address 
> > this issue.
> >
> > We'll be awaiting your reply.
> >
> > Thanks,
> > Kranthi kumar.K
> > From: Michelle Ngo
> > Sent: Thursday, January 24, 2019 12:00:06 PM
> > To: Kranthi Kumar K; d...@lucene.apache.org; solr-user@lucene.apache.org
> > Cc: Ananda Babu medida; Srinivasa Reddy Karri; Ravi Vangala; Suresh Malladi; Vijay Nandula
> > Subject: RE: Solr Size Limitation upto 32 kb limitation
> >
> > Thanks @Kranthi Kumar K for following up
> >
> > From: Kranthi Kumar K
> > Sent: Thursday, 24 January 2019 4:51 PM
> > To: d...@lucene.apache.org; solr-user@lucene.apache.org
> > Cc: Ananda Babu medida; Srinivasa Reddy Karri; Michelle Ngo; Ravi Vangala; Suresh Malladi; Vijay Nandula
> > Subject: RE: Solr Size Limitation upto 32 kb limitation
> >
> > Thank you Bernd Fehling for your suggested solution. I've tried the same by
> > changing the type and adding multiValued="true" in schema.xml, i.e.,
> > change from:
> >
> > [field definition lost in the archive]
> >
> > Changed to:
> >
> > [field definition lost in the archive, with multiValued="true" added]
> >
> > After changing it we are still unable to import files larger than 32
> > kb. Please find the solution suggested by Bernd at the URL below:
> >
> > http://lucene.472066.n3.nabble.com/Re-Solr-Size-Limitation-upto-32-kb-limitation-td4421569.html
> >
> > Bernd Fehling, could you please suggest another alternative solution to
> > resolve our issue? It would help us a lot.
> >
> > Please let me know if you have any questions.
> >
> > 
> >
> > Thanks & Regards,
> > Kranthi Kumar.K,
> > Software Engineer,
> > Ccube Fintech Global Services Pvt Ltd.,
> > Email/Skype: kranthikuma...@ccubefintech.com,
> > Mobile: +91-8978078449.
> >
> >
> > From: Kranthi Kumar K
> > Sent: Friday, January 18, 2019 4:22 PM
> > To: d...@lucene.apache.org; solr-user@lucene.apache.org
> > Cc: Ananda Babu medida; Srinivasa Reddy Karri; Michelle Ngo; Ravi Vangala
> > Subject: RE: Solr Size Limitation upto 32 kb limitation
> >
> > Hi team,
> >
> > Thank you Erick Erickson, Bernd Fehling and Jan Hoydahl for your suggested
> > solutions. I've tried the suggested ones and we are still unable to import
> > files having size > 32 kb; it displays the same error.
> >
> > Below link has the suggested solutions. Please have a look:
> >
> > http://lucene.472066.n3.nabble.com/Solr-Size-Limitation-upto-32-KB-files-td4419779.html
> >
> > As per Erick Erickson, I've changed the string type to a Text-based type and
> > the issue still occurs.
> > I've changed from:
> >
> > [field definition lost in the archive]
> >
> > Changed to:
> >
> > [field definition lost in the archive]
> >
> > If we do so, an error shows in the log; please find the error in the
> > attachment.
> >
> > If I change to:
> >
> > [field definition lost in the archive]
> >
> > It does not show any error, but the issue still exists.
> >
> > As per Jan Hoydahl, I have gone through the link you provided and
> > checked the 'requestParsers' tag in solrconfig.xml.
> >
> > The requestParsers tag in our application is as follows:
> >
> > [opening tag lost in the archive] multipartUploadLimitInKB="2048000"
> > formdataUploadLimitInKB="2048"

Re: Creating shard with core.properties

2019-02-02 Thread Erick Erickson
I think you're making this much more difficult for yourself than necessary.

I'd _strongly_ recommend you abandon this approach and use the
Collections API. Perhaps you'd need to create some kind of script
that handles core creation and the like. If you know where the
core.properties file should be, for instance (which you must if
you're trying to create it manually), you can specify instanceDir
and/or dataDir in the ADDREPLICA command to point it to an
index, for example.

You can use the EMPTY flag on the collection create command
to create the skeleton in ZK with _no_ replicas defined and use
ADDREPLICA.
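That sequence might look like the following; host, collection name, node name, and paths are illustrative assumptions:

```shell
# Sketch: create an empty collection skeleton in ZK, then add a replica
# pointed at an existing index directory. Names and paths are assumptions.
SOLR="http://localhost:8983/solr"
CREATE_URL="$SOLR/admin/collections?action=CREATE&name=mycoll&numShards=1&createNodeSet=EMPTY"
ADD_URL="$SOLR/admin/collections?action=ADDREPLICA&collection=mycoll&shard=shard1&node=host1:8983_solr&dataDir=/var/solr/data/mycoll_shard1"
echo "$CREATE_URL"
echo "$ADD_URL"
# curl "$CREATE_URL" && curl "$ADD_URL"   # against a live cluster
```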

Frankly, it sounds like you got started down this road with
legacyCloud quite some time ago and are unwilling to change
even though Solr has changed dramatically. It's always
hard to throw away what's worked in the past, but it's also
sometimes necessary.

Best,
Erick

On Fri, Feb 1, 2019 at 2:41 AM Bharath Kumar  wrote:
>
> Thanks Shawn for your inputs and the pointer to the documentation. Our
> setup currently has 1 shard and 2 replicas for that shard, and we do not
> want a manual step that involves creating a collection, since for SolrCloud
> more than 50% of the shard nodes should be up and running.
> Also, if the ZooKeeper state goes bad for some reason, we would need to
> re-create the collection, whereas in legacy cloud mode, manually creating
> core.properties has helped us bring up the Solr cloud after an upgrade even
> without any known ZooKeeper state, with no additional steps.
>
> On Wed, Jan 30, 2019 at 3:49 PM Shawn Heisey  wrote:
>
> > On 1/30/2019 3:36 PM, Bharath Kumar wrote:
> > > Thanks Erick. We cleanup the zookeeper state on every installation, so
> > the
> > > zookeeper states are gone. So what should we do in case of a new 7.6
> > > installation where we want to manually create core.properties and use the
> > > non-legacy cloud option? Is it in order to use non-legacy cloud, we
> > should
> > > use the collections api to create a collection first and then use the
> > > manual core.properties for auto-discovery?
> >
> > *ALL* creations and modifications to SolrCloud collections should be
> > done using the Collections API.  Creating cores directly (either with
> > core.properties or the CoreAdmin API) is something that will almost
> > certainly bite you hard.  Based on what Erick has said, I don't think
> > you can even do it at all when legacy mode is disabled.  Even when you
> > can ... don't.
> >
> > > Because in the legacy cloud mode we were just creating the
> > core.properties
> > > manually and that would update the zookeeper state when the solr boots
> > up.
> > > Can you please help me with this?
> >
> > Use the Collections API.  This is the recommendation even for experts
> > who really know the code.  Creating cores manually in ANY SolrCloud
> > install is a recipe for problems, even in legacy mode.
> >
> > There is a very large warning box (red triangle with an exclamation
> > point) in this section of the documentation:
> >
> >
> > https://lucene.apache.org/solr/guide/7_6/coreadmin-api.html#coreadmin-create
> >
> > One of the first things it says there in that warning box is that the
> > CoreAdmin API should not be used in SolrCloud.  Manually creating
> > core.properties files and restarting Solr is effectively the same thing
> > as using the CoreAdmin API.
> >
> > Thanks,
> > Shawn
> >
>
>
> --
> Thanks & Regards,
> Bharath MV Kumar
>
> "Life is short, enjoy every moment of it"


Re: [CDCR]Unable to locate core

2019-02-02 Thread Erick Erickson
CDCR does _not_ replicate to followers, it is a leader<->leader replication
of the raw document.

Once the document has been forwarded to the target's leader, then the
leader on the target system should forward it to followers on that
system just like any other update.

The Solr JIRA is unlikely the problem from what you describe.

1> are you sure you are _committing_ on the target system?
2> "unable to locate core" comes from where? The source? Target?
   CDCR?
3> is your target collection properly set up? Because it sounds
   a bit like your target cluster isn't running in SolrCloud mode.
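For points 1 and 2 above, a couple of quick checks, sketched as HTTP calls; the hosts are taken from the logs earlier in the thread, while the collection path in the URLs is an assumption:

```shell
# Sketch: explicitly commit on the target, then inspect CDCR queue and
# error state on the source. Hosts/collection are assumptions from the logs.
TARGET="http://targethost001.com:30100/solr/testcollection"
SOURCE="http://sourcehost001.com:30100/solr/testcollection"
COMMIT_URL="$TARGET/update?commit=true"
QUEUES_URL="$SOURCE/cdcr?action=QUEUES"
ERRORS_URL="$SOURCE/cdcr?action=ERRORS"
echo "$COMMIT_URL"; echo "$QUEUES_URL"; echo "$ERRORS_URL"
# curl "$COMMIT_URL"; curl "$QUEUES_URL"; curl "$ERRORS_URL"
```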

Best,
Erick

On Fri, Feb 1, 2019 at 12:48 PM Tim  wrote:
>
> After some more investigation it seems that we're running into the same bug
> found here: [link lost in the archive].
>
> However if my understanding is correct that bug in 7.3 was patched out.
> Unfortunately we're running into the same behavior in 7.5
>
> CDCR is replicating successfully to the leader node but is not replicating
> to the followers.
>
>
>
> --
> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: Need to perfom search and group the record on basis of domain,subject,from address and display the count of label i.e inbox,spam

2019-02-02 Thread Erick Erickson
1> please don't hijack threads, start a new topic

2> Please follow the instructions here:
http://lucene.apache.org/solr/community.html#mailing-lists-irc. You
must use the _exact_ same e-mail as you used to subscribe.

If the initial try doesn't work and following the suggestions at the
"problems" link doesn't work for you, let us know. But note you need
to show us the _entire_ return header to allow anyone to diagnose the
problem.

Best,
Erick

On Fri, Feb 1, 2019 at 2:05 PM Margaret Owens wrote:
>
> Please remove me from this list.
>
> -Original Message-
> From: Scott Stults 
> Sent: February 1, 2019 2:03 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Need to perfom search and group the record on basis of 
> domain,subject,from address and display the count of label i.e inbox,spam
>
> Hi Swapnil,
>
> There wasn't a question in your post, so I'm guessing you're having trouble
> getting started. Take a look at the JSON Facet API. That should get you most
> of the way there.
>
> https://lucene.apache.org/solr/guide/7_5/json-facet-api.html
>
> k/r,
> Scott
>
> On Fri, Feb 1, 2019 at 7:36 AM swap  wrote:
>
> > Need to perform a search and group the records on the basis of
> > domain, subject and from address, and display the count of label (i.e. inbox, spam)
> > and of label status (i.e. read and unread). The label and label
> > status should be displayed as percentages.
> >
> > Scenario 1
> > The document structure indexed in Solr is as mentioned below. message_id
> > is the unique field in Solr.
> >   {
> > "email_date_time": 1548922689,
> > "subject": "abcdef",
> > "created": 1548932108,
> > "domain": ".com",
> > "message_id": "123456789ui",
> > "label": "inbox",
> > "from_address": "xxxbc.com",
> > "email": "g...@gmail.com",
> > "label_status": "unread"
> >   }
> >
> >   {
> > "email_date_time": 1548922689,
> > "subject": "abcdef",
> > "created": 1548932108,
> > "domain": ".com",
> > "message_id": "zxiu22",
> > "label": "inbox",
> > "from_address": "xxxbc.com",
> > "email": "g...@gmail.com",
> > "label_status": "unread"
> >   }
> >
> >   {
> > "email_date_time": 1548922689,
> > "subject": "defg",
> > "created": 1548932108,
> > "domain": ".com",
> > "message_id": "ftyuiooo899",
> > "label": "inbox",
> > "from_address": "xxxbc.com",
> > "email": "f...@gmail.com",
> > "label_status": "unread"
> >   }
> >
> > I have the below-mentioned points to implement:
> >
> > 1. Perform a search and group the records on the basis of
> > domain, subject and from address, and display the count of label (i.e. inbox, spam)
> > and of label status (i.e. read and unread). The label and label
> > status should be displayed as percentages.
> >
> >
> > 2. Paginate the records along with the implementation of point 1.
> >
> >
> > Display will be as mentioned below
> >
> >
> > 1. domain name : @ subject:hello from address: abcd@i
> >
> > inbox percentage : 20% spam percentage : 80% read percentage  : 30%
> > unread percentage : 70%
> >
> > 2. domain name : @ subject:hi from address: abcd@i
> >
> > inbox percentage : 20% spam percentage : 80% read percentage  : 30%
> > unread percentage : 70%
> >
> >
> > 3. domain name : @ subject:where from address: abcd@i
> >
> > inbox percentage : 20% spam percentage : 80% read percentage  : 30%
> > unread percentage : 70%
> >
> >
> >
> > --
> > Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
> >
>
>
> --
> Scott Stults | Founder & Solutions Architect | OpenSource Connections, LLC
> | 434.409.2780
> http://www.opensourceconnections.com
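The grouping described in the quoted question can be sketched as a JSON Facet request: a terms facet over domain with nested terms facets over label and label_status. The collection name and limits below are assumptions; grouping on the full domain/subject/from_address combination would typically need a single combined field built at index time, since nested terms facets don't cross-product:

```shell
# Sketch of a JSON Facet body. Percentages (inbox vs spam, read vs unread)
# would be computed client-side from the nested bucket counts.
# Collection name and limits are illustrative assumptions.
SOLR="http://localhost:8983/solr/emails"
FACET='{
  "query": "*:*",
  "limit": 0,
  "facet": {
    "by_domain": {
      "type": "terms", "field": "domain", "limit": 10,
      "facet": {
        "labels":   { "type": "terms", "field": "label" },
        "statuses": { "type": "terms", "field": "label_status" }
      }
    }
  }
}'
echo "$FACET"
# curl "$SOLR/query" -H 'Content-Type: application/json' -d "$FACET"
```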