Re: Undefined field - solr 7.2.1 cloud

2019-09-25 Thread Antony A
Thanks Erick.

I have removed the managed-schema for now. This setup had been running
perfectly for a couple of years. I implemented basic auth around the
collection a year back, but nothing really changed in my process for updating
the schema. Let me see if removing managed-schema has any impact and will
report back.
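Either way, the Schema API can confirm whether a given field actually made it
into the live schema on a node. A minimal sketch in Python; the host and
collection name are hypothetical, and the per-field endpoint is the one the
Schema API documents:

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

def field_url(base, collection, field):
    """Build the Schema API URL that returns a single field's definition."""
    return "%s/solr/%s/schema/fields/%s" % (base, quote(collection), quote(field))

def field_exists(base, collection, field):
    """Return True if the field is defined in the live schema; a 404 (or any
    request failure) is treated as 'missing'."""
    try:
        with urlopen(field_url(base, collection, field)) as resp:
            return "field" in json.load(resp)
    except Exception:
        return False

# Hypothetical host/collection; run against each node to compare replicas:
# field_exists("http://localhost:8983", "mycollection", "xxx")
print(field_url("http://localhost:8983", "mycollection", "xxx"))
```

Hitting each node directly (rather than the load balancer) shows whether the
replicas really disagree about the schema.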



On Wed, Sep 25, 2019 at 9:16 AM Erick Erickson 
wrote:

> Then something sounds wrong with your setup. The configs are stored in ZK,
> and read from ZooKeeper every time Solr starts. So how the replica “does
> not have the correct schema” is a complete mystery.
>
> You say you have ClassicIndexSchemaFactory set up. Take a look at your
> configs _through the Admin UI from the “collections” drop-down_ and verify.
> This reads the same thing in ZooKeeper. Sometimes I’ve thought I was set up
> one way and discovered later that I wasn’t.
>
> Next: Do you have “managed-schema” _and_ “schema.xml” in your configs? If
> you’re indeed using classic, you can remove managed-schema.
>
> All to make sure you’re operating as you think you are.
>
> Best,
> Erick
>
> > On Sep 24, 2019, at 3:58 PM, Antony A  wrote:
> >
> > Hi,
> >
> > I also observed that whenever the JVM crashes, the replicas do not have
> > the correct schema. Has anyone seen similar behavior?
> >
> > Thanks,
> > AA
> >
> > On Wed, Sep 4, 2019 at 9:58 PM Antony A 
> wrote:
> >
> >> Hi,
> >>
> >> I have confirmed that the ZK ensemble is external. Even though both
> >> managed-schema and schema.xml appear in the admin UI, I see the below
> >> class defined in solrconfig.
> >> <schemaFactory class="ClassicIndexSchemaFactory"/>
> >>
> >> The workaround is still to run "solr zk upconfig" followed by restarting
> >> the cores of the collection. Anything else I should be looking into?
> >>
> >> Thanks
> >>
> >> On Wed, Sep 4, 2019 at 6:31 PM Erick Erickson 
> >> wrote:
> >>
> >>> This almost always means that you really _didn’t_ update the schema and
> >>> reload the collection, you just thought you did ;).
> >>>
> >>> One common reason is to fire up Solr with an internal ZooKeeper but
> >>> have the rest of your collection using an external ensemble.
> >>>
> >>> Another is to be modifying schema.xml when using managed-schema or
> >>> vice-versa.
> >>>
> >>> First thing I’d do is check the ZK ensemble: is any of the ports
> >>> referenced by the admin screen 9983? If so, it’s internal.
> >>>
> >>> Second thing I’d do is, in the admin UI, select my collection from the
> >>> drop down list, then click files and open up the schema. Check that
> >>> there is only managed-schema or schema.xml. If both are present, check
> >>> your solrconfig to see which one you’re using. Then open the schema and
> >>> check that your field is there. BTW, the field will be explicitly
> >>> stated in the solr log.
> >>>
> >>> Third thing I’d do is open the admin
> >>> UI>>configsets>>the_configset_you’re_using and check which schema
> >>> you’re using and again if the field is in the schema.
> >>>
> >>> Best,
> >>> Erick
> >>>
> >>>> On Sep 4, 2019, at 3:27 PM, Antony A 
> wrote:
> >>>>
> >>>> Hi,
> >>>>
> >>>> I ran the collection reload after a new "leader" core was selected for
> >>>> the collection due to heap failure on the previous core. But I still
> >>>> have a stack trace with common.SolrException: undefined field.
> >>>>
> >>>> On Thu, Aug 29, 2019 at 1:36 PM Antony A 
> >>> wrote:
> >>>>
> >>>>> Yes. I do restart the cores on all the different servers. I will look
> >>>>> at implementing reloading the collection. Thank you for your
> >>>>> suggestion.
> >>>>>
> >>>>> Cheers,
> >>>>> Antony
> >>>>>
> >>>>> On Thu, Aug 29, 2019 at 1:34 PM Shawn Heisey 
> >>> wrote:
> >>>>>
> >>>>>> On 8/29/2019 1:22 PM, Antony A wrote:
> >>>>>>> I do restart Solr after changing schema using "solr zk upconfig". I
> >>>>>>> am yet to confirm but I do have a daily cron that does "delta"
> >>>>>>> import. Does that process have any bearing on some cores losing the
> >>>>>>> field?
> >>>>>>
> >>>>>> Did you restart all the Solr servers?  If the collection lives on
> >>>>>> multiple servers, restarting one of the servers is not going to
> >>>>>> affect replicas living on other servers.
> >>>>>>
> >>>>>> Reloading the collection with an HTTP request to the collections
> >>>>>> API is a better option than restarting Solr.
> >>>>>>
> >>>>>> Thanks,
> >>>>>> Shawn
> >>>>>>
> >>>>>
> >>>
> >>>
>
>


Re: Undefined field - solr 7.2.1 cloud

2019-09-24 Thread Antony A
Hi,

I also observed that whenever the JVM crashes, the replicas do not have
the correct schema. Has anyone seen similar behavior?

Thanks,
AA

On Wed, Sep 4, 2019 at 9:58 PM Antony A  wrote:

> Hi,
>
> I have confirmed that the ZK ensemble is external. Even though both
> managed-schema and schema.xml appear in the admin UI, I see the below class
> defined in solrconfig.
> <schemaFactory class="ClassicIndexSchemaFactory"/>
>
> The workaround is still to run "solr zk upconfig" followed by restarting
> the cores of the collection. Anything else I should be looking into?
>
> Thanks
>
> On Wed, Sep 4, 2019 at 6:31 PM Erick Erickson 
> wrote:
>
>> This almost always means that you really _didn’t_ update the schema and
>> reload the collection, you just thought you did ;).
>>
>> One common reason is to fire up Solr with an internal ZooKeeper but have
>> the rest of your collection be using an external ensemble.
>>
>> Another is to be modifying schema.xml when using managed-schema or
>> vice-versa.
>>
>> First thing I’d do is check the ZK ensemble: is any of the ports
>> referenced by the admin screen 9983? If so, it’s internal.
>>
>> Second thing I’d do is, in the admin UI, select my collection from the
>> drop down list, then click files and open up the schema. Check that there
>> is only managed-schema or schema.xml. If both are present, check your
>> solrconfig to see which one you’re using. Then open the schema and check
>> that your field is there. BTW, the field will be explicitly stated in the
>> solr log.
>>
>> Third thing I’d do is open the admin
>> UI>>configsets>>the_configset_you’re_using and check which schema you’re
>> using and again if the field is in the schema.
>>
>> Best,
>> Erick
>>
>> > On Sep 4, 2019, at 3:27 PM, Antony A  wrote:
>> >
>> > Hi,
>> >
>> > I ran the collection reload after a new "leader" core was selected for
>> > the collection due to heap failure on the previous core. But I still
>> > have a stack trace with common.SolrException: undefined field.
>> >
>> > On Thu, Aug 29, 2019 at 1:36 PM Antony A 
>> wrote:
>> >
>> >> Yes. I do restart the cores on all the different servers. I will look
>> >> at implementing reloading the collection. Thank you for your
>> >> suggestion.
>> >>
>> >> Cheers,
>> >> Antony
>> >>
>> >> On Thu, Aug 29, 2019 at 1:34 PM Shawn Heisey 
>> wrote:
>> >>
>> >>> On 8/29/2019 1:22 PM, Antony A wrote:
>> >>>> I do restart Solr after changing schema using "solr zk upconfig". I
>> >>>> am yet to confirm but I do have a daily cron that does "delta"
>> >>>> import. Does that process have any bearing on some cores losing the
>> >>>> field?
>> >>>
>> >>> Did you restart all the Solr servers?  If the collection lives on
>> >>> multiple servers, restarting one of the servers is not going to affect
>> >>> replicas living on other servers.
>> >>>
>> >>> Reloading the collection with an HTTP request to the collections API
>> >>> is a better option than restarting Solr.
>> >>>
>> >>> Thanks,
>> >>> Shawn
>> >>>
>> >>
>>
>>


Re: Undefined field - solr 7.2.1 cloud

2019-09-04 Thread Antony A
Hi,

I have confirmed that the ZK ensemble is external. Even though both
managed-schema and schema.xml appear in the admin UI, I see the below class
defined in solrconfig.
<schemaFactory class="ClassicIndexSchemaFactory"/>

The workaround is still to run "solr zk upconfig" followed by restarting the
cores of the collection. Anything else I should be looking into?
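The upconfig step of that workaround can be scripted. A sketch, with every
path, host, and configset name hypothetical; the `-z`/`-n`/`-d` flags are the
ones `bin/solr zk upconfig` documents for Solr 7:

```python
import subprocess

def upconfig_cmd(solr_home, zk_hosts, config_name, conf_dir):
    """Assemble the 'bin/solr zk upconfig' invocation that pushes a local
    config directory up to ZooKeeper under the named configset."""
    return [solr_home + "/bin/solr", "zk", "upconfig",
            "-z", zk_hosts, "-n", config_name, "-d", conf_dir]

# Hypothetical paths and ZK hosts; only run this against a real cluster.
cmd = upconfig_cmd("/opt/solr", "zk1:2181,zk2:2181,zk3:2181",
                   "myconf", "/home/solr/configs/myconf")
# subprocess.run(cmd, check=True)  # then reload the collection via the API
print(" ".join(cmd))
```

Following the push with a Collections API RELOAD (rather than restarting
cores one by one) is the usual way to make every replica pick up the change.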

Thanks

On Wed, Sep 4, 2019 at 6:31 PM Erick Erickson 
wrote:

> This almost always means that you really _didn’t_ update the schema and
> reload the collection, you just thought you did ;).
>
> One common reason is to fire up Solr with an internal ZooKeeper but have
> the rest of your collection be using an external ensemble.
>
> Another is to be modifying schema.xml when using managed-schema or
> vice-versa.
>
> First thing I’d do is check the ZK ensemble: is any of the ports
> referenced by the admin screen 9983? If so, it’s internal.
>
> Second thing I’d do is, in the admin UI, select my collection from the
> drop down list, then click files and open up the schema. Check that there
> is only managed-schema or schema.xml. If both are present, check your
> solrconfig to see which one you’re using. Then open the schema and check
> that your field is there. BTW, the field will be explicitly stated in the
> solr log.
>
> Third thing I’d do is open the admin
> UI>>configsets>>the_configset_you’re_using and check which schema you’re
> using and again if the field is in the schema.
>
> Best,
> Erick
>
> > On Sep 4, 2019, at 3:27 PM, Antony A  wrote:
> >
> > Hi,
> >
> > I ran the collection reload after a new "leader" core was selected for
> > the collection due to heap failure on the previous core. But I still have
> > a stack trace with common.SolrException: undefined field.
> >
> > On Thu, Aug 29, 2019 at 1:36 PM Antony A 
> wrote:
> >
> >> Yes. I do restart the cores on all the different servers. I will look at
> >> implementing reloading the collection. Thank you for your suggestion.
> >>
> >> Cheers,
> >> Antony
> >>
> >> On Thu, Aug 29, 2019 at 1:34 PM Shawn Heisey 
> wrote:
> >>
> >>> On 8/29/2019 1:22 PM, Antony A wrote:
> >>>> I do restart Solr after changing schema using "solr zk upconfig". I am
> >>>> yet to confirm but I do have a daily cron that does "delta" import.
> >>>> Does that process have any bearing on some cores losing the field?
> >>>
> >>> Did you restart all the Solr servers?  If the collection lives on
> >>> multiple servers, restarting one of the servers is not going to affect
> >>> replicas living on other servers.
> >>>
> >>> Reloading the collection with an HTTP request to the collections API is
> >>> a better option than restarting Solr.
> >>>
> >>> Thanks,
> >>> Shawn
> >>>
> >>
>
>


Re: Undefined field - solr 7.2.1 cloud

2019-09-04 Thread Antony A
Hi,

I ran the collection reload after a new "leader" core was selected for the
collection due to heap failure on the previous core. But I still get a stack
trace with common.SolrException: undefined field.

On Thu, Aug 29, 2019 at 1:36 PM Antony A  wrote:

> Yes. I do restart the cores on all the different servers. I will look at
> implementing reloading the collection. Thank you for your suggestion.
>
> Cheers,
> Antony
>
> On Thu, Aug 29, 2019 at 1:34 PM Shawn Heisey  wrote:
>
>> On 8/29/2019 1:22 PM, Antony A wrote:
>> > I do restart Solr after changing schema using "solr zk upconfig". I am
>> > yet to confirm but I do have a daily cron that does "delta" import.
>> > Does that process have any bearing on some cores losing the field?
>>
>> Did you restart all the Solr servers?  If the collection lives on
>> multiple servers, restarting one of the servers is not going to affect
>> replicas living on other servers.
>>
>> Reloading the collection with an HTTP request to the collections API is
>> a better option than restarting Solr.
>>
>> Thanks,
>> Shawn
>>
>


Re: Undefined field - solr 7.2.1 cloud

2019-08-29 Thread Antony A
Yes. I do restart the cores on all the different servers. I will look at
implementing reloading the collection. Thank you for your suggestion.

Cheers,
Antony

On Thu, Aug 29, 2019 at 1:34 PM Shawn Heisey  wrote:

> On 8/29/2019 1:22 PM, Antony A wrote:
> > I do restart Solr after changing schema using "solr zk upconfig". I am
> > yet to confirm but I do have a daily cron that does "delta" import. Does
> > that process have any bearing on some cores losing the field?
>
> Did you restart all the Solr servers?  If the collection lives on
> multiple servers, restarting one of the servers is not going to affect
> replicas living on other servers.
>
> Reloading the collection with an HTTP request to the collections API is
> a better option than restarting Solr.
>
> Thanks,
> Shawn
>


Re: Undefined field - solr 7.2.1 cloud

2019-08-29 Thread Antony A
I do restart Solr after changing the schema using "solr zk upconfig". I have
yet to confirm, but I do have a daily cron that does a "delta" import. Does
that process have any bearing on some cores losing the field?

On Thu, Aug 29, 2019 at 11:32 AM Shawn Heisey  wrote:

> On 8/29/2019 11:26 AM, Antony A wrote:
> > Hi,
> >
> > I am running on Solr cloud 7.2.1. I have a 4-core collection. The fields
> > are available in the schema.xml in the Solr admin UI. This tells me
> > ZooKeeper has the correct schema. But unfortunately only the leader core
> > has the correct response to the query with the field, while the other
> > cores are throwing the below error stack. Restarting the core returns the
> > correct results, but I am trying to avoid that situation.
> >
> > o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException:
> > undefined field xxx
>
> This sounds like the schema got updated to add the referenced field, but
> some or all of the cores were never reloaded.  So when you do the core
> reload (or restart Solr), it's good on that core, but not on a core that
> didn't get reloaded.
>
> Whenever you make changes to the config files in ZooKeeper, you should
> reload the collection.  This will reload all the individual cores on
> whatever servers they happen to reside on, which will cause them to
> re-read the config files.
>
> https://lucene.apache.org/solr/guide/7_2/collections-api.html#reload
>
> Thanks,
> Shawn
>
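Shawn's suggestion amounts to a single HTTP call against any node. A minimal
sketch of the RELOAD request from the reference guide; the host and
collection name are hypothetical:

```python
from urllib.parse import urlencode
from urllib.request import urlopen

def reload_url(base, collection):
    """Build the Collections API RELOAD request, which makes every replica
    of the collection re-read its config from ZooKeeper."""
    return base + "/solr/admin/collections?" + urlencode(
        {"action": "RELOAD", "name": collection, "wt": "json"})

url = reload_url("http://localhost:8983", "mycollection")
# Against a live node (add credentials if basic auth is enabled):
# with urlopen(url) as resp:
#     print(resp.read())
print(url)
```

Because the command is routed through the Collections API, it reloads the
cores on every server hosting the collection, not just the node receiving
the request.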


Undefined field - solr 7.2.1 cloud

2019-08-29 Thread Antony A
Hi,

I am running Solr Cloud 7.2.1 with a 4-core collection. The fields are
available in the schema.xml in the Solr admin UI, which tells me ZooKeeper
has the correct schema. But unfortunately only the leader core returns the
correct response to a query on the field, while the other cores throw the
error stack below. Restarting a core returns the correct results, but I am
trying to avoid that situation.

o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: undefined
field xxx
at
org.apache.solr.schema.IndexSchema.getDynamicFieldType(IndexSchema.java:1292)
at
org.apache.solr.schema.IndexSchema$SolrQueryAnalyzer.getWrappedAnalyzer(IndexSchema.java:434)
at
org.apache.lucene.analysis.DelegatingAnalyzerWrapper$DelegatingReuseStrategy.getReusableComponents(DelegatingAnalyzerWrapper.java:84)
at
org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:191)
at
org.apache.lucene.util.QueryBuilder.createFieldQuery(QueryBuilder.java:240)
at
org.apache.solr.parser.SolrQueryParserBase.newFieldQuery(SolrQueryParserBase.java:518)
at
org.apache.solr.parser.QueryParser.newFieldQuery(QueryParser.java:62)
at
org.apache.solr.parser.SolrQueryParserBase.getFieldQuery(SolrQueryParserBase.java:1148)
at
org.apache.solr.parser.QueryParser.MultiTerm(QueryParser.java:593)
at org.apache.solr.parser.QueryParser.Query(QueryParser.java:142)
at org.apache.solr.parser.QueryParser.Clause(QueryParser.java:282)
at org.apache.solr.parser.QueryParser.Query(QueryParser.java:162)
at org.apache.solr.parser.QueryParser.Clause(QueryParser.java:282)
at org.apache.solr.parser.QueryParser.Query(QueryParser.java:162)
at
org.apache.solr.parser.QueryParser.TopLevelQuery(QueryParser.java:131)
at
org.apache.solr.parser.SolrQueryParserBase.parse(SolrQueryParserBase.java:254)
at org.apache.solr.search.LuceneQParser.parse(LuceneQParser.java:49)
at org.apache.solr.search.QParser.getQuery(QParser.java:169)
at
org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:207)
at
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:269)
at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
at
org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:710)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
at
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
at
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
at
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
at
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:534)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
at
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
at
org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:251)
at
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
at
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecutePro

Last Modified Timestamp

2018-12-19 Thread Antony A
Hello Solr Users,

I am trying to figure out if there is a reason for "Last Modified: about
20 hours ago" remaining unchanged after a full data import into Solr. I am
running Solr Cloud 7.2.1.

I do see this value, and also the numDocs value, change on a delta import.

Thanks,
Antony


Re: Basic Auth Permission

2018-12-04 Thread Antony A
I run on Solr cloud 7.2.1

Sent from my mobile. Please excuse any typos.

> On Dec 4, 2018, at 2:57 PM, Terry Steichen  wrote:
> 
> I think there's been some confusion on which standalone versions support
> authentication.  I'm using 6.6 in cloud mode (purely so the
> authentication will work).  Some of the documentation seems to say that
> only cloud implementations support it, but others (like the experts on
> this forum) say that later versions (including yours) support it in
> standalone mode.
> 
>> On 12/4/18 4:14 PM, yydpkm wrote:
>> I am using standalone Solr 7.4.0. Are you using cloud or standalone? Not sure
>> if that cause the problem or not.
>> 
>> 
>> 
>> --
>> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
>> 


Re: Collection specific permission not working

2018-12-04 Thread Antony A
Just curious: did you try not making the path variable a list?
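Concretely, the suggestion is to compare the two shapes of the "path" value.
A sketch of both, with the field values copied from the post (including its
"permisson" spelling) and the single-path permission name invented here:

```python
import json

# "path" as a list, exactly as posted:
path_as_list = {"name": "permissonA", "collection": "A",
                "path": ["/query", "/select", "/get"], "role": "readA"}

# "path" as a single string, one permission per handler -- the variant
# being suggested; the name "permissonA-select" is hypothetical:
path_as_string = {"name": "permissonA-select", "collection": "A",
                  "path": "/select", "role": "readA"}

print(json.dumps(path_as_string))
```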

On Tue, Dec 4, 2018 at 11:15 AM yydpkm  wrote:

> Hi,
> I am using standalone Solr 7.4. Right now I have 2 collections A and B.
> Users a and b. I want to let only a can read and query A, b can read and
> query B. But it doesn't work. I have tried similar in
>
> https://lucidworks.com/2017/04/14/securing-solr-tips-tricks-and-other-things-you-really-need-to-know/
> but this doesn't work as well.
> Right now I have:
>
> {"name":"permissonA","collection":"A","path":["/query","/select","/get"],"role":
> "readA"}
>
> {"name":"permissonB","collection":"B","path":["/query","/select","/get"],"role":
> "readB"}
> But A still can access B and B can access A.
>
> Where is wrong?
>
> Thanks,
> Rick
>
>
>
>
> --
> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
>


Re: Basic Auth Permission

2018-12-04 Thread Antony A
Hi Rick,

This is how I was able to restrict a user role (user1) to its own
collection. Hopefully it helps.

   "permissions": [
 {"name": "*", "path": "/dataimport", "params": {"command":
["status"]}, "role": "*"},
 {"collection": "name", "path": "/admin/file", "role": ["user1",
"admin"]},
 {"collection": "name", "path": "/files", "role": ["user1", "admin"]},
 {"collection": "name", "path": "/admin/collections", "params":
{"action": ["LIST"]}, "role": ["user1", "admin"]},
 {"collection": "name", "path": "/dataimport", "role": ["user1",
"admin"]},
 {"collection": "name", "path": "/select", "role": ["user1", "admin"]},
 {"collection": "name", "name": "update", "role": ["user1", "admin"]},
 {"collection": "name", "name": "collection-admin-read", "role":
["user1", "admin"]},
 {"collection": "name", "name": "schema-read", "role": ["user1",
"admin"]},
 {"collection": "name", "name": "core-admin-read", "role": ["user1",
"admin"]},
 {"collection": "null", "path": "/admin/zookeeper", "role": ["admin"]},
 {"name": "security-read", "role": ["admin"]},
 {"name": "schema-edit", "role": ["admin"]},
 {"name": "config-edit", "role": ["admin"]},
 {"name": "core-admin-edit", "role": ["admin"]},
 {"name": "security-read", "role": ["admin"]},
 {"name": "collection-admin-edit", "role": ["admin"]},
 {"name": "security-edit", "role": ["admin"]}
]
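Entries like these can also be installed through the authorization API's
set-permission command instead of hand-editing security.json. A sketch of
building one such payload; every name here is hypothetical:

```python
import json

def set_permission(name, role, collection=None, path=None):
    """Build a 'set-permission' command body for POST /solr/admin/authorization.
    Predefined permissions (e.g. "schema-read") need only name and role;
    custom permissions additionally scope by collection and/or path."""
    perm = {"name": name, "role": role}
    if collection is not None:
        perm["collection"] = collection
    if path is not None:
        perm["path"] = path
    return {"set-permission": perm}

# Hypothetical permission, mirroring the /select entry in the list above:
payload = set_permission("read-mycoll", ["user1", "admin"],
                         collection="name", path="/select")
print(json.dumps(payload))
```

The body would be POSTed to /solr/admin/authorization as a user holding the
security-edit permission.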

Thanks,
Antony


On Tue, Dec 4, 2018 at 10:07 AM Terry Steichen  wrote:

> In setting his permission, Antony said he set "path": "/admin/file".  I
> use "path":"/*" - that may be too restrictive for you, but it works fine
> (for me).
>
> On 12/4/18 9:55 AM, yydpkm wrote:
> > Hi Antony,
> >
> > Have you solved this? I am facing the same thing. Other users can still
> do
> > /select after I set the permission path and collection.
> >
> > Best,
> > Rick
> >
> >
> >
> > --
> > Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
> >
>


Re: Collection API - ADDREPLICA

2018-08-02 Thread Antony A
I was able to run the Collections API successfully after upgrading all the
replicas in the Solr cloud.
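For reference, the call that failed mid-upgrade and succeeded afterwards is
the Collections API ADDREPLICA command. A sketch of building the request;
collection, shard, and node names are hypothetical:

```python
from urllib.parse import urlencode

def addreplica_url(base, collection, shard, node=None):
    """Build the Collections API ADDREPLICA request; 'node' optionally pins
    the new replica to a specific node instead of letting Solr choose."""
    params = {"action": "ADDREPLICA", "collection": collection, "shard": shard}
    if node:
        params["node"] = node
    return base + "/solr/admin/collections?" + urlencode(params)

# Hypothetical names; GET this URL against any node in the cluster.
print(addreplica_url("http://localhost:8983", "mycollection", "shard1"))
```

Since the command is dispatched through the overseer and deserialized on
other nodes, mixed 6.x/7.x replicas can hit exactly the InvalidClassException
below, which is why finishing the upgrade everywhere resolved it.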

Thanks,
Antony

On Wed, Aug 1, 2018 at 4:05 PM, Antony A  wrote:

> Am I running into this bug?
>
> https://issues.apache.org/jira/browse/SOLR-7055
>
> Can you help me and let me know if there is a workaround?
>
> Thanks,
> Antony
>
> On Wed, Aug 1, 2018 at 2:54 PM, Antony A  wrote:
>
>> Hi,
>>
>> I am running into issues adding a replica to a collection. I am doing an
>> in-place upgrade from 6.2.1 to 7.2.1. I checked the solr.jar and have the
>> 7.2.1 version.
>>
>> {
>>   "responseHeader":{
>> "status":500,
>> "QTime":119},
>>   "error":{
>> "metadata":[
>>   "error-class","org.apache.solr.common.SolrException",
>>   "root-error-class","java.io.InvalidClassException"],
>> "msg":"java.io.InvalidClassException: 
>> org.apache.solr.common.util.SimpleOrderedMap; local class incompatible: 
>> stream classdesc serialVersionUID = -2149411884323073227, local class 
>> serialVersionUID = 4921066926612345812",
>> "trace":"org.apache.solr.common.SolrException: 
>> java.io.InvalidClassException: org.apache.solr.common.util.SimpleOrderedMap; 
>> local class incompatible: stream classdesc serialVersionUID = 
>> -2149411884323073227, local class serialVersionUID = 
>> 4921066926612345812\n\tat 
>> org.apache.solr.client.solrj.SolrResponse.deserialize(SolrResponse.java:61)\n\tat
>>  
>> org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:304)\n\tat
>>  
>> org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:246)\n\tat
>>  
>> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:224)\n\tat
>>  
>> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)\n\tat
>>  
>> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:735)\n\tat
>>  
>> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:716)\n\tat
>>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:497)\n\tat 
>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)\n\tat
>>  
>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)\n\tat
>>  
>> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)\n\tat
>>  
>> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)\n\tat
>>  
>> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>>  
>> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>>  
>> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat
>>  
>> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)\n\tat
>>  
>> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)\n\tat
>>  
>> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)\n\tat
>>  
>> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)\n\tat
>>  
>> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
>>  
>> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)\n\tat
>>  
>> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)\n\tat
>>  
>> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat
>>  
>> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat
>>  
>> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat
>>  org.eclipse.jetty.server.Server.handle(Server.java:534)\n\tat 
>> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)\n\tat 
>> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)\n\tat
>>  
>> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)\n\tat
>>  org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)\n\tat 
>> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)\n\tat
>>  
>> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)\n\tat
>>  
>> org.eclipse.jetty.util.th

Re: Collection API - ADDREPLICA

2018-08-01 Thread Antony A
Am I running into this bug?

https://issues.apache.org/jira/browse/SOLR-7055

Can you help me and let me know if there is a workaround?

Thanks,
Antony

On Wed, Aug 1, 2018 at 2:54 PM, Antony A  wrote:

> Hi,
>
> I am running into issues adding a replica to a collection. I am doing an
> in-place upgrade from 6.2.1 to 7.2.1. I checked the solr.jar and have the
> 7.2.1 version.
>
> {
>   "responseHeader":{
> "status":500,
> "QTime":119},
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   "root-error-class","java.io.InvalidClassException"],
> "msg":"java.io.InvalidClassException: 
> org.apache.solr.common.util.SimpleOrderedMap; local class incompatible: 
> stream classdesc serialVersionUID = -2149411884323073227, local class 
> serialVersionUID = 4921066926612345812",
> "trace":"org.apache.solr.common.SolrException: 
> java.io.InvalidClassException: org.apache.solr.common.util.SimpleOrderedMap; 
> local class incompatible: stream classdesc serialVersionUID = 
> -2149411884323073227, local class serialVersionUID = 
> 4921066926612345812\n\tat 
> org.apache.solr.client.solrj.SolrResponse.deserialize(SolrResponse.java:61)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:304)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:246)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:224)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)\n\tat
>  
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:735)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:716)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:497)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat
>  
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:534)\n\tat 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)\n\tat 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)\n\tat
>  
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)\n\tat
>  org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)\n\tat 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)\n\tat
>  
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)\n\tat
>  
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)\n\tat
>  
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)\n\tat
>  
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)\n\tat
>  
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)\n\tat
>  java.lang.Thread.run(Thread.java:745)\nCaused by: 
> java.io.InvalidClassException: org.apache.solr.common.util.SimpleOrd

Collection API - ADDREPLICA

2018-08-01 Thread Antony A
Hi,

I am running into issues adding a replica to a collection. I am doing an
in-place upgrade from 6.2.1 to 7.2.1. I checked the solr.jar and have the
7.2.1 version.

{
  "responseHeader":{
"status":500,
"QTime":119},
  "error":{
"metadata":[
  "error-class","org.apache.solr.common.SolrException",
  "root-error-class","java.io.InvalidClassException"],
"msg":"java.io.InvalidClassException:
org.apache.solr.common.util.SimpleOrderedMap; local class
incompatible: stream classdesc serialVersionUID =
-2149411884323073227, local class serialVersionUID =
4921066926612345812",
"trace":"org.apache.solr.common.SolrException:
java.io.InvalidClassException:
org.apache.solr.common.util.SimpleOrderedMap; local class
incompatible: stream classdesc serialVersionUID =
-2149411884323073227, local class serialVersionUID =
4921066926612345812\n\tat
org.apache.solr.client.solrj.SolrResponse.deserialize(SolrResponse.java:61)\n\tat
org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:304)\n\tat
org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:246)\n\tat
org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:224)\n\tat
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)\n\tat
org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:735)\n\tat
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:716)\n\tat
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:497)\n\tat
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)\n\tat
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)\n\tat
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)\n\tat
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)\n\tat
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)\n\tat
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)\n\tat
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)\n\tat
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)\n\tat
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)\n\tat
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)\n\tat
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat
org.eclipse.jetty.server.Server.handle(Server.java:534)\n\tat
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)\n\tat
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)\n\tat
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)\n\tat
org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)\n\tat
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)\n\tat
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)\n\tat
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)\n\tat
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)\n\tat
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)\n\tat
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)\n\tat
java.lang.Thread.run(Thread.java:745)\nCaused by:
java.io.InvalidClassException:
org.apache.solr.common.util.SimpleOrderedMap; local class
incompatible: stream classdesc serialVersionUID =
-2149411884323073227, local class serialVersionUID =
4921066926612345812\n\tat
java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:616)\n\tat
java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1630)\n\tat
java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1521)\n\tat
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1781)\n\tat
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)\n\tat
java.io.ObjectInputStream.readObject(ObjectInputStream.java:373)\n\tat
java.util.ArrayList.readObject(ArrayList.java:791)\n\tat
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n\tat
sun.reflect.DelegatingMethodAcc
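The InvalidClassException at the root of the trace above means the serialized response object was written by a different version of the class than the one on the receiving node's classpath, i.e. version skew between nodes mid-upgrade. A minimal sketch for checking which serialVersionUID a node's classpath actually carries (java.util.ArrayList is used as a stand-in here; to inspect org.apache.solr.common.util.SimpleOrderedMap you would run the same lookup with that node's solr-solrj jar on the classpath):

```java
import java.io.ObjectStreamClass;
import java.util.ArrayList;

public class SerialUidCheck {
    // Returns the serialVersionUID that the local classpath carries for a
    // class; comparing this value across nodes reveals which ones still
    // run old jars.
    static long localSerialVersionUid(Class<?> cls) {
        return ObjectStreamClass.lookup(cls).getSerialVersionUID();
    }

    public static void main(String[] args) {
        // Stand-in class: ArrayList's UID is fixed, so the output is stable.
        System.out.println("java.util.ArrayList serialVersionUID = "
                + localSerialVersionUid(ArrayList.class));
    }
}
```

If the value printed on one node differs from another node's value for the same class, the "stream classdesc" vs "local class" mismatch in the error above is expected until every node runs the same jars.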

Collection level Permission

2018-06-18 Thread Antony A
Hi,

I am trying to find some help with the format of the collection name under
permissions.

The collection can be "*", "null", "collection_name". Is there a way to
group a set of collections?

example:

This format is not working.

{"collection": ["colname1, colname2, colname3"], "path": "/select", "role":
["dev"], "index": 1},

Thanks,
Antony
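For what it's worth, one likely fix (untested here, and assuming the rule-based authorization plugin accepts an array of collection names): make each collection its own array element instead of one comma-joined string:

```json
{"collection": ["colname1", "colname2", "colname3"],
 "path": "/select",
 "role": ["dev"],
 "index": 1}
```

If the plugin only accepts a single name per rule in your version, the fallback is one rule per collection, each granting the same role.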


Basic Auth Permission

2018-06-08 Thread Antony A
Hello,

I am trying to restrict paths/params to the users of an individual
collection through the Solr UI.

Here is the permission that I have for an user.

{"collection": "collection_name", "path": "/admin/file", "role": ["
collection_user"]}

I am still not able to restrict a user from accessing another collection's
files, such as solrconfig, solr-data-config, etc.

Is it possible to define a collection-level permission for this path?

Thanks,
Antony


Re: Shard size variation

2018-04-30 Thread Antony A
Thank you all. I have around 70% free space in production. I will compute the
space needed for the additional fields.


Sent from my mobile. Please excuse any typos.

> On Apr 30, 2018, at 5:10 PM, Erick Erickson  wrote:
> 
> There's really no good way to purge deleted documents from the index
> other than to wait until merging happens.
> 
> Optimize/forceMerge and expungeDeletes both suffer from the problem
> that they create massive segments that then stick around for a very
> long time, see:
> https://lucidworks.com/2017/10/13/segment-merging-deleted-documents-optimize-may-bad/
> 
> Best,
> Erick
> 
>> On Mon, Apr 30, 2018 at 1:56 PM, Michael Joyner  wrote:
>> Based on experience, 2x headroom is not always enough, sometimes
>> not even 3x, if you are optimizing from many segments down to 1 segment in a
>> single go.
>> 
>> We have however figured out a way that can work with as little as 51% free
>> space via the following iteration cycle:
>> 
>> public void solrOptimize() {
>>     int initialMaxSegments = 256;
>>     int finalMaxSegments = 1;
>>     if (isShowSegmentCounter()) {
>>         log.info("Optimizing ...");
>>     }
>>     try (SolrClient solrServerInstance = getSolrClientInstance()) {
>>         for (int segments = initialMaxSegments;
>>                 segments >= finalMaxSegments; segments--) {
>>             if (isShowSegmentCounter()) {
>>                 System.out.println("Optimizing to a max of "
>>                         + segments + " segments.");
>>             }
>>             solrServerInstance.optimize(true, true, segments);
>>         }
>>     } catch (SolrServerException | IOException e) {
>>         throw new RuntimeException(e);
>>     }
>> }
>> 
>> 
>>> On 04/30/2018 04:23 PM, Walter Underwood wrote:
>>> 
>>> You need 2X the minimum index size in disk space anyway, so don’t worry
>>> about keeping the indexes as small as possible. Worry about having enough
>>> headroom.
>>> 
>>> If your indexes are 250 GB, you need 250 GB of free space.
>>> 
>>> wunder
>>> Walter Underwood
>>> wun...@wunderwood.org
>>> http://observer.wunderwood.org/  (my blog)
>>> 
>>>> On Apr 30, 2018, at 1:13 PM, Antony A  wrote:
>>>> 
>>>> Thanks Erick/Deepak.
>>>> 
>>>> The cloud is running on baremetal (128 GB/24 cpu).
>>>> 
>>>> Is there an option to run a compaction on the data files to make the size
>>>> equal on both clouds? I am trying to find all the options before I add
>>>> the new fields into the production cloud.
>>>> 
>>>> Thanks
>>>> AA
>>>> 
>>>> On Mon, Apr 30, 2018 at 10:45 AM, Erick Erickson
>>>> 
>>>> wrote:
>>>> 
>>>>> Anthony:
>>>>> 
>>>>> You are probably seeing the results of removing deleted documents from
>>>>> the shards as they're merged. Even on replicas in the same _shard_,
>>>>> the size of the index on disk won't necessarily be identical. This has
>>>>> to do with which segments are selected for merging, which are not
>>>>> necessarily coordinated across replicas.
>>>>> 
>>>>> The test is if the number of docs on each collection is the same. If
>>>>> it is, then don't worry about index sizes.
>>>>> 
>>>>> Best,
>>>>> Erick
>>>>> 
>>>>>> On Mon, Apr 30, 2018 at 9:38 AM, Deepak Goel  wrote:
>>>>>> 
>>>>>> Could you please also give the machine details of the two clouds you
>>>>>> are
>>>>>> running?
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> Deepak
>>>>>> "The greatness of a nation can be judged by the way its animals are
>>>>>> treated. Please stop cruelty to Animals, become a Vegan"
>>>>>> 
>>>>>> +91 73500 12833
>>>>>> deic...@gmail.com
>>>>>> 
>>>>>> Facebook: https://www.facebook.com/deicool
>>>>>> LinkedIn: www.linkedin.com/in/deicool
>>>>>> 
>>>>>> "Plant a Tree, Go Green"
>>>>>> 
>>>>>> Make In India : http://www.makeinindia.com/home
>>>>>> 
>>>>>&g

Re: Shard size variation

2018-04-30 Thread Antony A
Thanks Erick/Deepak.

The cloud is running on baremetal (128 GB/24 cpu).

Is there an option to run a compaction on the data files to make the size
equal on both clouds? I am trying to find all the options before I add the
new fields into the production cloud.

Thanks
AA

On Mon, Apr 30, 2018 at 10:45 AM, Erick Erickson 
wrote:

> Anthony:
>
> You are probably seeing the results of removing deleted documents from
> the shards as they're merged. Even on replicas in the same _shard_,
> the size of the index on disk won't necessarily be identical. This has
> to do with which segments are selected for merging, which are not
> necessarily coordinated across replicas.
>
> The test is if the number of docs on each collection is the same. If
> it is, then don't worry about index sizes.
>
> Best,
> Erick
>
> On Mon, Apr 30, 2018 at 9:38 AM, Deepak Goel  wrote:
> > Could you please also give the machine details of the two clouds you are
> > running?
> >
> >
> >
> > Deepak
> > "The greatness of a nation can be judged by the way its animals are
> > treated. Please stop cruelty to Animals, become a Vegan"
> >
> > +91 73500 12833
> > deic...@gmail.com
> >
> > Facebook: https://www.facebook.com/deicool
> > LinkedIn: www.linkedin.com/in/deicool
> >
> > "Plant a Tree, Go Green"
> >
> > Make In India : http://www.makeinindia.com/home
> >
> > On Mon, Apr 30, 2018 at 9:51 PM, Antony A 
> wrote:
> >
> >> Hi Shawn,
> >>
> >> The cloud is running version 6.2.1. with ClassicIndexSchemaFactory
> >>
> >> The sum of size from admin UI on all the shards is around 265 G vs 224 G
> >> between the two clouds.
> >>
> >> I created the collection using "numShards" so compositeId router.
> >>
> >> If you need more information, please let me know.
> >>
> >> Thanks
> >> AA
> >>
> >> On Mon, Apr 30, 2018 at 10:04 AM, Shawn Heisey 
> >> wrote:
> >>
> >> > On 4/30/2018 9:51 AM, Antony A wrote:
> >> >
> >> >> I am running two separate solr clouds. I have 8 shards in each with a
> >> >> total
> >> >> of 300 million documents. Both the clouds are indexing the document
> from
> >> >> the same source/configuration.
> >> >>
> >> >> I am noticing there is a difference in the size of the collection
> >> between
> >> >> them. I am planning to add more shards to see if that helps solve the
> >> >> issue. Has anyone come across similar issue?
> >> >>
> >> >
> >> > There's no information here about exactly what you are seeing, what
> you
> >> > are expecting to see, and why you believe that what you are seeing is
> >> wrong.
> >> >
> >> > You did say that there is "a difference in size".  That is a very
> vague
> >> > problem description.
> >> >
> >> > FYI, unless a SolrCloud collection is using the implicit router, you
> >> > cannot add shards.  And if it *IS* using the implicit router, then you
> >> are
> >> > 100% in control of document routing -- Solr cannot influence that at
> all.
> >> >
> >> > Thanks,
> >> > Shawn
> >> >
> >> >
> >>
>


Re: Shard size variation

2018-04-30 Thread Antony A
Hi Shawn,

The cloud is running version 6.2.1 with ClassicIndexSchemaFactory.

The sum of size from admin UI on all the shards is around 265 G vs 224 G
between the two clouds.

I created the collection using "numShards", so it uses the compositeId router.

If you need more information, please let me know.

Thanks
AA

On Mon, Apr 30, 2018 at 10:04 AM, Shawn Heisey  wrote:

> On 4/30/2018 9:51 AM, Antony A wrote:
>
>> I am running two separate solr clouds. I have 8 shards in each with a
>> total
>> of 300 million documents. Both the clouds are indexing the document from
>> the same source/configuration.
>>
>> I am noticing there is a difference in the size of the collection between
>> them. I am planning to add more shards to see if that helps solve the
>> issue. Has anyone come across similar issue?
>>
>
> There's no information here about exactly what you are seeing, what you
> are expecting to see, and why you believe that what you are seeing is wrong.
>
> You did say that there is "a difference in size".  That is a very vague
> problem description.
>
> FYI, unless a SolrCloud collection is using the implicit router, you
> cannot add shards.  And if it *IS* using the implicit router, then you are
> 100% in control of document routing -- Solr cannot influence that at all.
>
> Thanks,
> Shawn
>
>


Shard size variation

2018-04-30 Thread Antony A
Hi all,

I am trying to find out if anyone has suggestions for the issue below.

I am running two separate Solr clouds, each with 8 shards and a total of
300 million documents. Both clouds are indexing documents from the same
source/configuration.

I am noticing there is a difference in the size of the collection between
them. I am planning to add more shards to see if that helps solve the
issue. Has anyone come across a similar issue?

Thanks
AA
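As the replies above explain, two clouds holding identical documents can still differ on disk because deleted documents are only reclaimed when the segments holding them happen to get merged, and merge timing is not coordinated. A toy, Solr-free sketch of that effect (the numbers and the merge policy here are invented for illustration; real Lucene uses a tiered merge policy):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class MergeSim {
    // A flushed segment holds live docs plus deleted docs not yet reclaimed.
    record Segment(int live, int deleted) {
        int size() { return live + deleted; }
    }

    // Flush fixed-size segments; whenever the segment count exceeds
    // `mergeAt`, merge the two oldest segments. Merging purges their
    // deleted docs, so a replica that merges later retains more deletes.
    static int totalSize(int docs, int deleteEvery, int segDocs, int mergeAt) {
        Deque<Segment> segs = new ArrayDeque<>();
        for (int i = 0; i < docs; i += segDocs) {
            int deleted = segDocs / deleteEvery;
            segs.add(new Segment(segDocs - deleted, deleted));
            while (segs.size() > mergeAt) {
                Segment a = segs.poll(), b = segs.poll();
                // Deletes are dropped during the merge.
                segs.addFirst(new Segment(a.live() + b.live(), 0));
            }
        }
        return segs.stream().mapToInt(Segment::size).sum();
    }

    public static void main(String[] args) {
        // Same docs, same delete rate; only the merge trigger differs.
        System.out.println("replica A size: "
                + totalSize(100_000, 10, 1_000, 10));
        System.out.println("replica B size: "
                + totalSize(100_000, 10, 1_000, 20));
    }
}
```

Both "replicas" end with the same live document count; the on-disk difference is exactly the deleted documents each one has not yet merged away, which is why Erick's advice is to compare document counts, not index sizes.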


Re: Solr 7.2

2018-04-13 Thread Antony A
Thank you Shawn & Edwin. It was a certificate error: the SAN did not list the
IPs in the format required for IP entries. After I updated the SAN list, I was
able to run the ADDREPLICA API correctly.

-Antony

On Thu, Apr 12, 2018 at 10:20 PM, Shawn Heisey  wrote:

> On 4/12/2018 9:48 PM, Antony A wrote:
>
>> Thank you. I was trying to create the collection using the API.
>> Unfortunately the API changes a bit between 6x to 7x.
>>
>> I posted the API that I used to create the collection and subsequently
>> when
>> trying to create cores for the same collection.
>>
>> https://pastebin.com/hrydZktX
>>
>
> You're going to need to look for errors in the solr.log file.  There
> should be something that explains what went wrong. The logfile often has
> more information than the response.
>
> The top guess I have is that Java couldn't validate the certificate for
> https, but without error logs, I can't say for sure.
>
> Thanks,
> Shawn
>
>


Re: Solr 7.2

2018-04-12 Thread Antony A
Hi Edwin,

Thank you. I was trying to create the collection using the API.
Unfortunately the API changed a bit between 6.x and 7.x.

I posted the API calls that I used to create the collection and,
subsequently, to create cores for the same collection.

https://pastebin.com/hrydZktX

Hopefully this helps.

Thanks

On Thu, Apr 12, 2018 at 2:06 PM, Antony A  wrote:

> Hi,
>
> I am trying to add a replica to the ssl-enabled solr cloud with external
> zookeeper ensemble.
>
> 2018-04-12 18:26:29.140 INFO  (qtp672320506-51) [   ]
> o.a.s.h.a.CollectionsHandler Invoked Collection Action :addreplica with
> params 
> node=_solr&action=ADDREPLICA&collection=collection_name&shard=shard1
> and sendToOCPQueue=true
>
> 2018-04-12 18:26:29.151 ERROR (OverseerThreadFactory-10-thre
> ad-5-processing-n:_solr) [   ] 
> o.a.s.c.OverseerCollectionMessageHandler
> Collection: tnlookup operation: addreplica 
> failed:org.apache.solr.common.SolrException:
> At lea
>
>


Solr 7.2

2018-04-12 Thread Antony A
Hi,

I am trying to add a replica to the ssl-enabled solr cloud with external
zookeeper ensemble.

2018-04-12 18:26:29.140 INFO  (qtp672320506-51) [   ]
o.a.s.h.a.CollectionsHandler Invoked Collection Action :addreplica with
params 
node=_solr&action=ADDREPLICA&collection=collection_name&shard=shard1
and sendToOCPQueue=true

2018-04-12 18:26:29.151 ERROR (OverseerThreadFactory-10-
thread-5-processing-n:_solr) [   ] o.a.s.c.
OverseerCollectionMessageHandler Collection: tnlookup operation: addreplica
failed:org.apache.solr.common.SolrException: At lea