Re: Solr Delete By Id Out of memory issue

2017-04-03 Thread Rohit Kanchan
Thanks everyone for replying to this issue. Just a final comment on the
issue I was closely working on: we have fixed it. It was a bug in our custom
component, which we wrote to convert delete-by-query into delete-by-id. We
were using BytesRef incorrectly: we were not making a deep copy, and that
was causing the OOM. We have changed that and are now making a deep copy.
The old deletes map now stays capped at its 1K capacity as expected.
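
For anyone hitting the same thing, the pattern at the heart of the fix is
sketched below (illustrative names only, not our actual component): when ids
are pulled out of a BytesRefHash, the shared scratch buffer has to be
deep-copied before being stored anywhere.

import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.BytesRefHash;

public class IdCollector {
    // Collect independent copies of the ids held in a BytesRefHash.
    public static List<BytesRef> collectIds(BytesRefHash hash) {
        List<BytesRef> ids = new ArrayList<>();
        BytesRef scratch = new BytesRef();
        for (int i = 0; i < hash.size(); i++) {
            hash.get(i, scratch);                   // fills the shared scratch buffer
            ids.add(BytesRef.deepCopyOf(scratch));  // deep copy; storing 'scratch' itself would
                                                    // leave every entry pointing at the same bytes
        }
        return ids;
    }
}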

After deploying this change we took another heap dump, and this map no
longer shows up among the leak suspects. Please let me know if anyone has
questions.

Thanks
Rohit


On Mon, Mar 27, 2017 at 11:56 AM, Rohit Kanchan 
wrote:

> Thanks Erick for replying back. I have deployed the changes to production;
> we will find out soon whether it is still causing OOMs or not. For commits
> we are doing auto commits after 10K docs or 30 secs.
> If I get time I will try to run a local test to check whether we hit an OOM
> because of the 1K map entries. I will update this thread with my findings.
> I really appreciate yours and Chris's responses.
>
> Thanks
> Rohit
>
>
> On Mon, Mar 27, 2017 at 10:47 AM, Erick Erickson 
> wrote:
>
>> Rohit:
>>
>> Well, whenever I see something like "I have this custom component..."
>> I immediately want the problem to be demonstrated without that custom
>> component before trying to debug Solr.
>>
>> As Chris explained, we can't clear the 1K entries. It's hard to
>> imagine why keeping the last 1,000 entries around would cause OOMs.
>>
>> You haven't demonstrated yet that after your latest change you still
>> get OOMs, you've just assumed so. After running for a "long time" do
>> you still see the problem after your changes?
>>
>> So before assuming it's a Solr bug, and after you demonstrate that
>> your latest change didn't solve the problem, you should try two
>> things:
>>
>> 1> as I suggested and Chris endorsed, try committing upon occasion
>> from your custom component. Or set your autocommit settings
>> appropriately if you haven't already.
>>
>> 2> run your deletes from the client as a test. You've created a custom
>> URP component because you "didn't want to run the queries from the
>> client". That's perfectly reasonable, it's just that to know where you
>> should be looking deleting from the client would eliminate your custom
>> code and tell us where to focus.
>>
>> Best,
>> Erick
>>
>>
>>
>> On Sat, Mar 25, 2017 at 1:21 PM, Rohit Kanchan 
>> wrote:
>> > I think we figured out the issue. When we were converting delete-by-query
>> > in a Solr handler we were not making a deep copy of BytesRef. We were
>> > keeping a reference to the same object, which was causing the old deletes
>> > map (LinkedHashMap) to accumulate more than 1K entries.
>> >
>> > But I think it is still not clearing those 1K entries. Eventually it will
>> > throw an OOM, because UpdateLog is not a singleton, and when there are
>> > many delete-by-id requests and the server is not restarted for a very long
>> > time it will eventually throw an OOM. I think we should clear this map
>> > when we are committing. I am not a committer; it would be great to get a
>> > reply from a committer. What do you guys think?
>> >
>> > Thanks
>> > Rohit
>> >
>> >
>> > On Wed, Mar 22, 2017 at 1:36 PM, Rohit Kanchan 
>> > wrote:
>> >
>> >> For commits we are relying on auto commits. We have defined the following
>> >> in our configs:
>> >>
>> >> <autoCommit>
>> >>   <maxDocs>10000</maxDocs>
>> >>   <maxTime>30000</maxTime>
>> >>   <openSearcher>false</openSearcher>
>> >> </autoCommit>
>> >>
>> >> <autoSoftCommit>
>> >>   <maxTime>15000</maxTime>
>> >> </autoSoftCommit>
>> >>
>> >> One thing I would like to mention is that we are not calling deleteById
>> >> directly from the client. We have created an update chain and added a
>> >> processor there. In this processor we query first, collect all the
>> >> BytesRefHash entries, get each BytesRef out of it, and set it as the
>> >> indexedId. After collecting the indexedIds we use those ids to call
>> >> delete-by-id. We do this because we do not want to query Solr before
>> >> deleting on the client side. It is possible that there is a bug in this
>> >> code, but I am not sure, because when I run tests locally it does not
>> >> show any issues. I am trying to remote debug now.

Re: Solr Delete By Id Out of memory issue

2017-03-27 Thread Rohit Kanchan
Thanks Erick for replying back. I have deployed the changes to production;
we will find out soon whether it is still causing OOMs or not. For commits
we are doing auto commits after 10K docs or 30 secs.
If I get time I will try to run a local test to check whether we hit an OOM
because of the 1K map entries. I will update this thread with my findings.
I really appreciate yours and Chris's responses.

Thanks
Rohit


On Mon, Mar 27, 2017 at 10:47 AM, Erick Erickson 
wrote:

> Rohit:
>
> Well, whenever I see something like "I have this custom component..."
> I immediately want the problem to be demonstrated without that custom
> component before trying to debug Solr.
>
> As Chris explained, we can't clear the 1K entries. It's hard to
> imagine why keeping the last 1,000 entries around would cause OOMs.
>
> You haven't demonstrated yet that after your latest change you still
> get OOMs, you've just assumed so. After running for a "long time" do
> you still see the problem after your changes?
>
> So before assuming it's a Solr bug, and after you demonstrate that
> your latest change didn't solve the problem, you should try two
> things:
>
> 1> as I suggested and Chris endorsed, try committing upon occasion
> from your custom component. Or set your autocommit settings
> appropriately if you haven't already.
>
> 2> run your deletes from the client as a test. You've created a custom
> URP component because you "didn't want to run the queries from the
> client". That's perfectly reasonable, it's just that to know where you
> should be looking deleting from the client would eliminate your custom
> code and tell us where to focus.
>
> Best,
> Erick
>
>
>
> On Sat, Mar 25, 2017 at 1:21 PM, Rohit Kanchan 
> wrote:
> > I think we figured out the issue. When we were converting delete-by-query
> > in a Solr handler we were not making a deep copy of BytesRef. We were
> > keeping a reference to the same object, which was causing the old deletes
> > map (LinkedHashMap) to accumulate more than 1K entries.
> >
> > But I think it is still not clearing those 1K entries. Eventually it will
> > throw an OOM, because UpdateLog is not a singleton, and when there are
> > many delete-by-id requests and the server is not restarted for a very long
> > time it will eventually throw an OOM. I think we should clear this map
> > when we are committing. I am not a committer; it would be great to get a
> > reply from a committer. What do you guys think?
> >
> > Thanks
> > Rohit
> >
> >
> > On Wed, Mar 22, 2017 at 1:36 PM, Rohit Kanchan 
> > wrote:
> >
> >> For commits we are relying on auto commits. We have defined the following
> >> in our configs:
> >>
> >> <autoCommit>
> >>   <maxDocs>10000</maxDocs>
> >>   <maxTime>30000</maxTime>
> >>   <openSearcher>false</openSearcher>
> >> </autoCommit>
> >>
> >> <autoSoftCommit>
> >>   <maxTime>15000</maxTime>
> >> </autoSoftCommit>
> >>
> >> One thing I would like to mention is that we are not calling deleteById
> >> directly from the client. We have created an update chain and added a
> >> processor there. In this processor we query first, collect all the
> >> BytesRefHash entries, get each BytesRef out of it, and set it as the
> >> indexedId. After collecting the indexedIds we use those ids to call
> >> delete-by-id. We do this because we do not want to query Solr before
> >> deleting on the client side. It is possible that there is a bug in this
> >> code, but I am not sure, because when I run tests locally it does not
> >> show any issues. I am trying to remote debug now.
> >>
> >> Thanks
> >> Rohit
> >>
> >>
> >> On Wed, Mar 22, 2017 at 9:57 AM, Chris Hostetter <
> hossman_luc...@fucit.org
> >> > wrote:
> >>
> >>>
> >>> : OK, The whole DBQ thing baffles the heck out of me so this may be
> >>> : totally off base. But would committing help here? Or at least be
> worth
> >>> : a test?
> >>>
> >>> this isn't DBQ -- the OP specifically said deleteById, and that the
> >>> oldDeletes map (only used for DBI) was the problem according to the heap
> >>> dumps they looked at.
> >>>
> >>> I suspect you are correct about the root cause of the OOMs ... perhaps
> the
> >>> OP isn't using hard/soft commits effectively enough and the uncommitted
> >>> data is what's causing the OOM ... hard to say w/o more details. or
> >>> confirmation of exactly what the OP was looking at in their claim below
> >>> about the heap dump
> >>>
> >>>
> >>> : > : Thanks for replying. We are using Solr 6.1 version. Even I saw
> that
> >>> it is
> >>> : > : bounded by 1K count, but after looking at heap dump I was amazed
> >>> how can it
> >>> : > : keep more than 1K entries. But Yes I see around 7M entries
> >>> according to
> >>> : > : heap dump and around 17G of memory occupied by BytesRef there.
> >>> : >
> >>> : > what exactly are you looking at when you say you see "7M entries" ?
> >>> : >
> >>> : > are you sure you aren't confusing the keys in oldDeletes with other
> >>> : > instances of BytesRef in the JVM?
> >>>
> >>>
> >>> -Hoss
> >>> http://www.lucidworks.com/
> >>>
> >>
> >>
>


Re: Solr Delete By Id Out of memory issue

2017-03-25 Thread Rohit Kanchan
I think we figured out the issue. When we were converting delete-by-query in
a Solr handler we were not making a deep copy of BytesRef. We were keeping a
reference to the same object, which was causing the old deletes map
(LinkedHashMap) to accumulate more than 1K entries.

But I think it is still not clearing those 1K entries. Eventually it will
throw an OOM, because UpdateLog is not a singleton, and when there are many
delete-by-id requests and the server is not restarted for a very long time
it will eventually throw an OOM. I think we should clear this map when we
are committing. I am not a committer; it would be great to get a reply from
a committer. What do you guys think?

Thanks
Rohit


On Wed, Mar 22, 2017 at 1:36 PM, Rohit Kanchan 
wrote:

> For commits we are relying on auto commits. We have defined the following in
> our configs:
>
> <autoCommit>
>   <maxDocs>10000</maxDocs>
>   <maxTime>30000</maxTime>
>   <openSearcher>false</openSearcher>
> </autoCommit>
>
> <autoSoftCommit>
>   <maxTime>15000</maxTime>
> </autoSoftCommit>
>
> One thing I would like to mention is that we are not calling deleteById
> directly from the client. We have created an update chain and added a
> processor there. In this processor we query first, collect all the
> BytesRefHash entries, get each BytesRef out of it, and set it as the
> indexedId. After collecting the indexedIds we use those ids to call
> delete-by-id. We do this because we do not want to query Solr before
> deleting on the client side. It is possible that there is a bug in this
> code, but I am not sure, because when I run tests locally it does not show
> any issues. I am trying to remote debug now.
>
> Thanks
> Rohit
>
>
> On Wed, Mar 22, 2017 at 9:57 AM, Chris Hostetter  > wrote:
>
>>
>> : OK, The whole DBQ thing baffles the heck out of me so this may be
>> : totally off base. But would committing help here? Or at least be worth
>> : a test?
>>
> >> this isn't DBQ -- the OP specifically said deleteById, and that the
> >> oldDeletes map (only used for DBI) was the problem according to the heap
> >> dumps they looked at.
>>
>> I suspect you are correct about the root cause of the OOMs ... perhaps the
>> OP isn't using hard/soft commits effectively enough and the uncommitted
>> data is what's causing the OOM ... hard to say w/o more details. or
>> confirmation of exactly what the OP was looking at in their claim below
>> about the heap dump
>>
>>
>> : > : Thanks for replying. We are using Solr 6.1 version. Even I saw that
>> it is
>> : > : bounded by 1K count, but after looking at heap dump I was amazed
>> how can it
>> : > : keep more than 1K entries. But Yes I see around 7M entries
>> according to
>> : > : heap dump and around 17G of memory occupied by BytesRef there.
>> : >
>> : > what exactly are you looking at when you say you see "7M entries" ?
>> : >
>> : > are you sure you aren't confusing the keys in oldDeletes with other
>> : > instances of BytesRef in the JVM?
>>
>>
>> -Hoss
>> http://www.lucidworks.com/
>>
>
>


Re: Solr Delete By Id Out of memory issue

2017-03-22 Thread Rohit Kanchan
For commits we are relying on auto commits. We have defined the following in
our configs:

<autoCommit>
  <maxDocs>10000</maxDocs>
  <maxTime>30000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

<autoSoftCommit>
  <maxTime>15000</maxTime>
</autoSoftCommit>

One thing I would like to mention is that we are not calling deleteById
directly from the client. We have created an update chain and added a
processor there. In this processor we query first, collect all the
BytesRefHash entries, get each BytesRef out of it, and set it as the
indexedId. After collecting the indexedIds we use those ids to call
delete-by-id. We do this because we do not want to query Solr before
deleting on the client side. It is possible that there is a bug in this
code, but I am not sure, because when I run tests locally it does not show
any issues. I am trying to remote debug now.
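
In case it is useful to others, a rough skeleton of this kind of processor
is sketched below. The class name and the stubbed-out search step are
illustrative only; this is not our actual code.

import java.io.IOException;
import java.util.Collections;
import java.util.List;
import org.apache.solr.update.DeleteUpdateCommand;
import org.apache.solr.update.processor.UpdateRequestProcessor;

public class DeleteByQueryToIdProcessor extends UpdateRequestProcessor {

    public DeleteByQueryToIdProcessor(UpdateRequestProcessor next) {
        super(next);
    }

    @Override
    public void processDelete(DeleteUpdateCommand cmd) throws IOException {
        if (cmd.getQuery() == null) {
            super.processDelete(cmd);   // already a delete-by-id, pass it through
            return;
        }
        // Placeholder for the search step described above: run cmd.getQuery()
        // and collect the unique-key values of the matching documents.
        List<String> matchingIds = Collections.emptyList();
        for (String id : matchingIds) {
            DeleteUpdateCommand byId = new DeleteUpdateCommand(cmd.getReq());
            byId.setId(id);
            super.processDelete(byId);  // forwarded as an individual delete-by-id
        }
    }
}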

Thanks
Rohit


On Wed, Mar 22, 2017 at 9:57 AM, Chris Hostetter 
wrote:

>
> : OK, The whole DBQ thing baffles the heck out of me so this may be
> : totally off base. But would committing help here? Or at least be worth
> : a test?
>
> this isn't DBQ -- the OP specifically said deleteById, and that the
> oldDeletes map (only used for DBI) was the problem according to the heap
> dumps they looked at.
>
> I suspect you are correct about the root cause of the OOMs ... perhaps the
> OP isn't using hard/soft commits effectively enough and the uncommitted
> data is what's causing the OOM ... hard to say w/o more details. or
> confirmation of exactly what the OP was looking at in their claim below
> about the heap dump
>
>
> : > : Thanks for replying. We are using Solr 6.1 version. Even I saw that
> it is
> : > : bounded by 1K count, but after looking at heap dump I was amazed how
> can it
> : > : keep more than 1K entries. But Yes I see around 7M entries according
> to
> : > : heap dump and around 17G of memory occupied by BytesRef there.
> : >
> : > what exactly are you looking at when you say you see "7M entries" ?
> : >
> : > are you sure you aren't confusing the keys in oldDeletes with other
> : > instances of BytesRef in the JVM?
>
>
> -Hoss
> http://www.lucidworks.com/
>


Re: Solr Delete By Id Out of memory issue

2017-03-21 Thread Rohit Kanchan
Hi Chris,

Thanks for replying. We are using Solr version 6.1. I also saw that it is
bounded by a 1K count, but after looking at the heap dump I was amazed that
it can keep more than 1K entries. Yes, I see around 7M entries in the heap
dump and around 17G of memory occupied by BytesRef there.

It would be better to know why the oldDeletes map is used there. I am still
digging; if I find something, I will share it.

Thanks
Rohit


On Tue, Mar 21, 2017 at 4:00 PM, Chris Hostetter 
wrote:

>
> : facing. We are storing messages in solr as documents. We are running a
> : pruning job every night to delete old message documents. We are deleting
> : old documents by calling multiple delete by id query to solr. Document
> : count can be in millions which we are deleting using SolrJ client. We are
> : using delete by id because it is faster than delete by query. It works
> : great for few days but after a week these delete by id get accumulated in
> : Linked hash map of UpdateLog (variable name as olddeletes). Once this map
> : is full then we are seeing out of memory.
>
> first off: what version of Solr are you running?
>
> UpdateLog.oldDeletes is bounded at numDeletesToKeep=1000 entries -- any
> more than that and the oldest entry is automatically deleted when more
> items are added.  So it doesn't really make sense to me that you would be
> seeing OOMs from this map filling up endlessly.
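>
> (For reference, a size-bounded map like that is typically built on
> LinkedHashMap's removeEldestEntry hook; this is a minimal illustrative
> sketch, not the actual Solr source:)
>
> import java.util.LinkedHashMap;
> import java.util.Map;
>
> class BoundedDeletes<K, V> extends LinkedHashMap<K, V> {
>     private final int maxEntries;
>
>     BoundedDeletes(int maxEntries) {
>         this.maxEntries = maxEntries;
>     }
>
>     @Override
>     protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
>         return size() > maxEntries;   // oldest entry is dropped once the cap is exceeded
>     }
> }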
>
> Are you seeing more than 1000 entries in this map when you look at your
> heap dumps?
>
> : I am not sure why it is keeping the reference of all old deletes.
>
> It's complicated -- the short answer is that it's protection against
> out-of-order updates arriving from other nodes in SolrCloud under highly
> concurrent updates.
>
>
>
> -Hoss
> http://www.lucidworks.com/
>


Solr Delete By Id Out of memory issue

2017-03-21 Thread Rohit Kanchan
Hi All,

I am looking for some help to solve an out-of-memory issue we are facing. We
store messages in Solr as documents, and we run a pruning job every night to
delete old message documents. We delete old documents by issuing multiple
delete-by-id requests to Solr. The document count can be in the millions,
and we delete them using the SolrJ client. We use delete-by-id because it is
faster than delete-by-query. It works great for a few days, but after a week
these delete-by-id entries accumulate in the LinkedHashMap of UpdateLog (the
variable named oldDeletes). Once this map is full we see out-of-memory
errors.
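
For context, a simplified SolrJ sketch of the kind of batched delete-by-id
calls described above is shown here; the ZooKeeper hosts, collection name,
batch size, and the explicit commit are placeholders and assumptions, not
our real setup.

import java.util.List;
import org.apache.solr.client.solrj.impl.CloudSolrClient;

public class NightlyPruner {
    public static void deleteOldMessages(List<String> oldIds) throws Exception {
        try (CloudSolrClient solr = new CloudSolrClient.Builder()
                .withZkHost("zk1:2181,zk2:2181,zk3:2181")    // placeholder ZK ensemble
                .build()) {
            solr.setDefaultCollection("messages");           // placeholder collection name
            int batchSize = 1000;                            // placeholder batch size
            for (int i = 0; i < oldIds.size(); i += batchSize) {
                List<String> batch = oldIds.subList(i, Math.min(i + batchSize, oldIds.size()));
                solr.deleteById(batch);                      // delete-by-id, not delete-by-query
            }
            solr.commit();                                   // make the deletes visible/durable
        }
    }
}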

We looked into a heap dump and found that this map stores BytesRef as the
key and LogPtr as the value. BytesRef is what is taking a lot of the memory;
it holds references to all the ids we are deleting. I am not sure why it
keeps references to all the old deletes.

I looked at the Solr code but could not trace how this map is cleaned. There
is a deleteAll method in UpdateLog, but only test cases call it.

Has anyone faced the same issue? I would really appreciate a reply to this
problem.

*Note*: We are running Solr in cloud mode, and because of this issue there
are long GC pauses, which first send a replica into recovery and then cause
the leader and the replica to crash.

-
Rohit


Re: SOLR vs mongdb

2016-11-23 Thread Rohit Kanchan
Hi Prateek,

I think you are talking about two different animals. Solr (actually embedded
Lucene) is a search engine where you can use features like faceting,
highlighting, etc.; it is also a document store that creates an inverted
index for each piece of text and maps it back to documents. MongoDB is also
a document store, but I think it only adds basic search capability. This is
my understanding. We are using Mongo for temporary storage, and I think it
is good when you want to store key-value documents in a collection without
any static schema. In Solr you need to define your schema, although you can
define dynamic fields too. This is all my understanding.

-
Rohit


On Wed, Nov 23, 2016 at 10:27 AM, Prateek Jain J <
prateek.j.j...@ericsson.com> wrote:

>
> Hi All,
>
> I have started to use MongoDB and Solr recently. Please feel free to
> correct me where my understanding is not up to the mark:
>
>
> 1. Solr is an indexing engine, but it stores both data and indexes in the
> same directory. We can select which fields to store/persist in Solr via
> schema.xml, but in a nutshell it's not possible to separate data from
> indexes; I can't remove all indexes and still have the persisted data in
> SOLR.
>
> 2. Solr's indexing capabilities (e.g. faceting, weighted search) are far
> better than those of other NoSQL DBs like MongoDB.
>
> 3. Both support scalability via sharding.
>
> 4. We can have an architecture where data is stored in a separate DB like
> MongoDB or MySQL, and SOLR connects to that DB and indexes the data (in
> SOLR).
>
> I tried googling "solr vs mongodb" and there are various threads on sites
> like Stack Overflow, but I still can't understand why anyone would go for
> MongoDB and when for SOLR (except for features like faceting, and maybe the
> CAP theorem). Are there any specific use cases for choosing NoSQL databases
> like MongoDB over SOLR?
>
>
> Regards,
> Prateek Jain
>
>


Re: Error about Unsupported major.minor version 52.0

2016-09-10 Thread Rohit Kanchan
With Java 8, you also need to upgrade to a Tomcat version that runs on Java
8. I think Tomcat 8.x is compiled with Java 8. I think you can also switch
your existing Tomcat to Java 8, but it may break somewhere for the same
reason.

Thanks
Rohit Kanchan


On Sat, Sep 10, 2016 at 2:38 AM, Brendan Humphreys <
bren...@canva.com.invalid> wrote:

> Solr 6.x requires Java 8 - you're using Java 7 (aka JDK 1.7).
>
> https://cwiki.apache.org/confluence/display/solr/Installing+Solr
>
> Cheers,
> -Brendan
>
>
>
>
>
>
> On 10 September 2016 at 19:21, Jiangenbo  wrote:
>
> > Hello Everyone,
> > Sorry for disturbing you all
> > I am facing a problem when I deploy SOLR 6.0 in TOMCAT under Windows XP.
> > My tools and their versions are:
> > JDK 1.7
> > SOLR 6.0
> > TOMCAT 7.0
> > MyEclipse 8.0 (for starting TOMCAT)
> > I don't know how to deal with it and would like to ask for your help, thanks!
> > The error information is below:
> > The error information is below:
> > SEVERE: Exception starting filter SolrRequestFilter
> > java.lang.UnsupportedClassVersionError: org/apache/solr/servlet/SolrDispatchFilter : Unsupported major.minor version 52.0 (unable to load class org.apache.solr.servlet.SolrDispatchFilter)
> > at org.apache.catalina.loader.WebappClassLoader.findClassInternal(WebappClassLoader.java:2961)
> > at org.apache.catalina.loader.WebappClassLoader.findClass(WebappClassLoader.java:1210)
> > at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1690)
> > at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1571)
> > at org.apache.catalina.core.DefaultInstanceManager.loadClass(DefaultInstanceManager.java:506)
> > at org.apache.catalina.core.DefaultInstanceManager.loadClassMaybePrivileged(DefaultInstanceManager.java:488)
> > at org.apache.catalina.core.DefaultInstanceManager.newInstance(DefaultInstanceManager.java:115)
> > at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:258)
> > at org.apache.catalina.core.ApplicationFilterConfig.<init>(ApplicationFilterConfig.java:105)
> > at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4830)
> > at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5510)
> > at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
> > at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1575)
> > at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1565)
> > at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> > at java.lang.Thread.run(Thread.java:745)
> >
> >
> >
> >
> >
> >
> >
> >
>


Re: Detecting down node with SolrJ

2016-09-10 Thread Rohit Kanchan
I think it is better to use the ZooKeeper data. SolrCloud keeps node status
up to date in ZooKeeper. If you are using cloud mode, you can read the
cluster state from ZooKeeper and get the status of a node from there; the
cluster state can give you information about your whole SolrCloud. I hope
this helps.
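
A rough SolrJ sketch of reading the live-nodes information from the cluster
state follows; the ZooKeeper connect string and node-name format are
assumptions for illustration.

import org.apache.solr.client.solrj.impl.CloudSolrClient;

public class LiveNodeCheck {
    public static boolean isNodeLive(String nodeName) throws Exception {
        try (CloudSolrClient solr = new CloudSolrClient.Builder()
                .withZkHost("zk1:2181,zk2:2181,zk3:2181")   // assumed ZK ensemble
                .build()) {
            solr.connect();   // loads the cluster state from ZooKeeper
            // Live nodes are registered by Solr as entries like "host:8983_solr".
            return solr.getZkStateReader()
                       .getClusterState()
                       .getLiveNodes()
                       .contains(nodeName);
        }
    }
}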

Thanks
Rohit Kanchan


On Fri, Sep 9, 2016 at 3:38 PM, Brent  wrote:

> Is there a way to tell whether or not a node at a specific address is up
> using a SolrJ API?
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Detecting-down-node-with-SolrJ-tp4295402.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>


Re: Modifying fl in QParser

2016-08-30 Thread Rohit Kanchan
We are dealing with the same thing. We have overridden QueryComponent (a
type of SearchComponent) and added the field to retrieve there; we set that
same field in the SolrParams taken from the query request. Depending on your
requirements, you will need to figure out how to override QueryComponent. I
hope this helps solve your problem.
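
A minimal sketch of that idea (the component and field names here are made
up for illustration, not our actual code):

import org.apache.solr.common.params.CommonParams;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.handler.component.ResponseBuilder;
import org.apache.solr.handler.component.SearchComponent;

public class AppendFieldComponent extends SearchComponent {

    @Override
    public void prepare(ResponseBuilder rb) {
        ModifiableSolrParams params = new ModifiableSolrParams(rb.req.getParams());
        String fl = params.get(CommonParams.FL);
        // Append an extra field to whatever field list the client asked for.
        String extra = "my_extra_field_s";
        params.set(CommonParams.FL, (fl == null || fl.isEmpty()) ? extra : fl + "," + extra);
        rb.req.setParams(params);
    }

    @Override
    public void process(ResponseBuilder rb) {
        // Nothing to do at process time; the fl change happens in prepare().
    }

    @Override
    public String getDescription() {
        return "Appends a field to fl";
    }
}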

Thanks
Rohit Kanchan


On Tue, Aug 30, 2016 at 5:11 PM, Erik Hatcher 
wrote:

> Personally, I don’t think a QParser(Plugin) is the right place to modify
> other parameters, only to create a Query object.   A QParser could be
> invoked from an fq, not just a q, and will get invoked on multiple nodes in
> SolrCloud, for example - this is why I think it’s not a good idea to do
> anything but return a Query.
>
> It is possible (in fact I’m dealing with this very situation with a client
> as we speak) to set parameters this way, but I don’t recommend it.   Create
> a SearchComponent to do this job instead.
>
> Erik
>
>
>
> > On Aug 9, 2016, at 10:23 AM, Beale, Jim (US-KOP) 
> wrote:
> >
> > Hi,
> >
> > Is it possible to modify the SolrParam, fl, to append selected dynamic
> fields, while rewriting a query in QParser.parse()?
> >
> > Thanks in advance!
> >
> >
> > Jim Beale
> > Senior Lead Developer
> > 2201 Renaissance Boulevard, King of Prussia, PA, 19406
> > Mobile: 610-220-3067
> >
> >
> >
>
>


Re: EmbeddedSolrServer problem when using one-jar-with-dependency including solr

2016-08-02 Thread Rohit Kanchan
We also faced the same issue when running an embedded Solr 6.1 server; I hit
it in our integration environment after deploying the project. Solr 6.1 uses
HttpClient 4.4.1, which I think is what the embedded Solr server is looking
for. I think that when the Solr core is loaded, an old HttpClient is being
pulled in from somewhere else in your Maven build. Check the dependency tree
of your pom.xml and see whether you can exclude that jar wherever else it is
being pulled in. Just exclude it in your pom.xml. I hope this solves your
issue.
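
For example, run "mvn dependency:tree" to find which dependency drags in the
older httpclient, then exclude it there; the group and artifact below are
just an illustration of the pattern, not a specific library:

<dependency>
  <groupId>some.group</groupId>
  <artifactId>library-with-old-httpclient</artifactId>
  <version>1.0</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.httpcomponents</groupId>
      <artifactId>httpclient</artifactId>
    </exclusion>
  </exclusions>
</dependency>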


Thanks
Rohit


On Tue, Aug 2, 2016 at 9:44 AM, Steve Rowe  wrote:

> solr-core[1] and solr-solrj[2] POMs have parent POM solr-parent[3], which
> in turn has parent POM lucene-solr-grandparent[4], which has a
> <dependencyManagement> section that specifies dependency versions &
> exclusions *for all direct dependencies*.
>
> The intent is for all Lucene/Solr’s internal dependencies to be managed
> directly, rather than through Maven’s transitive dependency mechanism.  For
> background, see summary & comments on JIRA issue LUCENE-5217[5].
>
> I haven’t looked into how this affects systems that depend on Lucene/Solr
> artifacts, but it appears to be the case that you can’t use Maven’s
> transitive dependency mechanism to pull in all required dependencies for
> you.
>
> BTW, if you look at the grandparent POM, the httpclient version for Solr
> 6.1.0 is declared as 4.4.1.  I don’t know if depending on version 4.5.2 is
> causing problems, but if you don’t need a feature in 4.5.2, I suggest that
> you depend on the same version as Solr does.
>
> For error #2, you should depend on lucene-core[6].
>
> My suggestion as a place to start: copy/paste the dependencies from
> solr-core[1] and solr-solrj[2] POMs, and leave out stuff you know you won’t
> need.
>
> [1] <
> https://repo1.maven.org/maven2/org/apache/solr/solr-core/6.1.0/solr-core-6.1.0.pom
> >
> [2] <
> https://repo1.maven.org/maven2/org/apache/solr/solr-solrj/6.1.0/solr-solrj-6.1.0.pom
> >
> [3] <
> https://repo1.maven.org/maven2/org/apache/solr/solr-parent/6.1.0/solr-parent-6.1.0.pom
> >
> [4] <
> https://repo1.maven.org/maven2/org/apache/lucene/lucene-solr-grandparent/6.1.0/lucene-solr-grandparent-6.1.0.pom
> >
> [5] 
> [6] <
> http://search.maven.org/#artifactdetails|org.apache.lucene|lucene-core|6.1.0|jar
> >
>
> --
> Steve
> www.lucidworks.com
>
> > On Aug 2, 2016, at 12:03 PM, Ziqi Zhang 
> wrote:
> >
> > Hi, I am using Solr, Solrj 6.1, and Maven to manage my project. I use
> maven to build a jar-with-dependency and run a java program pointing its
> classpath to this jar. However I keep getting errors even when I just try
> to create an instance of EmbeddedSolrServer:
> >
> > / code /
> > String solrHome = "/home/solr/";
> > String solrCore = "fw";
> > solrCores = new EmbeddedSolrServer(
> >     Paths.get(solrHome), solrCore
> > ).getCoreContainer();
> > ///
> >
> >
> > My project has dependencies defined in the pom shown below:  **When
> block A is not present**, running the code that calls:
> >
> > / pom /
> > <dependency>
> >   <groupId>org.apache.jena</groupId>
> >   <artifactId>jena-arq</artifactId>
> >   <version>3.0.1</version>
> > </dependency>
> >
> > <!-- BLOCK A -->
> > <dependency>
> >   <groupId>org.apache.httpcomponents</groupId>
> >   <artifactId>httpclient</artifactId>
> >   <version>4.5.2</version>
> > </dependency>
> > <!-- BLOCK A ENDS -->
> >
> > <dependency>
> >   <groupId>org.apache.solr</groupId>
> >   <artifactId>solr-core</artifactId>
> >   <version>6.1.0</version>
> >   <exclusions>
> >     <exclusion>
> >       <groupId>org.slf4j</groupId>
> >       <artifactId>slf4j-log4j12</artifactId>
> >     </exclusion>
> >     <exclusion>
> >       <groupId>log4j</groupId>
> >       <artifactId>log4j</artifactId>
> >     </exclusion>
> >     <exclusion>
> >       <groupId>org.slf4j</groupId>
> >       <artifactId>slf4j-jdk14</artifactId>
> >     </exclusion>
> >   </exclusions>
> > </dependency>
> >
> > <dependency>
> >   <groupId>org.apache.solr</groupId>
> >   <artifactId>solr-solrj</artifactId>
> >   <version>6.1.0</version>
> >   <exclusions>
> >     <exclusion>
> >       <groupId>org.slf4j</groupId>
> >       <artifactId>slf4j-log4j12</artifactId>
> >     </exclusion>
> >     <exclusion>
> >       <groupId>log4j</groupId>
> >       <artifactId>log4j</artifactId>
> >     </exclusion>
> >     <exclusion>
> >       <groupId>org.slf4j</groupId>
> >       <artifactId>slf4j-jdk14</artifactId>
> >     </exclusion>
> >   </exclusions>
> > </dependency>
> > ///
> >
> >
> > Block A is added because when it is missing, the following error is
> thrown on the java code above:
> >
> > / ERROR 1 /
> >
> > Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/http/impl/client/CloseableHttpClient
> > at org.apache.solr.handler.component.HttpShardHandlerFactory.init(HttpShardHandlerFactory.java:167)
> > at org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:47)
> > at org.apache.solr.core.CoreContainer.loa