Re: Issue: Hit Highlighting Working Inconsistently in Solr 6.6

2017-07-14 Thread David Smiley
Does hl.method=unified help any?

Perhaps you need to set hl.fl?  or hl.requireFieldMatch=false? (although it
should default to false already)
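For example, something along these lines (just a sketch; the collection name
"emails" and the field names are assumptions on my part, not from your setup):

curl 'http://localhost:8983/solr/emails/select?q=hello&hl=true&hl.method=unified&hl.fl=subject,body&hl.requireFieldMatch=false'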

On Fri, Jul 14, 2017 at 6:52 PM Vikram Oberoi  wrote:

> Hi!
>
> Just wanted to close the loop here.
>
> I'm pretty sure this has something to do with the default _text_ "catchall"
> field being a slightly different type ('text_general') from all my
> textual fields ('text_en'). A few things I tried support that hypothesis:
>
> - Specifying fields for terms correctly yields highlights consistently
> (e.g. "hello" doesn't work but "subject:hello" always does).
> - Creating a different catchall field with the same type as all my textual
> fields ('text_en') and making that the default field yields highlighting
> results that work properly and consistently.
> - Finally -- I need to use a friendlier parser anyway. Using edismax for
> all my queries -- and eliminating my catchall field -- yields highlighting
> results properly and consistently.
>
> I've got this working, but I'm curious to know whether this is really what's
> happening and, more precisely, why. If anyone more knowledgeable has thoughts or
> pointers to write-ups on how highlighting works internally, I'd really
> appreciate it!
>
> Cheers,
> Vikram
>
> On Thu, Jul 13, 2017 at 5:51 PM, Vikram Oberoi  wrote:
>
> > Hi there,
> >
> > I'm seeing inconsistent highlighting behavior using a default, fresh Solr
> > 6.6 install and it's unclear to me why or how to go about debugging it.
> >
> > Hit highlights either show entirely correct highlights or none at all
> when
> > there should be highlights.
> >
> >- Some queries show highlights out of the box, some do not.
> >   - e.g. "hello" yields no highlights, but "goodbye" correctly yields
> >   highlights
> >- Some queries that do not show highlights suddenly work when
> >specifying fields
> >   - e.g. "subject:hello" yields highlights, but "hello" does not
> >- When queries that yield highlights and queries that do not are
> >combined, only those that work are highlighted.
> >   - e.g. "hello goodbye" yields highlights correctly for "goodbye",
> >   but not for "hello"
> >
> > I've thrown specific details and examples in a Gist here:
> >
> > Full Gist: https://gist.github.com/voberoi/a7a8a679390fc4f27422e70600cfb338
> >
> >- Problem description:
> >   - https://gist.github.com/voberoi/a7a8a679390fc4f27422e70600cfb338#file-problem-details-md
> >- Solr install, my schema, solrconfig details:
> >   - https://gist.github.com/voberoi/a7a8a679390fc4f27422e70600cfb338#file-solr-details-md
> >
> > Does anyone here have any hypotheses for why this might be happening?
> >
> > Thanks!
> > Vikram
> >
>
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


Re: Cant stop/start server

2017-07-14 Thread Erick Erickson
Hmm, looks like the pid file ends up in different spots depending on how
Solr was started, and the -p option apparently finds it in all cases but
the -all option doesn't. I haven't tracked down why yet.

If I start by

bin/solr start -s example/techproducts/configs
the pid file goes in bin/solr
It also goes there if I cd into the bin directory and:
./solr start -s
/Users/Erick/apache/solrJiras/jira/solr/example/techproducts/solr/


However, if I cd into the bin directory then:
./solr start -s ../example/techproducts/solr
the pid file goes into ../example/techproducts/solr.
When the pid file goes here, the -all option doesn't find it.


The odd thing is that it's findable in all cases by the -p option but
not the -all option.

Seems like a problem with the script, but I'll leave it to someone
else. In the meantime, what happens when you start with an absolute
path? Or at least without a ../ at the start of your path?
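For instance (the -s path below is only an illustration, not your actual layout):

bin/solr stop -p 8983
bin/solr start -s /opt/solr/example/techproducts/solr
bin/solr stop -all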

Erick


On Fri, Jul 14, 2017 at 11:12 AM, Iridian Group
 wrote:
> REL 7.3
> Apache 2.4.6
>
> Sry, not versed enough in CLI to get your ‘find’ to work. Dropped me into a 
> prompt of some type. Got this however.
> find / -name "solr-*.pid"
> /var/solr/solr-8983.pid
>
>
>
> Join us on facebook  or twitter 
> 
>> On Jul 14, 2017, at 12:56 PM, Erick Erickson  wrote:
>>
>> Shouldn't be a setup or configuration issue, it should "just happen".
>> But if this has been up and running for a long time perhaps someone
>> "cleaned it up".
>>
>> Hmmm, now that I think about it the pid file must have been there if
>> "-p " worked so I'm stumped too. What op system? The relevant
>> part of the *nix script is:
>>
>> find "$SOLR_PID_DIR" -name "solr-*.pid" -type f | while read PIDF
>>
>> and windows is:
>>  set found_it=0
>>  for /f "usebackq" %%i in (`dir /b "%SOLR_TIP%\bin" ^| findstr /i
>> "^solr-.*\.port$"`) do (
>>set SOME_SOLR_PORT=
>>
>> Just wonder if they're depending on something not in your system?
>>
>> Best,
>> Erick
>>
>> On Fri, Jul 14, 2017 at 10:26 AM, Iridian Group
>> mailto:ksav...@iridiangroup.com>> wrote:
>>> Typical story, I wasn’t the admin who set it up but I’m pretty sure is was 
>>> vanilla.
>>>
>>>
>>> Thanks
>>>
>>> Keith Savoie
>>> Vice President of Technology
>>>
>>> IRiDiAN GROUP
>>>
>>> Helping organizations brand
>>> & market themselves through
>>> web, print, & social media.
>>>
>>>
>>> 14450 Eagle Run Dr. Ste. 120
>>> Omaha, Nebraska 68116
>>>
>>> P  • 402.422.0150
>>> W • iridiangroup.com  
>>> >
>>>
>>> Join us on facebook >> > or twitter 
>>> >
 On Jul 14, 2017, at 12:18 PM, Erick Erickson  
 wrote:

 bq: wonder why -all didn’t pick it up?

 Good question, I use this _all_ the time. (little joke there).

 The -all flag looks for various .pid files, you'll see things like:
 solr-8983.pid that contain the process id to kill associated with that
 port. Any chance these were removed or in some different place?

 Erick

 On Fri, Jul 14, 2017 at 10:15 AM, Iridian Group
  wrote:
> Ahhh well then.
> I did try the -all flag but it returned nothing.
>
> However an explicit  -p 8983 did the trick.  :)
>
> … wonder why -all didn’t pick it up?
>
> Thanks!
>
>
>
> Keith Savoie
> Vice President of Technology
>
> IRiDiAN GROUP
>
> Helping organizations brand
> & market themselves through
> web, print, & social media.
>
>
> 14450 Eagle Run Dr. Ste. 120
> Omaha, Nebraska 68116
>
> P  • 402.422.0150
> W • iridiangroup.com 
>
> Join us on facebook  or twitter 
> 
>> On Jul 14, 2017, at 12:08 PM, Atita Arora  wrote:
>>
>> Did you mention the port with -p
>> Like
>>
>> Bin/solr stop -p 8983
>>
>> Please check
>>
>> On Jul 14, 2017 10:35 PM, "Iridian Group" > > wrote:
>>
>>> I know I am missing something very simple here but I cant stop/start my
>>> Solr instance with
>>> /opt/solr/bin/solr stop
>>>
>>> I get “No Solr nodes found to stop”, however the server is running. I 
>>> can
>>> access the server via the default port and my app is able to use its
>>> services without issue.
>>>
>>>
>>> Thanks for any assistance!
>>>
>>>
>>>
>>>
>>>
>>> Thanks
>>>
>>> Keith Savoie
>>> Vice President of Technology
>>>
>>> IRiDiAN GROUP
>>>
>>> Helping organizations brand
>>> & market themselves through
>>> web, print, & social media.
>>>
>>>
>>> 14450 Eagle Run

Problem accessing AdminUiServlet of Solr 4.10 in my Spring app

2017-07-14 Thread nbosecker
I've got a web project in a .war that is deployed on Tomcat. Inside that,
I've got a Solr .war with an instance of Solr 4.10.4 running. Everything
works fine as far as indexing/searching, but I can't access the Solr Admin
UI page, and I've tried everything.

I use Spring in the application, so I've registered both the AdminUIServlet
and Zookeeper Servlet like this:
...
ServletRegistrationBean srb = new ServletRegistrationBean();
srb.setServlet(new LoadAdminUiServlet());
srb.setUrlMappings(Arrays.asList("/admin.html"));
return srb;
...
ServletRegistrationBean srb = new ServletRegistrationBean();
srb.setServlet(new ZookeeperInfoServlet());
srb.setUrlMappings(Arrays.asList("/zookeeper.jsp", "/zookeeper"));
return srb;
...

I start up Tomcat, I see the mapping info logged:

INFO  ServletRegistrationBean- Mapping servlet: 'loadAdminUiServlet'
to [/admin.html]
INFO  ServletRegistrationBean- Mapping servlet:
'zookeeperInfoServlet' to [/zookeeper.jsp, /zookeeper]
...

I go to the browser and try accessing /myapp/zookeeper.jsp and I get a valid
error (I'm not using zookeeper):
{  "status":404,
  "error":"Zookeeper is not configured for this Solr Core. Please try
connecting to an alternate zookeeper address."}

Now I try /admin.html and I get this:
{"timestamp":1500074845890,"status":404,"error":"Not Found","message":"No
message available","path":"/myapp/admin.html"}

This should be simple, but I'm missing something obvious. I can access
/admin/cores which returns valid JSON about my setup, but that is configured
from solr.xml and not via the Spring infrastructure. I should mention that all
of this worked before I migrated to Spring, when I used web.xml to
set up the servlets.

Any ideas?



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Problem-accessing-AdminUiServlet-of-Solr-4-10-in-my-Spring-app-tp4346187.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Issue: Hit Highlighting Working Inconsistently in Solr 6.6

2017-07-14 Thread Vikram Oberoi
Hi!

Just wanted to close the loop here.

I'm pretty sure this has something to do with the default _text_ "catchall"
field being a slightly different type ('text_general') from all my
textual fields ('text_en'). A few things I tried support that hypothesis:

- Specifying fields for terms correctly yields highlights consistently
(e.g. "hello" doesn't work but "subject:hello" always does).
- Creating a different catchall field with the same type as all my textual
fields ('text_en') and making that the default field yields highlighting
results that work properly and consistently.
- Finally -- I need to use a friendlier parser anyway. Using edismax for
all my queries -- and eliminating my catchall field -- yields highlighting
results properly and consistently.
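
As a rough sketch of that last setup (collection and field names here are
placeholders, not the actual schema):

curl 'http://localhost:8983/solr/mail/select?defType=edismax&q=hello+goodbye&qf=subject+body&hl=true&hl.fl=subject,body'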

I've got this working, but I'm curious to know whether this is really what's
happening and, more precisely, why. If anyone more knowledgeable has thoughts or
pointers to write-ups on how highlighting works internally, I'd really
appreciate it!

Cheers,
Vikram

On Thu, Jul 13, 2017 at 5:51 PM, Vikram Oberoi  wrote:

> Hi there,
>
> I'm seeing inconsistent highlighting behavior using a default, fresh Solr
> 6.6 install and it's unclear to me why or how to go about debugging it.
>
> Hit highlights either show entirely correct highlights or none at all when
> there should be highlights.
>
>- Some queries show highlights out of the box, some do not.
>   - e.g. "hello" yields no highlights, but "goodbye" correctly yields
>   highlights
>- Some queries that do not show highlights suddenly work when
>specifying fields
>   - e.g. "subject:hello" yields highlights, but "hello" does not
>- When queries that yield highlights and queries that do not are
>combined, only those that work are highlighted.
>   - e.g. "hello goodbye" yields highlights correctly for "goodbye",
>   but not for "hello"
>
> I've thrown specific details and examples in a Gist here:
>
> Full Gist: https://gist.github.com/voberoi/a7a8a679390fc4f27422e70600cfb338
>
>- Problem description:
>   - https://gist.github.com/voberoi/a7a8a679390fc4f27422e70600cfb338#file-problem-details-md
>- Solr install, my schema, solrconfig details:
>   - https://gist.github.com/voberoi/a7a8a679390fc4f27422e70600cfb338#file-solr-details-md
>
> Does anyone here have any hypotheses for why this might be happening?
>
> Thanks!
> Vikram
>


Re: Auto commit Error - Solr Cloud 6.6.0 with HDFS

2017-07-14 Thread Joe Obernberger
Hi Shawn - had a shard go down (appears to have just dropped out of the 
cluster) with a similar error:


2017-07-14 20:43:04.238 ERROR (commitScheduler-65-thread-1) [c:UNCLASS s:shard87 r:core_node132 x:UNCLASS_shard87_replica1] o.a.s.u.CommitTracker auto commit error...:org.apache.solr.common.SolrException: openNewSearcher called on closed core
        at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1943)
        at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:678)
        at org.apache.solr.update.CommitTracker.run(CommitTracker.java:217)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)

The whole log can be found here:
http://lovehorsepower.com/solr.log
the GC log is here:
http://lovehorsepower.com/solr_gc.log.3.current

-Joe


On 7/12/2017 9:25 AM, Shawn Heisey wrote:

On 7/12/2017 7:14 AM, Joe Obernberger wrote:

Started up a 6.6.0 solr cloud instance running on 45 machines
yesterday using HDFS (managed schema in zookeeper) and began
indexing.  This error occurred on several of the nodes:



Caused by: org.apache.solr.common.SolrException: openNewSearcher
called on closed core

There's the important part of the error.

For some reason, which is not immediately clear from the information
provided, the core is closed.  In that situation, Solr is not able to
open a new searcher, so this error happens.  Do you have any other WARN
or ERROR messages in solr.log before this error?  You might want to find
a way to share an entire logfile and provide a URL for accessing it.

Thanks,
Shawn







Re: CDCR - how to deal with the transaction log files

2017-07-14 Thread Varun Thacker
https://issues.apache.org/jira/browse/SOLR-11069 is tracking why
LASTPROCESSEDVERSION is always -1 on the source cluster.

On Fri, Jul 14, 2017 at 11:46 AM, jmyatt  wrote:

> Thanks for the suggestion - tried that today and still no luck.  Time to
> write a script to naively / blindly delete old logs and run that in cron.
> *sigh*
>
>
>
> --
> View this message in context: http://lucene.472066.n3.nabble.com/CDCR-how-to-deal-with-the-transaction-log-files-tp4345062p4346138.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>


Re: How to determine the user Solr is using

2017-07-14 Thread Iridian Group
Ok one last question just so I’m clear

> It should be 2) solr:solr

As my install is running as a service, everything under /opt/solr should be
solr:solr? It is currently root:root but seems to be working correctly.
My /var/solr is already solr:solr.

K



> On Jul 14, 2017, at 2:19 PM, Susheel Kumar  wrote:
> 
> It should be 2) solr:solr
> 
> On Fri, Jul 14, 2017 at 3:18 PM, Susheel Kumar 
> wrote:
> 
>> If you setup solr using install_service which comes with solr, it sets up
>> solr running as "solr" user.  Solr is not recommended to run as root user
>> due to security concerns. You either launch solr as
>> service solr start or /etc/init.d/solr start
>> 
>> Answer to
>> 1) Yes, you can configure solr service to start at boot using linux
>> commands
>> 2) The data/index directory should be owned/writable for Solr user in
>> order for solr process to be able to write etc.
>> 
>> HTH.
>> 
>> On Fri, Jul 14, 2017 at 2:26 PM, Iridian Group 
>> wrote:
>> 
>>> OK thanks, got it.
>>> So the user the server runs under is ‘Solr’.
>>> However when I stop the server and switch to the Solr user and try to
>>> start, I get an error stating that I can’t write to the log files due to
>>> permissions.
>>> All of this Solr installs permissions are root:root.
>>> 
>>> 1) If I can’t manually start the server as user Solr, shouldn’t the
>>> service also not start on boot?
>>> 2) Are the permissions of the Solr instance supposed to be something
>>> other than root:root?
>>> 
>>> Apologies for all the 101 questions.
>>> 
>>> 
>>> 
>>> Thanks
>>> 
>>> Keith
>>> 
>>> 
>>> 
 On Jul 14, 2017, at 1:14 PM, Susheel Kumar 
>>> wrote:
 
 The first column is the UID/user column as output of ps -ef on linux
 machines...
 
 On Fri, Jul 14, 2017 at 1:38 PM, Iridian Group <
>>> ksav...@iridiangroup.com>
 wrote:
 
> How do I determine which user the Solr server starts up with and is
> running under?
> 
> Thanks
> 
> Keith Savoie
> 
> 
>>> 
>>> 
>> 



Re: Antw: Re: How to Debug Solr With Eclipse

2017-07-14 Thread Giovanni De Stefano
Hello Rainer,

Have you found the issue?

If not, just to be on the safe side:
1) once you extracted the .tgz you get the folder `solr-6.0.0`, cd in it and 
then just 
2) execute `ant eclipse` and then 
3) in Eclipse do Import -> Existing Projects in the workspace -> select the 
`solr-6.0.0` folder (leave all options the way they are)
4) wait a few minutes…it takes a while to build the whole thing, in the 
meantime it’s normal to see “errors” or “warning”…

I hope it helps,
Giovanni



> On 14 Jul 2017, at 16:01, Rainer Gnan  wrote:
> 
> Hi Giovanni,
> 
> thank you for this hint!
> 
> The whole process (tar -xvf ..., ant compile, ant eclipse) until importing
> the Eclipse project seems to be fine.
> After importing it as an existing eclipse project the project explorer shows 
> an error sign on the project folder.
> Refreshing does not help.
> 
> -> The sub-folder lucene/src is empty ...
> 
> I am using eclipse neon, java 1.8.0_112, solr-6.6.0-src.tgz.
> 
> Any suggestions?
> 
> Cheers,
> Rainer
> 
> 
> Rainer Gnan
> Bayerische Staatsbibliothek 
> Verbundzentrale des BVB
> Referat Verbundnahe Dienste
> 80807 München
> Tel.: +49(0)89/28638-4445
> Fax: +49(0)89/28638-2605
> E-Mail: rainer.g...@bsb-muenchen.de
> 
> 
> 
> 
 Giovanni De Stefano  13.07.2017 19:59 >>>
> Hello Rainer,
> 
> you have the right link: select the version you want and download the -src 
> version.
> 
> Once you untar the .tgz you can run `ant eclipse` from the command line and
> then import the generated project in Eclipse.
> 
> Please note that you will need both ant and ivy installed (just start with
> ant eclipse and take it from there: the script will tell you what to do next).
> 
> I hope it helps!
> 
> Cheers,
> Giovanni
> 
> 
>> On 13 Jul 2017, at 19:54, govind nitk  wrote:
>> 
>> Hi,
>> 
>> Solr has releases, kindly checkout to the needed one.
>> 
>> 
>> cheers
>> 
>> On Thu, Jul 13, 2017 at 11:20 PM, Rainer Gnan 
>> wrote:
>> 
>>> Hello community,
>>> 
>>> my aim is to develop solr custom code (e.g. UpdateRequestProcessor)
>>> within Eclipse AND to test the code within a debuggable solr/lucene
>>> local instance - also within Eclipse.
>>> Searching the web led me to multiple instructions but for me no one
>>> works.
>>> 
>>> The only relevant question I actually have to solve this problem is:
>>> Where can I download the source code for the version I want that
>>> includes the ANT build.xml for building an Eclipse-Project?
>>> 
>>> The solr project page (http://archive.apache.org/dist/lucene/solr/)
>>> seems not to provide that.
>>> 
>>> I appreciate any hint!
>>> 
>>> Best regards
>>> Rainer
>>> 
>>> 
> 
> 



Re: How to determine the user Solr is using

2017-07-14 Thread Susheel Kumar
It should be 2) solr:solr

On Fri, Jul 14, 2017 at 3:18 PM, Susheel Kumar 
wrote:

> If you setup solr using install_service which comes with solr, it sets up
> solr running as "solr" user.  Solr is not recommended to run as root user
> due to security concerns. You either launch solr as
> service solr start or /etc/init.d/solr start
>
> Answer to
> 1) Yes, you can configure solr service to start at boot using linux
> commands
> 2) The data/index directory should be owned/writable for Solr user in
> order for solr process to be able to write etc.
>
> HTH.
>
> On Fri, Jul 14, 2017 at 2:26 PM, Iridian Group 
> wrote:
>
>> OK thanks, got it.
>> So the user the server runs under is ‘Solr’.
>> However when I stop the server and switch to the Solr user and try to
>> start, I get an error stating that I can’t write to the log files due to
>> permissions.
>> All of this Solr installs permissions are root:root.
>>
>> 1) If I can’t manually start the server as user Solr, shouldn’t the
>> service also not start on boot?
>> 2) Are the permissions of the Solr instance supposed to be something
>> other than root:root?
>>
>> Apologies for all the 101 questions.
>>
>>
>>
>> Thanks
>>
>> Keith
>>
>>
>>
>> > On Jul 14, 2017, at 1:14 PM, Susheel Kumar 
>> wrote:
>> >
>> > The first column is the UID/user column as output of ps -ef on linux
>> > machines...
>> >
>> > On Fri, Jul 14, 2017 at 1:38 PM, Iridian Group <
>> ksav...@iridiangroup.com>
>> > wrote:
>> >
>> >> How do I determine which user the Solr server starts up with and is
>> >> running under?
>> >>
>> >> Thanks
>> >>
>> >> Keith Savoie
>> >>
>> >>
>>
>>
>


Re: How to determine the user Solr is using

2017-07-14 Thread Susheel Kumar
If you set up Solr using the install_solr_service.sh script which comes with
Solr, it sets up Solr to run as the "solr" user.  Running Solr as root is not
recommended due to security concerns. You launch Solr with either
"service solr start" or "/etc/init.d/solr start".

Answers to your questions:
1) Yes, you can configure the Solr service to start at boot using the usual
Linux commands.
2) The data/index directory should be owned by and writable for the Solr user
so that the Solr process can write to it (see the example below).
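
For example, assuming the default service install layout with /var/solr as the
Solr home:

sudo chown -R solr:solr /var/solr
sudo service solr restart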

HTH.

On Fri, Jul 14, 2017 at 2:26 PM, Iridian Group 
wrote:

> OK thanks, got it.
> So the user the server runs under is ‘Solr’.
> However when I stop the server and switch to the Solr user and try to
> start, I get an error stating that I can’t write to the log files due to
> permissions.
> All of this Solr installs permissions are root:root.
>
> 1) If I can’t manually start the server as user Solr, shouldn’t the
> service also not start on boot?
> 2) Are the permissions of the Solr instance supposed to be something other
> than root:root?
>
> Apologies for all the 101 questions.
>
>
>
> Thanks
>
> Keith
>
>
>
> > On Jul 14, 2017, at 1:14 PM, Susheel Kumar 
> wrote:
> >
> > The first column is the UID/user column as output of ps -ef on linux
> > machines...
> >
> > On Fri, Jul 14, 2017 at 1:38 PM, Iridian Group  >
> > wrote:
> >
> >> How do I determine which user the Solr server starts up with and is
> >> running under?
> >>
> >> Thanks
> >>
> >> Keith Savoie
> >>
> >>
>
>


BFS with gatherNodes()

2017-07-14 Thread HoJae Jung
Hello,

I am currently trying to implement a graph traversal feature across multiple 
shards.

Is there a way to make gatherNodes() work to an arbitrary depth, until it
reaches the leaves, the way the local graph query does?

Local graph query( {!graph … } ) won’t work across shards, and nested 
gatherNodes() is not applicable since the input may not specify the depth.

If that is not possible, is there a way of avoiding nested gatherNodes() call 
for searching multiple depths, and/or making it faster?
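
For reference, the nested form I mean looks roughly like this when posted to the
/stream handler (collection and field names are placeholders):

curl --data-urlencode 'expr=gatherNodes(graph,
    gatherNodes(graph,
        search(graph, q="id:root", fl="node_id", sort="node_id asc"),
        walk="node_id->parent_id",
        gather="node_id"),
    walk="node->parent_id",
    gather="node_id",
    scatter="branches,leaves")' \
  http://localhost:8983/solr/graph/stream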


Re: CDCR - how to deal with the transaction log files

2017-07-14 Thread jmyatt
Thanks for the suggestion - tried that today and still no luck.  Time to
write a script to naively / blindly delete old logs and run that in cron.
*sigh*



--
View this message in context: 
http://lucene.472066.n3.nabble.com/CDCR-how-to-deal-with-the-transaction-log-files-tp4345062p4346138.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: How to determine the user Solr is using

2017-07-14 Thread Iridian Group
OK thanks, got it. 
So the user the server runs under is ‘Solr’.
However when I stop the server and switch to the Solr user and try to start, I 
get an error stating that I can’t write to the log files due to permissions. 
All of this Solr installs permissions are root:root.

1) If I can’t manually start the server as user Solr, shouldn’t the service 
also not start on boot?
2) Are the permissions of the Solr instance supposed to be something other than 
root:root? 
 
Apologies for all the 101 questions. 



Thanks 

Keith



> On Jul 14, 2017, at 1:14 PM, Susheel Kumar  wrote:
> 
> The first column is the UID/user column as output of ps -ef on linux
> machines...
> 
> On Fri, Jul 14, 2017 at 1:38 PM, Iridian Group 
> wrote:
> 
>> How do I determine which user the Solr server starts up with and is
>> running under?
>> 
>> Thanks
>> 
>> Keith Savoie
>> 
>> 



Re: How to determine the user Solr is using

2017-07-14 Thread Susheel Kumar
The first column is the UID/user column as output of ps -ef on linux
machines...
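
For example:

ps -ef | grep '[s]olr'   # the first column of each matching line is the user Solr runs as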

On Fri, Jul 14, 2017 at 1:38 PM, Iridian Group 
wrote:

> How do I determine which user the Solr server starts up with and is
> running under?
>
> Thanks
>
> Keith Savoie
>
>


Re: Cant stop/start server

2017-07-14 Thread Iridian Group
REL 7.3
Apache 2.4.6

Sry, not versed enough in CLI to get your ‘find’ to work. Dropped me into a 
prompt of some type. Got this however.  
find / -name "solr-*.pid"
/var/solr/solr-8983.pid



Join us on facebook  or twitter 

> On Jul 14, 2017, at 12:56 PM, Erick Erickson  wrote:
> 
> Shouldn't be a setup or configuration issue, it should "just happen".
> But if this has been up and running for a long time perhaps someone
> "cleaned it up".
> 
> Hmmm, now that I think about it the pid file must have been there if
> "-p " worked so I'm stumped too. What op system? The relevant
> part of the *nix script is:
> 
> find "$SOLR_PID_DIR" -name "solr-*.pid" -type f | while read PIDF
> 
> and windows is:
>  set found_it=0
>  for /f "usebackq" %%i in (`dir /b "%SOLR_TIP%\bin" ^| findstr /i
> "^solr-.*\.port$"`) do (
>set SOME_SOLR_PORT=
> 
> Just wonder if they're depending on something not in your system?
> 
> Best,
> Erick
> 
> On Fri, Jul 14, 2017 at 10:26 AM, Iridian Group
> mailto:ksav...@iridiangroup.com>> wrote:
>> Typical story, I wasn’t the admin who set it up but I’m pretty sure is was 
>> vanilla.
>> 
>> 
>> Thanks
>> 
>> Keith Savoie
>> Vice President of Technology
>> 
>> IRiDiAN GROUP
>> 
>> Helping organizations brand
>> & market themselves through
>> web, print, & social media.
>> 
>> 
>> 14450 Eagle Run Dr. Ste. 120
>> Omaha, Nebraska 68116
>> 
>> P  • 402.422.0150
>> W • iridiangroup.com  
>> >
>> 
>> Join us on facebook > > or twitter 
>> >
>>> On Jul 14, 2017, at 12:18 PM, Erick Erickson  
>>> wrote:
>>> 
>>> bq: wonder why -all didn’t pick it up?
>>> 
>>> Good question, I use this _all_ the time. (little joke there).
>>> 
>>> The -all flag looks for various .pid files, you'll see things like:
>>> solr-8983.pid that contain the process id to kill associated with that
>>> port. Any chance these were removed or in some different place?
>>> 
>>> Erick
>>> 
>>> On Fri, Jul 14, 2017 at 10:15 AM, Iridian Group
>>>  wrote:
 Ahhh well then.
 I did try the -all flag but it returned nothing.
 
 However an explicit  -p 8983 did the trick.  :)
 
 … wonder why -all didn’t pick it up?
 
 Thanks!
 
 
 
 Keith Savoie
 Vice President of Technology
 
 IRiDiAN GROUP
 
 Helping organizations brand
 & market themselves through
 web, print, & social media.
 
 
 14450 Eagle Run Dr. Ste. 120
 Omaha, Nebraska 68116
 
 P  • 402.422.0150
 W • iridiangroup.com 
 
 Join us on facebook  or twitter 
 
> On Jul 14, 2017, at 12:08 PM, Atita Arora  wrote:
> 
> Did you mention the port with -p
> Like
> 
> Bin/solr stop -p 8983
> 
> Please check
> 
> On Jul 14, 2017 10:35 PM, "Iridian Group"  > wrote:
> 
>> I know I am missing something very simple here but I cant stop/start my
>> Solr instance with
>> /opt/solr/bin/solr stop
>> 
>> I get “No Solr nodes found to stop”, however the server is running. I can
>> access the server via the default port and my app is able to use its
>> services without issue.
>> 
>> 
>> Thanks for any assistance!
>> 
>> 
>> 
>> 
>> 
>> Thanks
>> 
>> Keith Savoie
>> Vice President of Technology
>> 
>> IRiDiAN GROUP
>> 
>> Helping organizations brand
>> & market themselves through
>> web, print, & social media.
>> 
>> 
>> 14450 Eagle Run Dr. Ste. 120
>> Omaha, Nebraska 68116
>> 
>> P  • 402.422.0150
>> W • iridiangroup.com  
>> >
>> 
>> Join us on facebook > > or twitter <
>> https://twitter.com/iridiangroup >



Re: Cant stop/start server

2017-07-14 Thread Erick Erickson
Shouldn't be a setup or configuration issue, it should "just happen".
But if this has been up and running for a long time perhaps someone
"cleaned it up".

Hmmm, now that I think about it the pid file must have been there if
"-p " worked so I'm stumped too. What op system? The relevant
part of the *nix script is:

find "$SOLR_PID_DIR" -name "solr-*.pid" -type f | while read PIDF

and windows is:
  set found_it=0
  for /f "usebackq" %%i in (`dir /b "%SOLR_TIP%\bin" ^| findstr /i
"^solr-.*\.port$"`) do (
set SOME_SOLR_PORT=

Just wonder if they're depending on something not in your system?

Best,
Erick

On Fri, Jul 14, 2017 at 10:26 AM, Iridian Group
 wrote:
> Typical story, I wasn’t the admin who set it up but I’m pretty sure is was 
> vanilla.
>
>
> Thanks
>
> Keith Savoie
> Vice President of Technology
>
> IRiDiAN GROUP
>
> Helping organizations brand
> & market themselves through
> web, print, & social media.
>
>
> 14450 Eagle Run Dr. Ste. 120
> Omaha, Nebraska 68116
>
> P  • 402.422.0150
> W • iridiangroup.com 
>
> Join us on facebook  or twitter 
> 
>> On Jul 14, 2017, at 12:18 PM, Erick Erickson  wrote:
>>
>> bq: wonder why -all didn’t pick it up?
>>
>> Good question, I use this _all_ the time. (little joke there).
>>
>> The -all flag looks for various .pid files, you'll see things like:
>> solr-8983.pid that contain the process id to kill associated with that
>> port. Any chance these were removed or in some different place?
>>
>> Erick
>>
>> On Fri, Jul 14, 2017 at 10:15 AM, Iridian Group
>>  wrote:
>>> Ahhh well then.
>>> I did try the -all flag but it returned nothing.
>>>
>>> However an explicit  -p 8983 did the trick.  :)
>>>
>>> … wonder why -all didn’t pick it up?
>>>
>>> Thanks!
>>>
>>>
>>>
>>> Keith Savoie
>>> Vice President of Technology
>>>
>>> IRiDiAN GROUP
>>>
>>> Helping organizations brand
>>> & market themselves through
>>> web, print, & social media.
>>>
>>>
>>> 14450 Eagle Run Dr. Ste. 120
>>> Omaha, Nebraska 68116
>>>
>>> P  • 402.422.0150
>>> W • iridiangroup.com 
>>>
>>> Join us on facebook  or twitter 
>>> 
 On Jul 14, 2017, at 12:08 PM, Atita Arora  wrote:

 Did you mention the port with -p
 Like

 Bin/solr stop -p 8983

 Please check

 On Jul 14, 2017 10:35 PM, "Iridian Group" >>> > wrote:

> I know I am missing something very simple here but I cant stop/start my
> Solr instance with
> /opt/solr/bin/solr stop
>
> I get “No Solr nodes found to stop”, however the server is running. I can
> access the server via the default port and my app is able to use its
> services without issue.
>
>
> Thanks for any assistance!
>
>
>
>
>
> Thanks
>
> Keith Savoie
> Vice President of Technology
>
> IRiDiAN GROUP
>
> Helping organizations brand
> & market themselves through
> web, print, & social media.
>
>
> 14450 Eagle Run Dr. Ste. 120
> Omaha, Nebraska 68116
>
> P  • 402.422.0150
> W • iridiangroup.com  
> >
>
> Join us on facebook  > or twitter <
> https://twitter.com/iridiangroup >
>>>
>


How to determine the user Solr is using

2017-07-14 Thread Iridian Group
How do I determine which user the Solr server starts up with and is running 
under?

Thanks 

Keith Savoie



solr-user-subscribe

2017-07-14 Thread Naohiko Uramoto
solr-user-subscribe 

-- 
Naohiko Uramoto


Re: Cant stop/start server

2017-07-14 Thread Iridian Group
Typical story, I wasn’t the admin who set it up, but I’m pretty sure it was
vanilla.


Thanks 

Keith Savoie
Vice President of Technology

IRiDiAN GROUP

Helping organizations brand
& market themselves through
web, print, & social media.  


14450 Eagle Run Dr. Ste. 120
Omaha, Nebraska 68116 

P  • 402.422.0150
W • iridiangroup.com  

Join us on facebook  or twitter 

> On Jul 14, 2017, at 12:18 PM, Erick Erickson  wrote:
> 
> bq: wonder why -all didn’t pick it up?
> 
> Good question, I use this _all_ the time. (little joke there).
> 
> The -all flag looks for various .pid files, you'll see things like:
> solr-8983.pid that contain the process id to kill associated with that
> port. Any chance these were removed or in some different place?
> 
> Erick
> 
> On Fri, Jul 14, 2017 at 10:15 AM, Iridian Group
>  wrote:
>> Ahhh well then.
>> I did try the -all flag but it returned nothing.
>> 
>> However an explicit  -p 8983 did the trick.  :)
>> 
>> … wonder why -all didn’t pick it up?
>> 
>> Thanks!
>> 
>> 
>> 
>> Keith Savoie
>> Vice President of Technology
>> 
>> IRiDiAN GROUP
>> 
>> Helping organizations brand
>> & market themselves through
>> web, print, & social media.
>> 
>> 
>> 14450 Eagle Run Dr. Ste. 120
>> Omaha, Nebraska 68116
>> 
>> P  • 402.422.0150
>> W • iridiangroup.com 
>> 
>> Join us on facebook  or twitter 
>> 
>>> On Jul 14, 2017, at 12:08 PM, Atita Arora  wrote:
>>> 
>>> Did you mention the port with -p
>>> Like
>>> 
>>> Bin/solr stop -p 8983
>>> 
>>> Please check
>>> 
>>> On Jul 14, 2017 10:35 PM, "Iridian Group" >> > wrote:
>>> 
 I know I am missing something very simple here but I cant stop/start my
 Solr instance with
 /opt/solr/bin/solr stop
 
 I get “No Solr nodes found to stop”, however the server is running. I can
 access the server via the default port and my app is able to use its
 services without issue.
 
 
 Thanks for any assistance!
 
 
 
 
 
 Thanks
 
 Keith Savoie
 Vice President of Technology
 
 IRiDiAN GROUP
 
 Helping organizations brand
 & market themselves through
 web, print, & social media.
 
 
 14450 Eagle Run Dr. Ste. 120
 Omaha, Nebraska 68116
 
 P  • 402.422.0150
 W • iridiangroup.com  
 >
 
 Join us on facebook > or twitter <
 https://twitter.com/iridiangroup >
>> 



Re: [EXTERNAL] - Re: compiling Solr

2017-07-14 Thread Erick Erickson
Steve:

Glad to hear it. BTW, I usually just attach to the server remotely
from my IDE rather than try to get Solr to run inside IntelliJ, I know
others run it all in the IDE though. You have to create a "remote"
configuration to run, then start Solr specially (pardon me if you know
all this) like:

bin/solr start  -p 8981 -s example/techproducts/solr -a
"-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=6900"

The "suspend -y" causes Solr to just sit there until you connect and
hit go, useful for debugging loading issues.

But I wouldn't necessarily even bother attaching to a remote session.
It's often far more directed to pick one of the junit tests (or create
one of your own) and debug through _that_ with no Solr running at all.
Plus if you're making changes it's faster to change code and re-run
the test than create a runnable Solr with the changes to debug. Of
course there are reasons you'd want to attach to a remote session, but
for diving into a particular bit of code the junit method is often
what I prefer.

FWIW,
Erick

On Fri, Jul 14, 2017 at 6:11 AM, Steve Pruitt  wrote:
> My mistake.  I guess I thought compiling and creating the dist still created 
> a war for the client.  The build was successful and of course the webapp 
> folder was created.  Again, my error.
>
> I am only building Solr because I want to learn more through direct 
> observation how things work.  Hard to glean much from the JavaDocs.
>
> My immediate concern is debugging (from IntelliJ)  two custom search 
> components I am working on.
>
> Thanks.
>
> -S
>
> -Original Message-
> From: Shawn Heisey [mailto:apa...@elyograg.org]
> Sent: Thursday, July 13, 2017 6:06 PM
> To: solr-user@lucene.apache.org
> Subject: [EXTERNAL] - Re: compiling Solr
>
> On 7/13/2017 2:16 PM, Steve Pruitt wrote:
>> I have been following the instructions on the Solr Wiki for compiling Solr.  
>> I started with the 6.6 source.  The only thing I did different was download 
>> the src directly.  I did not use Subversion.
>> I made through step 7 - Compile application with no problems.  However, the 
>> dist folder contains newly build snapshot jars, but no war file.
>
> As noted by Daniel on your other reply, that page is very out of date.
> This is more current:
>
> https://wiki.apache.org/solr/HowToContribute
>
> There has been no war file in the dist directory since version 5.0.0, and 
> there has been no war file produced *at all* since version 5.3.0.
>
> https://wiki.apache.org/solr/WhyNoWar
>
> If you run "ant server", then you will get a runnable server.  Once that's 
> done, type "bin/solr start" or "bin\solr start" to start Solr, depending on 
> the operating system.
>
> I agree with Daniel on another point:  If you aren't intending to immediately 
> jump into editing the source code, then you should download the binary 
> distribution, which is ready to run right away.
>
> You can also run "ant package" to create your own local copy of the binary 
> distribution with a SNAPSHOT version number.
>
> Thanks,
> Shawn
>


Re: Cant stop/start server

2017-07-14 Thread Erick Erickson
bq: wonder why -all didn’t pick it up?

Good question, I use this _all_ the time. (little joke there).

The -all flag looks for various .pid files, you'll see things like:
solr-8983.pid that contain the process id to kill associated with that
port. Any chance these were removed or in some different place?

Erick

On Fri, Jul 14, 2017 at 10:15 AM, Iridian Group
 wrote:
> Ahhh well then.
> I did try the -all flag but it returned nothing.
>
> However an explicit  -p 8983 did the trick.  :)
>
> … wonder why -all didn’t pick it up?
>
> Thanks!
>
>
>
> Keith Savoie
> Vice President of Technology
>
> IRiDiAN GROUP
>
> Helping organizations brand
> & market themselves through
> web, print, & social media.
>
>
> 14450 Eagle Run Dr. Ste. 120
> Omaha, Nebraska 68116
>
> P  • 402.422.0150
> W • iridiangroup.com 
>
> Join us on facebook  or twitter 
> 
>> On Jul 14, 2017, at 12:08 PM, Atita Arora  wrote:
>>
>> Did you mention the port with -p
>> Like
>>
>> Bin/solr stop -p 8983
>>
>> Please check
>>
>> On Jul 14, 2017 10:35 PM, "Iridian Group" > > wrote:
>>
>>> I know I am missing something very simple here but I cant stop/start my
>>> Solr instance with
>>> /opt/solr/bin/solr stop
>>>
>>> I get “No Solr nodes found to stop”, however the server is running. I can
>>> access the server via the default port and my app is able to use its
>>> services without issue.
>>>
>>>
>>> Thanks for any assistance!
>>>
>>>
>>>
>>>
>>>
>>> Thanks
>>>
>>> Keith Savoie
>>> Vice President of Technology
>>>
>>> IRiDiAN GROUP
>>>
>>> Helping organizations brand
>>> & market themselves through
>>> web, print, & social media.
>>>
>>>
>>> 14450 Eagle Run Dr. Ste. 120
>>> Omaha, Nebraska 68116
>>>
>>> P  • 402.422.0150
>>> W • iridiangroup.com  
>>> >
>>>
>>> Join us on facebook >> > or twitter <
>>> https://twitter.com/iridiangroup >
>


Re: Solr 6.6 is trying to loading *some* (not all) cores more than once

2017-07-14 Thread Shawn Heisey
On 7/14/2017 10:48 AM, Erick Erickson wrote:
> I haven't seen anything like that, unsightly indeed.
>
> I like the idea of the button to remove failed messages. The only
> thing you see is the write lock exception, correct?
>
> And since you say it fails on different cores at different times that
> seems to rule out somehow you have more than one core pointing to the
> same data dir.
>
> Two questions:
> 1> I don't see "inc_1" in the list of cores discovered by the
> corePropertiesLocator. Is this perhaps an alias?
> 2> Are you absolutely sure that you don't somehow have more than one
> core pointing to the same data dir? this latter is unlikely as you say
> all the cores work, just covering bases.

The core is named inclive, its instanceDir ends in inc_1.  At the
moment, the core named incbuild is pointed at inc_0.  When those cores
are swapped, that will be reversed.  I went with this directory naming
scheme because cores get swapped on every full index rebuild and I did
not want to have a situation where a live core was pointed at a
directory with "build" in the name.  It's not running in cloud mode, so
there are no aliases.

There were no problems with 6.3.0 pointing at the same solr home. 
Seeing the "write.lock" problem after the upgrade, I initially assumed
that lockfiles were left over from 6.3.0 ... so I stopped Solr, deleted
all the lockfiles, and started it back up.  That's when I saw that it
was being held by the running VM, not the previous one.

Thanks,
Shawn



Re: Cant stop/start server

2017-07-14 Thread Iridian Group
Ahhh well then.
I did try the -all flag but it returned nothing.

However an explicit  -p 8983 did the trick.  :) 

… wonder why -all didn’t pick it up?

Thanks!

 

Keith Savoie
Vice President of Technology

IRiDiAN GROUP

Helping organizations brand
& market themselves through
web, print, & social media.  


14450 Eagle Run Dr. Ste. 120
Omaha, Nebraska 68116 

P  • 402.422.0150
W • iridiangroup.com  

Join us on facebook  or twitter 

> On Jul 14, 2017, at 12:08 PM, Atita Arora  wrote:
> 
> Did you mention the port with -p
> Like
> 
> Bin/solr stop -p 8983
> 
> Please check
> 
> On Jul 14, 2017 10:35 PM, "Iridian Group"  > wrote:
> 
>> I know I am missing something very simple here but I cant stop/start my
>> Solr instance with
>> /opt/solr/bin/solr stop
>> 
>> I get “No Solr nodes found to stop”, however the server is running. I can
>> access the server via the default port and my app is able to use its
>> services without issue.
>> 
>> 
>> Thanks for any assistance!
>> 
>> 
>> 
>> 
>> 
>> Thanks
>> 
>> Keith Savoie
>> Vice President of Technology
>> 
>> IRiDiAN GROUP
>> 
>> Helping organizations brand
>> & market themselves through
>> web, print, & social media.
>> 
>> 
>> 14450 Eagle Run Dr. Ste. 120
>> Omaha, Nebraska 68116
>> 
>> P  • 402.422.0150
>> W • iridiangroup.com  
>> >
>> 
>> Join us on facebook > > or twitter <
>> https://twitter.com/iridiangroup >



Re: Re: How to Debug Solr With Eclipse

2017-07-14 Thread Erick Erickson
Rainer:

Have you seen: https://wiki.apache.org/solr/HowToContribute? There's a
section about using Eclipse and a couple of other IDEs. I use IntelliJ
so can't help there. A number of devs use Eclipse so it should work.
Please feel free to add to the docs if you find a gotcha.

You can also pull down the source code from the Git repository and
switch to whatever branch suits your fancy. What you're doing should
work so I'm not sure what's up there.

You say: The sub-folder lucene/src is empty

I'm assuming this is in Eclipse, you do have the source when you
unpack the tgz file, right?

Best,
Erick

On Fri, Jul 14, 2017 at 7:01 AM, Rainer Gnan
 wrote:
> Hi Giovanni,
>
> thank you for this hint!
>
> The whole process (tar -xvf ..., ant compile, ant eclipse) until importing
> the Eclipse project seems to be fine.
> After importing it as an existing eclipse project the project explorer shows 
> an error sign on the project folder.
> Refreshing does not help.
>
> -> The sub-folder lucene/src is empty ...
>
> I am using eclipse neon, java 1.8.0_112, solr-6.6.0-src.tgz.
>
> Any suggestions?
>
> Cheers,
> Rainer
>
> 
> Rainer Gnan
> Bayerische Staatsbibliothek
> Verbundzentrale des BVB
> Referat Verbundnahe Dienste
> 80807 München
> Tel.: +49(0)89/28638-4445
> Fax: +49(0)89/28638-2605
> E-Mail: rainer.g...@bsb-muenchen.de
> 
>
>
>
 Giovanni De Stefano  13.07.2017 19:59 >>>
> Hello Rainer,
>
> you have the right link: select the version you want and download the -src 
> version.
>
> Once you untar the .tgz you can run `ant eclipse` from the command line and
> then import the generated project in Eclipse.
> 
> Please note that you will need both ant and ivy installed (just start with
> ant eclipse and take it from there: the script will tell you what to do next).
>
> I hope it helps!
>
> Cheers,
> Giovanni
>
>
>> On 13 Jul 2017, at 19:54, govind nitk  wrote:
>>
>> Hi,
>>
>> Solr has releases, kindly checkout to the needed one.
>>
>>
>> cheers
>>
>> On Thu, Jul 13, 2017 at 11:20 PM, Rainer Gnan 
>> wrote:
>>
>>> Hello community,
>>>
>>> my aim is to develop solr custom code (e.g. UpdateRequestProcessor)
>>> within Eclipse AND to test the code within a debuggable solr/lucene
>>> local instance - also within Eclipse.
>>> Searching the web led me to multiple instructions but for me no one
>>> works.
>>>
>>> The only relevant question I actually have to solve this problem is:
>>> Where can I download the source code for the version I want that
>>> includes the ANT build.xml for building an Eclipse-Project?
>>>
>>> The solr project page (http://archive.apache.org/dist/lucene/solr/)
>>> seems not to provide that.
>>>
>>> I appreciate any hint!
>>>
>>> Best regards
>>> Rainer
>>>
>>>
>
>


Re: Cant stop/start server

2017-07-14 Thread Atita Arora
Did you mention the port with -p?
Like:

bin/solr stop -p 8983

Please check

On Jul 14, 2017 10:35 PM, "Iridian Group"  wrote:

> I know I am missing something very simple here but I cant stop/start my
> Solr instance with
> /opt/solr/bin/solr stop
>
> I get “No Solr nodes found to stop”, however the server is running. I can
> access the server via the default port and my app is able to use its
> services without issue.
>
>
> Thanks for any assistance!
>
>
>
>
>
> Thanks
>
> Keith Savoie
> Vice President of Technology
>
> IRiDiAN GROUP
>
> Helping organizations brand
> & market themselves through
> web, print, & social media.
>
>
> 14450 Eagle Run Dr. Ste. 120
> Omaha, Nebraska 68116
>
> P  • 402.422.0150
> W • iridiangroup.com 
>
> Join us on facebook  or twitter <
> https://twitter.com/iridiangroup>
>


Re: Cant stop/start server

2017-07-14 Thread Erick Erickson
What is the exact command you use? Because the command is "stop -all"
or "stop -p "?

On Fri, Jul 14, 2017 at 10:05 AM, Iridian Group
 wrote:
> I know I am missing something very simple here but I cant stop/start my Solr 
> instance with
> /opt/solr/bin/solr stop
>
> I get “No Solr nodes found to stop”, however the server is running. I can 
> access the server via the default port and my app is able to use its services 
> without issue.
>
>
> Thanks for any assistance!
>
>
>
>
>
> Thanks
>
> Keith Savoie
> Vice President of Technology
>
> IRiDiAN GROUP
>
> Helping organizations brand
> & market themselves through
> web, print, & social media.
>
>
> 14450 Eagle Run Dr. Ste. 120
> Omaha, Nebraska 68116
>
> P  • 402.422.0150
> W • iridiangroup.com 
>
> Join us on facebook  or twitter 
> 


Re: Apache Solr 4.10.x - Collection Reload times out

2017-07-14 Thread Erick Erickson
I doubt SOLR-6246 is related, DirectSolrSpellChecker just looks in the
index using (on a quick scan) IndexReader which doesn't hold a lock
IIUC so it shouldn't leave anything around. Additionally, there is no
real "build" step since it's looking at the index rather than creating
a new one as AnalyzingInfixSuggester does. The write lock in that JIRA
was for the "sidecar" index that AnalyzingInfixSuggester created.

Which doesn't help your original issue. Have you tried specifying the
"async" parameter when you issue the RELOAD command then checking the
status with REQUESTSTATUS? I'm wondering if you restart your cluster
_after_ the reload is successfully completed whether you'd have the
same problem. Or whether you'd get some more helpful information if
the request actually fails somehow.
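
Roughly (the collection name and request id are placeholders):

curl 'http://localhost:8983/solr/admin/collections?action=RELOAD&name=mycollection&async=reload-1'
curl 'http://localhost:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=reload-1'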

Also, why issue a reload? If you're re-indexing in the background and
want to atomically switch you could use collection aliasing (obviously
you'd need more disk space/resources which may make it not a viable
option). It looks like
> alias points to C1
> create C2 (or delete all data in an existing C2)
> index to C2
> check C2
> point alias to C2

Next time of course you index to C1 and switch the alias to C1 when
you're happy with it.
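With the Collections API the final switch is roughly (alias and collection
names are placeholders):

curl 'http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=myalias&collections=C2'
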

But even if you do the alias thing it'd still be good to see if we can
figure out what's going on because on the surface what you're
describing should be OK.

Best,
Erick

On Fri, Jul 14, 2017 at 8:11 AM, alessandro.benedetti
 wrote:
> I have been recently facing an issue with the Collection Reload in a couple
> of Solr Cloud clusters :
>
> 1) re-index a collection
> 2) collection happily working
> 3) trigger collection reload
> 4) reload times out ( silently, no message in any of the Solr node logs)
> 5) no effect on the collection ( it still serves query)
>
> If I restart, the collection doesn't start as it finds the write.lock in the
> index.
> Sometimes this even avoid the entire cluster to be restarted ( even if the
> clusterstate.json actually shows only few collection down) and Solr is not
> reachable.
> Of course i can mitigate the problem just cleaning up the indexes and
> restart (avoiding the reload in favor of just restarts in the future), but
> this is annoying.
>
> I index through the DIH and I use a DirectSolrSpellChecker .
> Should I take a look into Zookeeper ? I tried to check the Overseer queues
> and some other checks, not sure the best places to look though in there...
>
> Could this be related ?[1] I don't think so, but I am a bit puzzled...
>
> [1] https://issues.apache.org/jira/browse/SOLR-6246
>
>
>
>
>
>
> -
> ---
> Alessandro Benedetti
> Search Consultant, R&D Software Engineer, Director
> Sease Ltd. - www.sease.io
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Apache-Solr-4-10-x-Collection-Reload-times-out-tp4346075.html
> Sent from the Solr - User mailing list archive at Nabble.com.


Cant stop/start server

2017-07-14 Thread Iridian Group
I know I am missing something very simple here but I cant stop/start my Solr 
instance with
/opt/solr/bin/solr stop

I get “No Solr nodes found to stop”, however the server is running. I can 
access the server via the default port and my app is able to use its services 
without issue. 


Thanks for any assistance!





Thanks 

Keith Savoie
Vice President of Technology

IRiDiAN GROUP

Helping organizations brand
& market themselves through
web, print, & social media.  


14450 Eagle Run Dr. Ste. 120
Omaha, Nebraska 68116 

P  • 402.422.0150
W • iridiangroup.com  

Join us on facebook  or twitter 



Re: NullPointerException on openStreams

2017-07-14 Thread Erick Erickson
Joel:

Would it make sense to throw a more informative error when the stream
context wasn't set? Maybe an explicit check in open() or some such?

Erick

On Fri, Jul 14, 2017 at 8:25 AM, Joe Obernberger
 wrote:
> Still stuck on this one.  I suspect there is something I'm not setting in
> the StreamContext.  I'm not sure what to put for these two?
> context.put("core", this.coreName);
> context.put("solr-core", req.getCore());
>
> Also not sure what the class is for ClassifyStream?  Error that I'm getting
> is:
>
> java.io.IOException: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
>         at org.apache.solr.client.solrj.io.stream.CloudSolrStream.constructStreams(CloudSolrStream.java:408)
>         at org.apache.solr.client.solrj.io.stream.CloudSolrStream.open(CloudSolrStream.java:299)
>         at com.ngc.bigdata.ie_machinelearningprofile.MachineLearningProfileProcessor.profile(MachineLearningProfileProcessor.java:344)
>         at com.ngc.bigdata.ie_machinelearningprofile.ProfileThread.run(ProfileThread.java:41)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
>         at java.util.ArrayList.rangeCheck(ArrayList.java:653)
>         at java.util.ArrayList.get(ArrayList.java:429)
>         at org.apache.solr.client.solrj.io.stream.TupleStream.getShards(TupleStream.java:133)
>         at org.apache.solr.client.solrj.io.stream.CloudSolrStream.constructStreams(CloudSolrStream.java:393)
>
> Thanks for any ideas!
>
> -Joe
>
>
>
> On 7/13/2017 4:33 PM, Joe Obernberger wrote:
>>
>> Thanks for this.  I'm now trying to use stream for classify, but am
>> getting an ArrayIndexOutOfBounds error on the stream.open().  I'm setting
>> the streamFactory up, and including .withFunctionName("classify",
>> ClassifyStream.class) - but is that class in orga.apache.solr.handler?
>>
>> -
>> StringBuilder expression = new StringBuilder();
>> solrCollection = getCollectionFromProfileBean(pBean);
>>
>> expression.append("classify(model(models,id=\"").append(pBean.getModelID()).append("\",cacheMillis=5000),");
>>
>> expression.append("search(").append(solrCollection).append(",q=\"DocumentId:").append(docID).append("\",");
>> expression.append("fl=\"ClusterText,id\",sort=\"id
>> asc\"),field=\"ClusterText\")");
>> logger.info("Have classify expression:\n" +
>> expression.toString() + "\n");
>> params.set("expr", expression.toString());
>> params.set("qt", "/stream");
>> params.set("explain", "true");
>> params.set("q", "*:*");
>> params.set("fl", "id");
>> params.set("sort", "id asc");
>>
>> context = new StreamContext();
>>
>> context.setSolrClientCache(StaticInfo.getSingleton(props).getClientCache());
>> context.workerID = 0;
>> context.numWorkers = 1;
>> context.setModelCache(StaticInfo.getSingleton(props).getModelCache());
>>
>> streamFactory.withCollectionZkHost(solrCollection,
>> props.getProperty("hbase.zookeeper.solr.quorum"))
>> .withFunctionName("search", CloudSolrStream.class)
>> .withFunctionName("facet", FacetStream.class)
>> .withFunctionName("update", UpdateStream.class)
>> .withFunctionName("jdbc", JDBCStream.class)
>> .withFunctionName("topic", TopicStream.class)
>> .withFunctionName("commit", CommitStream.class)
>> // decorator streams
>> .withFunctionName("merge", MergeStream.class)
>> .withFunctionName("unique", UniqueStream.class)
>> .withFunctionName("top", RankStream.class)
>> .withFunctionName("reduce", ReducerStream.class)
>> .withFunctionName("parallel", ParallelStream.class)
>> .withFunctionName("rollup", RollupStream.class)
>> .withFunctionName("stats", StatsStream.class)
>> .withFunctionName("innerJoin", InnerJoinStream.class)
>> .withFunctionName("leftOuterJoin",
>> LeftOuterJoinStream.class)
>> .withFunctionName("hashJoin", HashJoinStream.class)
>> .withFunctionName("outerHashJoin",
>> OuterHashJoinStream.class)
>> .withFunctionName("intersect", IntersectStream.class)
>> .withFunctionName("complement",
>> ComplementStream.class)
>> .withFunctionName(SORT, SortStream.class)
>> .withFunctionName("train", TextLogitStream.class)
>> .withFunctionName("features",
>> FeaturesSelectionStream.class)
>> .withF

Re: Solr 6.6 is trying to loading *some* (not all) cores more than once

2017-07-14 Thread Erick Erickson
I haven't seen anything like that, unsightly indeed.

I like the idea of the button to remove failed messages. The only
thing you see is the write lock exception, correct?

And since you say it fails on different cores at different times that
seems to rule out somehow you have more than one core pointing to the
same data dir.

Two questions:
1> I don't see "inc_1" in the list of cores discovered by the
corePropertiesLocator. Is this perhaps an alias?
2> Are you absolutely sure that you don't somehow have more than one
core pointing to the same data dir? this latter is unlikely as you say
all the cores work, just covering bases.

Erick

On Fri, Jul 14, 2017 at 9:34 AM, Shawn Heisey  wrote:
> I have a situation at work where a dev system that I have just upgraded
> from 6.3.0 to 6.6.0 is trying to load a small number of cores more than
> once.  It's not always the same cores on a restart -- sometimes the list
> is different, and isn't always the same number of cores.
>
> The affected cores are working perfectly -- the first load appears to
> have no issues and is not affected by the second (failed) load attempt.
>
> Grepping for "inc_1" in the solr log (the final directory in the
> instanceDir of a core that was affected on that restart), I see that the
> core "inclive" is discovered and loaded, then a short time later, Solr
> attempts to load the same core again.
>
> 2017-07-13 15:54:10.168 INFO  (coreLoadExecutor-6-thread-1) [   ]
> o.a.s.c.CoreContainer Creating SolrCore 'inclive' using configuration
> from instancedir /index/solr6/data/cores/inc_1, trusted=true
> 2017-07-13 15:54:10.168 INFO  (coreLoadExecutor-6-thread-1) [   ]
> o.a.s.c.SolrCore [[inclive] ] Opening new SolrCore at
> [/index/solr6/data/cores/inc_1],
> dataDir=[/index/solr6/data/cores/inc_1/../../data/inc_1/]
> 2017-07-13 15:54:10.362 INFO  (coreLoadExecutor-6-thread-1) [   ]
> o.a.s.r.ManagedResourceStorage File-based storage initialized to use
> dir: /index/solr6/data/cores/inc_1/conf
> 2017-07-13 15:54:10.363 INFO  (coreLoadExecutor-6-thread-1) [   ]
> o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json
> using file:dir=/index/solr6/data/cores/inc_1/conf
> 2017-07-13 15:54:10.640 INFO  (qtp120960120-22) [   x:inclive]
> o.a.s.c.CoreContainer Creating SolrCore 'inclive' using configuration
> from instancedir /index/solr6/data/cores/inc_1, trusted=true
> 2017-07-13 15:54:10.641 INFO  (qtp120960120-22) [   x:inclive]
> o.a.s.c.SolrCore [[inclive] ] Opening new SolrCore at
> [/index/solr6/data/cores/inc_1],
> dataDir=[/index/solr6/data/cores/inc_1/../../data/inc_1/]
> Caused by: org.apache.lucene.store.LockObtainFailedException: Lock held
> by this virtual machine: /index/solr6/data/data/inc_1/index/write.lock
>
> I have only defined the solr home, not the coreRootDirectory.  Here is
> the first part of the log where it shows the primary configuration
> information:
>
> 2017-07-13 15:54:06.231 INFO  (main) [   ] o.e.j.s.Server
> jetty-9.3.14.v20161028
> 2017-07-13 15:54:06.573 INFO  (main) [   ] o.a.s.s.SolrDispatchFilter
> ___  _   Welcome to Apache Solr™ version 6.6.0
> 2017-07-13 15:54:06.574 INFO  (main) [   ] o.a.s.s.SolrDispatchFilter /
> __| ___| |_ _   Starting in standalone mode on port 8982
> 2017-07-13 15:54:06.574 INFO  (main) [   ] o.a.s.s.SolrDispatchFilter
> \__ \/ _ \ | '_|  Install dir: /opt/solr6
> 2017-07-13 15:54:06.590 INFO  (main) [   ] o.a.s.s.SolrDispatchFilter
> |___/\___/_|_|Start time: 2017-07-13T15:54:06.575Z
> 2017-07-13 15:54:06.608 INFO  (main) [   ] o.a.s.c.SolrResourceLoader
> Using system property solr.solr.home: /index/solr6/data
> 2017-07-13 15:54:06.616 INFO  (main) [   ] o.a.s.c.SolrXmlConfig Loading
> container configuration from /index/solr6/data/solr.xml
> 2017-07-13 15:54:06.745 INFO  (main) [   ] o.a.s.c.SolrResourceLoader
> [null] Added 8 libs to classloader, from paths: [/index/solr6/data/lib]
> 2017-07-13 15:54:07.244 INFO  (main) [   ] o.a.s.u.UpdateShardHandler
> Creating UpdateShardHandler HTTP client with params:
> socketTimeout=60&connTimeout=6&retry=true
> 2017-07-13 15:54:07.446 INFO  (main) [   ] o.a.s.c.CorePropertiesLocator
> Found 49 core definitions underneath /index/solr6/data
> 2017-07-13 15:54:07.447 INFO  (main) [   ] o.a.s.c.CorePropertiesLocator
> Cores are: [s0build, ai-inclive, spark1live, inclive, spark5build,
> ai-main, ncmain, s2build, sparkmain, ncrss, s4build, ai-1build, s3live,
> spark7build, spark7live, sparkinclive, sparkrss, spark2build,
> spark6build, spark1build, s5live, spark4build, ai-0live, s3build,
> spark2live, ai-incbuild, spark9live, spark4live, spark8live, ai-rss,
> s1build, spark0build, incbuild, spark5live, ai-1live, spark0live,
> s5build, s2live, sparkincbuild, s0live, spark3live, s4live, spark8build,
> spark3build, ai-0build, banana, s1live, spark6live, spark9build]
> 2017-07-13 15:54:07.549 INFO  (main) [   ] o.e.j.s.Server Started @2675ms
>
> As I said, this does not affect how the system runs, but it does displ

Solr 6.6 is trying to load *some* (not all) cores more than once

2017-07-14 Thread Shawn Heisey
I have a situation at work where a dev system that I have just upgraded
from 6.3.0 to 6.6.0 is trying to load a small number of cores more than
once.  It's not always the same cores on a restart -- sometimes the list
is different, and isn't always the same number of cores.

The affected cores are working perfectly -- the first load appears to
have no issues and is not affected by the second (failed) load attempt.

Grepping for "inc_1" in the solr log (the final directory in the
instanceDir of a core that was affected on that restart), I see that the
core "inclive" is discovered and loaded, then a short time later, Solr
attempts to load the same core again.

2017-07-13 15:54:10.168 INFO  (coreLoadExecutor-6-thread-1) [   ]
o.a.s.c.CoreContainer Creating SolrCore 'inclive' using configuration
from instancedir /index/solr6/data/cores/inc_1, trusted=true
2017-07-13 15:54:10.168 INFO  (coreLoadExecutor-6-thread-1) [   ]
o.a.s.c.SolrCore [[inclive] ] Opening new SolrCore at
[/index/solr6/data/cores/inc_1],
dataDir=[/index/solr6/data/cores/inc_1/../../data/inc_1/]
2017-07-13 15:54:10.362 INFO  (coreLoadExecutor-6-thread-1) [   ]
o.a.s.r.ManagedResourceStorage File-based storage initialized to use
dir: /index/solr6/data/cores/inc_1/conf
2017-07-13 15:54:10.363 INFO  (coreLoadExecutor-6-thread-1) [   ]
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json
using file:dir=/index/solr6/data/cores/inc_1/conf
2017-07-13 15:54:10.640 INFO  (qtp120960120-22) [   x:inclive]
o.a.s.c.CoreContainer Creating SolrCore 'inclive' using configuration
from instancedir /index/solr6/data/cores/inc_1, trusted=true
2017-07-13 15:54:10.641 INFO  (qtp120960120-22) [   x:inclive]
o.a.s.c.SolrCore [[inclive] ] Opening new SolrCore at
[/index/solr6/data/cores/inc_1],
dataDir=[/index/solr6/data/cores/inc_1/../../data/inc_1/]
Caused by: org.apache.lucene.store.LockObtainFailedException: Lock held
by this virtual machine: /index/solr6/data/data/inc_1/index/write.lock

I have only defined the solr home, not the coreRootDirectory.  Here is
the first part of the log where it shows the primary configuration
information:

2017-07-13 15:54:06.231 INFO  (main) [   ] o.e.j.s.Server
jetty-9.3.14.v20161028
2017-07-13 15:54:06.573 INFO  (main) [   ] o.a.s.s.SolrDispatchFilter 
___  _   Welcome to Apache Solr™ version 6.6.0
2017-07-13 15:54:06.574 INFO  (main) [   ] o.a.s.s.SolrDispatchFilter /
__| ___| |_ _   Starting in standalone mode on port 8982
2017-07-13 15:54:06.574 INFO  (main) [   ] o.a.s.s.SolrDispatchFilter
\__ \/ _ \ | '_|  Install dir: /opt/solr6
2017-07-13 15:54:06.590 INFO  (main) [   ] o.a.s.s.SolrDispatchFilter
|___/\___/_|_|Start time: 2017-07-13T15:54:06.575Z
2017-07-13 15:54:06.608 INFO  (main) [   ] o.a.s.c.SolrResourceLoader
Using system property solr.solr.home: /index/solr6/data
2017-07-13 15:54:06.616 INFO  (main) [   ] o.a.s.c.SolrXmlConfig Loading
container configuration from /index/solr6/data/solr.xml
2017-07-13 15:54:06.745 INFO  (main) [   ] o.a.s.c.SolrResourceLoader
[null] Added 8 libs to classloader, from paths: [/index/solr6/data/lib]
2017-07-13 15:54:07.244 INFO  (main) [   ] o.a.s.u.UpdateShardHandler
Creating UpdateShardHandler HTTP client with params:
socketTimeout=60&connTimeout=6&retry=true
2017-07-13 15:54:07.446 INFO  (main) [   ] o.a.s.c.CorePropertiesLocator
Found 49 core definitions underneath /index/solr6/data
2017-07-13 15:54:07.447 INFO  (main) [   ] o.a.s.c.CorePropertiesLocator
Cores are: [s0build, ai-inclive, spark1live, inclive, spark5build,
ai-main, ncmain, s2build, sparkmain, ncrss, s4build, ai-1build, s3live,
spark7build, spark7live, sparkinclive, sparkrss, spark2build,
spark6build, spark1build, s5live, spark4build, ai-0live, s3build,
spark2live, ai-incbuild, spark9live, spark4live, spark8live, ai-rss,
s1build, spark0build, incbuild, spark5live, ai-1live, spark0live,
s5build, s2live, sparkincbuild, s0live, spark3live, s4live, spark8build,
spark3build, ai-0build, banana, s1live, spark6live, spark9build]
2017-07-13 15:54:07.549 INFO  (main) [   ] o.e.j.s.Server Started @2675ms

As I said, this does not affect how the system runs, but it does display
unsightly "Initialization Failures" on every page of the admin UI.

Has anyone else run into this?  Worth an issue?

Possible enhancement idea: A button to acknowledge initialization
failures and remove them from the UI display.

Thanks,
Shawn



Re: Run solr 6.5+ as daemon

2017-07-14 Thread Shawn Heisey
On 7/14/2017 8:29 AM, Nawab Zada Asad Iqbal wrote:
> I want my Solr to restart if the process crashes; I am wondering if there
> are any drawbacks I should consider.
> I am considering using 'daemon --respawn' in bin/solr;

The included scripts already run Solr in the background.  I don't know
if that's enough to call it a daemon, but it's pretty close even if it's
not technically accurate.

Solr almost never *crashes*.  I've never seen it happen, and I've been
using Solr for seven years.  Typically if a Solr process were to
actually crash, it would be caused by a problem with Java itself, a
problem with the local Solr installation, or a problem with the
operating system.

Modern Solr versions (if running on non-Windows systems) *do* kill
themselves if an OutOfMemoryError exception occurs ... but if that
happens, you do not want to automatically restart Solr -- you need to
figure out why the OOME happened and fix it.  After a Java program
encounters OOME, it is completely unpredictable and can destroy its
data.  If the OOME was not caused by an atypical query, it is almost
guaranteed to happen again.

Thanks,
Shawn



Re: NullPointerException on openStreams

2017-07-14 Thread Joe Obernberger
Still stuck on this one.  I suspect there is something I'm not setting
in the StreamContext.  I'm not sure what to put for these two:

context.put("core", this.coreName);
context.put("solr-core", req.getCore());

I'm also not sure which package ClassifyStream lives in.  The error I'm
getting is:


java.io.IOException: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at 
org.apache.solr.client.solrj.io.stream.CloudSolrStream.constructStreams(CloudSolrStream.java:408)
at 
org.apache.solr.client.solrj.io.stream.CloudSolrStream.open(CloudSolrStream.java:299)
at 
com.ngc.bigdata.ie_machinelearningprofile.MachineLearningProfileProcessor.profile(MachineLearningProfileProcessor.java:344)
at 
com.ngc.bigdata.ie_machinelearningprofile.ProfileThread.run(ProfileThread.java:41)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at 
org.apache.solr.client.solrj.io.stream.TupleStream.getShards(TupleStream.java:133)
at 
org.apache.solr.client.solrj.io.stream.CloudSolrStream.constructStreams(CloudSolrStream.java:393)


Thanks for any ideas!

-Joe
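
A minimal, self-contained sketch of driving a CloudSolrStream from a standalone client is below. The zkHost and collection values are placeholders, and the comment about the "Index: 0, Size: 0" failure is an educated guess: in 6.x that exception usually means getShards found a slice with no ACTIVE replica visible from the zkHost registered for the collection, or the collection/zkHost string is wrong. For what it's worth, ClassifyStream appears to live in solr-core's org.apache.solr.handler package rather than in SolrJ, so it needs solr-core on the classpath.

import java.io.IOException;

import org.apache.solr.client.solrj.io.SolrClientCache;
import org.apache.solr.client.solrj.io.Tuple;
import org.apache.solr.client.solrj.io.stream.CloudSolrStream;
import org.apache.solr.client.solrj.io.stream.StreamContext;
import org.apache.solr.client.solrj.io.stream.TupleStream;
import org.apache.solr.client.solrj.io.stream.expr.StreamFactory;

public class StreamSketch {
    public static void main(String[] args) throws IOException {
        // Placeholders -- use the real ZooKeeper ensemble (including any chroot)
        // and the real collection name.
        String zkHost = "zk1:2181,zk2:2181,zk3:2181/solr";
        String collection = "mycollection";

        StreamFactory factory = new StreamFactory()
                .withCollectionZkHost(collection, zkHost)
                .withFunctionName("search", CloudSolrStream.class);

        SolrClientCache clientCache = new SolrClientCache();
        StreamContext context = new StreamContext();
        context.setSolrClientCache(clientCache);

        TupleStream stream = factory.constructStream(
                "search(" + collection + ", q=\"*:*\", fl=\"id\", sort=\"id asc\")");
        stream.setStreamContext(context);
        try {
            // open() is where "Index: 0, Size: 0" surfaces when no ACTIVE replica
            // of the collection is visible from the zkHost above.
            stream.open();
            for (Tuple tuple = stream.read(); !tuple.EOF; tuple = stream.read()) {
                System.out.println(tuple.getString("id"));
            }
        } finally {
            stream.close();
            clientCache.close();
        }
    }
}

Note that, as far as I can tell, ClassifyStream expects a live SolrCore under the "solr-core" key of the StreamContext (it pulls the field analyzer from it), so the classify() part is usually easier to run server-side by posting the whole expression to a collection's /stream handler (for example via SolrStream) rather than constructing it in the client.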


On 7/13/2017 4:33 PM, Joe Obernberger wrote:
Thanks for this.  I'm now trying to use stream for classify, but am 
getting an ArrayIndexOutOfBounds error on the stream.open().  I'm 
setting the streamFactory up, and including 
.withFunctionName("classify", ClassifyStream.class) - but is that 
class in orga.apache.solr.handler?


-
StringBuilder expression = new StringBuilder();
solrCollection = getCollectionFromProfileBean(pBean);
expression.append("classify(model(models,id=\"").append(pBean.getModelID()).append("\",cacheMillis=5000),");
expression.append("search(").append(solrCollection).append(",q=\"DocumentId:").append(docID).append("\",");
expression.append("fl=\"ClusterText,id\",sort=\"id asc\"),field=\"ClusterText\")");
logger.info("Have classify expression:\n" + expression.toString() + "\n");

params.set("expr", expression.toString());
params.set("qt", "/stream");
params.set("explain", "true");
params.set("q", "*:*");
params.set("fl", "id");
params.set("sort", "id asc");

context = new StreamContext();
context.setSolrClientCache(StaticInfo.getSingleton(props).getClientCache());
context.workerID = 0;
context.numWorkers = 1;
context.setModelCache(StaticInfo.getSingleton(props).getModelCache());

streamFactory.withCollectionZkHost(solrCollection, props.getProperty("hbase.zookeeper.solr.quorum"))
        .withFunctionName("search", CloudSolrStream.class)
        .withFunctionName("facet", FacetStream.class)
        .withFunctionName("update", UpdateStream.class)
        .withFunctionName("jdbc", JDBCStream.class)
        .withFunctionName("topic", TopicStream.class)
        .withFunctionName("commit", CommitStream.class)
        // decorator streams
        .withFunctionName("merge", MergeStream.class)
        .withFunctionName("unique", UniqueStream.class)
        .withFunctionName("top", RankStream.class)
        .withFunctionName("reduce", ReducerStream.class)
        .withFunctionName("parallel", ParallelStream.class)
        .withFunctionName("rollup", RollupStream.class)
        .withFunctionName("stats", StatsStream.class)
        .withFunctionName("innerJoin", InnerJoinStream.class)
        .withFunctionName("leftOuterJoin", LeftOuterJoinStream.class)
        .withFunctionName("hashJoin", HashJoinStream.class)
        .withFunctionName("outerHashJoin", OuterHashJoinStream.class)
        .withFunctionName("intersect", IntersectStream.class)
        .withFunctionName("complement", ComplementStream.class)
        .withFunctionName(SORT, SortStream.class)
        .withFunctionName("train", TextLogitStream.class)
        .withFunctionName("features", FeaturesSelectionStream.class)
        .withFunctionName("daemon", DaemonStream.class)
        .withFunctionName("shortestPath", ShortestPathStream.class)
        .withFunctionName("gatherNodes", GatherNodesStream.class)
        .withFunctionName("nodes", GatherNodesStream.class)
        .withFunctionName("select", SelectStream.class)
        .withFunctionName("shortestPath", ShortestPathStream.cl

Apache Solr 4.10.x - Collection Reload times out

2017-07-14 Thread alessandro.benedetti
I have recently been facing an issue with Collection Reload in a couple
of SolrCloud clusters:

1) re-index a collection
2) collection happily working
3) trigger a collection reload
4) the reload times out (silently, with no message in any of the Solr node logs)
5) no effect on the collection (it still serves queries)

If I restart, the collection doesn't start because it finds the write.lock in the
index.
Sometimes this even prevents the entire cluster from being restarted (even though
the clusterstate.json actually shows only a few collections down) and Solr is not
reachable.
Of course I can mitigate the problem by just cleaning up the indexes and
restarting (avoiding the reload in favor of plain restarts in the future), but
this is annoying.

I index through the DIH and I use a DirectSolrSpellChecker.
Should I take a look into ZooKeeper? I tried to check the Overseer queues
and a few other things, but I am not sure of the best places to look in there...
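
For reference, a minimal sketch of listing the two Overseer queues with SolrJ's SolrZkClient (the ensemble string is a placeholder, and the /overseer paths are the standard znodes; a reload request that was accepted but never processed may still be sitting in collection-queue-work):

import java.util.List;

import org.apache.solr.common.cloud.SolrZkClient;

public class OverseerQueuePeek {
    public static void main(String[] args) throws Exception {
        // Placeholder ensemble string -- include the chroot if the cluster uses one.
        SolrZkClient zk = new SolrZkClient("zk1:2181,zk2:2181,zk3:2181", 10000);
        try {
            // Collections API requests (RELOAD included) queue up here for the Overseer.
            List<String> collectionWork =
                    zk.getChildren("/overseer/collection-queue-work", null, true);
            // General cluster-state update queue.
            List<String> stateQueue = zk.getChildren("/overseer/queue", null, true);
            System.out.println("collection-queue-work: " + collectionWork);
            System.out.println("queue: " + stateQueue);
        } finally {
            zk.close();
        }
    }
}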

Could this be related? [1] I don't think so, but I am a bit puzzled...

[1] https://issues.apache.org/jira/browse/SOLR-6246






-
---
Alessandro Benedetti
Search Consultant, R&D Software Engineer, Director
Sease Ltd. - www.sease.io
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Apache-Solr-4-10-x-Collection-Reload-times-out-tp4346075.html
Sent from the Solr - User mailing list archive at Nabble.com.


Run solr 6.5+ as daemon

2017-07-14 Thread Nawab Zada Asad Iqbal
Hi,

I want my Solr to restart if the process crashes; I am wondering if there
are any drawbacks I should consider.
I am considering using 'daemon --respawn' in bin/solr, where the OOTB
script has the following statement:

nohup "$JAVA" "${SOLR_START_OPTS[@]}" $SOLR_ADDL_ARGS
> -Dsolr.log.muteconsole \
> "-XX:OnOutOfMemoryError=$SOLR_TIP/bin/oom_solr.sh $SOLR_PORT
> $SOLR_LOGS_DIR" \
> -jar start.jar "${SOLR_JETTY_CONFIG[@]}" \
> 1>"$SOLR_LOGS_DIR/solr-$SOLR_PORT-console.log" 2>&1 & echo $! >
> "$SOLR_PID_DIR/solr-$SOLR_PORT.pid"
>


Basically something like:

daemon --respawn --name mysolr --pidfiles="$SOLR_PID_DIR/solr-$SOLR_PORT.pid" \
    --user solr --chdir /home/nawab/solr/bin \
    --stdout $LOGDIR/$SOLR_SHARD.log --stderr $LOGDIR/$SOLR_SHARD.log \
    --command $JAVA -- "${SOLR_START_OPTS[@]}" $SOLR_ADDL_ARGS -Dsolr.log.muteconsole \
    "-XX:OnOutOfMemoryError=$SOLR_TIP/bin/oom_solr.sh $SOLR_PORT $SOLR_LOGS_DIR" \
    -jar start.jar "${SOLR_JETTY_CONFIG[@]}"


Regards
Nawab


Antw: Re: How to Debug Solr With Eclipse

2017-07-14 Thread Rainer Gnan
Hi Giovanni,

thank you for this hint!

The whole process (tar -xvf ..., ant compile, ant eclipse) up until importing the
Eclipse project seems to be fine.
After importing it as an existing Eclipse project, the project explorer shows an
error sign on the project folder.
Refreshing does not help.

-> The sub-folder lucene/src is empty ...

I am using eclipse neon, java 1.8.0_112, solr-6.6.0-src.tgz.

Any suggestions?

Cheers,
Rainer


Rainer Gnan
Bayerische Staatsbibliothek 
Verbundzentrale des BVB
Referat Verbundnahe Dienste
80807 München
Tel.: +49(0)89/28638-4445
Fax: +49(0)89/28638-2605
E-Mail: rainer.g...@bsb-muenchen.de




>>> Giovanni De Stefano  13.07.2017 19:59 >>>
Hello Rainer,

you have the right link: select the version you want and download the -src 
version.

Once you untar the .tgz you can run `ant eclipse` from the command line and then
import the generated project into Eclipse.

Please note that you will need both ant and ivy installed (just start with `ant
eclipse` and take it from there: the script will tell you what to do next).

I hope it helps!

Cheers,
Giovanni


> On 13 Jul 2017, at 19:54, govind nitk  wrote:
> 
> Hi,
> 
> Solr has releases; kindly check out the one you need.
> 
> 
> cheers
> 
> On Thu, Jul 13, 2017 at 11:20 PM, Rainer Gnan 
> wrote:
> 
>> Hello community,
>> 
>> my aim is to develop solr custom code (e.g. UpdateRequestProcessor)
>> within Eclipse AND to test the code within a debuggable solr/lucene
>> local instance - also within Eclipse.
>> Searching the web led me to multiple instructions but for me no one
>> works.
>> 
>> The only relevant question I actually have to solve this problem is:
>> Where can I download the source code for the version I want that
>> includes the ANT build.xml for building an Eclipse-Project?
>> 
>> The solr project page (http://archive.apache.org/dist/lucene/solr/)
>> seems not to provide that.
>> 
>> I appreciate any hint!
>> 
>> Best regards
>> Rainer
>> 
>> 




Re: Create too many zookeeper connections when recreate CloudSolrServer instance

2017-07-14 Thread Shawn Heisey
On 7/14/2017 6:29 AM, wg85907 wrote:
> I use Solr (4.10.2) as an indexing tool. I use a singleton
> CloudSolrServer instance to query Solr. When I hit an exception, for example
> when the current Solr server does not respond, I will create a new CloudSolrServer
> instance and shut down the old one.

Why shut down the object and create a new one?  That should not be necessary.

The SolrJ client objects, including CloudSolrServer (CloudSolrClient in
5.0 and later) are designed to be created once and used by multiple
threads until program exit.  Any problems you encounter with them should
be due to either an incorrect query or server side problems ... if you
are finding that the client stops working after encountering an error
and never starts working again, that's either a problem with your system
or a bug in SolrJ.  A problem with your system is more likely than a
bug, but a bug is always possible.

Since you're using CloudSolrServer and not CloudSolrClient, I am
assuming that your SolrJ version is also 4.x.  Upgrading is strongly
recommended.  Here are a few things you should know about why I am
making that recommendation:  Development on 4.x is completely dead since
the release of 6.0 in April 2016 -- any bug found in 4.x will NOT be
fixed.  The 5.x branch is in maintenance mode, which means that only
very major bugs will be fixed.  The current stable branch is 6.x -- the
vast majority of problems will only be fixed there.  The project is in
the process of gearing up for a 7.0 release, which will end development
on 5.x, put 6.x in maintenance mode, and make 7.x the stable branch.

> 2017-07-06 09:42:37,595 [myid:5] - WARN 
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:10199:NIOServerCnxnFactory@193] - Too 
> many connections from /169.171.87.37 - max is 60
>   So I just want to know whether I am operating CloudSolrServer in the wrong way, and
> whether you have any suggestions about how to meet my requirement.

You should not be creating new CloudSolrServer instances.  That
exception may indicate that there is a bug in the shutdown method, but
as already said, you should not need to make a new client object.  Note
that a bug in the SolrServer#shutdown method in 4.x or 5.x isn't going
to be fixed.  In 6.x, the shutdown method is gone and the SolrClient
object now uses a close() method.  If 6.x has a problem with close(),
that is something we need to know.
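
As an illustration only -- the collection name and ZooKeeper string below are placeholders -- the create-once, share-across-threads pattern with the 6.x client looks roughly like this:

import java.io.IOException;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public final class SolrClientHolder {
    // One client per JVM; CloudSolrClient is thread safe and holds a single
    // ZooKeeper connection no matter how many threads query through it.
    private static final CloudSolrClient CLIENT = new CloudSolrClient.Builder()
            .withZkHost("zk1:2181,zk2:2181,zk3:2181/solr")   // placeholder ensemble
            .build();

    private SolrClientHolder() {}

    public static QueryResponse query(String collection, SolrQuery q)
            throws SolrServerException, IOException {
        // On a transient error, let the caller retry against the same client
        // instead of tearing the client down and rebuilding it.
        return CLIENT.query(collection, q);
    }

    public static void shutdown() throws IOException {
        CLIENT.close();   // close exactly once, at application exit
    }
}

Every query thread goes through the same object, and close() runs once at application exit, so ZooKeeper only ever sees one connection from the client.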

Thanks,
Shawn



RE: [EXTERNAL] - Re: compiling Solr

2017-07-14 Thread Steve Pruitt
My mistake.  I guess I thought compiling and creating the dist still created a 
war for the client.  The build was successful and of course the webapp folder 
was created.  Again, my error.

I am only building Solr because I want to learn more about how things work through
direct observation.  It is hard to glean much from the JavaDocs.

My immediate concern is debugging (from IntelliJ) two custom search components
I am working on.
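
For a custom search component, a bare-bones skeleton (class and response-key names are made up here) is just:

import java.io.IOException;

import org.apache.solr.handler.component.ResponseBuilder;
import org.apache.solr.handler.component.SearchComponent;

public class ExampleComponent extends SearchComponent {

    @Override
    public void prepare(ResponseBuilder rb) throws IOException {
        // Runs before the query is executed; a good spot for a breakpoint.
    }

    @Override
    public void process(ResponseBuilder rb) throws IOException {
        // Runs after the query; add something visible to the response so it is
        // easy to confirm the component actually ran.
        rb.rsp.add("example-component", "ran");
    }

    @Override
    public String getDescription() {
        return "Example search component";
    }
}

It gets registered with a <searchComponent> element in solrconfig.xml and added to a request handler's components (or last-components) list; a debugger attached to the running Solr JVM can then step through prepare() and process().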

Thanks.

-S

-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org] 
Sent: Thursday, July 13, 2017 6:06 PM
To: solr-user@lucene.apache.org
Subject: [EXTERNAL] - Re: compiling Solr

On 7/13/2017 2:16 PM, Steve Pruitt wrote:
> I have been following the instructions on the Solr Wiki for compiling Solr.
> I started with the 6.6 source.  The only thing I did differently was download
> the src directly.  I did not use Subversion.
> I made it through step 7 - Compile application - with no problems.  However, the
> dist folder contains newly built snapshot jars, but no war file.

As noted by Daniel on your other reply, that page is very out of date. 
This is more current:

https://wiki.apache.org/solr/HowToContribute

There has been no war file in the dist directory since version 5.0.0, and there 
has been no war file produced *at all* since version 5.3.0.

https://wiki.apache.org/solr/WhyNoWar

If you run "ant server", then you will get a runnable server.  Once that's 
done, type "bin/solr start" or "bin\solr start" to start Solr, depending on the 
operating system.

I agree with Daniel on another point:  If you aren't intending to immediately 
jump into editing the source code, then you should download the binary 
distribution, which is ready to run right away.

You can also run "ant package" to create your own local copy of the binary 
distribution with a SNAPSHOT version number.

Thanks,
Shawn



Solr 6.6.0 - Deleting Collections - HDFS

2017-07-14 Thread Joe Obernberger
When I delete a collection, it is gone from the GUI, but the directory 
is not removed from HDFS.  The directory is empty, but the entry is 
still there.  Is this expected?  As shown below all the MODEL1007_* 
collections have been deleted.


hadoop fs -du -s -h /solr6.6.0/*
3.3 G  22.7 G  /solr6.6.0/IMAGEDATA
0  0  /solr6.6.0/MODEL1007_1499965404903
0  0  /solr6.6.0/MODEL1007_1499966093797
0  0  /solr6.6.0/MODEL1007_1499968803262
0  0  /solr6.6.0/MODEL1007_1499969417635
0  0  /solr6.6.0/MODEL1007_1499969774354
0  0  /solr6.6.0/MODEL1007_1499970938597
0  0  /solr6.6.0/MODEL1007_1499971101288
0  0  /solr6.6.0/MODEL1007_1499971618545
635.8 G  2.0 T  /solr6.6.0/UNCLASS
241.5 K  724.5 K  /solr6.6.0/models

-Joe



Create too many zookeeper connections when recreate CloudSolrServer instance

2017-07-14 Thread wg85907
Hi Community,
I use Solr (4.10.2) as an indexing tool and a singleton
CloudSolrServer instance to query it. When I hit an exception, for example
when the current Solr server does not respond, I create a new CloudSolrServer
instance and shut down the old one. We have many query threads that share the
same CloudSolrServer instance. In one case, when thread A meets an exception it
creates a new CloudSolrServer instance and begins to shut down the current
one; from the Solr code I know the first step is to close the
ZooKeeper connection. At the same time, thread B may still be querying
with this instance, and the first step of a query is to check the ZooKeeper
connection and create one if it does not exist. Thread A can then
proceed with the shutdown, leaving the ZooKeeper connection created by
thread B orphaned. Because of this, we accumulate more and
more ZooKeeper connections until we cannot create a new one and
get the exception below on the ZooKeeper server side:

2017-07-06 09:42:37,595 [myid:5] - WARN 
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:10199:NIOServerCnxnFactory@193] - Too
many connections from /169.171.87.37 - max is 60
  So I just want to know whether I am operating CloudSolrServer in the wrong way, and
whether you have any suggestions about how to meet my requirement.
Regards,
Geng, Wei



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Create-too-many-zookeeper-connections-when-recreate-CloudSolrServer-instance-tp4346040.html
Sent from the Solr - User mailing list archive at Nabble.com.


Issue: Hit Highlighting Working Inconsistently in Solr 6.6

2017-07-14 Thread Vikram Oberoi
Hi there,

I'm seeing inconsistent highlighting behavior using a default, fresh Solr
6.6 install and it's unclear to me why or how to go about debugging it.

Hit highlights either show entirely correct highlights or none at all when
there should be highlights.

   - Some queries show highlights out of the box, some do not.
  - e.g. "hello" yields no highlights, but "goodbye" correctly yields
  highlights
   - Some queries that do not show highlights suddenly work when specifying
   fields
  - e.g. "subject:hello" yields highlights, but "hello" does not
   - When queries that yield highlights and queries that do not are
   combined, only those that work are highlighted.
  - e.g. "hello goodbye" yields highlights correctly for "goodbye", but
  not for "hello"

I've thrown specific details and examples in a Gist here:

Full Gist: https://gist.github.com/voberoi/a7a8a679390fc4f27422e70600cfb338

   - Problem description:
  -
  
https://gist.github.com/voberoi/a7a8a679390fc4f27422e70600cfb338#file-problem-details-md
   - Solr install, my schema, solrconfig details:
  -
  
https://gist.github.com/voberoi/a7a8a679390fc4f27422e70600cfb338#file-solr-details-md

Does anyone here have any hypotheses for why this might be happening?

Thanks!
Vikram
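
A small SolrJ sketch that runs a query with highlighting enabled and prints the highlight map per document -- the core URL and field names are placeholders for whatever the schema actually uses:

import java.util.List;
import java.util.Map;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;

public class HighlightCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder core URL and fields -- adjust to the actual install and schema.
        try (HttpSolrClient client =
                     new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build()) {
            SolrQuery q = new SolrQuery("hello goodbye");
            q.setHighlight(true);
            q.addHighlightField("subject");
            q.addHighlightField("body");

            QueryResponse rsp = client.query(q);
            Map<String, Map<String, List<String>>> highlighting = rsp.getHighlighting();
            for (SolrDocument doc : rsp.getResults()) {
                String id = String.valueOf(doc.getFieldValue("id"));
                // Compare which documents matched against which of them got snippets.
                System.out.println(id + " -> " + highlighting.get(id));
            }
        }
    }
}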