How to use javacc with QueryParser.jj

2017-07-24 Thread Nawab Zada Asad Iqbal
[Subject changed for reposting]

Good morning,

If I want to change something in
lucene-solr/solr/core/src/java/org/apache/solr/parser/QueryParser.jj,
what is the workflow to generate the new Java code?
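From the quoted thread below, it looks like the workflow is roughly the
following (an untested sketch; it assumes JavaCC is installed where the
build can find it):

  cd lucene-solr/solr/core
  ant javacc    # regenerates org/apache/solr/parser/*.java from QueryParser.jj

  cd ..
  ant server    # rebuild so the regenerated parser gets compiled in

Is that the intended workflow?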


Thanks
Nawab

On Fri, Jul 21, 2017 at 7:33 PM, Nawab Zada Asad Iqbal 
wrote:

> ok,  I see there is an `ant javacc` target in some folders, e.g.
>
> 1) lucene-solr/solr/build/solr/src-export/solr/core
> 2) lucene-solr/lucene/queryparser
>
> Both of them use different parser files. I am interested in the
> QueryParser at path:
> lucene-solr/solr/core/src/java/org/apache/solr/parser/QueryParser.jj
>
> this apparently is getting dropped at:
> lucene-solr/solr/build/solr/src-export/solr/core/src/java/org/apache/solr/parser/QueryParser.jj
>
> However, I am not sure what target drops it!
>
>
> Nawab
>
>
>
>
> On Fri, Jul 21, 2017 at 7:12 PM, Nawab Zada Asad Iqbal 
> wrote:
>
>> Hi,
>>
>> I know that we can make changes in the language by editing
>> QueryParser.jj, but how does it get generated into Java code? Is there
>> an ant target for this? 'compile' doesn't seem to regenerate the Java
>> code for my changes (e.g., adding lower-case logical operators).
>>
>>
>> Regards
>> Nawab
>>
>
>


Re: how to generate code from QueryParser.jj file

2017-07-21 Thread Nawab Zada Asad Iqbal
ok,  I see there is an `ant javacc` target in some folders, e.g.

1) lucene-solr/solr/build/solr/src-export/solr/core
2) lucene-solr/lucene/queryparser

Both of them use different parser files. I am interested in the QueryParser
at path:
lucene-solr/solr/core/src/java/org/apache/solr/parser/QueryParser.jj

this apparently is getting dropped at:
lucene-solr/solr/build/solr/src-export/solr/core/src/java/org/apache/solr/parser/QueryParser.jj

However, I am not sure what target drops it!


Nawab




On Fri, Jul 21, 2017 at 7:12 PM, Nawab Zada Asad Iqbal 
wrote:

> Hi,
>
> I know that we can make changes in the language by editing QueryParser.jj,
> but how does it get generated into Java code? Is there an ant target for
> this? 'compile' doesn't seem to regenerate the Java code for my changes
> (e.g., adding lower-case logical operators).
>
>
> Regards
> Nawab
>


how to generate code from QueryParser.jj file

2017-07-21 Thread Nawab Zada Asad Iqbal
Hi,

I know that we can make changes in the language by editing QueryParser.jj,
but how does it get generated into Java code? Is there an ant target for
this? 'compile' doesn't seem to regenerate the Java code for my changes
(e.g., adding lower-case logical operators).


Regards
Nawab


Re: Solr 6.6 test failure: TestSolrCloudWithKerberosAlt.testBasics

2017-07-20 Thread Nawab Zada Asad Iqbal
xecutor.java:617)
   [junit4]> at java.lang.Thread.run(Thread.java:745)
   [junit4]> at
__randomizedtesting.SeedInfo.seed([C3B77541FB9DE693]:0)
   [junit4] Completed [1/1 (1!)] in 32.47s, 1 test, 3 errors <<< FAILURES!
   [junit4]
   [junit4]
   [junit4] Tests with failures [seed: C3B77541FB9DE693]:
   [junit4]   -
org.apache.solr.cloud.TestSolrCloudWithKerberosAlt.testBasics
   [junit4]   - org.apache.solr.cloud.TestSolrCloudWithKerberosAlt (suite)
   [junit4]
   [junit4]
   [junit4] JVM J0: 0.86 ..34.40 =33.53s
   [junit4] Execution time total: 34 seconds
   [junit4] Tests summary: 1 suite, 1 test, 2 suite-level errors, 1 error

BUILD FAILED





On Thu, Jul 20, 2017 at 1:20 PM, Steve Rowe  wrote:

> Does it look like this?: <https://lists.apache.org/thread.html/643b8188e94983cfa191116381ae3044ab06fe78a2aedf1768c6d6c5@%3Cdev.lucene.apache.org%3E>
>
> I see failures like that on my Jenkins once or twice a week.
>
> --
> Steve
> www.lucidworks.com
>
> > On Jul 20, 2017, at 3:53 PM, Nawab Zada Asad Iqbal 
> wrote:
> >
> > Hi,
> >
> > I cloned solr 6.6 branch today and I see this failure consistently.
> >
> > TestSolrCloudWithKerberosAlt.testBasics
> >
> >
> > I had done some script changes but after seeing this failure I reverted
> > them and ran: `ant -Dtestcase=TestSolrCloudWithKerberosAlt clean test`
> but
> > this test still fails with this error:-
> >
> >   [junit4]> Throwable #1: java.lang.NoSuchFieldError: id_aes128_CBC
> >   [junit4]> at
> > __randomizedtesting.SeedInfo.seed([453D16027AC52FD9:78E5B82E422B71A9]:0)
> >
> >
> > I see the Jenkins builds are all clean, so I am not sure what I am hitting.
> >
> > https://builds.apache.org/job/Lucene-Solr-Maven-6.x/
> >
> > https://builds.apache.org/job/Solr-Artifacts-6.x/
> >
> > Regards
> > Nawab
>
>


Solr 6.6 test failure: TestSolrCloudWithKerberosAlt.testBasics

2017-07-20 Thread Nawab Zada Asad Iqbal
Hi,

I cloned solr 6.6 branch today and I see this failure consistently.

TestSolrCloudWithKerberosAlt.testBasics


I had done some script changes but after seeing this failure I reverted
them and ran: `ant -Dtestcase=TestSolrCloudWithKerberosAlt clean test` but
this test still fails with this error:-

   [junit4]> Throwable #1: java.lang.NoSuchFieldError: id_aes128_CBC
   [junit4]> at
__randomizedtesting.SeedInfo.seed([453D16027AC52FD9:78E5B82E422B71A9]:0)


I see the Jenkins builds are all clean, so I am not sure what I am hitting.

https://builds.apache.org/job/Lucene-Solr-Maven-6.x/

https://builds.apache.org/job/Solr-Artifacts-6.x/

Regards
Nawab


Re: 'ant test' gets stuck after aborting one run

2017-07-19 Thread Nawab Zada Asad Iqbal
Thanks Erick for the fix.

Meanwhile, I had restarted the terminal, then the machine, and cloned the
repo again, before realizing that the problematic state lives somewhere else
on the drive, in a place I don't know about.


Nawab

On Wed, Jul 19, 2017 at 12:57 PM, Erick Erickson 
wrote:

> This is often an issue with ivy, one of my least favorite "features"
> of Ivy. To cure it I delete all the *.lck files in my ivy cache. On my
> mac:
>
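> # delete stale Ivy lock (*.lck) files left behind by the aborted run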
> cd ~/.ivy2
> find . -name "*.lck" | xargs rm
>
> Best,
> Erick
>
>
> On Wed, Jul 19, 2017 at 11:21 AM, Nawab Zada Asad Iqbal
>  wrote:
> > Hi
> >
> >
> > I stopped 'ant test' target before it finished, and now whenever I run it
> > again, it is stuck at 'install-junit4-taskdef'.
> >
> > I have tried 'ant clean' but it didn't help. I guessed that it could be
> > some locking thing in ivy or ant so I set ivy.sync to false in the
> > common-build.xml
> >
> >   <property name="ivy.sync" value="false"/>
> >
> > I also deleted the .cache folder.
> >
> > But that didn't help either.
> >
> > What should I do?
> >
> > When run with '-v', the execution halts at following logs:-
> >
> > ...
> > install-junit4-taskdef:
> > Overriding previous definition of property "ivy.version"
> > [ivy:cachepath] using inline mode to resolve
> > com.carrotsearch.randomizedtesting junit4-ant 2.5.0 (*(public))
> > [ivy:cachepath] no resolved descriptor found: launching default resolve
> > Overriding previous definition of property "ivy.version"
> > [ivy:cachepath] default: Checking cache for: dependency:
> > com.carrotsearch.randomizedtesting#junit4-ant;2.5.0 {}
> > [ivy:cachepath] don't use cache for
> > com.carrotsearch.randomizedtesting#junit4-ant;2.5.0: checkModified=true
> > [ivy:cachepath] tried
> > /Users/niqbal/.ivy2/local/com.carrotsearch.randomizedtesting/junit4-ant/
> 2.5.0/ivys/ivy.xml
> > [ivy:cachepath] tried
> > /Users/niqbal/.ivy2/local/com.carrotsearch.randomizedtesting/junit4-ant/
> 2.5.0/jars/junit4-ant.jar
> > [ivy:cachepath] local: no ivy file nor artifact found for
> > com.carrotsearch.randomizedtesting#junit4-ant;2.5.0
> > [ivy:cachepath] main: Checking cache for: dependency:
> > com.carrotsearch.randomizedtesting#junit4-ant;2.5.0 {}
> > [ivy:cachepath] main: module revision found in cache:
> > com.carrotsearch.randomizedtesting#junit4-ant;2.5.0
> > [ivy:cachepath] :: resolving dependencies ::
> > com.carrotsearch.randomizedtesting#junit4-ant-caller;working
> > [ivy:cachepath] confs: [default, master, compile, provided, runtime,
> > system, sources, javadoc, optional]
> > [ivy:cachepath] validate = true
> > [ivy:cachepath] refresh = false
> > [ivy:cachepath] resolving dependencies for configuration 'default'
> > [ivy:cachepath] == resolving dependencies for
> > com.carrotsearch.randomizedtesting#junit4-ant-caller;working [default]
> > [ivy:cachepath] == resolving dependencies
> > com.carrotsearch.randomizedtesting#junit4-ant-caller;working->com.
> carrotsearch.randomizedtesting#junit4-ant;2.5.0
> > [default->default]
> > [ivy:cachepath] default: Checking cache for: dependency:
> > com.carrotsearch.randomizedtesting#junit4-ant;2.5.0 {default=[default],
> > master=[master], compile=[compile], provided=[provided],
> runtime=[runtime],
> > system=[system], sources=[sources], javadoc=[javadoc],
> optional=[optional]}
> > [ivy:cachepath] don't use cache for
> > com.carrotsearch.randomizedtesting#junit4-ant;2.5.0: checkModified=true
> > [ivy:cachepath] tried
> > /Users/niqbal/.ivy2/local/com.carrotsearch.randomizedtesting/junit4-ant/
> 2.5.0/ivys/ivy.xml
> > [ivy:cachepath] tried
> > /Users/niqbal/.ivy2/local/com.carrotsearch.randomizedtesting/junit4-ant/
> 2.5.0/jars/junit4-ant.jar
> > [ivy:cachepath] local: no ivy file nor artifact found for
> > com.carrotsearch.randomizedtesting#junit4-ant;2.5.0
> > [ivy:cachepath] main: Checking cache for: dependency:
> > com.carrotsearch.randomizedtesting#junit4-ant;2.5.0 {default=[default],
> > master=[master], compile=[compile], provided=[provided],
> runtime=[runtime],
> > system=[system], sources=[sources], javadoc=[javadoc],
> optional=[optional]}
> > [ivy:cachepath] main: module revision found in cache:
> > com.carrotsearch.randomizedtesting#junit4-ant;2.5.0
> > [ivy:cachepath] found
> > com.carrotsearch.randomizedtesting#junit4-ant;2.5.0 in public
> > [ivy:cachepath] == resolving dependencies
&

'ant test' gets stuck after aborting one run

2017-07-19 Thread Nawab Zada Asad Iqbal
Hi


I stopped 'ant test' target before it finished, and now whenever I run it
again, it is stuck at 'install-junit4-taskdef'.

I have tried 'ant clean' but it didn't help. I guessed that it could be
some locking thing in ivy or ant so I set ivy.sync to false in the
common-build.xml

 ""

I also deleted the .cache folder.

But that didn't help either.

What should I do?

When run with '-v', the execution halts at following logs:-

...
install-junit4-taskdef:
Overriding previous definition of property "ivy.version"
[ivy:cachepath] using inline mode to resolve
com.carrotsearch.randomizedtesting junit4-ant 2.5.0 (*(public))
[ivy:cachepath] no resolved descriptor found: launching default resolve
Overriding previous definition of property "ivy.version"
[ivy:cachepath] default: Checking cache for: dependency:
com.carrotsearch.randomizedtesting#junit4-ant;2.5.0 {}
[ivy:cachepath] don't use cache for
com.carrotsearch.randomizedtesting#junit4-ant;2.5.0: checkModified=true
[ivy:cachepath] tried
/Users/niqbal/.ivy2/local/com.carrotsearch.randomizedtesting/junit4-ant/2.5.0/ivys/ivy.xml
[ivy:cachepath] tried
/Users/niqbal/.ivy2/local/com.carrotsearch.randomizedtesting/junit4-ant/2.5.0/jars/junit4-ant.jar
[ivy:cachepath] local: no ivy file nor artifact found for
com.carrotsearch.randomizedtesting#junit4-ant;2.5.0
[ivy:cachepath] main: Checking cache for: dependency:
com.carrotsearch.randomizedtesting#junit4-ant;2.5.0 {}
[ivy:cachepath] main: module revision found in cache:
com.carrotsearch.randomizedtesting#junit4-ant;2.5.0
[ivy:cachepath] :: resolving dependencies ::
com.carrotsearch.randomizedtesting#junit4-ant-caller;working
[ivy:cachepath] confs: [default, master, compile, provided, runtime,
system, sources, javadoc, optional]
[ivy:cachepath] validate = true
[ivy:cachepath] refresh = false
[ivy:cachepath] resolving dependencies for configuration 'default'
[ivy:cachepath] == resolving dependencies for
com.carrotsearch.randomizedtesting#junit4-ant-caller;working [default]
[ivy:cachepath] == resolving dependencies
com.carrotsearch.randomizedtesting#junit4-ant-caller;working->com.carrotsearch.randomizedtesting#junit4-ant;2.5.0
[default->default]
[ivy:cachepath] default: Checking cache for: dependency:
com.carrotsearch.randomizedtesting#junit4-ant;2.5.0 {default=[default],
master=[master], compile=[compile], provided=[provided], runtime=[runtime],
system=[system], sources=[sources], javadoc=[javadoc], optional=[optional]}
[ivy:cachepath] don't use cache for
com.carrotsearch.randomizedtesting#junit4-ant;2.5.0: checkModified=true
[ivy:cachepath] tried
/Users/niqbal/.ivy2/local/com.carrotsearch.randomizedtesting/junit4-ant/2.5.0/ivys/ivy.xml
[ivy:cachepath] tried
/Users/niqbal/.ivy2/local/com.carrotsearch.randomizedtesting/junit4-ant/2.5.0/jars/junit4-ant.jar
[ivy:cachepath] local: no ivy file nor artifact found for
com.carrotsearch.randomizedtesting#junit4-ant;2.5.0
[ivy:cachepath] main: Checking cache for: dependency:
com.carrotsearch.randomizedtesting#junit4-ant;2.5.0 {default=[default],
master=[master], compile=[compile], provided=[provided], runtime=[runtime],
system=[system], sources=[sources], javadoc=[javadoc], optional=[optional]}
[ivy:cachepath] main: module revision found in cache:
com.carrotsearch.randomizedtesting#junit4-ant;2.5.0
[ivy:cachepath] found
com.carrotsearch.randomizedtesting#junit4-ant;2.5.0 in public
[ivy:cachepath] == resolving dependencies
com.carrotsearch.randomizedtesting#junit4-ant-caller;working->com.carrotsearch.randomizedtesting#junit4-ant;2.5.0
[default->runtime]
[ivy:cachepath] == resolving dependencies
com.carrotsearch.randomizedtesting#junit4-ant-caller;working->com.carrotsearch.randomizedtesting#junit4-ant;2.5.0
[default->compile]
[ivy:cachepath] == resolving dependencies
com.carrotsearch.randomizedtesting#junit4-ant-caller;working->com.carrotsearch.randomizedtesting#junit4-ant;2.5.0
[default->master]
[ivy:cachepath] resolving dependencies for configuration 'master'
[ivy:cachepath] == resolving dependencies for
com.carrotsearch.randomizedtesting#junit4-ant-caller;working [master]
[ivy:cachepath] == resolving dependencies
com.carrotsearch.randomizedtesting#junit4-ant-caller;working->com.carrotsearch.randomizedtesting#junit4-ant;2.5.0
[master->master]
[ivy:cachepath] resolving dependencies for configuration 'compile'
[ivy:cachepath] == resolving dependencies for
com.carrotsearch.randomizedtesting#junit4-ant-caller;working [compile]
[ivy:cachepath] == resolving dependencies
com.carrotsearch.randomizedtesting#junit4-ant-caller;working->com.carrotsearch.randomizedtesting#junit4-ant;2.5.0
[compile->compile]
[ivy:cachepath] resolving dependencies for configuration 'provided'
[ivy:cachepath] == resolving dependencies for
com.carrotsearch.randomizedtesting#junit4-ant-caller;working [provided]
[ivy:cachepath] == resolving dependencies
com.carrotsearch.randomizedtesting#junit4-ant-caller;working->com.carrotsearch.randomizedtesting#junit4-ant

Run solr 6.5+ as daemon

2017-07-14 Thread Nawab Zada Asad Iqbal
Hi,

I want my solr to restart if the process crashes; I am wondering if there
is any drawback which I should consider?
I am considering using 'daemon --respawn' in bin/solr, where the OOTB
script has the following statement:

nohup "$JAVA" "${SOLR_START_OPTS[@]}" $SOLR_ADDL_ARGS
> -Dsolr.log.muteconsole \
> "-XX:OnOutOfMemoryError=$SOLR_TIP/bin/oom_solr.sh $SOLR_PORT
> $SOLR_LOGS_DIR" \
> -jar start.jar "${SOLR_JETTY_CONFIG[@]}" \
> 1>"$SOLR_LOGS_DIR/solr-$SOLR_PORT-console.log" 2>&1 & echo $! >
> "$SOLR_PID_DIR/solr-$SOLR_PORT.pid"
>


Basically something like:

daemon --respawn --name mysolr --pidfiles="$SOLR_PID_DIR/solr-$SOLR_PORT.pid" \
    --user solr --chdir /home/nawab/solr/bin \
    --stdout $LOGDIR/$SOLR_SHARD.log --stderr $LOGDIR/$SOLR_SHARD.log \
    --command $JAVA -- "${SOLR_START_OPTS[@]}" $SOLR_ADDL_ARGS -Dsolr.log.muteconsole \
    "-XX:OnOutOfMemoryError=$SOLR_TIP/bin/oom_solr.sh $SOLR_PORT $SOLR_LOGS_DIR" \
    -jar start.jar "${SOLR_JETTY_CONFIG[@]}"


Regards
Nawab


Re: Enabling SSL

2017-07-12 Thread Nawab Zada Asad Iqbal
I guess your certificates are self-generated? In that case, this is a
browser nanny trying to protect you.
I also get the same error in Firefox; Chrome, however, was a little more
forgiving. It showed me an option to choose my certificate (the client
certificate), and then bypassed the safety barrier.
I should add that even Chrome didn't show me that 'select certificate'
option on the first attempt, so I don't know what caused it to trigger.

Here is a relevant thread about Firefox:
https://bugzilla.mozilla.org/show_bug.cgi?id=1255049


Let me know how it worked for you, as I am still learning this myself.


Regards
Nawab



On Wed, Jul 12, 2017 at 9:05 AM, Miller, William K - Norman, OK -
Contractor  wrote:

> I am not using Zookeeper.  Is the urlScheme also used outside of Zookeeper?
>
>
>
>
> ~~~
> William Kevin Miller
>
> ECS Federal, Inc.
> USPS/MTSC
> (405) 573-2158
>
>
> -Original Message-
> From: esther.quan...@lucidworks.com [mailto:esther.quan...@lucidworks.com]
> Sent: Wednesday, July 12, 2017 10:58 AM
> To: solr-user@lucene.apache.org
> Subject: Re: Enabling SSL
>
> Hi William,
>
> You should be able to navigate to https://localhost:8983/solr (albeit
> with your host:port) to access the admin UI, provided you updated the
> urlScheme property in the Zookeeper cluster props.
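>
> For reference, the cluster props command for that step from the SSL docs
> looks like this (default paths and ports; adjust for your install):
>
>   server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 \
>     -cmd clusterprop -name urlScheme -val https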
>
> Did you complete that step?
>
> Esther
> Search Engineer
> Lucidworks
>
>
>
> > On Jul 12, 2017, at 08:20, Miller, William K - Norman, OK - Contractor <
> william.k.mil...@usps.gov.INVALID> wrote:
> >
> > I am trying to enable SSL and I have followed the instructions in the
> Solr 6.4 reference manual, but when I restart my Solr server and try to
> access the Solr Admin page I am getting:
> >
> > “This page isn’t working”;
> >  sent an invalid response; ERR_INVALID_HTTP_RESPONSE
> >
> > Does the Solr server need to be on a secure server in order to enable
> SSL.
> >
> >
> > Additional Info:
> > Running Solr 6.5.1 on Linux OS
> >
> >
> >
> >
> > ~~~
> > William Kevin Miller
> >
> > ECS Federal, Inc.
> > USPS/MTSC
> > (405) 573-2158
> >
>


Re: Using HTTP and HTTPS at the same time

2017-07-12 Thread Nawab Zada Asad Iqbal
Thanks Rick

I am wondering: what would be wrong with passing both an http and an https
port to the underlying Jetty server? Wouldn't that be enough to have both
http and https access to Solr?

Regards
Nawab

On Wed, Jul 12, 2017 at 3:39 AM Rick Leir  wrote:

> Hi all,
> The recommended best practice is to run a web app in front of Solr, and
> maybe there is no benefit in SSL between the web app and Solr. In any case,
> if SSL is desired, you would configure the web app to always use HTTPS.
>
> Without the web app, you can have Apache promote a connection from http to
> https. (Is 'promote' the right term?) Cheers -- Rick
>
> On July 11, 2017 6:09:42 PM EDT, Nawab Zada Asad Iqbal 
> wrote:
> >Hi,
> >
> >I am reading a comment on
> >https://cwiki.apache.org/confluence/display/solr/Enabling+SSL which
> >says.
> >Just wanted to check if this is still the same with 6.5? This used to
> >work
> >in 4.5.
> >Shalin Shekhar Mangar
> ><https://cwiki.apache.org/confluence/display/%7Eshalinmangar>
> >
> >Solr does not support both HTTP and HTTPS at the same time. You can
> >only
> >use one of them at a time.
> >
> >
> >Thanks
> >
> >Nawab
>
> --
> Sorry for being brief. Alternate email is rickleir at yahoo dot com


Using HTTP and HTTPS at the same time

2017-07-11 Thread Nawab Zada Asad Iqbal
Hi,

I am reading a comment on
https://cwiki.apache.org/confluence/display/solr/Enabling+SSL which says.
Just wanted to check if this is still the same with 6.5? This used to work
in 4.5.
Shalin Shekhar Mangar


Solr does not support both HTTP and HTTPS at the same time. You can only
use one of them at a time.


Thanks

Nawab


Re: Solr starts without error but not working

2017-06-18 Thread Nawab Zada Asad Iqbal
Ah, I found that if I remove the keystore-related properties in solr.in.sh,
then I am able to access the server via the browser. I still need to find
and fix the issue with the keys; however, Solr should have shown some clear
error in the logs.
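For context, these are the kind of keystore properties in solr.in.sh that I
removed (names as in the stock 6.x solr.in.sh; paths and passwords here are
placeholders):

  SOLR_SSL_KEY_STORE=/path/to/solr-ssl.keystore.jks
  SOLR_SSL_KEY_STORE_PASSWORD=secret
  SOLR_SSL_TRUST_STORE=/path/to/solr-ssl.keystore.jks
  SOLR_SSL_TRUST_STORE_PASSWORD=secret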

Thanks for your response.



On Sun, Jun 18, 2017 at 3:20 AM, Rick Leir  wrote:

> firewall?
>
>
>
> On 2017-06-18 01:04 AM, Nawab Zada Asad Iqbal wrote:
>
>> Hi
>>
>> So I am deploying Solr 6.5.1 using puppet to another machine (which I can
>> ssh to). The logs have no errors, but the Solr home page shows nothing (no
>> response from the server). Using curl also showed an empty response.
>>
>> What could be wrong?
>> The server is writing logs, and I also found the core folder; so from the
>> directory structure and logs, everything seems fine.
>>
>>
>> Regards
>> Nawab
>>
>>
>


Solr starts without error but not working

2017-06-17 Thread Nawab Zada Asad Iqbal
Hi

So I am deploying Solr 6.5.1 using puppet to another machine (which I can
ssh to). The logs have no errors, but the Solr home page shows nothing (no
response from the server). Using curl also showed an empty response.

What could be wrong?
The server is writing logs, and I also found the core folder; so from the
directory structure and logs, everything seems fine.


Regards
Nawab


Re: Taking Solr 6.5.x to production.

2017-06-07 Thread Nawab Zada Asad Iqbal
Actually, I found the answer by opening the script file!

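For anyone else landing here: the script is bin/install_solr_service.sh, and
its steps boil down to roughly the following (a paraphrased outline, not the
exact script):

  tar xzf solr-6.5.1.tgz -C /opt              # unpack the distribution
  ln -s /opt/solr-6.5.1 /opt/solr             # stable path across upgrades
  mkdir -p /var/solr/data /var/solr/logs      # writable home for data and logs
  cp /opt/solr/bin/solr.in.sh /etc/default/solr.in.sh
  cp /opt/solr/bin/init.d/solr /etc/init.d/solr
  chown -R solr:solr /var/solr
  service solr start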


On Mon, Jun 5, 2017 at 3:59 PM, Nawab Zada Asad Iqbal 
wrote:

> Hi solr community
>
> What are the steps for taking Solr to production if the Solr installation
> script does not support my environment? Is there a list of all the steps
> done by the installation script so that I can do them manually?
>
> I am upgrading from 4.5.0; today we compile our custom Solr code (together
> with Solr 4.5.0) into a war and then deploy it to our Solr servers using
> puppet. I already have a 'solr' user which owns the data and logs on those
> machines.
>
> Thanks
> Nawab
>


Taking Solr 6.5.x to production.

2017-06-05 Thread Nawab Zada Asad Iqbal
Hi solr community

What are the steps for taking Solr to production if the Solr installation
script does not support my environment? Is there a list of all the steps
done by the installation script so that I can do them manually?

I am upgrading from 4.5.0; today we compile our custom Solr code (together
with Solr 4.5.0) into a war and then deploy it to our Solr servers using
puppet. I already have a 'solr' user which owns the data and logs on those
machines.

Thanks
Nawab


Re: Steps for building solr/lucene code and starting server

2017-06-05 Thread Nawab Zada Asad Iqbal
Thanks Erick

I ended up including my files in solr/webapp/build.xml under the "dist"
target (which is also called by the "server" target in the solr folder). I
used a fileset instead of a lib entry; not sure about the pros and cons. Now
I am able to run my server using 'bin/solr' with my custom config files.
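Concretely, the change amounted to a fileset along these lines inside the
war-building target (names from memory; they may not match the build file
exactly):

  <fileset dir="${common-solr.dir}/build/contrib/solr-analysis-extras/lucene-libs"
           includes="*.jar"/>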



On Fri, Jun 2, 2017 at 8:08 PM, Erick Erickson 
wrote:

> You can just put a <lib> directive in your solrconfig.xml file
> that points to the jar in analysis-extras.
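>
> Something like this, for instance (the stock sample solrconfig.xml ships
> with similar commented-out lines; adjust the relative path to your layout):
>
>   <lib dir="${solr.install.dir:../../../..}/contrib/analysis-extras/lib" regex=".*\.jar" />
>   <lib dir="${solr.install.dir:../../../..}/contrib/analysis-extras/lucene-libs" regex=".*\.jar" />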
>
> I generally prefer that to copying things around on the theory that
> it's one less thing to forget to copy sometime later...
>
> Best,
> Erick
>
> On Fri, Jun 2, 2017 at 5:05 PM, Nawab Zada Asad Iqbal 
> wrote:
> > When I do 'ant server', the libs from "./build/lucene-libs/" are copied
> > over to "./server/solr-webapp/webapp/WEB-INF/lib/" . However, my
> required
> > class is in a lib which is on:
> > "./build/contrib/solr-analysis-extras/lucene-libs/"
> >
> > I guess I need to run the contrib target?
> >
> >
> > On Fri, Jun 2, 2017 at 4:20 PM, Nawab Zada Asad Iqbal 
> > wrote:
> >
> >> Hi Erick
> >>
> >> "bin/solr start -e techproducts" works fine. It is probably because it
> is
> >> not referring to 'org.apache.lucene.analysis.ic
> >> u.ICUNormalizer2CharFilterFactory' in the schema.xml ?
> >>
> >> I am not sure what should I try. I am wondering if there is some
> document
> >> about solr dev setup.
> >>
> >>
> >> On Fri, Jun 2, 2017 at 8:29 AM, Erick Erickson  >
> >> wrote:
> >>
> >>> "ant server" should be sufficient. "dist" is useful for when
> >>> you have custom _external_ programs (say SolrJ) that you
> >>> want all the libraries collected in the same place. There's
> >>> no need to "ant compile" as the "server" target
> >>>
> >>> I assume what you're seeing is a ClassNotFound error, right?
> >>> I'm a bit puzzled since that filter isn't a contrib, so it should
> >>> be found.
> >>>
> >>> What I'd do is just do the build first then start the example,
> >>> "bin/solr start -e techproducts"
> >>> Don't specify solrhome or anything else. Once that works,
> >>> build up from there.
> >>>
> >>> Best,
> >>> Erick
> >>>
> >>> On Fri, Jun 2, 2017 at 3:15 AM, Nawab Zada Asad Iqbal <
> khi...@gmail.com>
> >>> wrote:
> >>> > Hi,
> >>> >
> >>> > I have synced lucene-solr repo because I (will) have some custom
> code in
> >>> > lucene and solr folders. What are the steps for starting solr
> server? My
> >>> > schema.xml uses ICUNormalizer2CharFilterFactory (which I see in
> lucene
> >>> > folder tree), but I don't know how to make it work with solr webapp.
> I
> >>> know
> >>> > the (lucene ant
> >>> > target) 'compile',  (solr targets) 'dist', and 'server', but the
> order
> >>> is
> >>> > not clear to me.
> >>> >
> >>> > I have compiled lucene before doing 'ant server' in solr folder, but
> I
> >>> > still see this error when I do 'bin/solr start -f -s ~/solrhome/' :-
> >>> >
> >>> > Caused by: org.apache.solr.common.SolrException: Plugin init failure
> >>> for
> >>> > [schema.xml] fieldType "text": Plugin init failure for [schema.xml]
> >>> > analyzer/charFilter "nfkc": Error loading class
> >>> > 'org.apache.lucene.analysis.icu.ICUNormalizer2CharFilterFactory'
> >>> >
> >>> >
> >>> >
> >>> > Thanks
> >>> > Nawab
> >>>
> >>
> >>
>


Re: Upgrading config from 4.5.0 to 6.5.1

2017-06-05 Thread Nawab Zada Asad Iqbal
Thanks Tony, but I think Rick's answer is different. He asked me to start
with "new version of solrconfig.xml".

@Rick

I have been doing what Tony said (copying my config to a new core and fixing
each error or warning). Is there some mega solrconfig.xml which you
recommended me to start from?


Nawab


On Fri, Jun 2, 2017 at 12:15 PM, Tony Wang  wrote:

> Hi Nawab,
> We did it exactly the same way Rick recommended. When you apply your
> changes from your old configs on top of the originals, it will give you
> errors for the incompatible settings. For example, in the
> "text_general_edge_ngram" fieldType setting, side="front" is no longer a
> valid attribute.
>
> Tony
>
>
> On Wed, May 31, 2017 at 3:53 PM, Rick Leir  wrote:
>
> > Hi Nawab
> > The recommended way is to use the new version of solrconfig.xml and apply
> > your modifications to it. You will want to go through it looking for
> > developments that would affect you.
> > Cheers
> > Rick
> >
> > On May 31, 2017 3:45:58 PM EDT, Nawab Zada Asad Iqbal 
> > wrote:
> > >Hi,
> > >
> > >I am upgrading 4.5.0 to latest stable bits and wondering what will be
> > >the
> > >quickest way to find out any obsolete or deprecated settings in config
> > >files?
> > >If I run the latest server with my old config (solr.xml,
> > >solrconfig.xml,
> > >schema.xml) files, will it warn for deprecated/less-optimal values?
> > >
> > >
> > >Thanks
> > >Nawab
> >
> > --
> > Sorry for being brief. Alternate email is rickleir at yahoo dot com
>


Field x is not multivalued and destination for multiple copyFields

2017-06-05 Thread Nawab Zada Asad Iqbal
Hi,

I have a field 'name_token' which gets its value via copyFields from several
language-specific fields (e.g. name_en, name_it, name_es, etc.). If I can
ensure that only one of these language-specific fields has a value for any
given document, is it OK to ignore this warning:

"IndexSchema Field name_token is not multivalued and destination for
multiple copyFields"

Also, what will happen if a record does have values in two language-specific
name fields (e.g. if a name or word exists in two languages: name_zh and
name_ja)? My understanding is that the value is the same anyway, so there is
no drawback, but can it result in an exception?
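For concreteness, the relevant schema.xml pieces look roughly like this
(field type and names simplified):

  <field name="name_token" type="text_general" indexed="true" stored="false"/>

  <copyField source="name_en" dest="name_token"/>
  <copyField source="name_it" dest="name_token"/>
  <copyField source="name_es" dest="name_token"/>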

Regards

Nawab


Re: Steps for building solr/lucene code and starting server

2017-06-02 Thread Nawab Zada Asad Iqbal
When I do 'ant server', the libs from "./build/lucene-libs/" are copied
over to "./server/solr-webapp/webapp/WEB-INF/lib/" . However, my required
class is in a lib which is on:
"./build/contrib/solr-analysis-extras/lucene-libs/"

I guess I need to run the contrib target?


On Fri, Jun 2, 2017 at 4:20 PM, Nawab Zada Asad Iqbal 
wrote:

> Hi Erick
>
> "bin/solr start -e techproducts" works fine. It is probably because it is
> not referring to 'org.apache.lucene.analysis.icu.ICUNormalizer2CharFilterFactory'
> in the schema.xml?
>
> I am not sure what should I try. I am wondering if there is some document
> about solr dev setup.
>
>
> On Fri, Jun 2, 2017 at 8:29 AM, Erick Erickson 
> wrote:
>
>> "ant server" should be sufficient. "dist" is useful for when
>> you have custom _external_ programs (say SolrJ) that you
>> want all the libraries collected in the same place. There's
>> no need to "ant compile" as the "server" target
>>
>> I assume what you're seeing is a ClassNotFound error, right?
>> I'm a bit puzzled since that filter isn't a contrib, so it should
>> be found.
>>
>> What I'd do is just do the build first then start the example,
>> "bin/solr start -e techproducts"
>> Don't specify solrhome or anything else. Once that works,
>> build up from there.
>>
>> Best,
>> Erick
>>
>> On Fri, Jun 2, 2017 at 3:15 AM, Nawab Zada Asad Iqbal 
>> wrote:
>> > Hi,
>> >
>> > I have synced lucene-solr repo because I (will) have some custom code in
>> > lucene and solr folders. What are the steps for starting solr server? My
>> > schema.xml uses ICUNormalizer2CharFilterFactory (which I see in lucene
>> > folder tree), but I don't know how to make it work with solr webapp. I
>> know
>> > the (lucene ant
>> > target) 'compile',  (solr targets) 'dist', and 'server', but the order
>> is
>> > not clear to me.
>> >
>> > I have compiled lucene before doing 'ant server' in solr folder, but I
>> > still see this error when I do 'bin/solr start -f -s ~/solrhome/' :-
>> >
>> > Caused by: org.apache.solr.common.SolrException: Plugin init failure
>> for
>> > [schema.xml] fieldType "text": Plugin init failure for [schema.xml]
>> > analyzer/charFilter "nfkc": Error loading class
>> > 'org.apache.lucene.analysis.icu.ICUNormalizer2CharFilterFactory'
>> >
>> >
>> >
>> > Thanks
>> > Nawab
>>
>
>


Re: Steps for building solr/lucene code and starting server

2017-06-02 Thread Nawab Zada Asad Iqbal
Hi Erick

"bin/solr start -e techproducts" works fine. It is probably because it is
not referring to 'org.apache.lucene.analysis.icu.ICUNormalizer2CharFilterFactory'
in the schema.xml?

I am not sure what should I try. I am wondering if there is some document
about solr dev setup.


On Fri, Jun 2, 2017 at 8:29 AM, Erick Erickson 
wrote:

> "ant server" should be sufficient. "dist" is useful for when
> you have custom _external_ programs (say SolrJ) that you
> want all the libraries collected in the same place. There's
> no need to "ant compile" as the "server" target
>
> I assume what you're seeing is a ClassNotFound error, right?
> I'm a bit puzzled since that filter isn't a contrib, so it should
> be found.
>
> What I'd do is just do the build first then start the example,
> "bin/solr start -e techproducts"
> Don't specify solrhome or anything else. Once that works,
> build up from there.
>
> Best,
> Erick
>
> On Fri, Jun 2, 2017 at 3:15 AM, Nawab Zada Asad Iqbal 
> wrote:
> > Hi,
> >
> > I have synced lucene-solr repo because I (will) have some custom code in
> > lucene and solr folders. What are the steps for starting solr server? My
> > schema.xml uses ICUNormalizer2CharFilterFactory (which I see in lucene
> > folder tree), but I don't know how to make it work with solr webapp. I
> know
> > the (luncene ant
> > target) 'compile',  (solr targets) 'dist', and 'server', but the order is
> > not clear to me.
> >
> > I have compiled lucene before doing 'ant server' in solr folder, but I
> > still see this error when I do 'bin/solr start -f -s ~/solrhome/' :-
> >
> > Caused by: org.apache.solr.common.SolrException: Plugin init failure for
> > [schema.xml] fieldType "text": Plugin init failure for [schema.xml]
> > analyzer/charFilter "nfkc": Error loading class
> > 'org.apache.lucene.analysis.icu.ICUNormalizer2CharFilterFactory'
> >
> >
> >
> > Thanks
> > Nawab
>


Steps for building solr/lucene code and starting server

2017-06-02 Thread Nawab Zada Asad Iqbal
Hi,

I have synced lucene-solr repo because I (will) have some custom code in
lucene and solr folders. What are the steps for starting solr server? My
schema.xml uses ICUNormalizer2CharFilterFactory (which I see in lucene
folder tree), but I don't know how to make it work with solr webapp. I know
the (lucene ant
target) 'compile',  (solr targets) 'dist', and 'server', but the order is
not clear to me.

I have compiled lucene before doing 'ant server' in solr folder, but I
still see this error when I do 'bin/solr start -f -s ~/solrhome/' :-

Caused by: org.apache.solr.common.SolrException: Plugin init failure for
[schema.xml] fieldType "text": Plugin init failure for [schema.xml]
analyzer/charFilter "nfkc": Error loading class
'org.apache.lucene.analysis.icu.ICUNormalizer2CharFilterFactory'



Thanks
Nawab


Upgrading config from 4.5.0 to 6.5.1

2017-05-31 Thread Nawab Zada Asad Iqbal
Hi,

I am upgrading 4.5.0 to latest stable bits and wondering what will be the
quickest way to find out any obsolete or deprecated settings in config
files?
If I run the latest server with my old config (solr.xml, solrconfig.xml,
schema.xml) files, will it warn for deprecated/less-optimal values?


Thanks
Nawab


Re: TLog for non-Solrcloud scenario

2017-05-29 Thread Nawab Zada Asad Iqbal
Thanks Erick, that summary is very helpful.


Nawab


On Mon, May 29, 2017 at 1:39 PM, Erick Erickson 
wrote:

> Yeah, it's a bit confusing. I made Yonik and Mark take me through the
> process in detail in order to write that blog, misunderstandings my
> fault of course ;)
>
> bq: This makes me think that at the time of soft-commit,
> the documents in preceding update requests are already flushed (might not
> be on the disk yet, but JVM has handed over the responsibility to Operating
> system)
>
> True. Soft commits aren't about the tlog at all, just making docs that
> are already indexed visible to  searchers. Soft commits don't have any
> effect on the segment files either.
>
> Back to your original question:
>
> bq: Does it mean that flush protects against JVM crash but not power
> failure?
> While fsync will protect against both scenarios.
>
> In a word, "yes". In practice, the only time people will do an fsync
> (which you can specify when you commit) is in situations where they
> need to guard against the remote possibility that the bits would be
> lost if the power went out during that very short interval. And you
> have a one-replica system (assuming SolrCloud). And you don't have a
> tlog (see below).
>
> bq:  If the JVM crashes or there is a loss of power, changes that
> occurred after the last *hard
> commit* will be lost."
>
> OK, there's a distinction between whether the tlog enabled or not.
> There's nothing at all that _requires_ the tlog. So you have two
> scenarios:
>
> 1> tlog not enabled. In this scenario the above is completely true.
> Unless and until the hard commit is performed, documents sent to the
> index are lost if there's a power outage or you kill Solr harshly. A
> hard commit will close all open segments so the state of the index is
> consistent. When Solr starts up it only "knows" about segments that
> were closed by a hard commit.
>
> 2> tlog enabled. In this scenario, any docs written to the tlog (and
> the flush/fsync discussion pertains here) then, upon restart, the Solr
> node will replay docs between the last hard commit from the tlog and
> no data successfully written to the tlog will be lost. Note that Solr
> doesn't "know" about the unclosed segments in this case either. But
> you don't care since any docs in those segments are re-indexed from
> the tlog.
>
> One implication here is that if you do _not_ hard commit, your tlogs
> will grow without limit. Which is one of the reasons you can specify
> openSearcher=false for hard commits, so you can commit frequently,
> preserving your index without having to replay and without worrying
> about the expense of opening new searchers.
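>
> In solrconfig.xml that typically looks something like the following (the
> intervals are illustrative, not recommendations):
>
>     <autoCommit>
>       <maxTime>60000</maxTime>
>       <openSearcher>false</openSearcher>
>     </autoCommit>
>
>     <autoSoftCommit>
>       <maxTime>5000</maxTime>
>     </autoSoftCommit>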
>
> Best,
> Erick
>
> On Mon, May 29, 2017 at 12:47 PM, Nawab Zada Asad Iqbal
>  wrote:
> > Thanks Erick,
> >
> > I have read different documents in this area and I am getting confused
> due
> > to overloaded/"reused" terms.
> >
> > E.g., in that lucidworks page, the flow for an indexing request is
> > explained as follows. This makes me think that at the time of
> soft-commit,
> > the documents in preceding update requests are already flushed (might not
> > be on the disk yet, but JVM has handed over the responsibility to
> Operating
> > system). (even if we don't do it as part of soft-commit)
> >
> > "After all the leaders have responded, the originating node replies to
> the
> > client. At this point,
> >
> > *all documents have been flushed to the tlog for all the nodes in the
> > cluster!"*
> >
> > On Mon, May 29, 2017 at 7:57 AM, Erick Erickson  >
> > wrote:
> >
> >> There's a long post here on this that might help:
> >>
> >> https://lucidworks.com/2013/08/23/understanding-
> >> transaction-logs-softcommit-and-commit-in-sorlcloud/
> >>
> >> Short form: soft commit doesn't flush tlogs, does not start a new
> >> tlog, does not close segments, does not open new segments.
> >>
> >> Hard commit does all of these things.
> >>
> >> Best,
> >> Erick
> >>
> >> On Sun, May 28, 2017 at 3:59 PM, Nawab Zada Asad Iqbal <
> khi...@gmail.com>
> >> wrote:
> >> > Hi,
> >> >
> >> > SolrCloud document <https://wiki.apache.org/solr/NewSolrCloudDesign>
> >> > mentions:
> >> >
> >> > "The sync can be tunable e.g. flush vs fsync by default can protect
> >> against
> >> > JVM crashes but not against power failure and can be much faster

Re: TLog for non-Solrcloud scenario

2017-05-29 Thread Nawab Zada Asad Iqbal
Thanks Erick,

I have read different documents in this area and I am getting confused due
to overloaded/"reused" terms.

E.g., in that lucidworks page, the flow for an indexing request is
explained as follows. This makes me think that at the time of soft-commit,
the documents in preceding update requests are already flushed (might not
be on the disk yet, but JVM has handed over the responsibility to Operating
system). (even if we don't do it as part of soft-commit)

"After all the leaders have responded, the originating node replies to the
client. At this point,

*all documents have been flushed to the tlog for all the nodes in the
cluster!"*

On Mon, May 29, 2017 at 7:57 AM, Erick Erickson 
wrote:

> There's a long post here on this that might help:
>
> https://lucidworks.com/2013/08/23/understanding-
> transaction-logs-softcommit-and-commit-in-sorlcloud/
>
> Short form: soft commit doesn't flush tlogs, does not start a new
> tlog, does not close segments, does not open new segments.
>
> Hard commit does all of these things.
>
> Best,
> Erick
>
> On Sun, May 28, 2017 at 3:59 PM, Nawab Zada Asad Iqbal 
> wrote:
> > Hi,
> >
> > SolrCloud document <https://wiki.apache.org/solr/NewSolrCloudDesign>
> > mentions:
> >
> > "The sync can be tunable e.g. flush vs fsync by default can protect
> against
> > JVM crashes but not against power failure and can be much faster "
> >
> > Does it mean that flush protects against JVM crash but not power failure?
> > While fsync will protect against both scenarios.
> >
> >
> > Also, this NRT help
> > <https://cwiki.apache.org/confluence/display/solr/Near+
> Real+Time+Searching>
> > explains soft commit as:
> > "A *soft commit* is much faster since it only makes index changes visible
> > and does not fsync index files or write a new index descriptor. If the
> JVM
> > crashes or there is a loss of power, changes that occurred after the
> last *hard
> > commit* will be lost."
> >
> > This is a little confusing, as a soft-commit will only happen after a tlog
> > entry is flushed, won't it? Or does the tlog work differently for SolrCloud
> > and non-SolrCloud configurations?
> >
> >
> > Thanks
> > Nawab
>


Re: StandardDirectoryReader.java:: applyAllDeletes, writeAllDeletes

2017-05-28 Thread Nawab Zada Asad Iqbal
After reading some more code, it seems that if we are sure there are no
deletes in this segment/index, then setting applyAllDeletes and
writeAllDeletes both to false will achieve something similar to what I was
getting in 4.5.0.

However, after reading the comment on IndexWriter's DirectoryReader
getReader(boolean applyAllDeletes, boolean writeAllDeletes), it seems that
this method is specific to NRT. Since we are not using soft commits, can
this change actually improve our performance during a full reindex?
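For reference, through the public Lucene 6.x API the call would look roughly
like this (a sketch, assuming an existing IndexWriter 'writer'; not our
actual code):

  // open an NRT reader without applying buffered deletes and without
  // writing the delete bitsets to disk (both flags false, as above)
  DirectoryReader reader = DirectoryReader.open(writer, false, false);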


Thanks
Nawab

On Sun, May 28, 2017 at 2:16 PM, Nawab Zada Asad Iqbal 
wrote:

> Thanks Michael and Shawn for the detailed response. I was later able to
> pull the full history using gitk; and found the commits behind this patch.
>
> Mike:
>
> > So, in Solr 4.5.0, some earlier developer added code and config to
> > set applyAllDeletes to false when we reindex all the data. At the moment,
> > I am not sure about the performance gain by this.
>
> 
>
>
> I am investigating the question, if this change is still needed in 6.5.1
> or can this be achieved by any other configuration?
>
> For now, we are not planning to use NRT and solrCloud.
>
>
> Thanks
> Nawab
>
> On Sun, May 28, 2017 at 9:26 AM, Michael McCandless <
> luc...@mikemccandless.com> wrote:
>
>> Sorry, yes, that commit was one of many on a feature branch I used to
>> work on LUCENE-5438, which added near-real-time index replication to
>> Lucene.  Before this change, Lucene's replication module required a commit
>> in order to replicate, which is a heavy operation.
>>
>> The writeAllDeletes boolean option asks Lucene to move all recent deletes
>> (tombstone bitsets) to disk while opening the NRT (near-real-time) reader.
>>
>> Normally Lucene won't always do that, and will instead carry the bitsets
>> in memory from writer to reader, for reduced refresh latency.
>>
>> What sort of custom changes do you have in this part of Lucene?
>>
>> Mike McCandless
>>
>> http://blog.mikemccandless.com
>>
>> On Sat, May 27, 2017 at 10:35 PM, Nawab Zada Asad Iqbal > > wrote:
>>
>>> Hi all
>>>
>>> I am looking at the following change in lucene-solr which doesn't mention any
>>> JIRA. How can I know more about it?
>>>
>>> "1ae7291 Mike McCandless on 1/24/16 at 3:17 PM current patch"
>>>
>>> Specifically, I am interested in what 'writeAllDeletes'  does in the
>>> following method. Let me know if it is a very stupid question and I should
>>> have done something else before emailing here.
>>>
>>> static DirectoryReader open(IndexWriter writer, SegmentInfos infos,
>>> boolean applyAllDeletes, boolean writeAllDeletes) throws IOException {
>>>
>>> Background: We are running solr4.5 and upgrading to 6.5.1. We have
>>> some custom code in this area, which we need to merge.
>>>
>>>
>>> Thanks
>>>
>>> Nawab
>>>
>>
>>
>


TLog for non-Solrcloud scenario

2017-05-28 Thread Nawab Zada Asad Iqbal
Hi,

SolrCloud document 
mentions:

"The sync can be tunable e.g. flush vs fsync by default can protect against
JVM crashes but not against power failure and can be much faster "

Does it mean that flush protects against JVM crash but not power failure?
While fsync will protect against both scenarios.


Also, this NRT help

explains soft commit as:
"A *soft commit* is much faster since it only makes index changes visible
and does not fsync index files or write a new index descriptor. If the JVM
crashes or there is a loss of power, changes that occurred after the last *hard
commit* will be lost."

This is a little confusing, as a soft-commit will only happen after a tlog
entry is flushed, won't it? Or does the tlog work differently for SolrCloud
and non-SolrCloud configurations?


Thanks
Nawab


Re: StandardDirectoryReader.java:: applyAllDeletes, writeAllDeletes

2017-05-28 Thread Nawab Zada Asad Iqbal
Thanks Michael and Shawn for the detailed response. I was later able to
pull the full history using gitk; and found the commits behind this patch.

Mike:

So, in Solr 4.5.0, some earlier developer added code and config to set
applyAllDeletes to false when we reindex all the data. At the moment, I am
not sure about the performance gain by this.




I am investigating the question, if this change is still needed in 6.5.1 or
can this be achieved by any other configuration?

For now, we are not planning to use NRT and solrCloud.


Thanks
Nawab

On Sun, May 28, 2017 at 9:26 AM, Michael McCandless <
luc...@mikemccandless.com> wrote:

> Sorry, yes, that commit was one of many on a feature branch I used to work
> on LUCENE-5438, which added near-real-time index replication to Lucene.
> Before this change, Lucene's replication module required a commit in order
> to replicate, which is a heavy operation.
>
> The writeAllDeletes boolean option asks Lucene to move all recent deletes
> (tombstone bitsets) to disk while opening the NRT (near-real-time) reader.
>
> Normally Lucene won't always do that, and will instead carry the bitsets
> in memory from writer to reader, for reduced refresh latency.
>
> What sort of custom changes do you have in this part of Lucene?
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
> On Sat, May 27, 2017 at 10:35 PM, Nawab Zada Asad Iqbal 
> wrote:
>
>> Hi all
>>
>> I am looking at the following change in lucene-solr which doesn't mention any
>> JIRA. How can I know more about it?
>>
>> "1ae7291 Mike McCandless on 1/24/16 at 3:17 PM current patch"
>>
>> Specifically, I am interested in what 'writeAllDeletes'  does in the
>> following method. Let me know if it is a very stupid question and I should
>> have done something else before emailing here.
>>
>> static DirectoryReader open(IndexWriter writer, SegmentInfos infos,
>> boolean applyAllDeletes, boolean writeAllDeletes) throws IOException {
>>
>> Background: We are running solr4.5 and upgrading to 6.5.1. We have
>> some custom code in this area, which we need to merge.
>>
>>
>> Thanks
>>
>> Nawab
>>
>
>


StandardDirectoryReader.java:: applyAllDeletes, writeAllDeletes

2017-05-27 Thread Nawab Zada Asad Iqbal
Hi all

I am looking at the following change in lucene-solr which doesn't mention any
JIRA. How can I know more about it?

"1ae7291 Mike McCandless on 1/24/16 at 3:17 PM current patch"

Specifically, I am interested in what 'writeAllDeletes'  does in the
following method. Let me know if it is a very stupid question and I should
have done something else before emailing here.

static DirectoryReader open(IndexWriter writer, SegmentInfos infos,
boolean applyAllDeletes, boolean writeAllDeletes) throws IOException {

Background: We are running solr4.5 and upgrading to 6.5.1. We have
some custom code in this area, which we need to merge.


Thanks

Nawab


Re: solr 6 at scale

2017-05-25 Thread Nawab Zada Asad Iqbal
Hi Toke,

I don't have a blog about it, but here is a high-level idea:

I have a 31-machine cluster with 3 shards on each (93 shards). Each machine
has ~250GB of RAM and a 3TB SSD for the search index (there is another drive
for the OS and other things). One Solr process runs for each shard, with a
48GB heap. So we have 3 large indexes on each SSD.

That is just one cluster; we have 5 such clusters which we can bring online
or take offline (for testing, maintenance, etc.). Usually 3 are active at
any time, taking 1/3 of user traffic each.
We don't rely on replication between these clusters; our out-of-Solr
processes send writes to all the replicas in parallel. We don't use
SolrCloud, although it was available in Solr 4.5 (which we are using).


Thanks
Nawab


On Wed, May 24, 2017 at 3:01 PM, Toke Eskildsen  wrote:

> Nawab Zada Asad Iqbal  wrote:
> > @Toke, I stumbled upon your page last week but it seems that your huge
> > index doesn't receive a lot of query traffic.
>
> It switches between two kinds of usage:
>
> Everyday use is very low traffic by researchers using it interactively:
> 1-2 simultaneous queries, with faceting ranging from somewhat heavy to very
> heavy. Our setup is optimized towards this scenario and latency starts to
> go up pretty quickly if the number of simultaneous request rises.
>
> Now and then some cultural probes are being performed, where the index is
> being hammered continuously by multiple threads. Here it is our experience
> that max throughput for extremely simple queries (existence checks for
> social security numbers) is around 50 queries/second.
>
> > Mine is around 60TB and receives around 120 queries per second; ~90
> shards on 30 machines.
>
> Sounds interesting. Do you have a more detailed write-up somewhere?
>
> - Toke
>


Re: solr 6 at scale

2017-05-24 Thread Nawab Zada Asad Iqbal
Thanks everyone for the responses, I will go with the latest bits for now;
and will share how it goes.

@Toke, I stumbled upon your page last week but it seems that your huge
index doesn't receive a lot of query traffic. Mine is around 60TB and
receives around 120 queries per second; ~90 shards on 30 machines.


I look forward to hearing more scale stories.
Nawab

On Wed, May 24, 2017 at 7:58 AM, Toke Eskildsen  wrote:

> Shawn Heisey  wrote:
> > On 5/24/2017 3:44 AM, Toke Eskildsen wrote:
> >> It is relatively easy to downgrade to an earlier release within the
> >> same major version. We have not switched to 6.5.1 simply because we
> >> have no pressing need for it - Solr 6.3 works well for us.
>
> > That strikes me as a little bit dangerous, unless your indexes are very
> > static.  The Lucene index format does occasionally change in minor
> > versions.
>
> Err.. Okay? Thank you for that. I was under the impression that the index
> format was fixed (modulo critical bugs) for major versions. This will
> change our approach to updating.
>
> Apologies for the confusion,
> Toke
>


solr 6 at scale

2017-05-23 Thread Nawab Zada Asad Iqbal
Hi all,

I am planning to upgrade my Solr 4.x installation to a recent stable
version. Should I get the latest 6.5.1 bits, or will a slightly older
release be better in terms of stability?
I am curious if there is a way to see Solr 6.x adoption in large companies.
I have talked to a few people, and they are also stuck on older major
versions.

Anyone using Solr 6.x with a multi-terabyte index: how did you decide which
version to upgrade to?


Regards
Nawab


<    1   2