FW: Graph traversal when nodes are indirectly connected with references

2021-03-03 Thread Sravani Kambhampati
I have a graph with disjoint sets of nodes connected indirectly with 
references, as shown below. Given an id, is it possible to get the leaf node 
when the depth is unknown?

[
{ id: A, child: { ref: B } },
{ id: B, child: { ref: C } },
{ id: C, child: { ref: D } },
.
.
{ id: Y, child: { ref: Z } }
]
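Client-side, one way to walk such a chain is to index the entries by id and follow the references until a node has no child. A minimal sketch, assuming the entries are loaded as a list of dicts shaped like the sample above:

```python
def find_leaf(nodes, start_id):
    """Follow child.ref links from start_id until a node has no child."""
    by_id = {n["id"]: n for n in nodes}
    current = by_id[start_id]
    seen = set()
    while "child" in current:
        ref = current["child"]["ref"]
        if ref in seen or ref not in by_id:
            break  # guard against cycles and dangling references
        seen.add(ref)
        current = by_id[ref]
    return current["id"]

nodes = [
    {"id": "A", "child": {"ref": "B"}},
    {"id": "B", "child": {"ref": "C"}},
    {"id": "C"},
]
print(find_leaf(nodes, "A"))  # -> C
```

If the documents live in Solr itself, the graph query parser (`{!graph from=... to=... maxDepth=...}`) or the streaming `nodes()` expression may let you do the traversal server-side without knowing the depth in advance.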
Thanks,
Sravani


FW: Vulnerabilities in SOLR 8.6.2

2020-12-11 Thread Narayanan, Lakshmi
Can anyone please advise?
Who else should be notified so we can get some guidance on this?

Lakshmi Narayanan
Marsh & McLennan Companies
121 River Street, Hoboken,NJ-07030
201-284-3345
M: 845-300-3809
Email: lakshmi.naraya...@mmc.com


From: Narayanan, Lakshmi 
Sent: Friday, November 13, 2020 11:21 AM
To: solr-user@lucene.apache.org
Subject: FW: Vulnerabilities in SOLR 8.6.2

This is my 5th attempt in the last 60 days
Is there anyone looking at these mails?
Does anyone care?? :(


Lakshmi Narayanan
Marsh & McLennan Companies
121 River Street, Hoboken,NJ-07030
201-284-3345
M: 845-300-3809
Email: lakshmi.naraya...@mmc.com


From: Narayanan, Lakshmi <lakshmi.naraya...@mmc.com>
Sent: Thursday, October 22, 2020 1:06 PM
To: solr-user@lucene.apache.org
Subject: FW: Vulnerabilities in SOLR 8.6.2

This is my 4th attempt to make contact.
Please advise if there is a build that fixes these vulnerabilities.

Lakshmi Narayanan
Marsh & McLennan Companies
121 River Street, Hoboken,NJ-07030
201-284-3345
M: 845-300-3809
Email: lakshmi.naraya...@mmc.com


From: Narayanan, Lakshmi <lakshmi.naraya...@mmc.com>
Sent: Sunday, October 18, 2020 4:01 PM
To: solr-user@lucene.apache.org
Subject: FW: Vulnerabilities in SOLR 8.6.2

SOLR-User Support team
Is there anyone who can answer my question, or who can point me to someone who can help?
I have not had any response for the past 3 weeks!
Please advise


Lakshmi Narayanan
Marsh & McLennan Companies
121 River Street, Hoboken,NJ-07030
201-284-3345
M: 845-300-3809
Email: lakshmi.naraya...@mmc.com


From: Narayanan, Lakshmi <lakshmi.naraya...@mmc.com>
Sent: Sunday, October 04, 2020 2:11 PM
To: solr-user@lucene.apache.org
Cc: Chattopadhyay, Salil <salil.chattopadh...@mmc.com>; Mutnuri, Vishnu D <vishnu.d.mutn...@mmc.com>; Pathak, Omkar <omkar.pat...@mmc.com>; Shenouda, Nasir B <nasir.b.sheno...@mmc.com>
Subject: RE: Vulnerabilities in SOLR 8.6.2

Hello Solr-User Support team
Please advise or provide further guidance on the request below

Thank you!

Lakshmi Narayanan
Marsh & McLennan Companies
121 River Street, Hoboken,NJ-07030
201-284-3345
M: 845-300-3809
Email: lakshmi.naraya...@mmc.com


From: Narayanan, Lakshmi <lakshmi.naraya...@mmc.com>
Sent: Monday, September 28, 2020 1:52 PM
To: solr-user@lucene.apache.org
Cc: Chattopadhyay, Salil <salil.chattopadh...@mmc.com>; Mutnuri, Vishnu D <vishnu.d.mutn...@mmc.com>; Pathak, Omkar <omkar.pat...@mmc.com>; Shenouda, Nasir B <nasir.b.sheno...@mmc.com>
Subject: Vulnerabilities in SOLR 8.6.2
Importance: High

Hello Solr-User Support team
We have installed the SOLR 8.6.2 package into a Docker container in our DEV 
environment. Prior to using it, our security team scanned the Docker image 
using SysDig and found many Critical/High/Medium vulnerabilities. The full 
list is in the attached spreadsheet.

Scan Summary
30 STOPS | 190 WARNS | 188 Vulnerabilities

Please advise or point us to how/where to get a package that has been patched 
for the Critical/High/Medium vulnerabilities in the attached spreadsheet
Your help will be gratefully received


Lakshmi Narayanan
Marsh & McLennan Companies
121 River Street, Hoboken,NJ-07030
201-284-3345
M: 845-300-3809
Email: lakshmi.naraya...@mmc.com






**
This e-mail, including any attachments that accompany it, may contain
information that is confidential or privileged. This e-mail is
intended solely for the use of the individual(s) to whom it was intended to be
addressed. If you have received this e-mail and are not an intended recipient,
any disclosure, distribution, copying or other use or
retention of this email or information contained within it are prohibited.
If you have received this email in error, please immediately
reply to the sender via e-mail and also permanently
delete all copies of the original message together with any of its attachments
from your computer or device.
**


SOLR862 Vulnerabilities.xlsx
Description: SOLR862 Vulnerabilities.xlsx


Re: FW: Vulnerabilities in SOLR 8.6.2

2020-11-13 Thread Kevin Risden
As far as I can tell, only your first and 5th emails went through. Either
way, Cassandra responded on 2020-09-29, about 15 hours after your first message:

http://mail-archives.apache.org/mod_mbox/lucene-solr-user/202009.mbox/%3Cbe447e96-60ed-4a40-88dd-9e0c28be6c71%40Spark%3E

Kevin Risden


On Fri, Nov 13, 2020 at 11:35 AM Narayanan, Lakshmi
 wrote:

> This is my 5th attempt in the last 60 days
>
> Is there anyone looking at these mails?
>
> Does anyone care?? :(
>


FW: Vulnerabilities in SOLR 8.6.2

2020-11-13 Thread Narayanan, Lakshmi
This is my 5th attempt in the last 60 days
Is there anyone looking at these mails?
Does anyone care?? :(


Lakshmi Narayanan
Marsh & McLennan Companies
121 River Street, Hoboken,NJ-07030
201-284-3345
M: 845-300-3809
Email: lakshmi.naraya...@mmc.com




Re: Fw: TolerantUpdateProcessorFactory not functioning

2020-06-10 Thread Hup Chen

There was another error, which I think should be an indexing error.
The listprice below is a pdouble field; the update processor didn't ignore the 
error when it was sent bad data.

Response: {
  "responseHeader":{
"status":400,
"QTime":133551},
  "error":{
"metadata":[
  "error-class","org.apache.solr.common.SolrException",
  "root-error-class","java.lang.NumberFormatException"],
"msg":"ERROR: [doc=978194537913] Error adding field 
'listprice'='106Chapter' msg=For input string: \"106Chapter\"",
"code":400}}
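One way to catch this class of bad value before the POST is a client-side check that pdouble fields coerce cleanly to float. A sketch; the helper and the field list are hypothetical, not a Solr feature:

```python
def bad_numeric_fields(doc, numeric_fields):
    """Return the names of fields whose values will not parse as a double."""
    bad = []
    for name in sorted(numeric_fields & doc.keys()):
        try:
            float(doc[name])
        except (TypeError, ValueError):
            bad.append(name)
    return bad

doc = {"id": "978194537913", "listprice": "106Chapter"}
print(bad_numeric_fields(doc, {"listprice"}))  # -> ['listprice']
```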




Re: Fw: TolerantUpdateProcessorFactory not functioning

2020-06-09 Thread Hup Chen
Oh, I got it; that's not an indexing error!
Seems like I need to remove all the characters in [\x00-\x1F] (except \x09 
TAB, \x0A LF, \x0D CR) first.

Thanks a lot!
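That cleanup can be a one-liner. A sketch that strips the XML-illegal C0 control characters (everything in \x00-\x1F except tab, LF, and CR), using the sample title from this thread:

```python
import re

# XML 1.0 forbids C0 control characters except \x09 (TAB), \x0A (LF), \x0D (CR).
ILLEGAL_XML_CHARS = re.compile(r"[\x00-\x08\x0B\x0C\x0E-\x1F]")

def strip_illegal_xml_chars(text):
    return ILLEGAL_XML_CHARS.sub("", text)

dirty = "Missing: Innocent By Association\x1aZachary's Law"
print(strip_illegal_xml_chars(dirty))  # the Ctrl-Z (\x1a) is removed
```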







Re: Fw: TolerantUpdateProcessorFactory not functioning

2020-06-09 Thread Shawn Heisey


I tried your example XML as it is shown in your original message, saved 
to a file named "foo.xml", and didn't have any trouble.  I wasn't even 
using the tolerant update processor.   I just fired up the techproducts 
example on a solr-8.3.0 download I already had, added a field named 
"isbn13" (string type) so the schema was compatible, and tried the 
following command:


curl "http://localhost:8983/solr/techproducts/update" -H 'Content-Type: 
text/xml; charset=utf-8' -d @foo.xml


I then tried it again with the ^Z (which is two characters) replaced by 
an actual Ctrl-Z character.  When I did that, I got exactly the same 
error you did.


A Ctrl-Z character (ascii code 26) is *NOT* a valid character for XML, 
which is why you're getting the error.


The tolerant update processor can't ignore errors in the actual format 
of the input ... it only ignores errors during *indexing*.  This error 
occurred during the input parsing, not during indexing, so the update 
processor could not ignore it.


Thanks,
Shawn


Re: Fw: TolerantUpdateProcessorFactory not functioning

2020-06-09 Thread Hup Chen
Thanks for your reply; this is one of the examples where it fails. POSTing with 
charset=utf-8 or another charset didn't help with the CTRL-CHAR "^" error found in 
the title field. I hope Solr can simply skip this record and go ahead and index 
the rest of the data.



 9780373773244
 9780373773244
Missing: Innocent By Association^Zachary's Law (Hqn 
Romance) 
 Lisa_Jackson 





curl 
"http://localhost:7070/solr/searchinfo/update?update.chain=tolerant-chain&maxErrors=100"
 -H 'Content-Type: text/xml; charset=utf-8' -d @data






  
  100
  400
  0


  
org.apache.solr.common.SolrException
com.ctc.wstx.exc.WstxUnexpectedCharException
  
  Illegal character ((CTRL-CHAR, code 26))
 at [row,col {unknown-source}]: [1,225]
  400






Re: Fw: TolerantUpdateProcessorFactory not functioning

2020-06-09 Thread Thomas Corthals
If your XML or JSON can't be parsed, your content never makes it to the
update chain.

It looks like you're trying to index non-UTF-8 data. You can set the
encoding of your XML in the Content-Type header of your POST request.

-H 'Content-Type: text/xml; charset=GB18030'

JSON only allows UTF-8, UTF-16 or UTF-32.

Best,

Thomas
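Since a payload that fails parsing never reaches the update chain, a pragmatic workaround is a client-side pre-flight check before POSTing. A sketch for JSON (not a Solr feature):

```python
import json

def is_valid_utf8_json(raw):
    """Return True if the bytes decode as UTF-8 and parse as JSON."""
    try:
        json.loads(raw.decode("utf-8"))
        return True
    except (UnicodeDecodeError, json.JSONDecodeError):
        return False

print(is_valid_utf8_json(b'[{"id": "1"}]'))  # True
print(is_valid_utf8_json(b'{"id" "1"}'))     # False: missing ':' separator
```

The second case mirrors the "Expected key,value separator ':'" error quoted below; a non-UTF-8 byte would fail the same check via UnicodeDecodeError.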



Fw: TolerantUpdateProcessorFactory not functioning

2020-06-08 Thread Hup Chen
Any idea?
I still can't get TolerantUpdateProcessorFactory working; Solr exits on any 
error without any tolerance. Any suggestions will be appreciated.
curl 
"http://localhost:7070/solr/mycore/update?update.chain=tolerant-chain&maxErrors=100"
 -d @data.xml





  
  100
  400
  1


  
org.apache.solr.common.SolrException
com.ctc.wstx.exc.WstxEOFException
  
  Unexpected EOF; was expecting a close tag for element 
field
 at [row,col {unknown-source}]: [1,8191]
  400





From: Hup Chen
Sent: Friday, May 29, 2020 7:29 PM
To: solr-user@lucene.apache.org 
Subject: TolerantUpdateProcessorFactory not functioning

Hi,

My Solr indexing does not tolerate a bad record but simply exits, even though I 
have configured TolerantUpdateProcessorFactory in solrconfig.xml.
Please advise how I can get TolerantUpdateProcessorFactory working.

solrconfig.xml:

 <updateRequestProcessorChain name="tolerant-chain">
   <processor class="solr.TolerantUpdateProcessorFactory">
     <int name="maxErrors">100</int>
   </processor>
   <processor class="solr.RunUpdateProcessorFactory"/>
 </updateRequestProcessorChain>

restarted solr before indexing:
service solr stop
service solr start

curl 
"http://localhost:7070/solr/mycore/update?update.chain=tolerant-chain&maxErrors=100"
 -d @test.json

The first record in test.json is a bad record; the rest were not indexed.

{
  "responseHeader":{
"errors":[{
"type":"ADD",
"id":"0007264097",
"message":"ERROR: [doc=0007264097] Error adding field 'usedshipping'='' 
msg=empty String"}],
"maxErrors":100,
"status":400,
"QTime":0},
  "error":{
"metadata":[
  "error-class","org.apache.solr.common.SolrException",
  "root-error-class","org.apache.solr.common.SolrException"],
"msg":"Cannot parse provided JSON: Expected key,value separator ':': 
char=\",position=1240 AFTER='isbn\":\"4032171203\", \"sku\":\"\", 
\"title\":\"ãã³ãã¡ã¡ããã³ã \"author\"' BEFORE=':\"Sachiko OÃtomo\", 
ãã, \"ima'",
"code":400}}



FW: velocity response writer javascript execution problem

2020-05-15 Thread Serkan KAZANCI
Hi,

Found the solution myself.

The cause of the problem is the SOLR-13982 issue. The new
Content-Security-Policy directives added to response headers through the
jetty.xml file prevent JavaScript from executing in the HTML pages generated
by the velocity response writer.

The Content-Security-Policy directives should be edited or deleted (after
evaluating the security concerns) in order to lift the restriction.

Serkan,
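When deciding which directives to relax, it helps to look at the Content-Security-Policy header Solr actually returns. A small helper to split one into its directives; a sketch, and the header value in the example is illustrative, not Solr's exact default:

```python
def parse_csp(header):
    """Split a Content-Security-Policy header into {directive: [values]}."""
    directives = {}
    for part in header.split(";"):
        part = part.strip()
        if part:
            name, *values = part.split()
            directives[name] = values
    return directives

print(parse_csp("default-src 'none'; script-src 'self'"))
```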



-Original Message-
From: Serkan KAZANCI [mailto:ser...@kazanci.com.tr] 
Sent: Tuesday, May 12, 2020 3:32 PM
To: solr-user@lucene.apache.org
Subject: velocity response writer javascript execution problem

Hi,

This is my first mail to the group. Nice to be here.

4 years ago, I set up a Solr search interface using velocity response
writer templates. (Solr version: 5.3.1)

I want to re-do the interface with the new Solr version (8.5.1). After some
tests, I have realized that velocity response writer templates do not run
JavaScript code. Even the auto-complete feature in Solr's techproducts demo
is not working, and it also uses velocity response writer templates and
relies on JavaScript for that function.

Is it due to the security vulnerability I heard about a couple of years ago?
Is there a workaround so that I can use velocity templates that execute
JavaScript? Or is it only me having this problem?

Thanks for the replies in advance.

Serkan,




FW: velocity response writer javascript execution problem

2020-05-13 Thread Serkan KAZANCI
Hi,

Any update on this matter guys ?

Regards,

Serkan

 




Re: FW: Solr proximity search highlighting issue

2020-04-02 Thread Charlie Hull
I may be wrong here, but the problem may be that the match was on your 
terms pos1 and pos2 (you don't need the pos3 term to match, due to the 
OR operator) and thus that's what's been highlighted.


There's a hl.q parameter that lets you supply a different query for 
highlighting to the one you're using for searching, perhaps that could 
have a different and more forgiving pattern that made sure all your 
terms were highlighted?
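Concretely, the search query and the highlighting query can be supplied as separate parameters. A sketch of such a request; the collection name and the more forgiving hl.q pattern are placeholders:

```python
from urllib.parse import urlencode

params = {
    "q": '{!complexphrase inOrder=true}"pos1 (pos2 OR pos3)"~30',
    "hl": "true",
    # hl.q: a looser query used only for highlighting
    "hl.q": "pos1 OR pos2 OR pos3",
}
print("/solr/mycollection/select?" + urlencode(params))
```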


Also, the XML didn't come through as this list strips attachments.

Best

Charlie


--
Charlie Hull
OpenSource Connections, previously Flax

tel/fax: +44 (0)8700 118334
mobile:  +44 (0)7767 825828
web: www.o19s.com



FW: Solr proximity search highlighting issue

2020-03-31 Thread Anil Shingala


Hello Dev Team,

I found a problem in the highlighting module: not all of the search terms are 
getting highlighted.

Sample query: q={!complexphrase+inOrder=true}"pos1 (pos2 OR pos3)"~30&hl=true
Indexed text: "pos1 pos2 pos3 pos4"

Please find attached a response XML screenshot from Solr.

You can see that only two terms are highlighted, like: "<em>pos1</em> 
<em>pos2</em> pos3 pos4"

The behavior has been the same in the Solr source code for a long time (I have 
checked Solr versions 4 through 7). It occurs when the term positions are 
in order in both the document and the query.

Please let me know your view on this.

Regards,
Anil Shingala
Knovos
10521 Rosehaven Street, Suite 300 | Fairfax, VA 22030 (USA)
Office +1 703.226.1505
Main +1 703.226.1500 | +1 877.227.5457
ashing...@knovos.com | 
www.knovos.com

Washington DC | New York | London | Paris | Gandhinagar | Tokyo

Knovos was formerly also known as Capital Novus or Capital Legal Solutions. The 
information contained in this email message may be confidential or legally 
privileged. If you are not the intended recipient, please advise the sender by 
replying to this email and by immediately deleting all copies of this message 
and any attachments. Knovos, LLC is not authorized to practice law.




Re: FW: SOLR version 8 bug???

2020-03-24 Thread Charlie Hull

Hi Phil,

The error you mention, “The website encountered an unexpected error. 
Please try again later.”, isn't being generated by Solr but by Drupal. We 
can't tell from the error text you're providing what the Drupal Solr 
plugin is actually sending to Solr as a query, I'm afraid: if you could 
figure that out, then you could try running it against Solr itself using the 
standard Solr API and see whether you can reproduce the problem. It's also 
entirely possible that your Drupal system is sending something crazy to 
Solr which causes the error. You might look into raising the Solr 
logging level (see 
https://lucene.apache.org/solr/guide/8_5/configuring-logging.html).


Best

Charlie

On 24/03/2020 15:38, Staley, Phil R - DCF wrote:

I just updated to SOLR 8.5.0 on one of our test servers and I continue to get 
the same issue/bug I described below.  Below my description of the problem I 
have also included the log message detail from Drupal.

This is the third time I have submitted this item.  For the time being we will 
continue to run SOLR 7.7.2 in production.

Thanks,

Phil Staley
Webmaster
Wisconsin Dept. of Children and Families
608 422-6569
phil.sta...@wisconsin.gov

We recently upgraded our Drupal 8 sites to SOLR 8.3.1.  We are now getting 
reports of certain patterns of search terms resulting in an error that reads, 
“The website encountered an unexpected error. Please try again later.”

Below is a list of example terms that always result in this error and a similar 
list that works fine.  The problem pattern seems to be a search term that 
contains 2 or 3 characters followed by a space, followed by additional text.

To confirm that the problem is version 8 of SOLR, I have updated our local and 
UAT sites with the latest Drupal updates that did include an update to the 
Search API Solr module and tested the terms below under SOLR 7.7.2, 8.3.1, and 
8.4.1.  Under version 7.7.2 everything works fine. Under either of the version 
8 releases, the problem returns.

Thoughts?

  Search terms that result in error

• w-2 agency directory

• agency w-2 directory

• w-2 agency

• w-2 directory

• w2 agency directory

• w2 agency

• w2 directory

  Search terms that do not result in error

• w-22 agency directory

• agency directory w-2

• agency w-2directory

• agencyw-2 directory

• w-2

• w2

• agency directory

• agency • directory

• -2 agency directory

• 2 agency directory

• w-2agency directory

• w2agency directory
  Drupal\search_api_solr\SearchApiSolrException: An error occurred while trying to search with Solr: { "error":{ 
"msg":"0", "trace":"java.lang.ArrayIndexOutOfBoundsException: 0\n\tat 
org.apache.lucene.util.QueryBuilder.newSynonymQuery(QueryBuilder.java:701)\n\tat 
org.apache.solr.parser.SolrQueryParserBase.newSynonymQuery(SolrQueryParserBase.java:636)\n\tat 
org.apache.lucene.util.QueryBuilder.analyzeGraphBoolean(QueryBuilder.java:581)\n\tat 
org.apache.lucene.util.QueryBuilder.createFieldQuery(QueryBuilder.java:343)\n\tat 
org.apache.lucene.util.QueryBuilder.createFieldQuery(QueryBuilder.java:263)\n\tat 
org.apache.solr.parser.SolrQueryParserBase.newFieldQuery(SolrQueryParserBase.java:527)\n\tat 
org.apache.solr.parser.QueryParser.newFieldQuery(QueryParser.java:62)\n\tat 
org.apache.solr.parser.SolrQueryParserBase.getFieldQuery(SolrQueryParserBase.java:1141)\n\tat 
org.apache.solr.parser.QueryParser.MultiTerm(QueryParser.java:593)\n\tat org.apache.solr.parser.QueryParser.Query(QueryParser.java:142)\n\tat 
org.apache.solr.parser.QueryParser.Clause(QueryParser.java:282)\n\tat org.apache.solr.parser.QueryParser.Query(QueryParser.java:162)\n\tat 
org.apache.solr.parser.QueryParser.Clause(QueryParser.java:282)\n\tat org.apache.solr.parser.QueryParser.Query(QueryParser.java:162)\n\tat 
org.apache.solr.parser.QueryParser.Clause(QueryParser.java:282)\n\tat org.apache.solr.parser.QueryParser.Query(QueryParser.java:162)\n\tat 
org.apache.solr.parser.QueryParser.TopLevelQuery(QueryParser.java:131)\n\tat 
org.apache.solr.parser.SolrQueryParserBase.parse(SolrQueryParserBase.java:263)\n\tat 
org.apache.solr.search.LuceneQParser.parse(LuceneQParser.java:49)\n\tat org.apache.solr.search.QParser.getQuery(QParser.java:174)\n\tat 
org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:161)\n\tat 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:302)\n\tat 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:211)\n\tat 
org.apache.solr.core.SolrCore.execute(SolrCore.java:2596)\n\tat org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:802)\n\tat 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:579)\n\tat 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:420)\n\tat 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:352)\n\tat 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1596)\n\tat 

FW: SOLR version 8 bug???

2020-03-24 Thread Staley, Phil R - DCF
I just updated to SOLR 8.5.0 on one of our test servers and I continue to get 
the same issue/bug I described below.  Below my description of the problem I 
have also included the log message detail from Drupal.

This is the third time I have submitted this item.  For the time being we will 
continue to run SOLR 7.7.2 in production.

Thanks,

Phil Staley
Webmaster
Wisconsin Dept. of Children and Families
608 422-6569
phil.sta...@wisconsin.gov

We recently upgraded our Drupal 8 sites to SOLR 8.3.1.  We are now getting 
reports of certain patterns of search terms resulting in an error that reads, 
“The website encountered an unexpected error. Please try again later.”

Below is a list of example terms that always result in this error and a similar 
list that works fine.  The problem pattern seems to be a search term that 
contains 2 or 3 characters followed by a space, followed by additional text.

To confirm that the problem is version 8 of SOLR, I have updated our local and 
UAT sites with the latest Drupal updates that did include an update to the 
Search API Solr module and tested the terms below under SOLR 7.7.2, 8.3.1, and 
8.4.1.  Under version 7.7.2 everything works fine. Under either of the version 
8 releases, the problem returns.

Thoughts?

 Search terms that result in error

• w-2 agency directory

• agency w-2 directory

• w-2 agency

• w-2 directory

• w2 agency directory

• w2 agency

• w2 directory

 Search terms that do not result in error

• w-22 agency directory

• agency directory w-2

• agency w-2directory

• agencyw-2 directory

• w-2

• w2

• agency directory

• agency • directory

• -2 agency directory

• 2 agency directory

• w-2agency directory

• w2agency directory
 Drupal\search_api_solr\SearchApiSolrException: An error occurred while trying 
to search with Solr: { "error":{ "msg":"0", 
"trace":"java.lang.ArrayIndexOutOfBoundsException: 0\n\tat 
org.apache.lucene.util.QueryBuilder.newSynonymQuery(QueryBuilder.java:701)\n\tat
 
org.apache.solr.parser.SolrQueryParserBase.newSynonymQuery(SolrQueryParserBase.java:636)\n\tat
 
org.apache.lucene.util.QueryBuilder.analyzeGraphBoolean(QueryBuilder.java:581)\n\tat
 
org.apache.lucene.util.QueryBuilder.createFieldQuery(QueryBuilder.java:343)\n\tat
 
org.apache.lucene.util.QueryBuilder.createFieldQuery(QueryBuilder.java:263)\n\tat
 
org.apache.solr.parser.SolrQueryParserBase.newFieldQuery(SolrQueryParserBase.java:527)\n\tat
 org.apache.solr.parser.QueryParser.newFieldQuery(QueryParser.java:62)\n\tat 
org.apache.solr.parser.SolrQueryParserBase.getFieldQuery(SolrQueryParserBase.java:1141)\n\tat
 org.apache.solr.parser.QueryParser.MultiTerm(QueryParser.java:593)\n\tat 
org.apache.solr.parser.QueryParser.Query(QueryParser.java:142)\n\tat 
org.apache.solr.parser.QueryParser.Clause(QueryParser.java:282)\n\tat 
org.apache.solr.parser.QueryParser.Query(QueryParser.java:162)\n\tat 
org.apache.solr.parser.QueryParser.Clause(QueryParser.java:282)\n\tat 
org.apache.solr.parser.QueryParser.Query(QueryParser.java:162)\n\tat 
org.apache.solr.parser.QueryParser.Clause(QueryParser.java:282)\n\tat 
org.apache.solr.parser.QueryParser.Query(QueryParser.java:162)\n\tat 
org.apache.solr.parser.QueryParser.TopLevelQuery(QueryParser.java:131)\n\tat 
org.apache.solr.parser.SolrQueryParserBase.parse(SolrQueryParserBase.java:263)\n\tat
 org.apache.solr.search.LuceneQParser.parse(LuceneQParser.java:49)\n\tat 
org.apache.solr.search.QParser.getQuery(QParser.java:174)\n\tat 
org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:161)\n\tat
 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:302)\n\tat
 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:211)\n\tat
 org.apache.solr.core.SolrCore.execute(SolrCore.java:2596)\n\tat 
org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:802)\n\tat 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:579)\n\tat 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:420)\n\tat
 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:352)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1596)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:590)\n\tat
 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)\n\tat
 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1607)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1297)\n\tat
 

Re: Fw: SolrException in Solr 6.1.0

2020-03-11 Thread Paras Lehana
Hi Vishal,

Let's have a quick look at your logs.

org.apache.solr.common.SolrException: Exception writing document id
> WF204878828_42970103 to the index; possible analysis error.


If you were indexing something, check the document's syntax.


 Caused by: java.nio.file.FileSystemException:
> E:\SolrCloud\solr1\server\solr\workflows\data\index\_8suj.fdx: Insufficient
> system resources exist to complete the requested service.


Resource issues? Increase heap, check RAM and HDD utilization?
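Paras's resource hint can be checked quickly from a shell. A rough sketch for a POSIX host, run from the Solr data directory (the failing node in the log is on Windows, where Resource Monitor or `fsutil volume diskfree` would be the rough equivalent):

```shell
# Free space on the volume holding the index (POSIX df output format),
# plus the open-file limit -- both are common causes of "Insufficient
# system resources" style failures during indexing.
free_kb=$(df -Pk . | awk 'NR==2 {print $4}')
echo "free disk: ${free_kb} KB"
ulimit -n
```

If either number is tight, fix that before retrying the import.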



On Mon, 9 Mar 2020 at 10:54, vishal patel 
wrote:

> Is anyone looking into my issue?
>
> Sent from Outlook
>
> 
> From: vishal patel 
> Sent: Friday, March 6, 2020 12:02 PM
> To: solr-user@lucene.apache.org
> Subject: SolrException in Solr 6.1.0
>
> I got below ERROR in Solr 6.1.0 log
>
> 2020-03-05 16:54:09.508 ERROR (qtp1239731077-468949) [c:workflows s:shard1
> r:core_node1 x:workflows] o.a.s.h.RequestHandlerBase
> org.apache.solr.common.SolrException: Exception writing document id
> WF204878828_42970103 to the index; possible analysis error.
> at
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:181)
> at
> org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:68)
> at
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:48)
> at
> org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:939)
> at
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1094)
> at
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:720)
> at
> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:103)
> at
> org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:97)
> at
> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:179)
> at
> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:135)
> at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:274)
> at
> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:121)
> at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:239)
> at
> org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:157)
> at
> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:186)
> at
> org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:107)
> at org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:54)
> at
> org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)
> at
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:69)
> at
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2036)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:657)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:464)
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
> at
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:518)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
> at
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
> at org.eclipse.jetty.io
> 

Fw: SolrException in Solr 6.1.0

2020-03-08 Thread vishal patel
Is anyone looking into my issue?

Sent from Outlook


From: vishal patel 
Sent: Friday, March 6, 2020 12:02 PM
To: solr-user@lucene.apache.org
Subject: SolrException in Solr 6.1.0

I got below ERROR in Solr 6.1.0 log

2020-03-05 16:54:09.508 ERROR (qtp1239731077-468949) [c:workflows s:shard1 
r:core_node1 x:workflows] o.a.s.h.RequestHandlerBase 
org.apache.solr.common.SolrException: Exception writing document id 
WF204878828_42970103 to the index; possible analysis error.
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:181)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:68)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:48)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:939)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1094)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:720)
at 
org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:103)
at org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:97)
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:179)
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:135)
at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:274)
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:121)
at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:239)
at org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:157)
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:186)
at 
org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:107)
at org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:54)
at 
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)
at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:69)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2036)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:657)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:464)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:518)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.lucene.store.AlreadyClosedException: this IndexWriter is 
closed
at 

Fw: Unable to start solr node with sysprop to form a nodeset

2020-03-03 Thread Yatin Grover



From: Yatin Grover
Sent: Wednesday, March 4, 2020 11:46 AM
To: solr-user@lucene.apache.org 
Subject: Unable to start solr node with sysprop to form a nodeset


Hi,

I am trying to apply some cluster-level autoscaling policies, and for that I am 
trying to create a nodeset. There are several ways to create a 
nodeset (selecting a set of nodes on the basis of some rules) as per the docs: 
https://lucene.apache.org/solr/guide/8_3/solrcloud-autoscaling-policy-preferences.html#node-selection-attributes

The docs state that we can use sysprop.{PROPERTY_NAME} to specify a set of 
nodes, and that sysprop.key refers to a value that is passed to the node as 
-Dkey=keyValue during node startup.

So this is the command I am using to start a node

bin/solr start -cloud -s example/cloud/node/solr -p 8987 -Dkey=noderef -z 
localhost:9983


However I get the following error after the above command

javax.servlet.UnavailableException: Error processing the request. CoreContainer 
is either not initialized or shutting down.
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:370)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:351)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1711)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1347)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1678)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1249)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:152)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.Server.handle(Server.java:505)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:370)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:267)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:132)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:781)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:917)
at java.lang.Thread.run(Thread.java:748)


Can anyone point out what I am doing wrong here? Basically I want to label one 
set of nodes as "pullnodes" and another set as "tlognodes" so that I can apply 
different autoscaling cluster policies to each.

Thanks and Regards
Yatin Grover
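For what it's worth, a bare `-Dkey=value` in the middle of the `bin/solr start` arguments may not be forwarded to the JVM on every version. The two documented pass-through routes are `SOLR_OPTS` (in `solr.in.sh`) and the `-a` option. A sketch, keeping the `key=noderef` names from the message above:

```shell
# (a) In solr.in.sh, which bin/solr sources at startup:
#     append the system property to SOLR_OPTS.
SOLR_OPTS="$SOLR_OPTS -Dkey=noderef"
echo "SOLR_OPTS now: $SOLR_OPTS"

# (b) Or on the command line, passing extra JVM parameters through -a:
#   bin/solr start -cloud -s example/cloud/node/solr -p 8987 \
#     -a "-Dkey=noderef" -z localhost:9983
```

With the property actually reaching the JVM, the `sysprop.key` attribute in the cluster policy can then match the node.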


FW: Solr 4 to Solr7 migration DIH behavior change

2019-11-22 Thread Shashank Bellary
Hi Folks

I migrated from Solr 4 to 7.5, and I see an issue with the way DIH is working. I 
use `JdbcDataSource`; the config file is attached.

1) I started seeing an OutOfMemory issue, since the MySQL JDBC driver has a 
known issue of not respecting `batchSize` (though Solr 4 didn't show this 
behavior). So I added `batchSize=-1` for that.

2) After adding that, I'm running into a "ResultSet closed" exception, shown 
below, while fetching the child entity:


getNext() failed for query ' SELECT REVIEW AS REVIEWS FROM 
SOLR_SITTER_SERVICE_PROFILE_REVIEWS WHERE SERVICE_PROFILE_ID = '17' ; 
':org.apache.solr.handler.dataimport.DataImportHandlerException: 
java.sql.SQLException: Operation not allowed after ResultSet closed
at 
org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:61)
at 
org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:464)
at 
org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator$1.hasNext(JdbcDataSource.java:377)
at 
org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:133)
at 
org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:75)
at 
org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:267)
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:476)
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:517)
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:415)
at org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:33)
at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:233)
at 
org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:424)
at org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:483)
at 
org.apache.solr.handler.dataimport.DataImporter.lambda$runAsync$0(DataImporter.java:466)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLException: Operation not allowed after ResultSet closed
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1075)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:989)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:984)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:929)
at com.mysql.jdbc.ResultSetImpl.checkClosed(ResultSetImpl.java:794)
at com.mysql.jdbc.ResultSetImpl.next(ResultSetImpl.java:7145)
at com.mysql.jdbc.StatementImpl.getMoreResults(StatementImpl.java:2078)
at com.mysql.jdbc.StatementImpl.getMoreResults(StatementImpl.java:2062)
at 
org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:458)
... 13 more


Is this a known issue? How do I fix this, any help is greatly appreciated.



Thanks

Shashank

This email is intended for the person(s) to whom it is addressed and may 
contain information that is PRIVILEGED or CONFIDENTIAL. Any unauthorized use, 
distribution, copying, or disclosure by any person other than the addressee(s) 
is strictly prohibited. If you have received this email in error, please notify 
the sender immediately by return email and delete the message and any 
attachments from your system.


serviceprofile-data-import.xml
Description: serviceprofile-data-import.xml


Re: Fw: unsubscribe

2018-12-07 Thread Timeka Cobb
Unsubscribe

On Fri, Dec 7, 2018, 10:57 AM samuel kim 
>
> Sent from Outlook
>
>
> 
> From: samuel kim
> Sent: Monday, July 31, 2017 3:48 PM
> To: solr-user@lucene.apache.org
> Subject: unsubscribe
>
> unsubscribe
>


Fw: unsubscribe

2018-12-07 Thread samuel kim


Sent from Outlook



From: samuel kim
Sent: Monday, July 31, 2017 3:48 PM
To: solr-user@lucene.apache.org
Subject: unsubscribe

unsubscribe


FW: Sort index by size

2018-11-27 Thread Srinivas Kashyap
Hi Shawn and everyone who replied to the thread,

The Solr version is 5.2.1, and each document returns multi-valued fields 
for the majority of the fields defined in schema.xml. I'm in the process of pasting the 
contents of my files to a paste website and will update soon.

Thanks,
Srinivas


On 11/19/2018 2:31 AM, Srinivas Kashyap wrote:
> I have a solr core with some 20 fields in it.(all are stored and indexed). 
> For an environment, the number of documents are around 0.29 million. When I 
> run the full import through DIH, indexing is completing successfully. But, it 
> is occupying the disk space of around 5 GB. Is there a possibility where I 
> can go and check, which document is consuming more memory? Put in another 
> way, can I sort the index based on size?

I am not aware of any way to do that.  There might be one that I don't know about, 
but if there were a way, it seems like I would have come across it before.

It is not very likely that the large index size is due to a single document or a 
handful of documents.  It is more likely that most documents are relatively 
large.  I could be wrong about that, though.

If you have 290,000 documents (which is how I interpreted 0.29 million) and the 
total index size is about 5 GB, then the average size per document in the index 
is about 18 kilobytes. This is, in my view, pretty large.  Typically I think that 
most documents are 1-2 kilobytes.
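Shawn's estimate checks out with quick shell arithmetic (assuming 5 GiB and the 0.29 million document count from the thread):

```shell
# Average bytes per document for a 5 GiB index over 290,000 docs.
index_bytes=$((5 * 1024 * 1024 * 1024))
docs=290000
avg=$((index_bytes / docs))
echo "average bytes per document: $avg"   # about 18 KB
```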

Can we get your Solr version, a copy of your schema, and exactly what Solr 
returns in search results for a typically sized document?  You'll need to use a 
paste website or a file-sharing website ... if you try to attach these things 
to a message, the mailing list will most likely eat them, and we'll never see 
them. If you need to redact the information in search results ... please do it 
in a way that we can still see the exact size of the text -- don't just remove 
information, replace it with information that's the same length.

Thanks,
Shawn


DISCLAIMER:
E-mails and attachments from Bamboo Rose, LLC are confidential.
If you are not the intended recipient, please notify the sender immediately by 
replying to the e-mail, and then delete it without making copies or using it in 
any way.
No representation is made that this email or any attachments are free of 
viruses. Virus scanning is recommended and is the responsibility of the 
recipient.


Re: FW: Question about Overseer calling SPLITSHARD collection API command during autoscaling

2018-03-15 Thread Cassandra Targett
Hi Matthew -

It's cool to hear you're using the new autoscaling features.

To answer your first question, SPLITSHARD as an action for autoscaling is
not yet supported. As for when it might be, it's the next big gap to fill
in the autoscaling functionality, but there is some work to do first to
make splitting shards faster and safer overall. So, I hope we'll see it in
7.4, but there's a chance it won't be ready until the release after (7.5,
I'd assume).

AFAICT, there isn't a JIRA issue specifically for the SPLITSHARD support
yet, but there will be one relatively soon. There's an umbrella issue for
many of the open tasks if you're interested in that:
https://issues.apache.org/jira/browse/SOLR-9735 (although it's not an
exhaustive roadmap, I don't think).

I think for the time being if you want/need to split a shard, you'd still
need to do it manually.

Hope this helps -
Cassandra

On Thu, Mar 15, 2018 at 11:41 AM, Matthew Faw 
wrote:

> I sent this a few mins ago, but wasn’t yet subscribed.  Forwarding the
> message along to make sure it’s received!
>
> From: Matthew Faw 
> Date: Thursday, March 15, 2018 at 12:28 PM
> To: "solr-user@lucene.apache.org" 
> Cc: Matthew Faw , Alex Meijer <
> alex.mei...@verato.com>
> Subject: Question about Overseer calling SPLITSHARD collection API command
> during autoscaling
>
> Hi,
>
> So I’ve been trying out the new autoscaling features in solr 7.2.1.  I run
> the following commands when creating my solr cluster:
>
>
> Set up overseer role:
> curl -s "solr-service-core:8983/solr/admin/collections?action=
> ADDROLE&role=overseer&node=$thenode"
>
> Create cluster prefs:
> clusterprefs=$(cat <<-EOF
> {
> "set-cluster-preferences" : [
>   {"minimize":"sysLoadAvg"},
>   {"minimize":"cores"}
>   ]
> }
> EOF
> )
> echo "The cluster prefs request body is: $clusterprefs"
> curl -H "Content-Type: application/json" -X POST -d
> "$clusterprefs" solr-service-core:8983/api/cluster/autoscaling
>
> Cluster policy:
> clusterpolicy=$(cat <<-EOF
> {
> "set-cluster-policy": [
>   {"replica": 0, "nodeRole": "overseer"},
>   {"replica": "<2", "shard": "#EACH", "node": "#ANY"},
>   {"cores": ">0", "node": "#ANY"},
>   {"cores": "<5", "node": "#ANY"},
>   {"replica": 0, "sysLoadAvg": ">80"}
>   ]
> }
> EOF
> )
> echo "The cluster policy is $clusterpolicy"
> curl -H "Content-Type: application/json" -X POST -d
> "$clusterpolicy" solr-service-core:8983/api/cluster/autoscaling
>
> nodeaddtrigger=$(cat <<-EOF
> {
>  "set-trigger": {
>   "name" : "node_added_trigger",
>   "event" : "nodeAdded",
>   "waitFor" : "1s"
>  }
> }
> EOF
> )
> echo "The node added trigger request: $nodeaddtrigger"
> curl -H "Content-Type: application/json" -X POST -d
> "$nodeaddtrigger" solr-service-core:8983/api/cluster/autoscaling
>
>
> I then create a collection with 2 shards and 3 replicas, under a set of
> nodes in an autoscaling group (initially 4, scales up to 10):
> curl -s "solr-service-core:8983/solr/admin/collections?action=
> CREATE&name=${COLLECTION_NAME}&numShards=${NUM_SHARDS}&
> replicationFactor=${NUM_REPLICAS}&autoAddReplicas=${
> AUTO_ADD_REPLICAS}&collection.configName=${COLLECTION_NAME}&
> waitForFinalState=true"
>
>
> I’ve observed several autoscaling actions being performed – automatically
> re-adding replicas, and moving shards to nodes based on my cluster
> policy/prefs.  However, I have not observed a SPLITSHARD operation.  My
> question is:
> 1) should I expect the Overseer to be able to call the SPLITSHARD command,
> or is this feature not yet implemented?
> 2) If it is possible, do you have any recommendations as to how I might
> force this type of behavior to happen?
> 3) If it’s not implemented yet, when could I expect the feature to be
> available?
>
> If you need any more details, please let me know! Really excited about
> these new features.
>
> Thanks,
> Matthew
>
> The content of this email is intended solely for the individual or entity
> named above and access by anyone else is unauthorized. If you are not the
> intended recipient, any disclosure, copying, distribution, or use of the
> contents of this information is prohibited and may be unlawful. If you have
> received this electronic transmission in error, please reply immediately to
> the sender that you have received the message in error, and delete it.
> Thank you.
>


FW: Question about Overseer calling SPLITSHARD collection API command during autoscaling

2018-03-15 Thread Matthew Faw
I sent this a few mins ago, but wasn’t yet subscribed.  Forwarding the message 
along to make sure it’s received!

From: Matthew Faw 
Date: Thursday, March 15, 2018 at 12:28 PM
To: "solr-user@lucene.apache.org" 
Cc: Matthew Faw , Alex Meijer 
Subject: Question about Overseer calling SPLITSHARD collection API command 
during autoscaling

Hi,

So I’ve been trying out the new autoscaling features in solr 7.2.1.  I run the 
following commands when creating my solr cluster:


Set up overseer role:
curl -s 
"solr-service-core:8983/solr/admin/collections?action=ADDROLE&role=overseer&node=$thenode"

Create cluster prefs:
clusterprefs=$(cat <<-EOF
{
"set-cluster-preferences" : [
  {"minimize":"sysLoadAvg"},
  {"minimize":"cores"}
  ]
}
EOF
)
echo "The cluster prefs request body is: $clusterprefs"
curl -H "Content-Type: application/json" -X POST -d "$clusterprefs" 
solr-service-core:8983/api/cluster/autoscaling

Cluster policy:
clusterpolicy=$(cat <<-EOF
{
"set-cluster-policy": [
  {"replica": 0, "nodeRole": "overseer"},
  {"replica": "<2", "shard": "#EACH", "node": "#ANY"},
  {"cores": ">0", "node": "#ANY"},
  {"cores": "<5", "node": "#ANY"},
  {"replica": 0, "sysLoadAvg": ">80"}
  ]
}
EOF
)
echo "The cluster policy is $clusterpolicy"
curl -H "Content-Type: application/json" -X POST -d 
"$clusterpolicy" solr-service-core:8983/api/cluster/autoscaling

nodeaddtrigger=$(cat <<-EOF
{
 "set-trigger": {
  "name" : "node_added_trigger",
  "event" : "nodeAdded",
  "waitFor" : "1s"
 }
}
EOF
)
echo "The node added trigger request: $nodeaddtrigger"
curl -H "Content-Type: application/json" -X POST -d 
"$nodeaddtrigger" solr-service-core:8983/api/cluster/autoscaling


I then create a collection with 2 shards and 3 replicas, under a set of nodes 
in an autoscaling group (initially 4, scales up to 10):
curl -s 
"solr-service-core:8983/solr/admin/collections?action=CREATE&name=${COLLECTION_NAME}&numShards=${NUM_SHARDS}&replicationFactor=${NUM_REPLICAS}&autoAddReplicas=${AUTO_ADD_REPLICAS}&collection.configName=${COLLECTION_NAME}&waitForFinalState=true"


I’ve observed several autoscaling actions being performed – automatically 
re-adding replicas, and moving shards to nodes based on my cluster 
policy/prefs.  However, I have not observed a SPLITSHARD operation.  My 
question is:
1) should I expect the Overseer to be able to call the SPLITSHARD command, or 
is this feature not yet implemented?
2) If it is possible, do you have any recommendations as to how I might force 
this type of behavior to happen?
3) If it’s not implemented yet, when could I expect the feature to be available?

If you need any more details, please let me know! Really excited about these 
new features.

Thanks,
Matthew

The content of this email is intended solely for the individual or entity named 
above and access by anyone else is unauthorized. If you are not the intended 
recipient, any disclosure, copying, distribution, or use of the contents of 
this information is prohibited and may be unlawful. If you have received this 
electronic transmission in error, please reply immediately to the sender that 
you have received the message in error, and delete it. Thank you.


RE: FW: Need Help Configuring Solr To Work With Nutch

2017-12-11 Thread Mukhopadhyay, Aratrika
Thank you Erick Erickson and Rick Leir. My issue was permission related: the 
solr user was not running the indexing job through Nutch and therefore was 
unable to write anything to Solr. I changed the ownership of Nutch's 
runtime directory to solr and all is well and working. I thank you for your 
help. Your tip about the numDocs put me on the right track. 

Aratrika 

-Original Message-
From: Rick Leir [mailto:rl...@leirtech.com] 
Sent: Saturday, December 09, 2017 10:25 AM
To: solr-user@lucene.apache.org
Subject: RE: FW: Need Help Configuring Solr To Work With Nutch

Ara
The config for soft commit would not be in schema.xml, please look in 
solrconfig.xml.

Look in solr.log for evidence of commits occurring. Explore the SolrAdmin 
console, what are the document counts?

You can post snippets from your config files here.
Cheers --Rick
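For anyone landing on this thread later: the soft-commit setting Rick points to lives under the updateHandler section of solrconfig.xml. A typical fragment looks like the following (the 60-second interval is only an example value):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <autoSoftCommit>
    <!-- make newly indexed documents searchable within 60s -->
    <maxTime>60000</maxTime>
  </autoSoftCommit>
</updateHandler>
```

A hard autoCommit (with openSearcher=false) is usually kept alongside it to flush the transaction log.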


On December 8, 2017 4:23:00 PM EST, "Mukhopadhyay, Aratrika" 
<aratrika.mukhopadh...@mail.house.gov> wrote:
>Rick ,
>Thanks for your reply. I do not see any errors or exceptions in the 
>solr logs. I have read that my schema in Nutch needs to match the 
>schema in Solr. When I change the schema in the config directory and 
>restart Solr, my changes are lost. Leaving the schema alone is the only 
>way I can get the indexing job to run, but I can't query for the data in 
>Solr. Would you like me to send you specific configuration files? I 
>can't seem to get this to work.
>
>Kind regards,
>Aratrika Mukhopadhyay
>
>-Original Message-
>From: Rick Leir [mailto:rl...@leirtech.com]
>Sent: Friday, December 08, 2017 4:06 PM
>To: solr-user@lucene.apache.org
>Subject: Re: FW: Need Help Configuring Solr To Work With Nutch
>
>Ara
>Softcommit might be the default in Solrconfig.xml, and if not then you 
>should probably make it so. Then you need to have a look in solr.log if 
>things are not working as you expect.
>Cheers -- Rick
>
>On December 8, 2017 3:23:35 PM EST, "Mukhopadhyay, Aratrika"
><aratrika.mukhopadh...@mail.house.gov> wrote:
>>Erick,
>>Do I need to set the softCommit = true and prepareCommit to true in my
>
>>solrconfig ? I am still at a loss as to what is happening. Thanks
>again
>>for your help.
>>
>>Aratrika
>>
>>From: Mukhopadhyay, Aratrika
>>Sent: Friday, December 08, 2017 11:34 AM
>>To: solr-user <solr-user@lucene.apache.org>
>>Subject: RE: Need Help Configuring Solr To Work With Nutch
>>
>>
>>Hello Erick ,
>>
>>   This is what I see in the logs :
>>
>>[inline screenshot not preserved in the archive]
>>
>>
>>
>>I am sorry, it's been a while since I worked with Solr. I did not do 
>>anything to specifically commit the changes to the core. Thanks for 
>>your prompt attention to this matter.
>>
>>
>>
>>Aratrika Mukhopadhyay
>>
>>
>>
>>-Original Message-
>>From: Erick Erickson [mailto:erickerick...@gmail.com]
>>Sent: Friday, December 08, 2017 11:06 AM
>>To: solr-user
>><solr-user@lucene.apache.org<mailto:solr-user@lucene.apache.org>>
>>Subject: Re: Need Help Configuring Solr To Work With Nutch
>>
>>
>>
>>1> do you see update messages in the Solr logs?
>>
>>2> did you issue a commit?
>>
>>
>>
>>Best,
>>
>>Erick
>>
>>
>>
>>On Fri, Dec 8, 2017 at 7:27 AM, Mukhopadhyay, Aratrika < 
>>aratrika.mukhopadh...@mail.house.gov<mailto:Aratrika.Mukhopadhyay@mail.
>>house.gov>>
>>wrote:
>>
>>
>>
>>> Good Morning,
>>
>>>
>>
>>>I am running nutch 2.3 , hbase 0.98 and I am integrating
>>
>>> nutch with solr 6.4. I have a successful crawl in nutch and when I
>>see
>>
>>> that it is indexing the content into solr. However I cannot query
>and
>>get any results.
>>
>>> Its as if Nutch isn’t writing anything to solr at all. I am stuck
>and
>>
>>> need someone who is familiar with solr/nutch to provide assistance.
>>
>>> Can someone please help ?
>>
>>>
>>
>>>
>>
>>>
>>
>>> This is what I see when I index into solr. I see no errors.
>>
>>>
>>
>>>
>>
>>>
>>
>>>
>>
>>>
>>
>>>
>>
>>>
>>
>>> Regards,
>>
>>>
>>
>>> Aratrika Mukhopadhyay
>>
>>>
>
>--
>Sorry for being brief. Alternate email is rickleir at yahoo dot com

--
Sorry for being brief. Alternate email is rickleir at yahoo dot com 


RE: FW: Need Help Configuring Solr To Work With Nutch

2017-12-09 Thread Rick Leir
Ara
The config for soft commit would not be in schema.xml, please look in 
solrconfig.xml.

Look in solr.log for evidence of commits occurring. Explore the SolrAdmin 
console, what are the document counts?

You can post snippets from your config files here.
Cheers --Rick


On December 8, 2017 4:23:00 PM EST, "Mukhopadhyay, Aratrika" 
<aratrika.mukhopadh...@mail.house.gov> wrote:
>Rick , 
>Thanks for your reply. I do not see any errors or exceptions in the
>solr logs. I have read that my schema in Nutch needs to match the
>schema in Solr. When I change the schema in the config directory and
>restart Solr, my changes are lost. Leaving the schema alone is the only
>way I can get the indexing job to run, but I can't query for the data in
>Solr. Would you like me to send you specific configuration files? I
>can't seem to get this to work. 
>
>Kind regards,
>Aratrika Mukhopadhyay
>
>-Original Message-
>From: Rick Leir [mailto:rl...@leirtech.com] 
>Sent: Friday, December 08, 2017 4:06 PM
>To: solr-user@lucene.apache.org
>Subject: Re: FW: Need Help Configuring Solr To Work With Nutch
>
>Ara
>Softcommit might be the default in Solrconfig.xml, and if not then you
>should probably make it so. Then you need to have a look in solr.log if
>things are not working as you expect. 
>Cheers -- Rick
>
>On December 8, 2017 3:23:35 PM EST, "Mukhopadhyay, Aratrika"
><aratrika.mukhopadh...@mail.house.gov> wrote:
>>Erick,
>>Do I need to set the softCommit = true and prepareCommit to true in my
>
>>solrconfig ? I am still at a loss as to what is happening. Thanks
>again 
>>for your help.
>>
>>Aratrika
>>
>>From: Mukhopadhyay, Aratrika
>>Sent: Friday, December 08, 2017 11:34 AM
>>To: solr-user <solr-user@lucene.apache.org>
>>Subject: RE: Need Help Configuring Solr To Work With Nutch
>>
>>
>>Hello Erick ,
>>
>>   This is what I see in the logs :
>>
>>[inline screenshot not preserved in the archive]
>>
>>
>>
>>I am sorry, it's been a while since I worked with Solr. I did not do 
>>anything to specifically commit the changes to the core. Thanks for 
>>your prompt attention to this matter.
>>
>>
>>
>>Aratrika Mukhopadhyay
>>
>>
>>
>>-Original Message-
>>From: Erick Erickson [mailto:erickerick...@gmail.com]
>>Sent: Friday, December 08, 2017 11:06 AM
>>To: solr-user
>><solr-user@lucene.apache.org<mailto:solr-user@lucene.apache.org>>
>>Subject: Re: Need Help Configuring Solr To Work With Nutch
>>
>>
>>
>>1> do you see update messages in the Solr logs?
>>
>>2> did you issue a commit?
>>
>>
>>
>>Best,
>>
>>Erick
>>
>>
>>
>>On Fri, Dec 8, 2017 at 7:27 AM, Mukhopadhyay, Aratrika < 
>>aratrika.mukhopadh...@mail.house.gov<mailto:Aratrika.Mukhopadhyay@mail.
>>house.gov>>
>>wrote:
>>
>>
>>
>>> Good Morning,
>>
>>>
>>
>>>I am running nutch 2.3 , hbase 0.98 and I am integrating
>>
>>> nutch with solr 6.4. I have a successful crawl in nutch and when I
>>see
>>
>>> that it is indexing the content into solr. However I cannot query
>and
>>get any results.
>>
>>> Its as if Nutch isn’t writing anything to solr at all. I am stuck
>and
>>
>>> need someone who is familiar with solr/nutch to provide assistance.
>>
>>> Can someone please help ?
>>
>>>
>>
>>>
>>
>>>
>>
>>> This is what I see when I index into solr. I see no errors.
>>
>>>
>>
>>>
>>
>>>
>>
>>>
>>
>>>
>>
>>>
>>
>>>
>>
>>> Regards,
>>
>>>
>>
>>> Aratrika Mukhopadhyay
>>
>>>
>
>--
>Sorry for being brief. Alternate email is rickleir at yahoo dot com 

-- 
Sorry for being brief. Alternate email is rickleir at yahoo dot com 

RE: FW: Need Help Configuring Solr To Work With Nutch

2017-12-08 Thread Mukhopadhyay, Aratrika
Rick , 
  Thanks for your reply. I do not see any errors or exceptions in the solr 
logs. I have read that my schema in Nutch needs to match the schema in 
Solr. When I change the schema in the config directory and restart Solr, my 
changes are lost. Leaving the schema alone is the only way I can get the 
indexing job to run, but I can't query for the data in Solr. Would you like me to 
send you specific configuration files? I can't seem to get this to work. 

Kind regards,
Aratrika Mukhopadhyay

-Original Message-
From: Rick Leir [mailto:rl...@leirtech.com] 
Sent: Friday, December 08, 2017 4:06 PM
To: solr-user@lucene.apache.org
Subject: Re: FW: Need Help Configuring Solr To Work With Nutch

Ara
Softcommit might be the default in Solrconfig.xml, and if not then you should 
probably make it so. Then you need to have a look in solr.log if things are not 
working as you expect. 
Cheers -- Rick

On December 8, 2017 3:23:35 PM EST, "Mukhopadhyay, Aratrika" 
<aratrika.mukhopadh...@mail.house.gov> wrote:
>Erick,
>Do I need to set the softCommit = true and prepareCommit to true in my 
>solrconfig ? I am still at a loss as to what is happening. Thanks again 
>for your help.
>
>Aratrika
>
>From: Mukhopadhyay, Aratrika
>Sent: Friday, December 08, 2017 11:34 AM
>To: solr-user <solr-user@lucene.apache.org>
>Subject: RE: Need Help Configuring Solr To Work With Nutch
>
>
>Hello Erick ,
>
>   This is what I see in the logs :
>
>[inline screenshot not preserved in the archive]
>
>
>
>I am sorry, it's been a while since I worked with Solr. I did not do 
>anything to specifically commit the changes to the core. Thanks for 
>your prompt attention to this matter.
>
>
>
>Aratrika Mukhopadhyay
>
>
>
>-Original Message-
>From: Erick Erickson [mailto:erickerick...@gmail.com]
>Sent: Friday, December 08, 2017 11:06 AM
>To: solr-user
><solr-user@lucene.apache.org<mailto:solr-user@lucene.apache.org>>
>Subject: Re: Need Help Configuring Solr To Work With Nutch
>
>
>
>1> do you see update messages in the Solr logs?
>
>2> did you issue a commit?
>
>
>
>Best,
>
>Erick
>
>
>
>On Fri, Dec 8, 2017 at 7:27 AM, Mukhopadhyay, Aratrika < 
>aratrika.mukhopadh...@mail.house.gov<mailto:Aratrika.Mukhopadhyay@mail.
>house.gov>>
>wrote:
>
>
>
>> Good Morning,
>
>>
>
>>I am running nutch 2.3 , hbase 0.98 and I am integrating
>
>> nutch with solr 6.4. I have a successful crawl in nutch and when I
>see
>
>> that it is indexing the content into solr. However I cannot query and
>get any results.
>
>> Its as if Nutch isn’t writing anything to solr at all. I am stuck and
>
>> need someone who is familiar with solr/nutch to provide assistance.
>
>> Can someone please help ?
>
>>
>
>>
>
>>
>
>> This is what I see when I index into solr. I see no errors.
>
>>
>
>>
>
>>
>
>>
>
>>
>
>>
>
>>
>
>> Regards,
>
>>
>
>> Aratrika Mukhopadhyay
>
>>

--
Sorry for being brief. Alternate email is rickleir at yahoo dot com 


Re: FW: Need Help Configuring Solr To Work With Nutch

2017-12-08 Thread Rick Leir
Ara
Softcommit might be the default in Solrconfig.xml, and if not then you should 
probably make it so. Then you need to have a look in solr.log if things are not 
working as you expect. 
Cheers -- Rick

On December 8, 2017 3:23:35 PM EST, "Mukhopadhyay, Aratrika" 
 wrote:
>Erick,
>Do I need to set the softCommit = true and prepareCommit to true in my
>solrconfig ? I am still at a loss as to what is happening. Thanks again
>for your help.
>
>Aratrika
>
>From: Mukhopadhyay, Aratrika
>Sent: Friday, December 08, 2017 11:34 AM
>To: solr-user 
>Subject: RE: Need Help Configuring Solr To Work With Nutch
>
>
>Hello Erick ,
>
>   This is what I see in the logs :
>
>[inline screenshot not preserved in the archive]
>
>
>
>I am sorry, it's been a while since I worked with Solr. I did not do
>anything to specifically commit the changes to the core. Thanks for
>your prompt attention to this matter.
>
>
>
>Aratrika Mukhopadhyay
>
>
>
>-Original Message-
>From: Erick Erickson [mailto:erickerick...@gmail.com]
>Sent: Friday, December 08, 2017 11:06 AM
>To: solr-user
>>
>Subject: Re: Need Help Configuring Solr To Work With Nutch
>
>
>
>1> do you see update messages in the Solr logs?
>
>2> did you issue a commit?
>
>
>
>Best,
>
>Erick
>
>
>
>On Fri, Dec 8, 2017 at 7:27 AM, Mukhopadhyay, Aratrika <
>aratrika.mukhopadh...@mail.house.gov>
>wrote:
>
>
>
>> Good Morning,
>
>>
>
>>I am running nutch 2.3 , hbase 0.98 and I am integrating
>
>> nutch with solr 6.4. I have a successful crawl in nutch and when I
>see
>
>> that it is indexing the content into solr. However I cannot query and
>get any results.
>
>> Its as if Nutch isn’t writing anything to solr at all. I am stuck and
>
>> need someone who is familiar with solr/nutch to provide assistance.
>
>> Can someone please help ?
>
>>
>
>>
>
>>
>
>> This is what I see when I index into solr. I see no errors.
>
>>
>
>>
>
>>
>
>>
>
>>
>
>>
>
>>
>
>> Regards,
>
>>
>
>> Aratrika Mukhopadhyay
>
>>

-- 
Sorry for being brief. Alternate email is rickleir at yahoo dot com 

FW: Need Help Configuring Solr To Work With Nutch

2017-12-08 Thread Mukhopadhyay, Aratrika
Erick,
 Do I need to set the softCommit = true and prepareCommit to true in my 
solrconfig ? I am still at a loss as to what is happening. Thanks again for 
your help.

Aratrika

From: Mukhopadhyay, Aratrika
Sent: Friday, December 08, 2017 11:34 AM
To: solr-user 
Subject: RE: Need Help Configuring Solr To Work With Nutch


Hello Erick ,

   This is what I see in the logs :

[inline screenshot not preserved in the archive]



I am sorry, it's been a while since I worked with Solr. I did not do anything to 
specifically commit the changes to the core. Thanks for your prompt attention 
to this matter.



Aratrika Mukhopadhyay



-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Friday, December 08, 2017 11:06 AM
To: solr-user >
Subject: Re: Need Help Configuring Solr To Work With Nutch



1> do you see update messages in the Solr logs?

2> did you issue a commit?



Best,

Erick



On Fri, Dec 8, 2017 at 7:27 AM, Mukhopadhyay, Aratrika < 
aratrika.mukhopadh...@mail.house.gov>
 wrote:



> Good Morning,

>

>I am running nutch 2.3 , hbase 0.98 and I am integrating

> nutch with solr 6.4. I have a successful crawl in nutch, and I can see

> that it is indexing the content into solr. However I cannot query and get any 
> results.

> It's as if Nutch isn't writing anything to solr at all. I am stuck and

> need someone who is familiar with solr/nutch to provide assistance.

> Can someone please help ?

>

>

>

> This is what I see when I index into solr. I see no errors.

>

>

>

>

>

>

>

> Regards,

>

> Aratrika Mukhopadhyay

>


FW: Issue while using Document Routing in SolrCloud 6.1

2017-10-10 Thread Ketan Thanki

Thanks Emir,

As mentioned below, I am indexing using two tenants, and my data currently 
belongs to only one shard. Retrieval is noticeably faster as a result, but 
inserts seem slower.
So is there any specific reason for that?

Please do the needful.

Regards,
Ketan.



Hi Ketan,
Is it possible that you are indexing only one tenant and that is causing a 
single shard to become a hotspot?

Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/


From: Ketan Thanki
Sent: Tuesday, October 10, 2017 4:18 PM
To: 'solr-user@lucene.apache.org'
Subject: Issue while using Document Routing in SolrCloud 6.1

Hi,

I need help regarding the query mentioned below.
I have configured 2 collections, each with 2 shards and 2 replicas, and I have 
implemented composite document routing for my unique field 'Id', using a 
2-level tenant route as shown below.
e.g.: projectId:158380, modelId:3606, where the tenant bits are used as 
projectId/2!modelId/8 for the ID below:
"id":"79190!450!0003606#001#001#0#002754269#11760499351"

Issue: it seems that my retrieval got faster, but insertion is slower 
compared to without the routing changes.

Please do the needful.

Regards,
Ketan.
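For context, with Solr's compositeId router the /2 and /8 suffixes control how many bits of the routing hash come from each tenant part. A minimal sketch of assembling such an id (the project/model values echo the message above; the document suffix is copied from its example):

```shell
# Two-level composite routing key: 2 hash bits from the project,
# 8 from the model, the remainder from the document part.
PROJECT_ID=79190
MODEL_ID=450
DOC_PART="0003606#001#001#0#002754269#11760499351"
DOC_ID="${PROJECT_ID}/2!${MODEL_ID}/8!${DOC_PART}"
echo "$DOC_ID"
```

Co-locating a tenant this way speeds up routed queries but also funnels all of that tenant's indexing traffic to the same shard, which is consistent with the slower inserts reported here.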



Re: Fw: How to secure solr-6.2.0 in standalone mode?

2017-05-08 Thread Rick Leir

Christian

Cool, you prompted me to learn something. Is your answer in the 
following cwiki link?


https://cwiki.apache.org/confluence/display/solr/Kerberos+Authentication+Plugin

cheers -- Rick

google apache kerberos basic auth

On 2017-05-07 05:21 PM, FOTACHE CHRISTIAN wrote:
  

   
Hi

I'm using solr-6.2.0 in standalone mode and I need to set up security with 
kerberos (???) for standalone.
I have previously set up basic authentication for solr-6.1.0, but it seems that 
solr-6.2.0 has a pretty different approach when it comes to security... I can't 
make it happen. Please help.
Thank you,

Christian Fotache Tel: 0728.297.207







Fw: How to secure solr-6.2.0 in standalone mode?

2017-05-07 Thread FOTACHE CHRISTIAN
 

  
Hi
I'm using solr-6.2.0 in standalone mode and I need to set up security with 
kerberos (???) for standalone.
I have previously set up basic authentication for solr-6.1.0, but it seems that 
solr-6.2.0 has a pretty different approach when it comes to security... I can't 
make it happen. Please help.
Thank you,

Christian Fotache Tel: 0728.297.207


   

Re: Fw: solr-user-unsubscribe

2017-02-01 Thread alessandro.benedetti
Gents,
have you read the instructions?
Have you sent an email to solr-user-unsubscr...@lucene.apache.org?

You don't need to send messages to the mailing list with that address as
content.
Just follow what's in the official Solr documentation page :

http://lucene.apache.org/solr/community.html#mailing-lists-irc

Thank you 



-
---
Alessandro Benedetti
Search Consultant, R&D Software Engineer, Director
Sease Ltd. - www.sease.io
--
View this message in context: 
http://lucene.472066.n3.nabble.com/solr-user-unsubscribe-tp4317823p4318255.html
Sent from the Solr - User mailing list archive at Nabble.com.


Fw: solr-user-unsubscribe

2017-02-01 Thread Syed Mudasseer
Can someone help me with unsubscription of solr emails?

I tried sending "unsubscribe" emails to "solr-user@lucene.apache.org" but no 
luck.


Thanks,

Mudasseer


From: Syed Mudasseer 
Sent: Monday, January 30, 2017 12:55 PM
To: solr-user@lucene.apache.org
Subject: solr-user-unsubscribe




FW: How to Add New Fields and Fields Types Programmatically Using Solrj

2016-07-18 Thread Jeniba Johnson

Hi,

I have configured solr5.3.1 and started Solr in schemaless mode. Using 
SolrInputDocument, I am able to add new fields in solrconfig.xml using Solrj.
How do I specify the field type of a field using Solrj?

Eg 

How can I add field type properties using SolrInputDocument programmatically 
using Solrj? Can anyone help with it?
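One option, sketched here with curl rather than SolrJ (the field name, type, and core name are all illustrative), is the Schema API, which SolrJ's SchemaRequest classes wrap; it lets you declare a field's type explicitly instead of relying on schemaless guessing:

```shell
# An add-field payload for the Schema API; "price", "tfloat" and
# "mycore" are example values, not from the original question.
SCHEMA_JSON='{"add-field":{"name":"price","type":"tfloat","stored":true}}'
echo "$SCHEMA_JSON"
# POST it to a running core's schema endpoint:
# curl -X POST -H 'Content-Type: application/json' \
#      --data "$SCHEMA_JSON" http://localhost:8983/solr/mycore/schema
```

The same payload shape is what a SolrJ SchemaRequest would send over the wire.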



Regards,
Jeniba Johnson




The contents of this e-mail and any attachment(s) may contain confidential or 
privileged information for the intended recipient(s). Unintended recipients are 
prohibited from taking action on the basis of information in this e-mail and 
using or disseminating the information, and must notify the sender and delete 
it from their system. L&T Infotech will not accept responsibility or liability 
for the accuracy or completeness of, or the presence of any virus or disabling 
code in, this e-mail.


Fw: SolrCloud and Zookeeper integration issue in .net application

2016-05-29 Thread shivendra tiwari
Hi,

This is my first time asking the question. I am facing some problems in Solr. 
Could you please help me out. Below is my question:

Currently I am using a lower Solr version and it is working fine, but now we are 
trying to configure SolrCloud for load balancing, so
I have configured 2 Solr nodes and 1 ZooKeeper node, created collections and 
shards, and I am getting data from SQL Server into Solr, but I need to call 
SolrCloud from a .NET application. Please tell me whether I need to call 
ZooKeeper or SolrCloud, how it should be configured in my .NET application, and 
what I need to configure. Please help, anyone. 

Warm Regards!

Shivendra Kumar Tiwari

From: shivendra.tiwari 
Sent: Friday, May 27, 2016 5:27 PM
To: solr-user@lucene.apache.org 
Subject: Fw: SolrCloud and Zookeeper integration issue in .net application

Hi,

This is my first time asking the question. I am facing some problems in Solr. 
Could you please help me out. Below is my question:

Currently I am using a lower Solr version and it is working fine, but now we are 
trying to configure SolrCloud for load balancing, so
I have configured 2 Solr nodes and 1 ZooKeeper node, created collections and 
shards, and I am getting data from SQL Server into Solr, but I need to call 
SolrCloud from a .NET application. Please tell me whether I need to call 
ZooKeeper or SolrCloud, how it should be configured in my .NET application, and 
what I need to configure. Please help, anyone. 

Warm Regards!

Shivendra Kumar Tiwari

From: shivendra.tiwari 
Sent: Friday, May 27, 2016 1:00 PM
To: solr-user@lucene.apache.org 
Subject: SolrCloud and Zookeeper integration issue in .net application

Hi,

Currently I am using a lower Solr version and it is working fine, but now we are 
trying to configure SolrCloud for load balancing, so
I have configured 2 Solr nodes and 1 ZooKeeper node, created collections and 
shards, and I am getting data from SQL Server into Solr, but I need to call 
SolrCloud from a .NET application. Please tell me whether I need to call 
ZooKeeper or SolrCloud, how it should be configured in my .NET application, and 
what I need to configure. Please help, anyone.


Warm Regards!
Shivendra Kumar Tiwari 


Re: Fw: SolrCloud and Zookeeper integration issue in .net application

2016-05-27 Thread Shawn Heisey
On 5/27/2016 5:57 AM, shivendra.tiwari wrote:
> Currently I am using Solr lower version it is working fine but now, we are 
> trying to configure SolrCloud for load balance so,
> I have configured-  2 Solr nodes and 1 ZooKeeper node, created collections 
> and 
> shards also getting data from SQL server on Solr but i need to call 
> SolrCloud in .net application. Please tell me, need to call zookeeper or 
> SolrCloud and How it will configure in my .net application and what i need 
> to configure it please help anyone. 

Unless you write the code to do it, a .net application will not know how
to talk to zookeeper.

There are multiple choices for .net clients which know how to talk to
Solr via HTTP:

https://wiki.apache.org/solr/IntegratingSolr#C.23_.2F_.NET

I do not know if any of these clients know how to talk to multiple Solr
servers so you don't have a single point of failure.  If they don't, you
will need a load balancer in front of your cloud.

Solr itself will load balance the requests across the cloud, but if your
application only talks HTTP to a single server:port, that instance of
Solr becomes a single point of failure.

For redundancy, you will need three zookeeper nodes:

http://zookeeper.apache.org/doc/r3.4.8/zookeeperAdmin.html#sc_zkMulitServerSetup
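For reference, a minimal zoo.cfg for such a three-node ensemble might look like the fragment below (the hostnames and data directory are placeholders):

```
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=zk1.example.com:2888:3888
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888
```

Each node also needs a myid file in dataDir containing its server number.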

Thanks,
Shawn



Fw: SolrCloud and Zookeeper integration issue in .net application

2016-05-27 Thread shivendra.tiwari
Hi,

This is my first time asking the question. I am facing some problems in Solr. 
Could you please help me out. Below is my question:

Currently I am using a lower Solr version and it is working fine, but now we are 
trying to configure SolrCloud for load balancing, so
I have configured 2 Solr nodes and 1 ZooKeeper node, created collections and 
shards, and I am getting data from SQL Server into Solr, but I need to call 
SolrCloud from a .NET application. Please tell me whether I need to call 
ZooKeeper or SolrCloud, how it should be configured in my .NET application, and 
what I need to configure. Please help, anyone. 

Warm Regards!

Shivendra Kumar Tiwari

From: shivendra.tiwari 
Sent: Friday, May 27, 2016 1:00 PM
To: solr-user@lucene.apache.org 
Subject: SolrCloud and Zookeeper integration issue in .net application

Hi,

Currently I am using a lower Solr version and it is working fine, but now we are 
trying to configure SolrCloud for load balancing, so
I have configured 2 Solr nodes and 1 ZooKeeper node, created collections and 
shards, and I am getting data from SQL Server into Solr, but I need to call 
SolrCloud from a .NET application. Please tell me whether I need to call 
ZooKeeper or SolrCloud, how it should be configured in my .NET application, and 
what I need to configure. Please help, anyone.


Warm Regards!
Shivendra Kumar Tiwari 


Fw: Select distinct multiple fields

2016-05-19 Thread thiaga rajan


Thanks Joel for the response. In our requirement there is some logic that needs 
to be implemented after fetching the results from Solr, which might have an 
impact on working out the pagination.

i.e., we have data structures like the one below (the nested structure is 
flattened). We need this kind of structure because we might need to support 
some other use cases, and a hierarchical structure will not support them. So we 
have flattened our data structure and need to achieve the search in the flat 
structure below:

| Level1 | Level2 | Level3 |
| 1 | 11 | 111 |
| 1 | 11 | 112 |
| 1 | 11 | 113 |
| 1 | 11 | 114 |

 Example - when the customer enters 11, we query this word across the entire 
data structure, 
so we will get all the records, including Level3. But ideally we need to 
select only 1, 11 (filtering on the current level and the parent level). Another 
problem is pagination: if we select, say, 10 records and then filter the 
levels/parents matching the search keyword, the number of records may be 
reduced, so we would need to send another request to Solr to get the next set 
and again work out the level and its parent matching the search keyword, until 
we reach the required row count. 

Rather than doing this, is there a way (some kind of plugin, like a 
SearchComponent) that would help with the above scenario, or a better way to 
achieve this in Solr? Kindly provide your valuable suggestions on this. 



   On Thursday, 19 May 2016 6:11 PM, Joel Bernstein  
wrote:
 

 
https://cwiki.apache.org/confluence/display/solr/Parallel+SQL+Interface?focusedCommentId=62697742#ParallelSQLInterface-SELECTDISTINCTQueries

Joel Bernstein
http://joelsolr.blogspot.com/

On Thu, May 19, 2016 at 1:10 PM, Joel Bernstein  wrote:

The SQL interface and Streaming Expressions support selecting multiple distinct
fields.
The SQL interface can use the JSON Facet API or MapReduce to provide the
results.
The facet function and the unique function are the Streaming Expressions that
the SQL interface calls.
Joel Bernstein
http://joelsolr.blogspot.com/

On Thu, May 19, 2016 at 12:41 PM, thiaga rajan 
 wrote:

Hi Team - I have seen that selecting multiple distinct fields is not possible
in Solr, and I have seen suggestions about faceting and grouping. I have some
questions. Is there any way, with any kind of plugin or custom implementation,
that we can achieve the same?
1. Using a plugin or a custom implementation, can we achieve selecting distinct
fields apart from facet and group by? Because pagination is the issue:
for example, if we set a page size of 10 and fetch 10 records (including
duplicates), then we may end up getting fewer than 10 results after
deduplication.
Any suggestions on this?


Re: FW: SolrCloud App Unit Testing

2016-03-19 Thread Shawn Heisey
On 3/19/2016 7:11 AM, GW wrote:
> I think the easiest way to write apps for Solr is with some kind of
> programming language and the REST API. Don't bother with the PHP or Perl
> modules. They are deprecated and beyond useless. just use the HTTP call
> that you see in Solr Admin. Mind the URL encoding when putting together
> your server calls.

The problem with using the REST-like API directly is that you have to
understand the API completely and construct every URL parameter
yourself.  You also have to understand the response format and write
code to extract the info you need from the response.
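For example, hand-building a request means encoding every parameter yourself before hitting the endpoint. A small Python sketch of that chore (the collection name and parameter values are invented for illustration):

```python
from urllib.parse import urlencode

base = "http://localhost:8983/solr/products/select"
params = {
    "q": 'name:"visual studio"~2',   # phrase query with slop: needs encoding
    "rows": 10,
    "start": 0,
    "fl": "id,name,score",
    "wt": "json",
}
url = base + "?" + urlencode(params)
# urlencode escapes the colon, quotes and spaces that would otherwise
# break the query string.
print(url)
```

A client library hides exactly this step, plus the mirror-image step of decoding the response.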

Using a pre-made client makes it so you don't have to do ANY of that. 
The request is built up from easily understood objects/methods, and all
the useful information from the response is loaded into data structures
that are fairly easy to understand if you know the language you're
writing in.

There are a LOT of PHP clients, and some of them have seen new releases as
recently as three months ago.  I wouldn't call that deprecated.

https://wiki.apache.org/solr/IntegratingSolr#PHP

There aren't as many clients for Perl.  I haven't checked the last
update for these yet:

https://wiki.apache.org/solr/IntegratingSolr#Perl

Thanks,
Shawn



Re: FW: SolrCloud App Unit Testing

2016-03-19 Thread GW
I think the easiest way to write apps for Solr is with some kind of
programming language and the REST API. Don't bother with the PHP or Perl
modules. They are deprecated and beyond useless. just use the HTTP call
that you see in Solr Admin. Mind the URL encoding when putting together
your server calls.

I've used Perl and PHP with Curl to create Solr Apps

PHP:

// Fetch a URL with cURL; returns the response body, or "nin" on a 404.
function fetchContent($URL){
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_URL, $URL);
    $data = curl_exec($ch);
    $httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    if ($httpCode == 404) {
        $data = "nin";
    }
    return $data;
}

// Note: the archive stripped the parameter names out of these URLs;
// rows/start/fl/wt below are a reconstruction of the usual Solr params.
switch ($filters) {

    case "":
        $url = "http://localhost:8983/solr/products/query?q=name:" . $urlsearch
             . "^20+OR+short_description:" . $urlsearch . "~6"
             . "&rows=13&start=" . $start . "&fl=*,score&wt=json";
        break;

    case "clothing":
        $url = "http://localhost:8983/solr/products/query?q=name:%22" . $urlsearch
             . "%22^20+OR+short_description:%22" . $urlsearch . "%22~6"
             . "&rows=13&start=" . $start . "&fl=*,score&wt=json";
        break;

    case "beauty cosmetics":
        $url = "http://localhost:8983/solr/products/query?q=name:" . $urlsearch
             . "^20+OR+short_description:" . $urlsearch . "~6"
             . "&rows=13&start=" . $start . "&fl=*,score&wt=json";
        break;
}

$my_data = fetchContent($url);


The data goes into $my_data as a JSON string in this case.


Your forward-facing app can sit behind an Apache round-robin in front of a
Solr system. This gives you insane scalability in both the client app and the
Solr service.


Hope that helps.

GW


On 17 March 2016 at 11:23, Madhire, Naveen 
wrote:

>
> Hi,
>
> I am writing a Solr Application, can anyone please let me know how to Unit
> test the application?
>
> I see we have MiniSolrCloudCluster class available in Solr, but I am
> confused about how to use that for Unit testing.
>
> How should I create a embedded server for unit testing?
>
>
>
> Thanks,
> Naveen
> 
>
> The information contained in this e-mail is confidential and/or
> proprietary to Capital One and/or its affiliates and may only be used
> solely in performance of work or services for Capital One. The information
> transmitted herewith is intended only for use by the individual or entity
> to which it is addressed. If the reader of this message is not the intended
> recipient, you are hereby notified that any review, retransmission,
> dissemination, distribution, copying or other use of, or taking of any
> action in reliance upon this information is strictly prohibited. If you
> have received this communication in error, please contact the sender and
> delete the material from your computer.
>


FW: SolrCloud App Unit Testing

2016-03-18 Thread Madhire, Naveen

Hi,

I am writing a Solr application. Can anyone please let me know how to unit test
the application?

I see we have the MiniSolrCloudCluster class available in Solr, but I am
confused about how to use it for unit testing.

How should I create an embedded server for unit testing?



Thanks,
Naveen




Re: FW: Difference Between Tokenizer and filter

2016-03-03 Thread Jack Krupansky
Try re-reading the doc on "Understanding Analyzers, Tokenizers, and
Filters" and then ask specific questions on specific statements made in the
doc:
https://cwiki.apache.org/confluence/display/solr/Understanding+Analyzers,+Tokenizers,+and+Filters

As far as on-disk format, a Solr user has absolutely zero reason to be
concerned about what format Lucene uses to store the index on disk. You are
certainly welcome to dive down to that level if you wish, but that is not
something worth discussing on this list. To a Solr user the index is simply
a list of terms at positions, both determined by the character filters,
tokenizer, and token filters of the analyzer. The format of that
information as stored in Lucene won't impact the behavior of your Solr app
in any way.

Again, to be clear, you need to be thoroughly familiar with that doc
section. It won't help you to try to guess questions to ask if you don't
have a basic understanding of what is stated on that doc page.

It might also help you visualize what the doc says by using the analysis
page of the Solr admin UI which will give you all the intermediate and
final results of the analysis process, the specific token/term text and
position at each step. But even that won't help if you are unable to grasp
what is stated on the basic doc page.

-- Jack Krupansky

On Thu, Mar 3, 2016 at 8:51 AM, G, Rajesh <r...@cebglobal.com> wrote:

> Hi Shawn,
>
> One last question on analyzer. If the format of the index on disk is not
> controlled by the tokenizer, or anything else in the analysis chain, then
> what does type="index" and type="query" in analyzer mean. Can you please
> help me in understanding?
>
> 
>
>  
>  
>
>  
>
>
>
> Corporate Executive Board India Private Limited. Registration No:
> U741040HR2004PTC035324. Registered office: 6th Floor, Tower B, DLF Building
> No.10 DLF Cyber City, Gurgaon, Haryana-122002, India.
>
> This e-mail and/or its attachments are intended only for the use of the
> addressee(s) and may contain confidential and legally privileged
> information belonging to CEB and/or its subsidiaries, including CEB
> subsidiaries that offer SHL Talent Measurement products and services. If
> you have received this e-mail in error, please notify the sender and
> immediately, destroy all copies of this email and its attachments. The
> publication, copying, in whole or in part, or use or dissemination in any
> other way of this e-mail and attachments by anyone other than the intended
> person(s) is prohibited.
>
> -Original Message-
> From: G, Rajesh
> Sent: Thursday, March 3, 2016 6:12 PM
> To: 'solr-user@lucene.apache.org' <solr-user@lucene.apache.org>
> Subject: RE: FW: Difference Between Tokenizer and filter
>
> Thanks Shawn. This helps
>
> -Original Message-
> From: Shawn Heisey [mailto:apa...@elyograg.org]
> Sent: Wednesday, March 2, 2016 11:04 PM
> To: solr-user@lucene.apache.org
> Subject: Re: FW: Difference Between Tokenizer and filter
>
> On 3/2/2016 9:55 AM, G, Rajesh wrote:
> > Thanks for your email Koji. Can you please explain what is the role of
> tokenizer and filter so I can understand why I should not have two
> tokenizer in index and I should have at least one tokenizer in query?
>
> You can't have two tokenizers.  It's not allowed.
>
> The only notable difference between a Tokenizer and a Filter is that a
> Tokenizer operates on an input that's a single string, turning it into a
> token stream, and a Filter uses a token stream for both input and output.
> A CharFilter uses a single string as both input and output.
>
> An analysis chain in the Solr schema (whether it's index or query) is
> composed of zero or more CharFilter entries, exactly one Tokenizer entry,
> and zero or more Filter entries.  Alternately, you can specify an Analyzer
> class, which is a lot like a Tokenizer.  An Analyzer is effectively the
> same thing as a tokenizer combined with filters.
>
> CharFilters run before the Tokenizer, and Filters run after the
> Tokenizer.  CharFilters, Tokenizers, Filters, and Analyzers are Lucene
> concepts.
>
> > My understanding is tokenizer is used to say how the content should be
> > indexed physically in file system. Filters are used to query result
>
> The format of the index on disk is not controlled by the tokenizer, or
> anything else in the analysis chain.  It is controlled by the Lucene
> codec.  Only a very small part of the codec is configurable in Solr, but
> normally this does not need configuring.  The codec defaults are
> appropriate for the majority of use cases.
>
> Thanks,
> Shawn
>
>


RE: FW: Difference Between Tokenizer and filter

2016-03-03 Thread Vanlerberghe, Luc
The "index" type analyzer is used when documents are indexed and determines 
what tokens end up in the index.
The "query" type analyzer is used to analyze the user query and determines what 
tokens will be searched for.

As an example: If you want to be able to match on synonyms, you could have a 
"query" type analyzer that replaces each token in the users' query with the 
list of corresponding synonyms. The "index" type analyzer should just index the 
tokens as they are.

(If you have a fixed list of synonyms, both could map each token to a 
pre-defined 'canonical' synonym and save both index and query time)

Luc

-Original Message-
From: G, Rajesh [mailto:r...@cebglobal.com] 
Sent: donderdag 3 maart 2016 14:51
To: solr-user@lucene.apache.org
Subject: RE: FW: Difference Between Tokenizer and filter

Hi Shawn,

One last question on the analyzer. If the format of the index on disk is not
controlled by the tokenizer, or anything else in the analysis chain, then what
do type="index" and type="query" in the analyzer mean? Can you please help me
understand?







-Original Message-
From: G, Rajesh
Sent: Thursday, March 3, 2016 6:12 PM
To: 'solr-user@lucene.apache.org' <solr-user@lucene.apache.org>
Subject: RE: FW: Difference Between Tokenizer and filter

Thanks Shawn. This helps

-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org]
Sent: Wednesday, March 2, 2016 11:04 PM
To: solr-user@lucene.apache.org
Subject: Re: FW: Difference Between Tokenizer and filter

On 3/2/2016 9:55 AM, G, Rajesh wrote:
> Thanks for your email Koji. Can you please explain what is the role of 
> tokenizer and filter so I can understand why I should not have two tokenizer 
> in index and I should have at least one tokenizer in query?

You can't have two tokenizers.  It's not allowed.

The only notable difference between a Tokenizer and a Filter is that a 
Tokenizer operates on an input that's a single string, turning it into a token 
stream, and a Filter uses a token stream for both input and output.  A 
CharFilter uses a single string as both input and output.

An analysis chain in the Solr schema (whether it's index or query) is composed 
of zero or more CharFilter entries, exactly one Tokenizer entry, and zero or 
more Filter entries.  Alternately, you can specify an Analyzer class, which is 
a lot like a Tokenizer.  An Analyzer is effectively the same thing as a 
tokenizer combined with filters.

CharFilters run before the Tokenizer, and Filters run after the Tokenizer.  
CharFilters, Tokenizers, Filters, and Analyzers are Lucene concepts.

> My understanding is tokenizer is used to say how the content should be
> indexed physically in file system. Filters are used to query result

The format of the index on disk is not controlled by the tokenizer, or anything 
else in the analysis chain.  It is controlled by the Lucene codec.  Only a very 
small part of the codec is configurable in Solr, but normally this does not 
need configuring.  The codec defaults are appropriate for the majority of use 
cases.

Thanks,
Shawn



RE: FW: Difference Between Tokenizer and filter

2016-03-03 Thread G, Rajesh
Hi Shawn,

One last question on the analyzer. If the format of the index on disk is not
controlled by the tokenizer, or anything else in the analysis chain, then what
do type="index" and type="query" in the analyzer mean? Can you please help me
understand?







-Original Message-
From: G, Rajesh
Sent: Thursday, March 3, 2016 6:12 PM
To: 'solr-user@lucene.apache.org' <solr-user@lucene.apache.org>
Subject: RE: FW: Difference Between Tokenizer and filter

Thanks Shawn. This helps

-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org]
Sent: Wednesday, March 2, 2016 11:04 PM
To: solr-user@lucene.apache.org
Subject: Re: FW: Difference Between Tokenizer and filter

On 3/2/2016 9:55 AM, G, Rajesh wrote:
> Thanks for your email Koji. Can you please explain what is the role of 
> tokenizer and filter so I can understand why I should not have two tokenizer 
> in index and I should have at least one tokenizer in query?

You can't have two tokenizers.  It's not allowed.

The only notable difference between a Tokenizer and a Filter is that a 
Tokenizer operates on an input that's a single string, turning it into a token 
stream, and a Filter uses a token stream for both input and output.  A 
CharFilter uses a single string as both input and output.

An analysis chain in the Solr schema (whether it's index or query) is composed 
of zero or more CharFilter entries, exactly one Tokenizer entry, and zero or 
more Filter entries.  Alternately, you can specify an Analyzer class, which is 
a lot like a Tokenizer.  An Analyzer is effectively the same thing as a 
tokenizer combined with filters.

CharFilters run before the Tokenizer, and Filters run after the Tokenizer.  
CharFilters, Tokenizers, Filters, and Analyzers are Lucene concepts.

> My understanding is tokenizer is used to say how the content should be
> indexed physically in file system. Filters are used to query result

The format of the index on disk is not controlled by the tokenizer, or anything 
else in the analysis chain.  It is controlled by the Lucene codec.  Only a very 
small part of the codec is configurable in Solr, but normally this does not 
need configuring.  The codec defaults are appropriate for the majority of use 
cases.

Thanks,
Shawn



RE: FW: Difference Between Tokenizer and filter

2016-03-03 Thread G, Rajesh
Thanks Shawn. This helps




-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org]
Sent: Wednesday, March 2, 2016 11:04 PM
To: solr-user@lucene.apache.org
Subject: Re: FW: Difference Between Tokenizer and filter

On 3/2/2016 9:55 AM, G, Rajesh wrote:
> Thanks for your email Koji. Can you please explain what is the role of 
> tokenizer and filter so I can understand why I should not have two tokenizer 
> in index and I should have at least one tokenizer in query?

You can't have two tokenizers.  It's not allowed.

The only notable difference between a Tokenizer and a Filter is that a 
Tokenizer operates on an input that's a single string, turning it into a token 
stream, and a Filter uses a token stream for both input and output.  A 
CharFilter uses a single string as both input and output.

An analysis chain in the Solr schema (whether it's index or query) is composed 
of zero or more CharFilter entries, exactly one Tokenizer entry, and zero or 
more Filter entries.  Alternately, you can specify an Analyzer class, which is 
a lot like a Tokenizer.  An Analyzer is effectively the same thing as a 
tokenizer combined with filters.

CharFilters run before the Tokenizer, and Filters run after the Tokenizer.  
CharFilters, Tokenizers, Filters, and Analyzers are Lucene concepts.

> My understanding is tokenizer is used to say how the content should be
> indexed physically in file system. Filters are used to query result

The format of the index on disk is not controlled by the tokenizer, or anything 
else in the analysis chain.  It is controlled by the Lucene codec.  Only a very 
small part of the codec is configurable in Solr, but normally this does not 
need configuring.  The codec defaults are appropriate for the majority of use 
cases.

Thanks,
Shawn



Re: FW: Difference Between Tokenizer and filter

2016-03-02 Thread Shawn Heisey
On 3/2/2016 9:55 AM, G, Rajesh wrote:
> Thanks for your email Koji. Can you please explain what is the role of 
> tokenizer and filter so I can understand why I should not have two tokenizer 
> in index and I should have at least one tokenizer in query?

You can't have two tokenizers.  It's not allowed.

The only notable difference between a Tokenizer and a Filter is that a
Tokenizer operates on an input that's a single string, turning it into a
token stream, and a Filter uses a token stream for both input and
output.  A CharFilter uses a single string as both input and output.

An analysis chain in the Solr schema (whether it's index or query) is
composed of zero or more CharFilter entries, exactly one Tokenizer
entry, and zero or more Filter entries.  Alternately, you can specify an
Analyzer class, which is a lot like a Tokenizer.  An Analyzer is
effectively the same thing as a tokenizer combined with filters.

CharFilters run before the Tokenizer, and Filters run after the
Tokenizer.  CharFilters, Tokenizers, Filters, and Analyzers are Lucene
concepts.
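The chain described above can be mimicked with plain functions to watch a string become a token stream. A toy Python sketch, not Lucene's actual classes:

```python
def char_filter(text):
    """CharFilter: runs before the tokenizer, string in, string out."""
    return text.replace("&", " and ")

def tokenizer(text):
    """Tokenizer: exactly one per chain, string in, token stream out."""
    return text.lower().split()

def stop_filter(tokens):
    """Filter: token stream in, token stream out."""
    stop = {"the", "and", "a"}
    return [t for t in tokens if t not in stop]

def analyze(text):
    tokens = tokenizer(char_filter(text))
    for f in (stop_filter,):          # zero or more filters, applied in order
        tokens = f(tokens)
    return tokens

print(analyze("The Cat & the Hat"))   # ['cat', 'hat']
```

The terms that come out of the last filter are what actually lands in the index (or what the query searches for); the on-disk format is the codec's business, as noted below.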

> My understanding is tokenizer is used to say how the content should be 
> indexed physically in file system. Filters are used to query result

The format of the index on disk is not controlled by the tokenizer, or
anything else in the analysis chain.  It is controlled by the Lucene
codec.  Only a very small part of the codec is configurable in Solr, but
normally this does not need configuring.  The codec defaults are
appropriate for the majority of use cases.

Thanks,
Shawn



RE: FW: Difference Between Tokenizer and filter

2016-03-02 Thread G, Rajesh
Thanks for your email, Koji. Can you please explain the roles of the tokenizer
and the filter, so I can understand why I should not have two tokenizers in the
index analyzer and why I should have at least one tokenizer in the query
analyzer?

My understanding is that the tokenizer is used to say how the content should be
physically indexed in the file system, and filters are used to query results.





-Original Message-
From: Koji Sekiguchi [mailto:koji.sekigu...@rondhuit.com]
Sent: Wednesday, March 2, 2016 8:10 PM
To: solr-user@lucene.apache.org
Subject: Re: FW: Difference Between Tokenizer and filter

Hi,

<analyzer> ... </analyzer> must have one and only one <tokenizer> and it can
have zero or more <filter>s. From the point of view of the rules, your
<analyzer type="index"> ... </analyzer> is not correct because it has more than
one <tokenizer>, and <analyzer type="query"> ... </analyzer> is not correct as
well because it has no <tokenizer>.

Koji

On 2016/03/02 20:25, G, Rajesh wrote:
> Hi Team,
>
> Can you please clarify the below. My understanding is that the tokenizer is
> used to say how the content should be physically indexed in the file system,
> and filters are used to query results. The lines below are from my setup. But
> I have seen examples that include filters inside <analyzer type="index"> and
> a tokenizer in <analyzer type="query">, which confused me.
>
> <fieldType ... positionIncrementGap="100">
>   <analyzer type="index">
>     <tokenizer class="solr.LowerCaseTokenizerFactory"/>
>     <tokenizer class="solr.StandardTokenizerFactory"/>
>     <tokenizer class="solr.NGramTokenizerFactory" minGramSize="2" maxGramSize="2"/>
>   </analyzer>
>   <analyzer type="query">
>     <filter class="..." minGramSize="2" maxGramSize="2"/>
>   </analyzer>
> </fieldType>
>
> My goal is to user solr and find the best match among the technology
> names e.g Actual tech name
>
> 1.   Microsoft Visual Studio
>
> 2.   Microsoft Internet Explorer
>
> 3.   Microsoft Visio
>
> When user types Microsoft Visal Studio user should get Microsoft
> Visual Studio. Basically misspelled and jumble words should match
> closest tech name
>
>
>
>
>
>
>



Re: FW: Difference Between Tokenizer and filter

2016-03-02 Thread Koji Sekiguchi

Hi,

<analyzer> ... </analyzer> must have one and only one <tokenizer> and
it can have zero or more <filter>s. From the point of view of the
rules, your <analyzer type="index"> ... </analyzer> is not correct
because it has more than one <tokenizer>, and <analyzer type="query">
... </analyzer> is not correct as well because it has no <tokenizer>.

Koji

On 2016/03/02 20:25, G, Rajesh wrote:

Hi Team,

Can you please clarify the below. My understanding is that the tokenizer is used to say how the
content should be physically indexed in the file system, and filters are used to query results.
The lines below are from my setup. But I have seen examples that include filters inside
<analyzer type="index"> and a tokenizer in <analyzer type="query">, which confused me.

<fieldType ... positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.LowerCaseTokenizerFactory"/>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <tokenizer class="solr.NGramTokenizerFactory" minGramSize="2" maxGramSize="2"/>
  </analyzer>
  <analyzer type="query">
    <filter class="..." minGramSize="2" maxGramSize="2"/>
  </analyzer>
</fieldType>

My goal is to use Solr to find the best match among the technology names, e.g.
the actual tech names:

1.   Microsoft Visual Studio

2.   Microsoft Internet Explorer

3.   Microsoft Visio

When the user types "Microsoft Visal Studio" they should get "Microsoft Visual Studio".
Basically, misspelled and jumbled words should match the closest tech name.











FW: Difference Between Tokenizer and filter

2016-03-02 Thread G, Rajesh
Hi Team,

Can you please clarify the below. My understanding is that the tokenizer is used to say
how the content should be physically indexed in the file system, and filters are used
to query results. The lines below are from my setup. But I have seen examples that
include filters inside <analyzer type="index"> and a tokenizer in
<analyzer type="query">, which confused me.



<fieldType ... positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.LowerCaseTokenizerFactory"/>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <tokenizer class="solr.NGramTokenizerFactory" minGramSize="2" maxGramSize="2"/>
  </analyzer>
  <analyzer type="query">
    <filter class="..." minGramSize="2" maxGramSize="2"/>
  </analyzer>
</fieldType>

My goal is to use Solr to find the best match among the technology names, e.g.
the actual tech names:

1.   Microsoft Visual Studio

2.   Microsoft Internet Explorer

3.   Microsoft Visio

When the user types "Microsoft Visal Studio" they should get "Microsoft Visual Studio".
Basically, misspelled and jumbled words should match the closest tech name.
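To see why character n-grams (such as Solr's NGram analysis with minGramSize=2 and maxGramSize=2) catch misspellings like "Visal", the matching can be sketched outside Solr as a Dice coefficient over character bigrams. A toy Python sketch, not what Solr scores internally:

```python
def bigrams(s):
    s = s.lower()
    return {s[i:i + 2] for i in range(len(s) - 1)}

def dice(a, b):
    """Dice coefficient over character bigram sets, in [0, 1]."""
    A, B = bigrams(a), bigrams(b)
    return 2 * len(A & B) / (len(A) + len(B))

names = ["Microsoft Visual Studio",
         "Microsoft Internet Explorer",
         "Microsoft Visio"]
best = max(names, key=lambda n: dice("Microsoft Visal Studio", n))
print(best)   # Microsoft Visual Studio
```

Most bigrams of the misspelled query survive in the correct name, so it outranks the shorter "Microsoft Visio".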









Re: FW: Difference Between Tokenizer and filter

2016-03-02 Thread Emir Arnautovic

Hi Rajesh,
The processing flow is the same for both indexing and querying; what is
compared at the end are the resulting tokens. In general the flow is:
text -> char filter -> filtered text -> tokenizer -> tokens -> filter1 ->
tokens ... -> filterN -> tokens.


You can read more about analysis chain in Solr wiki: 
https://cwiki.apache.org/confluence/display/solr/Understanding+Analyzers,+Tokenizers,+and+Filters


Regards,
Emir

--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr & Elasticsearch Support * http://sematext.com/



On 02.03.2016 10:00, G, Rajesh wrote:

Hi Team,

Can you please clarify the below. My understanding is that the tokenizer is used to say how the
content should be physically indexed in the file system, and filters are used to query results.
The lines below are from my setup. But I have seen examples that include filters inside
<analyzer type="index"> and a tokenizer in <analyzer type="query">, which confused me.

<fieldType ... positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.LowerCaseTokenizerFactory"/>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <tokenizer class="solr.NGramTokenizerFactory" minGramSize="2" maxGramSize="2"/>
  </analyzer>
  <analyzer type="query">
    <filter class="..." minGramSize="2" maxGramSize="2"/>
  </analyzer>
</fieldType>

My goal is to use Solr to find the best match among the technology names, e.g.
the actual tech names:

1.   Microsoft Visual Studio

2.   Microsoft Internet Explorer

3.   Microsoft Visio

When the user types "Microsoft Visal Studio" they should get "Microsoft Visual Studio".
Basically, misspelled and jumbled words should match the closest tech name.









FW: Difference Between Tokenizer and filter

2016-03-02 Thread G, Rajesh
Hi Team,

Can you please clarify the below. My understanding is that the tokenizer is used to say
how the content should be physically indexed in the file system, and filters are used
to query results. The lines below are from my setup. But I have seen examples that
include filters inside <analyzer type="index"> and a tokenizer in
<analyzer type="query">, which confused me.



<fieldType ... positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.LowerCaseTokenizerFactory"/>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <tokenizer class="solr.NGramTokenizerFactory" minGramSize="2" maxGramSize="2"/>
  </analyzer>
  <analyzer type="query">
    <filter class="..." minGramSize="2" maxGramSize="2"/>
  </analyzer>
</fieldType>


My goal is to use Solr to find the best match among the technology names, e.g.
the actual tech names:

1.   Microsoft Visual Studio

2.   Microsoft Internet Explorer

3.   Microsoft Visio

When the user types "Microsoft Visal Studio" they should get "Microsoft Visual Studio".
Basically, misspelled and jumbled words should match the closest tech name.












FW: Issue while setting Solr on Slider / YARN

2015-08-27 Thread Vijay Bhoomireddy
 

Hi Tim,

 

For some reason, I was not receiving messages from Solr mailing list, though
I could post it to the list. Now I got that sorted. For my below query, I
saw your response on the mailing list. Below is the snippet of your
response:

 

Hi Vijay,

 

Verify the ResourceManager URL and try passing the --manager param to

explicitly set the ResourceManager URL during the create step.

 

Cheers,

Tim

 

 

I verified the Resource Manager URL and it's pointing correctly. In my case,
it's myhdpcluster.com:8032. I even tried passing the --manager param to the
slider create solr command, but without any luck. So I am not sure where it is
going wrong. Can you please help me understand if I need to modify
something else to get this working? I am also wondering whether
${AGENT_WORK_ROOT} in the step below has any impact? I haven't changed this
line in the json file. Should it be modified?

 

I am able to log in to the Ambari console and see all the services running fine,
including YARN-related ones and ZooKeeper. Also, I can log in to the Resource
Manager's web UI and it's working fine. Can you please let me know
what / where it could have gone wrong?

 

 

Thanks & Regards

Vijay

 

From: Vijay Bhoomireddy [mailto:vijaya.bhoomire...@whishworks.com] 
Sent: 17 August 2015 11:37
To: solr-user@lucene.apache.org mailto:solr-user@lucene.apache.org 
Subject: Issue while setting Solr on Slider / YARN

 

Hi,

 

Any help on this please?

 

Thanks & Regards

Vijay

 

From: Vijay Bhoomireddy [mailto:vijaya.bhoomire...@whishworks.com] 
Sent: 14 August 2015 18:03
To: solr-user@lucene.apache.org mailto:solr-user@lucene.apache.org 
Subject: Issue while setting Solr on Slider / YARN

 

Hi,

 

We have a requirement to set up SolrCloud to work along with Hadoop.
Earlier, I could set up a SolrCloud cluster separately alongside the Hadoop
cluster, i.e. two logical clusters sitting next to each other,
both relying on HDFS.

 

However, the experiment now I am trying to do is to install SolrCloud on
YARN using Apache Slider. I am following LucidWorks blog at
https://github.com/LucidWorks/solr-slider for the same. I already have a
Hortonworks HDP cluster. When I try to setup Solr on my HDP cluster using
Slider, I am facing some issues.

 

As per the blog, I have performed the below steps:

 

1.   I have setup a single node HDP cluster for which the hostname is
myhdpcluster.com with all the essential services including ZooKeeper and
Slider running on it.

2.   Updated the resource manager address and port in slider-client.xml
present under /usr/hdp/current/slider-client/conf

<property>

<name>yarn.resourcemanager.address</name>

<value>myhdpcluster.com:8032</value>

</property>

3.   Cloned the LucidWorks git and moved it under /home/hdfs/solr-slider

4.   Downloaded solr latest stable distribution and renamed it as
solr.tgz and placed it under /home/hdfs/solr-slider/package/files/solr.tgz

5.   Next ran the following command from within the
/home/hdfs/solr-slider folder

zip -r solr-on-yarn.zip metainfo.xml package/

6.   Next ran the following command as hdfs user

slider install-package --replacepkg --name solr --package
/home/hdfs/solr-slider/solr-on-yarn.zip

7.   Modified the following settings in the
/home/hdfs/solr-slider/appConfig-default.json file

"java_home": "MY_JAVA_HOME_LOCATION",

"site.global.app_root": "${AGENT_WORK_ROOT}/app/install/solr-5.2.1",
(Should this be changed to any other value?)

"site.global.zk_host": "myhdpcluster.com:2181",
8.   Set yarn.component.instances to 1 in resources-default.json file

9.   Next ran the following command

slider create solr --template /home/hdfs/solr-slider/appConfig-default.json
--resources /home/hdfs/solr-slider/resources-default.json

 

During this step, I am seeing a message: INFO client.RMProxy - Connecting to
ResourceManager at myhdpcluster.com/10.0.2.15:8032 

 
INFO ipc.Client - Retrying connect to server:
myhdpcluster.com/10.0.2.15:8032. Already tried 0 time(s); 

 

This message repeats 50 times, pauses for a couple of seconds, and then
prints again, looping eternally. I am not sure where the problem is.

 

Can anyone please help me out to get away from this issue and help me setup
Solr on Slider/YARN?

 

Thanks & Regards

Vijay


-- 
The contents of this e-mail are confidential and for the exclusive use of 
the intended recipient. If you receive this e-mail in error please delete 
it from your system immediately and notify us either by e-mail or 
telephone. You should not copy, forward or otherwise disclose the content 
of the e-mail. The views expressed in this communication may not 
necessarily be the view held by WHISHWORKS.


Re: FW: Issue while setting Solr on Slider / YARN

2015-08-27 Thread Timothy Potter
Hi Vijay,

I'm not sure what's wrong here ... have you posted to the Slider
mailing list? Also, which version of Java are you using when
interacting with Slider? I know it had some issues with Java 8 at one
point. Which version of Slider so I can try to reproduce ...

Cheers,
Tim

On Thu, Aug 27, 2015 at 11:05 AM, Vijay Bhoomireddy
vijaya.bhoomire...@whishworks.com wrote:


 Hi Tim,



 For some reason, I was not receiving messages from Solr mailing list, though
 I could post it to the list. Now I got that sorted. For my below query, I
 saw your response on the mailing list. Below is the snippet of your
 response:



 Hi Vijay,



 Verify the ResourceManager URL and try passing the --manager param to

 explicitly set the ResourceManager URL during the create step.



 Cheers,

 Tim





 I verified the Resource Manager URL and its pointing correctly. In my case,
 it's myhdpcluster.com:8032 I even tried passing --manager param to the
 slider create solr command, but without any luck. So not sure where it's
 getting wrong. Can you please help me in understanding if I need to modify
 something else to get this working? I am just wondering whether
 ${AGENT_WORK_ROOT} in step below has any impact? I haven't changed this
 line in the json file. Should this be modified?



 I am able to login to Ambari console and see all the services running fine ,
 including YARN related and ZooKeeper. Also, I could login to Resource
 Manager's web UI as well and its working fine. Can you please let me know
 what / where it could have gone wrong?





 Thanks & Regards

 Vijay



 From: Vijay Bhoomireddy [mailto:vijaya.bhoomire...@whishworks.com]
 Sent: 17 August 2015 11:37
 To: solr-user@lucene.apache.org mailto:solr-user@lucene.apache.org
 Subject: Issue while setting Solr on Slider / YARN



 Hi,



 Any help on this please?



 Thanks & Regards

 Vijay



 From: Vijay Bhoomireddy [mailto:vijaya.bhoomire...@whishworks.com]
 Sent: 14 August 2015 18:03
 To: solr-user@lucene.apache.org mailto:solr-user@lucene.apache.org
 Subject: Issue while setting Solr on Slider / YARN



 Hi,



 We have a requirement of setting up of Solr Cloud to work along with Hadoop.
 Earlier, I could setup a SolrCloud cluster separately alongside the Hadoop
 cluster i.e. it looks like two logical  clusters sitting next to each other,
 both relying on HDFS.



 However, the experiment now I am trying to do is to install SolrCloud on
 YARN using Apache Slider. I am following LucidWorks blog at
 https://github.com/LucidWorks/solr-slider for the same. I already have a
 Hortonworks HDP cluster. When I try to setup Solr on my HDP cluster using
 Slider, I am facing some issues.



 As per the blog, I have performed the below steps:



 1.   I have setup a single node HDP cluster for which the hostname is
 myhdpcluster.com with all the essential services including ZooKeeper and
 Slider running on it.

 2.   Updated the resource manager address and port in slider-client.xml
 present under /usr/hdp/current/slider-client/conf

 <property>

 <name>yarn.resourcemanager.address</name>

 <value>myhdpcluster.com:8032</value>

 </property>

 3.   Cloned the LucidWorks git and moved it under /home/hdfs/solr-slider

 4.   Downloaded solr latest stable distribution and renamed it as
 solr.tgz and placed it under /home/hdfs/solr-slider/package/files/solr.tgz

 5.   Next ran the following command from within the
 /home/hdfs/solr-slider folder

 zip -r solr-on-yarn.zip metainfo.xml package/

 6.   Next ran the following command as hdfs user

 slider install-package --replacepkg --name solr --package
 /home/hdfs/solr-slider/solr-on-yarn.zip

 7.   Modified the following settings in the
 /home/hdfs/solr-slider/appConfig-default.json file

 "java_home": "MY_JAVA_HOME_LOCATION",

 "site.global.app_root": "${AGENT_WORK_ROOT}/app/install/solr-5.2.1",
 (Should this be changed to any other value?)

 "site.global.zk_host": "myhdpcluster.com:2181",

 8.   Set yarn.component.instances to 1 in resources-default.json file

 9.   Next ran the following command

 slider create solr --template /home/hdfs/solr-slider/appConfig-default.json
 --resources /home/hdfs/solr-slider/resources-default.json



 During this step, I am seeing an message INFO client.RMProxy - Connecting to
 ResourceManager at myhdpcluster.com/10.0.2.15:8032


 INFO ipc.Client - Retrying connect to server:
 myhdpcluster.com/10.0.2.15:8032. Already tried 0 time(s);



 This message keeps repeating for 50 times and then pauses for a couple of
 seconds and then prints the same message in a loop eternally. Not sure on
 where the problem is.



 Can anyone please help me out to get away from this issue and help me setup
 Solr on Slider/YARN?



 Thanks & Regards

 Vijay


 --
 The contents of this e-mail are confidential and for the exclusive use of
 the intended recipient. If you receive this e-mail in error please delete
 it from your system immediately and notify us either by e-mail or
 telephone. 

RE: FW: Issue while setting Solr on Slider / YARN

2015-08-27 Thread Vijay Bhoomireddy
Tim,

Here is the complete content of the appConfig-default.json file. I haven’t 
worked with Slider so far, so not very sure if some mistake has crept into this 
file while modifying it as per the changes mentioned by Lucidworks on the 
Github page. However, I tried to simulate the steps mentioned on the Github 
Solr-Slider page. As you can see from below file, I am using Java 7. I have 
installed HDP 2.3 GA version which has Slider version 0.80 included in it.

{
  "schema": "http://example.org/specification/v2.0.0",
  "metadata": {
  },
  "global": {
    "application.def": "/home/hdfs/solr-slider/solr-on-yarn.zip",
    "java_home": "/usr/lib/jvm/jre-1.7.0-openjdk.x86_64",
    "site.global.app_root": "${AGENT_WORK_ROOT}/app/install/solr-5.2.1-SNAPSHOT",
    "site.global.zk_host": "myhdpcluster.com:2181",
    "site.global.solr_host": "${SOLR_HOST}",
    "site.global.listen_port": "${SOLR.ALLOCATED_PORT}",
    "site.global.xmx_val": "1g",
    "site.global.xms_val": "1g",
    "site.global.gc_tune": "-XX:NewRatio=3 -XX:SurvivorRatio=4 -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 -XX:+CMSScavengeBeforeRemark -XX:PretenureSizeThreshold=64m -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 -XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled -XX:+ParallelRefProcEnabled -verbose:gc -XX:+PrintHeapAtGC -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime",
    "site.global.zk_timeout": "15000",
    "site.global.server_module": "--module=http",
    "site.global.stop_key": "solrrocks",
    "site.global.solr_opts": ""
  },
  "components": {
    "slider-appmaster": {
      "jvm.heapsize": "512M"
    },
    "SOLR": {
    }
  }
}

Thanks & Regards
Vijay

-Original Message-
From: Timothy Potter [mailto:thelabd...@gmail.com] 
Sent: 27 August 2015 21:44
To: solr-user@lucene.apache.org
Subject: Re: FW: Issue while setting Solr on Slider / YARN

Hi Vijay,

I'm not sure what's wrong here ... have you posted to the Slider mailing list? 
Also, which version of Java are you using when interacting with Slider? I know 
it had some issues with Java 8 at one point. Which version of Slider so I can 
try to reproduce ...

Cheers,
Tim

On Thu, Aug 27, 2015 at 11:05 AM, Vijay Bhoomireddy 
vijaya.bhoomire...@whishworks.com wrote:


 Hi Tim,



 For some reason, I was not receiving messages from Solr mailing list, 
 though I could post it to the list. Now I got that sorted. For my 
 below query, I saw your response on the mailing list. Below is the 
 snippet of your
 response:



 Hi Vijay,



 Verify the ResourceManager URL and try passing the --manager param to

 explicitly set the ResourceManager URL during the create step.



 Cheers,

 Tim





 I verified the Resource Manager URL and its pointing correctly. In my 
 case, it's myhdpcluster.com:8032 I even tried passing --manager param 
 to the slider create solr command, but without any luck. So not sure 
 where it's getting wrong. Can you please help me in understanding if I 
 need to modify something else to get this working? I am just wondering 
 whether ${AGENT_WORK_ROOT} in step below has any impact? I haven't 
 changed this line in the json file. Should this be modified?



 I am able to login to Ambari console and see all the services running 
 fine , including YARN related and ZooKeeper. Also, I could login to 
 Resource Manager's web UI as well and its working fine. Can you please 
 let me know what / where it could have gone wrong?





 Thanks & Regards

 Vijay



 From: Vijay Bhoomireddy [mailto:vijaya.bhoomire...@whishworks.com]
 Sent: 17 August 2015 11:37
 To: solr-user@lucene.apache.org mailto:solr-user@lucene.apache.org
 Subject: Issue while setting Solr on Slider / YARN



 Hi,



 Any help on this please?



 Thanks & Regards

 Vijay



 From: Vijay Bhoomireddy [mailto:vijaya.bhoomire...@whishworks.com]
 Sent: 14 August 2015 18:03
 To: solr-user@lucene.apache.org mailto:solr-user@lucene.apache.org
 Subject: Issue while setting Solr on Slider / YARN



 Hi,



 We have a requirement of setting up of Solr Cloud to work along with Hadoop.
 Earlier, I could setup a SolrCloud cluster separately alongside the 
 Hadoop cluster i.e. it looks like two logical  clusters sitting next 
 to each other, both relying on HDFS.



 However, the experiment now I am trying to do is to install SolrCloud 
 on YARN using Apache Slider. I am following LucidWorks blog at 
 https://github.com/LucidWorks/solr-slider for the same. I already have 
 a Hortonworks HDP cluster. When I try to setup Solr on my HDP cluster 
 using Slider, I am facing some issues.



 As per the blog, I have performed the below steps:



 1.   I have setup a single node HDP cluster for which the hostname is
 myhdpcluster.com with all the essential services including ZooKeeper 
 and Slider running on it.

 2.   Updated the resource manager address and port in slider-client.xml

Re: FW: Performance warning overlapping onDeckSearchers

2015-08-14 Thread Shawn Heisey
On 8/11/2015 6:15 AM, Adrian Liew wrote:
 I am not sure if you know much about the Sitecore WCMS Platform and we
 are experiencing some issues reported by Solr Admin with regards to
 the following when we try to publish some content in order to trigger
 an index update to our sitecore_web_index SolrCore:

  

 1. 8/11/2015, 7:46:26 PM

 WARN

 null

 SolrCore

 [sitecore_master_index] Error opening new searcher. exceeded limit of
 maxWarmingSearchers=2,​ try again later.


 2. 8/11/2015, 7:46:26 PM

 ERROR

 null

 SolrCore

 org.apache.solr.common.SolrException: Error opening new searcher.
 exceeded limit of maxWarmingSearchers=2,​ try again later.

  

 I also noticed that I had been receiving a quick succession of commits
 over time which I think is causing the above. I read somewhere in one
 of your articles after googling abit on the problem, and you did
 mention enabling autocommits may work?


First, a little housekeeping regarding the way this message found its
way to me:

http://people.apache.org/~hossman/#private_q

I'm redirecting this reply to the list (with a bcc to you) because
there's nothing sensitive in your question or my reply, and this
information may help others.

Those log messages are almost always caused by committing too
frequently.  Specifically, commits with openSearcher=true (which is the
default), which makes changes to the index visible to queries.  The
commits need to be adjusted so they are as infrequent as you can
possibly make them.

A bunch of good information about commits and their pitfalls can be
found here:

http://lucidworks.com/blog/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/

In particular, pay attention to the part about not listening to your
product manager who says "we need no more than 1 second latency."  This
is a common design requirement passed down from managers and executives
who have no idea how a search index works.  When actual user behavior
and expectations are considered, making changes visible within one
minute or five minutes is usually enough.  Even for unusual situations,
ten or fifteen seconds of latency is usually OK.
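[Editor's note: in practice that advice usually translates into the commit settings in solrconfig.xml; a sketch with illustrative intervals (one minute here; tune to your own latency tolerance):

```xml
<!-- hard commit: flush segments to disk regularly, without opening a new searcher -->
<autoCommit>
  <maxTime>60000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

<!-- soft commit: make changes visible to queries at most once per minute -->
<autoSoftCommit>
  <maxTime>60000</maxTime>
</autoSoftCommit>
```

With explicit client commits disabled, this keeps searcher warming infrequent enough to avoid the overlapping onDeckSearchers warning.]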

I checked the solr-user list history for your message threads, and tried
to find this Sitecore software you mentioned.  It appears that even
downloading the binary requires registration, which I'm not going to do
for software I have no plans to use myself, and I'm more interested in
the source code than the binary.  I couldn't find any source code.  It
looks like this is a commercial product that doesn't provide code.

Are there any commit-related settings in your configuration for the
sitecore product?  I found this:

http://www.sitecore.net/learn/blogs/technical-blogs/sitecore-7-development-team/posts/2013/04/sitecore-7-commit-policies.aspx

Regardless of what happens with this message and its replies, I
recommend that you open a trouble ticket with Sitecore, since there is
probably a good chance that you've paid them for their software.  If
they are competent, they will know about how Solr commits work and will
have some suggestions for you to try.

If it turns out that the Sitecore commit settings are reasonable, then
the problem is likely happening because commits are happening very
slowly.  We may need to look into how you can improve general Solr
performance.  There is a very small amount of information about slow
commits on this page:

https://wiki.apache.org/solr/SolrPerformanceProblems#Slow_commits

Thanks,
Shawn



FW: TIKA OCR not working

2015-04-27 Thread Allison, Timothy B.
Trung,

I haven't experimented with our OCR parser yet, but this should give a good 
start: https://wiki.apache.org/tika/TikaOCR .

Have you installed tesseract?

Tika colleagues,
  Any other tips?  What else has to be configured and how?

-Original Message-
From: trung.ht [mailto:trung...@anlab.vn] 
Sent: Friday, April 24, 2015 11:22 PM
To: solr-user@lucene.apache.org
Subject: Re: TIKA OCR not working

HI everyone,

Does anyone have the answer for this problem :)?


I saw the document of Tika. Tika 1.7 support OCR and Solr 5.0 use Tika 1.7,
 but it looks like it does not work. Does anyone know that TIKA OCR works
 automatically with Solr or I have to change some settings?


Trung.


 It's not clear if OCR would happen automatically in Solr Cell, or if
 changes to Solr would be needed.

 For Tika OCR info, see:

 https://issues.apache.org/jira/browse/TIKA-93
 https://wiki.apache.org/tika/TikaOCR



 -- Jack Krupansky

 On Thu, Apr 23, 2015 at 9:14 AM, Alexandre Rafalovitch 
 arafa...@gmail.com
 wrote:

  I think OCR is in Tika 1.8, so might be in Solr 5.?. But I haven't seen
 it
  in use yet.
 
  Regards,
  Alex
  On 23 Apr 2015 10:24 pm, Ahmet Arslan iori...@yahoo.com.invalid
 wrote:
 
   Hi Trung,
  
   I didn't know about OCR capabilities of tika.
    Someone who is familiar with solr-cell can inform us whether this
   functionality is added to solr or not.
  
   Ahmet
  
  
  
   On Thursday, April 23, 2015 2:06 PM, trung.ht trung...@anlab.vn
 wrote:
   Hi Ahmet,
  
   I used a png file, not a pdf file. From the document, I understand
 that
   solr will post the file to tika, and since tika 1.7, OCR is included.
 Is
   there something I misunderstood.
  
   Trung.
  
  
   On Thu, Apr 23, 2015 at 5:59 PM, Ahmet Arslan
 iori...@yahoo.com.invalid
  
   wrote:
  
Hi Trung,
   
 solr-cell (tika) does not do OCR. It cannot extract text from
 image-based pdfs.
   
Ahmet
   
   
   
On Thursday, April 23, 2015 7:33 AM, trung.ht trung...@anlab.vn
  wrote:
   
   
   
Hi,
   
I want to use solr to index some scanned document, after settings
 solr
document with a two field content and filename, I tried to
 upload
  the
attached file, but it seems that the content of the file is only
 \n \n
\n.
But if I used the tesseract from command line I got the result
  correctly.
   
The log when solr receive my request:
---
INFO  - 2015-04-23 03:49:25.941;
org.apache.solr.update.processor.LogUpdateProcessor; [collection1]
webapp=/solr path=/update/extract params={literal.groupid=2json.nl
   =flat
resource.name=phplNiPrsliteral.id
   
  
 
 =4commit=trueextractOnly=falseliteral.historyid=4omitHeader=trueliteral.userid=3literal.createddate=2015-04-22T15:00:00Zfmap.content=contentwt=jsonliteral.filename=\\trunght\test\tesseract_3.png}
   

   
The document when I check on solr admin page:
-
{ groupid: 2, id: 4, historyid: 4, userid: 3,
 createddate:
2015-04-22T15:00:00Z, filename:
  trunght\\test\\tesseract_3.png,
autocomplete_text: [ trunght\\test\\tesseract_3.png ],
   content: 
\n \n  \n  \n  \n  \n  \n  \n  \n  \n  \n  \n  \n  \n  \n  \n  \n
 \n
  \n
\n  \n  \n  \n  \n  \n  \n  \n  \n  \n  \n  \n  \n  \n  \n  \n  \n
 \n
  ,
_version_: 1499213034586898400 }
   
---
   
Since I am a solr newbie I do not know where to look, can anyone
 give
  me
an advice for where to look for error or settings to make it work.
Thanks in advanced.
   
Trung.
   
  
 





RE: FW: NRTCachingDirectory threads stuck

2015-02-23 Thread Moshe Recanati
Thank you.



Regards,
Moshe Recanati
SVP Engineering
Office + 972-73-2617564
Mobile  + 972-52-6194481
Skype    :  recanati

More at:  www.kmslh.com | LinkedIn | FB


-Original Message-
From: Mikhail Khludnev [mailto:mkhlud...@griddynamics.com] 
Sent: Sunday, February 22, 2015 6:16 PM
To: solr-user
Subject: Re: FW: NRTCachingDirectory threads stuck

On Sun, Feb 22, 2015 at 1:54 PM, Moshe Recanati mos...@kmslh.com wrote:

 Hi Mikhail,
 Thank you.
 1. Regarding jetty threads - How I can reduce them?


https://wiki.eclipse.org/Jetty/Howto/High_Load#Thread_Pool note, you'll get
503 or something when pool size is exceeded.


 2. Is it related to the fact we're running Solr 4.0 in parallel on 
 this machine?


Are their index dirs different? Nevertheless, running something on the same machine 
leads to resource contention. What does `top` say?



 Thank you


 Regards,
 Moshe Recanati
 SVP Engineering
 Office + 972-73-2617564
 Mobile  + 972-52-6194481
 Skype:  recanati

 More at:  www.kmslh.com | LinkedIn | FB


 -Original Message-
 From: Mikhail Khludnev [mailto:mkhlud...@griddynamics.com]
 Sent: Sunday, February 22, 2015 11:18 AM
 To: solr-user
 Subject: Re: FW: NRTCachingDirectory threads stuck

 Hello,

 I checked 20020.tdump. From the update perspective it's OK: I see the 
 single thread that committed and is awaiting a new searcher. There are a 
 few very bad signs:
 - there are many threads executing search requests in parallel, let's 
 say roughly a hundred of them. This is a dead end. Consider limiting the 
 number of jetty threads, starting from the number of cores available;
 - the heap is full, which is a no-go for Java. Either increase it, reduce 
 the load, or make sure there is no leak;
 - I see many threads executing Luke handler code; it might be a really 
 wrong setup, or the regular approach for Solr replication. I'm not sure here.


 On Sun, Feb 22, 2015 at 9:57 AM, Moshe Recanati mos...@kmslh.com wrote:

   Hi,
 
  I saw message rejected because of attachment.
 
  I uploaded data to drive
 
 
  https://drive.google.com/file/d/0B0GR0M-lL5QHVDNjZlUwVTR2QTQ/view?us
  p=
  sharing
 
 
 
  Moshe
 
 
 
  *From:* Moshe Recanati [mailto:mos...@kmslh.com]
  *Sent:* Sunday, February 22, 2015 8:37 AM
  *To:* solr-user@lucene.apache.org
  *Subject:* RE: NRTCachingDirectory threads stuck
 
 
 
  *From:* Moshe Recanati
  *Sent:* Sunday, February 22, 2015 8:34 AM
  *To:* solr-user@lucene.apache.org
  *Subject:* NRTCachingDirectory threads stuck
 
 
 
  Hi,
 
  We're running two Solr servers on same machine.
 
  Once Solr 4.0 and the second is Solr 4.7.1.
 
  In the Solr 4.7.1 we've very strange behavior, while indexing 
  document we get spike of memory from 1GB to 4Gb in couple of minutes 
  and huge number of threads stuck on
 
  NRTCachingDirectory.openInput methods.
 
 
 
  Thread dump and GC attached.
 
 
 
  Are you familiar with this behavior? What can be the trigger for this?
 
 
 
  Thank you,
 
 
 
 
 
  *Regards,*
 
  *Moshe Recanati*
 
  *SVP Engineering*
 
  Office + 972-73-2617564
 
  Mobile  + 972-52-6194481
 
  Skype:  recanati
  [image: KMS2]
  http://finance.yahoo.com/news/kms-lighthouse-named-gartner-cool-121
  00
  0184.html
 
  More at:  www.kmslh.com | LinkedIn
  http://www.linkedin.com/company/kms-lighthouse | FB 
  https://www.facebook.com/pages/KMS-lighthouse/123774257810917
 
 
 
 
 



 --
 Sincerely yours
 Mikhail Khludnev
 Principal Engineer,
 Grid Dynamics

 http://www.griddynamics.com
 mkhlud...@griddynamics.com




--
Sincerely yours
Mikhail Khludnev
Principal Engineer,
Grid Dynamics

http://www.griddynamics.com
mkhlud...@griddynamics.com


RE: FW: NRTCachingDirectory threads stuck

2015-02-22 Thread Moshe Recanati
Hi Mikhail,
Thank you.
1. Regarding jetty threads - How I can reduce them?
2. Is it related to the fact we're running Solr 4.0 in parallel on this machine?

Thank you


Regards,
Moshe Recanati
SVP Engineering
Office + 972-73-2617564
Mobile  + 972-52-6194481
Skype    :  recanati

More at:  www.kmslh.com | LinkedIn | FB


-Original Message-
From: Mikhail Khludnev [mailto:mkhlud...@griddynamics.com] 
Sent: Sunday, February 22, 2015 11:18 AM
To: solr-user
Subject: Re: FW: NRTCachingDirectory threads stuck

Hello,

I checked 20020.tdump. From the update perspective it's OK: I see the single 
thread that committed and is awaiting a new searcher. There are a few very bad 
signs:
- there are many threads executing search requests in parallel, let's say 
roughly a hundred of them. This is a dead end. Consider limiting the number of 
jetty threads, starting from the number of cores available;
- the heap is full, which is a no-go for Java. Either increase it, reduce the 
load, or make sure there is no leak;
- I see many threads executing Luke handler code; it might be a really wrong 
setup, or the regular approach for Solr replication. I'm not sure here.


On Sun, Feb 22, 2015 at 9:57 AM, Moshe Recanati mos...@kmslh.com wrote:

  Hi,

 I saw message rejected because of attachment.

 I uploaded data to drive


 https://drive.google.com/file/d/0B0GR0M-lL5QHVDNjZlUwVTR2QTQ/view?usp=
 sharing



 Moshe



 *From:* Moshe Recanati [mailto:mos...@kmslh.com]
 *Sent:* Sunday, February 22, 2015 8:37 AM
 *To:* solr-user@lucene.apache.org
 *Subject:* RE: NRTCachingDirectory threads stuck



 *From:* Moshe Recanati
 *Sent:* Sunday, February 22, 2015 8:34 AM
 *To:* solr-user@lucene.apache.org
 *Subject:* NRTCachingDirectory threads stuck



 Hi,

 We're running two Solr servers on same machine.

 Once Solr 4.0 and the second is Solr 4.7.1.

 In the Solr 4.7.1 we've very strange behavior, while indexing document 
 we get spike of memory from 1GB to 4Gb in couple of minutes and huge 
 number of threads stuck on

 NRTCachingDirectory.openInput methods.



 Thread dump and GC attached.



 Are you familiar with this behavior? What can be the trigger for this?



 Thank you,





 *Regards,*

 *Moshe Recanati*

 *SVP Engineering*

 Office + 972-73-2617564

 Mobile  + 972-52-6194481

 Skype:  recanati
 [image: KMS2]
 http://finance.yahoo.com/news/kms-lighthouse-named-gartner-cool-12100
 0184.html

 More at:  www.kmslh.com | LinkedIn
 http://www.linkedin.com/company/kms-lighthouse | FB 
 https://www.facebook.com/pages/KMS-lighthouse/123774257810917








--
Sincerely yours
Mikhail Khludnev
Principal Engineer,
Grid Dynamics

http://www.griddynamics.com
mkhlud...@griddynamics.com


RE: FW: NRTCachingDirectory threads stuck

2015-02-22 Thread Moshe Recanati
Hi,
Another question.
We're using 
<lockType>single</lockType>

Is it related?


Regards,
Moshe Recanati
SVP Engineering
Office + 972-73-2617564
Mobile  + 972-52-6194481
Skype    :  recanati

More at:  www.kmslh.com | LinkedIn | FB


-Original Message-
From: Moshe Recanati 
Sent: Sunday, February 22, 2015 12:55 PM
To: solr-user
Subject: RE: FW: NRTCachingDirectory threads stuck

Hi Mikhail,
Thank you.
1. Regarding jetty threads - How I can reduce them?
2. Is it related to the fact we're running Solr 4.0 in parallel on this machine?

Thank you


Regards,
Moshe Recanati
SVP Engineering
Office + 972-73-2617564
Mobile  + 972-52-6194481
Skype    :  recanati

More at:  www.kmslh.com | LinkedIn | FB


-Original Message-
From: Mikhail Khludnev [mailto:mkhlud...@griddynamics.com]
Sent: Sunday, February 22, 2015 11:18 AM
To: solr-user
Subject: Re: FW: NRTCachingDirectory threads stuck

Hello,

I checked 20020.tdump. From the update perspective it's OK: I see the single 
thread that committed and is awaiting a new searcher. There are a few very bad 
signs:
- there are many threads executing search requests in parallel, let's say 
roughly a hundred of them. This is a dead end. Consider limiting the number of 
jetty threads, starting from the number of cores available;
- the heap is full, which is a no-go for Java. Either increase it, reduce the 
load, or make sure there is no leak;
- I see many threads executing Luke handler code; it might be a really wrong 
setup, or the regular approach for Solr replication. I'm not sure here.


On Sun, Feb 22, 2015 at 9:57 AM, Moshe Recanati mos...@kmslh.com wrote:

  Hi,

 I saw message rejected because of attachment.

 I uploaded data to drive


 https://drive.google.com/file/d/0B0GR0M-lL5QHVDNjZlUwVTR2QTQ/view?usp=
 sharing



 Moshe



 *From:* Moshe Recanati [mailto:mos...@kmslh.com]
 *Sent:* Sunday, February 22, 2015 8:37 AM
 *To:* solr-user@lucene.apache.org
 *Subject:* RE: NRTCachingDirectory threads stuck



 *From:* Moshe Recanati
 *Sent:* Sunday, February 22, 2015 8:34 AM
 *To:* solr-user@lucene.apache.org
 *Subject:* NRTCachingDirectory threads stuck



 Hi,

 We're running two Solr servers on same machine.

 Once Solr 4.0 and the second is Solr 4.7.1.

 In the Solr 4.7.1 we've very strange behavior, while indexing document 
 we get spike of memory from 1GB to 4Gb in couple of minutes and huge 
 number of threads stuck on

 NRTCachingDirectory.openInput methods.



 Thread dump and GC attached.



 Are you familiar with this behavior? What can be the trigger for this?



 Thank you,





 *Regards,*

 *Moshe Recanati*

 *SVP Engineering*

 Office + 972-73-2617564

 Mobile  + 972-52-6194481

 Skype:  recanati
 [image: KMS2]
 http://finance.yahoo.com/news/kms-lighthouse-named-gartner-cool-12100
 0184.html

 More at:  www.kmslh.com | LinkedIn
 http://www.linkedin.com/company/kms-lighthouse | FB 
 https://www.facebook.com/pages/KMS-lighthouse/123774257810917








--
Sincerely yours
Mikhail Khludnev
Principal Engineer,
Grid Dynamics

http://www.griddynamics.com
mkhlud...@griddynamics.com


Re: FW: NRTCachingDirectory threads stuck

2015-02-22 Thread Mikhail Khludnev
On Sun, Feb 22, 2015 at 1:54 PM, Moshe Recanati mos...@kmslh.com wrote:

 Hi Mikhail,
 Thank you.
 1. Regarding jetty threads - How I can reduce them?


https://wiki.eclipse.org/Jetty/Howto/High_Load#Thread_Pool (note: you'll get a
503 or similar response when the pool size is exceeded).
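
For reference, a minimal jetty.xml fragment along those lines (assuming Jetty's
QueuedThreadPool; the numbers are illustrative, not a recommendation):

```xml
<!-- Sketch only: cap Jetty's request thread pool. Values are examples. -->
<Configure id="Server" class="org.eclipse.jetty.server.Server">
  <Set name="threadPool">
    <New class="org.eclipse.jetty.util.thread.QueuedThreadPool">
      <Set name="minThreads">8</Set>
      <Set name="maxThreads">64</Set>
    </New>
  </Set>
</Configure>
```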


 2. Is it related to the fact we're running Solr 4.0 in parallel on this
 machine?


Are their index dirs different? Nevertheless, running multiple services on the
same machine leads to resource contention. What does `top` say?



 Thank you


 Regards,
 Moshe Recanati
 SVP Engineering
 Office + 972-73-2617564
 Mobile  + 972-52-6194481
 Skype:  recanati

 More at:  www.kmslh.com | LinkedIn | FB


 -Original Message-
 From: Mikhail Khludnev [mailto:mkhlud...@griddynamics.com]
 Sent: Sunday, February 22, 2015 11:18 AM
 To: solr-user
 Subject: Re: FW: NRTCachingDirectory threads stuck

 Hello,

 I checked 20020.tdump. From the update perspective it's OK: I see the
 single thread that committed and is awaiting a new searcher being opened. There are a few
 very bad signs:
 - there are many threads executing search requests in parallel, roughly a
 hundred of them. This is a dead end. Consider limiting the number of
 Jetty threads; start from the number of CPU cores available;
 - the heap is full, which Java cannot cope with. Either increase it, reduce the load,
 or make sure there is no leak;
 - I see many threads executing Luke handler code; that might be a really wrong
 setup, or the regular approach for Solr replication. I'm not sure here.


 On Sun, Feb 22, 2015 at 9:57 AM, Moshe Recanati mos...@kmslh.com wrote:

   Hi,
 
  I saw message rejected because of attachment.
 
  I uploaded data to drive
 
 
  https://drive.google.com/file/d/0B0GR0M-lL5QHVDNjZlUwVTR2QTQ/view?usp=
  sharing
 
 
 
  Moshe
 
 
 
  *From:* Moshe Recanati [mailto:mos...@kmslh.com]
  *Sent:* Sunday, February 22, 2015 8:37 AM
  *To:* solr-user@lucene.apache.org
  *Subject:* RE: NRTCachingDirectory threads stuck
 
 
 
  *From:* Moshe Recanati
  *Sent:* Sunday, February 22, 2015 8:34 AM
  *To:* solr-user@lucene.apache.org
  *Subject:* NRTCachingDirectory threads stuck
 
 
 
  Hi,
 
  We're running two Solr servers on same machine.
 
  One is Solr 4.0 and the second is Solr 4.7.1.
 
  In the Solr 4.7.1 instance we see very strange behavior: while indexing documents
  we get a memory spike from 1GB to 4GB in a couple of minutes and a huge
  number of threads stuck in
 
  NRTCachingDirectory.openInput methods.
 
 
 
  Thread dump and GC log attached.
 
 
 
  Are you familiar with this behavior? What can be the trigger for this?
 
 
 
  Thank you,
 
 
 
 
 
  *Regards,*
 
  *Moshe Recanati*
 
  *SVP Engineering*
 
  Office + 972-73-2617564
 
  Mobile  + 972-52-6194481
 
  Skype:  recanati
  [image: KMS2]
  http://finance.yahoo.com/news/kms-lighthouse-named-gartner-cool-12100
  0184.html
 
  More at:  www.kmslh.com | LinkedIn
  http://www.linkedin.com/company/kms-lighthouse | FB
  https://www.facebook.com/pages/KMS-lighthouse/123774257810917
 
 
 
 
 



 --
 Sincerely yours
 Mikhail Khludnev
 Principal Engineer,
 Grid Dynamics

 http://www.griddynamics.com
 mkhlud...@griddynamics.com




-- 
Sincerely yours
Mikhail Khludnev
Principal Engineer,
Grid Dynamics

http://www.griddynamics.com
mkhlud...@griddynamics.com



FW: NRTCachingDirectory threads stuck

2015-02-21 Thread Moshe Recanati
Hi,
I saw message rejected because of attachment.
I uploaded data to drive
https://drive.google.com/file/d/0B0GR0M-lL5QHVDNjZlUwVTR2QTQ/view?usp=sharing

Moshe

From: Moshe Recanati [mailto:mos...@kmslh.com]
Sent: Sunday, February 22, 2015 8:37 AM
To: solr-user@lucene.apache.org
Subject: RE: NRTCachingDirectory threads stuck

From: Moshe Recanati
Sent: Sunday, February 22, 2015 8:34 AM
To: solr-user@lucene.apache.orgmailto:solr-user@lucene.apache.org
Subject: NRTCachingDirectory threads stuck

Hi,
We're running two Solr servers on the same machine.
One is Solr 4.0 and the second is Solr 4.7.1.
In the Solr 4.7.1 instance we see very strange behavior: while indexing documents we 
get a memory spike from 1GB to 4GB in a couple of minutes and a huge number of threads 
stuck in
NRTCachingDirectory.openInput methods.

Thread dump and GC log attached.

Are you familiar with this behavior? What can be the trigger for this?

Thank you,


Regards,
Moshe Recanati
SVP Engineering
Office + 972-73-2617564
Mobile  + 972-52-6194481
Skype:  recanati
[KMS2]http://finance.yahoo.com/news/kms-lighthouse-named-gartner-cool-121000184.html
More at:  www.kmslh.comhttp://www.kmslh.com/ | 
LinkedInhttp://www.linkedin.com/company/kms-lighthouse | 
FBhttps://www.facebook.com/pages/KMS-lighthouse/123774257810917




Re: FW: Complex boost statement

2014-10-20 Thread Ramzi Alqrainy
Please try this 

if(and(exists(query({!v=BUS_CITY:regina})),exists(BUS_IS_NEARBY)),20,1)



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Complex-boost-statement-tp4164572p4164885.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: FW: Complex boost statement

2014-10-18 Thread William Bell
Wouldn't it be?

 if(and(exists(query({!v=BUS_CITY:regina})),not(query({!v=BUS_IS_TOLL_FREE:true}))),500,1)
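
For reference, either function ends up as the single edismax boost parameter on
the request, which is all that solrnet needs to send. A sketch of the full
request (host, collection, and query value are assumptions, not from the thread):

```
http://localhost:8983/solr/collection1/select
    ?defType=edismax
    &q=restaurants
    &boost=if(and(exists(query({!v=BUS_CITY:regina})),exists(BUS_IS_NEARBY)),20,1)
```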

On Fri, Oct 17, 2014 at 2:07 PM, Corey Gerhardt 
corey.gerha...@directwest.com wrote:

 Apparently I need a long holiday so that I can interpret the
 documentation correctly.

  if(and(exists(query({!v=BUS_CITY:regina})),not(BUS_IS_TOLL_FREE)),500,1)

 -Original Message-
 From: Corey Gerhardt [mailto:corey.gerha...@directwest.com]
 Sent: October-16-14 4:11 PM
 To: Solr User List
 Subject: Complex boost statement

 Edismax, solrnet

 I'm thinking that solrnet is going to be my problem, because I can only send
 one boost parameter.

 Is it possible to have a boost value:

 if(exists(query({!v=BUS_CITY:regina}))(BUS_IS_NEARBY),20,1)

 Thanks,

 Corey




-- 
Bill Bell
billnb...@gmail.com
cell 720-256-8076


FW: Complex boost statement

2014-10-17 Thread Corey Gerhardt
Apparently I need a long holiday so that I can interpret the documentation 
correctly.

 if(and(exists(query({!v=BUS_CITY:regina})),not(BUS_IS_TOLL_FREE)),500,1)

-Original Message-
From: Corey Gerhardt [mailto:corey.gerha...@directwest.com] 
Sent: October-16-14 4:11 PM
To: Solr User List
Subject: Complex boost statement

Edismax, solrnet

I'm thinking that solrnet is going to be my problem, because I can only send one 
boost parameter.

Is it possible to have a boost value:

if(exists(query({!v=BUS_CITY:regina}))(BUS_IS_NEARBY),20,1)

Thanks,

Corey



FW: solr Analysis page matching question

2014-08-13 Thread Corey Gerhardt
Here's hopefully a better explanation of what I'm asking.

http://screencast.com/t/8blvgtJbY

Thanks,

Corey


-Original Message-
From: Corey Gerhardt [mailto:corey.gerha...@directwest.com] 
Sent: August-08-14 2:30 PM
To: Solr User List
Subject: solr Analysis page matching question

Edismax

Field Value (Index) = a & w   Field Value (Query) = a & w restaurant

The last token filter for both the index and query is LengthFilter.

So the very bottom of the Analyse Value results looks like:

LF | aw | aw        LF | aw | restaurant

The bolded aw above indicates a match.

In an actual query, would I have to set mm to 50% or -1 in order to actually 
achieve a match?

Thanks,

Corey


Re: FW: solr Analysis page matching question

2014-08-13 Thread Shawn Heisey
On 8/13/2014 8:59 AM, Corey Gerhardt wrote:
 Here's hopefully a better explanation of what I'm asking.

 http://screencast.com/t/8blvgtJbY

This would not match, because restaurant (one of the query terms) is
not present in the field.  The mm value of 100% requires *all* of the
query terms to be present.  You would need a lower mm value in order to
get a match.

The analysis screen is not aware of things like the default operator and
the mm parameter for dismax/edismax.  It merely highlights terms in the
index analysis that match terms from the query analysis, it doesn't
attempt to tell you whether the query as a whole will match the index input.

Thanks,
Shawn



FW: Indexing a term into separate Lucene indexes

2014-06-20 Thread Huang, Roger

If I have documents with a person and his email address: u...@domain.com

How can I configure Solr (4.6) so that the email address source field is 
indexed as

-  the user part of the address (e.g., user) is in Lucene index X

-  the domain part of the address (e.g., domain.com) is in a separate 
Lucene index Y

I would like to be able search as follows:

-  Find all people whose email addresses have user part = userXyz

-  Find all people whose email addresses have domain part = 
domainABC.com

-  Find the person with exact email address = user...@domainabc.com

Would I use a copyField declaration in my schema?
http://wiki.apache.org/solr/SchemaXml#Copy_Fields

Thanks!
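
Roger's hunch about copyField is one way in: the split itself then happens in the
analyzers of the destination fields. A hedged schema.xml sketch; every field and
type name below is invented for illustration, not from the original message:

```xml
<!-- Source field holds the full address; copies feed the part-wise fields. -->
<field name="email"        type="string"       indexed="true" stored="true"/>
<field name="email_user"   type="email_local"  indexed="true" stored="false"/>
<field name="email_domain" type="email_host"   indexed="true" stored="false"/>
<copyField source="email" dest="email_user"/>
<copyField source="email" dest="email_domain"/>

<!-- Keep only the part before the @ ... -->
<fieldType name="email_local" class="solr.TextField">
  <analyzer>
    <charFilter class="solr.PatternReplaceCharFilterFactory" pattern="@.*" replacement=""/>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
<!-- ... and only the part after it. -->
<fieldType name="email_host" class="solr.TextField">
  <analyzer>
    <charFilter class="solr.PatternReplaceCharFilterFactory" pattern=".*@" replacement=""/>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

The exact-address query can then hit the stored email field directly, while the
part-wise queries go to email_user and email_domain.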


Aw: Fw: highlighting on hl.alternateField (copyField target) doesnt highlight

2014-06-11 Thread jay list
Answer to myself:
Using solr.KeywordTokenizerFactory and solr.WordDelimiterFilterFactory I can 
preserve the original phone number and add a token that contains no spaces.

input:  12345 67890
tokens: 12345 67890, 12345, 67890, 1234567890

Two advantages: I don't need another field, and the highlighter works as 
expected.
Best Regards.
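
As a hedged sketch, the analyzer described above might look like this in
schema.xml (the attribute values are my assumptions, not from the original mail):

```xml
<!-- Sketch only: one token stream that keeps "12345 67890" and adds "1234567890" -->
<fieldType name="tel" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.WordDelimiterFilterFactory"
            generateWordParts="1" catenateAll="1" preserveOriginal="1"/>
  </analyzer>
</fieldType>
```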

 Gesendet: Donnerstag, 05. Juni 2014 um 09:14 Uhr
 Von: jay list jay.l...@web.de
 An: solr-user@lucene.apache.org
 Betreff: Fw: highlighting on hl.alternateField (copyField target) doesnt 
 highlight

 Anybody knowing this issue?
 
  Gesendet: Dienstag, 03. Juni 2014 um 09:11 Uhr
  Von: jay list jay.l...@web.de
  An: solr-user@lucene.apache.org
  Betreff: highlighting on hl.alternateField (copyField target) doesnt 
  highlight
 
  
  Hello,
   
   I'm trying to implement a user-friendly search for phone numbers. These 
   numbers consist of two digit tokens like 12345 67890.
   
  Finally I want the highlighting for the phone number in the search result, 
  without any concerns about was this search result hit by field  tel  or 
  copyField  tel2.
   
   The field tel is split by a StandardTokenizer into two tokens, 12345 AND 
   67890.
  And I want to catch up those people, who enter 1234567890 without any 
  space.
  I use copyField  tel2  to a solr.PatternReplaceCharFilterFactory to 
  eliminate non digits followed by a solr.KeywordTokenizerFactory.
   
  In both cases the search hits as expected.
   
  The highlighter works well for  tel  or  tel2,  but I want the highlight 
  always on field  tel!
   Using  f.tel.hl.alternateField=tel2  is returning the field value without 
   any highlighting.
   
   <lst name="params">
    <str name="q">tel2:1234567890</str>
    <str name="f.tel.hl.alternateField">tel2</str>
    <str name="hl">true</str>
    <str name="hl.requireFieldMatch">true</str>
    <str name="hl.simple.pre">&lt;em&gt;</str>
    <str name="hl.simple.post">&lt;/em&gt;</str>
    <str name="hl.fl">tel,tel2</str>
    <str name="fl">tel,tel2</str>
    <str name="wt">xml</str>
    <str name="fq">typ:person</str>
   </lst>
   
   ...
   
   <result name="response" numFound="1" start="0">
    <doc>
     <str name="uid">user1</str>
     <str name="tel">12345 67890</str>
     <str name="tels">12345 67890</str></doc>
   </result>
   
   ...
   
   <lst name="highlighting">
    <lst name="user1">
     <arr name="tel">
      <str>123456 67890</str> <!-- here should be a highlight -->
     </arr>
     <arr name="tels">
      <str><em>123456 67890</em></str>
     </arr>
    </lst>
   </lst>
  
  Any idea? Or do I have to change my velocity macros, always looking for a 
  different highlighted field?
  Best Regards



Fw: highlighting on hl.alternateField (copyField target) doesnt highlight

2014-06-05 Thread jay list
Anybody knowing this issue?

 Gesendet: Dienstag, 03. Juni 2014 um 09:11 Uhr
 Von: jay list jay.l...@web.de
 An: solr-user@lucene.apache.org
 Betreff: highlighting on hl.alternateField (copyField target) doesnt highlight

 
 Hello,
  
  I'm trying to implement a user-friendly search for phone numbers. These 
  numbers consist of two digit tokens like 12345 67890.
  
 Finally I want the highlighting for the phone number in the search result, 
 without any concerns about was this search result hit by field  tel  or 
 copyField  tel2.
  
  The field tel is split by a StandardTokenizer into two tokens, 12345 AND 
  67890.
 And I want to catch up those people, who enter 1234567890 without any space.
 I use copyField  tel2  to a solr.PatternReplaceCharFilterFactory to eliminate 
 non digits followed by a solr.KeywordTokenizerFactory.
  
 In both cases the search hits as expected.
  
 The highlighter works well for  tel  or  tel2,  but I want the highlight 
 always on field  tel!
  Using  f.tel.hl.alternateField=tel2  is returning the field value without any 
  highlighting.
  
  <lst name="params">
   <str name="q">tel2:1234567890</str>
   <str name="f.tel.hl.alternateField">tel2</str>
   <str name="hl">true</str>
   <str name="hl.requireFieldMatch">true</str>
   <str name="hl.simple.pre">&lt;em&gt;</str>
   <str name="hl.simple.post">&lt;/em&gt;</str>
   <str name="hl.fl">tel,tel2</str>
   <str name="fl">tel,tel2</str>
   <str name="wt">xml</str>
   <str name="fq">typ:person</str>
  </lst>
  
  ...
  
  <result name="response" numFound="1" start="0">
   <doc>
    <str name="uid">user1</str>
    <str name="tel">12345 67890</str>
    <str name="tels">12345 67890</str></doc>
  </result>
  
  ...
  
  <lst name="highlighting">
   <lst name="user1">
    <arr name="tel">
     <str>123456 67890</str> <!-- here should be a highlight -->
    </arr>
    <arr name="tels">
     <str><em>123456 67890</em></str>
    </arr>
   </lst>
  </lst>
 
 Any idea? Or do I have to change my velocity macros, always looking for a 
 different highlighted field?
 Best Regards


FW: suspect SOLR query from D029 (SOLR master)

2014-05-30 Thread Branham, Jeremy [HR]
We saw the file descriptors peak out and full GCs running, causing a DoS on our 
SOLR servers this morning.
* Does this stack trace give enough information for some ideas?

* solr-spec
4.5.1-SNAPSHOT
* solr-impl
4.5.1-SNAPSHOT ${svnversion} - kx101435 - 2013-11-04 17:39:36
* lucene-spec
4.5.1
* lucene-impl
4.5.1 1533280 - mark - 2013-10-17 21:40:03


We also saw some IO connection exceptions in the SOLR log -

2014-05-30 10:24:20,334 ERROR - SolrDispatchFilter - 
null:org.apache.solr.common.SolrException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http:// myserver.com:8080/svc/solr/wdsc
2014-05-30 10:24:20,334 ERROR - SolrCore   - 
org.apache.solr.common.SolrException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://myserver.com:8080/svc/solr/wdsp
2014-05-30 10:24:20,333 ERROR - SolrCore   - 
org.apache.solr.common.SolrException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http:// myserver.com:8080/svc/solr/wdsc
2014-05-30 10:24:20,335 ERROR - SolrCore   - 
org.apache.solr.common.SolrException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http:// myserver.com:8080/svc/solr/wdsc
2014-05-30 10:24:20,336 ERROR - SolrDispatchFilter - 
null:org.apache.solr.common.SolrException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http:// myserver.com:8080/svc/solr/wdsc
2014-05-30 10:24:20,335 ERROR - SolrDispatchFilter - 
null:org.apache.solr.common.SolrException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http:// myserver.com:8080/svc/solr/wdsp



Jeremy D. Branham
Tel: **DOTNET

From: Worley, Chris [HR]
Sent: Friday, May 30, 2014 11:38 AM
To: Stoner, Susan [HR]; Branham, Jeremy [HR]
Cc: Barrett, Kevin B [HR]; Aldrich, Daniel J [HR]; Duncan, Horace W [HR]
Subject: suspect SOLR query from D029 (SOLR master)

Looks like a suspect SOLR query was executed at approx. 8:54am this morning on 
the SOLR master server (D029) that caused the java GC to go through the roof.  
See below:

2014-05-30 08:54:17,092 ERROR - SolrDispatchFilter - 
null:java.lang.OutOfMemoryError: GC overhead limit exceeded
at 
org.apache.lucene.codecs.BlockTreeTermsReader$FieldReader$SegmentTermsEnum$Frame.loadBlock(BlockTreeTermsReader.java:2377)
at 
org.apache.lucene.codecs.BlockTreeTermsReader$FieldReader$SegmentTermsEnum.seekExact(BlockTreeTermsReader.java:1698)
at org.apache.lucene.index.TermContext.build(TermContext.java:95)
at org.apache.lucene.search.TermQuery.createWeight(TermQuery.java:166)
at 
org.apache.lucene.search.BooleanQuery$BooleanWeight.init(BooleanQuery.java:183)
at 
org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:384)
at 
org.apache.lucene.search.BooleanQuery$BooleanWeight.init(BooleanQuery.java:183)
at 
org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:384)
at 
org.apache.lucene.search.BooleanQuery$BooleanWeight.init(BooleanQuery.java:183)
at 
org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:384)
at 
org.apache.lucene.search.BooleanQuery$BooleanWeight.init(BooleanQuery.java:183)
at 
org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:384)
at 
org.apache.lucene.search.FilteredQuery.createWeight(FilteredQuery.java:82)
at 
org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:690)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:297)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1501)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1367)
at 
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:474)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:434)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:208)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1859)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:703)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:406)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:195)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at 

fw: (Issue) How improve solr facet performance

2014-05-23 Thread Alice.H.Yang (mis.cnsh04.Newegg) 41493
Hi, Solr Developer

  Thanks very much for your timely reply.

1.  I'm sorry, I made a mistake: the total number of documents is 32 
million, not 320 million.
2.  The system memory is large for the Solr index: the OS has 256G in total, and I 
set the Solr Tomcat HEAPSIZE=-Xms25G -Xmx100G

-How many fields are you faceting on?

Reply:  9 fields I facet on.

- How many unique values does your facet fields have (approximately)?

Reply:  Three facet fields have about one hundred unique values; the other six 
facet fields have between 3 and 15 unique values each. 


- What is the content of your facets (Strings, numbers?)

Reply:  9 fields are all numbers.

- Which facet.method do you use?

Reply:  Used the default facet.method=fc

And we tested this scenario: if a facet field has few unique values and we add 
facet.method=enum, there is a small performance improvement.
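
For reference, facet.method can also be set per field, so the enum approach can
be limited to just the low-cardinality fields. A sketch of such a request; the
field names are invented:

```
q=*:*&rows=0&facet=true
   &facet.field=brand_id&f.brand_id.facet.method=enum
   &facet.field=category_id&f.category_id.facet.method=fc
```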

- What is the response time with faceting and a few thousand hits?

Reply:   <result name="response" numFound="2925" start="0">  
   QTime is  <int name="QTime">6</int> 


Best Regards,
Alice Yang
+86-021-51530666*41493
Floor 19,KaiKai Plaza,888,Wanhandu Rd,Shanghai(200042)

-Original Message-
From: Toke Eskildsen [mailto:t...@statsbiblioteket.dk]
Sent: Friday, May 23, 2014 8:08 PM
To: d...@lucene.apache.org
Subject: Re: (Issue) How improve solr facet performance

On Fri, 2014-05-23 at 11:45 +0200, Alice.H.Yang (mis.cnsh04.Newegg)
41493 wrote:
We are blocked by solr facet performance when query hits many 
 documents. (about 10,000,000)

[320M documents, immediate response for plain search with 1M hits]

 But when we add several facet.field to do facet ,QTime  increaseto 
 220ms or more.

It is not clear whether your observation of increased response time is due to 
many hits or faceting in itself.

- How many fields are you faceting on?
- How many unique values does your facet fields have (approximately)?
- What is the content of your facets (Strings, numbers?)
- Which facet.method do you use?
- What is the response time with faceting and a few thousand hits?

 Do you have some advice on how improve the facet performance when hit 
 many documents.

That depends on whether your bottleneck is the hitcount itself, the number of 
unique facet values or something third like I/O.


- Toke Eskildsen, State and University Library, Denmark



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional 
commands, e-mail: dev-h...@lucene.apache.org



FW: Files locked after indexing

2014-03-11 Thread Croci Francesco Luigi (ID SWS)
Hi to all,

I'm pretty new to Solr and Tika and I have a problem.

I have the following workflow in my (web)application:

  *   download a pdf file from an archive
  *   index the file
  *   delete the file


My problem is that after indexing the file, it remains locked and the 
delete step throws an exception.

Here is my code-snippet for indexing the file:

try
{
   ContentStreamUpdateRequest req = new 
ContentStreamUpdateRequest("/update/extract");
   req.addFile(file, type);
   req.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);

   NamedList<Object> result = server.request(req);

   Assert.assertEquals(0, ((NamedList<?>) 
result.get("responseHeader")).get("status"));
}

I also tried the ContentStream way but without success:
ContentStream contentStream = null;

try
{
  contentStream = new ContentStreamBase.FileStream(document);

  ContentStreamUpdateRequest req = new 
ContentStreamUpdateRequest(UPDATE_EXTRACT_REQUEST);
  req.addContentStream(contentStream);
  req.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);

  NamedList<Object> result = server.request(req);

  if (!((NamedList<?>) 
result.get("responseHeader")).get("status").equals(0))
  {
    throw new IDSystemException(LOG, "Document could not be indexed. Status 
returned: " +
             ((NamedList<?>) 
result.get("responseHeader")).get("status"));
  }
}
catch...
finally
{
  try
  {
    if (contentStream != null && contentStream.getStream() != null)
    {
      contentStream.getStream().close();
    }
  }
  catch (IOException ioe)
  {
    throw new IDSystemException(LOG, ioe.getMessage(), ioe);
  }
}


Do I miss something?

Thank you
Francesco
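
One hedged workaround, if the lock turns out to come from a stream held open on
the file: read the PDF fully into memory first and index from the byte array, so
nothing holds a handle when the delete runs. This is a sketch under that
assumption, not Francesco's code; with SolrJ the buffer could back a
ContentStreamBase.ByteArrayStream instead of a FileStream.

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class DeleteAfterIndex {
    public static void main(String[] args) throws Exception {
        // Stand-in for the downloaded PDF (hypothetical file, for the sketch only)
        Path pdf = Files.createTempFile("doc", ".pdf");
        Files.write(pdf, "dummy pdf bytes".getBytes());

        // Read everything up front; readAllBytes closes its own stream internally
        byte[] data = Files.readAllBytes(pdf);

        // ... hand `data` to the extract request here instead of the file ...

        // Nothing holds a handle on the file any more, so the delete succeeds
        Files.delete(pdf);
        System.out.println(data.length + " " + Files.exists(pdf));  // prints "15 false"
    }
}
```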



FW: looking for working example defType=term

2013-08-12 Thread Johannes Elsinghorst
Well, I couldn't get it to work, but maybe that's because I'm not a Solr expert. 
What I'm trying to do is:
I have an index with only one indexed field. This field is an id, so I don't 
want the standard query parser to try to break it up into tokens. On the client 
side I use SolrJ like this:
SolrQuery solrQuery = new SolrQuery().setQuery(id);
QueryResponse queryResponse = getSolrServer().query(solrQuery);

I'd like to configure the TermQParserPlugin on the server side to minimize my 
queries.

Johannes
-Original Message-
From: Erik Hatcher [mailto:erik.hatc...@gmail.com]
Sent: Montag, 12. August 2013 17:10
To: Johannes Elsinghorst
Subject: Re: looking for working example defType=term

How are you using the term query parser?   The term query parser requires a 
field to be specified.

I use it this way:

   q=*:*&fq={!term f=category}electronics

The term query parser would never make sense as a defType query parser, I 
don't think (you have to set the field through local params).

Erik


On Aug 12, 2013, at 11:01 , Johannes Elsinghorst wrote:

 Hi,
 can anyone provide a working example (solrconfig.xml,schema.xml) using the 
 TermQParserPlugin? I always get a Nullpointer-Exception on startup:
 8920 [searcherExecutor-4-thread-1] ERROR org.apache.solr.core.SolrCore - 
 java.lang.NullPointerException
   at 
 org.apache.solr.search.TermQParserPlugin$1.parse(TermQParserPlugin.java:55)
   at org.apache.solr.search.QParser.getQuery(QParser.java:142)
   at 
 org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:142)
   at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:187)
   at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1904)
   at 
 org.apache.solr.core.QuerySenderListener.newSearcher(QuerySenderListener.java:64)
   at org.apache.solr.core.SolrCore$5.call(SolrCore.java:1693)
   at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
   at java.util.concurrent.FutureTask.run(Unknown Source)
   at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown 
 Source)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
   at java.lang.Thread.run(Unknown Source)
 
 solrconfig.xml:
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <str name="defType">term</str>
    <int name="rows">10</int>
    <str name="df">id</str>
  </lst>
 
 Thanks,
 Johannes
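
One hedged way to get what Johannes asks for, given Erik's point that {!term}
needs its field via local params, is parameter dereferencing inside a dedicated
request handler. A solrconfig.xml sketch; the handler name is illustrative:

```xml
<!-- Sketch only: clients call /byid?id=SOME_ID; the raw value is never tokenized -->
<requestHandler name="/byid" class="solr.SearchHandler">
  <lst name="invariants">
    <str name="q">{!term f=id v=$id}</str>
  </lst>
</requestHandler>
```

On the SolrJ side the client would then just set the id parameter instead of q.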
 
 







FW: Solr and Lucene

2013-06-12 Thread Ophir Michaeli
Hi,

Which Lucene version is used with Solr 4.2.1? And is it possible to open it
with Luke? If not, with any other tool? Thanks

Thanks


Re: FW: Solr and Lucene

2013-06-12 Thread Rafał Kuć
Hello!

Solr 4.2.1 is using Lucene 4.2.1. Basically Solr and Lucene currently use
the same version numbers, since their development was merged.

As for Luke, I think the last version uses a beta or alpha release of
Lucene 4.0. I would try replacing the Lucene jars and see if it works,
although I haven't tried it myself.

-- 
Regards,
 Rafał Kuć
 Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch - ElasticSearch

 Hi,

 Which lucene version is used with Solr 4.2.1? And is it possible to open it
 by luke? If not by any other tool? Thanks

 Thanks



Re: FW: Solr and Lucene

2013-06-12 Thread heikki
This link might be useful too:
http://www.semanticmetadata.net/2013/04/11/luke-4-2-binaries/.

Kind regards,
Heikki Doeleman



On Wed, Jun 12, 2013 at 3:45 PM, Rafał Kuć r@solr.pl wrote:

 Hello!

 Solr 4.2.1 is using Lucene 4.2.1. Basically Solr and Lucene are
 currently using the same numbers after their development was merged.

 As far for Luke I think that the last version is using beta or alpha
 release of Lucene 4.0. I would try replacing Lucene jar's and see if
 it works although I didn't try it.

 --
 Regards,
  Rafał Kuć
  Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch - ElasticSearch

  Hi,

  Which lucene version is used with Solr 4.2.1? And is it possible to open
 it
  by luke? If not by any other tool? Thanks

  Thanks




FW: howto: get the value from a multivalued field?

2013-05-23 Thread world hello




hi, all - 

How can I retrieve the values out of a multivalued field in a customized 
function query? I want to implement a function query whose first parameter is a 
multivalued field, from which values are retrieved and manipulated.

However, with the code below I get the exception can not use FieldCache on 
multivalued field:

public ValueSource parse(FunctionQParser fp) throws ParseException {
    try {
        ValueSource vs = fp.parseValueSource();
    }
    catch (...) {
    }
}

Thanks.
- Frank


  





FW: How can i multiply documents after DIH?

2013-01-08 Thread Harshvardhan Ojha
All,

Looking into finding a solution for hotel searches based on the criteria below:

1. City/Hotel
2. Date Range
3. Persons

We have created documents which contain all the basic needed information, 
inclusive of per-day rates. The document looks like the one below 

=

<doc>
 <str name="citycode">SHL</str>
 <date name="startdate">2013-01-06T18:30:00Z</date>
 <date name="enddate">2013-01-06T18:30:00Z</date>
 <str name="id">2008090516</str>
 <double name="tariff">2400.0</double>
 <double name="tax">600.0</double>
 <long name="_version_">1423509483572690944</long>
</doc>


=

My search requirement is like

q=city AND startdate:[2013-01-06 TO 2013-01-08]

or

q=id: 2008090516 AND startdate:[2013-01-06 TO 2013-01-08]

and this combination for dates can be anything from daterange:[x TO y].

I have close to 100K combinations to start with, based on city, date ranges, 
and number of nights (days of stay). I am looking at options to pre-create the 
search responses, or even to use this set of documents as an input source for 
them 

e.g. running some Map-Reduce jobs to get all the 100K search responses and 
putting them into the store or cache.  

Looking for suggestions and options. 

Regards
Harshvardhan Ojha




RE: FW: Replication error and Shard Inconsistencies..

2012-12-06 Thread Annette Newton
Hi,

The file descriptor count is always quite low. At the moment, after heavy 
usage for a few days, file descriptor counts are between 100-150 and I don't 
have any errors in the logs. My worry at the moment is around all the 
CLOSE_WAIT connections I am seeing. This is particularly true on the boxes 
marked as leaders; the replicas have a few, but nowhere near as many.
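A quick way to tally connection states is to parse `netstat -tan` output. A small sketch (the sample lines and addresses below are hypothetical; on a real Solr box you would feed in the actual command output):

```python
from collections import Counter

# Hypothetical lines in `netstat -tan` output format; pipe in real
# command output instead of this sample on an actual box.
NETSTAT_SAMPLE = """\
tcp 0 0 10.0.0.1:8983 10.0.0.2:41234 ESTABLISHED
tcp 0 0 10.0.0.1:8983 10.0.0.3:41235 CLOSE_WAIT
tcp 0 0 10.0.0.1:8983 10.0.0.4:41236 CLOSE_WAIT
"""

def state_counts(netstat_output):
    """Tally TCP connection states (last column), e.g. to watch
    CLOSE_WAIT connections pile up over time."""
    return Counter(line.split()[-1]
                   for line in netstat_output.splitlines() if line.strip())

print(state_counts(NETSTAT_SAMPLE))
```

Running this periodically against live output would show whether the CLOSE_WAIT count keeps growing on the leader boxes.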

Thanks for the response.

-Original Message-
From: Andre Bois-Crettez [mailto:andre.b...@kelkoo.com] 
Sent: 05 December 2012 17:57
To: solr-user@lucene.apache.org
Subject: Re: FW: Replication error and Shard Inconsistencies..

Not sure but, maybe you are running out of file descriptors ?
On each solr instance, look at the dashboard admin page, there is a bar with 
File Descriptor Count.

However if this was the case, I would expect to see lots of errors in the solr 
logs...

André


On 12/05/2012 06:41 PM, Annette Newton wrote:
 Sorry to bombard you - final update of the day...

 One thing that I have noticed is that we have a lot of connections 
 between the solr boxes with the connection set to CLOSE_WAIT and they 
 hang around for ages.

 -Original Message-
 From: Annette Newton [mailto:annette.new...@servicetick.com]
 Sent: 05 December 2012 13:55
 To: solr-user@lucene.apache.org
 Subject: FW: Replication error and Shard Inconsistencies..

 Update:

 I did a full restart of the solr cloud setup, stopped all the 
 instances, cleared down zookeeper and started them up individually.  I 
 then removed the index from one of the replicas, restarted solr and it 
 replicated ok.  So I'm wondering whether this is something that happens over 
 a period of time.

 Also just to let you know I changed the schema a couple of times and 
 reloaded the cores on all instances previous to the problem.  Don't 
 know if this could have contributed to the problem.

 Thanks.

 -Original Message-
 From: Annette Newton [mailto:annette.new...@servicetick.com]
 Sent: 05 December 2012 09:04
 To: solr-user@lucene.apache.org
 Subject: RE: Replication error and Shard Inconsistencies..

 Hi Mark,

 Thanks so much for the reply.

 We are using the release version of 4.0..

 It's very strange replication appears to be underway, but no files are 
 being copied across.  I have attached both the log from the new node 
 that I tried to bring up and the Schema and config we are using.

 I think it's probably something weird with our config, so I'm going to 
 play around with it today.  If I make any progress I'll send an update.

 Thanks again.

 -Original Message-
 From: Mark Miller [mailto:markrmil...@gmail.com]
 Sent: 05 December 2012 00:04
 To: solr-user@lucene.apache.org
 Subject: Re: Replication error and Shard Inconsistencies..

 Hey Annette,

 Are you using Solr 4.0 final? A version of 4x or 5x?

 Do you have the logs for when the replica tried to catch up to the leader?

 Stopping and starting the node is actually a fine thing to do. Perhaps 
 you can try it again and capture the logs.

 If a node is not listed as live but is in the clusterstate, that is 
 fine. It shouldn't be consulted. To remove it, you either have to 
 unload it with the core admin api or you could manually delete its 
 registered state under the node states node that the Overseer looks at.
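A sketch of building the CoreAdmin UNLOAD request mentioned here (the host and core name are placeholders, not values from the thread):

```python
from urllib.parse import urlencode

def core_admin_unload_url(host, core):
    """Build a CoreAdmin UNLOAD request URL; host and core name are
    placeholders to be replaced with your own setup's values."""
    return "http://%s/solr/admin/cores?%s" % (
        host, urlencode({"action": "UNLOAD", "core": core}))

print(core_admin_unload_url("localhost:8983", "collection1_shard4_replica2"))
```

Issuing the resulting URL (e.g. with curl) against the node hosting the stale core unregisters it without restarting the whole cloud.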

 Also, it would be useful to see the logs of the new node coming 
 up. There should be info about what happens when it tries to replicate.

 It almost sounds like replication is just not working for your setup 
 at all and that you have to tweak some configuration. You shouldn't 
 see these nodes as active then though - so we should get to the bottom of 
 this.

 - Mark

 On Dec 4, 2012, at 4:37 AM, Annette 
 Newton annette.new...@servicetick.com
 wrote:

 Hi all,

 I have a quite weird issue with Solr cloud.  I have a 4 shard, 2 
 replica
 setup, yesterday one of the nodes lost communication with the cloud 
 setup, which resulted in it trying to run replication, this failed, 
 which has left me with a Shard (Shard 4) that has one node with 
 2,833,940 documents on the leader and 409,837 on the follower - 
 obviously a big discrepancy and this leads to queries returning 
 differing results depending on which of these nodes it gets the data 
 from.  There is no indication of a problem on the admin site other 
 than the big discrepancy in the number of documents.  They are all marked as 
 active etc.

 So I thought that I would force replication to happen again, by 
 stopping
 and starting solr (probably the wrong thing to do) but this resulted 
 in no change.  So I turned off that node and replaced it with a new 
 one.  In zookeeper live nodes doesn't list that machine but it is 
 still being shown as active in the ClusterState.json, I have attached 
 images showing this.
 This means the new node hasn't replaced the old node but is now a 
 replica on Shard 1!  Also that node doesn't appear to have replicated 
 Shard 1's data anyway, it didn't get marked with replicating or anything.

FW: Replication error and Shard Inconsistencies..

2012-12-05 Thread Annette Newton
Update:

I did a full restart of the solr cloud setup, stopped all the instances,
cleared down zookeeper and started them up individually.  I then removed the
index from one of the replicas, restarted solr and it replicated ok.  So I'm
wondering whether this is something that happens over a period of time. 

Also just to let you know I changed the schema a couple of times and
reloaded the cores on all instances previous to the problem.  Don't know if
this could have contributed to the problem.

Thanks.

-Original Message-
From: Annette Newton [mailto:annette.new...@servicetick.com] 
Sent: 05 December 2012 09:04
To: solr-user@lucene.apache.org
Subject: RE: Replication error and Shard Inconsistencies..

Hi Mark,

Thanks so much for the reply.

We are using the release version of 4.0..

It's very strange replication appears to be underway, but no files are being
copied across.  I have attached both the log from the new node that I tried
to bring up and the Schema and config we are using.

I think it's probably something weird with our config, so I'm going to play
around with it today.  If I make any progress I'll send an update.

Thanks again.

-Original Message-
From: Mark Miller [mailto:markrmil...@gmail.com]
Sent: 05 December 2012 00:04
To: solr-user@lucene.apache.org
Subject: Re: Replication error and Shard Inconsistencies..

Hey Annette, 

Are you using Solr 4.0 final? A version of 4x or 5x?

Do you have the logs for when the replica tried to catch up to the leader?

Stopping and starting the node is actually a fine thing to do. Perhaps you
can try it again and capture the logs.

If a node is not listed as live but is in the clusterstate, that is fine. It
shouldn't be consulted. To remove it, you either have to unload it with the
core admin api or you could manually delete its registered state under the
node states node that the Overseer looks at.

Also, it would be useful to see the logs of the new node coming up. There
should be info about what happens when it tries to replicate.

It almost sounds like replication is just not working for your setup at all
and that you have to tweak some configuration. You shouldn't see these nodes
as active then though - so we should get to the bottom of this.

- Mark

On Dec 4, 2012, at 4:37 AM, Annette Newton annette.new...@servicetick.com
wrote:

 Hi all,
  
 I have a quite weird issue with Solr cloud.  I have a 4 shard, 2 
 replica
setup, yesterday one of the nodes lost communication with the cloud setup,
which resulted in it trying to run replication, this failed, which has left
me with a Shard (Shard 4) that has one node with 2,833,940 documents on the
leader and 409,837 on the follower - obviously a big discrepancy and this
leads to queries returning differing results depending on which of these
nodes it gets the data from.  There is no indication of a problem on the
admin site other than the big discrepancy in the number of documents.  They
are all marked as active etc.
  
 So I thought that I would force replication to happen again, by 
 stopping
and starting solr (probably the wrong thing to do) but this resulted in no
change.  So I turned off that node and replaced it with a new one.  In
zookeeper live nodes doesn't list that machine but it is still being shown
as active in the ClusterState.json, I have attached images showing this.
This means the new node hasn't replaced the old node but is now a replica on
Shard 1!  Also that node doesn't appear to have replicated Shard 1's data
anyway, it didn't get marked with replicating or anything. 
  
 How do I clear the zookeeper state without taking down the entire solr
cloud setup?  How do I force a node to replicate from the others in the
shard?
  
 Thanks in advance.
  
 Annette Newton
  
  
 LiveNodes.zip





FW: Replication error and Shard Inconsistencies..

2012-12-05 Thread Annette Newton
Sorry to bombard you - final update of the day...

One thing that I have noticed is that we have a lot of connections between
the solr boxes with the connection set to CLOSE_WAIT and they hang around
for ages.

-Original Message-
From: Annette Newton [mailto:annette.new...@servicetick.com] 
Sent: 05 December 2012 13:55
To: solr-user@lucene.apache.org
Subject: FW: Replication error and Shard Inconsistencies..

Update:

I did a full restart of the solr cloud setup, stopped all the instances,
cleared down zookeeper and started them up individually.  I then removed the
index from one of the replicas, restarted solr and it replicated ok.  So I'm
wondering whether this is something that happens over a period of time. 

Also just to let you know I changed the schema a couple of times and
reloaded the cores on all instances previous to the problem.  Don't know if
this could have contributed to the problem.

Thanks.

-Original Message-
From: Annette Newton [mailto:annette.new...@servicetick.com]
Sent: 05 December 2012 09:04
To: solr-user@lucene.apache.org
Subject: RE: Replication error and Shard Inconsistencies..

Hi Mark,

Thanks so much for the reply.

We are using the release version of 4.0..

It's very strange replication appears to be underway, but no files are being
copied across.  I have attached both the log from the new node that I tried
to bring up and the Schema and config we are using.

I think it's probably something weird with our config, so I'm going to play
around with it today.  If I make any progress I'll send an update.

Thanks again.

-Original Message-
From: Mark Miller [mailto:markrmil...@gmail.com]
Sent: 05 December 2012 00:04
To: solr-user@lucene.apache.org
Subject: Re: Replication error and Shard Inconsistencies..

Hey Annette, 

Are you using Solr 4.0 final? A version of 4x or 5x?

Do you have the logs for when the replica tried to catch up to the leader?

Stopping and starting the node is actually a fine thing to do. Perhaps you
can try it again and capture the logs.

If a node is not listed as live but is in the clusterstate, that is fine. It
shouldn't be consulted. To remove it, you either have to unload it with the
core admin api or you could manually delete its registered state under the
node states node that the Overseer looks at.

Also, it would be useful to see the logs of the new node coming up. There
should be info about what happens when it tries to replicate.

It almost sounds like replication is just not working for your setup at all
and that you have to tweak some configuration. You shouldn't see these nodes
as active then though - so we should get to the bottom of this.

- Mark

On Dec 4, 2012, at 4:37 AM, Annette Newton annette.new...@servicetick.com
wrote:

 Hi all,
  
 I have a quite weird issue with Solr cloud.  I have a 4 shard, 2 
 replica
setup, yesterday one of the nodes lost communication with the cloud setup,
which resulted in it trying to run replication, this failed, which has left
me with a Shard (Shard 4) that has one node with 2,833,940 documents on the
leader and 409,837 on the follower - obviously a big discrepancy and this
leads to queries returning differing results depending on which of these
nodes it gets the data from.  There is no indication of a problem on the
admin site other than the big discrepancy in the number of documents.  They
are all marked as active etc.
  
 So I thought that I would force replication to happen again, by 
 stopping
and starting solr (probably the wrong thing to do) but this resulted in no
change.  So I turned off that node and replaced it with a new one.  In
zookeeper live nodes doesn't list that machine but it is still being shown
as active in the ClusterState.json, I have attached images showing this.
This means the new node hasn't replaced the old node but is now a replica on
Shard 1!  Also that node doesn't appear to have replicated Shard 1's data
anyway, it didn't get marked with replicating or anything. 
  
 How do I clear the zookeeper state without taking down the entire solr
cloud setup?  How do I force a node to replicate from the others in the
shard?
  
 Thanks in advance.
  
 Annette Newton
  
  
 LiveNodes.zip







Re: FW: Replication error and Shard Inconsistencies..

2012-12-05 Thread Andre Bois-Crettez

Not sure but, maybe you are running out of file descriptors ?
On each solr instance, look at the dashboard admin page, there is a
bar with File Descriptor Count.
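The same figure can be checked from the command line. A small sketch, assuming a Linux box with procfs (substitute the Solr JVM's pid for your own process):

```python
import os

def open_fd_count(pid="self"):
    """Count open file descriptors for a process via /proc (Linux only).
    For Solr, pass the Solr JVM's pid; the admin dashboard's
    File Descriptor Count bar reports the same figure."""
    return len(os.listdir("/proc/%s/fd" % pid))

print(open_fd_count())
```

Comparing this against the limit from `ulimit -n` shows how close the process is to exhausting descriptors.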

However if this was the case, I would expect to see lots of errors in
the solr logs...

André


On 12/05/2012 06:41 PM, Annette Newton wrote:

Sorry to bombard you - final update of the day...

One thing that I have noticed is that we have a lot of connections between
the solr boxes with the connection set to CLOSE_WAIT and they hang around
for ages.

-Original Message-
From: Annette Newton [mailto:annette.new...@servicetick.com]
Sent: 05 December 2012 13:55
To: solr-user@lucene.apache.org
Subject: FW: Replication error and Shard Inconsistencies..

Update:

I did a full restart of the solr cloud setup, stopped all the instances,
cleared down zookeeper and started them up individually.  I then removed the
index from one of the replicas, restarted solr and it replicated ok.  So I'm
wondering whether this is something that happens over a period of time.

Also just to let you know I changed the schema a couple of times and
reloaded the cores on all instances previous to the problem.  Don't know if
this could have contributed to the problem.

Thanks.

-Original Message-
From: Annette Newton [mailto:annette.new...@servicetick.com]
Sent: 05 December 2012 09:04
To: solr-user@lucene.apache.org
Subject: RE: Replication error and Shard Inconsistencies..

Hi Mark,

Thanks so much for the reply.

We are using the release version of 4.0..

It's very strange replication appears to be underway, but no files are being
copied across.  I have attached both the log from the new node that I tried
to bring up and the Schema and config we are using.

I think it's probably something weird with our config, so I'm going to play
around with it today.  If I make any progress I'll send an update.

Thanks again.

-Original Message-
From: Mark Miller [mailto:markrmil...@gmail.com]
Sent: 05 December 2012 00:04
To: solr-user@lucene.apache.org
Subject: Re: Replication error and Shard Inconsistencies..

Hey Annette,

Are you using Solr 4.0 final? A version of 4x or 5x?

Do you have the logs for when the replica tried to catch up to the leader?

Stopping and starting the node is actually a fine thing to do. Perhaps you
can try it again and capture the logs.

If a node is not listed as live but is in the clusterstate, that is fine. It
shouldn't be consulted. To remove it, you either have to unload it with the
core admin api or you could manually delete its registered state under the
node states node that the Overseer looks at.

Also, it would be useful to see the logs of the new node coming up. There
should be info about what happens when it tries to replicate.

It almost sounds like replication is just not working for your setup at all
and that you have to tweak some configuration. You shouldn't see these nodes
as active then though - so we should get to the bottom of this.

- Mark

On Dec 4, 2012, at 4:37 AM, Annette Newton annette.new...@servicetick.com
wrote:


Hi all,

I have a quite weird issue with Solr cloud.  I have a 4 shard, 2
replica

setup, yesterday one of the nodes lost communication with the cloud setup,
which resulted in it trying to run replication, this failed, which has left
me with a Shard (Shard 4) that has one node with 2,833,940 documents on the
leader and 409,837 on the follower - obviously a big discrepancy and this
leads to queries returning differing results depending on which of these
nodes it gets the data from.  There is no indication of a problem on the
admin site other than the big discrepancy in the number of documents.  They
are all marked as active etc.


So I thought that I would force replication to happen again, by
stopping

and starting solr (probably the wrong thing to do) but this resulted in no
change.  So I turned off that node and replaced it with a new one.  In
zookeeper live nodes doesn't list that machine but it is still being shown
as active in the ClusterState.json, I have attached images showing this.
This means the new node hasn't replaced the old node but is now a replica on
Shard 1!  Also that node doesn't appear to have replicated Shard 1's data
anyway, it didn't get marked with replicating or anything.


How do I clear the zookeeper state without taking down the entire solr

cloud setup?  How do I force a node to replicate from the others in the
shard?


Thanks in advance.

Annette Newton


LiveNodes.zip







--
André Bois-Crettez

Search technology, Kelkoo
http://www.kelkoo.com/


Kelkoo SAS
Société par Actions Simplifiée
Au capital de € 4.168.964,30
Siège social : 8, rue du Sentier 75002 Paris
425 093 069 RCS Paris

Ce message et les pièces jointes sont confidentiels et établis à l'attention 
exclusive de leurs destinataires. Si vous n'êtes pas le destinataire de ce 
message, merci de le détruire et d'en avertir l'expéditeur.


FW: Unsubscribe does not appear to be working

2012-04-30 Thread Kevin Bootz
I continue to receive posts from the solr group even after submitting an 
unsubscribe per the instructions from the ezmlm app. Is there perhaps a delay 
after I confirm the unsubscribe request? 14 posts received so far today. At 
this point I have a delete rule to auto trash any received but unnecessary 
traffic on our server.

Thanks


-Original Message-
From: Kevin Bootz 
Sent: Sunday, April 29, 2012 11:13 AM
To: 'solr-user@lucene.apache.org'
Subject: RE: Unsubscribe does not appear to be working

Thanks all. Second unsubscribe confirmation email sent to the ezmlm app. 
Perhaps it will take this time...


Hi! This is the ezmlm program. I'm managing the solr-user@lucene.apache.org 
mailing list.

I'm working for my owner, who can be reached at 
solr-user-ow...@lucene.apache.org.

To confirm that you would like

   myemail

removed from the solr-user mailing list, please send a short reply to this 
address:

   solr-user-uc.1335712063.gfmnpbjnkpamcooicane-myem...@lucene.apache.org

...



-Original Message-
From: Chris Hostetter [mailto:hossman_luc...@fucit.org] 
Sent: Friday, April 27, 2012 12:44 PM
To: solr-user@lucene.apache.org
Subject: Re: Unsubscribe does not appear to be working


: There is no such thing as a 'solr forum' or a 'solr forum account.'
: 
: If you are subscribed to this list, an email to the unsubscribe
: address will unsubscribe you. If some intermediary or third party is
: forwarding email from this list to you, no one here can help you.

And more specifically:

* sending email to solr-user-help@lucene will generate an automated reply with 
details about how to unsubscribe and even how to tell what address you are 
subscribed with.

* if the automated system isn't working for you, please send all of the 
important details (who you are, what you've tried, what automated responses 
you've gotten) to the solr-user-owner@lucene alias so the moderators can try to 
help you.



-Hoss


FW: unsubscribe

2012-04-30 Thread Kevin Bootz

BTW,
The first request to unsubscribe was sent in February if that helps track 
this down

Thx

From: Kevin Bootz
Sent: Friday, February 24, 2012 7:55 AM
To: 
'solr-user-uc.1330079879.acnmkgjcnnlfgdhmmlkn-kbootz=caci@lucene.apache.org'
Subject: unsubscribe




Fw: confirm subscribe to solr-user@lucene.apache.org

2012-03-29 Thread Rahul Mandaliya



- Forwarded Message -
From: Rahul Mandaliya rahul_i...@yahoo.com
To: solr-user@lucene.apache.org solr-user@lucene.apache.org 
Sent: Thursday, March 29, 2012 9:38 AM
Subject: Fw: confirm subscribe to solr-user@lucene.apache.org
 



hi,
I am confirming my subscription to solr-user@lucene.apache.org
regards,
Rahul




FW: boost question. need boost to take a query like bq

2012-02-11 Thread Bill Bell


I did find a solution, but the output is horrible. Why does the explain output
look so bad?

<lst name="explain"><str name="2H7DF">
6.351252 = (MATCH) boost(*:*,query(specialties_ids:
#1;#0;#0;#0;#0;#0;#0;#0;#0; ,def=0.0)), product of:
  1.0 = (MATCH) MatchAllDocsQuery, product of:
    1.0 = queryNorm
  6.351252 = query(specialties_ids: #1;#0;#0;#0;#0;#0;#0;#0;#0;
,def=0.0)=6.351252
</str>


defType=edismax&boost=query($param)&param=multi_field:87
--


We like the boost parameter in SOLR 3.5 with eDismax.

The question we have is that we would like to replace bq with boost, but we
get the multi-valued field issue when we try to do the equivalent queries:
HTTP ERROR 400
Problem accessing /solr/providersearch/select. Reason:
can not use FieldCache on multivalued field: specialties_ids


q=*:*&bq=multi_field:87^2&defType=dismax

How do you do this using boost?

q=*:*&boost=multi_field:87&defType=edismax

We know we can use bq with edismax, but we like the multiply feature of
boost.

If I change it to a single valued field I get results, but they are all 1.0.

<str name="YFFL5">
1.0 = (MATCH) MatchAllDocsQuery, product of:
  1.0 = queryNorm
</str>

q=*:*&boost=single_field:87&defType=edismax  // this works, but I need it on a
multivalued field
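A sketch of assembling the full working request from the boost=query($param) workaround described earlier (the handler path is the one from the error message above; the field name and value are the poster's, everything else is assumed):

```python
from urllib.parse import urlencode

# boost dereferences a separate request parameter ($param), so the
# multivalued field is applied as a query rather than through the
# FieldCache. Adjust the handler path for your own setup.
params = {
    "q": "*:*",
    "defType": "edismax",
    "boost": "query($param)",
    "param": "multi_field:87",
}
url = "/solr/providersearch/select?" + urlencode(params)
print(url)
```

urlencode takes care of escaping the `$`, `(`, and `:` characters that would otherwise break the query string.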





