Re: Multiple hashJoin or innerJoin

2017-06-16 Thread Zheng Lin Edwin Yeo
This is the full error message from the node for the second example, i.e. the
following query that gets stuck.

innerJoin(innerJoin(
  search(people, q=*:*, fl="personId,name", sort="personId asc"),
  search(pets, q=type:cat, fl="pertsId,petName", sort="personId asc"),
  on="personId=petsId"
),
  search(collection1, q=*:*, fl="collectionId,collectionName",
    sort="collectionId asc"),
  on="personId=collectionId"
)


--
Full error message:

java.io.IOException: java.util.concurrent.TimeoutException: Idle timeout expired: 5/5 ms
at org.eclipse.jetty.util.SharedBlockingCallback$Blocker.block(SharedBlockingCallback.java:219)
at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:220)
at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:496)
at org.apache.commons.io.output.ProxyOutputStream.write(ProxyOutputStream.java:90)
at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282)
at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125)
at java.io.OutputStreamWriter.write(OutputStreamWriter.java:207)
at org.apache.solr.util.FastWriter.flush(FastWriter.java:140)
at org.apache.solr.util.FastWriter.write(FastWriter.java:54)
at org.apache.solr.response.JSONWriter.writeStr(JSONResponseWriter.java:482)
at org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:132)
at org.apache.solr.response.JSONWriter$2.put(JSONResponseWriter.java:559)
at org.apache.solr.handler.ExportWriter$StringFieldWriter.write(ExportWriter.java:1445)
at org.apache.solr.handler.ExportWriter.writeDoc(ExportWriter.java:302)
at org.apache.solr.handler.ExportWriter.lambda$writeDocs$4(ExportWriter.java:268)
at org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)
at org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:193)
at org.apache.solr.response.JSONWriter$1.add(JSONResponseWriter.java:532)
at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:267)
at org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219)
at org.apache.solr.response.JSONWriter.writeIterator(JSONResponseWriter.java:523)
at org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:175)
at org.apache.solr.response.JSONWriter$2.put(JSONResponseWriter.java:559)
at org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219)
at org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)
at org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:193)
at org.apache.solr.response.JSONWriter$2.put(JSONResponseWriter.java:559)
at org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217)
at org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)
at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215)
at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2564)
at org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
at org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:347)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:298)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
   

Re: Multiple hashJoin or innerJoin

2017-06-16 Thread Zheng Lin Edwin Yeo
Hi Joel,

Below are the results which I am getting.

If I use this query;

innerJoin(innerJoin(
  search(people, q=*:*, fl="personId,name", sort="personId asc"),
  search(pets, q=type:cat, fl="pertsId,petName", sort="personId asc"),
  on="personId=petsId"
),
  search(collection1, q=*:*, fl="collectionId,collectionName",
    sort="collectionId asc"),
  on="petsId=collectionId"
)

I will get this exception error.

{"result-set":{"docs":[{"EXCEPTION":"Invalid JoinStream - all incoming
stream comparators (sort) must be a superset of this stream's
equalitor.","EOF":true}]}}
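One workaround sketch, based on the discussion in SOLR-10512 (untested here, and the exact expression is an assumption, not a confirmed fix): make the outer join a hashJoin. hashJoin reads the hashed stream fully into memory and joins on equality, so it does not require the incoming streams to be sorted on the join keys:

hashJoin(
  innerJoin(
    search(people, q=*:*, fl="personId,name", sort="personId asc"),
    search(pets, q=type:cat, fl="petsId,petName", sort="petsId asc"),
    on="personId=petsId"
  ),
  hashed=search(collection1, q=*:*, fl="collectionId,collectionName",
    sort="collectionId asc"),
  on="personId=collectionId"
)

Since the hashed side is buffered entirely in memory, collection1 should be the smaller stream. (This sketch assumes petsId is the intended field name; the original queries spell it "pertsId" in the fl list.)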



But if I use this query:

innerJoin(innerJoin(
  search(people, q=*:*, fl="personId,name", sort="personId asc"),
  search(pets, q=type:cat, fl="pertsId,petName", sort="personId asc"),
  on="personId=petsId"
),
  search(collection1, q=*:*, fl="collectionId,collectionName",
    sort="collectionId asc"),
  on="personId=collectionId"
)

The query gets stuck until I get this message. After that, the whole Solr
instance hangs, and I have to restart Solr to get it working again. This is
in Solr 6.5.1.

2017-06-17 03:16:00.916 WARN
 (zkCallback-8-thread-4-processing-n:192.168.0.1:8983_solr
x:collection1_shard1_replica1 s:shard1 c:collection1
r:core_node1-EventThread) [c:collection1 s:shard1 r:core_node1
x:collection1_shard1_replica1] o.a.s.c.c.ConnectionManager Our previous
ZooKeeper session was expired. Attempting to reconnect to recover
relationship with ZooKeeper...
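Since the join key has a different name in each collection (personId, petsId, collectionId), another approach worth trying is to rename the keys with the select stream decorator before joining, so every stream joins and sorts on the same field name. A sketch only — whether the renamed field carries the sort metadata through select correctly in 6.5.1 would need testing:

innerJoin(
  innerJoin(
    search(people, q=*:*, fl="personId,name", sort="personId asc"),
    select(
      search(pets, q=type:cat, fl="petsId,petName", sort="petsId asc"),
      petsId as personId,
      petName
    ),
    on="personId"
  ),
  select(
    search(collection1, q=*:*, fl="collectionId,collectionName",
      sort="collectionId asc"),
    collectionId as personId,
    collectionName
  ),
  on="personId"
)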


Regards,
Edwin


On 15 June 2017 at 23:36, Zheng Lin Edwin Yeo  wrote:

> Hi Joel,
>
> Yes, I got this error:
>
> {"result-set":{"docs":[{"EXCEPTION":"Invalid JoinStream - all incoming stream 
> comparators (sort) must be a superset of this stream's 
> equalitor.","EOF":true}]}}   
>
>
> Ok, will try out the work around first.
>
> Regards,
> Edwin
>
>
> On 15 June 2017 at 20:16, Joel Bernstein  wrote:
>
>> It looks like you are running into this bug:
>> https://issues.apache.org/jira/browse/SOLR-10512. This has not been
>> resolved yet, but I believe there is a workaround, which is described in
>> the ticket.
>>
>> Joel Bernstein
>> http://joelsolr.blogspot.com/
>>
>> On Wed, Jun 14, 2017 at 10:09 PM, Zheng Lin Edwin Yeo <
>> edwinye...@gmail.com>
>> wrote:
>>
>> > I have found that this is possible, but currently I have problems when
>> > the field name to join on is different in each of the 3 collections.
>> >
>> > For example, in the "people" collection it is called personId, and in
>> > the "pets" collection it is called petsId. But in "collection1" it is
>> > called collectionId, and it won't work when I place it this way below.
>> > Any suggestions on how I can handle this?
>> >
>> > innerJoin(innerJoin(
>> >   search(people, q=*:*, fl="personId,name", sort="personId asc"),
>> >   search(pets, q=type:cat, fl="pertsId,petName", sort="personId asc"),
>> >   on="personId=petsId"
>> > ),
>> >   search(collection1, q=*:*, fl="collectionId,collectionName",
>> >     sort="personId asc"),
>> >   on="personId=collectionId"
>> > )
>> >
>> >
>> > Regards,
>> > Edwin
>> >
>> > On 14 June 2017 at 23:13, Zheng Lin Edwin Yeo 
>> > wrote:
>> >
>> > > Hi,
>> > >
>> > > I'm using Solr 6.5.1.
>> > >
>> > > Is it possible to have multiple hashJoin or innerJoin in the query?
>> > >
>> > > An example will be something like this for innerJoin:
>> > >
>> > > innerJoin(innerJoin(
>> > >   search(people, q=*:*, fl="personId,name", sort="personId asc"),
>> > >   search(pets, q=type:cat, fl="personId,petName", sort="personId
>> asc"),
>> > >   on="personId"
>> > > ),
>> > >   search(collection1, q=*:*, fl="personId,personName", sort="personId
>> > > asc"),
>> > > )
>> > >
>> > > Regards,
>> > > Edwin
>> > >
>> >
>>
>
>


Re: node not joining the rest of nodes in cloud

2017-06-16 Thread Satya Marivada
Never mind. I had a different config for zookeeper on the second VM, which
brought up a different cloud.

On Fri, Jun 16, 2017, 8:48 PM Satya Marivada 
wrote:

> Here is the image:
>
> https://www.dropbox.com/s/hd97j4d3h3q0oyh/solr%20nodes.png?dl=0
>
> There is a node on 002 (port 15101) missing from the cloud.
>
> On Fri, Jun 16, 2017 at 7:46 PM Erick Erickson 
> wrote:
>
>> Images don't come through the mailer, they're stripped. You'll have to put
>> it somewhere else and provide a link.
>>
>> Best,
>> Erick
>>
>>
>> On Fri, Jun 16, 2017 at 3:29 PM, Satya Marivada <
>> satya.chaita...@gmail.com>
>> wrote:
>>
>> > Hi,
>> >
>> > I am running solr-6.3.0. There are 4 nodes. When I start Solr, only 3
>> > nodes are joining the cloud; the fourth one is coming up separately and
>> > not joining the other 3 nodes.
>> > Please see the picture below from the admin screen showing how the
>> > fourth node is not joining. Any suggestions?
>> >
>> > Thanks,
>> > satya
>> >
>> >
>> > [image: image.png]
>> >
>>
>


Re: node not joining the rest of nodes in cloud

2017-06-16 Thread Satya Marivada
Here is the image:

https://www.dropbox.com/s/hd97j4d3h3q0oyh/solr%20nodes.png?dl=0

There is a node on 002 (port 15101) missing from the cloud.

On Fri, Jun 16, 2017 at 7:46 PM Erick Erickson 
wrote:

> Images don't come through the mailer, they're stripped. You'll have to put
> it somewhere else and provide a link.
>
> Best,
> Erick
>
>
> On Fri, Jun 16, 2017 at 3:29 PM, Satya Marivada  >
> wrote:
>
> > Hi,
> >
> > I am running solr-6.3.0. There are 4 nodes. When I start Solr, only 3
> > nodes are joining the cloud; the fourth one is coming up separately and
> > not joining the other 3 nodes.
> > Please see the picture below from the admin screen showing how the
> > fourth node is not joining. Any suggestions?
> >
> > Thanks,
> > satya
> >
> >
> > [image: image.png]
> >
>


Re: node not joining the rest of nodes in cloud

2017-06-16 Thread Erick Erickson
Images don't come through the mailer, they're stripped. You'll have to put
it somewhere else and provide a link.

Best,
Erick


On Fri, Jun 16, 2017 at 3:29 PM, Satya Marivada 
wrote:

> Hi,
>
> I am running solr-6.3.0. There are 4 nodes. When I start Solr, only 3
> nodes are joining the cloud; the fourth one is coming up separately and
> not joining the other 3 nodes.
> Please see the picture below from the admin screen showing how the
> fourth node is not joining. Any suggestions?
>
> Thanks,
> satya
>
>
> [image: image.png]
>


node not joining the rest of nodes in cloud

2017-06-16 Thread Satya Marivada
Hi,

I am running solr-6.3.0. There are 4 nodes. When I start Solr, only 3 nodes
are joining the cloud; the fourth one is coming up separately and not
joining the other 3 nodes.
Please see the picture below from the admin screen showing how the fourth
node is not joining. Any suggestions?

Thanks,
satya


[image: image.png]


Re: IndexSchema uniqueKey is not stored

2017-06-16 Thread Chris Hostetter

Markus: I'm 95% certain that as long as the uniqueKey (effectively) has 
useDocValuesAsStored everything should still work and it's just the error 
checking/warning that's broken -- if you want to test that out and file a 
jira that would be helpful.

the one concern i have is that a deliberate choice (that i was not a 
part of and disagree with) was made that useDocValuesAsStored="true" 
should behave differently than stored="true", and these fields do *NOT* 
match globs in the fl.

So I'm pretty sure if your uniqueKey field is stored="false" 
useDocValuesAsStored="true", something like 'fl=*,div(price,rating)' won't 
return the uniqueKey.

maybe that's "ok" for end users, and they just have to know to ask for 
'fl=id,*,...' explicitly, but i'm not sure if it will break any 
cloud-related shard requests.
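Concretely, with hypothetical field names, the behavior described above means a request like

/select?q=*:*&fl=*,div(price,rating)

would return price and rating but not a docValues-only uniqueKey, while

/select?q=*:*&fl=id,*,div(price,rating)

names the key explicitly and should return it.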



: Date: Fri, 16 Jun 2017 15:34:54 +
: From: Markus Jelsma 
: Reply-To: solr-user@lucene.apache.org
: To: solr-user 
: Subject: IndexSchema uniqueKey is not stored
: 
: Hi,
: 
: Moving over to docValues as stored field i got this:
: o.a.s.s.IndexSchema uniqueKey is not stored - distributed search and 
MoreLikeThis will not work
: 
: But distributed and MLT still work.
: 
: Is the warning actually obsolete these days?
: 
: Regards,
: Markus
: 
: 

-Hoss
http://www.lucidworks.com/


JSON facet API: exclusive lower bound, inclusive upper bound

2017-06-16 Thread chris



The docs for the JSON facet API tell us that the default ranges are inclusive
of the lower bounds and exclusive of the upper bounds. I'd like to do the
opposite (exclusive lower, inclusive upper), but I can't figure out how to
combine the 'include' parameters to make it work.
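For classic range faceting, facet.range.include accepts lower, upper, edge, outer, and all, and the JSON Facet API's range facet accepts the same include values. If the docs are read as above, asking for only "upper" should give exclusive lower / inclusive upper bounds. A sketch with made-up field names, worth verifying on your Solr version:

json.facet={
  prices : {
    type    : range,
    field   : price,
    start   : 0,
    end     : 100,
    gap     : 25,
    include : "upper"
  }
}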

Re: Live update the zookeeper for SOLR

2017-06-16 Thread Xie, Sean
Solr is configured with the zookeeper ensemble as mentioned below.

I will provide logs at a later time.

From: Shawn Heisey <apa...@elyograg.org>
Date: Friday, Jun 16, 2017, 12:27 PM
To: solr-user@lucene.apache.org <solr-user@lucene.apache.org>
Subject: [EXTERNAL] Re: Live update the zookeeper for SOLR

On 6/16/2017 9:05 AM, Xie, Sean wrote:
> Is there a way to keep SOLR alive when zookeeper instances (a 3-instance
> ensemble) are rolling-updated one at a time? It seems the SOLR cluster
> uses one of the zookeeper instances, and when the communication between
> them is broken, it won't be able to reconnect to another zookeeper
> instance and keep itself alive. A service restart is needed in this
> situation. Any way to keep the service alive all the time?

Have you informed Solr about all three of the ZK hosts?  You need a
zkHost like this, with an optional chroot:

zkHost="server1.example.com:2181,server2.example.com:2181,server3.example.com:2181/chroot"

There are more zkHost examples and a better description of the string
format in this javadoc:

https://lucene.apache.org/solr/6_6_0/solr-solrj/org/apache/solr/client/solrj/impl/CloudSolrClient.html#CloudSolrClient-java.lang.String-

If Solr hasn't been explicitly informed about all the hosts in the
ensemble, then it cannot connect to surviving hosts.

I've never heard of the problem you described happening as long as
Zookeeper quorum is maintained and Solr is properly configured.

If you can show that a correctly configured Solr 6.6 server loses
connection to ZK when one of the ZK servers is taken down, that's a bug,
and we need an issue in Jira with documentation of the problem.

Thanks,
Shawn

Confidentiality Notice::  This email, including attachments, may include 
non-public, proprietary, confidential or legally privileged information.  If 
you are not an intended recipient or an authorized agent of an intended 
recipient, you are hereby notified that any dissemination, distribution or 
copying of the information contained in or transmitted with this e-mail is 
unauthorized and strictly prohibited.  If you have received this email in 
error, please notify the sender by replying to this message and permanently 
delete this e-mail, its attachments, and any copies of it immediately.  You 
should not retain, copy or use this e-mail or any attachment for any purpose, 
nor disclose all or any part of the contents to any other person. Thank you.


Re: Facet is not working while querying with group

2017-06-16 Thread Erick Erickson
bq: But I only changed the docvalues not the multivalued

It's the same issue. There is remnant metadata when you change whether
a field uses docValues or not. The error message can be ambiguous
depending on where the issue is encountered.

Best,
Erick

On Fri, Jun 16, 2017 at 9:28 AM, Aman Deep Singh
 wrote:
> But I only changed the docvalues not the multivalued ,
> Anyway I will try to reproduce this by deleting the entire data directory
>
> On 16-Jun-2017 9:52 PM, "Erick Erickson"  wrote:
>
>> bq: deleted entire index from the solr by delete by query command
>>
>> That's not what I meant. Either
>> a> create an entirely new collection starting with the modified schema
>> or
>> b> shut down all your Solr instances. Go into each replica/core and
>> 'rm -rf data'. Restart Solr.
>>
>> That way you're absolutely sure everything's gone.
>>
>> Best,
>> Erick
>>
>> On Fri, Jun 16, 2017 at 9:10 AM, Aman Deep Singh
>>  wrote:
>> > Yes ,it was a new schema(new collection),and after that I change only
>> > docvalues= true using schema api,but before changing the schema I have
>> > deleted entire index from the solr by delete by query command using admin
>> > gui.
>> >
>> > On 16-Jun-2017 9:28 PM, "Erick Erickson" 
>> wrote:
>> >
>> > My guess is you changed the definition of the field from
>> > multiValued="true" to "false" at some point. Even if you re-index all
>> > docs, some of the metadata can still be present.
>> >
> > Did you completely blow away the data? By that I mean remove the entire
>> > data dir (i.e. the parent of the "index" directory) (stand alone) or
>> > create a new collection (SolrCloud)?
>> >
>> > Best,
>> > Erick
>> >
>> > On Fri, Jun 16, 2017 at 1:39 AM, Aman Deep Singh
>> >  wrote:
>> >> Hi,
>> >> Facets are not working when I'm querying with the group command.
>> >> Request:
>> >> facet.field=isBlibliShipping&facet=true&group.facet=true&group.field=productCode&group=true&indent=on&q=*:*&wt=json
>> >>
>> >> Schema for the facet field:
>> >> "false" indexed="true" stored="true"/>
>> >>
>> >> It was throwing an error stating:
>> >> Type mismatch: isBlibliShipping was indexed with multiple values per
>> >> document, use SORTED_SET instead
>> >>
>> >> The full stacktrace is below:
>> >> 2017-06-16 08:20:47.367 INFO  (qtp1205044462-12) [c:productCollection s:shard1 r:core_node1 x:productCollection_shard1_replica1] o.a.s.c.S.Request [productCollection_shard1_replica1]  webapp=/solr path=/select params={q=*:*&facet.field=isBlibliShipping&indent=on&group.facet=true&facet=true&wt=json&group.field=productCode&_=1497601224212&group=true} hits=5346 status=500 QTime=29
>> >> 2017-06-16 08:20:47.369 ERROR (qtp1205044462-12) [c:productCollection s:shard1 r:core_node1 x:productCollection_shard1_replica1] o.a.s.s.HttpSolrCall null:org.apache.solr.common.SolrException: *Exception during facet.field: isBlibliShipping*
>> >> at org.apache.solr.request.SimpleFacets.lambda$getFacetFieldCounts$0(SimpleFacets.java:809)
>> >> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>> >> at org.apache.solr.request.SimpleFacets$3.execute(SimpleFacets.java:742)
>> >> at org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:818)
>> >> at org.apache.solr.handler.component.FacetComponent.getFacetCounts(FacetComponent.java:330)
>> >> at org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:274)
>> >> at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:296)
>> >> at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
>> >> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2477)
>> >> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)
>> >> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529)
>> >> at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>> >> at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>> >> at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>> >> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>> >> at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>> >> at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>> >> at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>> >> at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>> >> at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>> >> at org.eclipse.jetty.server.session.SessionH

Re: (how) do folks use the Cloud Graph (Radial) in the Solr Admin UI?

2017-06-16 Thread Mike Drob
+solr-user

Might get a different audience on this list.

-- Forwarded message --
From: Christine Poerschke (BLOOMBERG/ LONDON) 
Date: Fri, Jun 16, 2017 at 11:43 AM
Subject: (how) do folks use the Cloud Graph (Radial) in the Solr Admin UI?
To: d...@lucene.apache.org


Any thoughts on potentially removing the radial cloud graph?

https://issues.apache.org/jira/browse/SOLR-5405 is the background for the
question and further input and views would be very welcome.

Thanks,
Christine
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


Re: Custom Response writer

2017-06-16 Thread Rick Leir
M,
You ask what is better, and that is often a matter of opinion. My guess is that 
you should have that name stored in the Solr doc, so it can be in the response 
when you have a match. Oh, and I find JSON easier to work with than XML. Cheers 
-- Rick

On June 16, 2017 10:19:03 AM EDT, mganeshs  wrote:
>Hi, 
>
>We have a requirement where, in the response, we would like to add the
>description of an item along with its item id (this field comes from the
>solr response by default), or the employee name along with the employee
>id (this is just an example use case).
>
>In the solr document, what we have is only the item id or employee id.
>But after the query executes and the result is returned, we need to fill
>in the employee name or item name in the response.
>
>So we decided to customize the xmlwriter, and in the place where the xml
>tags are filled, we check for and create the additional tag for the
>employee name or item name. These names / additional info are read from
>our own read-through cache, which is loaded in a static block.
>
>Is there a better way/option than customizing the xmlwriter?
>
>Regards,
>
>
>
>--
>View this message in context:
>http://lucene.472066.n3.nabble.com/Custom-Response-writer-tp4340908.html
>Sent from the Solr - User mailing list archive at Nabble.com.

-- 
Sorry for being brief. Alternate email is rickleir at yahoo dot com 

Re: Facet is not working while querying with group

2017-06-16 Thread Aman Deep Singh
But I only changed the docvalues not the multivalued ,
Anyway I will try to reproduce this by deleting the entire data directory

On 16-Jun-2017 9:52 PM, "Erick Erickson"  wrote:

> bq: deleted entire index from the solr by delete by query command
>
> That's not what I meant. Either
> a> create an entirely new collection starting with the modified schema
> or
> b> shut down all your Solr instances. Go into each replica/core and
> 'rm -rf data'. Restart Solr.
>
> That way you're absolutely sure everything's gone.
>
> Best,
> Erick
>
> On Fri, Jun 16, 2017 at 9:10 AM, Aman Deep Singh
>  wrote:
> > Yes ,it was a new schema(new collection),and after that I change only
> > docvalues= true using schema api,but before changing the schema I have
> > deleted entire index from the solr by delete by query command using admin
> > gui.
> >
> > On 16-Jun-2017 9:28 PM, "Erick Erickson" 
> wrote:
> >
> > My guess is you changed the definition of the field from
> > multiValued="true" to "false" at some point. Even if you re-index all
> > docs, some of the metadata can still be present.
> >
> > Did yo completely blow away the data? By that I mean remove the entire
> > data dir (i.e. the parent of the "index" directory) (stand alone) or
> > create a new collection (SolrCloud)?
> >
> > Best,
> > Erick
> >
> > On Fri, Jun 16, 2017 at 1:39 AM, Aman Deep Singh
> >  wrote:
> >> Hi,
> >> Facets are not working when I'm querying with the group command.
> >> Request:
> >> facet.field=isBlibliShipping&facet=true&group.facet=true&group.field=productCode&group=true&indent=on&q=*:*&wt=json
> >>
> >> Schema for the facet field:
> >> "false" indexed="true" stored="true"/>
> >>
> >> It was throwing an error stating:
> >> Type mismatch: isBlibliShipping was indexed with multiple values per
> >> document, use SORTED_SET instead
> >>
> >> The full stacktrace is below:
> >> 2017-06-16 08:20:47.367 INFO  (qtp1205044462-12) [c:productCollection s:shard1 r:core_node1 x:productCollection_shard1_replica1] o.a.s.c.S.Request [productCollection_shard1_replica1]  webapp=/solr path=/select params={q=*:*&facet.field=isBlibliShipping&indent=on&group.facet=true&facet=true&wt=json&group.field=productCode&_=1497601224212&group=true} hits=5346 status=500 QTime=29
> >> 2017-06-16 08:20:47.369 ERROR (qtp1205044462-12) [c:productCollection s:shard1 r:core_node1 x:productCollection_shard1_replica1] o.a.s.s.HttpSolrCall null:org.apache.solr.common.SolrException: *Exception during facet.field: isBlibliShipping*
> >> at org.apache.solr.request.SimpleFacets.lambda$getFacetFieldCounts$0(SimpleFacets.java:809)
> >> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> >> at org.apache.solr.request.SimpleFacets$3.execute(SimpleFacets.java:742)
> >> at org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:818)
> >> at org.apache.solr.handler.component.FacetComponent.getFacetCounts(FacetComponent.java:330)
> >> at org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:274)
> >> at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:296)
> >> at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
> >> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2477)
> >> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)
> >> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529)
> >> at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
> >> at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
> >> at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
> >> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
> >> at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> >> at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> >> at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> >> at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
> >> at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
> >> at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> >> at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
> >> at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> >> at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> >> at org.eclipse.jetty.server.handler.HandlerCollection

Re: Live update the zookeeper for SOLR

2017-06-16 Thread Shawn Heisey
On 6/16/2017 9:05 AM, Xie, Sean wrote:
> Is there a way to keep SOLR alive when zookeeper instances (a 3-instance
> ensemble) are rolling-updated one at a time? It seems the SOLR cluster
> uses one of the zookeeper instances, and when the communication between
> them is broken, it won't be able to reconnect to another zookeeper
> instance and keep itself alive. A service restart is needed in this
> situation. Any way to keep the service alive all the time?

Have you informed Solr about all three of the ZK hosts?  You need a
zkHost like this, with an optional chroot:

zkHost="server1.example.com:2181,server2.example.com:2181,server3.example.com:2181/chroot"

There are more zkHost examples and a better description of the string
format in this javadoc:

https://lucene.apache.org/solr/6_6_0/solr-solrj/org/apache/solr/client/solrj/impl/CloudSolrClient.html#CloudSolrClient-java.lang.String-

If Solr hasn't been explicitly informed about all the hosts in the
ensemble, then it cannot connect to surviving hosts.
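In practice the full ensemble string is usually handed to Solr via ZK_HOST in solr.in.sh, or with -z at startup. A sketch with placeholder host names (the chroot is optional):

# solr.in.sh
ZK_HOST="server1.example.com:2181,server2.example.com:2181,server3.example.com:2181/chroot"

# or equivalently on the command line
bin/solr start -c -z "server1.example.com:2181,server2.example.com:2181,server3.example.com:2181/chroot"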

I've never heard of the problem you described happening as long as
Zookeeper quorum is maintained and Solr is properly configured.

If you can show that a correctly configured Solr 6.6 server loses
connection to ZK when one of the ZK servers is taken down, that's a bug,
and we need an issue in Jira with documentation of the problem.

Thanks,
Shawn



Re: Facet is not working while querying with group

2017-06-16 Thread Erick Erickson
bq: deleted entire index from the solr by delete by query command

That's not what I meant. Either
a> create an entirely new collection starting with the modified schema
or
b> shut down all your Solr instances. Go into each replica/core and
'rm -rf data'. Restart Solr.

That way you're absolutely sure everything's gone.
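The docValues change then has to be present in the schema that the new (or emptied) collection starts from — e.g. something along these lines, where the type and the remaining attributes are assumptions based on this thread:

<field name="isBlibliShipping" type="string" indexed="true" stored="true"
       docValues="true" multiValued="false"/>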

Best,
Erick

On Fri, Jun 16, 2017 at 9:10 AM, Aman Deep Singh
 wrote:
> Yes ,it was a new schema(new collection),and after that I change only
> docvalues= true using schema api,but before changing the schema I have
> deleted entire index from the solr by delete by query command using admin
> gui.
>
> On 16-Jun-2017 9:28 PM, "Erick Erickson"  wrote:
>
> My guess is you changed the definition of the field from
> multiValued="true" to "false" at some point. Even if you re-index all
> docs, some of the metadata can still be present.
>
> Did you completely blow away the data? By that I mean remove the entire
> data dir (i.e. the parent of the "index" directory) (stand alone) or
> create a new collection (SolrCloud)?
>
> Best,
> Erick
>
> On Fri, Jun 16, 2017 at 1:39 AM, Aman Deep Singh
>  wrote:
>> Hi,
>> Facets are not working when I'm querying with the group command.
>> Request:
>> facet.field=isBlibliShipping&facet=true&group.facet=true&group.field=productCode&group=true&indent=on&q=*:*&wt=json
>>
>> Schema for the facet field:
>> "false" indexed="true" stored="true"/>
>>
>> It was throwing an error stating:
>> Type mismatch: isBlibliShipping was indexed with multiple values per
>> document, use SORTED_SET instead
>>
>> The full stacktrace is below:
>> 2017-06-16 08:20:47.367 INFO  (qtp1205044462-12) [c:productCollection s:shard1 r:core_node1 x:productCollection_shard1_replica1] o.a.s.c.S.Request [productCollection_shard1_replica1]  webapp=/solr path=/select params={q=*:*&facet.field=isBlibliShipping&indent=on&group.facet=true&facet=true&wt=json&group.field=productCode&_=1497601224212&group=true} hits=5346 status=500 QTime=29
>> 2017-06-16 08:20:47.369 ERROR (qtp1205044462-12) [c:productCollection s:shard1 r:core_node1 x:productCollection_shard1_replica1] o.a.s.s.HttpSolrCall null:org.apache.solr.common.SolrException: *Exception during facet.field: isBlibliShipping*
>> at org.apache.solr.request.SimpleFacets.lambda$getFacetFieldCounts$0(SimpleFacets.java:809)
>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>> at org.apache.solr.request.SimpleFacets$3.execute(SimpleFacets.java:742)
>> at org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:818)
>> at org.apache.solr.handler.component.FacetComponent.getFacetCounts(FacetComponent.java:330)
>> at org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:274)
>> at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:296)
>> at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
>> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2477)
>> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)
>> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529)
>> at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>> at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>> at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>> at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>> at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>> at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>> at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>> at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>> at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>> at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>> at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>> at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>> at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>> at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>> at org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>> at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>> at org.eclipse.jetty.server.Server.handle(Server.java:534)
>> at org.eclipse.jetty.server.HttpChannel.h

Re: Facet is not working while querying with group

2017-06-16 Thread Aman Deep Singh
Yes, it was a new schema (new collection), and after that I changed only
docValues=true using the Schema API. Before changing the schema, I had
deleted the entire index from Solr with a delete-by-query command in the
admin GUI.

On 16-Jun-2017 9:28 PM, "Erick Erickson"  wrote:

My guess is you changed the definition of the field from
multiValued="true" to "false" at some point. Even if you re-index all
docs, some of the metadata can still be present.

Did you completely blow away the data? By that I mean, did you remove the entire
data dir (i.e. the parent of the "index" directory) in standalone mode, or
create a new collection in SolrCloud?

Best,
Erick

On Fri, Jun 16, 2017 at 1:39 AM, Aman Deep Singh
 wrote:
> Hi,
> Facets are not working when i'm querying with group command
> request-
> facet.field=isBlibliShipping&facet=true&group.facet=true&
group.field=productCode&group=true&indent=on&q=*:*&wt=json
>
> Schema for facet field
>  "false" indexed="true"stored="true"/>
>
> It was throwing error stating
> Type mismatch: isBlibliShipping was indexed with multiple values per
> document, use SORTED_SET instead
>

Re: Facet is not working while querying with group

2017-06-16 Thread Erick Erickson
My guess is you changed the definition of the field from
multiValued="true" to "false" at some point. Even if you re-index all
docs, some of the metadata can still be present.

Did you completely blow away the data? By that I mean, did you remove the entire
data dir (i.e. the parent of the "index" directory) in standalone mode, or
create a new collection in SolrCloud?
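Concretely, "blowing away the data" can look like the following (a sketch only; the port, core name, and paths are illustrative and depend on your install):

```shell
# Standalone: stop Solr, delete the core's whole data dir (the parent of
# the "index" directory), then restart and re-index from scratch.
# "server/solr/productCollection" is an example instanceDir.
bin/solr stop -p 8983
rm -rf server/solr/productCollection/data
bin/solr start -p 8983

# SolrCloud: the simplest clean slate is a brand-new collection.
bin/solr create -c productCollection_v2 -shards 1 -replicationFactor 1
```

Either way the point is the same: some index metadata lives alongside the documents on disk, so deleting the documents alone does not reset it.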

Best,
Erick

On Fri, Jun 16, 2017 at 1:39 AM, Aman Deep Singh
 wrote:
> Hi,
> Facets are not working when i'm querying with group command
> request-
> facet.field=isBlibliShipping&facet=true&group.facet=true&group.field=productCode&group=true&indent=on&q=*:*&wt=json
>
> Schema for facet field
>  "false" indexed="true"stored="true"/>
>
> It was throwing error stating
> Type mismatch: isBlibliShipping was indexed with multiple values per
> document, use SORTED_SET instead
>

Re: Live update the zookeeper for SOLR

2017-06-16 Thread Chris Ulicny
> It seems the SOLR cluster uses one of the zookeeper instances

Is solr configured to point to exactly one host or to a list of the hosts
in the zookeeper ensemble? The zkHost value should contain all of the zk
hosts.
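A zkHost that names the whole ensemble looks like this (a sketch; the host names and the /solr chroot are examples, not taken from the thread):

```shell
# solr.in.sh -- list every ZooKeeper node, not just one, optionally with a chroot.
ZK_HOST="zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181/solr"
```

With all nodes listed, Solr's ZooKeeper client can fail over to a surviving node while the ensemble is rolling-updated.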

We had a similar issue where the solr instance was only pointing to zk1 and
none of the rest. When zk1 went down, the solr instance became inaccessible,
since it couldn't find the rest of the zk ensemble.

Best,
Chris

On Fri, Jun 16, 2017 at 11:05 AM Xie, Sean  wrote:

> Is there a way to keep SOLR alive when zookeeper instances (3 instance
> ensemble) are rolling updated one at a time? It seems SOLR cluster use one
> of the zookeeper instance and when the communication is broken in between,
> it won’t be able to reconnect to another zookeeper instance and keep itself
> alive. A service restart is needed in this situation. Any way to keep the
> service alive all the time?
>
> Thanks
> Sean
>
> Confidentiality Notice::  This email, including attachments, may include
> non-public, proprietary, confidential or legally privileged information.
> If you are not an intended recipient or an authorized agent of an intended
> recipient, you are hereby notified that any dissemination, distribution or
> copying of the information contained in or transmitted with this e-mail is
> unauthorized and strictly prohibited.  If you have received this email in
> error, please notify the sender by replying to this message and permanently
> delete this e-mail, its attachments, and any copies of it immediately.  You
> should not retain, copy or use this e-mail or any attachment for any
> purpose, nor disclose all or any part of the contents to any other person.
> Thank you.
>


IndexSchema uniqueKey is not stored

2017-06-16 Thread Markus Jelsma
Hi,

Moving over to docValues as stored fields, I got this warning:
o.a.s.s.IndexSchema uniqueKey is not stored - distributed search and 
MoreLikeThis will not work

But distributed and MLT still work.

Is the warning actually obsolete these days?

Regards,
Markus



Live update the zookeeper for SOLR

2017-06-16 Thread Xie, Sean
Is there a way to keep SOLR alive while the zookeeper instances (a 3-instance
ensemble) are rolling-updated one at a time? It seems the SOLR cluster uses one of
the zookeeper instances, and when the communication in between is broken, it
won't be able to reconnect to another zookeeper instance and keep itself alive.
A service restart is needed in this situation. Is there any way to keep the
service alive all the time?

Thanks
Sean

Confidentiality Notice::  This email, including attachments, may include 
non-public, proprietary, confidential or legally privileged information.  If 
you are not an intended recipient or an authorized agent of an intended 
recipient, you are hereby notified that any dissemination, distribution or 
copying of the information contained in or transmitted with this e-mail is 
unauthorized and strictly prohibited.  If you have received this email in 
error, please notify the sender by replying to this message and permanently 
delete this e-mail, its attachments, and any copies of it immediately.  You 
should not retain, copy or use this e-mail or any attachment for any purpose, 
nor disclose all or any part of the contents to any other person. Thank you.


RE: Custom Response writer

2017-06-16 Thread Markus Jelsma
Yes, index the employee and item names instead of only their IDs. And if you
can't for some reason, I'd implement a DocTransformer instead of a
ResponseWriter.

Regards,
Markus

 
 
-Original message-
> From:mganeshs 
> Sent: Friday 16th June 2017 16:19
> To: solr-user@lucene.apache.org
> Subject: Custom Response writer
> 
> Hi, 
> 
> We have a requirement where, in the response, we would like to add the
> description of an item along with its item id (this field comes from the solr
> response by default), or the employee name along with the employee id (this
> is just an example use case).
> 
> In the solr document, all we have is the item id or employee id. But after
> the query executes and the result is returned, we need to fill in the
> employee name or item name in the response.
> 
> So we decided to customize the XMLWriter: in the place where the xml tags
> are filled, we check for and create an additional tag for the employee name
> or item name. These names / additional info are read from our own
> read-through cache, which is loaded in a static block.
> 
> Is there a better way/option than customizing the XMLWriter? 
> 
> Regards,
> 
> 
> 
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Custom-Response-writer-tp4340908.html
> Sent from the Solr - User mailing list archive at Nabble.com.
> 


Custom Response writer

2017-06-16 Thread mganeshs
Hi, 

We have a requirement where, in the response, we would like to add the
description of an item along with its item id (this field comes from the solr
response by default), or the employee name along with the employee id (this is
just an example use case).

In the solr document, all we have is the item id or employee id. But after
the query executes and the result is returned, we need to fill in the
employee name or item name in the response.

So we decided to customize the XMLWriter: in the place where the xml tags
are filled, we check for and create an additional tag for the employee name or
item name. These names / additional info are read from our own read-through
cache, which is loaded in a static block.

Is there a better way/option than customizing the XMLWriter? 

Regards,



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Custom-Response-writer-tp4340908.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Possible bug in Solrj-6.6.0

2017-06-16 Thread Joel Bernstein
Yes that is correct.

Joel Bernstein
http://joelsolr.blogspot.com/

On Fri, Jun 16, 2017 at 9:55 AM, Aman Deep Singh 
wrote:

> Thanks Joel,
> It is working now.
> One quick question: since you say the SolrClientCache can be reused, can I
> create a single instance of SolrClientCache and use it again and again,
> given that we are using one single bean for the client object?
>
>
> On 16-Jun-2017 6:28 PM, "Joel Bernstein"  wrote:
>
> The issue is that in 6.6 CloudSolrStream is expecting a StreamContext to be
> set. So you'll need to update your code to do this. This was part of
> changes made to make streaming work in non-SolrCloud environments.
>
> You also need to create a SolrClientCache which caches the SolrClients.
>
> Example:
>
> SolrClientCache cache = new SolrClientCache();
>
> StreamContext streamContext = new StreamContext();
>
> streamContext.setSolrClientCache(cache);
>
> CloudSolrStream stream = new CloudSolrStream(...);
> stream.setStreamContext(streamContext);
> stream.open();
> 
>
> The SolrClientCache can be shared by multiple requests and should be closed
> when the application exits.
>
>
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Fri, Jun 16, 2017 at 2:17 AM, Aman Deep Singh <
> amandeep.coo...@gmail.com>
> wrote:
>
> > Hi,
> > I think there is a possible bug in Solrj version 6.6.0, as streaming is
> > not working. I have a piece of code:
> >
> > public Set<String> getAllIds(String requestId, String field) {
> > LOG.info("Now Trying to fetch all the ids from SOLR for request Id
> > {}", requestId);
> > Map<String, String> props = new HashMap<>();
> > props.put("q", field + ":*");
> > props.put("qt", "/export");
> > props.put("sort", field + " asc");
> > props.put("fl", field);
> > Set<String> idSet = new HashSet<>();
> > try (CloudSolrStream cloudSolrStream = new
> > CloudSolrStream(cloudSolrClient.getZkHost(),
> > cloudSolrClient.getDefaultCollection(), new
> > MapSolrParams(props))) {
> > cloudSolrStream.open();
> > while (true) {
> > Tuple tuple = cloudSolrStream.read();
> > if (tuple.EOF) {
> > break;
> > }
> > idSet.add(tuple.getString(field));
> > }
> > return idSet;
> > } catch (IOException ex) {
> > LOG.error("Error while fetching the ids from SOLR for request
> > Id {} ", requestId, ex);
> > }
> > return Collections.emptySet();
> > }
> >
> >
> > This works in Solrj 6.5.1 but starts throwing an error
> > after upgrading to solrj-6.6.0:
> >
> > java.io.IOException: java.lang.NullPointerException
> > at org.apache.solr.client.solrj.io.stream.CloudSolrStream.
> > constructStreams(CloudSolrStream.java:408)
> > ~[solr-solrj-6.6.0.jar:6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241
> > - ishan - 2017-05-30 07:32:54]
> > at org.apache.solr.client.solrj.io.stream.CloudSolrStream.
> > open(CloudSolrStream.java:299)
> > ~[solr-solrj-6.6.0.jar:6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241
> > - ishan - 2017-05-30 07:32:54]
> >
> >
> > Thanks,
> >
> > Aman Deep Singh
> >
>


Re: Possible bug in Solrj-6.6.0

2017-06-16 Thread Aman Deep Singh
Thanks Joel,
It is working now.
One quick question: since you say the SolrClientCache can be reused, can I
create a single instance of SolrClientCache and use it again and again, given
that we are using one single bean for the client object?


On 16-Jun-2017 6:28 PM, "Joel Bernstein"  wrote:

The issue is that in 6.6 CloudSolrStream is expecting a StreamContext to be
set. So you'll need to update your code to do this. This was part of
changes made to make streaming work in non-SolrCloud environments.

You also need to create a SolrClientCache which caches the SolrClients.

Example:

SolrClientCache cache = new SolrClientCache();

StreamContext streamContext = new StreamContext();

streamContext.setSolrClientCache(cache);

CloudSolrStream stream = new CloudSolrStream(...);
stream.setStreamContext(streamContext);
stream.open();


The SolrClientCache can be shared by multiple requests and should be closed
when the application exits.

Joel Bernstein
http://joelsolr.blogspot.com/

On Fri, Jun 16, 2017 at 2:17 AM, Aman Deep Singh 
wrote:

> Hi,
> I think there is a possible bug in Solrj version 6.6.0, as streaming is
> not working. I have a piece of code:
>
> public Set<String> getAllIds(String requestId, String field) {
> LOG.info("Now Trying to fetch all the ids from SOLR for request Id
> {}", requestId);
> Map<String, String> props = new HashMap<>();
> props.put("q", field + ":*");
> props.put("qt", "/export");
> props.put("sort", field + " asc");
> props.put("fl", field);
> Set<String> idSet = new HashSet<>();
> try (CloudSolrStream cloudSolrStream = new
> CloudSolrStream(cloudSolrClient.getZkHost(),
> cloudSolrClient.getDefaultCollection(), new
> MapSolrParams(props))) {
> cloudSolrStream.open();
> while (true) {
> Tuple tuple = cloudSolrStream.read();
> if (tuple.EOF) {
> break;
> }
> idSet.add(tuple.getString(field));
> }
> return idSet;
> } catch (IOException ex) {
> LOG.error("Error while fetching the ids from SOLR for request
> Id {} ", requestId, ex);
> }
> return Collections.emptySet();
> }
>
>
> This works in Solrj 6.5.1 but starts throwing an error
> after upgrading to solrj-6.6.0:
>
> java.io.IOException: java.lang.NullPointerException
> at org.apache.solr.client.solrj.io.stream.CloudSolrStream.
> constructStreams(CloudSolrStream.java:408)
> ~[solr-solrj-6.6.0.jar:6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241
> - ishan - 2017-05-30 07:32:54]
> at org.apache.solr.client.solrj.io.stream.CloudSolrStream.
> open(CloudSolrStream.java:299)
> ~[solr-solrj-6.6.0.jar:6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241
> - ishan - 2017-05-30 07:32:54]
>
>
> Thanks,
>
> Aman Deep Singh
>


Re: Possible bug in Solrj-6.6.0

2017-06-16 Thread Joel Bernstein
The issue is that in 6.6 CloudSolrStream is expecting a StreamContext to be
set. So you'll need to update your code to do this. This was part of
changes made to make streaming work in non-SolrCloud environments.

You also need to create a SolrClientCache which caches the SolrClients.

Example:

SolrClientCache cache = new SolrClientCache();

StreamContext streamContext = new StreamContext();

streamContext.setSolrClientCache(cache);

CloudSolrStream stream = new CloudSolrStream(...);
stream.setStreamContext(streamContext);
stream.open();


The SolrClientCache can be shared by multiple requests and should be closed
when the application exits.

Joel Bernstein
http://joelsolr.blogspot.com/

On Fri, Jun 16, 2017 at 2:17 AM, Aman Deep Singh 
wrote:

> Hi,
> I think there is a possible bug in Solrj version 6.6.0, as streaming is not
> working. I have a piece of code:
>
> public Set<String> getAllIds(String requestId, String field) {
> LOG.info("Now Trying to fetch all the ids from SOLR for request Id
> {}", requestId);
> Map<String, String> props = new HashMap<>();
> props.put("q", field + ":*");
> props.put("qt", "/export");
> props.put("sort", field + " asc");
> props.put("fl", field);
> Set<String> idSet = new HashSet<>();
> try (CloudSolrStream cloudSolrStream = new
> CloudSolrStream(cloudSolrClient.getZkHost(),
> cloudSolrClient.getDefaultCollection(), new
> MapSolrParams(props))) {
> cloudSolrStream.open();
> while (true) {
> Tuple tuple = cloudSolrStream.read();
> if (tuple.EOF) {
> break;
> }
> idSet.add(tuple.getString(field));
> }
> return idSet;
> } catch (IOException ex) {
> LOG.error("Error while fetching the ids from SOLR for request
> Id {} ", requestId, ex);
> }
> return Collections.emptySet();
> }
>
>
> This works in Solrj 6.5.1 but starts throwing an error
> after upgrading to solrj-6.6.0:
>
> java.io.IOException: java.lang.NullPointerException
> at org.apache.solr.client.solrj.io.stream.CloudSolrStream.
> constructStreams(CloudSolrStream.java:408)
> ~[solr-solrj-6.6.0.jar:6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241
> - ishan - 2017-05-30 07:32:54]
> at org.apache.solr.client.solrj.io.stream.CloudSolrStream.
> open(CloudSolrStream.java:299)
> ~[solr-solrj-6.6.0.jar:6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241
> - ishan - 2017-05-30 07:32:54]
>
>
> Thanks,
>
> Aman Deep Singh
>


Facet is not working while querying with group

2017-06-16 Thread Aman Deep Singh
Hi,
Facets are not working when I'm querying with the group command.
request-
facet.field=isBlibliShipping&facet=true&group.facet=true&group.field=productCode&group=true&indent=on&q=*:*&wt=json

Schema for facet field


It was throwing error stating
Type mismatch: isBlibliShipping was indexed with multiple values per
document, use SORTED_SET instead

The full stacktrace is attached as below
2017-06-16 08:20:47.367 INFO  (qtp1205044462-12) [c:productCollection
s:shard1 r:core_node1 x:productCollection_shard1_replica1]
o.a.s.c.S.Request [productCollection_shard1_replica1]  webapp=/solr
path=/select
params={q=*:*&facet.field=isBlibliShipping&indent=on&group.facet=true&facet=true&wt=json&group.field=productCode&_=1497601224212&group=true}
hits=5346 status=500 QTime=29
2017-06-16 08:20:47.369 ERROR (qtp1205044462-12) [c:productCollection
s:shard1 r:core_node1 x:productCollection_shard1_replica1]
o.a.s.s.HttpSolrCall null:org.apache.solr.common.SolrException: *Exception
during facet.field: isBlibliShipping*
at
org.apache.solr.request.SimpleFacets.lambda$getFacetFieldCounts$0(SimpleFacets.java:809)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.solr.request.SimpleFacets$3.execute(SimpleFacets.java:742)
at
org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:818)
at
org.apache.solr.handler.component.FacetComponent.getFacetCounts(FacetComponent.java:330)
at
org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:274)
at
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:296)
at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2477)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
at
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
at
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
at
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
at
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:534)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
at
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
at
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
at
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
at
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.lang.Thread.run(Thread.java:745)
Caused by: *java.lang.IllegalStateException: Type mismatch:
isBlibliShipping was indexed with multiple values per document, use
SORTED_SET instead*
at
org.apache.solr.uninverting.FieldCacheImpl$SortedDocValuesCache.createValue(FieldCacheImpl.java:799)
at
org.apache.solr.uninverting.FieldCacheImpl$Cache.get(FieldCacheImpl.java:187)
at
org.apache.solr.uninverting.FieldCacheImpl.getTermsIndex(FieldCacheImpl.java:767)
at
org.apache.solr.uninverting.FieldCacheImpl.getTermsIndex(FieldCacheImpl.java:747)
at
org.apa