JDBC: Collection not found with count(*) and uppercase name

2016-07-17 Thread Damien Kamerman
Hi,

I'm on Solr 6.1 and testing a JDBC query from SquirrelSQL and I find this
query works OK:
select id from c_D02016

But this query fails with the error "Collection not found: c_d02016":
select count(*) from c_D02016

It seems Solr expects the collection/table name to be lower-case. Has
anyone else seen this?
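The behavior is consistent with the aggregate (count) code path folding the table name to lower case before the collection lookup, while the plain select path uses it as-is. A toy sketch of that mismatch (purely illustrative, not Solr's actual code):

```java
import java.util.Set;

// Toy illustration (NOT Solr's actual code): if the aggregate code path
// folds the table name to lower case before looking up the collection,
// a mixed-case collection name resolves for one query shape but not the other.
public class CaseFoldSketch {
    static final Set<String> collections = Set.of("c_D02016");

    static boolean lookupAsIs(String table) {
        return collections.contains(table);
    }

    static boolean lookupFolded(String table) {
        return collections.contains(table.toLowerCase());
    }

    public static void main(String[] args) {
        System.out.println(lookupAsIs("c_D02016"));    // true  -> "select id" works
        System.out.println(lookupFolded("c_D02016"));  // false -> count(*) fails with "Collection not found: c_d02016"
    }
}
```

If this is the cause, naming collections in lower case would sidestep the error until the case handling is fixed.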

Here's the full log from the server:
ERROR - 2016-07-18 13:46:23.711; [c:ip_0 s:shard1 r:core_node1 x:c_0_shard1_replica1] org.apache.solr.common.SolrException; java.io.IOException: org.apache.solr.common.SolrException: Collection not found: c_d02016
    at org.apache.solr.client.solrj.io.stream.StatsStream.open(StatsStream.java:221)
    at org.apache.solr.handler.SQLHandler$MetadataStream.open(SQLHandler.java:1578)
    at org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:51)
    at org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:423)
    at org.apache.solr.response.TextResponseWriter.writeTupleStream(TextResponseWriter.java:304)
    at org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:168)
    at org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:183)
    at org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:299)
    at org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:95)
    at org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:60)
    at org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
    at org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:731)
    at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:473)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
    at org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:318)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
    at org.eclipse.jetty.server.Server.handle(Server.java:518)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
    at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.common.SolrException: Collection not found: ip_tiger_d02016-00
    at org.apache.solr.client.solrj.impl.CloudSolrClient.getCollectionNames(CloudSolrClient.java:1248)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:961)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:934)
    at org.apache.solr.client.solrj.io.stream.StatsStream.open(StatsStream.java:218)
    ... 40 more

Regards,
Damien.


Re: solr server heap out

2016-07-17 Thread 신재동
Hi.

I had a very similar problem.

If you see out-of-memory errors or frequent major GCs while monitoring
with jvisualvm, you may have a memory leak or a GC problem.

In my case it was a GC problem, which I solved by switching to G1GC.
You can find a number of articles about that.

Regards,
Jade Shin.
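For reference, switching Solr to G1GC is usually done via the GC_TUNE variable in solr.in.sh. A minimal sketch; the flag values below are illustrative starting points, not tuned settings:

```shell
# Hypothetical solr.in.sh fragment: run Solr's JVM with G1GC.
# Tune region size and pause target for your heap size and workload.
GC_TUNE="-XX:+UseG1GC \
  -XX:+ParallelRefProcEnabled \
  -XX:G1HeapRegionSize=8m \
  -XX:MaxGCPauseMillis=250"
```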

On 2016-07-13 at 5:56 PM, "sara hajili" wrote:

Hi. You can monitor the heap size via the JVM: run Solr, then open
jvisualvm (in "javahome/bin") and watch the heap Solr is using. You can
also take a heap dump and inspect the objects on the heap to see why
you are getting the heap-space error.
On Jul 13, 2016 9:55 AM, "Midas A"  wrote:

> Hi,
> I frequently get a Solr heap out-of-memory error once or twice a day.
> What could be the possible reasons, and is there any way to log the
> memory used by each query in solr.log?
>
> Thanks ,
> Abhishek Tiwari
>


Re: How to replace a solr cloud node

2016-07-17 Thread Erick Erickson
I recommend against manually editing your znodes;
that's a recipe for disaster unless you know
_exactly_ what you are doing. One mistake and you
risk your collections. And since you're "continuously
indexing", I don't see what good it would do you anyway:
by the time you finished editing the znode and
(presumably) copying the index over, it would be out of
sync with the leader and do a full replication anyway.

Instead, just bring up a 5th Solr node and use the
Collections API ADDREPLICA command to add a new
replica on it corresponding to each replica on the node
you're replacing. All the replication & etc will just happen
automatically, no down time, no problems.

You can specify exactly what node the replica goes on
etc.

I'd then issue a DELETEREPLICA on all the replicas on
the bad node to remove them from the cluster state.

So at the end of the process you may have shards like
collection1_shard1_replica1, collection1_shard1_replica3
Not having collection1_shard1_replica2 is of no
consequence.

One caution. While the replica is being added and while
the sync is going on, incoming updates will be written
to the new replica's tlog and replayed as the final step in the
sync. Under very heavy indexing loads (thousands of docs
per second) the sync can take quite a while to complete.
You do _NOT_ need to stop indexing or even throttle it,
but if you can reduce the indexing load your ADDREPLICA
steps will go faster.

Best,
Erick
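The ADDREPLICA / DELETEREPLICA steps above boil down to two Collections API HTTP calls. A sketch of building those request URLs; the host, collection, shard, node, and replica names are illustrative assumptions, not values from this thread:

```java
// Sketch of the Collections API requests used to replace a node:
// first ADDREPLICA pinned to the new node, then DELETEREPLICA for the
// copy on the failing node once the new replica is active.
public class ReplaceNodeSketch {

    static String addReplica(String solrUrl, String collection, String shard, String targetNode) {
        return solrUrl + "/admin/collections?action=ADDREPLICA"
                + "&collection=" + collection
                + "&shard=" + shard
                + "&node=" + targetNode;   // pin the new replica to a specific node
    }

    static String deleteReplica(String solrUrl, String collection, String shard, String replica) {
        return solrUrl + "/admin/collections?action=DELETEREPLICA"
                + "&collection=" + collection
                + "&shard=" + shard
                + "&replica=" + replica;
    }

    public static void main(String[] args) {
        // 1) For each replica hosted on the failing node, add a new one on the 5th node.
        System.out.println(addReplica("http://solr1:8983/solr", "collection1", "shard1",
                "newhost:8983_solr"));
        // 2) Once the new replica is active, remove the old one from the cluster state.
        System.out.println(deleteReplica("http://solr1:8983/solr", "collection1", "shard1",
                "core_node2"));
    }
}
```

With ~100 collections, you would loop these two calls over every replica on the node being retired.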

On Sun, Jul 17, 2016 at 1:21 PM, vidit.asthana  wrote:
> I have a 4 machine cluster with ~100 collections. Each collection has
> numShards=2 and replicationFactor=2.  Data directory size of each node is
> ~120GB.  One of my node is having some hardware issue, so I need to replace
> it. How can I do that without taking whole cluster down. IP of new node will
> be different. Solr version is 5.1.0.
>
> I cannot take a downtime. Continuous indexing and querying is happening. I
> know how to do it by manually editing state.json of all collections but I
> think its unsafe to do it when cluster is up and might create inconsistency.
>
>
>
>
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/How-to-replace-a-solr-cloud-node-tp4287556.html
> Sent from the Solr - User mailing list archive at Nabble.com.


How to replace a solr cloud node

2016-07-17 Thread vidit.asthana
I have a 4-machine cluster with ~100 collections. Each collection has
numShards=2 and replicationFactor=2. The data directory size of each node is
~120GB. One of my nodes is having a hardware issue, so I need to replace
it. How can I do that without taking the whole cluster down? The IP of the
new node will be different. The Solr version is 5.1.0.

I cannot take downtime; continuous indexing and querying is happening. I
know how to do it by manually editing state.json for all collections, but I
think it's unsafe to do that while the cluster is up, and it might create
inconsistency.




--
View this message in context: 
http://lucene.472066.n3.nabble.com/How-to-replace-a-solr-cloud-node-tp4287556.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: SOLR Suggester | CFQ Variable | - not supported

2016-07-17 Thread Rajesh Kapur
Hi,

Thanks for the reply.

Yes, I tried searching with CFQ=abc\-def and also with "abc-def", but
no luck.

Thanks.

On Sun, Jul 17, 2016 at 9:19 PM, Erick Erickson 
wrote:

> What have you tried? Did you try escaping it?
>
> Best,
> Erick
>
> On Sun, Jul 17, 2016 at 7:32 AM, Rajesh Kapur 
> wrote:
> > Hi
> >
> > I am still facing this issue. Anyone can help me in this.
> >
> > Thanks
> >
> > On 14-Jul-2016 1:15 PM, "Rajesh Kapur"  wrote:
> >
> >> Hi,
> >>
> >> I am facing issue while implementing suggester for my project where I am
> >> passing CFQ value having - in between, but it is not giving me desired
> >> output.
> >>
> >> Could you please let me know what should I do so that SOLR suggester
> >> starts accepting - in CFQ parameters.
> >>
> >> Thanks
> >> Rajesh Kapur
> >>
>


Re: SOLR Suggester | CFQ Variable | - not supported

2016-07-17 Thread Erick Erickson
What have you tried? Did you try escaping it?

Best,
Erick
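For reference, SolrJ's ClientUtils.escapeQueryChars helper escapes Lucene's query syntax characters, including the hyphen. A self-contained sketch of equivalent escaping logic; the character list here is an assumption based on Lucene's documented special characters:

```java
// Hedged sketch of query-character escaping in the spirit of SolrJ's
// ClientUtils.escapeQueryChars. The set of escaped characters below is
// an assumption mirroring Lucene's documented special characters.
public class QueryEscape {
    private static final String SPECIAL = "\\+-!():^[]\"{}~*?|&;/";

    static String escape(String s) {
        StringBuilder sb = new StringBuilder();
        for (char c : s.toCharArray()) {
            // Prefix each special character (and whitespace) with a backslash.
            if (SPECIAL.indexOf(c) >= 0 || Character.isWhitespace(c)) {
                sb.append('\\');
            }
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(escape("abc-def")); // abc\-def
    }
}
```

Whether the suggester's cfq parameter honors this escaping depends on the field analysis behind the context field, so it may be worth checking how the hyphen is tokenized there as well.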

On Sun, Jul 17, 2016 at 7:32 AM, Rajesh Kapur  wrote:
> Hi
>
> I am still facing this issue. Anyone can help me in this.
>
> Thanks
>
> On 14-Jul-2016 1:15 PM, "Rajesh Kapur"  wrote:
>
>> Hi,
>>
>> I am facing issue while implementing suggester for my project where I am
>> passing CFQ value having - in between, but it is not giving me desired
>> output.
>>
>> Could you please let me know what should I do so that SOLR suggester
>> starts accepting - in CFQ parameters.
>>
>> Thanks
>> Rajesh Kapur
>>


Re: SOLR Suggester | CFQ Variable | - not supported

2016-07-17 Thread Rajesh Kapur
Hi

I am still facing this issue. Can anyone help me with this?

Thanks

On 14-Jul-2016 1:15 PM, "Rajesh Kapur"  wrote:

> Hi,
>
> I am facing issue while implementing suggester for my project where I am
> passing CFQ value having - in between, but it is not giving me desired
> output.
>
> Could you please let me know what should I do so that SOLR suggester
> starts accepting - in CFQ parameters.
>
> Thanks
> Rajesh Kapur
>


Solr Cloud - how to implement local indexing without SSL and distributed search with SSL

2016-07-17 Thread Sarit Weber
Hi all,
 
We are currently using Solr 6.0.0 with Solr Cloud and SSL, where:
1. Collections are defined with router.name=implicit.
2. Collections are built with one shard per machine.
3. Data from a specific machine is indexed on that machine's shard, and
documents keep arriving continuously.
4. Search is distributed: from each machine we want to search the entire
collection, using the Solr Cloud API.

We noticed that indexing is much faster without SSL, but we cannot remove
it from distributed search alone.

Because we force indexing to be local (each machine indexes its own data),
we were wondering if there is an option to disable SSL for indexing while
keeping it for searching.
The solution must keep indexing local, without calling the remote ZooKeeper.

Is there any way to achieve this with Solr Cloud?

Thanks,
Sarit Weber 




