[jira] [Commented] (SOLR-5971) 'Illegal character in query' when proxying request

2014-04-09 Thread Eric Bus (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13963968#comment-13963968
 ] 

Eric Bus commented on SOLR-5971:


Unfortunately, that does not change the error. After encoding the braces, the 
same error is reported on the node without the replica. The results on the 
other nodes are the same.
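
For reference, a minimal sketch of the kind of encoding that was tried (this 
example encodes the whole facet.field value with the stock JDK URLEncoder, 
which also covers the braces; the actual client code that builds the request 
is not part of this report):

import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class EncodeFacetParam {
    public static void main(String[] args) throws UnsupportedEncodingException {
        // Raw local-params prefix plus field name, as sent in facet.field
        String rawValue = "{!ex=filters,filter1340 key=facet1340Values}string_months_month";
        // Percent-encode the whole parameter value, including '{' and '}'
        String encoded = URLEncoder.encode(rawValue, "UTF-8");
        System.out.println("facet.field=" + encoded);
        // Prints: facet.field=%7B%21ex%3Dfilters%2Cfilter1340+key%3Dfacet1340Values%7Dstring_months_month
    }
}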

> 'Illegal character in query' when proxying request
> --
>
> Key: SOLR-5971
> URL: https://issues.apache.org/jira/browse/SOLR-5971
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.7.1
> Environment: Debian Wheezy, Java(TM) SE Runtime Environment (build 
> 1.6.0_26-b03)
>Reporter: Eric Bus
>  Labels: characters, exception, invalid, proxy, query, solrcloud
>
> My cluster contains 3 Solr instances. I have a collection consisting of one 
> shard with 2 replicas. So one node in the cluster does not have a replica 
> of the shard.
> The following query works when I query one of the two replica nodes:
> http://X.X.X.X:8080/solr/collection/select/?facet=true&facet.field={!ex%3Dfilters,filter1340+key%3Dfacet1340Values}string_months_month&facet=true&q=*:*
> But when I query the node without the replica, I get:
> {msg=Illegal character in query at index 78: 
> http://X.X.X.X:8080/solr/collection/select/?facet=true&facet.field={!ex%3Dfilters,filter1340+key%3Dfacet1340Values}string_months_month&facet=true&q=*:*,trace=java.lang.IllegalArgumentException
>   at java.net.URI.create(URI.java:842)
>   at org.apache.http.client.methods.HttpGet.(HttpGet.java:69)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.remoteQuery(SolrDispatchFilter.java:527)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
>   at org.eclipse.jetty.server.Server.handle(Server.java:368)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
>   at 
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
>   at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
>   at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
>   at 
> org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
>   at 
> org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
>   at java.lang.Thread.run(Thread.java:662)
> Without the facet.field attribute, it works fine on all the nodes.
> Is this some kind of double escaping when proxying the request?



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5971) 'Illegal character in query' when proxying request

2014-04-08 Thread Eric Bus (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Bus updated SOLR-5971:
---

Description: 
My cluster contains 3 Solr instances. I have a collection consisting of one 
shard with 2 replicas. So one node in the cluster does not have a replica of 
the shard.

The following query works when I query one of the two replica nodes:

http://X.X.X.X:8080/solr/collection/select/?facet=true&facet.field={!ex%3Dfilters,filter1340+key%3Dfacet1340Values}string_months_month&facet=true&q=*:*

But when I query the node without the replica, I get:

{msg=Illegal character in query at index 78: 
http://10.40.0.13:8080/solr/bakker_hillegom_nl/select/?facet=true&facet.field={!ex%3Dfilters,filter1340+key%3Dfacet1340Values}string_months_month&facet=true&q=*:*,trace=java.lang.IllegalArgumentException
at java.net.URI.create(URI.java:842)
at org.apache.http.client.methods.HttpGet.(HttpGet.java:69)
at 
org.apache.solr.servlet.SolrDispatchFilter.remoteQuery(SolrDispatchFilter.java:527)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at 
org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at 
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:662)

Without the facet.field attribute, it works fine on all the nodes.
Is this some kind of double escaping when proxying the request?

  was:
My cluster contains 3 Solr instances. I have a collection consisting of one 
shard with 2 replicas. So one node in the cluster does not have a replica of 
the shard.

The following query works when I query one of the two replica nodes:

http://X.X.X.X:8080/solr/collection/select/?facet=true&facet.field={!ex%3Dfilters,filter1340+key%3Dfacet1340Values}string_months_month&facet=true&q=*:*

But when I query the node without the replica, I get:

{quote}
msg=Illegal character in query at index 78: 
http://10.40.0.13:8080/solr/bakker_hillegom_nl/select/?facet=true&facet.field={!ex%3Dfilters,filter1340+key%3Dfacet1340Values}string_months_month&facet=true&q=*:*,trace=java.lang.IllegalArgumentException
at java.net.URI.create(URI.java:842)
at org.apache.http.client.methods.HttpGet.(HttpGet.java:69)
at 
org.apache.solr.servlet.SolrDispatchFilter.remoteQuery(SolrDispatchFilter.java:527)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.eclipse.jetty.servlet.

[jira] [Updated] (SOLR-5971) 'Illegal character in query' when proxying request

2014-04-08 Thread Eric Bus (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Bus updated SOLR-5971:
---

Description: 
My cluster contains 3 Solr instances. I have a collection consisting of one 
shard with 2 replicas. So one node in the cluster does not have a replica of 
the shard.

The following query works when I query one of the two replica nodes:

http://X.X.X.X:8080/solr/collection/select/?facet=true&facet.field={!ex%3Dfilters,filter1340+key%3Dfacet1340Values}string_months_month&facet=true&q=*:*

But when I query the node without the replica, I get:

{msg=Illegal character in query at index 78: 
http://X.X.X.X:8080/solr/collection/select/?facet=true&facet.field={!ex%3Dfilters,filter1340+key%3Dfacet1340Values}string_months_month&facet=true&q=*:*,trace=java.lang.IllegalArgumentException
at java.net.URI.create(URI.java:842)
at org.apache.http.client.methods.HttpGet.(HttpGet.java:69)
at 
org.apache.solr.servlet.SolrDispatchFilter.remoteQuery(SolrDispatchFilter.java:527)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at 
org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at 
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:662)

Without the facet.field attribute, it works fine on all the nodes.
Is this some kind of double escaping when proxying the request?

  was:
My cluster contains 3 Solr instances. I have a collection consisting of one 
shard with 2 replicas. So one node in the cluster does not have a replica of 
the shard.

The following query works when I query one of the two replica nodes:

http://X.X.X.X:8080/solr/collection/select/?facet=true&facet.field={!ex%3Dfilters,filter1340+key%3Dfacet1340Values}string_months_month&facet=true&q=*:*

But when I query the node without the replica, I get:

{msg=Illegal character in query at index 78: 
http://10.40.0.13:8080/solr/bakker_hillegom_nl/select/?facet=true&facet.field={!ex%3Dfilters,filter1340+key%3Dfacet1340Values}string_months_month&facet=true&q=*:*,trace=java.lang.IllegalArgumentException
at java.net.URI.create(URI.java:842)
at org.apache.http.client.methods.HttpGet.(HttpGet.java:69)
at 
org.apache.solr.servlet.SolrDispatchFilter.remoteQuery(SolrDispatchFilter.java:527)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.eclipse.jetty.servlet.ServletHandler.doH

[jira] [Updated] (SOLR-5971) 'Illegal character in query' when proxying request

2014-04-08 Thread Eric Bus (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Bus updated SOLR-5971:
---

Description: 
My cluster contains 3 Solr instances. I have a collection consisting of one 
shard with 2 replicas. So one node in the cluster does not have a replica of 
the shard.

The following query works when I query one of the two replica nodes:

http://X.X.X.X:8080/solr/collection/select/?facet=true&facet.field={!ex%3Dfilters,filter1340+key%3Dfacet1340Values}string_months_month&facet=true&q=*:*

But when I query the node without the replica, I get:

{quote}
msg=Illegal character in query at index 78: 
http://10.40.0.13:8080/solr/bakker_hillegom_nl/select/?facet=true&facet.field={!ex%3Dfilters,filter1340+key%3Dfacet1340Values}string_months_month&facet=true&q=*:*,trace=java.lang.IllegalArgumentException
at java.net.URI.create(URI.java:842)
at org.apache.http.client.methods.HttpGet.(HttpGet.java:69)
at 
org.apache.solr.servlet.SolrDispatchFilter.remoteQuery(SolrDispatchFilter.java:527)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at 
org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at 
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:662)
{quote}

Without the facet.field attribute, it works fine on all the nodes.
Is this some kind of double escaping when proxying the request?

  was:
My cluster contains 3 Solr instances. I have a collection consisting of one 
shard with 2 replicas. So one node in the cluster does not have a replica of 
the shard.

The following query works when I query one of the two replica nodes:

http://X.X.X.X:8080/solr/collection/select/?facet=true&facet.field={!ex%3Dfilters,filter1340+key%3Dfacet1340Values}string_months_month&facet=true&q=*:*

But when I query the node without the replica, I get:

{quote}
{msg=Illegal character in query at index 78: 
http://10.40.0.13:8080/solr/bakker_hillegom_nl/select/?facet=true&facet.field={!ex%3Dfilters,filter1340+key%3Dfacet1340Values}string_months_month&facet=true&q=*:*,trace=java.lang.IllegalArgumentException
at java.net.URI.create(URI.java:842)
at org.apache.http.client.methods.HttpGet.(HttpGet.java:69)
at 
org.apache.solr.servlet.SolrDispatchFilter.remoteQuery(SolrDispatchFilter.java:527)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.ec

[jira] [Created] (SOLR-5971) 'Illegal character in query' when proxying request

2014-04-08 Thread Eric Bus (JIRA)
Eric Bus created SOLR-5971:
--

 Summary: 'Illegal character in query' when proxying request
 Key: SOLR-5971
 URL: https://issues.apache.org/jira/browse/SOLR-5971
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.7.1
 Environment: Debian Wheezy, Java(TM) SE Runtime Environment (build 
1.6.0_26-b03)

Reporter: Eric Bus


My cluster contains 3 Solr instances. I have a collection consisting of one 
shard with 2 replicas. So one node in the cluster does not have a replica of 
the shard.

The following query works when I query one of the two replica nodes:

http://X.X.X.X:8080/solr/collection/select/?facet=true&facet.field={!ex%3Dfilters,filter1340+key%3Dfacet1340Values}string_months_month&facet=true&q=*:*

But when I query the node without the replica, I get:

{quote}
{msg=Illegal character in query at index 78: 
http://10.40.0.13:8080/solr/bakker_hillegom_nl/select/?facet=true&facet.field={!ex%3Dfilters,filter1340+key%3Dfacet1340Values}string_months_month&facet=true&q=*:*,trace=java.lang.IllegalArgumentException
at java.net.URI.create(URI.java:842)
at org.apache.http.client.methods.HttpGet.(HttpGet.java:69)
at 
org.apache.solr.servlet.SolrDispatchFilter.remoteQuery(SolrDispatchFilter.java:527)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at 
org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at 
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:662)
{quote}

Without the facet.field attribute, it works fine on all the nodes.
Is this some kind of double escaping when proxying the request?
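
For what it's worth, the trace suggests the proxying node passes the raw query 
string to HttpGet, whose constructor runs it through java.net.URI, and URI 
rejects an unencoded '{' in the query part. A minimal sketch that reproduces 
only that java.net.URI behaviour (host, port and parameters are placeholders; 
this is not the SolrDispatchFilter code itself):

import java.net.URI;

public class UriBraceDemo {
    public static void main(String[] args) {
        // Same shape as the proxied request above: the '{' of the local params is not encoded.
        String url = "http://127.0.0.1:8080/solr/collection/select/"
                   + "?facet=true&facet.field={!ex%3Dfilters}string_months_month&q=*:*";
        try {
            URI.create(url); // applies RFC 2396 syntax checks on the query part
        } catch (IllegalArgumentException e) {
            // Prints something like: Illegal character in query at index ...
            System.out.println(e.getMessage());
        }
    }
}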



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5920) Sorting on date field returns string cast exception

2014-03-27 Thread Eric Bus (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Bus updated SOLR-5920:
---

Description: 
After upgrading to 4.7, sorting on a date field returns the following trace:

{quote}


500 7
java.lang.String cannot be cast to org.apache.lucene.util.BytesRef
java.lang.ClassCastException: java.lang.String cannot be cast to org.apache.lucene.util.BytesRef
at 
org.apache.lucene.search.FieldComparator$TermOrdValComparator.compareValues(FieldComparator.java:940)
at 
org.apache.solr.handler.component.ShardFieldSortedHitQueue$2.compare(ShardDoc.java:245)
at 
org.apache.solr.handler.component.ShardFieldSortedHitQueue$2.compare(ShardDoc.java:237)
at 
org.apache.solr.handler.component.ShardFieldSortedHitQueue.lessThan(ShardDoc.java:162)
at 
org.apache.solr.handler.component.ShardFieldSortedHitQueue.lessThan(ShardDoc.java:104)
at 
org.apache.lucene.util.PriorityQueue.insertWithOverflow(PriorityQueue.java:159)
at 
org.apache.solr.handler.component.QueryComponent.mergeIds(QueryComponent.java:909)
at 
org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:661)
at 
org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:640)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:321)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1916)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:780)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:427)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at 
org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at 
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:662)
500

{quote}

The date field is specified as:

{quote}{quote}

And it's used as a dynamic field:

{quote}{quote}

Nothing in this configuration has changed since 4.6.1.

Sorting on other values, like integers and text, works fine. Only date fields 
are a problem.

  was:
After upgrading to 4.7, sorting on a date field returns the following trace:

{{

500 7
java.lang.String cannot be cast to org.apache.lucene.util.BytesRef
java.lang.ClassCastException: java.lang.String cannot be cast to org.apache.lucene.util.BytesRef
at 
org.apache.lucene.search.FieldComparator$TermOrdValComparator.compareValues(FieldComparator.java:940)
at 
org.apache.solr.handler.component.ShardFieldSortedHitQueue$2.compare(ShardDoc.java:245)
at 
org.apache.s

[jira] [Updated] (SOLR-5920) Sorting on date field returns string cast exception

2014-03-27 Thread Eric Bus (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Bus updated SOLR-5920:
---

Description: 
After upgrading to 4.7, sorting on a date field returns the following trace:

{{

500 7
java.lang.String cannot be cast to org.apache.lucene.util.BytesRef
java.lang.ClassCastException: java.lang.String cannot be cast to org.apache.lucene.util.BytesRef
at 
org.apache.lucene.search.FieldComparator$TermOrdValComparator.compareValues(FieldComparator.java:940)
at 
org.apache.solr.handler.component.ShardFieldSortedHitQueue$2.compare(ShardDoc.java:245)
at 
org.apache.solr.handler.component.ShardFieldSortedHitQueue$2.compare(ShardDoc.java:237)
at 
org.apache.solr.handler.component.ShardFieldSortedHitQueue.lessThan(ShardDoc.java:162)
at 
org.apache.solr.handler.component.ShardFieldSortedHitQueue.lessThan(ShardDoc.java:104)
at 
org.apache.lucene.util.PriorityQueue.insertWithOverflow(PriorityQueue.java:159)
at 
org.apache.solr.handler.component.QueryComponent.mergeIds(QueryComponent.java:909)
at 
org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:661)
at 
org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:640)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:321)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1916)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:780)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:427)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at 
org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at 
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:662)
500
}}

The date field is specified as:

{{}}

And it's used as a dynamic field:

{{   }}

Nothing in this configuration has changed since 4.6.1.

Sorting on other values, like integers and text, works fine. Only date fields 
are a problem.

  was:
After upgrading to 4.7, sorting on a date field returns the following trace:



500 7
java.lang.String cannot be cast to org.apache.lucene.util.BytesRef
java.lang.ClassCastException: java.lang.String cannot be cast to org.apache.lucene.util.BytesRef
at 
org.apache.lucene.search.FieldComparator$TermOrdValComparator.compareValues(FieldComparator.java:940)
at 
org.apache.solr.handler.component.ShardFieldSortedHitQueue$2.compare(ShardDoc.java:245)
at 
org.apache.solr.handler.component.Shard

[jira] [Created] (SOLR-5920) Sorting on date field returns string cast exception

2014-03-27 Thread Eric Bus (JIRA)
Eric Bus created SOLR-5920:
--

 Summary: Sorting on date field returns string cast exception
 Key: SOLR-5920
 URL: https://issues.apache.org/jira/browse/SOLR-5920
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.7
 Environment: Debian, Java JVM 1.6.0_26
Reporter: Eric Bus


After upgrading to 4.7, sorting on a date field returns the following trace:



500 7
java.lang.String cannot be cast to org.apache.lucene.util.BytesRef
java.lang.ClassCastException: java.lang.String cannot be cast to org.apache.lucene.util.BytesRef
at 
org.apache.lucene.search.FieldComparator$TermOrdValComparator.compareValues(FieldComparator.java:940)
at 
org.apache.solr.handler.component.ShardFieldSortedHitQueue$2.compare(ShardDoc.java:245)
at 
org.apache.solr.handler.component.ShardFieldSortedHitQueue$2.compare(ShardDoc.java:237)
at 
org.apache.solr.handler.component.ShardFieldSortedHitQueue.lessThan(ShardDoc.java:162)
at 
org.apache.solr.handler.component.ShardFieldSortedHitQueue.lessThan(ShardDoc.java:104)
at 
org.apache.lucene.util.PriorityQueue.insertWithOverflow(PriorityQueue.java:159)
at 
org.apache.solr.handler.component.QueryComponent.mergeIds(QueryComponent.java:909)
at 
org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:661)
at 
org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:640)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:321)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1916)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:780)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:427)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at 
org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at 
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:662)
500


The date field is specified as:



And it's used as a dynamic field:

   

Nothing in this configuration has changed since 4.6.1.

Sorting on other values, like integers and text, works fine. Only date fields 
are a problem.
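
For context, a hedged SolrJ sketch of the kind of request that hits this trace; 
the base URL and the created_date field name are illustrative stand-ins, not 
the actual schema, and a multi-shard collection is assumed so the failing merge 
step (QueryComponent.mergeIds) is actually exercised:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class DateSortCheck {
    public static void main(String[] args) throws SolrServerException {
        // Hypothetical collection URL; adjust to the real cluster.
        HttpSolrServer solr = new HttpSolrServer("http://127.0.0.1:8080/solr/collection");
        SolrQuery q = new SolrQuery("*:*");
        // Sorting on a dynamic date field in a distributed query is the path that fails:
        // the per-shard sort values appear to reach the merge comparator as Strings, not BytesRef.
        q.setSort("created_date", SolrQuery.ORDER.desc);
        QueryResponse rsp = solr.query(q);
        System.out.println("numFound=" + rsp.getResults().getNumFound());
    }
}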



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2599) CloneFieldUpdateProcessor (copyField-equse equivilent)

2014-03-05 Thread Eric Bus (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13921011#comment-13921011
 ] 

Eric Bus commented on SOLR-2599:


Is there a specific reason why dest has to be a fixed fieldname? I would like 
to migrate our copyField settings to this 'new' processor, but that would 
require wildcards for both source and destination.  For example:



As far as I can see, this processor does not support this behaviour. Is that correct?

> CloneFieldUpdateProcessor (copyField-equse equivilent)
> --
>
> Key: SOLR-2599
> URL: https://issues.apache.org/jira/browse/SOLR-2599
> Project: Solr
>  Issue Type: New Feature
>  Components: update
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 4.0-ALPHA
>
> Attachments: SOLR-2599-hoss.patch, SOLR-2599.patch, SOLR-2599.patch
>
>
> Need an UpdateProcessor which can copy and move fields



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5379) Query-time multi-word synonym expansion

2014-02-05 Thread Eric Bus (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13892010#comment-13892010
 ] 

Eric Bus commented on SOLR-5379:


Has anyone modified this patch to work on 4.6.1? I tried to do a manual merge 
of the second patch, but a lot has changed in the SolrQueryParserBase.java 
file.

> Query-time multi-word synonym expansion
> ---
>
> Key: SOLR-5379
> URL: https://issues.apache.org/jira/browse/SOLR-5379
> Project: Solr
>  Issue Type: Improvement
>  Components: query parsers
>Reporter: Tien Nguyen Manh
>  Labels: multi-word, queryparser, synonym
> Fix For: 4.7
>
> Attachments: quoted.patch, synonym-expander.patch
>
>
> While dealing with synonyms at query time, Solr fails to handle multi-word 
> synonyms for a couple of reasons:
> - First, the Lucene query parser tokenizes the user query on whitespace, so it 
> splits a multi-word term into separate terms before feeding them to the synonym 
> filter; the synonym filter therefore cannot recognize the multi-word term and 
> expand it.
> - Second, if the synonym filter expands into multiple terms that contain a 
> multi-word synonym, SolrQueryParserBase currently uses MultiPhraseQuery to 
> handle synonyms, but MultiPhraseQuery does not work with terms that have a 
> different number of words.
> For the first problem, we can quote all multi-word synonyms in the user query 
> so that the Lucene query parser does not split them. There is a related JIRA 
> issue: https://issues.apache.org/jira/browse/LUCENE-2605.
> For the second, we can replace MultiPhraseQuery with an appropriate BooleanQuery 
> of SHOULD clauses containing multiple PhraseQueries whenever the token stream 
> has a multi-word synonym.
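
A hedged Lucene sketch of the rewrite proposed in the last paragraph (the field 
name and the "new york" / "ny" synonym pair are illustrative; the pre-5.x 
mutable query API that was current at the time is used):

import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.PhraseQuery;

public class SynonymQuerySketch {
    public static void main(String[] args) {
        // One PhraseQuery per synonym variant; the variants may have different lengths.
        PhraseQuery full = new PhraseQuery();
        full.add(new Term("body", "new"));
        full.add(new Term("body", "york"));

        PhraseQuery abbrev = new PhraseQuery();
        abbrev.add(new Term("body", "ny"));

        // SHOULD semantics: a document matching either variant matches the combined clause,
        // which is what a single MultiPhraseQuery cannot express for variable-length synonyms.
        BooleanQuery combined = new BooleanQuery();
        combined.add(full, BooleanClause.Occur.SHOULD);
        combined.add(abbrev, BooleanClause.Occur.SHOULD);

        System.out.println(combined); // body:"new york" body:ny
    }
}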



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5579) Leader stops processing collection-work-queue after failed collection reload

2014-01-09 Thread Eric Bus (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13866696#comment-13866696
 ] 

Eric Bus commented on SOLR-5579:


Just a quick update: the leader stopped working again, and I had to restart the 
cluster to get everything working. The script I run to check the status did not 
work, so unfortunately I don't have additional information from the logs. When 
I do, I'll report back here.

> Leader stops processing collection-work-queue after failed collection reload
> 
>
> Key: SOLR-5579
> URL: https://issues.apache.org/jira/browse/SOLR-5579
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.5.1
> Environment: Debian Linux 6.0 running on VMWare
> Using embedded SOLR Jetty.
>Reporter: Eric Bus
>Assignee: Mark Miller
>  Labels: collections, queue
>
> I've been experiencing the same problem a few times now. My leader in 
> /overseer_elect/leader stops processing the collection queue at 
> /overseer/collection-queue-work. The queue will build up and it will trigger 
> an alert in my monitoring tool.
> I haven't been able to pinpoint the reason that the leader stops, but usually 
> I kill the leader node to trigger a leader election. The new node will pick 
> up the queue. And this is where the problems start.
> When the new leader is processing the queue and picks up a reload for a shard 
> without an active leader, the queue stops. It keeps repeating the message 
> that there is no active leader for the shard. But a new leader is never 
> elected:
> {quote}
> ERROR - 2013-12-24 14:43:40.390; org.apache.solr.common.SolrException; Error 
> while trying to recover. 
> core=magento_349_shard1_replica1:org.apache.solr.common.SolrException: No 
> registered leader was found, collection:magento_349 slice:shard1
> at 
> org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:482)
> at 
> org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:465)
> at 
> org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:317)
> at 
> org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:219)
> ERROR - 2013-12-24 14:43:40.391; org.apache.solr.cloud.RecoveryStrategy; 
> Recovery failed - trying again... (7) core=magento_349_shard1_replica1
> INFO  - 2013-12-24 14:43:40.391; org.apache.solr.cloud.RecoveryStrategy; Wait 
> 256.0 seconds before trying to recover again (8)
> {quote}
> Is the leader election in some way connected to the collection queue? If so, 
> can this be a deadlock, because it won't elect until the reload is complete?



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5579) Leader stops processing collection-work-queue after failed collection reload

2013-12-24 Thread Eric Bus (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Bus updated SOLR-5579:
---

Description: 
I've been experiencing the same problem a few times now. My leader in 
/overseer_elect/leader stops processing the collection queue at 
/overseer/collection-queue-work. The queue will build up and it will trigger an 
alert in my monitoring tool.

I haven't been able to pinpoint the reason that the leader stops, but usually I 
kill the leader node to trigger a leader election. The new node will pick up 
the queue. And this is where the problems start.

When the new leader is processing the queue and picks up a reload for a shard 
without an active leader, the queue stops. It keeps repeating the message that 
there is no active leader for the shard. But a new leader is never elected:

{quote}
ERROR - 2013-12-24 14:43:40.390; org.apache.solr.common.SolrException; Error 
while trying to recover. 
core=magento_349_shard1_replica1:org.apache.solr.common.SolrException: No 
registered leader was found, collection:magento_349 slice:shard1
at 
org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:482)
at 
org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:465)
at 
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:317)
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:219)

ERROR - 2013-12-24 14:43:40.391; org.apache.solr.cloud.RecoveryStrategy; 
Recovery failed - trying again... (7) core=magento_349_shard1_replica1
INFO  - 2013-12-24 14:43:40.391; org.apache.solr.cloud.RecoveryStrategy; Wait 
256.0 seconds before trying to recover again (8)
{quote}

Is the leader election in some way connected to the collection queue? If so, 
can this be a deadlock, because it won't elect until the reload is complete?


  was:
I've been experiencing the same problem a few times now. My leader in 
/overseer_elect/leader stops processing the collection queue at 
/overseer/collection-queue-work. The queue will build up and it will trigger an 
alert in my monitoring tool.

I haven't been able to pinpoint the reason that the leader stops, but usually I 
kill the leader node to trigger a leader election. The new node will pick up 
the queue. And this is where the problems start.

When the new leader is processing the queue and picks up a reload for a shard 
without an active leader, the queue stops. It keeps repeating the message that 
there is no active leader for the shard. But a new leader is never elected:

{quote}
ERROR - 2013-12-24 14:43:40.390; org.apache.solr.common.SolrException; Error 
while trying to recover. 
core=magento_349_shard1_replica1:org.apache.solr.common.SolrException: No 
registered leader was found, collection:magento_349 slice:shard1
at 
org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:482)
at 
org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:465)
at 
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:317)
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:219)

ERROR - 2013-12-24 14:43:40.391; org.apache.solr.cloud.RecoveryStrategy; 
Recovery failed - trying again... (7) core=magento_349_shard1_replica1
INFO  - 2013-12-24 14:43:40.391; org.apache.solr.cloud.RecoveryStrategy; Wait 
256.0 seconds before trying to recover again (8)
{quote}

Is the leader election in some way connected to the collection queue? If so, 
can this be a deadlock, because it won't elect until the reload is complete?



> Leader stops processing collection-work-queue after failed collection reload
> 
>
> Key: SOLR-5579
> URL: https://issues.apache.org/jira/browse/SOLR-5579
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.5.1
> Environment: Debian Linux 6.0 running on VMWare
> Using embedded SOLR Jetty.
>Reporter: Eric Bus
>  Labels: collections, queue
>
> I've been experiencing the same problem a few times now. My leader in 
> /overseer_elect/leader stops processing the collection queue at 
> /overseer/collection-queue-work. The queue will build up and it will trigger 
> an alert in my monitoring tool.
> I haven't been able to pinpoint the reason that the leader stops, but usually 
> I kill the leader node to trigger a leader election. The new node will pick 
> up the queue. And this is where the problems start.
> When the new leader is processing the queue and picks up a reload for a shard 
> without an active leader, the queue stops. It keeps repeating the message 
> that there is no active leader for the shard. But a new leader is never 
> elected:
> {quote}
> ERROR - 2013-12-24 14:43:40.390; org.apache.solr.common.SolrException

[jira] [Created] (SOLR-5579) Leader stops processing collection-work-queue after failed collection reload

2013-12-24 Thread Eric Bus (JIRA)
Eric Bus created SOLR-5579:
--

 Summary: Leader stops processing collection-work-queue after 
failed collection reload
 Key: SOLR-5579
 URL: https://issues.apache.org/jira/browse/SOLR-5579
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.5.1
 Environment: Debian Linux 6.0 running on VMWare
Using embedded SOLR Jetty.
Reporter: Eric Bus


I've been experiencing the same problem a few times now. My leader in 
/overseer_elect/leader stops processing the collection queue at 
/overseer/collection-queue-work. The queue will build up and it will trigger an 
alert in my monitoring tool.

I haven't been able to pinpoint the reason that the leader stops, but usually I 
kill the leader node to trigger a leader election. The new node will pick up 
the queue. And this is where the problems start.

When the new leader is processing the queue and picks up a reload for a shard 
without an active leader, the queue stops. It keeps repeating the message that 
there is no active leader for the shard. But a new leader is never elected:

{quote}
ERROR - 2013-12-24 14:43:40.390; org.apache.solr.common.SolrException; Error 
while trying to recover. 
core=magento_349_shard1_replica1:org.apache.solr.common.SolrException: No 
registered leader was found, collection:magento_349 slice:shard1
at 
org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:482)
at 
org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:465)
at 
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:317)
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:219)

ERROR - 2013-12-24 14:43:40.391; org.apache.solr.cloud.RecoveryStrategy; 
Recovery failed - trying again... (7) core=magento_349_shard1_replica1
INFO  - 2013-12-24 14:43:40.391; org.apache.solr.cloud.RecoveryStrategy; Wait 
256.0 seconds before trying to recover again (8)
{quote}

Is the leader election in some way connected to the collection queue? If so, 
can this be a deadlock, because it won't elect until the reload is complete?
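
For reference, a hedged sketch of the kind of check the monitoring alert is 
based on: it simply counts the children of the Overseer's collection work queue 
znode mentioned above. The ZooKeeper connect string is a placeholder for the 
real ensemble.

import java.util.List;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class CollectionQueueDepth {
    public static void main(String[] args) throws Exception {
        // Placeholder ensemble address; adjust to the real cluster.
        ZooKeeper zk = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181", 15000, new Watcher() {
            @Override
            public void process(WatchedEvent event) {
                // no-op: only a synchronous read is needed
            }
        });
        // A steadily growing child count here is the symptom described above.
        List<String> pending = zk.getChildren("/overseer/collection-queue-work", false);
        System.out.println("pending collection-queue-work items: " + pending.size());
        zk.close();
    }
}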




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-5427) SolrCloud leaking (many) filehandles to deleted files

2013-11-11 Thread Eric Bus (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Bus closed SOLR-5427.
--

Resolution: Not A Problem

This problem seems to be related to running SOLR inside a Tomcat server. I 
switched to the bundled Jetty, and the problems are gone: no open handles to 
deleted files after running the server for about 2 days. Normally, the first 
leaked file handles would appear within a few minutes or hours.

> SolrCloud leaking (many) filehandles to deleted files
> -
>
> Key: SOLR-5427
> URL: https://issues.apache.org/jira/browse/SOLR-5427
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.3, 4.4, 4.5
> Environment: Debian Linux 6.0 running on VMWare
> Tomcat 6
>Reporter: Eric Bus
>
> I'm running SolrCloud on three nodes. I've been experiencing strange problems 
> on these nodes. The main problem is that my disk is filling up, because old 
> tlog files are not being released by SOLR.
> I suspect this problem is caused by a lot of open connections between the 
> nodes in CLOSE_WAIT status. After running a node for only 2 days, the node 
> already has 33 connections and about 11,000 deleted files that are still open.
> I'm running about 100 cores on each node. Could this be contributing to the 
> rate at which things go wrong? I suspect that on a setup with only 1 
> collection and 3 shards, the problem stays hidden for quite some time.
> lsof -p 15452 -n | grep -i tcp | grep CLOSE_WAIT
> java15452 root   45u  IPv6   706925770t0  TCP 
> 11.1.0.12:46533->11.1.0.13:http-alt (CLOSE_WAIT)
> java15452 root   48u  IPv6   706925790t0  TCP 
> 11.1.0.12:46535->11.1.0.13:http-alt (CLOSE_WAIT)
> java15452 root  205u  IPv6   727594340t0  TCP 
> 11.1.0.12:41744->11.1.0.12:http-alt (CLOSE_WAIT)
> java15452 root  378u  IPv6   723591150t0  TCP 
> 11.1.0.12:44767->11.1.0.11:http-alt (CLOSE_WAIT)
> java15452 root  381u  IPv6   723591160t0  TCP 
> 11.1.0.12:44768->11.1.0.11:http-alt (CLOSE_WAIT)
> java15452 root 5252u  IPv6   727594450t0  TCP 
> 11.1.0.12:41751->11.1.0.12:http-alt (CLOSE_WAIT)
> java15452 root 6193u  IPv6   740216510t0  TCP 
> 11.1.0.12:39170->11.1.0.11:http-alt (CLOSE_WAIT)
> java15452 root *150u  IPv6   740216480t0  TCP 
> 11.1.0.12:53865->11.1.0.13:http-alt (CLOSE_WAIT)
> java15452 root *152u  IPv6   727594240t0  TCP 
> 11.1.0.12:41737->11.1.0.12:http-alt (CLOSE_WAIT)
> java15452 root *526u  IPv6   740279950t0  TCP 
> 11.1.0.12:53965->11.1.0.13:http-alt (CLOSE_WAIT)
> java15452 root *986u  IPv6   727686370t0  TCP 
> 11.1.0.12:42246->11.1.0.12:http-alt (CLOSE_WAIT)
> java15452 root *626u  IPv6   727499830t0  TCP 
> 11.1.0.12:41297->11.1.0.12:http-alt (CLOSE_WAIT)
> java15452 root *476u  IPv6   727686330t0  TCP 
> 11.1.0.12:42243->11.1.0.12:http-alt (CLOSE_WAIT)
> java15452 root *567u  IPv6   727686220t0  TCP 
> 11.1.0.12:42234->11.1.0.12:http-alt (CLOSE_WAIT)
> java15452 root *732u  IPv6   727685990t0  TCP 
> 11.1.0.12:42230->11.1.0.12:http-alt (CLOSE_WAIT)
> java15452 root *799u  IPv6   727594270t0  TCP 
> 11.1.0.12:41739->11.1.0.12:http-alt (CLOSE_WAIT)
> java15452 root *259u  IPv6   727686260t0  TCP 
> 11.1.0.12:42237->11.1.0.12:http-alt (CLOSE_WAIT)
> java15452 root *272u  IPv6   727689970t0  TCP 
> 11.1.0.12:42263->11.1.0.12:http-alt (CLOSE_WAIT)
> java15452 root *493u  IPv6   727594070t0  TCP 
> 11.1.0.12:41729->11.1.0.12:http-alt (CLOSE_WAIT)
> java15452 root *693u  IPv6   740209090t0  TCP 
> 11.1.0.12:53853->11.1.0.13:http-alt (CLOSE_WAIT)
> java15452 root *740u  IPv6   727499960t0  TCP 
> 11.1.0.12:41306->11.1.0.12:http-alt (CLOSE_WAIT)
> java15452 root *749u  IPv6   739752300t0  TCP 
> 11.1.0.12:38825->11.1.0.11:http-alt (CLOSE_WAIT)
> java15452 root *750u  IPv6   739746190t0  TCP 
> 11.1.0.12:53499->11.1.0.13:http-alt (CLOSE_WAIT)
> java15452 root *771u  IPv6   727594200t0  TCP 
> 11.1.0.12:41734->11.1.0.12:http-alt (CLOSE_WAIT)
> java15452 root *793u  IPv6   727686530t0  TCP 
> 11.1.0.12:42256->11.1.0.12:http-alt (CLOSE_WAIT)
> java15452 root *900u  IPv6   727686180t0  TCP 
> 11.1.0.12:42233->11.1.0.12:http-alt (CLOSE_WAIT)
> java15452 root *045u  IPv6   727664770t0  TCP 
> 11.1.0.12:41181->11.1.0.11:http-alt (CLOSE_WAIT)
> jav

[jira] [Updated] (SOLR-5427) SolrCloud leaking (many) filehandles to deleted files

2013-11-07 Thread Eric Bus (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Bus updated SOLR-5427:
---

Description: 
I'm running SolrCloud on three nodes. I've been experiencing strange problems 
on these nodes. The main problem is that my disk is filling up, because old 
tlog files are not being released by SOLR.

I suspect this problem is caused by a lot of open connections between the nodes 
in CLOSE_WAIT status. After running a node for only 2 days, the node already 
has 33 connections and about 11,000 deleted files that are still open.

I'm running about 100 cores on each node. Could this be contributing to the 
rate at which things go wrong? I suspect that on a setup with only 1 collection 
and 3 shards, the problem stays hidden for quite some time.

lsof -p 15452 -n | grep -i tcp | grep CLOSE_WAIT

java15452 root   45u  IPv6   706925770t0  TCP 
11.1.0.12:46533->11.1.0.13:http-alt (CLOSE_WAIT)
java15452 root   48u  IPv6   706925790t0  TCP 
11.1.0.12:46535->11.1.0.13:http-alt (CLOSE_WAIT)
java15452 root  205u  IPv6   727594340t0  TCP 
11.1.0.12:41744->11.1.0.12:http-alt (CLOSE_WAIT)
java15452 root  378u  IPv6   723591150t0  TCP 
11.1.0.12:44767->11.1.0.11:http-alt (CLOSE_WAIT)
java15452 root  381u  IPv6   723591160t0  TCP 
11.1.0.12:44768->11.1.0.11:http-alt (CLOSE_WAIT)
java15452 root 5252u  IPv6   727594450t0  TCP 
11.1.0.12:41751->11.1.0.12:http-alt (CLOSE_WAIT)
java15452 root 6193u  IPv6   740216510t0  TCP 
11.1.0.12:39170->11.1.0.11:http-alt (CLOSE_WAIT)
java15452 root *150u  IPv6   740216480t0  TCP 
11.1.0.12:53865->11.1.0.13:http-alt (CLOSE_WAIT)
java15452 root *152u  IPv6   727594240t0  TCP 
11.1.0.12:41737->11.1.0.12:http-alt (CLOSE_WAIT)
java15452 root *526u  IPv6   740279950t0  TCP 
11.1.0.12:53965->11.1.0.13:http-alt (CLOSE_WAIT)
java15452 root *986u  IPv6   727686370t0  TCP 
11.1.0.12:42246->11.1.0.12:http-alt (CLOSE_WAIT)
java15452 root *626u  IPv6   727499830t0  TCP 
11.1.0.12:41297->11.1.0.12:http-alt (CLOSE_WAIT)
java15452 root *476u  IPv6   727686330t0  TCP 
11.1.0.12:42243->11.1.0.12:http-alt (CLOSE_WAIT)
java15452 root *567u  IPv6   727686220t0  TCP 
11.1.0.12:42234->11.1.0.12:http-alt (CLOSE_WAIT)
java15452 root *732u  IPv6   727685990t0  TCP 
11.1.0.12:42230->11.1.0.12:http-alt (CLOSE_WAIT)
java15452 root *799u  IPv6   727594270t0  TCP 
11.1.0.12:41739->11.1.0.12:http-alt (CLOSE_WAIT)
java15452 root *259u  IPv6   727686260t0  TCP 
11.1.0.12:42237->11.1.0.12:http-alt (CLOSE_WAIT)
java15452 root *272u  IPv6   727689970t0  TCP 
11.1.0.12:42263->11.1.0.12:http-alt (CLOSE_WAIT)
java15452 root *493u  IPv6   727594070t0  TCP 
11.1.0.12:41729->11.1.0.12:http-alt (CLOSE_WAIT)
java15452 root *693u  IPv6   740209090t0  TCP 
11.1.0.12:53853->11.1.0.13:http-alt (CLOSE_WAIT)
java15452 root *740u  IPv6   727499960t0  TCP 
11.1.0.12:41306->11.1.0.12:http-alt (CLOSE_WAIT)
java15452 root *749u  IPv6   739752300t0  TCP 
11.1.0.12:38825->11.1.0.11:http-alt (CLOSE_WAIT)
java15452 root *750u  IPv6   739746190t0  TCP 
11.1.0.12:53499->11.1.0.13:http-alt (CLOSE_WAIT)
java15452 root *771u  IPv6   727594200t0  TCP 
11.1.0.12:41734->11.1.0.12:http-alt (CLOSE_WAIT)
java15452 root *793u  IPv6   727686530t0  TCP 
11.1.0.12:42256->11.1.0.12:http-alt (CLOSE_WAIT)
java15452 root *900u  IPv6   727686180t0  TCP 
11.1.0.12:42233->11.1.0.12:http-alt (CLOSE_WAIT)
java15452 root *045u  IPv6   727664770t0  TCP 
11.1.0.12:41181->11.1.0.11:http-alt (CLOSE_WAIT)
java15452 root *233u  IPv6   739750350t0  TCP 
11.1.0.12:38812->11.1.0.11:http-alt (CLOSE_WAIT)
java15452 root *476u  IPv6   740254790t0  TCP 
11.1.0.12:39225->11.1.0.11:http-alt (CLOSE_WAIT)
java15452 root *512u  IPv6   740304070t0  TCP 
11.1.0.12:39312->11.1.0.11:http-alt (CLOSE_WAIT)
java15452 root *533u  IPv6   740216490t0  TCP 
11.1.0.12:40102->11.1.0.12:http-alt (CLOSE_WAIT)
java15452 root *716u  IPv6   740208990t0  TCP 
11.1.0.12:53850->11.1.0.13:http-alt (CLOSE_WAIT)
java15452 root *837u  IPv6   739752240t0  TCP 
11.1.0.12:38819->11.1.0.11:http-alt (CLOSE_WAIT)
java15452 root *009u  IPv6   740208940t0  TCP 
11.1.0.12:53849->11.1.0.13:http-alt (CLOSE_WAIT)
java15452 root

[jira] [Created] (SOLR-5427) SolrCloud leaking (many) filehandles to deleted files

2013-11-07 Thread Eric Bus (JIRA)
Eric Bus created SOLR-5427:
--

 Summary: SolrCloud leaking (many) filehandles to deleted files
 Key: SOLR-5427
 URL: https://issues.apache.org/jira/browse/SOLR-5427
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.5, 4.4, 4.3
 Environment: Debian Linux 6.0 running on VMWare
Tomcat 6
Reporter: Eric Bus


I'm running SolrCloud on three nodes. I've been experiencing strange problems 
on these nodes. The main problem is that my disk is filling up, because old 
tlog files are not being released by SOLR.

I suspect this problem is caused by a lot of open connections between the nodes 
in CLOSE_WAIT status. After running a node for only 2 days, the node already 
has 33 connections and about 11,000 deleted files that are still open.

I'm running about 100 cores on each node. Could this be contributing to the 
rate at which things go wrong? I suspect that on a setup with only 1 collection 
and 3 shards, the problem stays hidden for quite some time.
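
As a companion to the lsof check shown in the earlier updates, a hedged sketch 
that counts open handles to deleted files by reading /proc/<pid>/fd on Linux. 
It requires Java 7's java.nio.file, so it is meant as a standalone monitoring 
helper rather than code for the 1.6 JVM running Solr; the pid argument is 
whatever the Solr process id happens to be:

import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class DeletedFdCount {
    public static void main(String[] args) throws IOException {
        // Pass the Solr JVM pid as the first argument, e.g. 15452 as in the lsof output.
        String pid = args.length > 0 ? args[0] : "self";
        int deleted = 0;
        try (DirectoryStream<Path> fds = Files.newDirectoryStream(Paths.get("/proc", pid, "fd"))) {
            for (Path fd : fds) {
                try {
                    // On Linux an fd whose target file was unlinked shows " (deleted)" in its link text.
                    if (Files.readSymbolicLink(fd).toString().endsWith(" (deleted)")) {
                        deleted++;
                    }
                } catch (IOException ignored) {
                    // fd was closed while iterating; skip it
                }
            }
        }
        System.out.println("open handles to deleted files: " + deleted);
    }
}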



--
This message was sent by Atlassian JIRA
(v6.1#6144)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org