[jira] [Commented] (SOLR-5690) Null pointerException in AbstractStatsValues.accumulate

2014-02-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893162#comment-13893162
 ] 

ASF subversion and git services commented on SOLR-5690:
---

Commit 1565106 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1565106 ]

SOLR-5690: Fix NPE in AbstractStatsValues.accumulate with docValues and docs 
with empty field

> Null pointerException in AbstractStatsValues.accumulate
> ---
>
> Key: SOLR-5690
> URL: https://issues.apache.org/jira/browse/SOLR-5690
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0, 4.7
>Reporter: Elran Dvir
>Priority: Minor
> Attachments: SOLR-5690.patch
>
>
> It happens when there is a string field with docValues="true" and default="".
> Then, with documents that have an empty string value in the field,
> values.exists(docID) is true but values.strVal(docID) is null, and a
> NullPointerException is thrown when trying to add the value to the distinctValues set.
> The Solr query is stats=true&stats.field=X&stats.calcdistinct=true
> stack trace:
> java.lang.NullPointerException
> at java.util.TreeMap.put(TreeMap.java:567)
> at java.util.TreeSet.add(TreeSet.java:266)
> at org.apache.solr.handler.component.AbstractStatsValues.accumulate(StatsValuesFactory.java:164)
> at org.apache.solr.handler.component.StringStatsValues.accumulate(StatsValuesFactory.java:535)
> at org.apache.solr.handler.component.SimpleStats.getFieldCacheStats(StatsComponent.java:274)
> at org.apache.solr.handler.component.SimpleStats.getStatsFields(StatsComponent.java:225)
> at org.apache.solr.handler.component.SimpleStats.getStatsCounts(StatsComponent.java:200)
> at org.apache.solr.handler.component.StatsComponent.process(StatsComponent.java:68)
> at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:208)
> at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1904)
> at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:659)
> at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:362)
> at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:158)
> at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1474)
> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:499)
> at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
> at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:428)
> at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)
> at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
> at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
> at org.eclipse.jetty.server.Server.handle(Server.java:370)
> at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
> at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:949)
> at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1011)
> at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:644)
> at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
> at org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
> at org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
> at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
> at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
> at java.lang.Thread.run(Thread.java:804)
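For context, the NPE arises because TreeSet.add(null) throws; the fix has to treat a null strVal as a missing value before it reaches the distinct-values set. A minimal, hypothetical sketch of that kind of guard (not the actual SOLR-5690 patch; class and field names are illustrative):

```java
import java.util.TreeSet;

// Illustrates the failure mode and the guard: TreeSet.add(null) throws
// NullPointerException, so a doc whose docValues entry "exists" but whose
// string value is null must be counted as missing instead of accumulated.
public class DistinctAccumulator {
    final TreeSet<String> distinctValues = new TreeSet<>();
    long missing = 0;

    void accumulate(String value) {
        if (value == null) {   // values.exists(docID) true, strVal null
            missing++;         // treat as missing rather than throwing
            return;
        }
        distinctValues.add(value);
    }

    public static void main(String[] args) {
        DistinctAccumulator acc = new DistinctAccumulator();
        acc.accumulate("a");
        acc.accumulate(null);  // would have thrown NPE without the guard
        acc.accumulate("b");
        System.out.println(acc.distinctValues + " missing=" + acc.missing);
    }
}
```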



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5690) Null pointerException in AbstractStatsValues.accumulate

2014-02-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893163#comment-13893163
 ] 

ASF subversion and git services commented on SOLR-5690:
---

Commit 1565108 from sha...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1565108 ]

SOLR-5690: Fix NPE in AbstractStatsValues.accumulate with docValues and docs 
with empty field







[jira] [Resolved] (SOLR-5690) Null pointerException in AbstractStatsValues.accumulate

2014-02-06 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-5690.
-

   Resolution: Fixed
Fix Version/s: 4.7
   5.0
 Assignee: Shalin Shekhar Mangar

Thanks Elran!

> Null pointerException in AbstractStatsValues.accumulate
> ---
>
> Key: SOLR-5690
> URL: https://issues.apache.org/jira/browse/SOLR-5690
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0, 4.7
>Reporter: Elran Dvir
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: 5.0, 4.7
>
> Attachments: SOLR-5690.patch
>






FixedBitSet vs OpenBitSet

2014-02-06 Thread Shai Erera
Hi

I recently modified FixedBitSet.iterator() to return its own iterator,
instead of OpenBitSetIterator, and while I haven't done a thorough
performance test, the results on LUCENE-5425 suggested this helped in
general (even though we only tested facets). Also, last night's benchmarks
for WildcardQuery and PrefixQuery indicate a 20% jump in QPS.

This got me thinking (again) about why we keep both bitsets, and whether we
couldn't live with just one. Clearly FBS is faster than OBS (except perhaps
when you use fastSet/fastGet) since it doesn't need to do bounds checking.
Also, FBS lets you grow it by offering a convenient copy constructor
which allows expanding/shrinking the set.

OpenBitSet, however, lets you manage 64 * 2^32 bits, while FixedBitSet only
allows managing 2^32 bits. Is there any reason for this? Besides the fact
that neither can return a DISI over more than 2^32-1 bits/docs, I don't see
any real reason for it. We'd need to change the FBS API to take/return long,
but besides that?

I did a quick search on the usage of OpenBitSet and Eclipse found 272
mentions of it. I believe some, if not all, of these uses can be replaced
by FixedBitSet: definitely when the number of bits is known in advance, but
I think also when it's not known, by growing the bitset.

One place I found is DocValuesConsumer, which clearly knows the number of
bits in advance (dv.getValueCount()), however this returns a long, so this
prevents cutting over to FBS immediately, but if we allow FBS to handle
longs, I think that we can?

Anyway, wanted to get your thoughts before I open an issue and start this
work.

Shai
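The grow-by-copy idea under discussion can be sketched without Lucene: fixed-width long[] word storage (as in FixedBitSet) plus elastic growth (as OpenBitSet offers). This is an illustrative toy, not either class's real API:

```java
// Toy sketch: FixedBitSet-style long[] storage with OpenBitSet-style
// elasticity, grown by copying into a larger array (the same effect as
// using FixedBitSet's copy constructor to expand a set).
public class ElasticBitSet {
    long[] words;

    ElasticBitSet(int numBits) {
        words = new long[(numBits + 63) >>> 6]; // 64 bits per word
    }

    void set(int bit) {
        int word = bit >>> 6;
        if (word >= words.length) {
            // grow by copying, doubling to amortize repeated growth
            long[] bigger = new long[Math.max(word + 1, words.length * 2)];
            System.arraycopy(words, 0, bigger, 0, words.length);
            words = bigger;
        }
        words[word] |= 1L << (bit & 63);
    }

    boolean get(int bit) {
        int word = bit >>> 6;
        return word < words.length && (words[word] & (1L << (bit & 63))) != 0;
    }

    public static void main(String[] args) {
        ElasticBitSet bits = new ElasticBitSet(64);
        bits.set(5);
        bits.set(1000);   // beyond the initial capacity; the set grows
        System.out.println(bits.get(5) + " " + bits.get(1000) + " " + bits.get(999));
    }
}
```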


[jira] [Commented] (SOLR-5691) Unsynchronized WeakHashMap in SolrDispatchFilter causing issues in SolrCloud

2014-02-06 Thread Bojan Smid (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893200#comment-13893200
 ] 

Bojan Smid commented on SOLR-5691:
--

Thanks for fixing!

> Unsynchronized WeakHashMap in SolrDispatchFilter causing issues in SolrCloud
> 
>
> Key: SOLR-5691
> URL: https://issues.apache.org/jira/browse/SOLR-5691
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.6.1
>Reporter: Bojan Smid
>Assignee: Mark Miller
> Fix For: 5.0, 4.7
>
>
> I have a large SolrCloud setup, 7 nodes, each hosting a few 1000 cores 
> (leaders/replicas of the same shard exist on different nodes), which maybe 
> makes it easier to notice the problem.
> A node can randomly get into a state where it "stops" responding to PeerSync 
> /get requests from other nodes. When that happens, a thread dump of that node 
> shows multiple entries like this one (one entry for each "blocked" request 
> from another node; they don't go away with time):
> "http-bio-8080-exec-1781" daemon prio=5 tid=0x44017720 nid=0x25ae  [ JVM 
> locked by VM at safepoint, polling bits: safep ]
>java.lang.Thread.State: RUNNABLE
> at java.util.WeakHashMap.get(WeakHashMap.java:471)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:351)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:201)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
> WeakHashMap's internal state can easily get corrupted when used in an 
> unsynchronized way, in which case it is known to enter an infinite loop in 
> the .get() call. It is very likely that this is happening here too. The 
> reason others maybe don't see this issue could be related to the huge number 
> of cores I have in this system. The problem is usually created when some 
> node is starting. Also, it doesn't happen with each start; it obviously 
> depends on "correct" timing of events which leads to the map's corruption.
> The fix may be as simple as changing:
> protected final Map<SolrConfig, SolrRequestParsers> parsers =
>     new WeakHashMap<SolrConfig, SolrRequestParsers>();
> to:
> protected final Map<SolrConfig, SolrRequestParsers> parsers =
>     Collections.synchronizedMap(new WeakHashMap<SolrConfig, SolrRequestParsers>());
> but there may be performance considerations around this since it is the 
> entrance into Solr.
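The proposed change is the standard JDK idiom; a self-contained sketch of it (the key and value types here are placeholders, not Solr's actual ones):

```java
import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;

// Standard idiom from the issue: WeakHashMap is not thread-safe, so wrap
// it with Collections.synchronizedMap so concurrent access cannot corrupt
// its internal buckets into the infinite-loop state seen in the thread dump.
public class SyncWeakMapDemo {
    static final Map<Object, String> parsers =
            Collections.synchronizedMap(new WeakHashMap<Object, String>());

    public static void main(String[] args) {
        Object config = new Object();   // stands in for a config key
        parsers.put(config, "request-parsers-for-config");
        System.out.println(parsers.get(config));
    }
}
```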






[jira] [Commented] (SOLR-5629) SolrIndexSearcher.name should include core name

2014-02-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893226#comment-13893226
 ] 

ASF subversion and git services commented on SOLR-5629:
---

Commit 1565140 from sha...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1565140 ]

SOLR-5629: SolrIndexSearcher.name should include core name

> SolrIndexSearcher.name should include core name
> ---
>
> Key: SOLR-5629
> URL: https://issues.apache.org/jira/browse/SOLR-5629
> Project: Solr
>  Issue Type: Improvement
>Reporter: Shikhar Bhushan
>Assignee: Erick Erickson
>Priority: Minor
>
> The name attribute on {{SolrIndexSearcher}} is used in log lines, but does 
> not include the core name.
> So in a multi-core setup it is unnecessarily difficult to trace what core's 
> searcher is being referred to, e.g. in log lines that provide info on 
> searcher opens & closes.
> One-line patch that helps:
> Replace
> {noformat}
> this.name = "Searcher@" + Integer.toHexString(hashCode()) + (name!=null ? " "+name : "");
> {noformat}
> with
> {noformat}
> this.name = "Searcher@" + Integer.toHexString(hashCode()) + "[" + core.getName() + "]" + (name!=null ? " "+name : "");
> {noformat}






[jira] [Commented] (SOLR-5629) SolrIndexSearcher.name should include core name

2014-02-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893224#comment-13893224
 ] 

ASF subversion and git services commented on SOLR-5629:
---

Commit 1565138 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1565138 ]

SOLR-5629: SolrIndexSearcher.name should include core name







[jira] [Resolved] (SOLR-5629) SolrIndexSearcher.name should include core name

2014-02-06 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-5629.
-

   Resolution: Fixed
Fix Version/s: 4.7
   5.0

This was a trivial fix so I went ahead and committed it. I hope you don't 
mind, Erick.

Thanks Shikhar!

> SolrIndexSearcher.name should include core name
> ---
>
> Key: SOLR-5629
> URL: https://issues.apache.org/jira/browse/SOLR-5629
> Project: Solr
>  Issue Type: Improvement
>Reporter: Shikhar Bhushan
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 5.0, 4.7
>






[jira] [Resolved] (SOLR-2355) simple distrib update processor

2014-02-06 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-2355.
-

   Resolution: Duplicate
Fix Version/s: (was: 4.7)
   4.0-ALPHA

This was fixed as part of SOLR-2358 in the initial release of SolrCloud.

> simple distrib update processor
> ---
>
> Key: SOLR-2355
> URL: https://issues.apache.org/jira/browse/SOLR-2355
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud, update
>Reporter: Yonik Seeley
>Priority: Minor
> Fix For: 4.0-ALPHA
>
> Attachments: DistributedUpdateProcessorFactory.java, 
> TestDistributedUpdate.java
>
>
> Here's a simple update processor for distributed indexing that I implemented 
> years ago.
> It implements a simple hash(id) MOD nservers and just fails if any servers 
> are down.
> Given the recent activity in distributed indexing, I thought this might be at 
> least a good source for ideas.
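The hash(id) MOD nservers scheme described above fits in a few lines; a sketch (illustrative only, not the attached processor's actual code):

```java
// Sketch of hash(id) MOD nservers routing: each document id maps
// deterministically to one of n servers; there is no rebalancing and,
// as the issue notes, an update simply fails if its target server is down.
public class ModRouter {
    static int serverFor(String docId, int nServers) {
        // Math.floorMod keeps the result in [0, nServers) even when
        // hashCode() is negative.
        return Math.floorMod(docId.hashCode(), nServers);
    }

    public static void main(String[] args) {
        String id = "doc-42";
        System.out.println(id + " -> server " + serverFor(id, 4));
    }
}
```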






[jira] [Resolved] (SOLR-2341) Shard distribution policy

2014-02-06 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-2341.
-

   Resolution: Duplicate
Fix Version/s: (was: 4.7)

This is already implemented in the form of HashBasedRouter, CompositeIdRouter 
etc.

> Shard distribution policy
> -
>
> Key: SOLR-2341
> URL: https://issues.apache.org/jira/browse/SOLR-2341
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: William Mayor
>Priority: Minor
> Attachments: SOLR-2341.patch, SOLR-2341.patch
>
>
> A first crack at creating policies to be used for determining to which of a 
> list of shards a document should go. See discussion on "Distributed Indexing" 
> on dev-list.






Re: FixedBitSet vs OpenBitSet

2014-02-06 Thread Michael McCandless
Wow, those gains are substantial:

  PrefixQuery http://people.apache.org/~mikemccand/lucenebench/Prefix3.html

  WildcardQuery http://people.apache.org/~mikemccand/lucenebench/Wildcard.html

I will add an annotation for this change :)

The motivation for FBS was to store docIDs, and since those are
limited to a java int, we used an int index back then.

You're right that OpenBitSet has "elasticity", but one could also
achieve that by growing a FBS.  Maybe it should be renamed to
ElasticBitSet.

It does also allow long bit sets ... probably if we added
get/set(long) to FBS that wouldn't hurt performance?  If we replaced
get/set(int) with get/set(long) I'm not sure...

Mike McCandless

http://blog.mikemccandless.com






[jira] [Assigned] (LUCENE-5426) Make SortedSetDocValuesReaderState customizable

2014-02-06 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless reassigned LUCENE-5426:
--

Assignee: Michael McCandless

> Make SortedSetDocValuesReaderState customizable
> ---
>
> Key: LUCENE-5426
> URL: https://issues.apache.org/jira/browse/LUCENE-5426
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/facet
>Affects Versions: 4.6
>Reporter: John Wang
>Assignee: Michael McCandless
> Attachments: sortedsetreaderstate.patch, sortedsetreaderstate.patch
>
>
> We have a reader that has a different data structure (in memory) where the 
> cost of computing ordinals per reader open is too expensive in the realtime 
> setting.
> We are maintaining an in-memory data structure that supports all functionality 
> and would like to leverage SortedSetDocValuesAccumulator.






[jira] [Commented] (SOLR-4146) Error handling 'status' action, cannot access GUI

2014-02-06 Thread Tomek Szpinda (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893262#comment-13893262
 ] 

Tomek Szpinda commented on SOLR-4146:
-

Hi,
it happens on 4.5.1 when I try to go to the 'Core Admin' tab.

The url causing error:

/solr/admin/cores?wt=json

and response:

{"responseHeader":{"status":500,"QTime":3},"defaultCoreName":"collection1","error":{"msg":"Error handling 'status' action","code":500}}

with the trace from the response (unescaped):

org.apache.solr.common.SolrException: Error handling 'status' action
at org.apache.solr.handler.admin.CoreAdminHandler.handleStatusAction(CoreAdminHandler.java:663)
at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:163)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:655)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:246)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:195)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:100)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:953)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1041)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:603)
at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:312)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: org.apache.lucene.store.AlreadyClosedException: this Directory is closed
at org.apache.lucene.store.Directory.ensureOpen(Directory.java:260)
at org.apache.lucene.store.RAMDirectory.listAll(RAMDirectory.java:107)
at org.apache.lucene.store.NRTCachingDirectory.listAll(NRTCachingDirectory.java:124)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:712)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:663)
at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:376)
at org.apache.lucene.index.StandardDirectoryReader.isCurrent(StandardDirectoryReader.java:337)
at org.apache.solr.handler.admin.LukeRequestHandler.getIndexInfo(LukeRequestHandler.java:561)
at org.apache.solr.handler.admin.CoreAdminHandler.getCoreStatus(CoreAdminHandler.java:1002)
at org.apache.solr.handler.admin.CoreAdminHandler.handleStatusAction(CoreAdminHandler.java:651)
... 20 more


> Error handling 'status' action, cannot access GUI
> -
>
> Key: SOLR-4146
> URL: https://issues.apache.org/jira/browse/SOLR-4146
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud, web gui
>Affects Versions: 5.0
>Reporter: Markus Jelsma
> Fix For: 5.0
>
> Attachments: solr.png
>
>
> We sometimes see a node not responding to GUI requests. It then generates the 
> stack trace below. It does respond to search requests.
> {code}
> 2012-12-05 15:53:24,329 ERROR [solr.core.SolrCore] - [http-8080-exec-7] - : 
> org.apache.solr.common.SolrException: Error handling 'status' action 
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleStatusAction(CoreAdminHandler.java:725)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:158)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:144)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:372)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.catalin

[jira] [Created] (SOLR-5702) info-log collection.configName in ZkStateReader.readConfigName

2014-02-06 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-5702:
-

 Summary: info-log collection.configName in 
ZkStateReader.readConfigName
 Key: SOLR-5702
 URL: https://issues.apache.org/jira/browse/SOLR-5702
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.6.1
Reporter: Christine Poerschke


The scenario we hit was that a Solr instance for an existing collection 
mysteriously did not use the config specified via -Dcollection.configName=. 
It turned out this was correct behaviour, because ZooKeeper already held a 
configName= for the existing collection; org.apache.solr.cloud.ZkCLI linkconfig 
needs to be run to update the existing value if required.

Solr info-logging the configName it actually uses would help developers diagnose 
this scenario.
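A minimal sketch of the requested logging (the class and method signatures here are assumptions for illustration, not the actual patch; java.util.logging stands in for Solr's logging setup):

```java
import java.util.logging.Logger;

// Hedged sketch: once the configName for a collection has been resolved from
// ZooKeeper, log it at INFO level so operators can see which config set the
// instance really uses.
public class ConfigNameLoggingSketch {
    private static final Logger log =
        Logger.getLogger(ConfigNameLoggingSketch.class.getName());

    public String readConfigName(String collection, String configNameFromZk) {
        // The line the issue asks for: make the resolved value visible.
        log.info("collection " + collection + " uses configName=" + configNameFromZk);
        return configNameFromZk;
    }
}
```

A log line like this would immediately have shown why the -Dcollection.configName= system property appeared to be ignored.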



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5426) Make SortedSetDocValuesReaderState customizable

2014-02-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893274#comment-13893274
 ] 

ASF subversion and git services commented on LUCENE-5426:
-

Commit 1565167 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1565167 ]

LUCENE-5426: allow customization of SortedSetDocValuesReaderState for Lucene 
doc values faceting

> Make SortedSetDocValuesReaderState customizable
> ---
>
> Key: LUCENE-5426
> URL: https://issues.apache.org/jira/browse/LUCENE-5426
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/facet
>Affects Versions: 4.6
>Reporter: John Wang
>Assignee: Michael McCandless
> Attachments: sortedsetreaderstate.patch, sortedsetreaderstate.patch
>
>
> We have a reader that has a different (in-memory) data structure, where the 
> cost of computing ordinals per reader open is too expensive in the realtime 
> setting.
> We are maintaining an in-memory data structure that supports all the 
> functionality and would like to leverage SortedSetDocValuesAccumulator.






lucene-solr pull request: info-log collection.configName in ZkStateReader.r...

2014-02-06 Thread cpoerschke
GitHub user cpoerschke opened a pull request:

https://github.com/apache/lucene-solr/pull/29

info-log collection.configName in ZkStateReader.readConfigName

for https://issues.apache.org/jira/i#browse/SOLR-5702

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bloomberg/lucene-solr branch_4x-log-configName

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/29.patch


commit fdb34fca2328e5e300c234d3d8a7214ebc5cb963
Author: Christine Poerschke 
Date:   2014-02-05T18:28:02Z

lucene-solr: info-log collection.configName in ZkStateReader.readConfigName







[jira] [Commented] (LUCENE-5426) Make SortedSetDocValuesReaderState customizable

2014-02-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893276#comment-13893276
 ] 

ASF subversion and git services commented on LUCENE-5426:
-

Commit 1565168 from [~mikemccand] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1565168 ]

LUCENE-5426: allow customization of SortedSetDocValuesReaderState for Lucene 
doc values faceting

> Make SortedSetDocValuesReaderState customizable
> ---
>
> Key: LUCENE-5426
> URL: https://issues.apache.org/jira/browse/LUCENE-5426
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/facet
>Affects Versions: 4.6
>Reporter: John Wang
>Assignee: Michael McCandless
> Attachments: sortedsetreaderstate.patch, sortedsetreaderstate.patch
>
>
> We have a reader that has a different (in-memory) data structure, where the 
> cost of computing ordinals per reader open is too expensive in the realtime 
> setting.
> We are maintaining an in-memory data structure that supports all the 
> functionality and would like to leverage SortedSetDocValuesAccumulator.






[jira] [Commented] (SOLR-5702) info-log collection.configName in ZkStateReader.readConfigName

2014-02-06 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893277#comment-13893277
 ] 

Christine Poerschke commented on SOLR-5702:
---

https://github.com/apache/lucene-solr/pull/29 created

> info-log collection.configName in ZkStateReader.readConfigName
> --
>
> Key: SOLR-5702
> URL: https://issues.apache.org/jira/browse/SOLR-5702
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 4.6.1
>Reporter: Christine Poerschke
>
> The scenario we hit was that a Solr instance for an existing collection 
> mysteriously did not use the config specified via -Dcollection.configName=. 
> It turned out this was correct behaviour, because ZooKeeper already held a 
> configName= for the existing collection; org.apache.solr.cloud.ZkCLI 
> linkconfig needs to be run to update the existing value if required.
> Solr info-logging the configName it actually uses would help developers 
> diagnose this scenario.






[jira] [Resolved] (LUCENE-5426) Make SortedSetDocValuesReaderState customizable

2014-02-06 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-5426.


   Resolution: Fixed
Fix Version/s: 4.7
   5.0

Thanks John!

> Make SortedSetDocValuesReaderState customizable
> ---
>
> Key: LUCENE-5426
> URL: https://issues.apache.org/jira/browse/LUCENE-5426
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/facet
>Affects Versions: 4.6
>Reporter: John Wang
>Assignee: Michael McCandless
> Fix For: 5.0, 4.7
>
> Attachments: sortedsetreaderstate.patch, sortedsetreaderstate.patch
>
>
> We have a reader that has a different (in-memory) data structure, where the 
> cost of computing ordinals per reader open is too expensive in the realtime 
> setting.
> We are maintaining an in-memory data structure that supports all the 
> functionality and would like to leverage SortedSetDocValuesAccumulator.






[jira] [Commented] (LUCENE-5376) Add a demo search server

2014-02-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893287#comment-13893287
 ] 

ASF subversion and git services commented on LUCENE-5376:
-

Commit 1565184 from [~mikemccand] in branch 'dev/branches/lucene5376'
[ https://svn.apache.org/r1565184 ]

LUCENE-5376: merge trunk

> Add a demo search server
> 
>
> Key: LUCENE-5376
> URL: https://issues.apache.org/jira/browse/LUCENE-5376
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Attachments: lucene-demo-server.tgz
>
>
> I think it'd be useful to have a "demo" search server for Lucene.
> Rather than being fully featured, like Solr, it would be minimal, just 
> wrapping the existing Lucene modules to show how you can make use of these 
> features in a server setting.
> The purpose is to demonstrate how one can build a minimal search server on 
> top of APIs like SearchManager, SearcherLifetimeManager, etc.
> This is also useful for finding rough edges / issues in Lucene's APIs that 
> make building a server unnecessarily hard.
> I don't think it should have back compatibility promises (except Lucene's 
> index back compatibility), so it's free to improve as Lucene's APIs change.
> As a starting point, I'll post what I built for the "eating your own dog 
> food" search app for Lucene's & Solr's jira issues 
> http://jirasearch.mikemccandless.com (blog: 
> http://blog.mikemccandless.com/2013/05/eating-dog-food-with-lucene.html ). It 
> uses Netty to expose basic indexing & searching APIs via JSON, but it's very 
> rough (lots of nocommits).






Retrieving the document after partial updates.

2014-02-06 Thread mohit2360
I have made all the changes to my project that have been suggested so far through
various forums,

e.g. http://plone.org/products/ftw.solr

The fields are partially updated, but when a projection is selected the result
shows only the fields used in the app for updates, plus "_version_" with some
value, and not all the fields of the document.

Scenario:

The document has these fields:


[{"person_id":203433,"fname":"Amar","lname":"Jyadav","isLoggedIn":true}]


After setting "isLoggedIn" to false, no change shows up; instead this is what
the Solr engine returns:


[{"person_id":203433,"isLoggedIn":true,"_version_":1459287359397822464}]


The rest of the fields, like fname and lname, are no longer there after the
partial update.

A solution is required as soon as possible.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Retrieving-the-document-after-partial-updates-tp4115808.html
Sent from the Lucene - Java Developer mailing list archive at Nabble.com.




Re: FixedBitSet vs OpenBitSet

2014-02-06 Thread Shai Erera
I ran a quick benchmark on faceting (which uses FixedBitSet a lot) and the
results seem to be in the noise, at least on the server I ran. Maybe on a
32-bit system it would show different results, but I don't have any. What I
measured is FixedBitSet cutover to long instead of int:

                Task    QPS base  StdDev    QPS comp  StdDev       Pct diff
              IntNRQ        9.18   (5.0%)      8.87   (4.6%)   -3.4% ( -12% -   6%)
             Prefix3       96.44   (4.0%)     94.40   (3.6%)   -2.1% (  -9% -   5%)
        OrNotHighMed       18.00   (4.4%)     17.80   (4.9%)   -1.2% ( -10% -   8%)
       OrNotHighHigh       13.10   (3.5%)     12.95   (3.9%)   -1.1% (  -8% -   6%)
          OrHighHigh        3.77   (3.1%)      3.73   (3.3%)   -1.0% (  -7% -   5%)
           OrHighLow        5.48   (3.1%)      5.43   (3.3%)   -1.0% (  -7% -   5%)
        OrNotHighLow       29.77   (5.5%)     29.51   (5.9%)   -0.9% ( -11% -  11%)
       OrHighNotHigh        8.00   (3.1%)      7.94   (3.6%)   -0.8% (  -7% -   6%)
        OrHighNotLow        9.18   (3.0%)      9.11   (3.3%)   -0.7% (  -6% -   5%)
            Wildcard       14.27   (3.6%)     14.18   (3.2%)   -0.6% (  -7% -   6%)
           OrHighMed       12.03   (2.8%)     11.98   (3.2%)   -0.4% (  -6% -   5%)
              Fuzzy2       25.42   (2.9%)     25.34   (2.6%)   -0.3% (  -5% -   5%)
         AndHighHigh       15.96   (1.7%)     15.97   (2.2%)    0.0% (  -3% -   3%)
             Respell       24.37   (2.7%)     24.42   (3.3%)    0.2% (  -5% -   6%)
          AndHighMed       16.29   (2.1%)     16.33   (2.1%)    0.2% (  -3% -   4%)
        OrHighNotMed       14.60   (2.9%)     14.64   (3.1%)    0.3% (  -5% -   6%)
     MedSloppyPhrase        1.68   (7.7%)      1.68   (6.3%)    0.3% ( -12% -  15%)
         LowSpanNear        4.59   (4.0%)      4.61   (4.8%)    0.3% (  -8% -   9%)
     LowSloppyPhrase       20.66   (2.7%)     20.75   (2.5%)    0.4% (  -4% -   5%)
         MedSpanNear       14.86   (4.1%)     14.94   (4.6%)    0.6% (  -7% -   9%)
    HighSloppyPhrase        1.80  (10.1%)      1.81   (9.1%)    0.7% ( -16% -  22%)
           LowPhrase        5.77   (7.3%)      5.82   (7.5%)    0.8% ( -13% -  16%)
             LowTerm       99.61   (5.5%)    100.41   (4.7%)    0.8% (  -8% -  11%)
        HighSpanNear        3.71   (4.8%)      3.75   (6.7%)    1.0% ( -10% -  13%)
           MedPhrase       79.50   (7.0%)     80.33   (6.9%)    1.0% ( -11% -  16%)
          HighPhrase        2.29  (10.1%)      2.32  (10.2%)    1.3% ( -17% -  23%)
              Fuzzy1       35.61   (2.4%)     36.07   (3.3%)    1.3% (  -4% -   7%)
             MedTerm       26.49   (3.9%)     26.84   (4.2%)    1.3% (  -6% -   9%)
            HighTerm       22.04   (3.1%)     22.35   (3.1%)    1.4% (  -4% -   7%)
          AndHighLow      195.41   (2.7%)    199.15   (4.0%)    1.9% (  -4% -   8%)

There are two issues though, both come from Bits: length() returns an
integer and get() takes an integer.

Maybe we shouldn't change FixedBitSet, but instead create a new LongBitSet
or something which is not elastic and doesn't let you trip on the
differences between set/fastSet. It only has set, doesn't implement Bits, and
doesn't extend DocIdSet (which makes no sense for it anyway). We'd then use
FixedBitSet for the "docs" bits and LongBitSet e.g. in DVConsumer, Terms --
wherever the size is known in advance.
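A minimal sketch of such a non-elastic, long-indexed bit set (this is only the shape of the idea, not a committed class; the name and API are assumptions):

```java
// Sketch of a fixed-size bit set addressed by long indices: only get/set,
// no Bits, no DocIdSet, no elastic growth -- the design floated above.
public final class LongBitSetSketch {
    private final long[] bits;

    public LongBitSetSketch(long numBits) {
        // one backing long per 64 bits, rounded up
        this.bits = new long[(int) ((numBits + 63) >>> 6)];
    }

    public boolean get(long index) {
        int word = (int) (index >>> 6);           // which long holds this bit
        return (bits[word] & (1L << index)) != 0; // shift count uses index & 63
    }

    public void set(long index) {
        int word = (int) (index >>> 6);
        bits[word] |= 1L << index;
    }
}
```

Because it never grows, there is no set/fastSet distinction to trip on: every index inside the declared size is always in range.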

We can then review the places which rely on elasticity and decide if OBS is
still needed or not.

Shai


On Thu, Feb 6, 2014 at 12:38 PM, Michael McCandless <
luc...@mikemccandless.com> wrote:

> Wow, those gains are substantial:
>
>   PrefixQuery
> http://people.apache.org/~mikemccand/lucenebench/Prefix3.html
>
>   WildcardQuery
> http://people.apache.org/~mikemccand/lucenebench/Wildcard.html
>
> I will add an annotation for this change :)
>
> The motivation for FBS was to store docIDs, and since those are
> limited to a java int, we used an int index back then.
>
> You're right that OpenBitSet has "elasticity", but one could also
> achieve that by growing a FBS.  Maybe it should be renamed to
> ElasticBitSet.
>
> It does also allow long bit sets ... probably if we added
> get/set(long) to FBS that wouldn't hurt performance?  If we replaced
> get/set(int) with get/set(long) I'm not sure...
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Thu, Feb 6, 2014 at 3:39 AM, Shai Erera  wrote:
> > Hi
> >
> > I recently modified FixedBitSet.iterator() to return its own iterator,
> > instead of OpenBitSetIterator, and while I haven't done a thorough
> > performance test, the results on LUCENE-5425 suggested this helped in
> > general (even though we only tested 

Re: Retrieving the document after partial updates.

2014-02-06 Thread Jack Krupansky
Can you confirm that your Solr schema satisfies this stated requirement from 
the description you referenced:


"If there's a field in the Solr schema that's not stored=True, it will get 
dropped from documents in Solr on the next update to that document. Indexing 
won't fail, but that field simply won't have any content any more."


IOW, the stored="true" attribute is needed on any Solr fields that you wish 
to preserve on atomic update operations.
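A toy model of why that happens (stdlib only, not Solr code; the names are made up): on an atomic update, Solr rebuilds the document from its stored fields and applies the update on top, so any unstored field is silently dropped.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Toy model of Solr's atomic-update behaviour: the document is rebuilt from
// its stored fields only, then the partial update is applied on top.
public class AtomicUpdateModel {
    public static Map<String, Object> atomicUpdate(Map<String, Object> doc,
                                                   Set<String> storedFields,
                                                   Map<String, Object> update) {
        Map<String, Object> rebuilt = new HashMap<>();
        for (Map.Entry<String, Object> e : doc.entrySet()) {
            if (storedFields.contains(e.getKey())) {
                rebuilt.put(e.getKey(), e.getValue()); // only stored fields survive
            }
        }
        rebuilt.putAll(update); // apply the partial update
        return rebuilt;
    }
}
```

With fname and lname missing from storedFields, they vanish on the first update -- exactly what the reporter observed.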


-- Jack Krupansky

-Original Message- 
From: mohit2360

Sent: Thursday, February 6, 2014 6:44 AM
To: dev@lucene.apache.org
Subject: Retrieving the document after partial updates.

I have made all the changes to my project that have been suggested so far through
various forums,

e.g. http://plone.org/products/ftw.solr

The fields are partially updated, but when a projection is selected the result
shows only the fields used in the app for updates, plus "_version_" with some
value, and not all the fields of the document.

Scenario:

The document has these fields:


[{"person_id":203433,"fname":"Amar","lname":"Jyadav","isLoggedIn":true}]


After setting "isLoggedIn" to false, no change shows up; instead this is what
the Solr engine returns:


[{"person_id":203433,"isLoggedIn":true,"_version_":1459287359397822464}]


The rest of the fields, like fname and lname, are no longer there after the
partial update.

A solution is required as soon as possible.









[jira] [Commented] (SOLR-5379) Query-time multi-word synonym expansion

2014-02-06 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893368#comment-13893368
 ] 

Markus Jelsma commented on SOLR-5379:
-

Ok, it seems I had some bad jars lying around, messing things up when a specific 
token filter was in use. Anyway, this patch works fine from single word to 
multi word, but not the other way around.

I have a 4.5.0 check out here with just this patch. Using the example schema 
and data and the usual [seabiscuit,sea biscit,biscit] syns:
http://localhost:8983/solr/select?defType=edismax&qf=name&rows=0&debugQuery=true&q=

{code}
q=biscit => (+DisjunctionMaxQuery(((name:seabiscuit name:"sea biscit" 
name:biscit/no_coord
q=seabiscuit => (+DisjunctionMaxQuery(((name:seabiscuit name:"sea biscit" 
name:biscit/no_coord
q=sea biscit => (+(DisjunctionMaxQuery((name:sea)) 
DisjunctionMaxQuery(((name:seabiscuit name:"sea biscit" 
name:biscit)/no_coord
{code}

This is all very nice, but if we change the syns from [seabiscuit,sea 
biscit,biscit] to [seabiscuit,sea biscit] it no longer works for
{code}
q=sea biscit => (+(DisjunctionMaxQuery((name:sea)) 
DisjunctionMaxQuery((name:biscit/no_coord
{code}

[~tiennm] So I assume this is clearly not the desired behaviour, right? 



> Query-time multi-word synonym expansion
> ---
>
> Key: SOLR-5379
> URL: https://issues.apache.org/jira/browse/SOLR-5379
> Project: Solr
>  Issue Type: Improvement
>  Components: query parsers
>Reporter: Tien Nguyen Manh
>  Labels: multi-word, queryparser, synonym
> Fix For: 4.7
>
> Attachments: quoted.patch, synonym-expander.patch
>
>
> While dealing with synonyms at query time, Solr fails to work with multi-word 
> synonyms for two reasons:
> - First, the Lucene query parser tokenizes the user query on whitespace, so it 
> splits a multi-word term into separate terms before feeding them to the synonym 
> filter, and the synonym filter cannot recognize the multi-word term to expand it.
> - Second, if the synonym filter expands into multiple terms that contain a 
> multi-word synonym, SolrQueryParserBase currently uses MultiPhraseQuery to 
> handle synonyms, but MultiPhraseQuery does not work with terms that have 
> different numbers of words.
> For the first, we can quote all multi-word synonyms in the user query so that 
> the Lucene query parser does not split them. There is a related JIRA task: 
> https://issues.apache.org/jira/browse/LUCENE-2605.
> For the second, we can replace MultiPhraseQuery with an appropriate BooleanQuery 
> of SHOULD clauses containing multiple PhraseQuerys where the token stream has a 
> multi-word synonym.
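A stdlib-only sketch of that second idea (not the patch itself; the class and method names are made up): render a synonym group as a SHOULD-style disjunction in Lucene query syntax, quoting multi-word synonyms so they act as phrase queries.

```java
import java.util.List;
import java.util.StringJoiner;

// Sketch: expand a synonym group into a disjunction, where multi-word
// synonyms are quoted so they parse as phrase queries instead of being
// split on whitespace.
public class SynonymExpansionSketch {
    public static String expand(String field, List<String> synonyms) {
        StringJoiner sj = new StringJoiner(" OR ", "(", ")");
        for (String syn : synonyms) {
            if (syn.contains(" ")) {
                sj.add(field + ":\"" + syn + "\""); // phrase query for multi-word
            } else {
                sj.add(field + ":" + syn);
            }
        }
        return sj.toString();
    }
}
```

For the [seabiscuit,sea biscit,biscit] group this yields the same mix of term and phrase clauses seen in the debugQuery output above.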






[jira] [Commented] (SOLR-5379) Query-time multi-word synonym expansion

2014-02-06 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893393#comment-13893393
 ] 

Markus Jelsma commented on SOLR-5379:
-

By the way: using the SynonymQuotedDismaxQParser doesn't change anything.

> Query-time multi-word synonym expansion
> ---
>
> Key: SOLR-5379
> URL: https://issues.apache.org/jira/browse/SOLR-5379
> Project: Solr
>  Issue Type: Improvement
>  Components: query parsers
>Reporter: Tien Nguyen Manh
>  Labels: multi-word, queryparser, synonym
> Fix For: 4.7
>
> Attachments: quoted.patch, synonym-expander.patch
>
>
> While dealing with synonyms at query time, Solr fails to work with multi-word 
> synonyms for two reasons:
> - First, the Lucene query parser tokenizes the user query on whitespace, so it 
> splits a multi-word term into separate terms before feeding them to the synonym 
> filter, and the synonym filter cannot recognize the multi-word term to expand it.
> - Second, if the synonym filter expands into multiple terms that contain a 
> multi-word synonym, SolrQueryParserBase currently uses MultiPhraseQuery to 
> handle synonyms, but MultiPhraseQuery does not work with terms that have 
> different numbers of words.
> For the first, we can quote all multi-word synonyms in the user query so that 
> the Lucene query parser does not split them. There is a related JIRA task: 
> https://issues.apache.org/jira/browse/LUCENE-2605.
> For the second, we can replace MultiPhraseQuery with an appropriate BooleanQuery 
> of SHOULD clauses containing multiple PhraseQuerys where the token stream has a 
> multi-word synonym.






[jira] [Updated] (SOLR-5130) Implement addReplica Collections API

2014-02-06 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-5130:


Attachment: SOLR-5130.patch

Supported Parameters:
# collection
# shard (optional)
# node
# _route_ (optional)
# name - the core name
# instanceDir (optional)
# dataDir (optional)

The collection.configName is looked up and passed along to the core admin 
create. If name is not specified then it is assigned. I intend to auto-assign 
shard as well if neither shard nor _route_ is specified but that is not 
implemented yet.

I'm working on the tests.
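A hypothetical request exercising those parameters (host, collection, shard, and node values are all made up; this only mirrors the Collections API URL style, not the exact patch):

```java
// Assemble an ADDREPLICA Collections API request; every value passed in the
// example below is an illustrative assumption, not taken from the patch.
public class AddReplicaRequestSketch {
    public static String buildUrl(String host, String collection,
                                  String shard, String node) {
        return "http://" + host + "/solr/admin/collections"
            + "?action=ADDREPLICA"
            + "&collection=" + collection
            + "&shard=" + shard
            + "&node=" + node;
    }
}
```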


> Implement addReplica Collections API
> 
>
> Key: SOLR-5130
> URL: https://issues.apache.org/jira/browse/SOLR-5130
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 5.0, 4.7
>
> Attachments: SOLR-5130.patch
>
>
> addReplica API will add a node to a given collection/shard.
> Parameters:
> # node
> # collection
> # shard (optional)
> # _route_ (optional) (see SOLR-4221)
> If shard or _route_ is not specified then physical shards will be created on 
> the node for the given collection using the persisted values of 
> maxShardsPerNode and replicationFactor.






[jira] [Comment Edited] (SOLR-5130) Implement addReplica Collections API

2014-02-06 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893410#comment-13893410
 ] 

Shalin Shekhar Mangar edited comment on SOLR-5130 at 2/6/14 2:56 PM:
-

Supported Parameters:
# collection
# shard (optional)
# node
# _route_ (optional)
# name - the core name (optional)
# instanceDir (optional)
# dataDir (optional)

The collection.configName is looked up and passed along to the core admin 
create. If name is not specified then it is assigned. I intend to auto-assign 
shard as well if neither shard nor _route_ is specified but that is not 
implemented yet.

I'm working on the tests.



was (Author: shalinmangar):
Supported Parameters:
# collection
# shard (optional)
# node
# _route_ (optional)
# name - the core name
# instanceDir (optional)
# dataDir (optional)

The collection.configName is looked up and passed along to the core admin 
create. If name is not specified then it is assigned. I intend to auto-assign 
shard as well if neither shard nor _route_ is specified but that is not 
implemented yet.

I'm working on the tests.


> Implement addReplica Collections API
> 
>
> Key: SOLR-5130
> URL: https://issues.apache.org/jira/browse/SOLR-5130
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 5.0, 4.7
>
> Attachments: SOLR-5130.patch
>
>
> addReplica API will add a node to a given collection/shard.
> Parameters:
> # node
> # collection
> # shard (optional)
> # _route_ (optional) (see SOLR-4221)
> If shard or _route_ is not specified then physical shards will be created on 
> the node for the given collection using the persisted values of 
> maxShardsPerNode and replicationFactor.






[jira] [Commented] (LUCENE-5426) Make SortedSetDocValuesReaderState customizable

2014-02-06 Thread John Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893469#comment-13893469
 ] 

John Wang commented on LUCENE-5426:
---

Thanks Michael! Can't wait for release of 4.7 :)

> Make SortedSetDocValuesReaderState customizable
> ---
>
> Key: LUCENE-5426
> URL: https://issues.apache.org/jira/browse/LUCENE-5426
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/facet
>Affects Versions: 4.6
>Reporter: John Wang
>Assignee: Michael McCandless
> Fix For: 5.0, 4.7
>
> Attachments: sortedsetreaderstate.patch, sortedsetreaderstate.patch
>
>
> We have a reader that has a different (in-memory) data structure, where the 
> cost of computing ordinals per reader open is too expensive in the realtime 
> setting.
> We are maintaining an in-memory data structure that supports all the 
> functionality and would like to leverage SortedSetDocValuesAccumulator.






[jira] [Commented] (SOLR-5658) commitWithin does not reflect the new documents added

2014-02-06 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893461#comment-13893461
 ] 

Hoss Man commented on SOLR-5658:


mark: since this issue was recorded as "fixed" in 4.6.1 changes, re-opening now 
to address the problem it may have caused seems like a bad idea from an 
accountability standpoint -- since if/when it's fixed, it will be confusing to 
users if it gets "re-recorded" in CHANGES under 4.7 (or whatever)

Suggest you re-resolve this, and open a new linked ("Broken By") issue for the 
newly discovered problem in 4.6.1.

> commitWithin does not reflect the new documents added
> -
>
> Key: SOLR-5658
> URL: https://issues.apache.org/jira/browse/SOLR-5658
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.6, 5.0
>Reporter: Varun Thacker
>Assignee: Mark Miller
>Priority: Critical
> Fix For: 4.6.1, 5.0, 4.7
>
> Attachments: SOLR-5658.patch, SOLR-5658.patch
>
>
> I start 4 nodes using the setup mentioned on - 
> https://cwiki.apache.org/confluence/display/solr/Getting+Started+with+SolrCloud
>  
> I added a document using - 
> curl http://localhost:8983/solr/update?commitWithin=1 -H "Content-Type: 
> text/xml" --data-binary '<add><doc><field name="id">testdoc</field></doc></add>'
> In Solr 4.5.1 there is 1 soft commit with openSearcher=true and 1 hard commit 
> with openSearcher=false
> In Solr 4.6.x there is there is only one commit hard commit with 
> openSearcher=false
>  
> So even after 10 seconds queries on none of the shards reflect the added 
> document. 

[jira] [Commented] (SOLR-5658) commitWithin does not reflect the new documents added

2014-02-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893483#comment-13893483
 ] 

Mark Miller commented on SOLR-5658:
---

Reopening is just so it's not lost until we figure out what, if anything, we do.

> commitWithin does not reflect the new documents added
> -
>
> Key: SOLR-5658
> URL: https://issues.apache.org/jira/browse/SOLR-5658
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.6, 5.0
>Reporter: Varun Thacker
>Assignee: Mark Miller
>Priority: Critical
> Fix For: 4.6.1, 5.0, 4.7
>
> Attachments: SOLR-5658.patch, SOLR-5658.patch
>
>
> I start 4 nodes using the setup mentioned on - 
> https://cwiki.apache.org/confluence/display/solr/Getting+Started+with+SolrCloud
>  
> I added a document using - 
> curl "http://localhost:8983/solr/update?commitWithin=1" -H "Content-Type: 
> text/xml" --data-binary '<add><doc><field name="id">testdoc</field></doc></add>'
> In Solr 4.5.1 there is 1 soft commit with openSearcher=true and 1 hard commit 
> with openSearcher=false
> In Solr 4.6.x there is only one hard commit with 
> openSearcher=false
>  
> So even after 10 seconds, queries on none of the shards reflect the added 
> document. 
> This was also reported on the solr-user list ( 
> http://lucene.472066.n3.nabble.com/Possible-regression-for-Solr-4-6-0-commitWithin-does-not-work-with-replicas-td4106102.html
>  )
> Here are the relevant logs 
> Logs from Solr 4.5.1
> Node 1:
> {code}
> 420021 [qtp619011445-12] INFO  
> org.apache.solr.update.processor.LogUpdateProcessor  – [collection1] 
> webapp=/solr path=/update params={commitWithin=1} {add=[testdoc]} 0 45
> {code}
>  
> Node 2:
> {code}
> 119896 [qtp1608701025-10] INFO  
> org.apache.solr.update.processor.LogUpdateProcessor  – [collection1] 
> webapp=/solr path=/update 
> params={distrib.from=http://192.168.1.103:8983/solr/collection1/&update.distrib=TOLEADER&wt=javabin&version=2}
>  {add=[testdoc (1458003295513608192)]} 0 348
> 129648 [commitScheduler-8-thread-1] INFO  
> org.apache.solr.update.UpdateHandler  – start 
> commit{,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=true,prepareCommit=false}
> 129679 [commitScheduler-8-thread-1] INFO  
> org.apache.solr.search.SolrIndexSearcher  – Opening Searcher@e174f70 main
> 129680 [commitScheduler-8-thread-1] INFO  
> org.apache.solr.update.UpdateHandler  – end_commit_flush
> 129681 [searcherExecutor-5-thread-1] INFO  org.apache.solr.core.SolrCore  – 
> QuerySenderListener sending requests to Searcher@e174f70 
> main{StandardDirectoryReader(segments_3:11:nrt _2(4.5.1):C1)}
> 129681 [searcherExecutor-5-thread-1] INFO  org.apache.solr.core.SolrCore  – 
> QuerySenderListener done.
> 129681 [searcherExecutor-5-thread-1] INFO  org.apache.solr.core.SolrCore  – 
> [collection1] Registered new searcher Searcher@e174f70 
> main{StandardDirectoryReader(segments_3:11:nrt _2(4.5.1):C1)}
> 134648 [commitScheduler-7-thread-1] INFO  
> org.apache.solr.update.UpdateHandler  – start 
> commit{,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
> 134658 [commitScheduler-7-thread-1] INFO  org.apache.solr.core.SolrCore  – 
> SolrDeletionPolicy.onCommit: commits: num=2
>   
> commit{dir=NRTCachingDirectory(org.apache.lucene.store.NIOFSDirectory@/Users/varun/solr-4.5.1/node2/solr/collection1/data/index
>  lockFactory=org.apache.lucene.store.NativeFSLockFactory@66a394a3; 
> maxCacheMB=48.0 maxMergeSizeMB=4.0),segFN=segments_3,generation=3}
>   
> commit{dir=NRTCachingDirectory(org.apache.lucene.store.NIOFSDirectory@/Users/varun/solr-4.5.1/node2/solr/collection1/data/index
>  lockFactory=org.apache.lucene.store.NativeFSLockFactory@66a394a3; 
> maxCacheMB=48.0 maxMergeSizeMB=4.0),segFN=segments_4,generation=4}
> 134658 [commitScheduler-7-thread-1] INFO  org.apache.solr.core.SolrCore  – 
> newest commit generation = 4
> 134660 [commitScheduler-7-thread-1] INFO  
> org.apache.solr.update.UpdateHandler  – end_commit_flush
>  {code}
>  
> Node 3:
>  
> Node 4:
> {code}
> 374545 [qtp1608701025-16] INFO  
> org.apache.solr.update.processor.LogUpdateProcessor  – [collection1] 
> webapp=/solr path=/update 
> params={distrib.from=http://192.168.1.103:7574/solr/collection1/&update.distrib=FROMLEADER&wt=javabin&version=2}
>  {add=[testdoc (1458002133233172480)]} 0 20
> 384545 [commitScheduler-8-thread-1] INFO  
> org.apache.solr.update.UpdateHandler  – start 
> commit{,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=true,prepareCommit=false}
> 384552 [commitScheduler-8-thread-1] INFO  
> org.apache.solr.search.SolrIndexSearcher  – Opening Searcher@36137e08 main
> 384553 [commitScheduler-8-thread-1] INFO  
> org.apache.solr.update.UpdateHandler  – 
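The regression described above (commitWithin triggering only a hard commit with openSearcher=false, so the added document never becomes searchable) can be illustrated with a toy model. This is hypothetical code, not Solr's actual UpdateHandler: the point is simply that documents become visible only when a commit opens a new searcher.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of commit visibility (not Solr code): documents become
// searchable only when a commit opens a new searcher.
class ToyCore {
    private final List<String> pending = new ArrayList<>();    // indexed, not yet visible
    private final List<String> searchable = new ArrayList<>(); // visible to queries

    void add(String doc) { pending.add(doc); }

    // A commit with openSearcher=false flushes the index but does not
    // refresh the searchable view; only openSearcher=true reveals new docs.
    void commit(boolean openSearcher) {
        if (openSearcher) {
            searchable.addAll(pending);
            pending.clear();
        }
    }

    boolean isVisible(String doc) { return searchable.contains(doc); }
}
```

In Solr 4.5.1 the commitWithin window ends with a soft commit (openSearcher=true), matching the first case; in the 4.6.x logs only the openSearcher=false hard commit runs, matching the second.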

[jira] [Commented] (LUCENE-5376) Add a demo search server

2014-02-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893504#comment-13893504
 ] 

ASF subversion and git services commented on LUCENE-5376:
-

Commit 1565325 from [~mikemccand] in branch 'dev/branches/lucene5376'
[ https://svn.apache.org/r1565325 ]

LUCENE-5376: add some more test cases for dynamic expressions; clean up test 
code

> Add a demo search server
> 
>
> Key: LUCENE-5376
> URL: https://issues.apache.org/jira/browse/LUCENE-5376
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Attachments: lucene-demo-server.tgz
>
>
> I think it'd be useful to have a "demo" search server for Lucene.
> Rather than being fully featured, like Solr, it would be minimal, just 
> wrapping the existing Lucene modules to show how you can make use of these 
> features in a server setting.
> The purpose is to demonstrate how one can build a minimal search server on 
> top of APIs like SearcherManager, SearcherLifetimeManager, etc.
> This is also useful for finding rough edges / issues in Lucene's APIs that 
> make building a server unnecessarily hard.
> I don't think it should have back compatibility promises (except Lucene's 
> index back compatibility), so it's free to improve as Lucene's APIs change.
> As a starting point, I'll post what I built for the "eating your own dog 
> food" search app for Lucene's & Solr's jira issues 
> http://jirasearch.mikemccandless.com (blog: 
> http://blog.mikemccandless.com/2013/05/eating-dog-food-with-lucene.html ). It 
> uses Netty to expose basic indexing & searching APIs via JSON, but it's very 
> rough (lots of nocommits).
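A rough sketch of the kind of core such a demo server would wrap (hypothetical names, stdlib only, not the actual jirasearch code): an in-memory inverted index with an "add" operation and a "search" operation that returns a JSON-ish payload, the two endpoints a minimal server would expose.

```java
import java.util.*;

// Minimal, hypothetical server core: an in-memory inverted index with
// two "endpoints" that a thin HTTP/JSON layer could wrap.
class DemoSearchCore {
    private final Map<String, Set<Integer>> index = new HashMap<>();
    private final List<String> docs = new ArrayList<>();

    // "indexing endpoint": add a document, return its id
    int addDocument(String text) {
        int id = docs.size();
        docs.add(text);
        for (String term : text.toLowerCase().split("\\s+")) {
            index.computeIfAbsent(term, t -> new TreeSet<>()).add(id);
        }
        return id;
    }

    // "search endpoint": return matching doc ids as a JSON array string
    String search(String term) {
        Set<Integer> hits = index.getOrDefault(term.toLowerCase(),
                                               Collections.emptySet());
        StringBuilder sb = new StringBuilder("{\"hits\":[");
        Iterator<Integer> it = hits.iterator();
        while (it.hasNext()) {
            sb.append(it.next());
            if (it.hasNext()) sb.append(',');
        }
        return sb.append("]}").toString();
    }
}
```

A real version would of course delegate to Lucene's IndexWriter and SearcherManager rather than a HashMap, which is exactly the wrapping the demo server is meant to demonstrate.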



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5434) NRT support for file systems that do no have delete on last close or cannot delete while referenced semantics.

2014-02-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893526#comment-13893526
 ] 

ASF subversion and git services commented on LUCENE-5434:
-

Commit 1565344 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1565344 ]

LUCENE-5434: NRT support for file systems that do no have delete on last close 
or cannot delete while referenced semantics.
SOLR-5693: Running on HDFS does work correctly with NRT search.

> NRT support for file systems that do no have delete on last close or cannot 
> delete while referenced semantics.
> --
>
> Key: LUCENE-5434
> URL: https://issues.apache.org/jira/browse/LUCENE-5434
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.0, 4.7
>
> Attachments: LUCENE-5434.patch, LUCENE-5434.patch, LUCENE-5434.patch, 
> LUCENE-5434.patch
>
>
> See SOLR-5693 and our HDFS support - for something like HDFS to work with 
> NRT, we need an ability for near realtime readers to hold references to their 
> files to prevent deletes.






[jira] [Commented] (SOLR-5693) Running on HDFS does work correctly with NRT search.

2014-02-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893525#comment-13893525
 ] 

ASF subversion and git services commented on SOLR-5693:
---

Commit 1565344 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1565344 ]

LUCENE-5434: NRT support for file systems that do no have delete on last close 
or cannot delete while referenced semantics.
SOLR-5693: Running on HDFS does work correctly with NRT search.

> Running on HDFS does work correctly with NRT search.
> 
>
> Key: SOLR-5693
> URL: https://issues.apache.org/jira/browse/SOLR-5693
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.0, 4.7
>
>
> Like NFS, HDFS has different file delete semantics than Windows and Unix. For 
> non NRT cases, you can work around this by reserving commit points, but NRT 
> counts on delete on last close semantics (unix), or delete fails, try again 
> later (windows). This is because files can be merged away before they even 
> become part of a commit point. Meanwhile, real time readers can reference 
> those files.
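The reference-holding idea behind the fix can be sketched as a toy directory (hypothetical, not Lucene's IndexFileDeleter): a delete of a file still referenced by an open reader is deferred until the last reference is released, which implements delete-on-last-close semantics on top of a filesystem like HDFS that lacks them.

```java
import java.util.*;

// Toy model (not Lucene code) of the delete semantics discussed above.
// Readers hold references to files; a file must not vanish while a
// reference is live, so deletes of referenced files are deferred.
class RefCountingDirectory {
    private final Map<String, Integer> refCounts = new HashMap<>();
    private final Set<String> pendingDeletes = new HashSet<>();
    private final Set<String> files = new HashSet<>();

    void createFile(String name) { files.add(name); }

    void incRef(String name) { refCounts.merge(name, 1, Integer::sum); }

    void decRef(String name) {
        int n = refCounts.merge(name, -1, Integer::sum);
        if (n == 0) {
            refCounts.remove(name);
            // delete on last close: the deferred delete finally happens
            if (pendingDeletes.remove(name)) files.remove(name);
        }
    }

    // On raw HDFS the delete would succeed immediately and strand open
    // readers; holding a reference defers it instead.
    void delete(String name) {
        if (refCounts.getOrDefault(name, 0) > 0) {
            pendingDeletes.add(name); // still referenced: defer
        } else {
            files.remove(name);
        }
    }

    boolean exists(String name) { return files.contains(name); }
}
```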






[jira] [Created] (LUCENE-5436) RefrenceManager#accquire can result in infinite loop if manager resource is abused outside of the manager

2014-02-06 Thread Simon Willnauer (JIRA)
Simon Willnauer created LUCENE-5436:
---

 Summary: RefrenceManager#accquire can result in infinite loop if 
manager resource is abused outside of the manager
 Key: LUCENE-5436
 URL: https://issues.apache.org/jira/browse/LUCENE-5436
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: 4.6.1, 5.0, 4.7
Reporter: Simon Willnauer
 Fix For: 5.0, 4.7
 Attachments: LUCENE-5436.patch

I think I found a bug that can cause the ReferenceManager to get stuck in an 
infinite loop if the managed reference is decremented outside of the manager 
without a corresponding increment. I think this is pretty bad since debugging 
this is a mess; we should rather throw an ISE instead.
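A minimal sketch of the failure mode (hypothetical classes modeled loosely on ReferenceManager, not the real Lucene code): acquire() retries tryIncRef in a loop, so a reference whose count was decremented externally can never be incremented again and the loop would spin forever, unless the refcount is checked and an IllegalStateException thrown instead.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical simplification of the acquire loop. tryIncRef fails once
// the refcount hits zero, so an externally decRef'd current reference
// would make acquire() retry forever without the explicit check below.
class ManagedRef {
    final AtomicInteger refCount = new AtomicInteger(1);

    boolean tryIncRef() {
        for (;;) {
            int n = refCount.get();
            if (n <= 0) return false; // already fully released
            if (refCount.compareAndSet(n, n + 1)) return true;
        }
    }

    void decRef() { refCount.decrementAndGet(); }
}

class SimpleReferenceManager {
    volatile ManagedRef current = new ManagedRef();

    ManagedRef acquire() {
        ManagedRef ref;
        do {
            ref = current;
            if (ref == null) throw new IllegalStateException("manager is closed");
            // Detect misuse instead of spinning: if the reference was
            // decRef'd outside the manager, fail fast with an ISE.
            if (ref.refCount.get() <= 0) {
                throw new IllegalStateException(
                    "reference count was decremented outside of the manager");
            }
        } while (!ref.tryIncRef());
        return ref;
    }
}
```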






[jira] [Updated] (LUCENE-5436) RefrenceManager#accquire can result in infinite loop if manager resource is abused outside of the manager

2014-02-06 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-5436:


Attachment: LUCENE-5436.patch

Here is a patch and a test that shows the problem.

> RefrenceManager#accquire can result in infinite loop if manager resource is 
> abused outside of the manager
> -
>
> Key: LUCENE-5436
> URL: https://issues.apache.org/jira/browse/LUCENE-5436
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 4.6.1, 5.0, 4.7
>Reporter: Simon Willnauer
> Fix For: 5.0, 4.7
>
> Attachments: LUCENE-5436.patch
>
>
> I think I found a bug that can cause the ReferenceManager to stick in an 
> infinite loop if the managed reference is decremented outside of the manager 
> without a corresponding increment. I think this is pretty bad since the 
> debugging of this is a mess and we should rather throw ISE instead.






[jira] [Commented] (LUCENE-5434) NRT support for file systems that do no have delete on last close or cannot delete while referenced semantics.

2014-02-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893530#comment-13893530
 ] 

ASF subversion and git services commented on LUCENE-5434:
-

Commit 1565347 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1565347 ]

LUCENE-5434: NRT support for file systems that do no have delete on last close 
or cannot delete while referenced semantics.
SOLR-5693: Running on HDFS does work correctly with NRT search.

> NRT support for file systems that do no have delete on last close or cannot 
> delete while referenced semantics.
> --
>
> Key: LUCENE-5434
> URL: https://issues.apache.org/jira/browse/LUCENE-5434
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.0, 4.7
>
> Attachments: LUCENE-5434.patch, LUCENE-5434.patch, LUCENE-5434.patch, 
> LUCENE-5434.patch
>
>
> See SOLR-5693 and our HDFS support - for something like HDFS to work with 
> NRT, we need an ability for near realtime readers to hold references to their 
> files to prevent deletes.






[jira] [Resolved] (LUCENE-5434) NRT support for file systems that do no have delete on last close or cannot delete while referenced semantics.

2014-02-06 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved LUCENE-5434.
-

Resolution: Fixed

Thanks Mike! Good thought on the assert.

> NRT support for file systems that do no have delete on last close or cannot 
> delete while referenced semantics.
> --
>
> Key: LUCENE-5434
> URL: https://issues.apache.org/jira/browse/LUCENE-5434
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.0, 4.7
>
> Attachments: LUCENE-5434.patch, LUCENE-5434.patch, LUCENE-5434.patch, 
> LUCENE-5434.patch
>
>
> See SOLR-5693 and our HDFS support - for something like HDFS to work with 
> NRT, we need an ability for near realtime readers to hold references to their 
> files to prevent deletes.






[jira] [Commented] (SOLR-5693) Running on HDFS does work correctly with NRT search.

2014-02-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893529#comment-13893529
 ] 

ASF subversion and git services commented on SOLR-5693:
---

Commit 1565347 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1565347 ]

LUCENE-5434: NRT support for file systems that do no have delete on last close 
or cannot delete while referenced semantics.
SOLR-5693: Running on HDFS does work correctly with NRT search.

> Running on HDFS does work correctly with NRT search.
> 
>
> Key: SOLR-5693
> URL: https://issues.apache.org/jira/browse/SOLR-5693
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.0, 4.7
>
>
> Like NFS, HDFS has different file delete semantics than Windows and Unix. For 
> non NRT cases, you can work around this by reserving commit points, but NRT 
> counts on delete on last close semantics (unix), or delete fails, try again 
> later (windows). This is because files can be merged away before they even 
> become part of a commit point. Meanwhile, real time readers can reference 
> those files.






[jira] [Resolved] (SOLR-5693) Running on HDFS does work correctly with NRT search.

2014-02-06 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-5693.
---

Resolution: Fixed

> Running on HDFS does work correctly with NRT search.
> 
>
> Key: SOLR-5693
> URL: https://issues.apache.org/jira/browse/SOLR-5693
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.0, 4.7
>
>
> Like NFS, HDFS has different file delete semantics than Windows and Unix. For 
> non NRT cases, you can work around this by reserving commit points, but NRT 
> counts on delete on last close semantics (unix), or delete fails, try again 
> later (windows). This is because files can be merged away before they even 
> become part of a commit point. Meanwhile, real time readers can reference 
> those files.






[JENKINS] Lucene-4x-Linux-Java6-64-test-only - Build # 12106 - Failure!

2014-02-06 Thread builder
Build: builds.flonkings.com/job/Lucene-4x-Linux-Java6-64-test-only/12106/

1 tests failed.
REGRESSION:  org.apache.lucene.index.TestIndexWriterReader.testUpdateDocument

Error Message:
MockDirectoryWrapper: file "_5.tvd" is still open: cannot delete

Stack Trace:
java.lang.AssertionError: MockDirectoryWrapper: file "_5.tvd" is still open: 
cannot delete
at 
__randomizedtesting.SeedInfo.seed([8B4984DAD9F90593:DC2A2A8BEB3D2653]:0)
at 
org.apache.lucene.store.MockDirectoryWrapper.deleteFile(MockDirectoryWrapper.java:433)
at 
org.apache.lucene.store.MockDirectoryWrapper.deleteFile(MockDirectoryWrapper.java:392)
at 
org.apache.lucene.index.IndexFileDeleter.deleteFile(IndexFileDeleter.java:584)
at 
org.apache.lucene.index.IndexFileDeleter.decRef(IndexFileDeleter.java:517)
at 
org.apache.lucene.index.IndexFileDeleter.deleteCommits(IndexFileDeleter.java:286)
at 
org.apache.lucene.index.IndexFileDeleter.checkpoint(IndexFileDeleter.java:457)
at 
org.apache.lucene.index.IndexWriter.finishCommit(IndexWriter.java:3049)
at 
org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3029)
at 
org.apache.lucene.index.IndexWriter.closeInternal(IndexWriter.java:1036)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:927)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:889)
at 
org.apache.lucene.index.TestIndexWriterReader.testUpdateDocument(TestIndexWriterReader.java:180)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailu

Re: [JENKINS] Lucene-4x-Linux-Java6-64-test-only - Build # 12106 - Failure!

2014-02-06 Thread Mark Miller
Hmm… haven’t seen that happen yet. I think it’s a test issue; I’ll look closer.

- Mark  

http://about.me/markrmiller




[jira] [Commented] (LUCENE-5434) NRT support for file systems that do no have delete on last close or cannot delete while referenced semantics.

2014-02-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893568#comment-13893568
 ] 

Mark Miller commented on LUCENE-5434:
-

I think a problem with the wider testing is that even if you are just getting 
NRT readers, depending on timing, you can still have legitimate deletes of open 
files that don't make it into a commit point and are not referenced by an NRT 
reader.

> NRT support for file systems that do no have delete on last close or cannot 
> delete while referenced semantics.
> --
>
> Key: LUCENE-5434
> URL: https://issues.apache.org/jira/browse/LUCENE-5434
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.0, 4.7
>
> Attachments: LUCENE-5434.patch, LUCENE-5434.patch, LUCENE-5434.patch, 
> LUCENE-5434.patch
>
>
> See SOLR-5693 and our HDFS support - for something like HDFS to work with 
> NRT, we need an ability for near realtime readers to hold references to their 
> files to prevent deletes.






[jira] [Commented] (LUCENE-5436) RefrenceManager#accquire can result in infinite loop if manager resource is abused outside of the manager

2014-02-06 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893577#comment-13893577
 ] 

Michael McCandless commented on LUCENE-5436:


Hmm, sneaky.  So we are adding best-effort catching of this misuse?

Instead of isClosed, can we add a protected getRefCount?  That would be more 
consistent with the abstract methods we already have.

There's a small typo in the exception message (then -> the).

> RefrenceManager#accquire can result in infinite loop if manager resource is 
> abused outside of the manager
> -
>
> Key: LUCENE-5436
> URL: https://issues.apache.org/jira/browse/LUCENE-5436
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 4.6.1, 5.0, 4.7
>Reporter: Simon Willnauer
> Fix For: 5.0, 4.7
>
> Attachments: LUCENE-5436.patch
>
>
> I think I found a bug that can cause the ReferenceManager to stick in an 
> infinite loop if the managed reference is decremented outside of the manager 
> without a corresponding increment. I think this is pretty bad since the 
> debugging of this is a mess and we should rather throw ISE instead.






[jira] [Commented] (LUCENE-5434) NRT support for file systems that do no have delete on last close or cannot delete while referenced semantics.

2014-02-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893590#comment-13893590
 ] 

ASF subversion and git services commented on LUCENE-5434:
-

Commit 1565373 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1565373 ]

LUCENE-5434: This test method uses a non nrt reader.

> NRT support for file systems that do no have delete on last close or cannot 
> delete while referenced semantics.
> --
>
> Key: LUCENE-5434
> URL: https://issues.apache.org/jira/browse/LUCENE-5434
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.0, 4.7
>
> Attachments: LUCENE-5434.patch, LUCENE-5434.patch, LUCENE-5434.patch, 
> LUCENE-5434.patch
>
>
> See SOLR-5693 and our HDFS support - for something like HDFS to work with 
> NRT, we need an ability for near realtime readers to hold references to their 
> files to prevent deletes.
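The idea of readers holding references to their files to defer deletion can be sketched with a toy reference-counting deleter. This is a hypothetical illustration under stated assumptions (ToyRefCountingDeleter and its method names are invented), not Lucene's actual IndexFileDeleter:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy deferred deleter: files referenced by an open NRT reader are not
// physically deleted until the last reference is released, which is what a
// file system like HDFS (no delete-on-last-close semantics) requires.
class ToyRefCountingDeleter {
    private final Map<String, Integer> refCounts = new HashMap<>();
    private final Set<String> pendingDeletes = new HashSet<>();
    private final Set<String> deleted = new HashSet<>();

    synchronized void incRef(String file) {
        refCounts.merge(file, 1, Integer::sum);
    }

    synchronized void decRef(String file) {
        int count = refCounts.merge(file, -1, Integer::sum);
        if (count == 0) {
            refCounts.remove(file);
            if (pendingDeletes.remove(file)) {
                deleted.add(file);  // safe now: no reader holds the file
            }
        }
    }

    synchronized void delete(String file) {
        if (refCounts.getOrDefault(file, 0) > 0) {
            pendingDeletes.add(file);  // still referenced: defer the delete
        } else {
            deleted.add(file);         // unreferenced: delete immediately
        }
    }

    synchronized boolean isDeleted(String file) {
        return deleted.contains(file);
    }
}
```

A reader incRefs its segment files on open and decRefs them on close; deletes requested in between are queued rather than executed.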






[jira] [Commented] (LUCENE-5434) NRT support for file systems that do no have delete on last close or cannot delete while referenced semantics.

2014-02-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893592#comment-13893592
 ] 

ASF subversion and git services commented on LUCENE-5434:
-

Commit 1565374 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1565374 ]

LUCENE-5434: This test method uses a non nrt reader.







[jira] [Commented] (LUCENE-5434) NRT support for file systems that do no have delete on last close or cannot delete while referenced semantics.

2014-02-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893595#comment-13893595
 ] 

Mark Miller commented on LUCENE-5434:
-

Scratch that last comment. I just couldn't spot the in-your-face non-NRT 
IndexReader open in that test. Mike pointed it out on IRC.







[jira] [Commented] (LUCENE-5418) Don't use .advance on costly (e.g. distance range facets) filters

2014-02-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893633#comment-13893633
 ] 

ASF subversion and git services commented on LUCENE-5418:
-

Commit 1565387 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1565387 ]

LUCENE-5418: faster drill-down/sideways on costly filters

> Don't use .advance on costly (e.g. distance range facets) filters
> -
>
> Key: LUCENE-5418
> URL: https://issues.apache.org/jira/browse/LUCENE-5418
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/facet
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 5.0, 4.7
>
> Attachments: LUCENE-5418.patch, LUCENE-5418.patch
>
>
> If you use a distance filter today (see 
> http://blog.mikemccandless.com/2014/01/geospatial-distance-faceting-using.html
>  ), then drill down on one of those ranges, under the hood Lucene is using 
> .advance on the Filter, which is very costly because we end up computing 
> distance on (possibly many) hits that don't match the query.
> It's better performance to find the hits matching the Query first, and then 
> check the filter.
> FilteredQuery can already do this today, when you use its 
> QUERY_FIRST_FILTER_STRATEGY.  This essentially accomplishes the same thing as 
> Solr's "post filters" (I think?) but with a far simpler/better/less code 
> approach.
> E.g., I believe ElasticSearch uses this API when it applies costly filters.
> Longish term, I think  Query/Filter ought to know itself that it's expensive, 
> and cases where such a Query/Filter is MUST'd onto a BooleanQuery (e.g. 
> ConstantScoreQuery), or the Filter is a clause in BooleanFilter, or it's 
> passed to IndexSearcher.search, we should also be "smart" here and not call 
> .advance on such clauses.  But that'd be a biggish change ... so for today 
> the "workaround" is the user must carefully construct the FilteredQuery 
> themselves.
> In the mean time, as another workaround, I want to fix DrillSideways so that 
> when you drill down on such filters it doesn't use .advance; this should give 
> a good speedup for the "normal path" API usage with a costly filter.
> I'm iterating on the lucene server branch (LUCENE-5376) but once it's working 
> I plan to merge this back to trunk / 4.7.
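The query-first strategy described above (what FilteredQuery's QUERY_FIRST_FILTER_STRATEGY does) can be illustrated with a toy simulation. This is not Lucene code: the class, methods, and the odd/even predicate standing in for a costly distance computation are all invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntPredicate;

// Toy comparison of filter strategies. queryFirst() iterates only the (cheap)
// query matches and applies the costly check per hit; filterFirst() models
// advancing the costly filter across the whole doc-id space, paying the
// expensive check even for docs the query never matches.
class QueryFirstFilterDemo {
    static int costlyChecks = 0;  // counts invocations of the expensive filter

    static List<Integer> queryFirst(int[] queryHits, IntPredicate costlyFilter) {
        List<Integer> out = new ArrayList<>();
        for (int doc : queryHits) {
            costlyChecks++;
            if (costlyFilter.test(doc)) out.add(doc);  // filter checked per hit only
        }
        return out;
    }

    static List<Integer> filterFirst(int maxDoc, int[] queryHits, IntPredicate costlyFilter) {
        List<Integer> out = new ArrayList<>();
        java.util.Set<Integer> hits = new java.util.HashSet<>();
        for (int d : queryHits) hits.add(d);
        for (int doc = 0; doc < maxDoc; doc++) {
            costlyChecks++;  // expensive check runs for every candidate doc
            if (costlyFilter.test(doc) && hits.contains(doc)) out.add(doc);
        }
        return out;
    }
}
```

Both strategies return the same hits; the query-first variant pays the costly predicate only once per query match, which is the speedup the issue is after.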






[jira] [Updated] (SOLR-5702) info-log collection.configName in ZkStateReader.readConfigName

2014-02-06 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-5702:
--

 Priority: Minor  (was: Major)
Fix Version/s: 4.7, 5.0
 Assignee: Mark Miller

> info-log collection.configName in ZkStateReader.readConfigName
> --
>
> Key: SOLR-5702
> URL: https://issues.apache.org/jira/browse/SOLR-5702
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 4.6.1
>Reporter: Christine Poerschke
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 5.0, 4.7
>
>
> The scenario we had was that a Solr instance for an existing collection 
> mysteriously did not use the config specified via -Dcollection.configName=. 
> This, it turned out, was rightly so, because ZooKeeper already had a 
> configName= for the already-existing collection.
> org.apache.solr.cloud.ZkCLI linkconfig needs to be run to update the existing 
> value if required.
> Solr info-logging the configName it uses would help developers in this 
> scenario.






[jira] [Commented] (LUCENE-5418) Don't use .advance on costly (e.g. distance range facets) filters

2014-02-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893649#comment-13893649
 ] 

ASF subversion and git services commented on LUCENE-5418:
-

Commit 1565391 from [~mikemccand] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1565391 ]

LUCENE-5418: faster drill-down/sideways on costly filters







[jira] [Resolved] (LUCENE-5418) Don't use .advance on costly (e.g. distance range facets) filters

2014-02-06 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-5418.


Resolution: Fixed







[jira] [Commented] (SOLR-5702) info-log collection.configName in ZkStateReader.readConfigName

2014-02-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893662#comment-13893662
 ] 

ASF subversion and git services commented on SOLR-5702:
---

Commit 1565399 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1565399 ]

SOLR-5702: Log config name found for collection at info level.







[jira] [Resolved] (SOLR-5702) info-log collection.configName in ZkStateReader.readConfigName

2014-02-06 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-5702.
---

Resolution: Fixed

Thanks Christine! Sorry, I forgot to put the comment that's supposed to close the 
pull request (which didn't work last time anyway - perhaps due to case).







[jira] [Commented] (SOLR-5702) info-log collection.configName in ZkStateReader.readConfigName

2014-02-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893666#comment-13893666
 ] 

ASF subversion and git services commented on SOLR-5702:
---

Commit 1565400 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1565400 ]

SOLR-5702: Log config name found for collection at info level.







[jira] [Commented] (LUCENE-5436) RefrenceManager#accquire can result in infinite loop if manager resource is abused outside of the manager

2014-02-06 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893684#comment-13893684
 ] 

Simon Willnauer commented on LUCENE-5436:
-

bq. Hmm, sneaky. So we are adding best-effort catching of this mis-use?
Yeah, this is nothing more than preventing the manager from going into an 
infinite loop.

bq. Instead of isClosed can we add protected getRefCount? That's just more 
consistent with the current abstract methods we already have?
Yeah, I think that is a better name.







[jira] [Updated] (LUCENE-5436) RefrenceManager#accquire can result in infinite loop if manager resource is abused outside of the manager

2014-02-06 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-5436:


Attachment: LUCENE-5436.patch

Here is a new patch using getRefCount...







[jira] [Commented] (LUCENE-5436) RefrenceManager#accquire can result in infinite loop if manager resource is abused outside of the manager

2014-02-06 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893702#comment-13893702
 ] 

Michael McCandless commented on LUCENE-5436:


+1, thanks Simon.







[jira] [Commented] (SOLR-5426) org.apache.solr.common.SolrException; org.apache.solr.common.SolrException: org.apache.lucene.search.highlight.InvalidTokenOffsetsException: Token 0 exceeds length of pr

2014-02-06 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893706#comment-13893706
 ] 

Hoss Man commented on SOLR-5426:


Arun: Since you seem to have a grasp on the problem here, would it be possible 
for you to help write a unit test to recreate it?


> org.apache.solr.common.SolrException; org.apache.solr.common.SolrException: 
> org.apache.lucene.search.highlight.InvalidTokenOffsetsException: Token 0 
> exceeds length of provided text sized 840
> --
>
> Key: SOLR-5426
> URL: https://issues.apache.org/jira/browse/SOLR-5426
> Project: Solr
>  Issue Type: Bug
>  Components: highlighter
>Affects Versions: 4.4, 4.5.1
>Reporter: Nikolay
>Priority: Minor
> Attachments: OffsetLimitTokenFilter.java.patch, highlighter.zip
>
>
> The highlighter does not work correctly on the test data.
> I added index and config files (see attached highlighter.zip) for 
> reproducing this issue.
> Everything works fine if I search without highlighting:
> http://localhost:8983/solr/global/select?q=aa&wt=json&indent=true
> But if I search with highlighting: 
> http://localhost:8983/solr/global/select?q=aa&wt=json&indent=true&hl=true&hl.fl=*_stx&hl.simple.pre=&hl.simple.post=<%2Fem>
> I get the error:
> ERROR - 2013-11-07 10:17:15.797; org.apache.solr.common.SolrException; 
> null:org.apache.solr.common.SolrException: 
> org.apache.lucene.search.highlight.InvalidTokenOffsetsException: Token 0 
> exceeds length of provided text sized 840
>   at 
> org.apache.solr.highlight.DefaultSolrHighlighter.doHighlightingByHighlighter(DefaultSolrHighlighter.java:542)
>   at 
> org.apache.solr.highlight.DefaultSolrHighlighter.doHighlighting(DefaultSolrHighlighter.java:414)
>   at 
> org.apache.solr.handler.component.HighlightComponent.process(HighlightComponent.java:139)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:208)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1859)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:703)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:406)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:195)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
>   at org.eclipse.jetty.server.Server.handle(Server.java:368)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
>   at 
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
>   at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
>   at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
>   at 
> org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
>   at 
> org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
>   at 
> org.eclipse.jetty.util.thread.Que

[jira] [Commented] (LUCENE-5205) [PATCH] SpanQueryParser with recursion, analysis and syntax very similar to classic QueryParser

2014-02-06 Thread Tim Allison (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893708#comment-13893708
 ] 

Tim Allison commented on LUCENE-5205:
-

Similar to previous efforts.

> [PATCH] SpanQueryParser with recursion, analysis and syntax very similar to 
> classic QueryParser
> ---
>
> Key: LUCENE-5205
> URL: https://issues.apache.org/jira/browse/LUCENE-5205
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser
>Reporter: Tim Allison
>  Labels: patch
> Fix For: 4.7
>
> Attachments: LUCENE-5205.patch.gz, LUCENE_5205.patch, 
> SpanQueryParser_v1.patch.gz, patch.txt
>
>
> This parser extends QueryParserBase and includes functionality from:
> * Classic QueryParser: most of its syntax
> * SurroundQueryParser: recursive parsing for "near" and "not" clauses.
> * ComplexPhraseQueryParser: can handle "near" queries that include multiterms 
> (wildcard, fuzzy, regex, prefix),
> * AnalyzingQueryParser: has an option to analyze multiterms.
> At a high level, there's a first pass BooleanQuery/field parser and then a 
> span query parser handles all terminal nodes and phrases.
> Same as classic syntax:
> * term: test 
> * fuzzy: roam~0.8, roam~2
> * wildcard: te?t, test*, t*st
> * regex: /\[mb\]oat/
> * phrase: "jakarta apache"
> * phrase with slop: "jakarta apache"~3
> * default "or" clause: jakarta apache
> * grouping "or" clause: (jakarta apache)
> * boolean and +/-: (lucene OR apache) NOT jakarta; +lucene +apache -jakarta
> * multiple fields: title:lucene author:hatcher
>  
> Main additions in SpanQueryParser syntax vs. classic syntax:
> * Can require "in order" for phrases with slop with the \~> operator: 
> "jakarta apache"\~>3
> * Can specify "not near": "fever bieber"!\~3,10 ::
> find "fever" but not if "bieber" appears within 3 words before or 10 
> words after it.
> * Fully recursive phrasal queries with \[ and \]; as in: \[\[jakarta 
> apache\]~3 lucene\]\~>4 :: 
> find "jakarta" within 3 words of "apache", and that hit has to be within 
> four words before "lucene"
> * Can also use \[\] for single level phrasal queries instead of " as in: 
> \[jakarta apache\]
> * Can use "or grouping" clauses in phrasal queries: "apache (lucene solr)"\~3 
> :: find "apache" and then either "lucene" or "solr" within three words.
> * Can use multiterms in phrasal queries: "jakarta\~1 ap*che"\~2
> * Did I mention full recursion: \[\[jakarta\~1 ap*che\]\~2 (solr~ 
> /l\[ou\]\+\[cs\]\[en\]\+/)]\~10 :: Find something like "jakarta" within two 
> words of "ap*che" and that hit has to be within ten words of something like 
> "solr" or that "lucene" regex.
> * Can require at least x number of hits at boolean level: "apache AND (lucene 
> solr tika)~2
> * Can use negative only query: -jakarta :: Find all docs that don't contain 
> "jakarta"
> * Can use an edit distance > 2 for fuzzy query via SlowFuzzyQuery (beware of 
> potential performance issues!).
> Trivial additions:
> * Can specify prefix length in fuzzy queries: jakarta~1,2 (edit distance =1, 
> prefix =2)
> * Can specify Optimal String Alignment (OSA) vs. Levenshtein for distance 
> <= 2: jakarta~1 (OSA) vs. jakarta~>1 (Levenshtein)
> This parser can be very useful for concordance tasks (see also LUCENE-5317 
> and LUCENE-5318) and for analytical search.  
> Until LUCENE-2878 is closed, this might have a use for fans of SpanQuery.
> Most of the documentation is in the javadoc for SpanQueryParser.
> Any and all feedback is welcome.  Thank you.






[jira] [Commented] (SOLR-5426) org.apache.solr.common.SolrException; org.apache.solr.common.SolrException: org.apache.lucene.search.highlight.InvalidTokenOffsetsException: Token 0 exceeds length of pr

2014-02-06 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893709#comment-13893709
 ] 

Hoss Man commented on SOLR-5426:


Also: when submitting patches, it's really helpful if you can please generate 
the patch against the entire code base...

https://wiki.apache.org/solr/HowToContribute#Generating_a_patch

> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
>   at 
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
>   at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
>   at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
>   at 
> org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
>   at 
> org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.ja

[jira] [Commented] (SOLR-5423) CSV output doesn't include function field

2014-02-06 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893715#comment-13893715
 ] 

Hoss Man commented on SOLR-5423:


Hey Arun, some comments about your patch:

* please generate patches against the entire code base so they are easier to 
apply & review: https://wiki.apache.org/solr/HowToContribute#Generating_a_patch
* can you explain what's going on with the getOriginalNameForFunctionField 
function in your patch? ... it seems extremely sketchy.  if the problem is just 
that ValueSourceAugmenter's "getName" doesn't match the actual fieldname the 
transformer puts in the modified documents, we should fix that in 
ValueSourceAugmenter
* in general, the amount of instanceof checking going on in your patch seems 
really brittle, and i'm not exactly sure why it's necessary.  As I understand 
it, the CSVResponseWriter already loops over all of the field names in the 
documents to be returned to get the list of column names it should output -- so 
as long as the transformer logic is applied before that, then won't the field 
names be picked up automatically?

independent of any of the above questions about this patch, we shouldn't move 
forward with committing support for transformers to the csv response writer w/o 
having some tests that prove it works.
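The ordering described above can be sketched in plain Java (class and method names here are hypothetical, not the actual CSVResponseWriter internals): transform each document first, then derive the CSV columns from the union of the transformed documents' field names.

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.UnaryOperator;

// Hedged sketch of the column-collection ordering; names are hypothetical,
// not the real CSVResponseWriter internals.
final class CsvColumnSketch {
    // Derive CSV column names from the union of field names across documents,
    // applying the doc transformer FIRST so any fields it adds (e.g. a
    // function field like div(price,numpages)) show up as columns.
    static List<String> columnNames(List<Map<String, Object>> docs,
                                    UnaryOperator<Map<String, Object>> transformer) {
        Set<String> names = new LinkedHashSet<>();
        for (Map<String, Object> doc : docs) {
            names.addAll(transformer.apply(doc).keySet());
        }
        return new ArrayList<>(names);
    }
}
```

With this ordering there is no need for instanceof checks on the transformer: whatever fields it adds are picked up automatically when the column names are collected.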



> CSV output doesn't include function field
> -
>
> Key: SOLR-5423
> URL: https://issues.apache.org/jira/browse/SOLR-5423
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.4
>Reporter: James Wilson
> Attachments: CSVResponseWriter.java.patch
>
>
> Given a schema with 
>
>
>   
> the following query returns no rows:
> http://localhost:8983/solr/collection1/select?q=*%3A*&rows=30&fl=div(price%2Cnumpages)&wt=csv&indent=true
> However, setting wt=json or wt=xml, it works.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)




[jira] [Updated] (LUCENE-5436) RefrenceManager#accquire can result in infinite loop if manager resource is abused outside of the manager

2014-02-06 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-5436:


Attachment: LUCENE-5436.patch

here is a patch with a changes entry... I will commit shortly

> RefrenceManager#accquire can result in infinite loop if manager resource is 
> abused outside of the manager
> -
>
> Key: LUCENE-5436
> URL: https://issues.apache.org/jira/browse/LUCENE-5436
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 4.6.1, 5.0, 4.7
>Reporter: Simon Willnauer
> Fix For: 5.0, 4.7
>
> Attachments: LUCENE-5436.patch, LUCENE-5436.patch, LUCENE-5436.patch
>
>
> I think I found a bug that can cause the ReferenceManager to stick in an 
> infinite loop if the managed reference is decremented outside of the manager 
> without a corresponding increment. I think this is pretty bad since the 
> debugging of this is a mess and we should rather throw ISE instead.
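The failure mode in the description can be sketched in plain Java (a minimal, hypothetical model, not the real ReferenceManager or its managed resources), including the proposed fix of throwing IllegalStateException when the abuse is detected:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal, hypothetical model of a ref-counted resource (not the real
// Lucene classes). A misbehaving caller that calls decRef() without a
// matching incRef() drives the count to 0 while the manager still points
// at the resource.
class RefCounted {
    private final AtomicInteger refCount = new AtomicInteger(1);

    boolean tryIncRef() {
        int count;
        while ((count = refCount.get()) > 0) {
            if (refCount.compareAndSet(count, count + 1)) {
                return true;
            }
        }
        return false; // count hit 0: the resource is effectively closed
    }

    void decRef() {
        refCount.decrementAndGet();
    }
}

class Manager {
    volatile RefCounted current = new RefCounted();

    RefCounted acquire() {
        do {
            RefCounted ref = current;
            if (ref.tryIncRef()) {
                return ref;
            }
            // Without this check the loop spins forever once the count has
            // been decremented to 0 outside the manager: tryIncRef() always
            // fails and 'current' never changes. Throwing ISE (as the issue
            // proposes) surfaces the abuse instead of hanging.
            if (current == ref) {
                throw new IllegalStateException(
                    "reference count decremented outside the manager");
            }
        } while (true);
    }
}
```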






[jira] [Commented] (LUCENE-5436) RefrenceManager#accquire can result in infinite loop if manager resource is abused outside of the manager

2014-02-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893728#comment-13893728
 ] 

ASF subversion and git services commented on LUCENE-5436:
-

Commit 1565429 from [~simonw] in branch 'dev/trunk'
[ https://svn.apache.org/r1565429 ]

LUCENE-5436: RefrenceManager#accquire can result in infinite loop if manager 
resource is abused outside of the manager

> RefrenceManager#accquire can result in infinite loop if manager resource is 
> abused outside of the manager
> -
>
> Key: LUCENE-5436
> URL: https://issues.apache.org/jira/browse/LUCENE-5436
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 4.6.1, 5.0, 4.7
>Reporter: Simon Willnauer
> Fix For: 5.0, 4.7
>
> Attachments: LUCENE-5436.patch, LUCENE-5436.patch, LUCENE-5436.patch
>
>
> I think I found a bug that can cause the ReferenceManager to stick in an 
> infinite loop if the managed reference is decremented outside of the manager 
> without a corresponding increment. I think this is pretty bad since the 
> debugging of this is a mess and we should rather throw ISE instead.






[jira] [Commented] (LUCENE-5436) RefrenceManager#accquire can result in infinite loop if manager resource is abused outside of the manager

2014-02-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893739#comment-13893739
 ] 

ASF subversion and git services commented on LUCENE-5436:
-

Commit 1565430 from [~simonw] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1565430 ]

LUCENE-5436: RefrenceManager#accquire can result in infinite loop if manager 
resource is abused outside of the manager

> RefrenceManager#accquire can result in infinite loop if manager resource is 
> abused outside of the manager
> -
>
> Key: LUCENE-5436
> URL: https://issues.apache.org/jira/browse/LUCENE-5436
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 4.6.1, 5.0, 4.7
>Reporter: Simon Willnauer
> Fix For: 5.0, 4.7
>
> Attachments: LUCENE-5436.patch, LUCENE-5436.patch, LUCENE-5436.patch
>
>
> I think I found a bug that can cause the ReferenceManager to stick in an 
> infinite loop if the managed reference is decremented outside of the manager 
> without a corresponding increment. I think this is pretty bad since the 
> debugging of this is a mess and we should rather throw ISE instead.






[jira] [Resolved] (LUCENE-5436) RefrenceManager#accquire can result in infinite loop if manager resource is abused outside of the manager

2014-02-06 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer resolved LUCENE-5436.
-

Resolution: Fixed
  Assignee: Simon Willnauer

committed to 4x and trunk

> RefrenceManager#accquire can result in infinite loop if manager resource is 
> abused outside of the manager
> -
>
> Key: LUCENE-5436
> URL: https://issues.apache.org/jira/browse/LUCENE-5436
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 4.6.1, 5.0, 4.7
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
> Fix For: 5.0, 4.7
>
> Attachments: LUCENE-5436.patch, LUCENE-5436.patch, LUCENE-5436.patch
>
>
> I think I found a bug that can cause the ReferenceManager to stick in an 
> infinite loop if the managed reference is decremented outside of the manager 
> without a corresponding increment. I think this is pretty bad since the 
> debugging of this is a mess and we should rather throw ISE instead.






[JENKINS] Lucene-4x-Linux-Java6-64-test-only - Build # 12118 - Failure!

2014-02-06 Thread builder
Build: builds.flonkings.com/job/Lucene-4x-Linux-Java6-64-test-only/12118/

1 tests failed.
REGRESSION:  org.apache.lucene.index.TestNRTThreads.testNRTThreads

Error Message:
Captured an uncaught exception in thread: Thread[id=354, name=Lucene Merge 
Thread #0, state=RUNNABLE, group=TGRP-TestNRTThreads]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=354, name=Lucene Merge Thread #0, 
state=RUNNABLE, group=TGRP-TestNRTThreads]
at 
__randomizedtesting.SeedInfo.seed([D9A45B1F2927D74E:427D4F0468DCC125]:0)
Caused by: org.apache.lucene.index.MergePolicy$MergeException: 
java.lang.AssertionError: MockDirectoryWrapper: file "_c.cfs" is still open: 
cannot delete
at __randomizedtesting.SeedInfo.seed([D9A45B1F2927D74E]:0)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:545)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:518)
Caused by: java.lang.AssertionError: MockDirectoryWrapper: file "_c.cfs" is 
still open: cannot delete
at 
org.apache.lucene.store.MockDirectoryWrapper.deleteFile(MockDirectoryWrapper.java:433)
at 
org.apache.lucene.store.MockDirectoryWrapper.deleteFile(MockDirectoryWrapper.java:392)
at 
org.apache.lucene.index.IndexFileDeleter.deleteFile(IndexFileDeleter.java:584)
at 
org.apache.lucene.index.IndexFileDeleter.refresh(IndexFileDeleter.java:349)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3721)
at 
org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
Caused by: java.lang.RuntimeException: unclosed IndexInput: _c.cfs
at 
org.apache.lucene.store.MockDirectoryWrapper.addFileHandle(MockDirectoryWrapper.java:534)
at 
org.apache.lucene.store.MockDirectoryWrapper$1.openSlice(MockDirectoryWrapper.java:953)
at 
org.apache.lucene.store.CompoundFileDirectory.openInput(CompoundFileDirectory.java:271)
at 
org.apache.lucene.codecs.BlockTreeTermsReader.(BlockTreeTermsReader.java:121)
at 
org.apache.lucene.codecs.lucene41.Lucene41PostingsFormat.fieldsProducer(Lucene41PostingsFormat.java:437)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.(PerFieldPostingsFormat.java:195)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:244)
at 
org.apache.lucene.index.SegmentCoreReaders.(SegmentCoreReaders.java:116)
at org.apache.lucene.index.SegmentReader.(SegmentReader.java:95)
at 
org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:141)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4236)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3706)
... 2 more




Build Log:
[...truncated 881 lines...]
   [junit4] Suite: org.apache.lucene.index.TestNRTThreads
   [junit4]   2> 06.02.2014 22:59:18 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2> WARNUNG: Uncaught exception in thread: Thread[Lucene Merge 
Thread #0,6,TGRP-TestNRTThreads]
   [junit4]   2> org.apache.lucene.index.MergePolicy$MergeException: 
java.lang.AssertionError: MockDirectoryWrapper: file "_c.cfs" is still open: 
cannot delete
   [junit4]   2>at 
__randomizedtesting.SeedInfo.seed([D9A45B1F2927D74E]:0)
   [junit4]   2>at 
org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:545)
   [junit4]   2>at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:518)
   [junit4]   2> Caused by: java.lang.AssertionError: MockDirectoryWrapper: 
file "_c.cfs" is still open: cannot delete
   [junit4]   2>at 
org.apache.lucene.store.MockDirectoryWrapper.deleteFile(MockDirectoryWrapper.java:433)
   [junit4]   2>at 
org.apache.lucene.store.MockDirectoryWrapper.deleteFile(MockDirectoryWrapper.java:392)
   [junit4]   2>at 
org.apache.lucene.index.IndexFileDeleter.deleteFile(IndexFileDeleter.java:584)
   [junit4]   2>at 
org.apache.lucene.index.IndexFileDeleter.refresh(IndexFileDeleter.java:349)
   [junit4]   2>at 
org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3721)
   [junit4]   2>at 
org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
   [junit4]   2>at 
org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
   [junit4]   2> Caused by: java.lang.RuntimeException: unclosed IndexInput: 
_c.cfs
   [junit4]   2>at 
org.apache.lucene.store.MockDirectoryWrapper.addFileHandle(MockDirectory

Re: [JENKINS] Lucene-4x-Linux-Java6-64-test-only - Build # 12118 - Failure!

2014-02-06 Thread Mark Miller
Looking - assume there is a std index reader being opened in this test as well.

- Mark

http://about.me/markrmiller

On Feb 6, 2014, at 4:01 PM, buil...@flonkings.com wrote:

> Build: builds.flonkings.com/job/Lucene-4x-Linux-Java6-64-test-only/12118/
> 
> 1 tests failed.
> REGRESSION:  org.apache.lucene.index.TestNRTThreads.testNRTThreads
> 
> Error Message:
> Captured an uncaught exception in thread: Thread[id=354, name=Lucene Merge 
> Thread #0, state=RUNNABLE, group=TGRP-TestNRTThreads]
> 
> Stack Trace:
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=354, name=Lucene Merge Thread #0, 
> state=RUNNABLE, group=TGRP-TestNRTThreads]
>   at 
> __randomizedtesting.SeedInfo.seed([D9A45B1F2927D74E:427D4F0468DCC125]:0)
> Caused by: org.apache.lucene.index.MergePolicy$MergeException: 
> java.lang.AssertionError: MockDirectoryWrapper: file "_c.cfs" is still open: 
> cannot delete
>   at __randomizedtesting.SeedInfo.seed([D9A45B1F2927D74E]:0)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:545)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:518)
> Caused by: java.lang.AssertionError: MockDirectoryWrapper: file "_c.cfs" is 
> still open: cannot delete
>   at 
> org.apache.lucene.store.MockDirectoryWrapper.deleteFile(MockDirectoryWrapper.java:433)
>   at 
> org.apache.lucene.store.MockDirectoryWrapper.deleteFile(MockDirectoryWrapper.java:392)
>   at 
> org.apache.lucene.index.IndexFileDeleter.deleteFile(IndexFileDeleter.java:584)
>   at 
> org.apache.lucene.index.IndexFileDeleter.refresh(IndexFileDeleter.java:349)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3721)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
> Caused by: java.lang.RuntimeException: unclosed IndexInput: _c.cfs
>   at 
> org.apache.lucene.store.MockDirectoryWrapper.addFileHandle(MockDirectoryWrapper.java:534)
>   at 
> org.apache.lucene.store.MockDirectoryWrapper$1.openSlice(MockDirectoryWrapper.java:953)
>   at 
> org.apache.lucene.store.CompoundFileDirectory.openInput(CompoundFileDirectory.java:271)
>   at 
> org.apache.lucene.codecs.BlockTreeTermsReader.(BlockTreeTermsReader.java:121)
>   at 
> org.apache.lucene.codecs.lucene41.Lucene41PostingsFormat.fieldsProducer(Lucene41PostingsFormat.java:437)
>   at 
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.(PerFieldPostingsFormat.java:195)
>   at 
> org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:244)
>   at 
> org.apache.lucene.index.SegmentCoreReaders.(SegmentCoreReaders.java:116)
>   at org.apache.lucene.index.SegmentReader.(SegmentReader.java:95)
>   at 
> org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:141)
>   at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4236)
>   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3706)
>   ... 2 more
> 
> 
> 
> 
> Build Log:
> [...truncated 881 lines...]
>   [junit4] Suite: org.apache.lucene.index.TestNRTThreads
>   [junit4]   2> 06.02.2014 22:59:18 
> com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
>  uncaughtException
>   [junit4]   2> WARNUNG: Uncaught exception in thread: Thread[Lucene Merge 
> Thread #0,6,TGRP-TestNRTThreads]
>   [junit4]   2> org.apache.lucene.index.MergePolicy$MergeException: 
> java.lang.AssertionError: MockDirectoryWrapper: file "_c.cfs" is still open: 
> cannot delete
>   [junit4]   2>   at 
> __randomizedtesting.SeedInfo.seed([D9A45B1F2927D74E]:0)
>   [junit4]   2>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:545)
>   [junit4]   2>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:518)
>   [junit4]   2> Caused by: java.lang.AssertionError: MockDirectoryWrapper: 
> file "_c.cfs" is still open: cannot delete
>   [junit4]   2>   at 
> org.apache.lucene.store.MockDirectoryWrapper.deleteFile(MockDirectoryWrapper.java:433)
>   [junit4]   2>   at 
> org.apache.lucene.store.MockDirectoryWrapper.deleteFile(MockDirectoryWrapper.java:392)
>   [junit4]   2>   at 
> org.apache.lucene.index.IndexFileDeleter.deleteFile(IndexFileDeleter.java:584)
>   [junit4]   2>   at 
> org.apache.lucene.index.IndexFileDeleter.refresh(IndexFileDeleter.java:349)
>   [junit4]   2>   at 
> org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3721)
>   [junit4]   2>   at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
>   [junit4]  

Lucene 4.6.1 changes section

2014-02-06 Thread Simon Willnauer
hey folks,

I don't see the 4.6.1 section in the 4.x branch. How do we handle
this? Since we have a Lucene 4.3.1 section, we should also have a
Lucene 4.6.1 section there, no?

simon




[jira] [Updated] (LUCENE-5437) ASCIIFoldingFilter that emits both unfolded and folded tokens

2014-02-06 Thread Nik Everett (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nik Everett updated LUCENE-5437:


Priority: Minor  (was: Major)

> ASCIIFoldingFilter that emits both unfolded and folded tokens
> -
>
> Key: LUCENE-5437
> URL: https://issues.apache.org/jira/browse/LUCENE-5437
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Nik Everett
>Priority: Minor
>
> I've found myself wanting an ASCIIFoldingFilter that emits both the folded 
> tokens and the original, unfolded tokens.
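The "emit both" idea can be sketched with stdlib Unicode normalization standing in for Lucene's folding table (an approximation: NFD plus stripping combining marks handles accents, not the full ASCIIFoldingFilter mapping; the class and method names are hypothetical):

```java
import java.text.Normalizer;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of "emit both": when folding changes a token, emit the
// folded form plus the original at the same position (position increment 0 in
// TokenStream terms). java.text.Normalizer approximates the folding table.
final class FoldBothSketch {
    static List<String> tokenVariants(String token) {
        String folded = Normalizer.normalize(token, Normalizer.Form.NFD)
                                  .replaceAll("\\p{M}+", "");
        List<String> out = new ArrayList<>();
        out.add(folded);
        if (!folded.equals(token)) {
            out.add(token); // preserve the unfolded original
        }
        return out;
    }
}
```

In an actual TokenFilter the same effect would be achieved by capturing the stream state before folding and replaying the original token with position increment 0.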






[jira] [Created] (LUCENE-5437) ASCIIFoldingFilter that emits both unfolded and folded tokens

2014-02-06 Thread Nik Everett (JIRA)
Nik Everett created LUCENE-5437:
---

 Summary: ASCIIFoldingFilter that emits both unfolded and folded 
tokens
 Key: LUCENE-5437
 URL: https://issues.apache.org/jira/browse/LUCENE-5437
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Nik Everett


I've found myself wanting an ASCIIFoldingFilter that emits both the folded 
tokens and the original, unfolded tokens.






[jira] [Updated] (LUCENE-5437) ASCIIFoldingFilter that emits both unfolded and folded tokens

2014-02-06 Thread Nik Everett (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nik Everett updated LUCENE-5437:


Attachment: LUCENE-5437.patch

Sorry for moving so much code.

> ASCIIFoldingFilter that emits both unfolded and folded tokens
> -
>
> Key: LUCENE-5437
> URL: https://issues.apache.org/jira/browse/LUCENE-5437
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Nik Everett
>Priority: Minor
> Attachments: LUCENE-5437.patch
>
>
> I've found myself wanting an ASCIIFoldingFilter that emits both the folded 
> tokens and the original, unfolded tokens.






[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 1303 - Failure!

2014-02-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1303/
Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 10031 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test/temp/junit4-J0-20140206_224802_871.syserr
   [junit4] >>> JVM J0: stderr (verbatim) 
   [junit4] java(213,0x13b0ce000) malloc: *** error for object 
0x2013b0bca80: pointer being freed was not allocated
   [junit4] *** set a breakpoint in malloc_error_break to debug
   [junit4] <<< JVM J0: EOF 

[...truncated 1 lines...]
   [junit4] ERROR: JVM J0 ended with an exception, command line: 
/Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/Contents/Home/jre/bin/java 
-XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/heapdumps 
-Dtests.prefix=tests -Dtests.seed=73445E381E1A5417 -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=5.0 
-Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true 
-Dtests.asserts.gracious=false -Dtests.multiplier=1 -DtempDir=. 
-Djava.io.tmpdir=. 
-Djunit4.tempDir=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test/temp
 
-Dclover.db.dir=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/clover/db
 -Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Djava.security.policy=/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/tools/junit4/tests.policy
 -Dlucene.version=5.0-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.disableHdfs=true -Dfile.encoding=US-ASCII -classpath 
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/classes/test:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-test-framework/classes/java:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/test-framework/lib/junit4-ant-2.0.13.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/test-files:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/test-framework/classes/java:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/codecs/classes/java:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-solrj/classes/java:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/solr-core/classes/java:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/analysis/common/lucene-analyzers-common-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/analysis/kuromoji/lucene-analyzers-kuromoji-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/analysis/phonetic/lucene-analyzers-phonetic-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/codecs/lucene-codecs-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/highlighter/lucene-highlighter-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/memory/lucene-memory-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/misc/lucene-misc-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/spatial/lucene-spatial-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/expressions/lucene-expressions-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/suggest/lucene-suggest-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/grouping/lucene-grouping-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/queries/lucene-queries-5.0-SNAPSHOT.ja
r:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/queryparser/lucene-queryparser-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/join/lucene-join-5.0-SNAPSHOT.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/core/lib/antlr-runtime-3.5.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/core/lib/asm-4.1.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/core/lib/asm-commons-4.1.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/core/lib/commons-cli-1.2.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/core/lib/commons-codec-1.7.jar:/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/core/lib/commons-configuration-1.6.jar:/Users/jenkins/workspace/Lucene-Solr-tru

[jira] [Assigned] (SOLR-5659) Ignore or throw proper error message for bad delete containing bad composite ID

2014-02-06 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta reassigned SOLR-5659:
--

Assignee: Anshum Gupta  (was: Shalin Shekhar Mangar)

> Ignore or throw proper error message for bad delete containing bad composite 
> ID
> ---
>
> Key: SOLR-5659
> URL: https://issues.apache.org/jira/browse/SOLR-5659
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.0
> Environment: 5.0-SNAPSHOT 1480985:1559676M - markus - 2014-01-20 
> 13:48:08
>Reporter: Markus Jelsma
>Assignee: Anshum Gupta
> Fix For: 5.0
>
>
> The following error is thrown when sending deleteById via SolrJ with an ID 
> ending with an exclamation mark; it is also the case for deletes by id via 
> the URL. For some curious reason, delete by query using the id field does not 
> fail, but I would expect the same behaviour.
> * fails: /solr/update?commit=true&stream.body=a!
> * ok: 
> /solr/update?commit=true&stream.body=id:a!
> {code}
> 2014-01-22 15:32:48,826 ERROR [solr.core.SolrCore] - [http-8080-exec-5] - : 
> java.lang.ArrayIndexOutOfBoundsException: 1
> at 
> org.apache.solr.common.cloud.CompositeIdRouter$KeyParser.getHash(CompositeIdRouter.java:291)
> at 
> org.apache.solr.common.cloud.CompositeIdRouter.sliceHash(CompositeIdRouter.java:58)
> at 
> org.apache.solr.common.cloud.HashBasedRouter.getTargetSlice(HashBasedRouter.java:33)
> at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.setupRequest(DistributedUpdateProcessor.java:218)
> at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processDelete(DistributedUpdateProcessor.java:961)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processDelete(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.handler.loader.XMLLoader.processDelete(XMLLoader.java:347)
> at 
> org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:278)
> at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:174)
> at 
> org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:92)
> at 
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1915)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:785)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:203)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
> at 
> org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
> at 
> org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:744)
> at 
> org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2282)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:724) 
> {code}
> See also: 
> http://lucene.472066.n3.nabble.com/AIOOBException-on-trunk-since-21st-or-22nd-build-td4112753.html



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5659) Ignore or throw proper error message for bad delete containing bad composite ID

2014-02-06 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-5659:
---

Attachment: SOLR-5659.patch

Fix and a test which fails without the patch.

> Ignore or throw proper error message for bad delete containing bad composite 
> ID
> ---
>
> Key: SOLR-5659
> URL: https://issues.apache.org/jira/browse/SOLR-5659
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.0
> Environment: 5.0-SNAPSHOT 1480985:1559676M - markus - 2014-01-20 
> 13:48:08
>Reporter: Markus Jelsma
>Assignee: Anshum Gupta
> Fix For: 5.0
>
> Attachments: SOLR-5659.patch
>
>
> The following error is thrown when sending deleteById via SolrJ with an ID 
> ending with an exclamation mark; it is also the case for deletes by ID via 
> the URL. For some curious reason, delete by query using the id field does not 
> fail, but I would expect the same behaviour.
> * fails: /solr/update?commit=true&stream.body=a!
> * ok: 
> /solr/update?commit=true&stream.body=id:a!
> {code}
> 2014-01-22 15:32:48,826 ERROR [solr.core.SolrCore] - [http-8080-exec-5] - : 
> java.lang.ArrayIndexOutOfBoundsException: 1
> at 
> org.apache.solr.common.cloud.CompositeIdRouter$KeyParser.getHash(CompositeIdRouter.java:291)
> at 
> org.apache.solr.common.cloud.CompositeIdRouter.sliceHash(CompositeIdRouter.java:58)
> at 
> org.apache.solr.common.cloud.HashBasedRouter.getTargetSlice(HashBasedRouter.java:33)
> at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.setupRequest(DistributedUpdateProcessor.java:218)
> at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processDelete(DistributedUpdateProcessor.java:961)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processDelete(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.handler.loader.XMLLoader.processDelete(XMLLoader.java:347)
> at 
> org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:278)
> at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:174)
> at 
> org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:92)
> at 
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1915)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:785)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:203)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
> at 
> org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
> at 
> org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:744)
> at 
> org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2282)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:724) 
> {code}
> See also: 
> http://lucene.472066.n3.nabble.com/AIOOBException-on-trunk-since-21st-or-22nd-build-td4112753.html
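
The AIOOBE above comes from the composite-id parser assuming something follows every '!'. A standalone sketch (not Solr's actual CompositeIdRouter.KeyParser) reproduces the failure mode: Java's String.split drops trailing empty strings, so "a!" yields a single-element array and indexing parts[1] is out of bounds.

```java
// Sketch of the trailing-'!' failure mode; hashOfSecondPart is a
// hypothetical stand-in for the router's key parsing, not Solr code.
public class CompositeIdSketch {
    static int hashOfSecondPart(String id) {
        String[] parts = id.split("!");
        return parts[1].hashCode();   // AIOOBE when id ends with '!'
    }

    public static void main(String[] args) {
        System.out.println(hashOfSecondPart("shard!doc"));  // works
        System.out.println(hashOfSecondPart("a!"));         // throws AIOOBE
    }
}
```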






[jira] [Assigned] (SOLR-5644) SplitShard does not handle not finding a shard leader well.

2014-02-06 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta reassigned SOLR-5644:
--

Assignee: Anshum Gupta

> SplitShard does not handle not finding a shard leader well.
> ---
>
> Key: SOLR-5644
> URL: https://issues.apache.org/jira/browse/SOLR-5644
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Anshum Gupta
>Priority: Minor
> Fix For: 5.0, 4.7
>
> Attachments: SOLR-5644.patch
>
>
> In OverseerCollectionProcessor:
> // find the leader for the shard
> Replica parentShardLeader = clusterState.getLeader(collectionName, slice);
> This returns null if there is no current leader and the following code does 
> not deal with that case and instead NPE's.






[jira] [Updated] (SOLR-5644) SplitShard does not handle not finding a shard leader well.

2014-02-06 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-5644:
---

Attachment: SOLR-5644.patch

A basic fix that retries for 10 seconds and throws an exception if it still 
doesn't have a leader.
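
The retry-then-fail approach described above can be sketched as follows; `LeaderLookup`, the 50ms poll interval, and the exception type are illustrative assumptions, not Solr's actual API.

```java
import java.util.concurrent.TimeUnit;

// Hedged sketch of "retry for a while, then throw if still no leader".
public class LeaderRetrySketch {
    interface LeaderLookup { Object getLeader(); }  // stand-in for clusterState

    static Object getLeaderWithRetry(LeaderLookup lookup, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            Object leader = lookup.getLeader();
            if (leader != null) return leader;   // leader elected
            TimeUnit.MILLISECONDS.sleep(50);     // back off and re-check
        }
        throw new IllegalStateException(
                "No leader found within " + timeoutMs + "ms");
    }
}
```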

> SplitShard does not handle not finding a shard leader well.
> ---
>
> Key: SOLR-5644
> URL: https://issues.apache.org/jira/browse/SOLR-5644
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Anshum Gupta
>Priority: Minor
> Fix For: 5.0, 4.7
>
> Attachments: SOLR-5644.patch
>
>
> In OverseerCollectionProcessor:
> // find the leader for the shard
> Replica parentShardLeader = clusterState.getLeader(collectionName, slice);
> This returns null if there is no current leader and the following code does 
> not deal with that case and instead NPE's.






[jira] [Commented] (SOLR-5644) SplitShard does not handle not finding a shard leader well.

2014-02-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893968#comment-13893968
 ] 

Mark Miller commented on SOLR-5644:
---

I don't think that ClusterState object will ever be updated?

What about changing the splitShard method to take a ZkStateReader and use 
ZkStateReader#getLeaderRetry?

> SplitShard does not handle not finding a shard leader well.
> ---
>
> Key: SOLR-5644
> URL: https://issues.apache.org/jira/browse/SOLR-5644
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Anshum Gupta
>Priority: Minor
> Fix For: 5.0, 4.7
>
> Attachments: SOLR-5644.patch
>
>
> In OverseerCollectionProcessor:
> // find the leader for the shard
> Replica parentShardLeader = clusterState.getLeader(collectionName, slice);
> This returns null if there is no current leader and the following code does 
> not deal with that case and instead NPE's.






[jira] [Commented] (SOLR-4037) Continuous Ping query caused exception: java.util.concurrent.RejectedExecutionException

2014-02-06 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893977#comment-13893977
 ] 

Anshum Gupta commented on SOLR-4037:


Does this issue still exist?

From what I can think of, it must be the ThreadPoolExecutor that has 
exhausted all its threads and is just rejecting any subsequent requests.
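
That failure mode is easy to reproduce in isolation: once a ThreadPoolExecutor's workers and queue are both full, the default AbortPolicy rejects further submissions with RejectedExecutionException. This minimal demonstration is unrelated to Solr's actual executor configuration.

```java
import java.util.concurrent.*;

// Saturate a 1-thread, 1-slot-queue executor, then submit one more task.
public class RejectionDemo {
    static boolean demonstrateRejection() throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0, TimeUnit.SECONDS, new ArrayBlockingQueue<>(1));
        CountDownLatch release = new CountDownLatch(1);
        pool.execute(() -> {                 // occupies the single worker
            try { release.await(); } catch (InterruptedException ignored) {}
        });
        pool.execute(() -> {});              // fills the queue
        boolean rejected = false;
        try {
            pool.execute(() -> {});          // no capacity left: AbortPolicy
        } catch (RejectedExecutionException e) {
            rejected = true;
        }
        release.countDown();
        pool.shutdown();
        return rejected;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("rejected: " + demonstrateRejection());
    }
}
```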

> Continuous Ping query caused exception: 
> java.util.concurrent.RejectedExecutionException
> ---
>
> Key: SOLR-4037
> URL: https://issues.apache.org/jira/browse/SOLR-4037
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.0
> Environment: 5.0-SNAPSHOT 1366361:1404534M - markus - 2012-11-01 
> 12:37:38
> Debian Squeeze, Tomcat 6, Sun Java 6, 10 nodes, 10 shards, rep. factor 2.
>Reporter: Markus Jelsma
> Fix For: 4.7
>
>
> See: 
> http://lucene.472066.n3.nabble.com/Continuous-Ping-query-caused-exception-java-util-concurrent-RejectedExecutionException-td4017470.html
> Using this week's trunk we sometime see nodes entering a some funky state 
> where it continuously reports exceptions. Replication and query handling is 
> still possible but there is an increase in CPU time:
> {code}
> 2012-11-01 09:24:28,337 INFO [solr.core.SolrCore] - [http-8080-exec-4] - : 
> [openindex_f] webapp=/solr path=/admin/ping params={} status=500 QTime=21
> 2012-11-01 09:24:28,337 ERROR [solr.core.SolrCore] - [http-8080-exec-4] - : 
> org.apache.solr.common.SolrException: Ping query caused exception: 
> java.util.concurrent.RejectedExecutionException
> at 
> org.apache.solr.handler.PingRequestHandler.handlePing(PingRequestHandler.java:259)
> at 
> org.apache.solr.handler.PingRequestHandler.handleRequestBody(PingRequestHandler.java:207)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:144)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1830)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:476)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:276)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
> at 
> org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
> at 
> org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:744)
> at 
> org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2274)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: org.apache.solr.common.SolrException: 
> java.util.concurrent.RejectedExecutionException
> at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1674)
> at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1330)
> at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1265)
> at 
> org.apache.solr.request.SolrQueryRequestBase.getSearcher(SolrQueryRequestBase.java:88)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:214)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:206)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:144)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1830)
> at 
> org.apache.solr.handler.PingRequestHandler.handlePing(PingRequestHandler.java:250)
> ... 19 more
> Caused by: java.util.concurrent.RejectedExecutionException
> at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:1768)
> at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:767)
> at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:6

[jira] [Updated] (SOLR-5700) Improve error handling of remote queries (proxied requests)

2014-02-06 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated SOLR-5700:
-

Attachment: SOLR-5700v2.patch

Here's another rev of this patch.

This fixes the test failure and calls abort if the remote query is not 
successful.

> Improve error handling of remote queries (proxied requests)
> ---
>
> Key: SOLR-5700
> URL: https://issues.apache.org/jira/browse/SOLR-5700
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Gregory Chanan
>Assignee: Mark Miller
> Fix For: 5.0, 4.7
>
> Attachments: SOLR-5700.patch, SOLR-5700v2.patch
>
>
> The current remoteQuery code in SolrDispatchFilter yields error messages like 
> the following:
> org.apache.solr.servlet.SolrDispatchFilter: 
> null:org.apache.solr.common.SolrException: Error trying to proxy request for 
> url: http://localhost:8983/solr/myCollection/update
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.remoteQuery(SolrDispatchFilter.java:580)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:288)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:169)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>   at 
> org.apache.solr.servlet.ProxyUserFilter.doFilter(ProxyUserFilter.java:241)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>   at 
> org.apache.solr.servlet.SolrHadoopAuthenticationFilter$2.doFilter(SolrHadoopAuthenticationFilter.java:140)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:384)
>   at 
> org.apache.solr.servlet.SolrHadoopAuthenticationFilter.doFilter(SolrHadoopAuthenticationFilter.java:145)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>   at 
> org.apache.solr.servlet.HostnameFilter.doFilter(HostnameFilter.java:86)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>   at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
>   at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
>   at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
>   at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
>   at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>   at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
>   at 
> org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:861)
>   at 
> org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:606)
>   at 
> org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
>   at java.lang.Thread.run(Thread.java:724)
> Caused by: java.io.IOException: Server returned HTTP response code: 401 for 
> URL: 
> http://search-testing-c4-secure-4.ent.cloudera.com:8983/solr/sentryCollection/update?stream.body=%3Cadd%3E%3Cdoc%3E%3Cfield+name%3D%22id%22%3E1383855038349doc1%3C%2Ffield%3E%3Cfield+name%3D%22description%22%3Efirst+test+document+1383855038349%3C%2Ffield%3E%3C%2Fdoc%3E%3C%2Fadd%3E&doAs=user1
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> sun.net.www.protocol.http.HttpURLConnection$6.run(HttpURLConnection.java:1674)
>   at 
> sun.net.www.protocol.http.HttpURLConnection$6.run(HttpURLConnection.java:1672)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getChainedException(HttpURLConnection.java:1670)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1243)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.r

[jira] [Commented] (LUCENE-5434) NRT support for file systems that do no have delete on last close or cannot delete while referenced semantics.

2014-02-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893986#comment-13893986
 ] 

ASF subversion and git services commented on LUCENE-5434:
-

Commit 1565485 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1565485 ]

LUCENE-5434: This test method uses a non nrt reader.

> NRT support for file systems that do no have delete on last close or cannot 
> delete while referenced semantics.
> --
>
> Key: LUCENE-5434
> URL: https://issues.apache.org/jira/browse/LUCENE-5434
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.0, 4.7
>
> Attachments: LUCENE-5434.patch, LUCENE-5434.patch, LUCENE-5434.patch, 
> LUCENE-5434.patch
>
>
> See SOLR-5693 and our HDFS support - for something like HDFS to work with 
> NRT, we need an ability for near realtime readers to hold references to their 
> files to prevent deletes.






[jira] [Commented] (SOLR-5644) SplitShard does not handle not finding a shard leader well.

2014-02-06 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893990#comment-13893990
 ] 

Anshum Gupta commented on SOLR-5644:


My bad! Had that, removed that.

Will just put up another patch.

> SplitShard does not handle not finding a shard leader well.
> ---
>
> Key: SOLR-5644
> URL: https://issues.apache.org/jira/browse/SOLR-5644
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Anshum Gupta
>Priority: Minor
> Fix For: 5.0, 4.7
>
> Attachments: SOLR-5644.patch
>
>
> In OverseerCollectionProcessor:
> // find the leader for the shard
> Replica parentShardLeader = clusterState.getLeader(collectionName, slice);
> This returns null if there is no current leader and the following code does 
> not deal with that case and instead NPE's.






[jira] [Commented] (LUCENE-5434) NRT support for file systems that do no have delete on last close or cannot delete while referenced semantics.

2014-02-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893992#comment-13893992
 ] 

ASF subversion and git services commented on LUCENE-5434:
-

Commit 1565486 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1565486 ]

LUCENE-5434: This test method uses a non nrt reader.

> NRT support for file systems that do no have delete on last close or cannot 
> delete while referenced semantics.
> --
>
> Key: LUCENE-5434
> URL: https://issues.apache.org/jira/browse/LUCENE-5434
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.0, 4.7
>
> Attachments: LUCENE-5434.patch, LUCENE-5434.patch, LUCENE-5434.patch, 
> LUCENE-5434.patch
>
>
> See SOLR-5693 and our HDFS support - for something like HDFS to work with 
> NRT, we need an ability for near realtime readers to hold references to their 
> files to prevent deletes.






[jira] [Created] (SOLR-5703) User Guide docs on using cursors & deep paging

2014-02-06 Thread Hoss Man (JIRA)
Hoss Man created SOLR-5703:
--

 Summary: User Guide docs on using cursors & deep paging
 Key: SOLR-5703
 URL: https://issues.apache.org/jira/browse/SOLR-5703
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
Assignee: Hoss Man


SOLR-5463 and cursorMark need to be documented in the user guide -- beyond just 
simple usage, we need to explain why/how it's distinct from regular 
pagination.






[jira] [Updated] (SOLR-5703) User Guide docs on using cursors & deep paging

2014-02-06 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-5703:
---

Attachment: pagination.user.guide.txt

Attaching a text file with the verbiage I've come up with.

The basic idea I had was to build up a new page in the doc (still not sure 
where it should live) where we first describe how basic pagination can be 
implemented for apps with UIs using start+rows, then talk about when/why "deep 
pagination" using start+rows is problematic, and then introduce cursor-based 
pagination.  I also included info in both sections about how index updates 
affect multiple requests when using each type of pagination.

I would appreciate feedback and any suggestions folks have for how a new page 
like this should fit into the current ref guide structure.

(Note: the txt file is in my own little pseudo-markup since Confluence doesn't 
have a wiki syntax anymore.)
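
The cursor loop described above can be sketched against a toy in-memory "index"; the integer mark here stands in for Solr's opaque cursorMark string (which starts at `*`), and fetching stops when the returned mark repeats.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of cursor-based pagination: each "request" returns a page
// plus a next mark; a repeated mark signals that all results are consumed.
public class CursorSketch {
    static List<List<Integer>> fetchAll(List<Integer> docs, int rows) {
        List<List<Integer>> pages = new ArrayList<>();
        int mark = 0;                           // analogous to cursorMark=*
        while (true) {
            int end = Math.min(mark + rows, docs.size());
            pages.add(docs.subList(mark, end)); // one "page" of results
            int nextMark = end;                 // analogous to nextCursorMark
            if (nextMark == mark) break;        // mark repeated: done
            mark = nextMark;
        }
        return pages;
    }
}
```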

> User Guide docs on using cursors & deep paging
> --
>
> Key: SOLR-5703
> URL: https://issues.apache.org/jira/browse/SOLR-5703
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: 5.0, 4.7
>
> Attachments: pagination.user.guide.txt
>
>
> SOLR-5463 and cursorMark need documented in the user guide -- beyond just 
> simple usage, we need to explain the why/how it's distinct from regular 
> pagination.






[jira] [Updated] (SOLR-5644) SplitShard does not handle not finding a shard leader well.

2014-02-06 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-5644:
---

Attachment: SOLR-5644.patch

Using zkStateReader.getLeaderRetry(). This should get and use the updated 
clusterstate.

Any suggestions on keeping a 10s timeout for getting a leader, or reducing it to 
the default for getLeaderRetry()?


> SplitShard does not handle not finding a shard leader well.
> ---
>
> Key: SOLR-5644
> URL: https://issues.apache.org/jira/browse/SOLR-5644
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Anshum Gupta
>Priority: Minor
> Fix For: 5.0, 4.7
>
> Attachments: SOLR-5644.patch, SOLR-5644.patch
>
>
> In OverseerCollectionProcessor:
> // find the leader for the shard
> Replica parentShardLeader = clusterState.getLeader(collectionName, slice);
> This returns null if there is no current leader and the following code does 
> not deal with that case and instead NPE's.






[jira] [Commented] (SOLR-5644) SplitShard does not handle not finding a shard leader well.

2014-02-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13894014#comment-13894014
 ] 

Mark Miller commented on SOLR-5644:
---

10 seems fine to me.

> SplitShard does not handle not finding a shard leader well.
> ---
>
> Key: SOLR-5644
> URL: https://issues.apache.org/jira/browse/SOLR-5644
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Anshum Gupta
>Priority: Minor
> Fix For: 5.0, 4.7
>
> Attachments: SOLR-5644.patch, SOLR-5644.patch
>
>
> In OverseerCollectionProcessor:
> // find the leader for the shard
> Replica parentShardLeader = clusterState.getLeader(collectionName, slice);
> This returns null if there is no current leader and the following code does 
> not deal with that case and instead NPE's.






Re: Lucene 4.6.1 changes section

2014-02-06 Thread Mark Miller
Yeah man, good catch. We need to move it in there from the 4.6 branch!

- Mark

http://about.me/markrmiller

On Feb 6, 2014, at 4:21 PM, Simon Willnauer  wrote:

> hey folks,
> 
> I don't see the 4.6.1 section in the 4.x branch. How do we handle
> this? Since we have a Lucene 4.3.1 section, we should also have a Lucene 4.6.1
> one, no?
> 
> simon
> 
> 





[jira] [Commented] (SOLR-3854) SolrCloud does not work with https

2014-02-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13894084#comment-13894084
 ] 

Mark Miller commented on SOLR-3854:
---

Thanks Steve - I started looking at this last night and have done a bit more 
work early this morning. I'm tweaking the tests a little to get some more 
coverage, but I think things look good overall.

> SolrCloud does not work with https
> --
>
> Key: SOLR-3854
> URL: https://issues.apache.org/jira/browse/SOLR-3854
> Project: Solr
>  Issue Type: Bug
>Reporter: Sami Siren
>Assignee: Mark Miller
> Fix For: 5.0, 4.7
>
> Attachments: SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
> SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
> SOLR-3854.patch
>
>
> There are a few places in current codebase that assume http is used. This 
> prevents using https when running solr in cloud mode.






[jira] [Commented] (SOLR-3854) SolrCloud does not work with https

2014-02-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13894139#comment-13894139
 ] 

Mark Miller commented on SOLR-3854:
---

Server restart is fine - I had it in my head that we didn't regenerate the URL in 
ZK on every startup - but it's only coreNodeName that is not generated. Never mind.

> SolrCloud does not work with https
> --
>
> Key: SOLR-3854
> URL: https://issues.apache.org/jira/browse/SOLR-3854
> Project: Solr
>  Issue Type: Bug
>Reporter: Sami Siren
>Assignee: Mark Miller
> Fix For: 5.0, 4.7
>
> Attachments: SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
> SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
> SOLR-3854.patch
>
>
> There are a few places in current codebase that assume http is used. This 
> prevents using https when running solr in cloud mode.






[jira] [Commented] (SOLR-5541) Allow QueryElevationComponent to accept elevateIds and excludeIds as http parameters

2014-02-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13894141#comment-13894141
 ] 

ASF subversion and git services commented on SOLR-5541:
---

Commit 1565520 from [~joel.bernstein] in branch 'dev/trunk'
[ https://svn.apache.org/r1565520 ]

SOLR-5541: Use StrUtils.splitSmart to handle escape chars
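
The reason for splitSmart over a plain String.split(",") is escape handling: a document id containing a literal comma can be passed as "\,". A minimal escape-aware splitter illustrating the behavior (a sketch, not Solr's StrUtils implementation):

```java
import java.util.ArrayList;
import java.util.List;

// Backslash-aware splitting: "\," contributes a literal comma to the
// current token instead of acting as a separator.
public class SplitSketch {
    static List<String> split(String s, char sep) {
        List<String> out = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c == '\\' && i + 1 < s.length()) {
                cur.append(s.charAt(++i));   // keep escaped char literally
            } else if (c == sep) {
                out.add(cur.toString());     // token boundary
                cur.setLength(0);
            } else {
                cur.append(c);
            }
        }
        out.add(cur.toString());
        return out;
    }
}
```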

> Allow QueryElevationComponent to accept elevateIds and excludeIds as http 
> parameters
> 
>
> Key: SOLR-5541
> URL: https://issues.apache.org/jira/browse/SOLR-5541
> Project: Solr
>  Issue Type: Improvement
>  Components: SearchComponents - other
>Affects Versions: 4.6
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Minor
> Fix For: 4.7
>
> Attachments: SOLR-5541.patch, SOLR-5541.patch, SOLR-5541.patch, 
> SOLR-5541.patch
>
>
> The QueryElevationComponent currently uses an xml file to map query strings 
> to elevateIds and excludeIds.
> This ticket adds the ability to pass in elevateIds and excludeIds through two 
> new http parameters "elevateIds" and "excludeIds".
> This will allow more sophisticated business logic to be used in selecting 
> which ids to elevate/exclude.
> Proposed syntax:
> http://localhost:8983/solr/elevate?q=*:*&elevatedIds=3,4&excludeIds=6,8
> The elevateIds and excludeIds point to the unique document Id.






[jira] [Commented] (SOLR-5541) Allow QueryElevationComponent to accept elevateIds and excludeIds as http parameters

2014-02-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13894165#comment-13894165
 ] 

ASF subversion and git services commented on SOLR-5541:
---

Commit 1565526 from [~joel.bernstein] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1565526 ]

SOLR-5541: Use StrUtils.splitSmart to handle escape chars

> Allow QueryElevationComponent to accept elevateIds and excludeIds as http 
> parameters
> 
>
> Key: SOLR-5541
> URL: https://issues.apache.org/jira/browse/SOLR-5541
> Project: Solr
>  Issue Type: Improvement
>  Components: SearchComponents - other
>Affects Versions: 4.6
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Minor
> Fix For: 4.7
>
> Attachments: SOLR-5541.patch, SOLR-5541.patch, SOLR-5541.patch, 
> SOLR-5541.patch
>
>
> The QueryElevationComponent currently uses an xml file to map query strings 
> to elevateIds and excludeIds.
> This ticket adds the ability to pass in elevateIds and excludeIds through two 
> new http parameters "elevateIds" and "excludeIds".
> This will allow more sophisticated business logic to be used in selecting 
> which ids to elevate/exclude.
> Proposed syntax:
> http://localhost:8983/solr/elevate?q=*:*&elevatedIds=3,4&excludeIds=6,8
> The elevateIds and excludeIds point to the unique document Id.






[jira] [Commented] (SOLR-5700) Improve error handling of remote queries (proxied requests)

2014-02-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13894136#comment-13894136
 ] 

Mark Miller commented on SOLR-5700:
---

Hmm...I've seen some weird test fails while testing this out. I don't know 
whether it's just bad luck or this patch, so I'll spend some more time later 
running the tests. I need to see if I see the same things with a clean checkout 
or what.

> Improve error handling of remote queries (proxied requests)
> ---
>
> Key: SOLR-5700
> URL: https://issues.apache.org/jira/browse/SOLR-5700
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Gregory Chanan
>Assignee: Mark Miller
> Fix For: 5.0, 4.7
>
> Attachments: SOLR-5700.patch, SOLR-5700v2.patch
>
>
> The current remoteQuery code in SolrDispatchFilter yields error messages like 
> the following:
> org.apache.solr.servlet.SolrDispatchFilter: 
> null:org.apache.solr.common.SolrException: Error trying to proxy request for 
> url: http://localhost:8983/solr/myCollection/update
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.remoteQuery(SolrDispatchFilter.java:580)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:288)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:169)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>   at 
> org.apache.solr.servlet.ProxyUserFilter.doFilter(ProxyUserFilter.java:241)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>   at 
> org.apache.solr.servlet.SolrHadoopAuthenticationFilter$2.doFilter(SolrHadoopAuthenticationFilter.java:140)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:384)
>   at 
> org.apache.solr.servlet.SolrHadoopAuthenticationFilter.doFilter(SolrHadoopAuthenticationFilter.java:145)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>   at 
> org.apache.solr.servlet.HostnameFilter.doFilter(HostnameFilter.java:86)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
>   at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>   at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
>   at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
>   at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
>   at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
>   at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>   at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
>   at 
> org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:861)
>   at 
> org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:606)
>   at 
> org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
>   at java.lang.Thread.run(Thread.java:724)
> Caused by: java.io.IOException: Server returned HTTP response code: 401 for 
> URL: 
> http://search-testing-c4-secure-4.ent.cloudera.com:8983/solr/sentryCollection/update?stream.body=%3Cadd%3E%3Cdoc%3E%3Cfield+name%3D%22id%22%3E1383855038349doc1%3C%2Ffield%3E%3Cfield+name%3D%22description%22%3Efirst+test+document+1383855038349%3C%2Ffield%3E%3C%2Fdoc%3E%3C%2Fadd%3E&doAs=user1
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> sun.net.www.protocol.http.HttpURLConnection$6.run(HttpURLConnection.java:1674)
>   at 
> sun.net.www.protocol.http.HttpURLConnection$6.run(HttpURLConnection.java:1672)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getChainedException(HttpURLConnection.java:1670)
>   at 
> sun.net

[jira] [Commented] (SOLR-3854) SolrCloud does not work with https

2014-02-06 Thread Steve Davids (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13894099#comment-13894099
 ] 

Steve Davids commented on SOLR-3854:


True, it will only be respected on a server restart. So if you wanted to go 
from http -> https you would need to update the clusterprops.json and then 
restart all of your nodes to pick up the new configuration. Although, if the 
scheme were stored as part of either the HttpShardHandler or the solr.xml, a 
restart would be required as well. I don't think people will be flipping SSL 
on/off; it's more of a global set-it-and-forget-it property.
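A minimal sketch of that set-it-and-forget-it behavior, assuming a cluster-wide scheme property that is read once at node startup and defaults to http. The property name and the plain Map standing in for clusterprops.json are illustrative:

```java
import java.util.Map;

// Illustrative only: a cluster-wide "urlScheme" property consulted once
// when a node builds its base URL. The property name and the Map source
// are stand-ins for clusterprops.json, not Solr's actual API.
public class SchemeResolver {
    static String resolveScheme(Map<String, String> clusterProps) {
        // Default to http; flipping the property only takes effect on
        // restart because base URLs are built once at registration time.
        return clusterProps.getOrDefault("urlScheme", "http");
    }

    static String baseUrl(Map<String, String> clusterProps,
                          String hostPort, String context) {
        return resolveScheme(clusterProps) + "://" + hostPort + "/" + context;
    }

    public static void main(String[] args) {
        // prints https://node1:8983/solr
        System.out.println(baseUrl(Map.of("urlScheme", "https"), "node1:8983", "solr"));
    }
}
```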

> SolrCloud does not work with https
> --
>
> Key: SOLR-3854
> URL: https://issues.apache.org/jira/browse/SOLR-3854
> Project: Solr
>  Issue Type: Bug
>Reporter: Sami Siren
>Assignee: Mark Miller
> Fix For: 5.0, 4.7
>
> Attachments: SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
> SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
> SOLR-3854.patch
>
>
> There are a few places in current codebase that assume http is used. This 
> prevents using https when running solr in cloud mode.






[jira] [Commented] (SOLR-3854) SolrCloud does not work with https

2014-02-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13894095#comment-13894095
 ] 

Mark Miller commented on SOLR-3854:
---

Hmm...it seems like we really don't want to put the scheme in zk as part of the 
base url - if you want to turn things on and off, it won't respect that anyway.

> SolrCloud does not work with https
> --
>
> Key: SOLR-3854
> URL: https://issues.apache.org/jira/browse/SOLR-3854
> Project: Solr
>  Issue Type: Bug
>Reporter: Sami Siren
>Assignee: Mark Miller
> Fix For: 5.0, 4.7
>
> Attachments: SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
> SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
> SOLR-3854.patch
>
>
> There are a few places in current codebase that assume http is used. This 
> prevents using https when running solr in cloud mode.






[jira] [Commented] (SOLR-3854) SolrCloud does not work with https

2014-02-06 Thread Steve Davids (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13894090#comment-13894090
 ] 

Steve Davids commented on SOLR-3854:


Thanks, let me know if you need a hand. I was also thinking that the following 
test should be added to verify the scheme:

{code}
  private void testBaseUrlHttpsScheme() {
    List<Replica> replicas = getZkReplicas();
    assertFalse("No replicas found in ZooKeeper", replicas.isEmpty());

    for (Replica replica : replicas) {
      String baseUrl = (String) replica.get(ZkStateReader.BASE_URL_PROP);
      assertTrue(baseUrl + " didn't begin with an https url scheme",
          StringUtils.startsWith(baseUrl, "https://"));
      try {
        URL url = new URL(baseUrl);
        assertNotNull("No path can be found for " + replica.getNodeName(),
            url.getPath());
      } catch (Exception ex) {
        fail(replica.getNodeName() + " failed to build a proper URL [" +
            baseUrl + "]");
      }
    }
  }

  protected List<Replica> getZkReplicas() {
    List<Replica> replicas = new ArrayList<>();
    ClusterState clusterState =
        cloudClient.getZkStateReader().getClusterState();
    for (String collection : clusterState.getCollections()) {
      for (Slice slice : clusterState.getSlicesMap(collection).values()) {
        replicas.addAll(slice.getReplicas());
      }
    }
    return replicas;
  }
{code}

> SolrCloud does not work with https
> --
>
> Key: SOLR-3854
> URL: https://issues.apache.org/jira/browse/SOLR-3854
> Project: Solr
>  Issue Type: Bug
>Reporter: Sami Siren
>Assignee: Mark Miller
> Fix For: 5.0, 4.7
>
> Attachments: SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
> SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
> SOLR-3854.patch
>
>
> There are a few places in current codebase that assume http is used. This 
> prevents using https when running solr in cloud mode.






[jira] [Created] (SOLR-5704) solr.xml coreNodeDirectory is ignored when creating new cores via REST(ish) apis

2014-02-06 Thread Jesse Sipprell (JIRA)
Jesse Sipprell created SOLR-5704:


 Summary: solr.xml coreNodeDirectory is ignored when creating new 
cores via REST(ish) apis
 Key: SOLR-5704
 URL: https://issues.apache.org/jira/browse/SOLR-5704
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.6.1
 Environment: x86_64 Linux
x86_64 Sun Java 7_u21
Reporter: Jesse Sipprell
Priority: Minor


"New style" core.properties auto-configuration works correctly at startup when 
${coreRootDirectory} is specified in ${solr.home}/solr.xml, however it does not 
work if a core is later created dynamically via either (indirectly) the 
collection API or (directly) the core API. Core creation is always attempted in 
${solr.home}.
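The resolution the report expects can be sketched as follows: new cores should be rooted at ${coreRootDirectory} when it is configured and fall back to ${solr.home} otherwise, whereas the dynamic-creation path reportedly always uses ${solr.home}. This is illustrative code, not Solr's actual core-creation logic:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch of the instance-dir resolution the reporter expects (illustrative,
// not Solr's code): prefer coreRootDirectory when set, else solr.home.
public class CoreRootResolver {
    static Path instanceDir(String coreRootDirectory, String solrHome,
                            String coreName) {
        String root = (coreRootDirectory != null && !coreRootDirectory.isEmpty())
                ? coreRootDirectory   // configured root wins
                : solrHome;           // fallback: solr.home
        return Paths.get(root, coreName);
    }

    public static void main(String[] args) {
        // on a POSIX filesystem, prints /data/solr/cores/collection1_shard1
        System.out.println(instanceDir("/data/solr/cores", "/opt/solr",
                "collection1_shard1"));
    }
}
```

The reported bug is that the collection/core APIs behave as if the first argument were always null.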






[jira] [Commented] (SOLR-5704) solr.xml coreNodeDirectory is ignored when creating new cores via REST(ish) apis

2014-02-06 Thread Jesse Sipprell (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13894206#comment-13894206
 ] 

Jesse Sipprell commented on SOLR-5704:
--

The following is the common solr.xml we use on all 4.6.1 nodes with path and 
hostname redactions for security reasons:

{noformat}
<solr>
  <str name="adminHandler">${adminHandler:org.apache.solr.handler.admin.CoreAdminHandler}</str>
  <int name="coreLoadThreads">${coreLoadThreads:3}</int>
  <str name="coreRootDirectory">${coreRootDirectory:/data/solr/cores/}</str>

  <str name="managementPath">${managementPath:admin}</str>
  <str name="sharedLib">${sharedLib:lib}</str>
  <str name="shareSchema">${shareSchema:false}</str>

  <int name="transientCacheSize">${transientCacheSize:2147483647}</int>

  <solrcloud>
    <int name="distribUpdTimeout">${distribUpdTimeout:3}</int>
    <int name="distribUpdateTimeout">${distribUpdateTimeout:15000}</int>
    <int name="leaderVoteWait">${leaderVoteWait:3}</int>
    <str name="host">${host:a-fully-qualified-hostname.mcclatchyinteractive.com}</str>
    <str name="hostContext">${hostContext:solr}</str>
    <int name="hostPort">${jetty.port:8983}</int>
    <int name="zkClientTimeout">${zkClientTimeout:15000}</int>
    <str name="zkHost">${zkHost:a,zk,cluster,host,port,namespace,list}</str>
    <bool name="genericCoreNodeNames">${genericCoreNodeNames:true}</bool>
  </solrcloud>
</solr>
{noformat}


> solr.xml coreNodeDirectory is ignored when creating new cores via REST(ish) 
> apis
> 
>
> Key: SOLR-5704
> URL: https://issues.apache.org/jira/browse/SOLR-5704
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.6.1
> Environment: x86_64 Linux
> x86_64 Sun Java 7_u21
>Reporter: Jesse Sipprell
>Priority: Minor
>  Labels: solr.xml
>
> "New style" core.properties auto-configuration works correctly at startup 
> when ${coreRootDirectory} is specified in ${solr.home}/solr.xml, however it 
> does not work if a core is later created dynamically via either (indirectly) 
> the collection API or (directly) the core API. Core creation is always 
> attempted in ${solr.home}.






[jira] [Comment Edited] (SOLR-5704) solr.xml coreNodeDirectory is ignored when creating new cores via REST(ish) apis

2014-02-06 Thread Jesse Sipprell (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13894206#comment-13894206
 ] 

Jesse Sipprell edited comment on SOLR-5704 at 2/7/14 5:30 AM:
--

The following is the common solr.xml we use on all 4.6.1 nodes with path and 
hostname redactions for security reasons:

{code:xml}
<solr>
  <str name="adminHandler">${adminHandler:org.apache.solr.handler.admin.CoreAdminHandler}</str>
  <int name="coreLoadThreads">${coreLoadThreads:3}</int>
  <str name="coreRootDirectory">${coreRootDirectory:/data/solr/cores/}</str>

  <str name="managementPath">${managementPath:admin}</str>
  <str name="sharedLib">${sharedLib:lib}</str>
  <str name="shareSchema">${shareSchema:false}</str>

  <int name="transientCacheSize">${transientCacheSize:2147483647}</int>

  <solrcloud>
    <int name="distribUpdTimeout">${distribUpdTimeout:3}</int>
    <int name="distribUpdateTimeout">${distribUpdateTimeout:15000}</int>
    <int name="leaderVoteWait">${leaderVoteWait:3}</int>
    <str name="host">${host:a-fully-qualified-hostname.mcclatchyinteractive.com}</str>
    <str name="hostContext">${hostContext:solr}</str>
    <int name="hostPort">${jetty.port:8983}</int>
    <int name="zkClientTimeout">${zkClientTimeout:15000}</int>
    <str name="zkHost">${zkHost:a,zk,cluster,host,port,namespace,list}</str>
    <bool name="genericCoreNodeNames">${genericCoreNodeNames:true}</bool>
  </solrcloud>
</solr>
{code}



was (Author: jsipprell):
The following is the common solr.xml we use on all 4.6.1 nodes with path and 
hostname redactions for security reasons:

{noformat}
<solr>
  <str name="adminHandler">${adminHandler:org.apache.solr.handler.admin.CoreAdminHandler}</str>
  <int name="coreLoadThreads">${coreLoadThreads:3}</int>
  <str name="coreRootDirectory">${coreRootDirectory:/data/solr/cores/}</str>

  <str name="managementPath">${managementPath:admin}</str>
  <str name="sharedLib">${sharedLib:lib}</str>
  <str name="shareSchema">${shareSchema:false}</str>

  <int name="transientCacheSize">${transientCacheSize:2147483647}</int>

  <solrcloud>
    <int name="distribUpdTimeout">${distribUpdTimeout:3}</int>
    <int name="distribUpdateTimeout">${distribUpdateTimeout:15000}</int>
    <int name="leaderVoteWait">${leaderVoteWait:3}</int>
    <str name="host">${host:a-fully-qualified-hostname.mcclatchyinteractive.com}</str>
    <str name="hostContext">${hostContext:solr}</str>
    <int name="hostPort">${jetty.port:8983}</int>
    <int name="zkClientTimeout">${zkClientTimeout:15000}</int>
    <str name="zkHost">${zkHost:a,zk,cluster,host,port,namespace,list}</str>
    <bool name="genericCoreNodeNames">${genericCoreNodeNames:true}</bool>
  </solrcloud>
</solr>
{noformat}


> solr.xml coreNodeDirectory is ignored when creating new cores via REST(ish) 
> apis
> 
>
> Key: SOLR-5704
> URL: https://issues.apache.org/jira/browse/SOLR-5704
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.6.1
> Environment: x86_64 Linux
> x86_64 Sun Java 7_u21
>Reporter: Jesse Sipprell
>Priority: Minor
>  Labels: solr.xml
>
> "New style" core.properties auto-configuration works correctly at startup 
> when ${coreRootDirectory} is specified in ${solr.home}/solr.xml, however it 
> does not work if a core is later created dynamically via either (indirectly) 
> the collection API or (directly) the core API. Core creation is always 
> attempted in ${solr.home}.






[jira] [Commented] (SOLR-3854) SolrCloud does not work with https

2014-02-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13894214#comment-13894214
 ] 

Mark Miller commented on SOLR-3854:
---

So I looked at pulling the new ssl code into AbstractFullDistribZkTestBase. 
Most tests should just work with this, and a couple that explicitly use http or 
something just need a little getter. Then, rather than run this one test and 
add the extra test time, like a dozen tests can randomly run with ssl or not.

The problem is that it seems to all work fine running tests one at a time in 
eclipse, but running them with ant test has all kinds of fails. I have not 
figured out why yet. I've tried a few things, but everything looks like it 
should clean up properly (the latest patch was missing clearing the system 
properties it set in teardown or afterclass).
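The cleanup concern can be illustrated with a toy setup/teardown pair: system properties are JVM-wide, so a test class that randomly enables SSL by setting them must clear them afterwards, or later tests running in the same JVM inherit the SSL configuration. The property names here are illustrative, not the ones the patch uses:

```java
import java.util.Random;

// Illustration of the teardown issue discussed above: randomized-SSL setup
// mutates JVM-wide state, so failing to clear it leaks the configuration
// into every test that runs later in the same JVM. Property names are
// hypothetical stand-ins.
public class SslRandomization {
    static void setUp(Random random) {
        if (random.nextBoolean()) {
            System.setProperty("tests.urlScheme", "https");
            System.setProperty("javax.net.ssl.keyStore", "/path/to/keystore");
        }
    }

    static void tearDown() {
        // Without these, a prior https run bleeds into the next test class.
        System.clearProperty("tests.urlScheme");
        System.clearProperty("javax.net.ssl.keyStore");
    }

    public static void main(String[] args) {
        setUp(new Random());
        tearDown();
        // prints http regardless of what setUp chose
        System.out.println(System.getProperty("tests.urlScheme", "http"));
    }
}
```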

> SolrCloud does not work with https
> --
>
> Key: SOLR-3854
> URL: https://issues.apache.org/jira/browse/SOLR-3854
> Project: Solr
>  Issue Type: Bug
>Reporter: Sami Siren
>Assignee: Mark Miller
> Fix For: 5.0, 4.7
>
> Attachments: SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
> SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
> SOLR-3854.patch
>
>
> There are a few places in current codebase that assume http is used. This 
> prevents using https when running solr in cloud mode.






[jira] [Updated] (SOLR-5704) solr.xml coreNodeDirectory is ignored when creating new cores via REST(ish) apis

2014-02-06 Thread Jesse Sipprell (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Sipprell updated SOLR-5704:
-

Description: 
"New style" core.properties auto-configuration works correctly at startup when 
{{$\{coreRootDirectory\}}} is specified in {{$\{solr.home\}/solr.xml}}, however 
it does not work if a core is later created dynamically via either (indirectly) 
the collection API or (directly) the core API.

Core creation is always attempted in {{$\{solr.home\}}}.

  was:"New style" core.properties auto-configuration works correctly at startup 
when ${coreRootDirectory} is specified in ${solr.home}/solr.xml, however it 
does not work if a core is later created dynamically via either (indirectly) 
the collection API or (directly) the core API. Core creation is always 
attempted in ${solr.home}.


> solr.xml coreNodeDirectory is ignored when creating new cores via REST(ish) 
> apis
> 
>
> Key: SOLR-5704
> URL: https://issues.apache.org/jira/browse/SOLR-5704
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.6.1
> Environment: x86_64 Linux
> x86_64 Sun Java 7_u21
>Reporter: Jesse Sipprell
>Priority: Minor
>  Labels: solr.xml
>
> "New style" core.properties auto-configuration works correctly at startup 
> when {{$\{coreRootDirectory\}}} is specified in {{$\{solr.home\}/solr.xml}}, 
> however it does not work if a core is later created dynamically via either 
> (indirectly) the collection API or (directly) the core API.
> Core creation is always attempted in {{$\{solr.home\}}}.






[jira] [Comment Edited] (SOLR-3854) SolrCloud does not work with https

2014-02-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13894214#comment-13894214
 ] 

Mark Miller edited comment on SOLR-3854 at 2/7/14 5:38 AM:
---

So I looked at pulling the new ssl test code into 
AbstractFullDistribZkTestBase. Most tests should just work with this, and a 
couple that explicitly use http or something just need a little getter. Then, 
rather than run this one test and add the extra test time, like a dozen tests 
can randomly run with ssl or not.

The problem is that it seems to all work fine running tests one at a time in 
eclipse, but running them with ant test has all kinds of fails. I have not 
figured out why yet. I've tried a few things, but everything looks like it 
should clean up properly (the latest patch was missing clearing the system 
properties it set in teardown or afterclass). Somehow the tests are interacting 
with each other, or they don't work when run by ant (there are some differences 
unless you pass security managers and what not via your ide).


was (Author: markrmil...@gmail.com):
So I looked at pulling the new ssl code in AbstractFullDistribZkTestBase. Most 
tests should just work with this, and couple that explicitly use http or 
something just need a little getter. Then, rather than run this one tests and 
add the extra test time, like a dozen tests can randomly run with ssl or not.

The problem is that it seems to all work fine running tests one at a time in 
eclipse, but running them with ant test has all kinds of fails. I have not 
figured out why yet. I've tried a few things, but everything looks like it 
should clean up properly (the latest patch was missing clearing the system 
properties it set in teardown or afterclass).

> SolrCloud does not work with https
> --
>
> Key: SOLR-3854
> URL: https://issues.apache.org/jira/browse/SOLR-3854
> Project: Solr
>  Issue Type: Bug
>Reporter: Sami Siren
>Assignee: Mark Miller
> Fix For: 5.0, 4.7
>
> Attachments: SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
> SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, SOLR-3854.patch, 
> SOLR-3854.patch
>
>
> There are a few places in current codebase that assume http is used. This 
> prevents using https when running solr in cloud mode.


