[JENKINS] Lucene-Solr-4.x-MacOSX (64bit/jdk1.6.0) - Build # 518 - Failure!

2013-06-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/518/
Java: 64bit/jdk1.6.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
REGRESSION:  org.apache.lucene.replicator.http.HttpReplicatorTest.testBasic

Error Message:
Connection to http://localhost:51853 refused

Stack Trace:
org.apache.http.conn.HttpHostConnectException: Connection to http://localhost:51853 refused
	at __randomizedtesting.SeedInfo.seed([AC4BFCC35CEA9F5C:7B1E1D683361972]:0)
	at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:190)
	at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
	at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
	at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
	at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
	at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
	at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
	at org.apache.lucene.replicator.http.HttpClientBase.executeGET(HttpClientBase.java:178)
	at org.apache.lucene.replicator.http.HttpReplicator.checkForUpdate(HttpReplicator.java:51)
	at org.apache.lucene.replicator.ReplicationClient.doUpdate(ReplicationClient.java:196)
	at org.apache.lucene.replicator.ReplicationClient.updateNow(ReplicationClient.java:402)
	at org.apache.lucene.replicator.http.HttpReplicatorTest.testBasic(HttpReplicatorTest.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
	at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
	at or

[jira] [Created] (SOLR-4900) Leader election deadlock after restarting leader in 4.2.1

2013-06-04 Thread John Guerrero (JIRA)
John Guerrero created SOLR-4900:
---

 Summary: Leader election deadlock after restarting leader in 4.2.1
 Key: SOLR-4900
 URL: https://issues.apache.org/jira/browse/SOLR-4900
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.2
 Environment: Linux 64 bit, Tomcat 6.0.35, Java 6u27 64 bit
Reporter: John Guerrero


Copying post from 
http://lucene.472066.n3.nabble.com/Leader-election-deadlock-after-restarting-leader-in-4-2-1-td4067988.html

SOLR 4.2.1, tomcat 6.0.35, CentOS 6.2 (2.6.32-220.4.1.el6.x86_64 #1 SMP), java 
6u27 64 bit 
6 nodes, 2 shards, 3 replicas each.  Names changed to r1s2 (replica1 - shard 
2), r2s2, and r3s2 for each replica in shard 2. 

*What we see*:
* Under production load, we restart a leader (r1s2), and observe in the cloud 
admin 
that the old leader is in state "Down" and no new leader is ever elected. 
* The system will stay like this until we stop the old leader (or cause a ZK 
timeout...see below). 

*Please note*: the leader is killed, then kill -9'd 5 seconds later, before 
restarting.  We have since changed this. 

*Digging into the logs on the old leader (r1s2 = replica1-shard 2)*:
* The old leader restarted at 5:23:29 PM, but appears to be stuck in 
SolrDispatchFilter.init() -- (See recovery at bottom). 
* It doesn't want to become leader, possibly due to the unclean shutdown. 
May 28, 2013 5:24:42 PM org.apache.solr.update.PeerSync handleVersions 
INFO: PeerSync: core=browse url=http://r1s2:8080/solr  Our versions are too 
old. ourHighThreshold=1436325665147191297 otherLowThreshold=1436325775374548992 
* It then tries to recover, but cannot, because there is no leader. 
May 28, 2013 5:24:43 PM org.apache.solr.common.SolrException log 
SEVERE: Error while trying to recover. 
core=browse:org.apache.solr.common.SolrException: No registered leader was 
found, collection:browse slice:shard2 
* Meanwhile, it appears that blocking in init() prevents the http-8080 handler 
from starting (See recovery at bottom). 

*Digging into the other replicas (r2s2)*:
* For some reason, the old leader (r1s2) remains in the list of replicas that 
r2s2 attempts to sync to. 
May 28, 2013 5:23:42 PM org.apache.solr.update.PeerSync sync 
INFO: PeerSync: core=browse url=http://r2s2:8080/solr START 
replicas=[http://r1s2:8080/solr/browse/, http://r3s2:8080/solr/browse/] 
nUpdates=100 
* This apparently fails (30 second timeout), possibly due to http-8080 handler 
not being started on r1s2. 
May 28, 2013 5:24:12 PM org.apache.solr.update.PeerSync handleResponse 
WARNING: PeerSync: core=browse url=http://r2s2:8080/solr  exception talking to 
http://r1s2:8080/solr/browse/, failed 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://r1s2:8080/solr/browse

*At this point, the cluster will remain indefinitely without a leader, if 
nothing else changes.*

But in this particular instance, we took some stack and heap dumps from r1s2, 
which paused java 
long enough to cause a *zookeeper timeout on the old leader (r1s2)*: 
May 28, 2013 5:33:26 PM org.apache.zookeeper.ClientCnxn$SendThread run 
INFO: Client session timed out, have not heard from server in 38226ms for 
sessionid 0x23d28e0f584005d, closing socket connection and attempting reconnect 

Then, one of the replicas (r3s2) finally stopped trying to sync to r1s2 and 
succeeded in becoming leader: 
May 28, 2013 5:33:34 PM org.apache.solr.update.PeerSync sync 
INFO: PeerSync: core=browse url=http://r3s2:8080/solr START 
replicas=[http://r2s2:8080/solr/browse/] nUpdates=100 
May 28, 2013 5:33:34 PM org.apache.solr.update.PeerSync handleVersions 
INFO: PeerSync: core=browse url=http://r3s2:8080/solr  Received 100 versions 
from r2s2:8080/solr/browse/ 
May 28, 2013 5:33:34 PM org.apache.solr.update.PeerSync handleVersions 
INFO: PeerSync: core=browse url=http://r3s2:8080/solr  Our versions are newer. 
ourLowThreshold=1436325775374548992 otherHigh=1436325775805513730 
May 28, 2013 5:33:34 PM org.apache.solr.update.PeerSync sync 
INFO: PeerSync: core=browse url=http://r3s2:8080/solr DONE. sync succeeded 

Now that we have a leader, r1s2 can succeed in recovery and finish 
SolrDispatchFilter.init(), 
apparently allowing the http-8080 handler to start (r1s2). 
May 28, 2013 5:34:49 PM org.apache.solr.cloud.RecoveryStrategy replay 
INFO: No replay needed. core=browse 
May 28, 2013 5:34:49 PM org.apache.solr.cloud.RecoveryStrategy doRecovery 
INFO: Replication Recovery was successful - registering as Active. core=browse 
May 28, 2013 5:34:49 PM org.apache.solr.cloud.ZkController publish 
INFO: publishing core=browse state=active 
May 28, 2013 5:34:49 PM org.apache.solr.cloud.ZkController publish 
INFO: numShards not found on descriptor - reading it from system property 
May 28, 2013 5:34:49 PM org.apache.solr.cloud.RecoveryStrategy doRecovery 
INFO: Fini

[jira] [Created] (LUCENE-5033) SlowFuzzyQuery appears to fail with edit distance >=3 in some cases

2013-06-04 Thread Tim Allison (JIRA)
Tim Allison created LUCENE-5033:
---

 Summary: SlowFuzzyQuery appears to fail with edit distance >=3 in 
some cases
 Key: LUCENE-5033
 URL: https://issues.apache.org/jira/browse/LUCENE-5033
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/other
Affects Versions: 4.3
Reporter: Tim Allison
Priority: Minor


The Levenshtein edit distance between "monday" and "montugu" should be 4.  The 
following test runs a query with the maximum edit distance ("sim") set to 3, yet 
there is a hit.

  public void testFuzzinessLong2() throws Exception {
    Directory directory = newDirectory();
    RandomIndexWriter writer = new RandomIndexWriter(random(), directory);
    addDoc("monday", writer);

    IndexReader reader = writer.getReader();
    IndexSearcher searcher = newSearcher(reader);
    writer.close();

    // Maximum edit distance 3, prefix length 0: "montugu" is distance 4
    // from "monday", so this search should return no hits.
    SlowFuzzyQuery query = new SlowFuzzyQuery(new Term("field", "montugu"), 3, 0);
    ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
    assertEquals(0, hits.length);
  }
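For reference, the distance claim above can be checked with a standalone sketch, independent of Lucene's automaton-based implementation (a textbook dynamic-programming Levenshtein distance; the class name is mine, not from the issue):

```java
public class EditDistanceCheck {
  // Classic DP Levenshtein distance: cost 1 each for insert, delete, substitute.
  static int levenshtein(String a, String b) {
    int[][] d = new int[a.length() + 1][b.length() + 1];
    for (int i = 0; i <= a.length(); i++) d[i][0] = i;  // delete all of a's prefix
    for (int j = 0; j <= b.length(); j++) d[0][j] = j;  // insert all of b's prefix
    for (int i = 1; i <= a.length(); i++) {
      for (int j = 1; j <= b.length(); j++) {
        int sub = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
        d[i][j] = Math.min(Math.min(d[i - 1][j] + 1,     // delete from a
                                    d[i][j - 1] + 1),    // insert into a
                           d[i - 1][j - 1] + sub);       // substitute (or match)
      }
    }
    return d[a.length()][b.length()];
  }

  public static void main(String[] args) {
    // Shared prefix "mon"; "day" and "tugu" share no characters,
    // so the distance is 3 substitutions plus 1 insertion.
    System.out.println(levenshtein("monday", "montugu")); // prints 4
  }
}
```

Since 4 > 3, the query above should indeed produce zero hits, which is why the returned hit looks like a bug.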

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-4891) JsonLoader should preserve field value types from the JSON content stream

2013-06-04 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-4891.
--

   Resolution: Fixed
Fix Version/s: 4.4

Committed:

- trunk: [r1489676|http://svn.apache.org/viewvc?view=rev&rev=1489676]
- branch_4x: [r1489677|http://svn.apache.org/viewvc?view=rev&rev=1489677]

> JsonLoader should preserve field value types from the JSON content stream
> -
>
> Key: SOLR-4891
> URL: https://issues.apache.org/jira/browse/SOLR-4891
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Minor
> Fix For: 4.4
>
> Attachments: SOLR-4891.patch
>
>
> JSON content streams carry some basic type information for their field 
> values, as parsed by Noggit: LONG, NUMBER, BIGNUMBER, and BOOLEAN.  
> {{JsonLoader}} should set field value object types in the 
> {{SolrInputDocument}} according to the content stream's data types. 
> Currently {{JsonLoader}} converts all non-{{String}}-typed field values to 
> {{String}}-s.
> There is a comment in {{JsonLoader.parseSingleFieldValue()}}, where the 
> convert-everything-to-string logic happens, that says "for legacy reasons, 
> single values s are expected to be strings", but other content streams' type 
> information is not flattened like this, e.g. {{JavabinLoader}}.
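As a concrete illustration of the types involved (a hypothetical update document; the field names are made up, not taken from the issue):

```json
{
  "add": {
    "doc": {
      "id": "1",
      "views_l": 42,
      "price_d": 3.14,
      "big_s": 123456789012345678901234567890,
      "inStock_b": true
    }
  }
}
```

Noggit reports 42 as LONG, 3.14 as NUMBER, the oversized integer as BIGNUMBER, and true as BOOLEAN; before this fix, JsonLoader handed all four to the SolrInputDocument as strings.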




[jira] [Commented] (SOLR-2058) Adds optional "phrase slop" to edismax "pf2", "pf3" and "pf" parameters with field~slop^boost syntax

2013-06-04 Thread Naomi Dushay (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13675456#comment-13675456
 ] 

Naomi Dushay commented on SOLR-2058:


Michael - per your comment on Sep 25, 2012 -- that behavioral change is *not* 
desirable, in my opinion.

> Adds optional "phrase slop" to edismax "pf2", "pf3" and "pf" parameters with 
> field~slop^boost syntax
> 
>
> Key: SOLR-2058
> URL: https://issues.apache.org/jira/browse/SOLR-2058
> Project: Solr
>  Issue Type: Improvement
>  Components: query parsers
> Environment: n/a
>Reporter: Ron Mayer
>Assignee: James Dyer
>Priority: Minor
> Fix For: 4.0-ALPHA
>
> Attachments: edismax_pf_with_slop_v2.1.patch, 
> edismax_pf_with_slop_v2.patch, pf2_with_slop.patch, 
> SOLR-2058-and-3351-not-finished.patch, SOLR-2058.patch
>
>
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201008.mbox/%3c4c659119.2010...@0ape.com%3E
> {quote}
> From  Ron Mayer 
> ... my results might  be even better if I had a couple different "pf2"s with 
> different "ps"'s at the same time.  In particular, one with ps=0 to put a 
> high boost on ones that have the right ordering of words.  For example 
> ensuring that [the query]:
>   "red hat black jacket"
>  boosts only documents with "red hats" and not "black hats".   And another 
> pf2 with a more modest boost with ps=5 or so to handle the query above also 
> boosting docs with 
>   "red baseball hat".
> {quote}
> [http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201008.mbox/%3caanlktimd+v3g6d_mnhp+jykkd+dej8fvmvf_1lqoi...@mail.gmail.com%3E]
> {quote}
> From  Yonik Seeley 
> Perhaps fold it into the pf/pf2 syntax?
> pf=text^2// current syntax... makes phrases with a boost of 2
> pf=text~1^2  // proposed syntax... makes phrases with a slop of 1 and
> a boost of 2
> That actually seems pretty natural given the lucene query syntax - an
> actual boosted sloppy phrase query already looks like
> {{text:"foo bar"~1^2}}
> -Yonik
> {quote}
> [http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201008.mbox/%3calpine.deb.1.10.1008161300510.6...@radix.cryptio.net%3E]
> {quote}
> From  Chris Hostetter 
> Big +1 to this idea ... the existing "ps" param can stick arround as the 
> default for any field that doesn't specify it's own slop in the pf/pf2/pf3 
> fields using the "~" syntax.
> -Hoss
> {quote}
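Concretely, Yonik's proposal amounts to request parameters along these lines (a sketch of the proposed syntax only; the field name, boosts, and slops are illustrative, per Ron Mayer's use case of two "pf2" clauses on the same field):

```
defType=edismax
q=red hat black jacket
pf2=text~0^10 text~2^2
ps=3
```

The ~0 clause strongly boosts exact adjacent pairs ("red hat"), the looser ~2 clause still rewards near matches like "red baseball hat", and "ps" remains the default slop for any field that carries no "~" of its own.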




[jira] [Commented] (SOLR-4890) Can't find (or read) directory to add to classloader: /non/existent/dir/yields/warning (resolved as: /non/existent/dir/yields/warning).

2013-06-04 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13675345#comment-13675345
 ] 

Steve Rowe commented on SOLR-4890:
--

bq. the comments in that file, at least regarding this line, are totally 
unhelpful.

Patches welcome!

> Can't find (or read) directory to add to classloader: 
> /non/existent/dir/yields/warning (resolved as: 
> /non/existent/dir/yields/warning).
> ---
>
> Key: SOLR-4890
> URL: https://issues.apache.org/jira/browse/SOLR-4890
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools, web gui
>Affects Versions: 4.3
> Environment: Linux (CentOS 6.2)
>Reporter: Aaron Greenspan
>Priority: Minor
>  Labels: Confusing
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> I just did a fresh install of Solr 4.3.0 twice. Both times, the default build 
> yielded this in Logging:
> Can't find (or read) directory to add to classloader: 
> /non/existent/dir/yields/warning (resolved as: 
> /non/existent/dir/yields/warning).
> This appears to come from line 87 in solrconfig.xml for collection1, which I 
> think is supposed to be commented out. Or maybe it has some other purpose. 
> Either way, the comments in that file, at least regarding this line, are 
> totally unhelpful.




[jira] [Commented] (SOLR-4890) Can't find (or read) directory to add to classloader: /non/existent/dir/yields/warning (resolved as: /non/existent/dir/yields/warning).

2013-06-04 Thread Aaron Greenspan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13675284#comment-13675284
 ] 

Aaron Greenspan commented on SOLR-4890:
---

I don't mean to be inflammatory, but I find this pretty shocking. The mentality 
that users should be warned (literally, here) up-front as to what consequences 
*may* follow *if* they screw up is perhaps the most backwards and heavy-handed 
approach to UX I've ever encountered. Even Microsoft, in the worst of its 
mid-90s products, did not intentionally insert warnings and/or errors that did 
not have to be there.

A fresh build of a software product should not throw any warnings or errors 
when its default configuration is set up. If the user eventually encounters an 
error due to a misconfiguration (which I've noticed with Solr seems to be just 
about everything I do), then the error should help the user determine not only 
what went wrong, but how to fix it as well. Ideally, the product should be 
designed to prevent such misconfigurations from ever occurring.

For the potential philosophical discussion that could be had on this matter 
I'll check out the mailing list, but as far as line 87 in solrconfig.xml, I 
think that it's a legitimate issue worthy of a bug report.

> Can't find (or read) directory to add to classloader: 
> /non/existent/dir/yields/warning (resolved as: 
> /non/existent/dir/yields/warning).
> ---
>
> Key: SOLR-4890
> URL: https://issues.apache.org/jira/browse/SOLR-4890
> Project: Solr
>  Issue Type: Bug
>  Components: scripts and tools, web gui
>Affects Versions: 4.3
> Environment: Linux (CentOS 6.2)
>Reporter: Aaron Greenspan
>Priority: Minor
>  Labels: Confusing
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> I just did a fresh install of Solr 4.3.0 twice. Both times, the default build 
> yielded this in Logging:
> Can't find (or read) directory to add to classloader: 
> /non/existent/dir/yields/warning (resolved as: 
> /non/existent/dir/yields/warning).
> This appears to come from line 87 in solrconfig.xml for collection1, which I 
> think is supposed to be commented out. Or maybe it has some other purpose. 
> Either way, the comments in that file, at least regarding this line, are 
> totally unhelpful.




[jira] [Commented] (SOLR-3633) web UI reports an error if CoreAdminHandler says there are no SolrCores

2013-06-04 Thread Aaron Greenspan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13675265#comment-13675265
 ] 

Aaron Greenspan commented on SOLR-3633:
---

Mark,

Perhaps it doesn't crash in the sense of throwing a segmentation fault, but it 
does throw a Java exception and then stops working. For a novice user like me 
that means there is no way to go back and add a core, since the only way I'd 
know how to do it (and the only way to fix it) is through the web UI. And even 
if there were such a way to add a core, I'd run into issue 4461 and not be able 
to anyway.

Aaron

> web UI reports an error if CoreAdminHandler says there are no SolrCores
> ---
>
> Key: SOLR-3633
> URL: https://issues.apache.org/jira/browse/SOLR-3633
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Affects Versions: 4.0-ALPHA
>Reporter: Hoss Man
>Assignee: Stefan Matheis (steffkes)
> Fix For: 4.4
>
> Attachments: SOLR-3633.patch, SOLR-3633.patch
>
>
> Spun off from SOLR-3591...
> * having no SolrCores is a valid situation
> * independent of what may happen in SOLR-3591, the web UI should cleanly deal 
> with there being no SolrCores, and just hide/grey out any tabs that can't be 
> supported w/o at least one core
> * even if there are no SolrCores the core admin features (ie: creating a new 
> core) should be accessible in the UI




[jira] [Updated] (SOLR-2991) In 3X, not used consistently in all places Directory objects are instantiated

2013-06-04 Thread Alexander Kanarsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Kanarsky updated SOLR-2991:
-

Attachment: SOLR-2991.patch

> In 3X,  not used consistently in all places Directory objects are 
> instantiated
> -
>
> Key: SOLR-2991
> URL: https://issues.apache.org/jira/browse/SOLR-2991
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.6.1
>Reporter: Mark Miller
>Priority: Minor
> Attachments: SOLR-2991.patch
>
>
> nipunb noted on the mailing list that when configuring solr to use an 
> alternate  (ie: simple) the stats for the SolrIndexSearcher list 
> NativeFSLockFactory being used by the Directory.
> The problem seems to be that SolrIndexConfig is not consulted when 
> constructing Directory objects used for IndexReader (it's only used by 
> SolrIndexWriter)
> I don't _think_ this is a problem in most cases (since the IndexReaders should 
> all be readOnly in the core solr code), but plugins could attempt to use them 
> in other ways.  In general it seems like a really bad bug waiting to happen.




[jira] [Commented] (SOLR-2991) In 3X, not used consistently in all places Directory objects are instantiated

2013-06-04 Thread Alexander Kanarsky (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13675252#comment-13675252
 ] 

Alexander Kanarsky commented on SOLR-2991:
--

While I agree that this is a minor issue, since the index readers/searchers 
should not be directly using the lock factory that is set, it makes sense to 
have consistent lock types between readers and writers. The proposed patch 
seems to fix the problem: it simply ensures the proper lock factory is set 
after the Directory is opened. I reused the static method in SolrIndexWriter, 
but since that method does not really belong to SolrIndexWriter, it could be 
shared between readers and writers.

> In 3X,  not used consistently in all places Directory objects are 
> instantiated
> -
>
> Key: SOLR-2991
> URL: https://issues.apache.org/jira/browse/SOLR-2991
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.6.1
>Reporter: Mark Miller
>Priority: Minor
> Attachments: SOLR-2991.patch
>
>
> nipunb noted on the mailing list that when configuring solr to use an 
> alternate  (ie: simple) the stats for the SolrIndexSearcher list 
> NativeFSLockFactory being used by the Directory.
> The problem seems to be that SolrIndexConfig is not consulted when 
> constructing Directory objects used for IndexReader (it's only used by 
> SolrIndexWriter)
> I don't _think_ this is a problem in most cases (since the IndexReaders should 
> all be readOnly in the core solr code), but plugins could attempt to use them 
> in other ways.  In general it seems like a really bad bug waiting to happen.
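For context, the lock factory under discussion is selected by the lockType setting in solrconfig.xml (Solr 3.x layout shown; the value is illustrative):

```xml
<mainIndex>
  <!-- native | simple | single | none -->
  <lockType>simple</lockType>
</mainIndex>
```

The bug is that this setting reached Directory instances opened for SolrIndexWriter but not those opened for IndexReaders, which is why the SolrIndexSearcher stats reported NativeFSLockFactory even when an alternate lock type was configured.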




[JENKINS-MAVEN] Lucene-Solr-Maven-4.x #348: POMs out of sync

2013-06-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-4.x/348/

4 tests failed.
REGRESSION:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
There are still nodes recoverying - waited for 230 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 230 seconds
	at __randomizedtesting.SeedInfo.seed([D754C9A27826F443:56B247BA0F79947F]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:173)
	at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:131)
	at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:126)
	at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:512)
	at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:146)


FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.org.apache.solr.cloud.CollectionsAPIDistributedZkTest

Error Message:
4 threads leaked from SUITE scope at org.apache.solr.cloud.CollectionsAPIDistributedZkTest:
   1) Thread[id=3981, name=recoveryCmdExecutor-2281-thread-3, state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
        at java.net.Socket.connect(Socket.java:546)
        at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
        at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
        at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
        at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
        at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
        at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:298)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:679)
   2) Thread[id=3977, name=recoveryCmdExecutor-2281-thread-1, state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
        at java.net.Socket.connect(Socket.java:546)
        at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:127)
        at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
        at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
        at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
        at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
        at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:365)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
        at org.apache.solr.cloud.SyncStrategy$1.run(SyncStrategy.java:298)
        at java.util.concurrent.ThreadPoolExecutor.

[jira] [Commented] (SOLR-4891) JsonLoader should preserve field value types from the JSON content stream

2013-06-04 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13674609#comment-13674609
 ] 

Steve Rowe commented on SOLR-4891:
--

If there are no objections I'll commit this later today.

> JsonLoader should preserve field value types from the JSON content stream
> -
>
> Key: SOLR-4891
> URL: https://issues.apache.org/jira/browse/SOLR-4891
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Minor
> Attachments: SOLR-4891.patch
>
>
> JSON content streams carry some basic type information for their field 
> values, as parsed by Noggit: LONG, NUMBER, BIGNUMBER, and BOOLEAN.  
> {{JsonLoader}} should set field value object types in the 
> {{SolrInputDocument}} according to the content stream's data types. 
> Currently {{JsonLoader}} converts all non-{{String}}-typed field values to 
> {{String}}-s.
> There is a comment in {{JsonLoader.parseSingleFieldValue()}}, where the 
> convert-everything-to-string logic happens, that says "for legacy reasons, 
> single values are expected to be strings", but other content stream loaders, 
> e.g. {{JavabinLoader}}, do not flatten type information like this.
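The proposed behavior can be illustrated with a JDK-only sketch (this is not the SOLR-4891 patch, and the class and method names are ours; the real {{JsonLoader}} would switch on the token types Noggit already reports rather than re-parse strings):

```java
import java.math.BigInteger;

/** Illustrative sketch: map raw JSON scalar tokens to typed Java objects
 *  (Boolean, Long, BigInteger, Double) instead of flattening them to String,
 *  mirroring Noggit's BOOLEAN, LONG, BIGNUMBER, and NUMBER token types. */
public class JsonScalarTypes {
    public static Object coerce(String token) {
        if ("true".equals(token) || "false".equals(token)) {
            return Boolean.valueOf(token);              // BOOLEAN
        }
        try {
            return Long.valueOf(token);                 // LONG (fits in 64 bits)
        } catch (NumberFormatException ignored) { }
        try {
            return new BigInteger(token);               // BIGNUMBER (integer > 64 bits)
        } catch (NumberFormatException ignored) { }
        try {
            return Double.valueOf(token);               // NUMBER (floating point)
        } catch (NumberFormatException ignored) { }
        return token;                                   // fall back to String
    }

    public static void main(String[] args) {
        System.out.println(coerce("42").getClass().getSimpleName());   // Long
        System.out.println(coerce("4.2").getClass().getSimpleName());  // Double
        System.out.println(coerce("true").getClass().getSimpleName()); // Boolean
    }
}
```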

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Documentation for Solr/Lucene 4.x, termIndexInterval and limitations of Lucene File format

2013-06-04 Thread Tom Burton-West
Thanks Mike.

I'm running CheckIndex on the 2TB index right now. Hopefully it will
finish running by tomorrow.  I'll send you a copy of the output.

Tom


On Mon, Jun 3, 2013 at 9:04 PM, Michael McCandless <
luc...@mikemccandless.com> wrote:

> Hi Tom,
>
> On Mon, Jun 3, 2013 at 12:11 PM, Tom Burton-West 
> wrote:
>
> > What is the current limit?
>
> I *think* (but would be nice to hear back how many terms you were able
> to index into one segment ;) ) there is no hard limit to the max
> number of terms, now that FSTs can handle more than 2.1 B
> bytes/nodes/arcs.
>
> I'll update those javadocs, thanks!
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Commented] (SOLR-4884) do confluence import of solr ref guide

2013-06-04 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13674520#comment-13674520
 ] 

Hoss Man commented on SOLR-4884:


bq. When checking the content, I think there are going to be pages that use 
plugins not in use in CWIKI that will cause errors on pages. 

... right, my plan was as you suggested there: import and review and fix what 
breaks. 

The key thing in my mind is that, beyond those specific issues, even if we 
have those plugins installed they may not play nice with the autoexport.  We 
just have to review+fix+iterate.

bq. Since you have the rights to import the space, you also have the rights to 
add plugins.

I'm going to avoid doing that w/o explicit sign-off from infra on the 
individual plugins, since it involves running arbitrary binary code downloaded 
from a third party on apache.org hardware, and I don't want to be the guy who 
fucks that up and lets in a trojan.

bq. Or, I can remove references to them from the export I made and resubmit it.

Ah ... interesting, i hadn't considered that it might be easier to fix on the 
lucid side and then re-import.  I'll keep that in mind.

bq. If you don't like the stylesheet...

I'm not worried about the styling in Confluence, since it will have to be redone 
for the autoexport anyway and that's configured differently ... low priority.


> do confluence import of solr ref guide
> --
>
> Key: SOLR-4884
> URL: https://issues.apache.org/jira/browse/SOLR-4884
> Project: Solr
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Hoss Man
>Assignee: Hoss Man
>
> Process for importing Solr Ref Guide export from LucidWorks into 
> cwiki.apache.org, based on the "[Adding a project 
> Space|https://cwiki.apache.org/INFRA/cwiki.html#Cwiki-AddingaProjectSpace]"; 
> instructions from INFRA...
> All steps to be taken by Hoss, who is already a member of the 
> confluence-admin group in CWIKI...
> * Download the [ref guide 4.3 
> zip|https://issues.apache.org/jira/secure/attachment/12585923/SolrRefGuide.4.3.zip]
>  provided by Cassandra as an attachment in parent issue SOLR-4618 to my local 
> computer
> * use the "Remove Space" operation on the existing SOLR space to remove it...
> ** https://cwiki.apache.org/confluence/spaces/editspace.action?key=SOLR
> ** the space is currently empty except for some stub content
> * Use the "Upload a zipped backup to Confluence" form on the Confluence 
> Backup admin page to upload & import that ZIP file into CWIKI
> ** https://cwiki.apache.org/confluence/admin/backup.action
> * Browse the newly created (dynamic) SOLR wiki space to sanity check that the 
> import seemed to have worked properly and the newly re-created SOLR space 
> looks correct
> * Update the Space permissions: 
> https://cwiki.apache.org/confluence/spaces/spacepermissions.action?key=SOLR
> ** grant total access to solr-admins and confluence-administrators (group)
> ** grant access to everything except "mail removal" and "space admin" to 
> solr-committers (group)
> ** grant view & export access to confluence-users (group)
> ** grant view access to autoexport (user)
> ** grant view & export to anonymous (anon)
> ** Remove any individual rights granted to my account by Confluence when the 
> Space was created.
> ** remove any other access that might have survived the import
> * Goto Plugins, and Autoexport the initial site.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4989) Hanging on DocumentsWriterStallControl.waitIfStalled forever

2013-06-04 Thread Jessica Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13674514#comment-13674514
 ] 

Jessica Cheng commented on LUCENE-4989:
---

I strongly suspect the cause is what I outlined in this bug, so if 
doAfterFlush(dwpt); is placed in a second finally block, I think we're fine 
closing this bug, and I can reopen or open a new bug if I see it again. However, 
I don't think this is LUCENE-5002, because there definitely wasn't any blocked 
thread (I double-checked in my thread dump). Thanks!

> Hanging on DocumentsWriterStallControl.waitIfStalled forever
> 
>
> Key: LUCENE-4989
> URL: https://issues.apache.org/jira/browse/LUCENE-4989
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.1
> Environment: Linux 2.6.32
>Reporter: Jessica Cheng
>  Labels: hang
> Fix For: 5.0, 4.3.1
>
>
> In an environment where our underlying storage was timing out on various 
> operations, we find all of our indexing threads eventually stuck in the 
> following state (so far for 4 days):
> "Thread-0" daemon prio=5 Thread id=556  WAITING
>   at java.lang.Object.wait(Native Method)
>   at java.lang.Object.wait(Object.java:503)
>   at 
> org.apache.lucene.index.DocumentsWriterStallControl.waitIfStalled(DocumentsWriterStallControl.java:74)
>   at 
> org.apache.lucene.index.DocumentsWriterFlushControl.waitIfStalled(DocumentsWriterFlushControl.java:676)
>   at 
> org.apache.lucene.index.DocumentsWriter.preUpdate(DocumentsWriter.java:301)
>   at 
> org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:361)
>   at 
> org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1484)
>   at ...
> I have not yet enabled detail logging and tried to reproduce yet, but looking 
> at the code, I see that DWFC.abortPendingFlushes does
> try {
>   dwpt.abort();
>   doAfterFlush(dwpt);
> } catch (Throwable ex) {
>   // ignore - keep on aborting the flush queue
> }
> (and the same for the blocked ones). Since the throwable is ignored, I can't 
> say for sure, but I've seen DWPT.abort throw in other cases, so if it does 
> throw, we'd fail to call doAfterFlush and properly decrement flushBytes. This 
> can be a problem, right? Is it possible to do this instead:
> try {
>   dwpt.abort();
> } catch (Throwable ex) {
>   // ignore - keep on aborting the flush queue
> } finally {
>   try {
> doAfterFlush(dwpt);
>   } catch (Throwable ex2) {
> // ignore - keep on aborting the flush queue
>   }
> }
> It's ugly but safer. Otherwise, maybe at least add logging for the throwable 
> just to make sure this is/isn't happening.
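The pattern being proposed can be demonstrated in isolation with a JDK-only sketch (this is not the actual DocumentsWriterFlushControl code; method names are ours): wrapping the cleanup in its own try block inside finally guarantees it runs even when abort() throws.

```java
/** Sketch of the proposed fix: doAfterFlush() must run even if abort()
 *  throws, otherwise per-flush bookkeeping (e.g. flushBytes) leaks. */
public class AbortCleanupSketch {

    static void abortThatThrows() {
        // Simulates DWPT.abort() failing, e.g. on a storage timeout.
        throw new RuntimeException("simulated storage timeout during abort");
    }

    static void doAfterFlush() {
        // In Lucene, this is where flushBytes would be decremented.
    }

    /** Returns true if the cleanup ran, mirroring the nested-finally pattern. */
    public static boolean abortPendingFlush() {
        boolean cleanupRan = false;
        try {
            abortThatThrows();
        } catch (Throwable ex) {
            // ignore - keep on aborting the flush queue
        } finally {
            try {
                doAfterFlush();        // runs even though abort() threw
                cleanupRan = true;
            } catch (Throwable ex2) {
                // ignore - keep on aborting the flush queue
            }
        }
        return cleanupRan;
    }

    public static void main(String[] args) {
        System.out.println("cleanup ran: " + abortPendingFlush()); // true
    }
}
```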

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-1913) QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations on Integer Fields

2013-06-04 Thread Deepthi Sigireddi (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13674487#comment-13674487
 ] 

Deepthi Sigireddi commented on SOLR-1913:
-

Hi Israel,
The plugin works great after I migrated it to comply with the 4.x libraries. 
Thanks for building it!
Any chance you can create a patch for this issue and attach it? Otherwise I'm 
planning to create a patch so that the issue can progress through the commit 
process. Do let me know.

> QParserPlugin plugin for Search Results Filtering Based on Bitwise Operations 
> on Integer Fields
> ---
>
> Key: SOLR-1913
> URL: https://issues.apache.org/jira/browse/SOLR-1913
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Reporter: Israel Ekpo
> Fix For: 4.4
>
> Attachments: bitwise_filter_plugin.jar, SOLR-1913.bitwise.tar.gz, 
> SOLR-1913-src.tar.gz, solr-bitwise-plugin.jar, WEB-INF lib.jpg
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> BitwiseQueryParserPlugin is a org.apache.solr.search.QParserPlugin that 
> allows 
> users to filter the documents returned from a query
> by performing bitwise operations between a particular integer field in the 
> index
> and the specified value.
> This Solr plugin is based on the BitwiseFilter in LUCENE-2460
> See https://issues.apache.org/jira/browse/LUCENE-2460 for more details
> This is the syntax for searching in Solr:
> http://localhost:8983/path/to/solr/select/?q={!bitwise field=fieldname 
> op=OPERATION_NAME source=sourcevalue negate=boolean}remainder of query
> Example :
> http://localhost:8983/solr/bitwise/select/?q={!bitwise field=user_permissions 
> op=AND source=3 negate=true}state:FL
> The negate parameter is optional
> The field parameter is the name of the integer field
> The op parameter is the name of the operation; one of {AND, OR, XOR}
> The source parameter is the specified integer value
> The negate parameter is a boolean indicating whether or not to negate the 
> results of the bitwise operation
> To test out this plugin, simply copy the jar file containing the plugin 
> classes into your $SOLR_HOME/lib directory and then
> add the following to your solrconfig.xml file after the dismax request 
> handler:
> <queryParser name="bitwise" class="org.apache.solr.bitwise.BitwiseQueryParserPlugin" basedOn="dismax" />
> Restart your servlet container.
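One plausible reading of the matching rule described above can be sketched in plain Java (the authoritative semantics are defined by the plugin in LUCENE-2460; the class and method names here are ours, for illustration only): a document matches when the bitwise operation between the field value and the source yields a non-zero result, optionally negated.

```java
/** Illustrative sketch of the {!bitwise field=... op=... source=... negate=...}
 *  matching rule: apply AND/OR/XOR between the document's integer field value
 *  and the source value, treat a non-zero result as a match, then negate
 *  if requested. */
public class BitwiseMatchSketch {
    public static boolean matches(int fieldValue, String op, int source, boolean negate) {
        int result;
        if ("AND".equals(op)) {
            result = fieldValue & source;
        } else if ("OR".equals(op)) {
            result = fieldValue | source;
        } else if ("XOR".equals(op)) {
            result = fieldValue ^ source;
        } else {
            throw new IllegalArgumentException("op must be AND, OR or XOR: " + op);
        }
        boolean match = result != 0;
        return negate ? !match : match;
    }

    public static void main(String[] args) {
        // user_permissions=5 (binary 101), source=3 (binary 011): 5 & 3 = 1 -> match
        System.out.println(matches(5, "AND", 3, false)); // true
        // negate=true inverts the outcome, as in the example query above
        System.out.println(matches(5, "AND", 3, true));  // false
    }
}
```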

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-4879) Indexing a field of type solr.SpatialRecursivePrefixTreeFieldType fails when at least two vertexes are more than 180 degrees apart

2013-06-04 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley reassigned SOLR-4879:
--

Assignee: David Smiley

> Indexing a field of type solr.SpatialRecursivePrefixTreeFieldType fails when 
> at least two vertexes are more than 180 degrees apart
> --
>
> Key: SOLR-4879
> URL: https://issues.apache.org/jira/browse/SOLR-4879
> Project: Solr
>  Issue Type: Bug
> Environment: Linux, Solr 4.0.0, Solr 4.3.0
>Reporter: Øystein Torget
>Assignee: David Smiley
>
> When trying to index a field of the type 
> solr.SpatialRecursivePrefixTreeFieldType the indexing will fail if two 
> vertexes are more than 180 longitudinal degrees apart.
> For instance this polygon will fail: 
> POLYGON((-161 49,  0 49,   20 49,   20 89.1,  0 89.1,   -161 89.2,-161 
> 49))
> but this will not.
> POLYGON((-160 49,  0 49,   20 49,   20 89.1,  0 89.1,   -160 89.2,-160 
> 49))
> This contradicts the documentation found here: 
> http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4
> The documentation states that each vertex must be less than 180 longitudinal 
> degrees apart from the previous vertex.
> Relevant parts from the schema.xml file:
> 
>  class="solr.SpatialRecursivePrefixTreeFieldType"
>
> spatialContextFactory="com.spatial4j.core.context.jts.JtsSpatialContextFactory"
>distErrPct="0.025"
>maxDistErr="0.09"
>units="degrees"
> />
>  stored="true" />

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4879) Indexing a field of type solr.SpatialRecursivePrefixTreeFieldType fails when at least two vertexes are more than 180 degrees apart

2013-06-04 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13674455#comment-13674455
 ] 

David Smiley commented on SOLR-4879:


Ok, this is a bona-fide bug in Spatial4j.  I reported it there: 
https://github.com/spatial4j/spatial4j/issues/41
Once this gets fixed (which won't take long as it appears to be a simple bug) 
then you'll need to build a development non-official release of Spatial4j and 
deploy that into Solr, replacing Spatial4j 0.3.  I'll close this issue once 
Spatial4j's next release ends up shipping with Lucene/Solr.

Thanks for reporting the bug, Oystein!

> Indexing a field of type solr.SpatialRecursivePrefixTreeFieldType fails when 
> at least two vertexes are more than 180 degrees apart
> --
>
> Key: SOLR-4879
> URL: https://issues.apache.org/jira/browse/SOLR-4879
> Project: Solr
>  Issue Type: Bug
> Environment: Linux, Solr 4.0.0, Solr 4.3.0
>Reporter: Øystein Torget
>
> When trying to index a field of the type 
> solr.SpatialRecursivePrefixTreeFieldType the indexing will fail if two 
> vertexes are more than 180 longitudinal degrees apart.
> For instance this polygon will fail: 
> POLYGON((-161 49,  0 49,   20 49,   20 89.1,  0 89.1,   -161 89.2,-161 
> 49))
> but this will not.
> POLYGON((-160 49,  0 49,   20 49,   20 89.1,  0 89.1,   -160 89.2,-160 
> 49))
> This contradicts the documentation found here: 
> http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4
> The documentation states that each vertex must be less than 180 longitudinal 
> degrees apart from the previous vertex.
> Relevant parts from the schema.xml file:
> 
>  class="solr.SpatialRecursivePrefixTreeFieldType"
>
> spatialContextFactory="com.spatial4j.core.context.jts.JtsSpatialContextFactory"
>distErrPct="0.025"
>maxDistErr="0.09"
>units="degrees"
> />
>  stored="true" />

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (64bit/ibm-j9-jdk7) - Build # 5981 - Still Failing!

2013-06-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/5981/
Java: 64bit/ibm-j9-jdk7 
-Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;}

1 tests failed.
REGRESSION:  org.apache.solr.core.TestJmxIntegration.testJmxRegistration

Error Message:
No SolrDynamicMBeans found

Stack Trace:
java.lang.AssertionError: No SolrDynamicMBeans found
at 
__randomizedtesting.SeedInfo.seed([34B10D5242133434:BA6069682F526C51]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestJmxIntegration.testJmxRegistration(TestJmxIntegration.java:94)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:88)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:613)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:780)




Build Log:
[...truncated 9223 lines...]
[junit4:junit4] Suite: org.apache.solr.core.TestJmxIn

[JENKINS] Lucene-Solr-4.x-MacOSX (64bit/jdk1.7.0) - Build # 516 - Still Failing!

2013-06-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/516/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 9243 lines...]
[junit4:junit4] ERROR: JVM J0 ended with an exception, command line: 
/Library/Java/JavaVirtualMachines/jdk1.7.0_21.jdk/Contents/Home/jre/bin/java 
-XX:-UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/heapdumps
 -Dtests.prefix=tests -Dtests.seed=936B1AF99911A1D4 -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=4.4 
-Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true 
-Dtests.asserts.gracious=false -Dtests.multiplier=1 -DtempDir=. 
-Djava.io.tmpdir=. 
-Djunit4.tempDir=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/build/solr-core/test/temp
 
-Dclover.db.dir=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/clover/db
 -Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Djava.security.policy=/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/tools/junit4/tests.policy
 -Dlucene.version=4.4-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -classpath 
/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/build/solr-core/classes/test:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/build/solr-test-framework/classes/java:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/build/solr-core/test-files:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/test-framework/classes/java:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/codecs/classes/java:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/build/solr-solrj/classes/java:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/build/solr-core/classes/java:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/analysis/common/lucene-analyzers-common-4.4-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/analysis/kuromoji/lucene-analyzers-kuromoji-4.4-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/analysis/phonetic/lucene-analyzers-phonetic-4.4-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/codecs/lucene-codecs-4.4-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/highlighter/lucene-highlighter-4.4-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/memory/lucene-memory-4.4-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/misc/lucene-misc-4.4-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/spatial/lucene-spatial-4.4-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/suggest/lucene-suggest-4.4-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/grouping/lucene-grouping-4.4-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/queries/lucene-queries-4.4-SN
APSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/lucene/build/queryparser/lucene-queryparser-4.4-SNAPSHOT.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/core/lib/cglib-nodep-2.2.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/core/lib/commons-cli-1.2.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/core/lib/commons-codec-1.7.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/core/lib/commons-fileupload-1.2.1.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/core/lib/commons-lang-2.6.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/core/lib/easymock-3.0.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/core/lib/guava-14.0.1.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/core/lib/javax.servlet-api-3.0.1.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/core/lib/objenesis-1.2.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/core/lib/org.restlet-2.1.1.jar:/Users/jenkins/jenkins-slave/workspace/Lucene-Solr-4.x-MacOSX/solr/core/lib/org.restlet.ext.servlet-2.1.1.jar:/Users/jenkins/jenkins-s

[jira] [Commented] (SOLR-4788) Multiple Entities DIH delta import: dataimporter.[entityName].last_index_time is empty

2013-06-04 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13674395#comment-13674395
 ] 

Shawn Heisey commented on SOLR-4788:


Arun Rangarajan posted to solr-user that he was running into this problem on an 
upgrade from 3.6.2 to 4.2.1, so now we know that it worked properly in the 3.x 
versions.


> Multiple Entities DIH delta import: dataimporter.[entityName].last_index_time 
> is empty
> --
>
> Key: SOLR-4788
> URL: https://issues.apache.org/jira/browse/SOLR-4788
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.2, 4.3
> Environment: solr-spec
> 4.2.1.2013.03.26.08.26.55
> solr-impl
> 4.2.1 1461071 - mark - 2013-03-26 08:26:55
> lucene-spec
> 4.2.1
> lucene-impl
> 4.2.1 1461071 - mark - 2013-03-26 08:23:34
> OR
> solr-spec
> 4.3.0
> solr-impl
> 4.3.0 1477023 - simonw - 2013-04-29 15:10:12
> lucene-spec
> 4.3.0
> lucene-impl
> 4.3.0 1477023 - simonw - 2013-04-29 14:55:14
>Reporter: chakming wong
>Assignee: Shalin Shekhar Mangar
> Attachments: entitytest.patch, entitytest.patch, entitytest.patch, 
> entitytest.patch, entitytest.patch
>
>
> {code:title=conf/dataimport.properties|borderStyle=solid}
> entity1.last_index_time=2013-05-06 03\:02\:06
> last_index_time=2013-05-06 03\:05\:22
> entity2.last_index_time=2013-05-06 03\:03\:14
> entity3.last_index_time=2013-05-06 03\:05\:22
> {code}
> {code:title=conf/solrconfig.xml|borderStyle=solid}
> <?xml version="1.0" encoding="UTF-8" ?>
> ...
> <requestHandler name="/dataimport" 
> class="org.apache.solr.handler.dataimport.DataImportHandler">
> <lst name="defaults">
> <str name="config">dihconfig.xml</str>
> </lst>
> </requestHandler>
> ...
> {code}
> {code:title=conf/dihconfig.xml|borderStyle=solid}
> <?xml version="1.0" encoding="UTF-8" ?>
> <dataConfig>
> <dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver"
> url="jdbc:mysql://*:*/*"
> user="*" password="*"/>
> <document>
> <entity name="entity1"
> query="SELECT * FROM table_a"
> deltaQuery="SELECT table_a_id FROM table_b WHERE 
> last_modified > '${dataimporter.entity1.last_index_time}'"
> deltaImportQuery="SELECT * FROM table_a WHERE id = 
> '${dataimporter.entity1.id}'"
> transformer="TemplateTransformer">
>  ...
>   ... 
> ... 
> 
> 
>   ... 
>   ...
> 
> 
>   ... 
>   ...
> 
> 
> 
> {code} 
> In the above setup, *dataimporter.entity1.last_index_time* is an *empty string*, 
> which causes the SQL query to fail.
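The expected lookup behavior can be sketched with plain java.util.Properties (this is our own simplification for illustration, not DIH source code): the entity-scoped property should win, with the unscoped last_index_time as a fallback, and neither should ever resolve to an empty string.

```java
import java.util.Properties;

/** Sketch of the property resolution the bug breaks:
 *  dataimporter.<entity>.last_index_time should resolve to the
 *  entity-scoped value in dataimport.properties, falling back to
 *  the global last_index_time when no scoped value exists. */
public class LastIndexTimeLookup {
    public static String lastIndexTime(Properties props, String entity) {
        String scoped = props.getProperty(entity + ".last_index_time");
        return scoped != null ? scoped : props.getProperty("last_index_time");
    }

    public static void main(String[] args) {
        Properties p = new Properties();
        p.setProperty("last_index_time", "2013-05-06 03:05:22");
        p.setProperty("entity1.last_index_time", "2013-05-06 03:02:06");
        System.out.println(lastIndexTime(p, "entity1")); // entity-scoped value
        System.out.println(lastIndexTime(p, "entity9")); // falls back to global
    }
}
```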

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4884) do confluence import of solr ref guide

2013-06-04 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13674353#comment-13674353
 ] 

Cassandra Targett commented on SOLR-4884:
-

That looks right and matches the checklist I use for LucidWorks docs.

When checking the content, I think there are going to be pages that use plugins 
not in use in CWIKI that will cause errors on pages. I mentioned these in my 
initial comment to SOLR-4618 (item 5):

https://issues.apache.org/jira/browse/SOLR-4618?focusedCommentId=13607963&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13607963

Since you have the rights to import the space, you also have the rights to add 
plugins. Or, I can remove references to them from the export I made and 
resubmit it. The really big one is the Navigation Plugin, which I used to add 
Next-Previous-Up page links at the bottom of nearly every page; it's also the 
easiest to remove if necessary. That plugin, however, is the most difficult to 
find and install - for some reason it's not in Atlassian's plugin marketplace, 
but I can upload the .jar for you if you want or point you to where I got it.

I'll be interested to see how all the page formatting survives through 
autoexport - particularly the page with all the language tokenizers/filters.

If you don't like the stylesheet (assuming it survives the import; I can't 
remember if it does or not), it can be easily removed/modified from Space Admin 
-> Stylesheet.

> do confluence import of solr ref guide
> --
>
> Key: SOLR-4884
> URL: https://issues.apache.org/jira/browse/SOLR-4884
> Project: Solr
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Hoss Man
>Assignee: Hoss Man
>
> Process for importing Solr Ref Guide export from LucidWorks into 
> cwiki.apache.org, based on the "[Adding a project 
> Space|https://cwiki.apache.org/INFRA/cwiki.html#Cwiki-AddingaProjectSpace]"; 
> instructions from INFRA...
> All steps to be taken by Hoss, who is already a member of the 
> confluence-admin group in CWIKI...
> * Download the [ref guide 4.3 
> zip|https://issues.apache.org/jira/secure/attachment/12585923/SolrRefGuide.4.3.zip]
>  provided by Cassandra as an attachment in parent issue SOLR-4618 to my local 
> computer
> * use the "Remove Space" operation on the existing SOLR space to remove it...
> ** https://cwiki.apache.org/confluence/spaces/editspace.action?key=SOLR
> ** the space is currently empty except for some stub content
> * Use the "Upload a zipped backup to Confluence" form on the Confluence 
> Backup admin page to upload & import that ZIP file into CWIKI
> ** https://cwiki.apache.org/confluence/admin/backup.action
> * Browse the newly created (dynamic) SOLR wiki space to sanity check that the 
> import seemed to have worked properly and the newly re-created SOLR space 
> looks correct
> * Update the Space permissions: 
> https://cwiki.apache.org/confluence/spaces/spacepermissions.action?key=SOLR
> ** grant total access to solr-admins and confluence-administrators (group)
> ** grant access to everything except "mail removal" and "space admin" to 
> solr-committers (group)
> ** grant view & export access to confluence-users (group)
> ** grant view access to autoexport (user)
> ** grant view & export to anonymous (anon)
> ** Remove any individual rights granted to my account by Confluence when the 
> Space was created.
> ** remove any other access that might have survived the import
> * Goto Plugins, and Autoexport the initial site.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Solr - ORM like layer

2013-06-04 Thread Tuğcem Oral
Hi folks,

I wonder whether there exists an ORM-like layer for Solr that generates
the Solr schema from a given complex object type and indexes a given
list of corresponding objects. I wrote a simple module for that need in one
of my projects and am happy to generalize it and contribute it to Solr,
if no such module already exists or is in progress.

Thanks all.

-- 
TO
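As a sketch of the kind of ORM-like layer described above (all names and the type mapping here are hypothetical illustrations, not an existing Solr module or API), one could derive schema field entries from a POJO via reflection:

```java
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: derive Solr schema <field/> entries from a POJO.
// The type mapping is illustrative only.
public class SchemaGenerator {
    static String solrType(Class<?> t) {
        if (t == String.class) return "string";
        if (t == int.class || t == Integer.class) return "int";
        if (t == long.class || t == Long.class) return "long";
        if (t == double.class || t == Double.class) return "double";
        if (t == boolean.class || t == Boolean.class) return "boolean";
        return "text_general"; // fallback for complex types
    }

    static List<String> fieldsFor(Class<?> clazz) {
        List<String> out = new ArrayList<String>();
        for (Field f : clazz.getDeclaredFields()) {
            out.add(String.format(
                "<field name=\"%s\" type=\"%s\" indexed=\"true\" stored=\"true\"/>",
                f.getName(), solrType(f.getType())));
        }
        return out;
    }

    // Example domain object
    static class Product {
        String title;
        double price;
        boolean inStock;
    }

    public static void main(String[] args) {
        // Emit one <field/> line per declared field of Product
        for (String line : fieldsFor(Product.class)) {
            System.out.println(line);
        }
    }
}
```

A fuller version would also need to handle nested objects and collections (e.g. by flattening into dynamic fields), which is where most of the real design work of such a layer would lie.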


[jira] [Commented] (SOLR-4879) Indexing a field of type solr.SpatialRecursivePrefixTreeFieldType fails when at least two vertexes are more than 180 degrees apart

2013-06-04 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-4879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13674326#comment-13674326
 ] 

Øystein Torget commented on SOLR-4879:
--

[~dsmiley] The problem I am having is that even if each point is less than 180 
degrees from the previous one, the indexing still fails as long as any two 
points are more than 180 degrees apart. I.e. it is not possible to make a 
polygon that spans more than half the globe. Is that what is meant in the 
documentation?

> Indexing a field of type solr.SpatialRecursivePrefixTreeFieldType fails when 
> at least two vertexes are more than 180 degrees apart
> --
>
> Key: SOLR-4879
> URL: https://issues.apache.org/jira/browse/SOLR-4879
> Project: Solr
>  Issue Type: Bug
> Environment: Linux, Solr 4.0.0, Solr 4.3.0
>Reporter: Øystein Torget
>
> When trying to index a field of the type 
> solr.SpatialRecursivePrefixTreeFieldType, the indexing will fail if two 
> vertexes are more than 180 longitudinal degrees apart.
> For instance this polygon will fail: 
> POLYGON((-161 49, 0 49, 20 49, 20 89.1, 0 89.1, -161 89.2, -161 49))
> but this will not:
> POLYGON((-160 49, 0 49, 20 49, 20 89.1, 0 89.1, -160 89.2, -160 49))
> This contradicts the documentation found here: 
> http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4
> The documentation states that each vertex must be less than 180 longitudinal 
> degrees apart from the previous vertex.
> Relevant parts from the schema.xml file:
> <fieldType name="..."
>class="solr.SpatialRecursivePrefixTreeFieldType"
>spatialContextFactory="com.spatial4j.core.context.jts.JtsSpatialContextFactory"
>distErrPct="0.025"
>maxDistErr="0.09"
>units="degrees" />
> <field name="..." type="..." indexed="true" stored="true" />
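The report above can be checked outside Solr: the failing polygon's longitudes span 181 degrees overall, while the working one spans exactly 180. A minimal, hypothetical pre-check helper (not part of Solr or Spatial4j):

```java
// Hypothetical pre-check (not part of Solr or Spatial4j): flag polygons whose
// longitudes span more than 180 degrees overall, which is what the failing
// polygon in the report does.
public class LonSpanCheck {
    static double maxLonSpan(double[] lons) {
        double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY;
        for (double lon : lons) {
            min = Math.min(min, lon);
            max = Math.max(max, lon);
        }
        return max - min;
    }

    public static void main(String[] args) {
        // Longitudes of the failing polygon: spans 181 degrees (20 - (-161))
        double[] failing = {-161, 0, 20, 20, 0, -161, -161};
        // Longitudes of the working polygon: spans exactly 180 degrees
        double[] working = {-160, 0, 20, 20, 0, -160, -160};
        System.out.println(maxLonSpan(failing)); // 181.0
        System.out.println(maxLonSpan(working)); // 180.0
    }
}
```

Note this measures the total span across all vertices, not the adjacent-vertex distance the documentation mentions, which is exactly the discrepancy being reported.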




Re: How to set HTTP header on Solr response?

2013-06-04 Thread Tomás Fernández Löbbe
I'm not sure which would be the best way to do this. The main concern on
SOLR-2079 was not to make Solr or any of its built-in components depend
directly on HTTP, since Solr can be used in other ways (although, as I don't
know much about Solr's UI, I'm not sure whether this still applies here).
Maybe you could add this extra information to the SolrResponse, and then
move it to the HttpResponse headers in the writeResponse method of
SolrDispatchFilter, similar to what Solr does today with exceptions.

Tomás
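The suggestion above could be sketched roughly as follows, using plain Maps as stand-ins for the SolrResponse context and HttpServletResponse (the key name RESPONSE_HEADERS_KEY and all types here are hypothetical, not actual Solr API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the idea only: RESPONSE_HEADERS_KEY and the Map-based stand-ins
// are hypothetical, not real Solr or servlet API.
public class HeaderPassThrough {
    static final String RESPONSE_HEADERS_KEY = "httpHeaders";

    // Stand-in for a SolrResponse carrying extra per-response values
    static Map<String, Object> solrResponse = new LinkedHashMap<String, Object>();

    // Stand-in for HttpServletResponse.setHeader(...)
    static Map<String, String> httpHeaders = new LinkedHashMap<String, String>();

    // What a writeResponse-style hook in the dispatch filter might do:
    // copy any headers stashed on the Solr-level response onto the HTTP layer.
    @SuppressWarnings("unchecked")
    static void writeResponse() {
        Object v = solrResponse.get(RESPONSE_HEADERS_KEY);
        if (v instanceof Map) {
            for (Map.Entry<String, String> e : ((Map<String, String>) v).entrySet()) {
                httpHeaders.put(e.getKey(), e.getValue());
            }
        }
    }

    public static void main(String[] args) {
        Map<String, String> extra = new LinkedHashMap<String, String>();
        extra.put("Content-Disposition", "attachment; filename=data.bin");
        solrResponse.put(RESPONSE_HEADERS_KEY, extra);
        writeResponse();
        System.out.println(httpHeaders.get("Content-Disposition"));
        // prints: attachment; filename=data.bin
    }
}
```

This keeps the core components HTTP-agnostic: only the dispatch layer knows the extra values become HTTP headers.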


On Mon, Jun 3, 2013 at 7:09 PM, Shawn Heisey  wrote:

> On 6/3/2013 3:22 PM, Tomás Fernández Löbbe wrote:
>
>> I don't think that's possible now. I recently had a need to access the
>> request headers and there was no way to do it. I added a change to make
>> the original http request to be added to the SolrQueryRequest context
>> (SOLR-2079), but I don't think there is an option to do something
>> similar with the response at this point (at least from what I see in
>> SolrDispatchFilter).
>>
>
> Do you have any advice about how I might expose what I need without
> breaking anything?  Perhaps it might be simply a matter of including a
> NamedList with the headers/values that I need, similar to the way that the
> file content is included, and accessing that object at the point where the
> http response is built.  Any pointers about where to put this code would be
> awesome.
>
>
> Thanks,
> Shawn
>
>
> -
> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Commented] (LUCENE-4989) Hanging on DocumentsWriterStallControl.waitIfStalled forever

2013-06-04 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13674180#comment-13674180
 ] 

Simon Willnauer commented on LUCENE-4989:
-

I actually committed the fix outlined above in LUCENE-5002 and I think that was 
the cause. I wasn't able to reproduce the hang, though, but I think we can 
close this issue.

> Hanging on DocumentsWriterStallControl.waitIfStalled forever
> 
>
> Key: LUCENE-4989
> URL: https://issues.apache.org/jira/browse/LUCENE-4989
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.1
> Environment: Linux 2.6.32
>Reporter: Jessica Cheng
>  Labels: hang
> Fix For: 5.0, 4.3.1
>
>
> In an environment where our underlying storage was timing out on various 
> operations, we find all of our indexing threads eventually stuck in the 
> following state (so far for 4 days):
> "Thread-0" daemon prio=5 Thread id=556  WAITING
>   at java.lang.Object.wait(Native Method)
>   at java.lang.Object.wait(Object.java:503)
>   at 
> org.apache.lucene.index.DocumentsWriterStallControl.waitIfStalled(DocumentsWriterStallControl.java:74)
>   at 
> org.apache.lucene.index.DocumentsWriterFlushControl.waitIfStalled(DocumentsWriterFlushControl.java:676)
>   at 
> org.apache.lucene.index.DocumentsWriter.preUpdate(DocumentsWriter.java:301)
>   at 
> org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:361)
>   at 
> org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1484)
>   at ...
> I have not yet enabled detail logging and tried to reproduce yet, but looking 
> at the code, I see that DWFC.abortPendingFlushes does
> try {
>   dwpt.abort();
>   doAfterFlush(dwpt);
> } catch (Throwable ex) {
>   // ignore - keep on aborting the flush queue
> }
> (and the same for the blocked ones). Since the throwable is ignored, I can't 
> say for sure, but I've seen DWPT.abort throw in other cases, so if it does 
> throw, we'd fail to call doAfterFlush and properly decrement flushBytes. This 
> can be a problem, right? Is it possible to do this instead:
> try {
>   dwpt.abort();
> } catch (Throwable ex) {
>   // ignore - keep on aborting the flush queue
> } finally {
>   try {
> doAfterFlush(dwpt);
>   } catch (Throwable ex2) {
> // ignore - keep on aborting the flush queue
>   }
> }
> It's ugly but safer. Otherwise, maybe at least add logging for the throwable 
> just to make sure this is/isn't happening.
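A minimal illustration of why the try/finally shape proposed above is safer: with the original single try block, a throwing abort() skips the cleanup step, while the finally-based variant still runs it (abort/cleanup here are simple stand-ins, not the real DWPT/doAfterFlush methods):

```java
// Stand-in demo for the cleanup bug described above: abort() always throws,
// and cleanup() plays the role of doAfterFlush(dwpt).
public class AbortCleanupDemo {
    static int cleanups = 0;

    static void abort() { throw new RuntimeException("abort failed"); }
    static void cleanup() { cleanups++; }

    // Original shape: one catch around both calls; cleanup is skipped on throw
    static void originalShape() {
        try {
            abort();
            cleanup();
        } catch (Throwable ex) {
            // ignore - keep on aborting the flush queue
        }
    }

    // Proposed shape: finally guarantees cleanup even if abort throws
    static void proposedShape() {
        try {
            abort();
        } catch (Throwable ex) {
            // ignore - keep on aborting the flush queue
        } finally {
            try {
                cleanup();
            } catch (Throwable ex2) {
                // ignore - keep on aborting the flush queue
            }
        }
    }

    public static void main(String[] args) {
        originalShape();
        int afterOriginal = cleanups; // still 0: cleanup was skipped
        proposedShape();
        System.out.println(afterOriginal + " " + cleanups); // prints "0 1"
    }
}
```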
