[jira] [Commented] (LUCENE-5931) DirectoryReader.openIfChanged(oldReader, commit) incorrectly assumes given commit point has deletes/field updates
[ https://issues.apache.org/jira/browse/LUCENE-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14138564#comment-14138564 ] Vitaly Funstein commented on LUCENE-5931: - Mike/Robert, I have a follow-up question. I have backported the fix to 4.6 and now I believe I am seeing another serious issue here. :( If the old reader passed in to {{DirectoryReader.openIfChanged(DirectoryReader, IndexCommit)}} is actually an NRT reader, then it seems that if there is unflushed/uncommitted data in the associated writer's buffers, in particular deletes, the returned reader will see those changes - thus violating the intent of opening the index at just the commit point we wanted, frozen in time. Here's my original test case, modified to show the problem:

{code}
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.io.File;

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field.Store;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexCommit;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy;
import org.apache.lucene.index.ReaderManager;
import org.apache.lucene.index.SnapshotDeletionPolicy;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class CommitReuseTest {

  private final File path = new File("indexDir");
  private IndexWriter writer;
  private final SnapshotDeletionPolicy snapshotter =
      new SnapshotDeletionPolicy(new KeepOnlyLastCommitDeletionPolicy());

  @Before
  public void initIndex() throws Exception {
    path.mkdirs();
    IndexWriterConfig idxWriterCfg = new IndexWriterConfig(Version.LUCENE_46, null);
    idxWriterCfg.setIndexDeletionPolicy(snapshotter);
    idxWriterCfg.setInfoStream(System.out);
    Directory dir = FSDirectory.open(path);
    writer = new IndexWriter(dir, idxWriterCfg);
    writer.commit(); // make sure all index metadata is written out
  }

  @After
  public void stop() throws Exception {
    writer.close();
  }

  @Test
  public void test() throws Exception {
    Document doc;
    ReaderManager rm = new ReaderManager(writer, true);

    // Index some data
    for (int i = 0; i < 100; i++) {
      doc = new Document();
      doc.add(new StringField("key-" + i, "ABC", Store.YES));
      writer.addDocument(doc);
    }
    writer.commit();
    IndexCommit ic1 = snapshotter.snapshot();

    doc = new Document();
    doc.add(new StringField("key-" + 0, "AAA", Store.YES));
    writer.updateDocument(new Term("key-" + 0, "ABC"), doc);
    rm.maybeRefreshBlocking();
    DirectoryReader latest = rm.acquire();
    assertTrue(latest.hasDeletions());

    // This reader will be used for searching against commit point 1
    DirectoryReader searchReader = DirectoryReader.openIfChanged(latest, ic1);
    //assertFalse(searchReader.hasDeletions()); // XXX - this fails too!
    rm.release(latest);
    IndexSearcher s = new IndexSearcher(searchReader);
    Query q = new TermQuery(new Term("key-0", "ABC"));
    TopDocs td = s.search(q, 10);
    assertEquals(1, td.totalHits);
    searchReader.close();
    rm.close();
    snapshotter.release(ic1);
  }
}
{code}

Note that if I comment out the {{updateDocument()}} call, the test passes. Also, if you only have one entry in the index, then it appears that while refreshing the NRT reader, the segment containing just the single delete will be removed, making it look like the test passes:

{noformat}
IW 0 [Wed Sep 17 22:32:47 PDT 2014; main]: drop 100% deleted segments: _4(4.6):c1/1
{noformat}

This output does not appear when running the code above, unchanged. Hope this helps...
I can't make further headway myself though. DirectoryReader.openIfChanged(oldReader, commit) incorrectly assumes given commit point has deletes/field updates - Key: LUCENE-5931 URL: https://issues.apache.org/jira/browse/LUCENE-5931 Project: Lucene - Core Issue Type: Bug Components: core/index Affects Versions: 4.6.1 Reporter: Vitaly Funstein Assignee: Michael McCandless Priority: Critical Attachments: CommitReuseTest.java, LUCENE-5931.patch, LUCENE-5931.patch, LUCENE-5931.patch
[jira] [Commented] (SOLR-6491) Umbrella JIRA for managing the leader assignments
[ https://issues.apache.org/jira/browse/SOLR-6491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14138576#comment-14138576 ] Ramkumar Aiyengar commented on SOLR-6491: - +1 to that; the automatic rebalance should certainly be preferred, and the manual reordering should only be for cases where that doesn't suffice. Umbrella JIRA for managing the leader assignments - Key: SOLR-6491 URL: https://issues.apache.org/jira/browse/SOLR-6491 Project: Solr Issue Type: Improvement Affects Versions: 4.11, 5.0 Reporter: Erick Erickson Assignee: Erick Erickson Leaders can currently get out of balance due to the sequence of how nodes are brought up in a cluster. For very good reasons shard leadership cannot be permanently assigned. However, it seems reasonable that a sys admin could optionally specify that a particular node be the _preferred_ leader for a particular collection/shard. During leader election, preference would be given to any node so marked when electing any leader. So the proposal here is to add another role for preferredLeader to the collections API, something like ADDROLE?role=preferredLeader&collection=collection_name&shard=shardId Second, it would be good to have a new collections API call like ELECTPREFERREDLEADERS?collection=collection_name (I really hate that name so far, but you see the idea). That command would (asynchronously?) make an attempt to transfer leadership for each shard in a collection to the leader labeled as the preferred leader by the new ADDROLE role. I'm going to start working on this, any suggestions welcome! This will subsume several other JIRAs; I'll link them momentarily. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
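For concreteness, the two calls proposed above could look like the following. This is only a sketch of the proposal: the action names come straight from the comment (ELECTPREFERREDLEADERS is explicitly a placeholder name), and the `/admin/collections` base path is an assumption about where the new actions would live, not a released API.

```java
// Sketch of the Collections API URLs proposed in the comment above.
// Parameter and action names are the proposal's, not an existing Solr API.
class PreferredLeaderUrlDemo {

    // ADDROLE with the new preferredLeader role for a given collection/shard.
    static String addRoleUrl(String baseUrl, String collection, String shard) {
        return baseUrl + "/admin/collections?action=ADDROLE"
                + "&role=preferredLeader"
                + "&collection=" + collection
                + "&shard=" + shard;
    }

    // The (tentatively named) command that tries to transfer leadership of
    // each shard to its preferred leader.
    static String electPreferredLeadersUrl(String baseUrl, String collection) {
        return baseUrl + "/admin/collections?action=ELECTPREFERREDLEADERS"
                + "&collection=" + collection;
    }

    public static void main(String[] args) {
        System.out.println(addRoleUrl("http://localhost:8983/solr", "collection1", "shard1"));
        System.out.println(electPreferredLeadersUrl("http://localhost:8983/solr", "collection1"));
    }
}
```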
[jira] [Commented] (SOLR-6115) Cleanup enum/string action types in Overseer, OverseerCollectionProcessor and CollectionHandler
[ https://issues.apache.org/jira/browse/SOLR-6115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14138616#comment-14138616 ] ASF subversion and git services commented on SOLR-6115: --- Commit 1625897 from sha...@apache.org in branch 'dev/trunk' [ https://svn.apache.org/r1625897 ] SOLR-6115: Fix enum usage in DeleteReplicaTest Cleanup enum/string action types in Overseer, OverseerCollectionProcessor and CollectionHandler --- Key: SOLR-6115 URL: https://issues.apache.org/jira/browse/SOLR-6115 Project: Solr Issue Type: Task Components: SolrCloud Reporter: Shalin Shekhar Mangar Priority: Minor Fix For: 4.9, 5.0 Attachments: SOLR-6115.patch The enum/string handling for actions in Overseer and OCP is a mess. We should fix it. From: https://issues.apache.org/jira/browse/SOLR-5466?focusedCommentId=13918059&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13918059 {quote} I started to untangle the fact that we have all the strings in OverseerCollectionProcessor, but also have a nice CollectionAction enum. And the commands are intermingled with parameters, it all seems rather confusing. Does it make sense to use the enum rather than the strings? Or somehow associate the two? Probably something for another JIRA though... {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
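The cleanup discussed in the quote boils down to funneling free-form action strings through one enum lookup instead of comparing string constants scattered across handlers. A minimal, stdlib-only sketch of that pattern (the enum constants here are illustrative, not Solr's actual CollectionAction values):

```java
import java.util.Locale;

// Illustrative sketch: map loose action strings onto a single enum,
// rather than comparing string constants in each handler.
enum CollectionAction {
    CREATE, DELETE, RELOAD, ADDROLE;

    // Tolerant lookup: case-insensitive, null for unknown or missing actions.
    static CollectionAction get(String name) {
        if (name == null) return null;
        try {
            return valueOf(name.toUpperCase(Locale.ROOT));
        } catch (IllegalArgumentException e) {
            return null;
        }
    }
}

class ActionLookupDemo {
    public static void main(String[] args) {
        System.out.println(CollectionAction.get("addrole")); // ADDROLE
        System.out.println(CollectionAction.get("bogus"));   // null
    }
}
```

The payoff is that unknown actions are rejected in one place, and a switch over the enum lets the compiler flag unhandled cases.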
[jira] [Updated] (SOLR-6115) Cleanup enum/string action types in Overseer, OverseerCollectionProcessor and CollectionHandler
[ https://issues.apache.org/jira/browse/SOLR-6115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar updated SOLR-6115: Attachment: SOLR-6115-branch_4x.patch This patch is for branch_4x (it is different because SOLR-5473 hasn't been merged to 4x yet) Cleanup enum/string action types in Overseer, OverseerCollectionProcessor and CollectionHandler --- Key: SOLR-6115 URL: https://issues.apache.org/jira/browse/SOLR-6115 Project: Solr Issue Type: Task Components: SolrCloud Reporter: Shalin Shekhar Mangar Priority: Minor Fix For: 4.9, 5.0 Attachments: SOLR-6115-branch_4x.patch, SOLR-6115.patch The enum/string handling for actions in Overseer and OCP is a mess. We should fix it. From: https://issues.apache.org/jira/browse/SOLR-5466?focusedCommentId=13918059&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13918059 {quote} I started to untangle the fact that we have all the strings in OverseerCollectionProcessor, but also have a nice CollectionAction enum. And the commands are intermingled with parameters, it all seems rather confusing. Does it make sense to use the enum rather than the strings? Or somehow associate the two? Probably something for another JIRA though... {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-6115) Cleanup enum/string action types in Overseer, OverseerCollectionProcessor and CollectionHandler
[ https://issues.apache.org/jira/browse/SOLR-6115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14138636#comment-14138636 ] ASF subversion and git services commented on SOLR-6115: --- Commit 1625903 from sha...@apache.org in branch 'dev/branches/branch_4x' [ https://svn.apache.org/r1625903 ] SOLR-6115: Cleanup enum/string action types in Overseer, OverseerCollectionProcessor and CollectionHandler Cleanup enum/string action types in Overseer, OverseerCollectionProcessor and CollectionHandler --- Key: SOLR-6115 URL: https://issues.apache.org/jira/browse/SOLR-6115 Project: Solr Issue Type: Task Components: SolrCloud Reporter: Shalin Shekhar Mangar Priority: Minor Fix For: 4.9, 5.0 Attachments: SOLR-6115-branch_4x.patch, SOLR-6115.patch The enum/string handling for actions in Overseer and OCP is a mess. We should fix it. From: https://issues.apache.org/jira/browse/SOLR-5466?focusedCommentId=13918059&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13918059 {quote} I started to untangle the fact that we have all the strings in OverseerCollectionProcessor, but also have a nice CollectionAction enum. And the commands are intermingled with parameters, it all seems rather confusing. Does it make sense to use the enum rather than the strings? Or somehow associate the two? Probably something for another JIRA though... {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-6115) Cleanup enum/string action types in Overseer, OverseerCollectionProcessor and CollectionHandler
[ https://issues.apache.org/jira/browse/SOLR-6115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar resolved SOLR-6115. - Resolution: Fixed Fix Version/s: (was: 4.9) 4.11 Assignee: Shalin Shekhar Mangar Thanks Erick for the original suggestion of fixing this mess :) Cleanup enum/string action types in Overseer, OverseerCollectionProcessor and CollectionHandler --- Key: SOLR-6115 URL: https://issues.apache.org/jira/browse/SOLR-6115 Project: Solr Issue Type: Task Components: SolrCloud Reporter: Shalin Shekhar Mangar Assignee: Shalin Shekhar Mangar Priority: Minor Fix For: 4.11, 5.0 Attachments: SOLR-6115-branch_4x.patch, SOLR-6115.patch The enum/string handling for actions in Overseer and OCP is a mess. We should fix it. From: https://issues.apache.org/jira/browse/SOLR-5466?focusedCommentId=13918059&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13918059 {quote} I started to untangle the fact that we have all the strings in OverseerCollectionProcessor, but also have a nice CollectionAction enum. And the commands are intermingled with parameters, it all seems rather confusing. Does it make sense to use the enum rather than the strings? Or somehow associate the two? Probably something for another JIRA though... {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
RE: [VOTE] Release 4.9.1 RC0
Hi, unfortunately I forgot one thing: The big security issue with Apache POI! As we are releasing a new version, we should fix this, too! Respin? Uwe - Uwe Schindler H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de eMail: u...@thetaphi.de -Original Message- From: Michael McCandless [mailto:luc...@mikemccandless.com] Sent: Wednesday, September 17, 2014 5:04 PM To: Lucene/Solr dev Subject: [VOTE] Release 4.9.1 RC0 Artifacts here: http://people.apache.org/~mikemccand/staging_area/lucene-solr-4.9.1- RC0-rev1625586 Smoke tester: python3 -u dev-tools/scripts/smokeTestRelease.py http://people.apache.org/~mikemccand/staging_area/lucene-solr-4.9.1- RC0-rev1625586 1625586 4.9.1 /tmp/smoke491 True SUCCESS! [0:24:36.203643] Here's my +1 Mike McCandless http://blog.mikemccandless.com - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-6388) Update Apache TIKA 1.5's Apache POI dependency to 3.10.1
[ https://issues.apache.org/jira/browse/SOLR-6388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler updated SOLR-6388: Fix Version/s: 4.9.1 Update Apache TIKA 1.5's Apache POI dependency to 3.10.1 Key: SOLR-6388 URL: https://issues.apache.org/jira/browse/SOLR-6388 Project: Solr Issue Type: Task Components: contrib - Solr Cell (Tika extraction) Reporter: Uwe Schindler Assignee: Uwe Schindler Fix For: 4.9.1, 4.10, 5.0 Attachments: SOLR-6388.patch TIKA 1.5 currently uses Apache POI 3.10-beta2 to extract Microsoft Office documents. Apache POI released 3.10.1 today (waiting for Maven Central...). We should upgrade the Solr POI dependency to 3.10.1, because the older version has various problems. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-6388) Update Apache TIKA 1.5's Apache POI dependency to 3.10.1
[ https://issues.apache.org/jira/browse/SOLR-6388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14138657#comment-14138657 ] ASF subversion and git services commented on SOLR-6388: --- Commit 1625908 from [~thetaphi] in branch 'dev/branches/lucene_solr_4_9' [ https://svn.apache.org/r1625908 ] Merged revision(s) 1618604, 1618960 from lucene/dev/branches/branch_4x: Merged revision(s) 1618603 from lucene/dev/trunk: SOLR-6388: Update Apache TIKA 1.5's Apache POI dependency to 3.10.1 Merged revision(s) 1618959 from lucene/dev/trunk: SOLR-6388: Add changes entry Update Apache TIKA 1.5's Apache POI dependency to 3.10.1 Key: SOLR-6388 URL: https://issues.apache.org/jira/browse/SOLR-6388 Project: Solr Issue Type: Task Components: contrib - Solr Cell (Tika extraction) Reporter: Uwe Schindler Assignee: Uwe Schindler Fix For: 4.9.1, 4.10, 5.0 Attachments: SOLR-6388.patch TIKA 1.5 currently uses Apache POI 3.10-beta2 to extract Microsoft Office documents. Apache POI released 3.10.1 today (waiting for Maven Central...). We should upgrade the Solr POI dependency to 3.10.1, because the older version has various problems. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
RE: [VOTE] Release 4.9.1 RC0
Hi, I committed the Apache POI dependency upgrade for Solr. Sorry that I missed this, but releasing a new version with a known security problem that already has a CVE number is a no-go. So -1 to release the current artifacts. I am very sorry! :( In addition, the changes.txt of Solr had a misplaced changes entry (preexisting problem in 4.9.0) - now corrected. Uwe P.S.: The CVE numbers are already circulated and several Linux distributions already opened issues and updated their packages. - Uwe Schindler H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de eMail: u...@thetaphi.de -Original Message- From: Uwe Schindler [mailto:u...@thetaphi.de] Sent: Thursday, September 18, 2014 9:35 AM To: dev@lucene.apache.org Subject: RE: [VOTE] Release 4.9.1 RC0 Hi, unfortunately I forgot one thing: The big security issue with Apache POI! As we are releasing a new version, we should fix this, too! Respin? Uwe - Uwe Schindler H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de eMail: u...@thetaphi.de -Original Message- From: Michael McCandless [mailto:luc...@mikemccandless.com] Sent: Wednesday, September 17, 2014 5:04 PM To: Lucene/Solr dev Subject: [VOTE] Release 4.9.1 RC0 Artifacts here: http://people.apache.org/~mikemccand/staging_area/lucene-solr-4.9.1- RC0-rev1625586 Smoke tester: python3 -u dev-tools/scripts/smokeTestRelease.py http://people.apache.org/~mikemccand/staging_area/lucene-solr-4.9.1- RC0-rev1625586 1625586 4.9.1 /tmp/smoke491 True SUCCESS! [0:24:36.203643] Here's my +1 Mike McCandless http://blog.mikemccandless.com - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 631 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/631/ 1 tests failed. REGRESSION: org.apache.solr.handler.TestReplicationHandler.doTestReplicateAfterCoreReload Error Message: expected:[{indexVersion=1411025056407,generation=2,filelist=[_3dj.doc, _3dj.fdt, _3dj.fdx, _3dj.fnm, _3dj.nvd, _3dj.nvm, _3dj.si, _3dj.tim, _3dj.tip, _6q2.doc, _6q2.fdt, _6q2.fdx, _6q2.fnm, _6q2.nvd, _6q2.nvm, _6q2.si, _6q2.tim, _6q2.tip, _a2k.doc, _a2k.fdt, _a2k.fdx, _a2k.fnm, _a2k.nvd, _a2k.nvm, _a2k.si, _a2k.tim, _a2k.tip, _df4.cfe, _df4.cfs, _df4.si, _grn.cfe, _grn.cfs, _grn.si, _k46.cfe, _k46.cfs, _k46.si, _nbo.doc, _nbo.fdt, _nbo.fdx, _nbo.fnm, _nbo.nvd, _nbo.nvm, _nbo.si, _nbo.tim, _nbo.tip, _nbp.cfe, _nbp.cfs, _nbp.si, _nbq.doc, _nbq.fdt, _nbq.fdx, _nbq.fnm, _nbq.nvd, _nbq.nvm, _nbq.si, _nbq.tim, _nbq.tip, _nbr.doc, _nbr.fdt, _nbr.fdx, _nbr.fnm, _nbr.nvd, _nbr.nvm, _nbr.si, _nbr.tim, _nbr.tip, _nbs.doc, _nbs.fdt, _nbs.fdx, _nbs.fnm, _nbs.nvd, _nbs.nvm, _nbs.si, _nbs.tim, _nbs.tip, _nbt.doc, _nbt.fdt, _nbt.fdx, _nbt.fnm, _nbt.nvd, _nbt.nvm, _nbt.si, _nbt.tim, _nbt.tip, _nbu.doc, _nbu.fdt, _nbu.fdx, _nbu.fnm, _nbu.nvd, _nbu.nvm, _nbu.si, _nbu.tim, _nbu.tip, _nbv.doc, _nbv.fdt, _nbv.fdx, _nbv.fnm, _nbv.nvd, _nbv.nvm, _nbv.si, _nbv.tim, _nbv.tip, _nbw.doc, _nbw.fdt, _nbw.fdx, _nbw.fnm, _nbw.nvd, _nbw.nvm, _nbw.si, _nbw.tim, _nbw.tip, _nbx.doc, _nbx.fdt, _nbx.fdx, _nbx.fnm, _nbx.nvd, _nbx.nvm, _nbx.si, _nbx.tim, _nbx.tip, _nby.doc, _nby.fdt, _nby.fdx, _nby.fnm, _nby.nvd, _nby.nvm, _nby.si, _nby.tim, _nby.tip, _nbz.doc, _nbz.fdt, _nbz.fdx, _nbz.fnm, _nbz.nvd, _nbz.nvm, _nbz.si, _nbz.tim, _nbz.tip, _nc0.doc, _nc0.fdt, _nc0.fdx, _nc0.fnm, _nc0.nvd, _nc0.nvm, _nc0.si, _nc0.tim, _nc0.tip, _nc1.doc, _nc1.fdt, _nc1.fdx, _nc1.fnm, _nc1.nvd, _nc1.nvm, _nc1.si, _nc1.tim, _nc1.tip, _nc2.doc, _nc2.fdt, _nc2.fdx, _nc2.fnm, _nc2.nvd, _nc2.nvm, _nc2.si, _nc2.tim, _nc2.tip, _nc3.doc, _nc3.fdt, _nc3.fdx, _nc3.fnm, _nc3.nvd, _nc3.nvm, _nc3.si, _nc3.tim, _nc3.tip, _nc4.doc, _nc4.fdt, 
_nc4.fdx, _nc4.fnm, _nc4.nvd, _nc4.nvm, _nc4.si, _nc4.tim, _nc4.tip, _nc5.doc, _nc5.fdt, _nc5.fdx, _nc5.fnm, _nc5.nvd, _nc5.nvm, _nc5.si, _nc5.tim, _nc5.tip, _nc6.doc, _nc6.fdt, _nc6.fdx, _nc6.fnm, _nc6.nvd, _nc6.nvm, _nc6.si, _nc6.tim, _nc6.tip, _nc8.doc, _nc8.fdt, _nc8.fdx, _nc8.fnm, _nc8.nvd, _nc8.nvm, _nc8.si, _nc8.tim, _nc8.tip, _nc9.doc, _nc9.fdt, _nc9.fdx, _nc9.fnm, _nc9.nvd, _nc9.nvm, _nc9.si, _nc9.tim, _nc9.tip, _nca.doc, _nca.fdt, _nca.fdx, _nca.fnm, _nca.nvd, _nca.nvm, _nca.si, _nca.tim, _nca.tip, _ncb.doc, _ncb.fdt, _ncb.fdx, _ncb.fnm, _ncb.nvd, _ncb.nvm, _ncb.si, _ncb.tim, _ncb.tip, segments_2]}] but was:[{indexVersion=1411025056407,generation=3,filelist=[_3dj.doc, _3dj.fdt, _3dj.fdx, _3dj.fnm, _3dj.nvd, _3dj.nvm, _3dj.si, _3dj.tim, _3dj.tip, _6q2.doc, _6q2.fdt, _6q2.fdx, _6q2.fnm, _6q2.nvd, _6q2.nvm, _6q2.si, _6q2.tim, _6q2.tip, _a2k.doc, _a2k.fdt, _a2k.fdx, _a2k.fnm, _a2k.nvd, _a2k.nvm, _a2k.si, _a2k.tim, _a2k.tip, _df4.cfe, _df4.cfs, _df4.si, _grn.cfe, _grn.cfs, _grn.si, _k46.cfe, _k46.cfs, _k46.si, _nc6.doc, _nc6.fdt, _nc6.fdx, _nc6.fnm, _nc6.nvd, _nc6.nvm, _nc6.si, _nc6.tim, _nc6.tip, _nc7.cfe, _nc7.cfs, _nc7.si, _nc8.doc, _nc8.fdt, _nc8.fdx, _nc8.fnm, _nc8.nvd, _nc8.nvm, _nc8.si, _nc8.tim, _nc8.tip, _nc9.doc, _nc9.fdt, _nc9.fdx, _nc9.fnm, _nc9.nvd, _nc9.nvm, _nc9.si, _nc9.tim, _nc9.tip, _nca.doc, _nca.fdt, _nca.fdx, _nca.fnm, _nca.nvd, _nca.nvm, _nca.si, _nca.tim, _nca.tip, _ncb.doc, _ncb.fdt, _ncb.fdx, _ncb.fnm, _ncb.nvd, _ncb.nvm, _ncb.si, _ncb.tim, _ncb.tip, segments_3]}, {indexVersion=1411025056407,generation=2,filelist=[_3dj.doc, _3dj.fdt, _3dj.fdx, _3dj.fnm, _3dj.nvd, _3dj.nvm, _3dj.si, _3dj.tim, _3dj.tip, _6q2.doc, _6q2.fdt, _6q2.fdx, _6q2.fnm, _6q2.nvd, _6q2.nvm, _6q2.si, _6q2.tim, _6q2.tip, _a2k.doc, _a2k.fdt, _a2k.fdx, _a2k.fnm, _a2k.nvd, _a2k.nvm, _a2k.si, _a2k.tim, _a2k.tip, _df4.cfe, _df4.cfs, _df4.si, _grn.cfe, _grn.cfs, _grn.si, _k46.cfe, _k46.cfs, _k46.si, _nbo.doc, _nbo.fdt, _nbo.fdx, _nbo.fnm, _nbo.nvd, _nbo.nvm, _nbo.si, 
_nbo.tim, _nbo.tip, _nbp.cfe, _nbp.cfs, _nbp.si, _nbq.doc, _nbq.fdt, _nbq.fdx, _nbq.fnm, _nbq.nvd, _nbq.nvm, _nbq.si, _nbq.tim, _nbq.tip, _nbr.doc, _nbr.fdt, _nbr.fdx, _nbr.fnm, _nbr.nvd, _nbr.nvm, _nbr.si, _nbr.tim, _nbr.tip, _nbs.doc, _nbs.fdt, _nbs.fdx, _nbs.fnm, _nbs.nvd, _nbs.nvm, _nbs.si, _nbs.tim, _nbs.tip, _nbt.doc, _nbt.fdt, _nbt.fdx, _nbt.fnm, _nbt.nvd, _nbt.nvm, _nbt.si, _nbt.tim, _nbt.tip, _nbu.doc, _nbu.fdt, _nbu.fdx, _nbu.fnm, _nbu.nvd, _nbu.nvm, _nbu.si, _nbu.tim, _nbu.tip, _nbv.doc, _nbv.fdt, _nbv.fdx, _nbv.fnm, _nbv.nvd, _nbv.nvm, _nbv.si, _nbv.tim, _nbv.tip, _nbw.doc, _nbw.fdt, _nbw.fdx, _nbw.fnm, _nbw.nvd, _nbw.nvm, _nbw.si, _nbw.tim, _nbw.tip, _nbx.doc, _nbx.fdt, _nbx.fdx, _nbx.fnm, _nbx.nvd, _nbx.nvm, _nbx.si, _nbx.tim, _nbx.tip, _nby.doc, _nby.fdt, _nby.fdx, _nby.fnm, _nby.nvd, _nby.nvm, _nby.si, _nby.tim, _nby.tip, _nbz.doc, _nbz.fdt, _nbz.fdx, _nbz.fnm, _nbz.nvd, _nbz.nvm, _nbz.si, _nbz.tim, _nbz.tip,
Re: [VOTE] Release 4.9.1 RC0
No problem Uwe, I'll respin. Mike McCandless http://blog.mikemccandless.com On Thu, Sep 18, 2014 at 3:54 AM, Uwe Schindler u...@thetaphi.de wrote: Hi, I committed the Apache POI dependency upgrade for Solr. Sorry that I missed this, but releasing a new version with a known security problem that already has a CVE number is a no-go. So -1 to release the current artifacts. I am very sorry! :( In addition, the changes.txt of Solr had a misplaced changes entry (preexisting problem in 4.9.0) - now corrected. Uwe P.S.: The CVE numbers are already circulated and several Linux distributions already opened issues and updated their packages. - Uwe Schindler H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de eMail: u...@thetaphi.de -Original Message- From: Uwe Schindler [mailto:u...@thetaphi.de] Sent: Thursday, September 18, 2014 9:35 AM To: dev@lucene.apache.org Subject: RE: [VOTE] Release 4.9.1 RC0 Hi, unfortunately I forgot one thing: The big security issue with Apache POI! As we are releasing a new version, we should fix this, too! Respin? Uwe - Uwe Schindler H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de eMail: u...@thetaphi.de -Original Message- From: Michael McCandless [mailto:luc...@mikemccandless.com] Sent: Wednesday, September 17, 2014 5:04 PM To: Lucene/Solr dev Subject: [VOTE] Release 4.9.1 RC0 Artifacts here: http://people.apache.org/~mikemccand/staging_area/lucene-solr-4.9.1- RC0-rev1625586 Smoke tester: python3 -u dev-tools/scripts/smokeTestRelease.py http://people.apache.org/~mikemccand/staging_area/lucene-solr-4.9.1- RC0-rev1625586 1625586 4.9.1 /tmp/smoke491 True SUCCESS! 
[0:24:36.203643] Here's my +1 Mike McCandless http://blog.mikemccandless.com - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 1836 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1836/ Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseG1GC All tests passed Build Log: [...truncated 60132 lines...] -documentation-lint: [jtidy] Checking for broken html (such as invalid tags)... [delete] Deleting directory /Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build/jtidy_tmp [echo] Checking for broken links... [exec] [exec] Crawl/parse... [exec] [exec] Verify... [echo] Checking for malformed docs... [exec] [exec] /Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build/docs/solr-analytics/overview-summary.html [exec] missing: org.apache.solr.handler.component [exec] [exec] Missing javadocs were found! BUILD FAILED /Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/build.xml:491: The following error occurred while executing this line: /Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/build.xml:78: The following error occurred while executing this line: /Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build.xml:548: The following error occurred while executing this line: /Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/solr/build.xml:564: The following error occurred while executing this line: /Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/common-build.xml:2471: exec returned: 1 Total time: 228 minutes 42 seconds Build step 'Invoke Ant' marked build as failure [description-setter] Description set: Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseG1GC Archiving artifacts Recording test results Email was triggered for: Failure - Any Sending email for trigger: Failure - Any - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-5959) Optimized memory management in Automaton.Builder.finish()
[ https://issues.apache.org/jira/browse/LUCENE-5959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Markus Heiden updated LUCENE-5959:
Attachment: Automaton.diff

Major rework of Automaton.Builder to not allocate unnecessary memory in finish().

Optimized memory management in Automaton.Builder.finish()
Key: LUCENE-5959
URL: https://issues.apache.org/jira/browse/LUCENE-5959
Project: Lucene - Core
Issue Type: Improvement
Components: core/other
Affects Versions: 4.10
Reporter: Markus Heiden
Priority: Minor
Labels: patch
Attachments: Automaton.diff, finish.patch

Reworked Automaton.Builder.finish() to not allocate memory stepwise. Added growTransitions(int numTransitions) to be able to resize the transitions array just once.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
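The allocation pattern the patch targets can be sketched as follows. This is an illustrative toy, not the actual Lucene Automaton.Builder code; the class name TransitionBuffer is invented, and only the method name growTransitions(int) mirrors the patch description (stepwise doubling on add() vs. one resize when the final count is known, plus an exact-size copy in finish()):

```java
import java.util.Arrays;

// Hypothetical sketch of the LUCENE-5959 idea; not Lucene source.
class TransitionBuffer {
    private int[] transitions = new int[4];
    private int count = 0;

    // Stepwise growth: doubling on demand may allocate and copy
    // several intermediate arrays while building.
    void add(int t) {
        if (count == transitions.length) {
            transitions = Arrays.copyOf(transitions, transitions.length * 2);
        }
        transitions[count++] = t;
    }

    // One-shot growth: when the final size is known up front,
    // resize the array exactly once.
    void growTransitions(int numTransitions) {
        if (numTransitions > transitions.length) {
            transitions = Arrays.copyOf(transitions, numTransitions);
        }
    }

    // finish() trims to the exact element count instead of handing
    // back the oversized internal buffer.
    int[] finish() {
        return Arrays.copyOf(transitions, count);
    }
}
```

Calling growTransitions(n) before a loop of add() calls means the build phase performs one allocation instead of O(log n) doubling copies, which is the memory-management improvement the issue describes.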
[jira] [Commented] (LUCENE-5959) Optimized memory management in Automaton.Builder.finish()
[ https://issues.apache.org/jira/browse/LUCENE-5959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14138686#comment-14138686 ]
Markus Heiden commented on LUCENE-5959:

I reworked the Builder completely, see Automaton.diff. Now finish() no longer allocates unneeded memory. This looks to me like a clean and memory-efficient solution.
[jira] [Comment Edited] (LUCENE-5959) Optimized memory management in Automaton.Builder.finish()
[ https://issues.apache.org/jira/browse/LUCENE-5959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14138686#comment-14138686 ]
Markus Heiden edited comment on LUCENE-5959 at 9/18/14 8:22 AM:

I reworked the Builder completely, see Automaton.diff. Now finish() no longer allocates unneeded memory. This looks to me like a clean and memory-efficient solution. I don't know if my solution is correct, because there is no direct test for Automaton. But Lucene builds fine.

was (Author: markus_heiden): I reworked the Builder completely, see Automaton.diff. Now in finish() no unneeded memory will be allocated. This looks for me like a clean and (memory) efficient solution.
[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_20) - Build # 11279 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11279/
Java: 32bit/jdk1.8.0_20 -server -XX:+UseSerialGC

1 tests failed.
REGRESSION: org.apache.solr.cloud.DeleteReplicaTest.testDistribSearch

Error Message:
java.lang.ClassCastException: org.apache.solr.common.params.CollectionParams$CollectionAction cannot be cast to java.lang.String

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: java.lang.ClassCastException: org.apache.solr.common.params.CollectionParams$CollectionAction cannot be cast to java.lang.String
	at __randomizedtesting.SeedInfo.seed([2301E08B7645B5A6:A2E76E93011AD59A]:0)
	at org.apache.solr.client.solrj.impl.LBHttpSolrServer.doRequest(LBHttpSolrServer.java:381)
	at org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:304)
	at org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:874)
	at org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:658)
	at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:601)
	at org.apache.solr.cloud.DeleteReplicaTest.tryToRemoveOnlyIfDown(DeleteReplicaTest.java:154)
	at org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest(DeleteReplicaTest.java:125)
	at org.apache.solr.cloud.DeleteReplicaTest.doTest(DeleteReplicaTest.java:88)
	at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
	at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at
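The root cause in the error message above is a CollectionParams$CollectionAction enum being cast to String at read time. A minimal, hypothetical reproduction of that failure pattern and its usual fix (this is not Solr's actual code; the ActionParams class and its methods are invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative reproduction of the ClassCastException pattern from the
// test failure; not Solr source. Names here are hypothetical.
class ActionParams {
    enum CollectionAction { DELETEREPLICA }

    static Map<String, Object> params = new HashMap<>();

    // Bug pattern: the enum object itself is stored where a String is
    // expected, so the cast on the read side throws ClassCastException.
    static String readActionBuggy() {
        params.put("action", CollectionAction.DELETEREPLICA);
        return (String) params.get("action");
    }

    // Fix: convert the enum to its String form before storing it,
    // so the later cast to String succeeds.
    static String readActionFixed() {
        params.put("action", CollectionAction.DELETEREPLICA.toString());
        return (String) params.get("action");
    }
}
```

The compiler cannot catch this because the map's value type is Object; the mismatch only surfaces when the value is read back and cast.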
[jira] [Updated] (LUCENE-5959) Optimized memory management in Automaton.Builder.finish()
[ https://issues.apache.org/jira/browse/LUCENE-5959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Markus Heiden updated LUCENE-5959:
Attachment: (was: Automaton.diff)
[jira] [Updated] (LUCENE-5959) Optimized memory management in Automaton.Builder.finish()
[ https://issues.apache.org/jira/browse/LUCENE-5959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Markus Heiden updated LUCENE-5959:
Attachment: Automaton.diff

Major rework of Automaton.Builder.
[jira] [Issue Comment Deleted] (LUCENE-5959) Optimized memory management in Automaton.Builder.finish()
[ https://issues.apache.org/jira/browse/LUCENE-5959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Markus Heiden updated LUCENE-5959:
Comment: was deleted (was: Major rework of Automaton.Builder to not allocate unnecessary memory in finish().)
[VOTE] Release 4.9.1 RC1
Artifacts here: http://people.apache.org/~mikemccand/staging_area/lucene-solr-4.9.1-RC1-rev1625909

Smoke tester: python3 -u dev-tools/scripts/smokeTestRelease.py http://people.apache.org/~mikemccand/staging_area/lucene-solr-4.9.1-RC1-rev1625909 1625909 4.9.1 /tmp/smoke491 True

SUCCESS! [0:23:57.460556]

Here's my +1

Mike McCandless
http://blog.mikemccandless.com
[jira] [Commented] (LUCENE-5911) Make MemoryIndex thread-safe for queries
[ https://issues.apache.org/jira/browse/LUCENE-5911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14138708#comment-14138708 ]
Alan Woodward commented on LUCENE-5911:

Do you mean add a publish() method to MemoryIndex? There's no easy way of getting sortTerms() to run on all the fields outside of calling toString() at the moment, unless I'm missing something.

Make MemoryIndex thread-safe for queries
Key: LUCENE-5911
URL: https://issues.apache.org/jira/browse/LUCENE-5911
Project: Lucene - Core
Issue Type: Improvement
Reporter: Alan Woodward
Priority: Minor
Attachments: LUCENE-5911.patch

We want to be able to run multiple queries at once over a MemoryIndex in luwak (see https://github.com/flaxsearch/luwak/commit/49a8fba5764020c2f0e4dc29d80d93abb0231191), but this isn't possible with the current implementation. However, looking at the code, it seems that it would be relatively simple to make MemoryIndex thread-safe for reads/queries.
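The publish() idea discussed here is essentially a "do all lazy work up front, then publish for concurrent readers" pattern. A generic sketch of that pattern under stated assumptions (this is not MemoryIndex's API; the FrozenTerms class and its methods are hypothetical):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical freeze-then-read sketch in the spirit of the publish()
// idea discussed above; not Lucene's MemoryIndex.
class FrozenTerms {
    private final List<String> terms = new ArrayList<>();
    private volatile boolean frozen = false;

    // Build phase: single-threaded mutation only.
    void add(String term) {
        if (frozen) throw new IllegalStateException("already frozen");
        terms.add(term);
    }

    // Performs all lazy work (here: sorting) eagerly, then publishes.
    // The volatile write establishes a happens-before edge, so readers
    // that observe frozen == true also see the sorted list.
    void freeze() {
        Collections.sort(terms);
        frozen = true;
    }

    // After freeze(), reads perform no mutation and are safe to call
    // from multiple threads concurrently.
    boolean contains(String term) {
        if (!frozen) throw new IllegalStateException("call freeze() first");
        return Collections.binarySearch(terms, term) >= 0;
    }
}
```

The design point matches the comment: the tricky part is forcing all lazy, mutating work (like sortTerms()) to happen once, before readers are allowed in, rather than on first access during a query.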
[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4862 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4862/

All tests passed

Build Log:
[...truncated 60387 lines...]
-documentation-lint:
 [jtidy] Checking for broken html (such as invalid tags)...
 [delete] Deleting directory /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-trunk-Java7/lucene/build/jtidy_tmp
 [echo] Checking for broken links...
 [exec] Crawl/parse...
 [exec] Verify...
 [echo] Checking for malformed docs...
 [exec] /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/build/docs/solr-analytics/overview-summary.html
 [exec] missing: org.apache.solr.handler.component
 [exec] Missing javadocs were found!

BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-trunk-Java7/build.xml:491: The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-trunk-Java7/build.xml:78: The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/build.xml:548: The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-trunk-Java7/solr/build.xml:564: The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-trunk-Java7/lucene/common-build.xml:2471: exec returned: 1

Total time: 100 minutes 10 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Sending artifact delta relative to Lucene-Solr-Tests-trunk-Java7 #4851
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 464 bytes
Compression is 0.0%
Took 24 ms
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure
[jira] [Commented] (SOLR-6365) specify appends, defaults, invariants outside of the component
[ https://issues.apache.org/jira/browse/SOLR-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14138731#comment-14138731 ]
Shalin Shekhar Mangar commented on SOLR-6365:

Sorry for being late with feedback, but I really think this shouldn't be called paramSet. This is basically about refactoring the initial configuration for request handlers out of their section. In the future, when we do have real query/param templates, this name will come back to bite us. We should call it what it is, such as initArgs or something similar.

specify appends, defaults, invariants outside of the component
Key: SOLR-6365
URL: https://issues.apache.org/jira/browse/SOLR-6365
Project: Solr
Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
Fix For: 4.11, 5.0
Attachments: SOLR-6365.patch

The components are configured in solrconfig.xml mostly for specifying these extra parameters. If we separate these out, we can avoid specifying the components altogether and make solrconfig much simpler. Eventually we want users to see all functions as paths instead of components, and to control these params from outside, through an API, persisted in ZK.

objectives:
* define standard components implicitly and let users override some params only
* reuse standard params across components
* define multiple param sets and mix and match these params at request time

example
{code:xml}
<!-- use json for all paths and _txt as the default search field -->
<paramSet name="global" path="/**">
  <lst name="defaults">
    <str name="wt">json</str>
    <str name="df">_txt</str>
  </lst>
</paramSet>
{code}
other examples
{code:xml}
<paramSet name="a" path="/dump3,/root/*,/root1/**">
  <lst name="defaults">
    <str name="a">A</str>
  </lst>
  <lst name="invariants">
    <str name="b">B</str>
  </lst>
  <lst name="appends">
    <str name="c">C</str>
  </lst>
</paramSet>
<requestHandler name="/dump3" class="DumpRequestHandler"/>
<requestHandler name="/dump4" class="DumpRequestHandler"/>
<requestHandler name="/root/dump5" class="DumpRequestHandler"/>
<requestHandler name="/root1/anotherlevel/dump6" class="DumpRequestHandler"/>
<requestHandler name="/dump1" class="DumpRequestHandler" paramSet="a"/>
<requestHandler name="/dump2" class="DumpRequestHandler" paramSet="a">
  <lst name="defaults">
    <str name="a">A1</str>
  </lst>
  <lst name="invariants">
    <str name="b">B1</str>
  </lst>
  <lst name="appends">
    <str name="c">C1</str>
  </lst>
</requestHandler>
{code}
[jira] [Updated] (LUCENE-5959) Optimized memory management in Automaton.Builder.finish()
[ https://issues.apache.org/jira/browse/LUCENE-5959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Markus Heiden updated LUCENE-5959:
Attachment: (was: Automaton.diff)
[jira] [Issue Comment Deleted] (LUCENE-5959) Optimized memory management in Automaton.Builder.finish()
[ https://issues.apache.org/jira/browse/LUCENE-5959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Markus Heiden updated LUCENE-5959:
Comment: was deleted (was: Major rework of Automaton.Builder.)
[jira] [Created] (SOLR-6532) Possibility to return an error immediately from ping handler if no searcher is available
Ere Maijala created SOLR-6532:

Summary: Possibility to return an error immediately from ping handler if no searcher is available
Key: SOLR-6532
URL: https://issues.apache.org/jira/browse/SOLR-6532
Project: Solr
Issue Type: Improvement
Components: search
Affects Versions: 4.10
Environment: Load-balanced service
Reporter: Ere Maijala
Priority: Minor

Especially in a load-balanced environment it would be useful if it was possible to configure PingRequestHandler to return right away with an error status when a searcher is not (yet) available. This would allow the load-balancer to quickly fail over to a Solr instance that's able to serve the requests. Currently the ping handler waits for the searcher to become available, which means the load-balancer has to either keep waiting or use a suitably short timeout, which is difficult to define in a way that provides a timely failover without false negatives.
[jira] [Comment Edited] (LUCENE-5959) Optimized memory management in Automaton.Builder.finish()
[ https://issues.apache.org/jira/browse/LUCENE-5959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14138686#comment-14138686 ]
Markus Heiden edited comment on LUCENE-5959 at 9/18/14 9:57 AM:

I reworked the Builder completely, see Automaton.diff. Now finish() no longer allocates unneeded memory. This looks to me like a clean and memory-efficient solution.

was (Author: markus_heiden): I reworked the Builder completely, see Automaton.diff. Now in finish() no unneeded memory will be allocated. This looks for me like a clean and (memory) efficient solution. I don't know if my solution is correct, because there is no direct test for Automaton. But Lucene builds fine.
[jira] [Comment Edited] (SOLR-6365) specify appends, defaults, invariants outside of the component
[ https://issues.apache.org/jira/browse/SOLR-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14138749#comment-14138749 ]
Noble Paul edited comment on SOLR-6365 at 9/18/14 10:11 AM:

Yeah, let's make it initArgs. How about just args?

was (Author: noble.paul): yeah let's make it initArgs
[jira] [Reopened] (SOLR-6365) specify appends, defaults, invariants outside of the component
[ https://issues.apache.org/jira/browse/SOLR-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Noble Paul reopened SOLR-6365:

yeah let's make it initArgs
[jira] [Updated] (LUCENE-5959) Optimized memory management in Automaton.Builder.finish()
[ https://issues.apache.org/jira/browse/LUCENE-5959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Markus Heiden updated LUCENE-5959:
Attachment: Automaton.diff

Major rework of Automaton.Builder.
[jira] [Comment Edited] (LUCENE-5959) Optimized memory management in Automaton.Builder.finish()
[ https://issues.apache.org/jira/browse/LUCENE-5959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14138686#comment-14138686 ]
Markus Heiden edited comment on LUCENE-5959 at 9/18/14 10:22 AM:

I reworked the Builder completely, see Automaton.diff. Now no unneeded memory is allocated in finish(). This looks to me like a clean and memory-efficient solution.

was (Author: markus_heiden): I reworked the Builder completely, see Automaton.diff. Now in finish() no unneeded memory will be allocated. This looks for me like a clean and (memory) efficient solution.
[jira] [Commented] (LUCENE-5944) move trunk to 6.x, create branch_5x
[ https://issues.apache.org/jira/browse/LUCENE-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14138771#comment-14138771 ]
ASF subversion and git services commented on LUCENE-5944:

Commit 1625934 from [~rcmuir] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1625934 ]
LUCENE-5944: move trunk to 6.x, create branch_5x

move trunk to 6.x, create branch_5x
Key: LUCENE-5944
URL: https://issues.apache.org/jira/browse/LUCENE-5944
Project: Lucene - Core
Issue Type: Improvement
Reporter: Robert Muir

In order to actually add real features (as opposed to just spending 24/7 fixing bugs and back compat), I need a trunk that doesn't have the back compat handcuffs. In the meantime, we should rename the current trunk (which is totally tied down in back compat already, without even a single release!) to branch_5x while you guys (I won't be doing any back compat anymore) figure out what you want to do with the back compat policy.
[jira] [Updated] (LUCENE-5960) Avoid grow of Set in AnalyzingSuggester.topoSortStates(Automaton)
[ https://issues.apache.org/jira/browse/LUCENE-5960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Markus Heiden updated LUCENE-5960:
Attachment: (was: topoSortStates.patch)

Avoid grow of Set in AnalyzingSuggester.topoSortStates(Automaton)
Key: LUCENE-5960
URL: https://issues.apache.org/jira/browse/LUCENE-5960
Project: Lucene - Core
Issue Type: Improvement
Components: core/other
Affects Versions: 4.10
Reporter: Markus Heiden
Priority: Minor
Labels: patch
Attachments: AnalyzingSuggester.diff

Initially size HashSet visited correctly in AnalyzingSuggester.topoSortStates(Automaton). This avoids dynamic resizing of the set.
[jira] [Issue Comment Deleted] (LUCENE-5960) Avoid grow of Set in AnalyzingSuggester.topoSortStates(Automaton)
[ https://issues.apache.org/jira/browse/LUCENE-5960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Markus Heiden updated LUCENE-5960:
Comment: was deleted (was: Patch with suggested changes.)
[jira] [Updated] (LUCENE-5960) Avoid grow of Set in AnalyzingSuggester.topoSortStates(Automaton)
[ https://issues.apache.org/jira/browse/LUCENE-5960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Markus Heiden updated LUCENE-5960: -- Attachment: AnalyzingSuggester.diff Patch with suggested changes. Avoid grow of Set in AnalyzingSuggester.topoSortStates(Automaton) - Key: LUCENE-5960 URL: https://issues.apache.org/jira/browse/LUCENE-5960 Project: Lucene - Core Issue Type: Improvement Components: core/other Affects Versions: 4.10 Reporter: Markus Heiden Priority: Minor Labels: patch Attachments: AnalyzingSuggester.diff Initially size HashSet visited correctly in AnalyzingSuggester.topoSortStates(Automaton). This avoids dynamic resizing of the set.
[jira] [Updated] (LUCENE-5960) Avoid grow of Set in AnalyzingSuggester.topoSortStates(Automaton)
[ https://issues.apache.org/jira/browse/LUCENE-5960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Markus Heiden updated LUCENE-5960: -- Description: Converted visited to a BitSet and sized it correctly in AnalyzingSuggester.topoSortStates(Automaton). This avoids dynamic resizing of the set. (was: Initially size HashSet visited correctly in AnalyzingSuggester.topoSortStates(Automaton). This avoids dynamic resizing of the set.) Avoid grow of Set in AnalyzingSuggester.topoSortStates(Automaton) - Key: LUCENE-5960 URL: https://issues.apache.org/jira/browse/LUCENE-5960 Project: Lucene - Core Issue Type: Improvement Components: core/other Affects Versions: 4.10 Reporter: Markus Heiden Priority: Minor Labels: patch Attachments: AnalyzingSuggester.diff Converted visited to a BitSet and sized it correctly in AnalyzingSuggester.topoSortStates(Automaton). This avoids dynamic resizing of the set.
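The change described in the ticket — replacing a dynamically growing HashSet of visited states with a BitSet sized to the state count up front — can be sketched in isolation. The class and graph representation below are hypothetical stand-ins, not Lucene's Automaton API or its exact topological ordering; the point is only that a BitSet allocated with numStates bits never needs the rehashing/resizing an unsized HashSet goes through:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.BitSet;
import java.util.Deque;
import java.util.List;

// Iterative visit over a state graph. 'visited' is a BitSet pre-sized
// to numStates, so it is allocated exactly once.
public class TopoSortSketch {
    public static List<Integer> order(int numStates, int[][] transitions) {
        BitSet visited = new BitSet(numStates); // full capacity up front
        List<Integer> out = new ArrayList<>(numStates);
        Deque<Integer> stack = new ArrayDeque<>();
        for (int s = 0; s < numStates; s++) {
            if (visited.get(s)) continue;
            stack.push(s);
            while (!stack.isEmpty()) {
                int state = stack.pop();
                if (visited.get(state)) continue;
                visited.set(state);
                out.add(state);
                for (int dest : transitions[state]) {
                    if (!visited.get(dest)) stack.push(dest);
                }
            }
        }
        return out;
    }
}
```

A BitSet also costs one bit per state instead of a boxed Integer entry per visited state, which is the other half of the win for large automata.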
[jira] [Updated] (LUCENE-5959) Optimized memory management in Automaton.Builder.finish()
[ https://issues.apache.org/jira/browse/LUCENE-5959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Markus Heiden updated LUCENE-5959: -- Labels: patch performance (was: patch) Optimized memory management in Automaton.Builder.finish() - Key: LUCENE-5959 URL: https://issues.apache.org/jira/browse/LUCENE-5959 Project: Lucene - Core Issue Type: Improvement Components: core/other Affects Versions: 4.10 Reporter: Markus Heiden Priority: Minor Labels: patch, performance Attachments: Automaton.diff, finish.patch Reworked Automaton.Builder.finish() to not allocate memory stepwise. Added growTransitions(int numTransitions) to be able to resize the transitions array just once.
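The optimization the ticket describes — one resize to the final size when the element count is known, instead of repeated stepwise growth — looks roughly like this. The builder class below is a hypothetical illustration, not the actual Automaton.Builder code; only the growTransitions(int) name is taken from the ticket:

```java
import java.util.Arrays;

// Builder over an int[] that can grow to an exact target size in a
// single allocation when the caller already knows the final count.
public class GrowOnceBuilder {
    private int[] transitions = new int[16];
    private int count;

    // One allocation + one copy, instead of several doublings
    // (each of which would allocate and copy the whole array).
    public void growTransitions(int numTransitions) {
        if (numTransitions > transitions.length) {
            transitions = Arrays.copyOf(transitions, numTransitions);
        }
    }

    public void add(int t) {
        if (count == transitions.length) {
            // fallback path: classic amortized doubling
            transitions = Arrays.copyOf(transitions, transitions.length * 2);
        }
        transitions[count++] = t;
    }

    public int capacity() { return transitions.length; }
    public int size() { return count; }
}
```

Calling growTransitions(n) before a loop of n add() calls means the doubling branch never fires, so finish()-style bulk copies touch each element once.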
[jira] [Updated] (LUCENE-5960) Avoid grow of Set in AnalyzingSuggester.topoSortStates(Automaton)
[ https://issues.apache.org/jira/browse/LUCENE-5960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Markus Heiden updated LUCENE-5960: -- Labels: patch performance (was: patch) Avoid grow of Set in AnalyzingSuggester.topoSortStates(Automaton) - Key: LUCENE-5960 URL: https://issues.apache.org/jira/browse/LUCENE-5960 Project: Lucene - Core Issue Type: Improvement Components: core/other Affects Versions: 4.10 Reporter: Markus Heiden Priority: Minor Labels: patch, performance Attachments: AnalyzingSuggester.diff Converted visited to a BitSet and sized it correctly in AnalyzingSuggester.topoSortStates(Automaton). This avoids dynamic resizing of the set.
[jira] [Updated] (LUCENE-2878) Allow Scorer to expose positions and payloads aka. nuke spans
[ https://issues.apache.org/jira/browse/LUCENE-2878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Woodward updated LUCENE-2878: -- Attachment: LUCENE-2878.patch Talk of a 5.0 release has got me working on this again. Here's my latest patch against trunk. * consolidates DocsEnum and DocsAndPositionsEnum * all Scorers now implement nextPosition(), startPosition(), endPosition(), startOffset() and endOffset(), including ExactPhraseScorer and SloppyPhraseScorer * Collectors have a postingFeatures() method to indicate the features they require from scorers (frequencies, positions, offsets, payloads, etc) * adds a number of new queries in oal.search.posfilter that can use nextPosition() There are a few test failures still, which I'm chasing down. Still to do: * work out a decent way of ensuring that position filter queries don't get run inadvertently on subqueries from separate fields * clean up the docs() and docsAndPositions() API in PostingsReader * javadocs, etc * payload queries * nuke spans! This has moved on a long way from the existing branches, to the point that it's probably worth deleting them and opening a new one. I've gone with adding nextPosition() directly to Scorer, which I think ends up with a cleaner API than the IntervalIterators that Simon and I worked on last year. It's been three years, let's get this done. Allow Scorer to expose positions and payloads aka. 
nuke spans -- Key: LUCENE-2878 URL: https://issues.apache.org/jira/browse/LUCENE-2878 Project: Lucene - Core Issue Type: Improvement Components: core/search Affects Versions: Positions Branch Reporter: Simon Willnauer Assignee: Robert Muir Labels: gsoc2014 Fix For: Positions Branch Attachments: LUCENE-2878-OR.patch, LUCENE-2878-vs-trunk.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878_trunk.patch, LUCENE-2878_trunk.patch, PosHighlighter.patch, PosHighlighter.patch Currently we have two somewhat separate types of queries, the ones which can make use of positions (mainly spans) and payloads (spans). Yet Span*Query doesn't really do scoring comparable to what other queries do, and at the end of the day they duplicate a lot of code all over Lucene. Span*Queries are also limited to other Span*Query instances, such that you cannot use a TermQuery or a BooleanQuery with SpanNear or anything like that. Beyond the Span*Query limitation, other queries lack a quite interesting feature: they cannot score based on term proximity, since scorers don't expose any positional information. All those problems bugged me for a while now, so I started working on that using the bulkpostings API. I would have done that first cut on trunk, but TermScorer there works on a BlockReader that does not expose positions, while the one in this branch does.
I started adding a new Positions class which users can pull from a scorer. To prevent unnecessary positions enums, I added ScorerContext#needsPositions and eventually Scorer#needsPayloads to create the corresponding enum on demand. Yet, currently only TermQuery / TermScorer implements this API; others simply return null instead. To show that the API really works and that our BulkPostings work fine with positions too, I cut over TermSpanQuery to use a TermScorer under the hood and nuked TermSpans entirely. A nice side effect of this was that the Position BulkReading implementation got some exercise; it now :) works with positions throughout, while payloads for bulk reading are kind of experimental in the patch and only work with the Standard codec. So all spans now work on top of TermScorer (I truly hate spans since today), including the ones that need payloads (StandardCodec ONLY)!! I didn't bother to implement the other codecs yet since I want to get feedback on the API and on this first cut before I go on with it. I will upload the corresponding patch in a minute. I also had to cut over SpanQuery.getSpans(IR) to SpanQuery.getSpans(AtomicReaderContext), which I should probably do on trunk first, but after that pain today I need a break first :). The patch passes all core tests (org.apache.lucene.search.highlight.HighlighterTest still fails but I didn't look into the MemoryIndex
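The consumption pattern both comments describe — pulling positions lazily from a scorer instead of going through a separate Spans hierarchy — can be sketched with a toy interface. This is a hypothetical simplification, not the patch's actual Scorer API: the method names nextPosition() and freq() mirror the ones mentioned above, but the backing array and factory are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of a positions-aware scorer: position decoding is
// only paid for when the caller actually iterates positions.
public class PositionsSketch {
    public interface PositionScorer {
        int freq();          // how many positions this doc has
        int nextPosition();  // valid to call freq() times per doc
    }

    // Toy implementation backed by an in-memory array; a real codec
    // would decode positions from the postings on demand.
    public static PositionScorer fromArray(int[] positions) {
        return new PositionScorer() {
            private int upto = 0;
            public int freq() { return positions.length; }
            public int nextPosition() { return positions[upto++]; }
        };
    }

    public static List<Integer> collectPositions(PositionScorer scorer) {
        List<Integer> out = new ArrayList<>();
        for (int i = 0; i < scorer.freq(); i++) out.add(scorer.nextPosition());
        return out;
    }
}
```

A proximity-scoring collector would call nextPosition() inside its per-document loop only when it opted into positions, which is exactly what a needsPositions/postingFeatures flag is for.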
[JENKINS] Lucene-Solr-Tests-4.x-Java7 - Build # 2116 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java7/2116/ 1 tests failed. REGRESSION: org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeRapidAdds Error Message: 2: soft wasn't fast enough Stack Trace: java.lang.AssertionError: 2: soft wasn't fast enough at __randomizedtesting.SeedInfo.seed([920A9B1403E998FD:CE1F352DE86BD985]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNotNull(Assert.java:526) at org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeRapidAdds(SoftAutoCommitTest.java:316) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at
[jira] [Comment Edited] (SOLR-6365) specify appends, defaults, invariants outside of the component
[ https://issues.apache.org/jira/browse/SOLR-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14138749#comment-14138749 ] Noble Paul edited comment on SOLR-6365 at 9/18/14 11:26 AM: yeah let's make it initArgs how about just initParams ? was (Author: noble.paul): yeah let's make it initArgs how about just args ? specify appends, defaults, invariants outside of the component --- Key: SOLR-6365 URL: https://issues.apache.org/jira/browse/SOLR-6365 Project: Solr Issue Type: Improvement Reporter: Noble Paul Assignee: Noble Paul Fix For: 4.11, 5.0 Attachments: SOLR-6365.patch The components are configured in solrconfig.xml mostly for specifying these extra parameters. If we separate these out, we can avoid specifying the components altogether and make solrconfig much simpler. Eventually we want users to see all functions as paths instead of components, and to control these params from outside, through an API, persisted in ZK. Objectives: * define standard components implicitly and let users override some params only * reuse standard params across components * define multiple param sets and mix and match these params at request time Example:
{code:xml}
<!-- use json for all paths and _txt as the default search field -->
<paramSet name="global" path="/**">
  <lst name="defaults">
    <str name="wt">json</str>
    <str name="df">_txt</str>
  </lst>
</paramSet>
{code}
Other examples:
{code:xml}
<paramSet name="a" path="/dump3,/root/*,/root1/**">
  <lst name="defaults"><str name="a">A</str></lst>
  <lst name="invariants"><str name="b">B</str></lst>
  <lst name="appends"><str name="c">C</str></lst>
</paramSet>
<requestHandler name="/dump3" class="DumpRequestHandler"/>
<requestHandler name="/dump4" class="DumpRequestHandler"/>
<requestHandler name="/root/dump5" class="DumpRequestHandler"/>
<requestHandler name="/root1/anotherlevel/dump6" class="DumpRequestHandler"/>
<requestHandler name="/dump1" class="DumpRequestHandler" paramSet="a"/>
<requestHandler name="/dump2" class="DumpRequestHandler" paramSet="a">
  <lst name="defaults"><str name="a">A1</str></lst>
  <lst name="invariants"><str name="b">B1</str></lst>
  <lst name="appends"><str name="c">C1</str></lst>
</requestHandler>
{code}
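The defaults/appends/invariants semantics that the param sets above would carry can be sketched as a merge over request parameters. This is a hypothetical simplification (multi-valued params modeled as lists, no path matching), not Solr's actual SolrParams implementation, but it captures the precedence the three sections are meant to have:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of Solr-style param merging: defaults fill gaps in the
// request, appends add extra values, invariants always win.
public class ParamMergeSketch {
    public static Map<String, List<String>> merge(
            Map<String, List<String>> request,
            Map<String, List<String>> defaults,
            Map<String, List<String>> appends,
            Map<String, List<String>> invariants) {
        Map<String, List<String>> out = new LinkedHashMap<>();
        request.forEach((k, v) -> out.put(k, new ArrayList<>(v)));
        // defaults: only where the request said nothing
        defaults.forEach((k, v) -> out.putIfAbsent(k, new ArrayList<>(v)));
        // appends: extra values on top of whatever is there
        appends.forEach((k, v) ->
            out.computeIfAbsent(k, x -> new ArrayList<>()).addAll(v));
        // invariants: unconditionally override the request
        invariants.forEach((k, v) -> out.put(k, new ArrayList<>(v)));
        return out;
    }
}
```

With the "other examples" config above, a request to /dump2 supplying b=reqB would still see b=B1, because invariants are applied last.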
[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4863 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4863/ 2 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandlerBackup Error Message: 1 thread leaked from SUITE scope at org.apache.solr.handler.TestReplicationHandlerBackup: 1) Thread[id=1138, name=Thread-629, state=RUNNABLE, group=TGRP-TestReplicationHandlerBackup] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at java.net.Socket.connect(Socket.java:528) at sun.net.NetworkClient.doConnect(NetworkClient.java:180) at sun.net.www.http.HttpClient.openServer(HttpClient.java:432) at sun.net.www.http.HttpClient.openServer(HttpClient.java:527) at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652) at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323) at java.net.URL.openStream(URL.java:1037) at org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:318) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.handler.TestReplicationHandlerBackup: 1) Thread[id=1138, name=Thread-629, state=RUNNABLE, group=TGRP-TestReplicationHandlerBackup] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at 
java.net.Socket.connect(Socket.java:528) at sun.net.NetworkClient.doConnect(NetworkClient.java:180) at sun.net.www.http.HttpClient.openServer(HttpClient.java:432) at sun.net.www.http.HttpClient.openServer(HttpClient.java:527) at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652) at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323) at java.net.URL.openStream(URL.java:1037) at org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:318) at __randomizedtesting.SeedInfo.seed([9B947A8132805977]:0) FAILED: junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandlerBackup Error Message: There are still zombie threads that couldn't be terminated:1) Thread[id=1138, name=Thread-629, state=RUNNABLE, group=TGRP-TestReplicationHandlerBackup] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at java.net.Socket.connect(Socket.java:528) at sun.net.NetworkClient.doConnect(NetworkClient.java:180) at sun.net.www.http.HttpClient.openServer(HttpClient.java:432) at sun.net.www.http.HttpClient.openServer(HttpClient.java:527) at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652) at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323) at java.net.URL.openStream(URL.java:1037) at org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:318) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated: 1) Thread[id=1138, name=Thread-629, state=RUNNABLE, 
group=TGRP-TestReplicationHandlerBackup] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at java.net.Socket.connect(Socket.java:528) at sun.net.NetworkClient.doConnect(NetworkClient.java:180) at
[jira] [Commented] (LUCENE-5944) move trunk to 6.x, create branch_5x
[ https://issues.apache.org/jira/browse/LUCENE-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14138823#comment-14138823 ] Uwe Schindler commented on LUCENE-5944: --- I renamed all Jenkins Jobs and changed them to use the other branch. We now also need to rename the JIRA version and add the new 5.0 to all additional backported issues. move trunk to 6.x, create branch_5x --- Key: LUCENE-5944 URL: https://issues.apache.org/jira/browse/LUCENE-5944 Project: Lucene - Core Issue Type: Improvement Reporter: Robert Muir In order to actually add real features (as opposed to just spending 24/7 fixing bugs and back compat), I need a trunk that doesn't have the back compat handcuffs. In the meantime, we should rename the current trunk (which is totally tied down in back compat already, without even a single release!) to branch_5x while you guys (I won't be doing any back compat anymore) figure out what you want to do with the back compat policy.
[jira] [Commented] (LUCENE-5944) move trunk to 6.x, create branch_5x
[ https://issues.apache.org/jira/browse/LUCENE-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14138834#comment-14138834 ] ASF subversion and git services commented on LUCENE-5944: - Commit 1625946 from [~thetaphi] in branch 'dev/branches/branch_4x' [ https://svn.apache.org/r1625946 ] LUCENE-5944: delete old branch_4x move trunk to 6.x, create branch_5x --- Key: LUCENE-5944 URL: https://issues.apache.org/jira/browse/LUCENE-5944 Project: Lucene - Core Issue Type: Improvement Reporter: Robert Muir In order to actually add real features (as opposed to just spending 24/7 fixing bugs and back compat), I need a trunk that doesn't have the back compat handcuffs. In the meantime, we should rename the current trunk (which is totally tied down in back compat already, without even a single release!) to branch_5x while you guys (I won't be doing any back compat anymore) figure out what you want to do with the back compat policy.
[jira] [Updated] (SOLR-6365) specify appends, defaults, invariants outside of the component
[ https://issues.apache.org/jira/browse/SOLR-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul updated SOLR-6365: - Attachment: SOLR-6365.patch changed the tag name from paramSet to initParams specify appends, defaults, invariants outside of the component --- Key: SOLR-6365 URL: https://issues.apache.org/jira/browse/SOLR-6365 Project: Solr Issue Type: Improvement Reporter: Noble Paul Assignee: Noble Paul Fix For: 5.0, 6.0 Attachments: SOLR-6365.patch, SOLR-6365.patch The components are configured in solrconfig.xml mostly for specifying these extra parameters. If we separate these out, we can avoid specifying the components altogether and make solrconfig much simpler. Eventually we want users to see all functions as paths instead of components, and to control these params from outside, through an API, persisted in ZK. Objectives: * define standard components implicitly and let users override some params only * reuse standard params across components * define multiple param sets and mix and match these params at request time Example:
{code:xml}
<!-- use json for all paths and _txt as the default search field -->
<paramSet name="global" path="/**">
  <lst name="defaults">
    <str name="wt">json</str>
    <str name="df">_txt</str>
  </lst>
</paramSet>
{code}
Other examples:
{code:xml}
<paramSet name="a" path="/dump3,/root/*,/root1/**">
  <lst name="defaults"><str name="a">A</str></lst>
  <lst name="invariants"><str name="b">B</str></lst>
  <lst name="appends"><str name="c">C</str></lst>
</paramSet>
<requestHandler name="/dump3" class="DumpRequestHandler"/>
<requestHandler name="/dump4" class="DumpRequestHandler"/>
<requestHandler name="/root/dump5" class="DumpRequestHandler"/>
<requestHandler name="/root1/anotherlevel/dump6" class="DumpRequestHandler"/>
<requestHandler name="/dump1" class="DumpRequestHandler" paramSet="a"/>
<requestHandler name="/dump2" class="DumpRequestHandler" paramSet="a">
  <lst name="defaults"><str name="a">A1</str></lst>
  <lst name="invariants"><str name="b">B1</str></lst>
  <lst name="appends"><str name="c">C1</str></lst>
</requestHandler>
{code}
[jira] [Updated] (LUCENE-5944) move trunk to 6.x, create branch_5x
[ https://issues.apache.org/jira/browse/LUCENE-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler updated LUCENE-5944: -- Fix Version/s: 5.0 move trunk to 6.x, create branch_5x --- Key: LUCENE-5944 URL: https://issues.apache.org/jira/browse/LUCENE-5944 Project: Lucene - Core Issue Type: Improvement Reporter: Robert Muir Fix For: 5.0 In order to actually add real features (as opposed to just spending 24/7 fixing bugs and back compat), I need a trunk that doesn't have the back compat handcuffs. In the meantime, we should rename the current trunk (which is totally tied down in back compat already, without even a single release!) to branch_5x while you guys (I won't be doing any back compat anymore) figure out what you want to do with the back compat policy.
[jira] [Updated] (SOLR-6441) MoreLikeThis support for stopwords as in Lucene
[ https://issues.apache.org/jira/browse/SOLR-6441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeroen Steggink updated SOLR-6441: -- Attachment: SOLR-6441.patch MoreLikeThis support for stopwords as in Lucene --- Key: SOLR-6441 URL: https://issues.apache.org/jira/browse/SOLR-6441 Project: Solr Issue Type: Improvement Components: MoreLikeThis Affects Versions: 4.9 Reporter: Jeroen Steggink Priority: Minor Labels: difficulty-easy, impact-low, workaround-exists Fix For: 4.10, 5.0 Attachments: SOLR-6441.patch, SOLR-6441.patch In the Lucene implementation of MoreLikeThis, it's possible to add a list of stopwords which are considered uninteresting and are ignored. It would be a great addition to the MoreLikeThisHandler to be able to specify a list of stopwords.
[jira] [Commented] (LUCENE-5944) move trunk to 6.x, create branch_5x
[ https://issues.apache.org/jira/browse/LUCENE-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14138842#comment-14138842 ] Uwe Schindler commented on LUCENE-5944: --- OK: - I renamed JIRA Version 4.11 to 5.0 - I renamed JIRA version 5.0 to 6.0 For all additional backports from trunk to 5.x, we should add the 5.0 JIRA fix version. 4.11 is gone. branch_4x is gone, too, please SVN switch branch_4x to branch_5x. We now must bump the version numbers in the build files. move trunk to 6.x, create branch_5x --- Key: LUCENE-5944 URL: https://issues.apache.org/jira/browse/LUCENE-5944 Project: Lucene - Core Issue Type: Improvement Reporter: Robert Muir Fix For: 5.0 In order to actually add real features (as opposed to just spending 24/7 fixing bugs and back compat), I need a trunk that doesn't have the back compat handcuffs. In the meantime, we should rename the current trunk (which is totally tied down in back compat already, without even a single release!) to branch_5x while you guys (I won't be doing any back compat anymore) figure out what you want to do with the back compat policy.
[jira] [Comment Edited] (LUCENE-5944) move trunk to 6.x, create branch_5x
[ https://issues.apache.org/jira/browse/LUCENE-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14138842#comment-14138842 ] Uwe Schindler edited comment on LUCENE-5944 at 9/18/14 12:01 PM: - OK: - I renamed JIRA version 5.0 to 6.0 - I renamed JIRA Version 4.11 to 5.0 For all additional backports from trunk to 5.x, we should add the 5.0 JIRA fix version. 4.11 is gone. branch_4x is gone, too, please SVN switch branch_4x to branch_5x. We now must bump the version numbers in the build files. was (Author: thetaphi): OK: - I renamed JIRA Version 4.11 to 5.0 - I renamed JIRA version 5.0 to 6.0 For all additional backports from trunk to 5.x, we should add the 5.0 JIRA fix version. 4.11 is gone. branch_4x is gone, too, please SVN switch branch_4x to branch_5x. We now must bump the version numbers in the build files. move trunk to 6.x, create branch_5x --- Key: LUCENE-5944 URL: https://issues.apache.org/jira/browse/LUCENE-5944 Project: Lucene - Core Issue Type: Improvement Reporter: Robert Muir Fix For: 5.0 In order to actually add real features (as opposed to just spending 24/7 fixing bugs and back compat), I need a trunk that doesn't have the back compat handcuffs. In the meantime, we should rename the current trunk (which is totally tied down in back compat already, without even a single release!) to branch_5x while you guys (I won't be doing any back compat anymore) figure out what you want to do with the back compat policy.
[jira] [Comment Edited] (LUCENE-5944) move trunk to 6.x, create branch_5x
[ https://issues.apache.org/jira/browse/LUCENE-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14138842#comment-14138842 ] Uwe Schindler edited comment on LUCENE-5944 at 9/18/14 12:01 PM: - OK: - I renamed JIRA version 5.0 to 6.0 - I renamed JIRA Version 4.11 to 5.0 For all additional backports from trunk to 5.x, we should add the 5.0 JIRA fix version. 4.11 is gone. branch_4x is gone, too, please SVN switch branch_4x to branch_5x. We now must bump the version numbers in the build files. All Jenins Jobs were renamed, too (ASF, Policeman, Flonkings). was (Author: thetaphi): OK: - I renamed JIRA version 5.0 to 6.0 - I renamed JIRA Version 4.11 to 5.0 For all additional backports from trunk to 5.x, we should add the 5.0 JIRA fix version. 4.11 is gone. branch_4x is gone, too, please SVN switch branch_4x to branch_5x. We now must bump the version numbers in the build files. move trunk to 6.x, create branch_5x --- Key: LUCENE-5944 URL: https://issues.apache.org/jira/browse/LUCENE-5944 Project: Lucene - Core Issue Type: Improvement Reporter: Robert Muir Fix For: 5.0 In order to actually add real features (as opposed to just spending 24/7 fixing bugs and back compat), I need a trunk that doesn't have the back compat handcuffs. In the meantime, we should rename the current trunk (which is totally tied down in back compat already, without even a single release!) to branch_5x while you guys (I won't be doing any back compat anymore) figure out what you want to do with the back compat policy.
[jira] [Comment Edited] (LUCENE-5944) move trunk to 6.x, create branch_5x
[ https://issues.apache.org/jira/browse/LUCENE-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14138842#comment-14138842 ] Uwe Schindler edited comment on LUCENE-5944 at 9/18/14 12:02 PM: - OK: - I renamed JIRA version 5.0 to 6.0 - I renamed JIRA Version 4.11 to 5.0 For all additional backports from trunk to 5.x, we should add the 5.0 JIRA fix version. 4.11 is gone. branch_4x is gone, too, please SVN switch branch_4x to branch_5x. We now must bump the version numbers in the build files. All Jenkins Jobs were renamed, too (ASF, Policeman, Flonkings). was (Author: thetaphi): OK: - I renamed JIRA version 5.0 to 6.0 - I renamed JIRA Version 4.11 to 5.0 For all additional backports from trunk to 5.x, we should add the 5.0 JIRA fix version. 4.11 is gone. branch_4x is gone, too, please SVN switch branch_4x to branch_5x. We now must bump the version numbers in the build files. All Jenins Jobs were renamed, too (ASF, Policeman, Flonkings). move trunk to 6.x, create branch_5x --- Key: LUCENE-5944 URL: https://issues.apache.org/jira/browse/LUCENE-5944 Project: Lucene - Core Issue Type: Improvement Reporter: Robert Muir Fix For: 5.0 In order to actually add real features (as opposed to just spending 24/7 fixing bugs and back compat), I need a trunk that doesn't have the back compat handcuffs. In the meantime, we should rename the current trunk (which is totally tied down in back compat already, without even a single release!) to branch_5x while you guys (i wont be doing any back compat anymore) figure out what you want to do with the back compat policy. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-6441) MoreLikeThis support for stopwords as in Lucene
[ https://issues.apache.org/jira/browse/SOLR-6441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14138844#comment-14138844 ] Jeroen Steggink commented on SOLR-6441: --- I have posted a new patch according to the remarks Steve posted. MoreLikeThis support for stopwords as in Lucene --- Key: SOLR-6441 URL: https://issues.apache.org/jira/browse/SOLR-6441 Project: Solr Issue Type: Improvement Components: MoreLikeThis Affects Versions: 4.9 Reporter: Jeroen Steggink Priority: Minor Labels: difficulty-easy, impact-low, workaround-exists Fix For: 4.10, 5.0 Attachments: SOLR-6441.patch, SOLR-6441.patch In the Lucene implementation of MoreLikeThis, it's possible to add a list of stopwords which are considered uninteresting and are ignored. It would be a great addition to the MoreLikeThisHandler to be able to specify a list of stopwords. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-6441) MoreLikeThis support for stopwords as in Lucene
[ https://issues.apache.org/jira/browse/SOLR-6441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeroen Steggink updated SOLR-6441: -- Fix Version/s: (was: 4.10) MoreLikeThis support for stopwords as in Lucene --- Key: SOLR-6441 URL: https://issues.apache.org/jira/browse/SOLR-6441 Project: Solr Issue Type: Improvement Components: MoreLikeThis Affects Versions: 4.10 Reporter: Jeroen Steggink Priority: Minor Labels: difficulty-easy, impact-low, workaround-exists Fix For: 4.10.1, 5.0 Attachments: SOLR-6441.patch, SOLR-6441.patch In the Lucene implementation of MoreLikeThis, it's possible to add a list of stopwords which are considered uninteresting and are ignored. It would be a great addition to the MoreLikeThisHandler to be able to specify a list of stopwords. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-6441) MoreLikeThis support for stopwords as in Lucene
[ https://issues.apache.org/jira/browse/SOLR-6441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeroen Steggink updated SOLR-6441: -- Fix Version/s: 4.10.1 MoreLikeThis support for stopwords as in Lucene --- Key: SOLR-6441 URL: https://issues.apache.org/jira/browse/SOLR-6441 Project: Solr Issue Type: Improvement Components: MoreLikeThis Affects Versions: 4.10 Reporter: Jeroen Steggink Priority: Minor Labels: difficulty-easy, impact-low, workaround-exists Fix For: 4.10.1, 5.0 Attachments: SOLR-6441.patch, SOLR-6441.patch In the Lucene implementation of MoreLikeThis, it's possible to add a list of stopwords which are considered uninteresting and are ignored. It would be a great addition to the MoreLikeThisHandler to be able to specify a list of stopwords. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-6441) MoreLikeThis support for stopwords as in Lucene
[ https://issues.apache.org/jira/browse/SOLR-6441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeroen Steggink updated SOLR-6441: -- Affects Version/s: (was: 4.9) 4.10 MoreLikeThis support for stopwords as in Lucene --- Key: SOLR-6441 URL: https://issues.apache.org/jira/browse/SOLR-6441 Project: Solr Issue Type: Improvement Components: MoreLikeThis Affects Versions: 4.10 Reporter: Jeroen Steggink Priority: Minor Labels: difficulty-easy, impact-low, workaround-exists Fix For: 4.10.1, 5.0 Attachments: SOLR-6441.patch, SOLR-6441.patch In the Lucene implementation of MoreLikeThis, it's possible to add a list of stopwords which are considered uninteresting and are ignored. It would be a great addition to the MoreLikeThisHandler to be able to specify a list of stopwords. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
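The Lucene-side behavior that SOLR-6441 wants to expose is simple: any candidate term found in a configured stopword set is treated as uninteresting and skipped. A stdlib-only sketch of that filtering step, under the assumption that terms arrive as plain strings (the class and method names here are illustrative, not Solr's or Lucene's actual API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class StopwordFilterSketch {
    // Keep only candidate terms that are not in the stopword set,
    // mimicking how MoreLikeThis ignores uninteresting words.
    public static List<String> interestingTerms(List<String> candidates, Set<String> stopwords) {
        List<String> result = new ArrayList<>();
        for (String term : candidates) {
            if (!stopwords.contains(term)) {
                result.add(term);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Set<String> stop = new HashSet<>(Arrays.asList("the", "a", "of"));
        // "the" and "a" are dropped; "lucene" and "index" survive.
        System.out.println(interestingTerms(Arrays.asList("the", "lucene", "a", "index"), stop));
    }
}
```

In the real Lucene implementation the set is consulted while scoring term frequencies; the handler-level change in the patch is mostly about wiring a configurable stopword list through to that check.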
[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_20) - Build # 11280 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11280/ Java: 64bit/jdk1.8.0_20 -XX:+UseCompressedOops -XX:+UseParallelGC All tests passed Build Log: [...truncated 60177 lines...] -documentation-lint: [jtidy] Checking for broken html (such as invalid tags)... [delete] Deleting directory /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/jtidy_tmp [echo] Checking for broken links... [exec] [exec] Crawl/parse... [exec] [exec] Verify... [echo] Checking for malformed docs... [exec] [exec] /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/docs/solr-analytics/overview-summary.html [exec] missing: org.apache.solr.handler.component [exec] [exec] Missing javadocs were found! BUILD FAILED /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:491: The following error occurred while executing this line: /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:78: The following error occurred while executing this line: /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build.xml:548: The following error occurred while executing this line: /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build.xml:564: The following error occurred while executing this line: /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/common-build.xml:2471: exec returned: 1 Total time: 111 minutes 19 seconds Build step 'Invoke Ant' marked build as failure [description-setter] Description set: Java: 64bit/jdk1.8.0_20 -XX:+UseCompressedOops -XX:+UseParallelGC Archiving artifacts Recording test results Email was triggered for: Failure - Any Sending email for trigger: Failure - Any - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-6526) Solr Streaming API
[ https://issues.apache.org/jira/browse/SOLR-6526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-6526: - Description: It would be great if there was a SolrJ library that could connect to Solr's /export handler (SOLR-5244) and perform streaming operations on the sorted result sets. This ticket defines the base interfaces and implementations for the Streaming API. The base API contains three classes: *SolrStream*: This represents a stream from a single Solr instance. It speaks directly to the /export handler and provides methods to read() Tuples and close() the stream *CloudSolrStream*: This represents a stream from a SolrCloud collection. It speaks with Zk to discover the Solr instances in the collection and then creates SolrStreams to make the requests. The results from the underlying streams are merged inline to produce a single sorted stream of tuples. *Tuple*: The data structure returned by the read() method of the SolrStream API. It is nested to support grouping and Cartesian product set operations. Once these base classes are implemented it paves the way for building *Decorator* streams that perform operations on the sorted Tuple sets. For example: {code} //Create three CloudSolrStreams to different solr cloud clusters. They could be anywhere in the world. SolrStream stream1 = new CloudSolrStream(zkUrl1, queryRequest1, a); // Alias this stream as a SolrStream stream2 = new CloudSolrStream(zkUrl2, queryRequest2, b); // Alias this stream as b SolrStream stream3 = new CloudSolrStream(zkUrl3, queryRequest3, c); // Alias this stream as c // Merge Join stream1 and stream2 using a comparator to compare tuples. 
MergeJoinStream joinStream1 = new MergeJoinStream(stream1, stream2, new MyComp()); //Hash join the tuples from the joinStream1 with stream3 the HashKey()'s define the hashKeys for tuples HashJoinStream joinStream2 = new HashJoinStream(joinStream1,stream3, new HashKey(), new HashKey()); //Sum field1 from SumStream sumStream1 = new SumStream(joinStream2, a.field1); SumStream sumStream2 = new SumStream(sumStream1, b.field2); Tuple t = null; //Read from the stream until it's finished. while((t = sumStream2.read()) != null); //Get the sums from the joined data. long sum1 = sumStream1.getSum(); long sum2 = sumStream2.getSum(); {code} was: It would be great if there was a SolrJ library that could connect to Solr's /export handler (SOLR-5244) and perform streaming operations on the sorted result sets. This ticket defines the base interfaces and implementations for the Streaming API. The base API contains three classes: *SolrStream*: This represents a stream from a single Solr instance. It speaks directly to the /export handler and provides methods to read() Tuples and close() the stream *CloudSolrStream*: This represents a stream from a SolrCloud collection. It speaks with Zk to discover the Solr instances in the collection and then creates SolrStreams to make the requests. The results from the underlying streams are merged inline to produce a single sorted stream of tuples. *Tuple*: The data structure returned by the read() method of the SolrStream API. It is nested to support grouping and Cartesian product set operations. Once these base classes are implemented it paves the way for building *Decorator* streams that perform operations on the sorted Tuple sets. 
For example a CollapseStream could be created: {code} CollapseStream collapseStream = new CollapseStream(new CloudSolrStream(zkUrl, queryRequest)); Tuple tuple = null; while((tuple = collapseStream.read()) != null) { } {code} Solr Streaming API -- Key: SOLR-6526 URL: https://issues.apache.org/jira/browse/SOLR-6526 Project: Solr Issue Type: New Feature Components: clients - java Reporter: Joel Bernstein Fix For: 6.0 Attachments: SOLR-6526.patch It would be great if there was a SolrJ library that could connect to Solr's /export handler (SOLR-5244) and perform streaming operations on the sorted result sets. This ticket defines the base interfaces and implementations for the Streaming API. The base API contains three classes: *SolrStream*: This represents a stream from a single Solr instance. It speaks directly to the /export handler and provides methods to read() Tuples and close() the stream *CloudSolrStream*: This represents a stream from a SolrCloud collection. It speaks with Zk to discover the Solr instances in the collection and then creates SolrStreams to make the requests. The results from the underlying streams are merged inline to produce a single sorted stream of tuples. *Tuple*: The data structure returned by the read() method of the SolrStream API. It is nested to support grouping and Cartesian product set operations. Once these base classes
[jira] [Updated] (SOLR-6526) Solr Streaming API
[ https://issues.apache.org/jira/browse/SOLR-6526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-6526: - Description: It would be great if there was a SolrJ library that could connect to Solr's /export handler (SOLR-5244) and perform streaming operations on the sorted result sets. This ticket defines the base interfaces and implementations for the Streaming API. The base API contains three classes: *SolrStream*: This represents a stream from a single Solr instance. It speaks directly to the /export handler and provides methods to read() Tuples and close() the stream *CloudSolrStream*: This represents a stream from a SolrCloud collection. It speaks with Zk to discover the Solr instances in the collection and then creates SolrStreams to make the requests. The results from the underlying streams are merged inline to produce a single sorted stream of tuples. *Tuple*: The data structure returned by the read() method of the SolrStream API. It is nested to support grouping and Cartesian product set operations. Once these base classes are implemented it paves the way for building *Decorator* streams that perform operations on the sorted Tuple sets. For example: {code} //Create three CloudSolrStreams to different solr cloud clusters. They could be anywhere in the world. SolrStream stream1 = new CloudSolrStream(zkUrl1, queryRequest1, a); // Alias this stream as a SolrStream stream2 = new CloudSolrStream(zkUrl2, queryRequest2, b); // Alias this stream as b SolrStream stream3 = new CloudSolrStream(zkUrl3, queryRequest3, c); // Alias this stream as c // Merge Join stream1 and stream2 using a comparator to compare tuples. 
MergeJoinStream joinStream1 = new MergeJoinStream(stream1, stream2, new MyComp()); //Hash join the tuples from the joinStream1 with stream3 the HashKey()'s define the hashKeys for tuples HashJoinStream joinStream2 = new HashJoinStream(joinStream1,stream3, new HashKey(), new HashKey()); //Sum field1 from SumStream sumStream1 = new SumStream(joinStream2, a.field1); SumStream sumStream2 = new SumStream(sumStream1, b.field2); Tuple t = null; //Read from the stream until it's finished. while((t = sumStream2.read()) != null); //Get the sums from the joined data. long sum1 = sumStream1.getSum(); long sum2 = sumStream2.getSum(); {code} was: It would be great if there was a SolrJ library that could connect to Solr's /export handler (SOLR-5244) and perform streaming operations on the sorted result sets. This ticket defines the base interfaces and implementations for the Streaming API. The base API contains three classes: *SolrStream*: This represents a stream from a single Solr instance. It speaks directly to the /export handler and provides methods to read() Tuples and close() the stream *CloudSolrStream*: This represents a stream from a SolrCloud collection. It speaks with Zk to discover the Solr instances in the collection and then creates SolrStreams to make the requests. The results from the underlying streams are merged inline to produce a single sorted stream of tuples. *Tuple*: The data structure returned by the read() method of the SolrStream API. It is nested to support grouping and Cartesian product set operations. Once these base classes are implemented it paves the way for building *Decorator* streams that perform operations on the sorted Tuple sets. For example: {code} //Create three CloudSolrStreams to different solr cloud clusters. They could be anywhere in the world. 
SolrStream stream1 = new CloudSolrStream(zkUrl1, queryRequest1, a); // Alias this stream as a SolrStream stream2 = new CloudSolrStream(zkUrl2, queryRequest2, b); // Alias this stream as b SolrStream stream3 = new CloudSolrStream(zkUrl3, queryRequest3, c); // Alias this stream as c // Merge Join stream1 and stream2 using a comparator to compare tuples. MergeJoinStream joinStream1 = new MergeJoinStream(stream1, stream2, new MyComp()); //Hash join the tuples from the joinStream1 with stream3 the HashKey()'s define the hashKeys for tuples HashJoinStream joinStream2 = new HashJoinStream(joinStream1,stream3, new HashKey(), new HashKey()); //Sum field1 from SumStream sumStream1 = new SumStream(joinStream2, a.field1); AveStream sumStream2 = new SumStream(sumStream1, b.field2); Tuple t = null; //Read from the stream until it's finished. while((t != sumStream2().read()) != null); //Get the sums from the joined data. long sum1 = sumStream1.getSum(); long sum2 = sumStream2.getSum(); {code} Solr Streaming API -- Key: SOLR-6526 URL: https://issues.apache.org/jira/browse/SOLR-6526 Project: Solr Issue Type: New Feature Components: clients - java Reporter: Joel Bernstein Fix For: 6.0 Attachments: SOLR-6526.patch It would be great if there was a SolrJ
[jira] [Updated] (SOLR-6526) Solr Streaming API
[ https://issues.apache.org/jira/browse/SOLR-6526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-6526: - Description: It would be great if there was a SolrJ library that could connect to Solr's /export handler (SOLR-5244) and perform streaming operations on the sorted result sets. This ticket defines the base interfaces and implementations for the Streaming API. The base API contains three classes: *SolrStream*: This represents a stream from a single Solr instance. It speaks directly to the /export handler and provides methods to read() Tuples and close() the stream *CloudSolrStream*: This represents a stream from a SolrCloud collection. It speaks with Zk to discover the Solr instances in the collection and then creates SolrStreams to make the requests. The results from the underlying streams are merged inline to produce a single sorted stream of tuples. *Tuple*: The data structure returned by the read() method of the SolrStream API. It is nested to support grouping and Cartesian product set operations. Once these base classes are implemented it paves the way for building *Decorator* streams that perform operations on the sorted Tuple sets. For example: {code} //Create three CloudSolrStreams to different solr cloud clusters. They could be anywhere in the world. SolrStream stream1 = new CloudSolrStream(zkUrl1, queryRequest1, a); // Alias this stream as a SolrStream stream2 = new CloudSolrStream(zkUrl2, queryRequest2, b); // Alias this stream as b SolrStream stream3 = new CloudSolrStream(zkUrl3, queryRequest3, c); // Alias this stream as c // Merge Join stream1 and stream2 using a comparator to compare tuples. 
MergeJoinStream joinStream1 = new MergeJoinStream(stream1, stream2, new MyComp()); //Hash join the tuples from the joinStream1 with stream3 the HashKey()'s define the hashKeys for tuples HashJoinStream joinStream2 = new HashJoinStream(joinStream1,stream3, new HashKey(), new HashKey()); //Sum the aliased fields from the joined tuples. SumStream sumStream1 = new SumStream(joinStream2, a.field1); SumStream sumStream2 = new SumStream(sumStream1, b.field2); Tuple t = null; //Read from the stream until it's finished. while((t = sumStream2.read()) != null); //Get the sums from the joined data. long sum1 = sumStream1.getSum(); long sum2 = sumStream2.getSum(); {code} was: It would be great if there was a SolrJ library that could connect to Solr's /export handler (SOLR-5244) and perform streaming operations on the sorted result sets. This ticket defines the base interfaces and implementations for the Streaming API. The base API contains three classes: *SolrStream*: This represents a stream from a single Solr instance. It speaks directly to the /export handler and provides methods to read() Tuples and close() the stream *CloudSolrStream*: This represents a stream from a SolrCloud collection. It speaks with Zk to discover the Solr instances in the collection and then creates SolrStreams to make the requests. The results from the underlying streams are merged inline to produce a single sorted stream of tuples. *Tuple*: The data structure returned by the read() method of the SolrStream API. It is nested to support grouping and Cartesian product set operations. Once these base classes are implemented it paves the way for building *Decorator* streams that perform operations on the sorted Tuple sets. For example: {code} //Create three CloudSolrStreams to different solr cloud clusters. They could be anywhere in the world. 
SolrStream stream1 = new CloudSolrStream(zkUrl1, queryRequest1, a); // Alias this stream as a SolrStream stream2 = new CloudSolrStream(zkUrl2, queryRequest2, b); // Alias this stream as b SolrStream stream3 = new CloudSolrStream(zkUrl3, queryRequest3, c); // Alias this stream as c // Merge Join stream1 and stream2 using a comparator to compare tuples. MergeJoinStream joinStream1 = new MergeJoinStream(stream1, stream2, new MyComp()); //Hash join the tuples from the joinStream1 with stream3 the HashKey()'s define the hashKeys for tuples HashJoinStream joinStream2 = new HashJoinStream(joinStream1,stream3, new HashKey(), new HashKey()); //Sum field1 from SumStream sumStream1 = new SumStream(joinStream2, a.field1); SumStream sumStream2 = new SumStream(sumStream1, b.field2); Tuple t = null; //Read from the stream until it's finished. while((t != sumStream2().read()) != null); //Get the sums from the joined data. long sum1 = sumStream1.getSum(); long sum2 = sumStream2.getSum(); {code} Solr Streaming API -- Key: SOLR-6526 URL: https://issues.apache.org/jira/browse/SOLR-6526 Project: Solr Issue Type: New Feature Components: clients - java Reporter: Joel Bernstein Fix For: 6.0 Attachments: SOLR-6526.patch It would be
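The "merged inline to produce a single sorted stream of tuples" behavior that CloudSolrStream is described as providing is essentially a k-way merge of individually sorted inputs. A stdlib-only sketch of that idea, using pre-sorted lists of integers in place of real SolrStreams (all names here are illustrative, not the actual SolrJ classes):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class KWayMergeSketch {
    // Merge several individually-sorted lists into one sorted list,
    // analogous to CloudSolrStream merging sorted tuple streams from shards.
    public static List<Integer> merge(List<List<Integer>> sortedStreams) {
        // Heap entries: {value, streamIndex, positionInStream}
        PriorityQueue<int[]> heap = new PriorityQueue<>(Comparator.comparingInt(e -> e[0]));
        for (int i = 0; i < sortedStreams.size(); i++) {
            if (!sortedStreams.get(i).isEmpty()) {
                heap.add(new int[] {sortedStreams.get(i).get(0), i, 0});
            }
        }
        List<Integer> out = new ArrayList<>();
        while (!heap.isEmpty()) {
            int[] top = heap.poll();          // smallest head across all streams
            out.add(top[0]);
            List<Integer> src = sortedStreams.get(top[1]);
            int next = top[2] + 1;
            if (next < src.size()) {          // advance only the stream we consumed from
                heap.add(new int[] {src.get(next), top[1], next});
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(merge(Arrays.asList(
            Arrays.asList(1, 4, 7), Arrays.asList(2, 5), Arrays.asList(3, 6))));
    }
}
```

The heap-based merge reads each input only once and keeps memory proportional to the number of streams, which is what makes streaming over large /export result sets feasible.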
[jira] [Assigned] (LUCENE-5960) Avoid grow of Set in AnalyzingSuggester.topoSortStates(Automaton)
[ https://issues.apache.org/jira/browse/LUCENE-5960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless reassigned LUCENE-5960: -- Assignee: Michael McCandless Avoid grow of Set in AnalyzingSuggester.topoSortStates(Automaton) - Key: LUCENE-5960 URL: https://issues.apache.org/jira/browse/LUCENE-5960 Project: Lucene - Core Issue Type: Improvement Components: core/other Affects Versions: 4.10 Reporter: Markus Heiden Assignee: Michael McCandless Priority: Minor Labels: patch, performance Attachments: AnalyzingSuggester.diff Converted visited to a BitSet and sized it correctly in AnalyzingSuggester.topoSortStates(Automaton). This avoids dynamic resizing of the set. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-5960) Avoid grow of Set in AnalyzingSuggester.topoSortStates(Automaton)
[ https://issues.apache.org/jira/browse/LUCENE-5960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14138894#comment-14138894 ] Michael McCandless commented on LUCENE-5960: Thanks Markus, this looks great ... I'll commit shortly. Avoid grow of Set in AnalyzingSuggester.topoSortStates(Automaton) - Key: LUCENE-5960 URL: https://issues.apache.org/jira/browse/LUCENE-5960 Project: Lucene - Core Issue Type: Improvement Components: core/other Affects Versions: 4.10 Reporter: Markus Heiden Assignee: Michael McCandless Priority: Minor Labels: patch, performance Attachments: AnalyzingSuggester.diff Converted visited to a BitSet and sized it correctly in AnalyzingSuggester.topoSortStates(Automaton). This avoids dynamic resizing of the set. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-5960) Avoid grow of Set in AnalyzingSuggester.topoSortStates(Automaton)
[ https://issues.apache.org/jira/browse/LUCENE-5960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14138901#comment-14138901 ] ASF subversion and git services commented on LUCENE-5960: - Commit 1625965 from [~mikemccand] in branch 'dev/trunk' [ https://svn.apache.org/r1625965 ] LUCENE-5960: Use a more efficient bitset, not a Set<Integer>, to track visited states Avoid grow of Set in AnalyzingSuggester.topoSortStates(Automaton) - Key: LUCENE-5960 URL: https://issues.apache.org/jira/browse/LUCENE-5960 Project: Lucene - Core Issue Type: Improvement Components: core/other Affects Versions: 4.10 Reporter: Markus Heiden Assignee: Michael McCandless Priority: Minor Labels: patch, performance Attachments: AnalyzingSuggester.diff Converted visited to a BitSet and sized it correctly in AnalyzingSuggester.topoSortStates(Automaton). This avoids dynamic resizing of the set. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-5960) Avoid grow of Set in AnalyzingSuggester.topoSortStates(Automaton)
[ https://issues.apache.org/jira/browse/LUCENE-5960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14138903#comment-14138903 ] ASF subversion and git services commented on LUCENE-5960: - Commit 1625966 from [~mikemccand] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1625966 ] LUCENE-5960: Use a more efficient bitset, not a Set<Integer>, to track visited states Avoid grow of Set in AnalyzingSuggester.topoSortStates(Automaton) - Key: LUCENE-5960 URL: https://issues.apache.org/jira/browse/LUCENE-5960 Project: Lucene - Core Issue Type: Improvement Components: core/other Affects Versions: 4.10 Reporter: Markus Heiden Assignee: Michael McCandless Priority: Minor Labels: patch, performance Fix For: 5.0, 6.0 Attachments: AnalyzingSuggester.diff Converted visited to a BitSet and sized it correctly in AnalyzingSuggester.topoSortStates(Automaton). This avoids dynamic resizing of the set. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (LUCENE-5960) Avoid grow of Set in AnalyzingSuggester.topoSortStates(Automaton)
[ https://issues.apache.org/jira/browse/LUCENE-5960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless resolved LUCENE-5960. Resolution: Fixed Fix Version/s: 6.0 5.0 Thanks Markus! Avoid grow of Set in AnalyzingSuggester.topoSortStates(Automaton) - Key: LUCENE-5960 URL: https://issues.apache.org/jira/browse/LUCENE-5960 Project: Lucene - Core Issue Type: Improvement Components: core/other Affects Versions: 4.10 Reporter: Markus Heiden Assignee: Michael McCandless Priority: Minor Labels: patch, performance Fix For: 5.0, 6.0 Attachments: AnalyzingSuggester.diff Converted visited to a BitSet and sized it correctly in AnalyzingSuggester.topoSortStates(Automaton). This avoids dynamic resizing of the set. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
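The pattern behind the LUCENE-5960 patch — replacing a Set<Integer> of visited state ids with a java.util.BitSet sized up front to the automaton's state count — avoids per-element boxing and rehash-driven resizing. A stdlib-only sketch of that pattern (the traversal below is a plain reachability walk, not Lucene's actual topoSortStates code):

```java
import java.util.ArrayDeque;
import java.util.BitSet;

public class VisitedStatesSketch {
    // Count states reachable from `start`, tracking visited states in a
    // BitSet pre-sized to numStates: no boxing, no dynamic resizing.
    public static int countReachable(int[][] transitions, int start) {
        int numStates = transitions.length;
        BitSet visited = new BitSet(numStates); // sized once, like the patch
        ArrayDeque<Integer> stack = new ArrayDeque<>();
        stack.push(start);
        visited.set(start);
        while (!stack.isEmpty()) {
            int state = stack.pop();
            for (int next : transitions[state]) {
                if (!visited.get(next)) {
                    visited.set(next);
                    stack.push(next);
                }
            }
        }
        return visited.cardinality();
    }

    public static void main(String[] args) {
        // Adjacency lists per state; state 3 is unreachable from 0.
        int[][] t = { {1, 2}, {2}, {}, {0} };
        System.out.println(countReachable(t, 0));
    }
}
```

Since the number of automaton states is known before the traversal begins, sizing the BitSet once is strictly cheaper than letting a HashSet grow, which is the whole point of the patch.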
[jira] [Commented] (LUCENE-5944) move trunk to 6.x, create branch_5x
[ https://issues.apache.org/jira/browse/LUCENE-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14138919#comment-14138919 ] Shalin Shekhar Mangar commented on LUCENE-5944: --- Why did we delete branch_4x? When did we vote on not releasing 4.x anymore? move trunk to 6.x, create branch_5x --- Key: LUCENE-5944 URL: https://issues.apache.org/jira/browse/LUCENE-5944 Project: Lucene - Core Issue Type: Improvement Reporter: Robert Muir Fix For: 5.0 In order to actually add real features (as opposed to just spending 24/7 fixing bugs and back compat), I need a trunk that doesn't have the back compat handcuffs. In the meantime, we should rename the current trunk (which is totally tied down in back compat already, without even a single release!) to branch_5x while you guys (i wont be doing any back compat anymore) figure out what you want to do with the back compat policy. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-5944) move trunk to 6.x, create branch_5x
[ https://issues.apache.org/jira/browse/LUCENE-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14138925#comment-14138925 ] Robert Muir commented on LUCENE-5944: - I opened this issue 6 days ago and nobody objected. We can't support any more 4.x releases in Lucene. Each release is plagued by corruptions in the back compat support. It's just not sustainable. move trunk to 6.x, create branch_5x --- Key: LUCENE-5944 URL: https://issues.apache.org/jira/browse/LUCENE-5944 Project: Lucene - Core Issue Type: Improvement Reporter: Robert Muir Fix For: 5.0 In order to actually add real features (as opposed to just spending 24/7 fixing bugs and back compat), I need a trunk that doesn't have the back compat handcuffs. In the meantime, we should rename the current trunk (which is totally tied down in back compat already, without even a single release!) to branch_5x while you guys (i wont be doing any back compat anymore) figure out what you want to do with the back compat policy. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-5944) move trunk to 6.x, create branch_5x
[ https://issues.apache.org/jira/browse/LUCENE-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14138927#comment-14138927 ] ASF subversion and git services commented on LUCENE-5944: - Commit 1625976 from [~thetaphi] in branch 'dev/trunk' [ https://svn.apache.org/r1625976 ] LUCENE-5944: Bump version in trunk move trunk to 6.x, create branch_5x --- Key: LUCENE-5944 URL: https://issues.apache.org/jira/browse/LUCENE-5944 Project: Lucene - Core Issue Type: Improvement Reporter: Robert Muir Fix For: 5.0 In order to actually add real features (as opposed to just spending 24/7 fixing bugs and back compat), I need a trunk that doesn't have the back compat handcuffs. In the meantime, we should rename the current trunk (which is totally tied down in back compat already, without even a single release!) to branch_5x while you guys (i wont be doing any back compat anymore) figure out what you want to do with the back compat policy. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-6491) Umbrella JIRA for managing the leader assignments
[ https://issues.apache.org/jira/browse/SOLR-6491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14138928#comment-14138928 ] Erick Erickson commented on SOLR-6491: -- bq: The good thing about this command is I can just do it after a beer without screwing up the cluster badly Ahhh, yes. The drunken monkey proof API. I worked with a guy once who had a theory that if you couldn't understand your code after 3 beers it was too complicated and you should simplify it, although on the next day. Ever since I've tried to get places I work to institute the Friday Afternoon Beer Code Review but failed. I _really_ bet that would be a way to get more code reviews! What we're talking about here should do that however. There's the auto assign ticket to distribute the preferred roles evenly and the make it so ticket to actually change leadership. SOLR-6513 and SOLR-6517. We could also extend the leader election process to automatically do this, there's nothing precluding that here. So it's feeling like we can carry this idea forward, probably later today I'll post the assign-replica-property code for review and start working on 6513 and we'll go from there? Umbrella JIRA for managing the leader assignments - Key: SOLR-6491 URL: https://issues.apache.org/jira/browse/SOLR-6491 Project: Solr Issue Type: Improvement Affects Versions: 5.0, 6.0 Reporter: Erick Erickson Assignee: Erick Erickson Leaders can currently get out of balance due to the sequence of how nodes are brought up in a cluster. For very good reasons shard leadership cannot be permanently assigned. However, it seems reasonable that a sys admin could optionally specify that a particular node be the _preferred_ leader for a particular collection/shard. During leader election, preference would be given to any node so marked when electing any leader. 
So the proposal here is to add another role for preferredLeader to the collections API, something like ADDROLE?role=preferredLeader&collection=collection_name&shard=shardId Second, it would be good to have a new collections API call like ELECTPREFERREDLEADERS?collection=collection_name (I really hate that name so far, but you see the idea). That command would (asynchronously?) make an attempt to transfer leadership for each shard in a collection to the replica labeled as the preferred leader by the new ADDROLE role. I'm going to start working on this, any suggestions welcome! This will subsume several other JIRAs, I'll link them momentarily.
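The preference described in the proposal — during leader election, a replica marked preferredLeader is tried before the others — amounts to reordering the candidate list. A minimal illustrative sketch (the Replica type and preferredLeader flag here are hypothetical stand-ins, not Solr's actual election code):

```java
// Illustrative sketch of the proposed election preference: candidates marked
// preferredLeader sort ahead of the rest; a stable sort preserves the
// existing election order among equally-ranked replicas.
// The Replica class below is a hypothetical stand-in, not a Solr type.
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class LeaderPreference {
  public static class Replica {
    public final String name;
    public final boolean preferredLeader;
    public Replica(String name, boolean preferredLeader) {
      this.name = name;
      this.preferredLeader = preferredLeader;
    }
  }

  public static List<Replica> orderCandidates(List<Replica> candidates) {
    List<Replica> ordered = new ArrayList<>(candidates);
    // false sorts before true, so negate: preferred replicas come first.
    ordered.sort(Comparator.comparing((Replica r) -> !r.preferredLeader));
    return ordered;
  }
}
```

Because the sort is stable, a shard with no preferredLeader replica behaves exactly as before — which matches the intent that preference is optional.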
[jira] [Commented] (LUCENE-5959) Optimized memory management in Automaton.Builder.finish()
[ https://issues.apache.org/jira/browse/LUCENE-5959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14138931#comment-14138931 ] Michael McCandless commented on LUCENE-5959: I like this, I'll commit shortly. Thanks Markus! Optimized memory management in Automaton.Builder.finish() - Key: LUCENE-5959 URL: https://issues.apache.org/jira/browse/LUCENE-5959 Project: Lucene - Core Issue Type: Improvement Components: core/other Affects Versions: 4.10 Reporter: Markus Heiden Priority: Minor Labels: patch, performance Attachments: Automaton.diff, finish.patch Reworked Automaton.Builder.finish() to not allocate memory stepwise. Added growTransitions(int numTransitions) to be able to resize the transitions array just once.
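The idea behind the patch — when finish() already knows the final number of transitions, allocate the destination array once instead of growing it stepwise — can be sketched as follows (an illustration of the technique, not the actual Lucene code; the class and field names are hypothetical):

```java
// Sketch of the optimization: a single exact-size reallocation
// (growTransitions) versus repeated doubling (growStepwise), which may
// reallocate and copy O(log n) times. Names are illustrative only.
import java.util.Arrays;

public class TransitionBuffer {
  private int[] curTransitions = new int[0];

  // Stepwise growth: fine for incremental appends, wasteful when the
  // final size is already known.
  public void growStepwise(int minSize) {
    while (curTransitions.length < minSize) {
      int newSize = Math.max(4, curTransitions.length * 2);
      curTransitions = Arrays.copyOf(curTransitions, newSize);
    }
  }

  // Single resize: when the total transition count is known up front,
  // as in finish(), one allocation and one copy suffice.
  public void growTransitions(int numTransitions) {
    if (curTransitions.length < numTransitions) {
      curTransitions = Arrays.copyOf(curTransitions, numTransitions);
    }
  }

  public int capacity() { return curTransitions.length; }
}
```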
[jira] [Created] (SOLR-6533) Support editing common solrconfig.xml values
Noble Paul created SOLR-6533: Summary: Support editing common solrconfig.xml values Key: SOLR-6533 URL: https://issues.apache.org/jira/browse/SOLR-6533 Project: Solr Issue Type: Sub-task Reporter: Noble Paul There are a bunch of properties in solrconfig.xml which users want to edit. We will tackle these first. These properties will be persisted to a separate file called config.json (or whatever file). Instead of saving in the same format we will have well-known properties which users can directly edit {code} cores.transientCacheSize indexConfig.mergeFactor {code} The api will be modeled around the bulk schema API {code} curl http://localhost:8983/solr/collection1/config -H 'Content-type:application/json' -d '{ set-property : {index.mergeFactor:5}, unset-property:{cores.transientCacheSize:} }' {code} The values stored in the config.json will always take precedence and will be applied after loading solrconfig.xml. An http GET on /config path will give the real config that is applied.
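The precedence rule described here — overlay values from config.json are applied after solrconfig.xml loads, and always win — is essentially a map merge. A minimal sketch of that rule under assumed semantics (set-property overrides, unset-property maps to a null that removes the key; not Solr's actual implementation):

```java
// Sketch of the described precedence: overlay (config.json) entries are
// applied on top of the base config (solrconfig.xml), so overlay values
// always win; a null overlay value models unset-property removing a key.
// This is an illustration of the rule, not Solr's implementation.
import java.util.HashMap;
import java.util.Map;

public class ConfigOverlay {
  public static Map<String, Object> effectiveConfig(
      Map<String, Object> base, Map<String, Object> overlay) {
    Map<String, Object> merged = new HashMap<>(base);
    merged.putAll(overlay);                    // overlay overrides base
    merged.values().removeIf(v -> v == null); // nulls model unset-property
    return merged;
  }
}
```

An HTTP GET on /config would then serve this merged view — "the real config that is applied" — rather than the raw solrconfig.xml.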
[jira] [Commented] (SOLR-2357) Thread Local memory leaks on restart
[ https://issues.apache.org/jira/browse/SOLR-2357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14138963#comment-14138963 ] Matt C commented on SOLR-2357: -- threadlocal leaks are a known issue with tomcat 7.0.6, see http://wiki.apache.org/tomcat/MemoryLeakProtection Thread Local memory leaks on restart Key: SOLR-2357 URL: https://issues.apache.org/jira/browse/SOLR-2357 Project: Solr Issue Type: Bug Components: contrib - Solr Cell (Tika extraction), search Affects Versions: 1.4.1 Environment: Windows Server 2008, Apache Tomcat 7.0.8, Java 1.6.23 Reporter: Gus Heck Labels: memory_leak, threadlocal Restarting solr (via a change to a watched resource or via the manager app, for example) after submitting documents with Solr-Cell gives the following message (many many times), and causes Tomcat to shut down completely. SEVERE: The web application [/solr] created a ThreadLocal with key of type [org.apache.solr.common.util.DateUtil.ThreadLocalDateFormat] (value [org.apache.solr.common.util.DateUtil$ThreadLocalDateFormat@dc30dfa]) and a value of type [java.text.SimpleDateFormat] (value [java.text.SimpleDateFormat@5af7aed5]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak. Feb 10, 2011 7:17:53 AM org.apache.catalina.loader.WebappClassLoader checkThreadLocalMapForLeaks
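The pattern Tomcat is warning about is a ThreadLocal, holding a value loaded by the webapp's classloader, that is never removed from the container's pooled threads. The general remedy is to call ThreadLocal.remove() once the value is no longer needed. A minimal sketch of that remedy (illustrative; this is not Solr's actual DateUtil code):

```java
// Sketch of avoiding a ThreadLocal leak on container-managed threads:
// clear the entry in a finally block so pooled threads don't keep pinning
// the webapp's classloader after the application is stopped.
// SafeDateFormatter is an illustrative name, not a Solr class.
import java.text.SimpleDateFormat;
import java.util.Date;

public class SafeDateFormatter {
  private static final ThreadLocal<SimpleDateFormat> FORMAT =
      ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

  public static String format(Date d) {
    try {
      return FORMAT.get().format(d);
    } finally {
      // Removing the entry is what prevents the "failed to remove it when
      // the web application was stopped" warning on redeploy.
      FORMAT.remove();
    }
  }
}
```

In practice a servlet app would more likely clear such ThreadLocals once per request (e.g. in a filter) or in a ServletContextListener on shutdown, rather than on every call; the point is that something must call remove() before the thread returns to the container's pool.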
[jira] [Commented] (LUCENE-5944) move trunk to 6.x, create branch_5x
[ https://issues.apache.org/jira/browse/LUCENE-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14138969#comment-14138969 ] ASF subversion and git services commented on LUCENE-5944: - Commit 1625980 from [~thetaphi] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1625980 ] LUCENE-5944: Bump version in branch_5x
[jira] [Commented] (SOLR-6491) Umbrella JIRA for managing the leader assignments
[ https://issues.apache.org/jira/browse/SOLR-6491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14138967#comment-14138967 ] Noble Paul commented on SOLR-6491: -- bq. So it's feeling like we can carry this idea forward, yeah I would like it in the following order # rebalanceLeaders . params (collection, shard(optional)) # switchLeader . params (collection, shard, targetReplicaName) # I'm not a fan of the preferredLeader thing . This information is persisted and the ops guy will have no clue where he assigned the leader. Or worse, another ops guy would take over and he will be completely clueless . I'm already the culprit of the overseer role feature. But, my defense would be that, it is only one node for the entire cluster and it can't be too bad
[jira] [Commented] (LUCENE-5944) move trunk to 6.x, create branch_5x
[ https://issues.apache.org/jira/browse/LUCENE-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14138971#comment-14138971 ] ASF subversion and git services commented on LUCENE-5944: - Commit 1625983 from [~thetaphi] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1625983 ] LUCENE-5944: Bump Maven POM in branch_5x
[JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 95096 - Failure!
Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/95096/ 1 tests failed. REGRESSION: org.apache.lucene.util.TestVersion.testParseExceptions Error Message: Stack Trace: java.lang.AssertionError at __randomizedtesting.SeedInfo.seed([DA7CAE0AA172F745:DD2CFC3A077F476A]:0) at org.junit.Assert.fail(Assert.java:92) at org.junit.Assert.fail(Assert.java:100) at org.apache.lucene.util.TestVersion.testParseExceptions(TestVersion.java:201) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at java.lang.Thread.run(Thread.java:745) Build Log: [...truncated 1224 lines...] [junit4] Suite: org.apache.lucene.util.TestVersion [junit4] 2 NOTE: reproduce with: ant test -Dtestcase=TestVersion -Dtests.method=testParseExceptions -Dtests.seed=DA7CAE0AA172F745 -Dtests.slow=true -Dtests.locale=ga -Dtests.timezone=Etc/GMT-7 -Dtests.file.encoding=UTF-8 [junit4] FAILURE 0.04s J1 | TestVersion.testParseExceptions [junit4] Throwable #1: java.lang.AssertionError [junit4]at __randomizedtesting.SeedInfo.seed([DA7CAE0AA172F745:DD2CFC3A077F476A]:0) [junit4]at
[jira] [Commented] (SOLR-6115) Cleanup enum/string action types in Overseer, OverseerCollectionProcessor and CollectionHandler
[ https://issues.apache.org/jira/browse/SOLR-6115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14138981#comment-14138981 ] Erick Erickson commented on SOLR-6115: -- And thank _you_. I just ran into all this again yesterday; I had stuff scattered all over the place for some new functionality and thought that makes no sense. So I fixed up the bits that I had added, probably should have waited a day ;) It'll be cool to have all this straightened out! I'm sure there were places all over... Cleanup enum/string action types in Overseer, OverseerCollectionProcessor and CollectionHandler --- Key: SOLR-6115 URL: https://issues.apache.org/jira/browse/SOLR-6115 Project: Solr Issue Type: Task Components: SolrCloud Reporter: Shalin Shekhar Mangar Assignee: Shalin Shekhar Mangar Priority: Minor Fix For: 5.0, 6.0 Attachments: SOLR-6115-branch_4x.patch, SOLR-6115.patch The enum/string handling for actions in Overseer and OCP is a mess. We should fix it. From: https://issues.apache.org/jira/browse/SOLR-5466?focusedCommentId=13918059&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13918059 {quote} I started to untangle the fact that we have all the strings in OverseerCollectionProcessor, but also have a nice CollectionAction enum. And the commands are intermingled with parameters, it all seems rather confusing. Does it make sense to use the enum rather than the strings? Or somehow associate the two? Probably something for another JIRA though... {quote}
[jira] [Commented] (LUCENE-5952) Give Version parsing exceptions more descriptive error messages
[ https://issues.apache.org/jira/browse/LUCENE-5952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14138987#comment-14138987 ] ASF subversion and git services commented on LUCENE-5952: - Commit 1625990 from [~thetaphi] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1625990 ] LUCENE-5944: Remove useless test (will be fixed soon by LUCENE-5952) Give Version parsing exceptions more descriptive error messages --- Key: LUCENE-5952 URL: https://issues.apache.org/jira/browse/LUCENE-5952 Project: Lucene - Core Issue Type: Bug Affects Versions: 4.10 Reporter: Michael McCandless Priority: Blocker Fix For: 4.10.1, 5.0, 6.0 Attachments: LUCENE-5952.patch, LUCENE-5952.patch, LUCENE-5952.patch, LUCENE-5952.patch, LUCENE-5952.patch As discussed on the dev list, it's spooky how Version.java tries to fully parse the incoming version string ... and then throw exceptions that lack details about what invalid value it received, which file contained the invalid value, etc. It also seems too low level to be checking versions (e.g. is not future proof for when 4.10 is passed a 5.x index by accident), and seems redundant with the codec headers we already have for checking versions? Should we just go back to lenient parsing? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
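The complaint in LUCENE-5952 is that the parser throws bare exceptions with no hint of what invalid value it received. The kind of descriptive message being asked for costs only a few characters, as this sketch shows (an illustration of the principle, not Lucene's actual Version.java):

```java
// Sketch of descriptive parse errors: always include the offending input
// (and, where relevant, the offending component) in the exception message,
// instead of throwing a bare exception. Not Lucene's actual Version code.
public class VersionParser {
  public static int[] parse(String version) {
    String[] parts = version.split("\\.");
    if (parts.length < 2) {
      throw new IllegalArgumentException(
          "Version is not in form major.minor(.bugfix) (got: \"" + version + "\")");
    }
    int[] out = new int[parts.length];
    for (int i = 0; i < parts.length; i++) {
      try {
        out[i] = Integer.parseInt(parts[i]);
      } catch (NumberFormatException nfe) {
        // Chain the cause and name the bad component and the full input.
        throw new IllegalArgumentException(
            "Failed to parse component \"" + parts[i]
                + "\" of version \"" + version + "\"", nfe);
      }
    }
    return out;
  }
}
```

With messages like these, a failure while reading an index would at least say what string was rejected, even before the higher-level fix of reporting which file contained it.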
[jira] [Commented] (LUCENE-5944) move trunk to 6.x, create branch_5x
[ https://issues.apache.org/jira/browse/LUCENE-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14138986#comment-14138986 ] ASF subversion and git services commented on LUCENE-5944: - Commit 1625990 from [~thetaphi] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1625990 ] LUCENE-5944: Remove useless test (will be fixed soon by LUCENE-5952)
[jira] [Commented] (LUCENE-5944) move trunk to 6.x, create branch_5x
[ https://issues.apache.org/jira/browse/LUCENE-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14138988#comment-14138988 ] ASF subversion and git services commented on LUCENE-5944: - Commit 1625991 from [~thetaphi] in branch 'dev/trunk' [ https://svn.apache.org/r1625991 ] LUCENE-5944: Remove useless test (will be fixed soon by LUCENE-5952)
[jira] [Commented] (LUCENE-5952) Give Version parsing exceptions more descriptive error messages
[ https://issues.apache.org/jira/browse/LUCENE-5952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14138989#comment-14138989 ] ASF subversion and git services commented on LUCENE-5952: - Commit 1625991 from [~thetaphi] in branch 'dev/trunk' [ https://svn.apache.org/r1625991 ] LUCENE-5944: Remove useless test (will be fixed soon by LUCENE-5952)
[JENKINS] Solr-Artifacts-trunk - Build # 2505 - Still Failing
Build: https://builds.apache.org/job/Solr-Artifacts-trunk/2505/ No tests ran. Build Log: [...truncated 36223 lines...] BUILD FAILED /usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-trunk/solr/build.xml:596: The following error occurred while executing this line: /usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-trunk/solr/build.xml:588: The following error occurred while executing this line: /usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-trunk/solr/common-build.xml:440: The following error occurred while executing this line: /usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-trunk/lucene/common-build.xml:1577: The following error occurred while executing this line: /usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-trunk/lucene/common-build.xml:563: Unable to initialize POM pom.xml: Could not find the model file '/usr/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-trunk/lucene/build/poms/solr/contrib/analytics/pom.xml'. for project unknown Total time: 13 minutes 52 seconds Build step 'Invoke Ant' marked build as failure Archiving artifacts Sending artifact delta relative to Solr-Artifacts-trunk #2502 Archived 80 artifacts Archive block size is 32768 Received 2179 blocks and 225886399 bytes Compression is 24.0% Took 2 min 45 sec Publishing Javadoc Email was triggered for: Failure Sending email for trigger: Failure - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
RE: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 95096 - Failure!
Sorry, fixed already. - Uwe Schindler H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de eMail: u...@thetaphi.de -Original Message- From: buil...@flonkings.com [mailto:buil...@flonkings.com] Sent: Thursday, September 18, 2014 4:02 PM To: dev@lucene.apache.org; sim...@apache.org; uschind...@apache.org Subject: [JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 95096 - Failure! [...quoted failure report identical to the notification above...]
[JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 95097 - Still Failing!
Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/95097/ 1 tests failed. FAILED: org.apache.lucene.util.TestVersion.testParseExceptions Error Message: Stack Trace: java.lang.AssertionError at __randomizedtesting.SeedInfo.seed([146ABFB6D4C6A200:133AED8672CB122F]:0) at org.junit.Assert.fail(Assert.java:92) at org.junit.Assert.fail(Assert.java:100) at org.apache.lucene.util.TestVersion.testParseExceptions(TestVersion.java:201) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at java.lang.Thread.run(Thread.java:745) Build Log: [...truncated 1230 lines...] [junit4] Suite: org.apache.lucene.util.TestVersion [junit4] 2 NOTE: reproduce with: ant test -Dtestcase=TestVersion -Dtests.method=testParseExceptions -Dtests.seed=146ABFB6D4C6A200 -Dtests.slow=true -Dtests.locale=es_CL -Dtests.timezone=America/Chihuahua -Dtests.file.encoding=UTF-8 [junit4] FAILURE 0.04s J1 | TestVersion.testParseExceptions [junit4] Throwable #1: java.lang.AssertionError [junit4]at __randomizedtesting.SeedInfo.seed([146ABFB6D4C6A200:133AED8672CB122F]:0) [junit4]at
Re: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1206: POMs out of sync
I wonder if my latest patch got up - I had fixed that and changed some of those silly info log lines that were added to debug… - Mark http://about.me/markrmiller On Sep 12, 2014, at 9:22 PM, Steve Rowe sar...@gmail.com wrote: Forbidden-apis doesn’t like the String(byte[]) ctor in Overseer.java:316, committed by Noble in r1624556 (‘ant precommit’ would have caught this): 311: byte[] data = ZkStateReader.toJSON(e.getValue()); […] 316: log.info("going to create_collection {}", e.getKey(), new String(data)); - [mvn] [INFO] [forbiddenapis:check {execution: check-forbidden-apis}] [mvn] [INFO] Scanning for classes to check... [mvn] [INFO] Reading bundled API signatures: jdk-unsafe [mvn] [INFO] Reading bundled API signatures: jdk-deprecated [mvn] [INFO] Reading bundled API signatures: commons-io-unsafe-2.3 [mvn] [INFO] Reading API signatures: /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/lucene/tools/forbiddenApis/base.txt [mvn] [INFO] Reading API signatures: /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/lucene/tools/forbiddenApis/servlet-api.txt [mvn] [INFO] Loading classes to check... [mvn] [INFO] Scanning for API signatures and dependencies... [mvn] [ERROR] Forbidden method invocation: java.lang.String#&lt;init&gt;(byte[]) [Uses default charset] [mvn] [ERROR] in org.apache.solr.cloud.Overseer$ClusterStateUpdater (Overseer.java:316) [mvn] [ERROR] Scanned 1588 (and 1353 related) class file(s) for forbidden API invocations (in 0.66s), 1 error(s). - On Sep 12, 2014, at 8:59 PM, Apache Jenkins Server jenk...@builds.apache.org wrote: Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1206/ All tests passed Build Log: [...truncated 47112 lines...]
BUILD FAILED /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:514: The following error occurred while executing this line: /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:198: The following error occurred while executing this line: : Java returned: 1 Total time: 63 minutes 14 seconds Build step 'Invoke Ant' marked build as failure Recording test results Email was triggered for: Failure Sending email for trigger: Failure - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
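Steve's note above pins down what forbidden-apis is objecting to: `new String(byte[])` decodes with the JVM's platform-default charset, which varies between machines. The standard fix is to pass an explicit `Charset`. A minimal sketch (the JSON payload here is made up for illustration; this is not the actual Overseer patch):

```java
import java.nio.charset.StandardCharsets;

public class CharsetFix {
    public static void main(String[] args) {
        // Hypothetical stand-in for ZkStateReader.toJSON(...) output.
        byte[] data = "{\"collection\":\"test\"}".getBytes(StandardCharsets.UTF_8);

        // Forbidden: new String(data) — uses the default charset, so the
        // same bytes can decode differently on different platforms.
        // Passing the charset explicitly makes the decode deterministic:
        String json = new String(data, StandardCharsets.UTF_8);
        System.out.println(json);
    }
}
```

The same rule applies in reverse: prefer `someString.getBytes(StandardCharsets.UTF_8)` over the no-argument `getBytes()`.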
[jira] [Commented] (LUCENE-5944) move trunk to 6.x, create branch_5x
[ https://issues.apache.org/jira/browse/LUCENE-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14138998#comment-14138998 ] Tommaso Teofili commented on LUCENE-5944: - bq. I opened this issue 6 days ago and nobody objected. we can't support any more 4.x releases in lucene anymore. Each release is plagued by corruptions in the back compat support. It's just not sustainable. while I agree with the backcompat trouble it's still not acceptable to just move forward without a vote IMHO, even if the issue has been open for 6 days, there must be explicit consensus on things like moving to major releases. It's like saying that since java 7 will go EOL shortly we just move to java 8 without voting. To make it clear I would've voted +1 for the move, just I don't like the way it's been done. move trunk to 6.x, create branch_5x --- Key: LUCENE-5944 URL: https://issues.apache.org/jira/browse/LUCENE-5944 Project: Lucene - Core Issue Type: Improvement Reporter: Robert Muir Fix For: 5.0 In order to actually add real features (as opposed to just spending 24/7 fixing bugs and back compat), I need a trunk that doesn't have the back compat handcuffs. In the meantime, we should rename the current trunk (which is totally tied down in back compat already, without even a single release!) to branch_5x while you guys (I won't be doing any back compat anymore) figure out what you want to do with the back compat policy. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] lucene-solr pull request: Add comme, quand, quant french sto...
GitHub user ebuildy opened a pull request: https://github.com/apache/lucene-solr/pull/95 Add comme, quand, quant french stop words You can merge this pull request into a Git repository by running: $ git pull https://github.com/ebuildy/lucene-solr patch-1 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/lucene-solr/pull/95.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #95 commit b160505dd138125e3b9304a756d0f68798e71ffa Author: eBuildy ebui...@gmail.com Date: 2014-09-18T14:33:54Z Add comme, quand, quant french stop words --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. --- - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-6115) Cleanup enum/string action types in Overseer, OverseerCollectionProcessor and CollectionHandler
[ https://issues.apache.org/jira/browse/SOLR-6115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14139001#comment-14139001 ] Shalin Shekhar Mangar commented on SOLR-6115: - Yeah, I think we can do better than what I did but it's a start. Please feel free to re-factor as you see fit. Cleanup enum/string action types in Overseer, OverseerCollectionProcessor and CollectionHandler --- Key: SOLR-6115 URL: https://issues.apache.org/jira/browse/SOLR-6115 Project: Solr Issue Type: Task Components: SolrCloud Reporter: Shalin Shekhar Mangar Assignee: Shalin Shekhar Mangar Priority: Minor Fix For: 5.0, 6.0 Attachments: SOLR-6115-branch_4x.patch, SOLR-6115.patch The enum/string handling for actions in Overseer and OCP is a mess. We should fix it. From: https://issues.apache.org/jira/browse/SOLR-5466?focusedCommentId=13918059page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13918059 {quote} I started to untangle the fact that we have all the strings in OverseerCollectionProcessor, but also have a nice CollectionAction enum. And the commands are intermingled with parameters, it all seems rather confusing. Does it make sense to use the enum rather than the strings? Or somehow associate the two? Probably something for another JIRA though... {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
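The SOLR-6115 cleanup above is about associating the Overseer/OCP action strings with the existing `CollectionAction` enum instead of comparing raw strings. One common pattern for that round-trip is a presized lookup map keyed by the lowercase name. A hedged sketch (class and method names here are illustrative, not Solr's actual API):

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

public class ActionLookup {
    // Illustrative subset of action types; the real enum has many more.
    public enum CollectionAction {
        CREATE, DELETE, RELOAD;

        // Built once at class load, so lookups never re-parse strings.
        private static final Map<String, CollectionAction> BY_NAME = new HashMap<>();
        static {
            for (CollectionAction a : values()) {
                BY_NAME.put(a.name().toLowerCase(Locale.ROOT), a);
            }
        }

        // Case-insensitive lookup; returns null for unknown actions
        // instead of throwing like Enum.valueOf would.
        public static CollectionAction get(String name) {
            return name == null ? null : BY_NAME.get(name.toLowerCase(Locale.ROOT));
        }

        // The wire/legacy form, for code that still needs the string.
        public String toLower() {
            return name().toLowerCase(Locale.ROOT);
        }
    }

    public static void main(String[] args) {
        System.out.println(CollectionAction.get("create")); // CREATE
    }
}
```

With this in place, dispatch code can switch on the enum and keep the string form confined to serialization boundaries.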
[jira] [Created] (SOLR-6534) Multipolygon query problem with datelineRule=ccwRect
Jon Hines created SOLR-6534: --- Summary: Multipolygon query problem with datelineRule=ccwRect Key: SOLR-6534 URL: https://issues.apache.org/jira/browse/SOLR-6534 Project: Solr Issue Type: Bug Components: spatial Affects Versions: 4.9 Environment: Windows 7, Oracle JDK 1.7.0_45 Reporter: Jon Hines We are currently upgrading from Solr 4.1 to 4.9 and have observed some odd behavior with multipolygon queries now. It is difficult to describe what is happening so I took a screenshot with the documents and query area plotted on a map. You can see it here: [http://imgur.com/iBpYLMh] The blue areas represent the multipolygon and the purple areas represent the document footprints. The query being used is as follows: {quote} geo:Intersects(MULTIPOLYGON(((-3 2,4 2,4 8,-3 8,-3 2)),((-3 -11,4 -11,4 -4,-3 -4,-3 -11 {quote} This query returns all results when it should be returning only 8. If I run two separate queries with each individual polygon, I get 4 hits each as expected. I've narrowed this down to a problem with using 'datelineRule=ccwRect'. If I remove this setting, the query returns with the expected results. Unfortunately, this setting is required for our software though, since handling large polygon queries (spanning 180 degrees) are a requirement. Here are the relevant schema details: {quote} field name=geo type=location_rpt indexed=true stored=false/ fieldType name=location_rpt class=solr.SpatialRecursivePrefixTreeFieldType spatialContextFactory=com.spatial4j.core.context.jts.JtsSpatialContextFactory geo=true distErrPct=0.1 maxDistErr=0.09 units=degrees datelineRule=ccwRect normWrapLongitude=true autoIndex=true/ {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-trunk-Linux (64bit/ibm-j9-jdk7) - Build # 11281 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11281/ Java: 64bit/ibm-j9-jdk7 -Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;} 1 tests failed. REGRESSION: org.apache.lucene.util.TestVersion.testParseExceptions Error Message: Stack Trace: java.lang.AssertionError at __randomizedtesting.SeedInfo.seed([9253AB30E3046055:9503F9004509D07A]:0) at org.junit.Assert.fail(Assert.java:92) at org.junit.Assert.fail(Assert.java:100) at org.apache.lucene.util.TestVersion.testParseExceptions(TestVersion.java:201) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55) at java.lang.reflect.Method.invoke(Method.java:619) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at java.lang.Thread.run(Thread.java:853) Build Log: [...truncated 1543 lines...] [junit4] Suite: org.apache.lucene.util.TestVersion [junit4] 2 NOTE: reproduce with: ant test -Dtestcase=TestVersion -Dtests.method=testParseExceptions -Dtests.seed=9253AB30E3046055 -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=nb_NO -Dtests.timezone=Africa/Ndjamena -Dtests.file.encoding=ISO-8859-1 [junit4] FAILURE 0.02s J0 | TestVersion.testParseExceptions [junit4] Throwable #1: java.lang.AssertionError [junit4]at
[jira] [Commented] (LUCENE-5960) Avoid grow of Set in AnalyzingSuggester.topoSortStates(Automaton)
[ https://issues.apache.org/jira/browse/LUCENE-5960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14139007#comment-14139007 ] ASF subversion and git services commented on LUCENE-5960: - Commit 1625998 from [~mikemccand] in branch 'dev/trunk' [ https://svn.apache.org/r1625998 ] LUCENE-5960: move CHANGES entry under 5.0 Avoid grow of Set in AnalyzingSuggester.topoSortStates(Automaton) - Key: LUCENE-5960 URL: https://issues.apache.org/jira/browse/LUCENE-5960 Project: Lucene - Core Issue Type: Improvement Components: core/other Affects Versions: 4.10 Reporter: Markus Heiden Assignee: Michael McCandless Priority: Minor Labels: patch, performance Fix For: 5.0, 6.0 Attachments: AnalyzingSuggester.diff Converted visited to a BitSet and sized it correctly in AnalyzingSuggester.topoSortStates(Automaton). This avoids dynamic resizing of the set. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-5960) Avoid grow of Set in AnalyzingSuggester.topoSortStates(Automaton)
[ https://issues.apache.org/jira/browse/LUCENE-5960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14139009#comment-14139009 ] ASF subversion and git services commented on LUCENE-5960: - Commit 1625999 from [~mikemccand] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1625999 ] LUCENE-5960: move CHANGES entry under 5.0 Avoid grow of Set in AnalyzingSuggester.topoSortStates(Automaton) - Key: LUCENE-5960 URL: https://issues.apache.org/jira/browse/LUCENE-5960 Project: Lucene - Core Issue Type: Improvement Components: core/other Affects Versions: 4.10 Reporter: Markus Heiden Assignee: Michael McCandless Priority: Minor Labels: patch, performance Fix For: 5.0, 6.0 Attachments: AnalyzingSuggester.diff Converted visited to a BitSet and sized it correctly in AnalyzingSuggester.topoSortStates(Automaton). This avoids dynamic resizing of the set. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
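The LUCENE-5960 change above swaps a dynamically growing `Set` for a `BitSet` sized to the automaton's state count. Since states are dense ints in `[0, numStates)`, a presized `BitSet` gives the same "visited" bookkeeping without boxing or resizing. A small sketch of the idea on a toy transition graph (names are illustrative, not Lucene's actual code):

```java
import java.util.ArrayDeque;
import java.util.BitSet;

public class TopoVisit {
    // Count states reachable from `start`; transitions[s] lists the
    // destination states of state s.
    public static int countReachable(int[][] transitions, int start) {
        BitSet visited = new BitSet(transitions.length); // sized once, up front
        ArrayDeque<Integer> stack = new ArrayDeque<>();
        stack.push(start);
        visited.set(start);
        int count = 0;
        while (!stack.isEmpty()) {
            int state = stack.pop();
            count++;
            for (int dest : transitions[state]) {
                if (!visited.get(dest)) { // O(1) bit test, no autoboxing
                    visited.set(dest);
                    stack.push(dest);
                }
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int[][] t = { {1, 2}, {2}, {} };
        System.out.println(countReachable(t, 0)); // 3
    }
}
```

The win is exactly what the issue describes: the set's backing storage is allocated once at the correct size, instead of rehashing as states are added.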
[jira] [Commented] (LUCENE-5959) Optimized memory management in Automaton.Builder.finish()
[ https://issues.apache.org/jira/browse/LUCENE-5959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14139012#comment-14139012 ] ASF subversion and git services commented on LUCENE-5959: - Commit 1626002 from [~mikemccand] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1626002 ] LUCENE-5959: add CHANGES entry Optimized memory management in Automaton.Builder.finish() - Key: LUCENE-5959 URL: https://issues.apache.org/jira/browse/LUCENE-5959 Project: Lucene - Core Issue Type: Improvement Components: core/other Affects Versions: 4.10 Reporter: Markus Heiden Priority: Minor Labels: patch, performance Fix For: 5.0, 6.0 Attachments: Automaton.diff, finish.patch Reworked Automaton.Builder.finish() to not allocate memory stepwise. Added growTransitions(int numTransitions) to be able to resize the transitions array just once. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-5959) Optimized memory management in Automaton.Builder.finish()
[ https://issues.apache.org/jira/browse/LUCENE-5959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14139011#comment-14139011 ] ASF subversion and git services commented on LUCENE-5959: - Commit 1626001 from [~mikemccand] in branch 'dev/trunk' [ https://svn.apache.org/r1626001 ] LUCENE-5959: add CHANGES entry Optimized memory management in Automaton.Builder.finish() - Key: LUCENE-5959 URL: https://issues.apache.org/jira/browse/LUCENE-5959 Project: Lucene - Core Issue Type: Improvement Components: core/other Affects Versions: 4.10 Reporter: Markus Heiden Priority: Minor Labels: patch, performance Fix For: 5.0, 6.0 Attachments: Automaton.diff, finish.patch Reworked Automaton.Builder.finish() to not allocate memory stepwise. Added growTransitions(int numTransitions) to be able to resize the transitions array just once. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (LUCENE-5959) Optimized memory management in Automaton.Builder.finish()
[ https://issues.apache.org/jira/browse/LUCENE-5959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless resolved LUCENE-5959. Resolution: Fixed Fix Version/s: 6.0 5.0 Assignee: Michael McCandless Thanks Markus! Optimized memory management in Automaton.Builder.finish() - Key: LUCENE-5959 URL: https://issues.apache.org/jira/browse/LUCENE-5959 Project: Lucene - Core Issue Type: Improvement Components: core/other Affects Versions: 4.10 Reporter: Markus Heiden Assignee: Michael McCandless Priority: Minor Labels: patch, performance Fix For: 5.0, 6.0 Attachments: Automaton.diff, finish.patch Reworked Automaton.Builder.finish() to not allocate memory stepwise. Added growTransitions(int numTransitions) to be able to resize the transitions array just once. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
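The LUCENE-5959 patch above replaces stepwise array growth with a single resize to the known final size. The general pattern: compute (or receive) the target capacity first, resize once, then bulk-copy. A hedged sketch of that idea (class and field names are illustrative, not Lucene's actual `Automaton.Builder`):

```java
import java.util.Arrays;

public class GrowOnce {
    private int[] transitions = new int[0];
    private int next = 0; // number of slots in use

    // Grow to hold at least numTransitions entries in one allocation,
    // rather than doubling repeatedly inside a copy loop.
    private void growTransitions(int numTransitions) {
        if (transitions.length < numTransitions) {
            transitions = Arrays.copyOf(transitions, numTransitions);
        }
    }

    public void addAll(int[] src) {
        growTransitions(next + src.length);                       // one resize...
        System.arraycopy(src, 0, transitions, next, src.length);  // ...then one bulk copy
        next += src.length;
    }

    public int size() {
        return next;
    }

    public static void main(String[] args) {
        GrowOnce g = new GrowOnce();
        g.addAll(new int[] {1, 2, 3});
        g.addAll(new int[] {4, 5});
        System.out.println(g.size()); // 5
    }
}
```

Compared with amortized doubling, growing to the exact final size avoids both the intermediate allocations and the extra copies of already-written elements, which is precisely what the patch targets in `finish()`.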
[jira] [Commented] (LUCENE-5911) Make MemoryIndex thread-safe for queries
[ https://issues.apache.org/jira/browse/LUCENE-5911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14139015#comment-14139015 ] Simon Willnauer commented on LUCENE-5911: - I don't see how you can make this actually faster though... I don't think you can gain anything here really. I'd maybe add a method that allows you to sort all of them and use that before you pass the MemoryIndex to the search threads? Make MemoryIndex thread-safe for queries Key: LUCENE-5911 URL: https://issues.apache.org/jira/browse/LUCENE-5911 Project: Lucene - Core Issue Type: Improvement Reporter: Alan Woodward Priority: Minor Attachments: LUCENE-5911.patch We want to be able to run multiple queries at once over a MemoryIndex in luwak (see https://github.com/flaxsearch/luwak/commit/49a8fba5764020c2f0e4dc29d80d93abb0231191), but this isn't possible with the current implementation. However, looking at the code, it seems that it would be relatively simple to make MemoryIndex thread-safe for reads/queries. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-5944) move trunk to 6.x, create branch_5x
[ https://issues.apache.org/jira/browse/LUCENE-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14139018#comment-14139018 ] Robert Muir commented on LUCENE-5944: - I didn't propose any release yet :) move trunk to 6.x, create branch_5x --- Key: LUCENE-5944 URL: https://issues.apache.org/jira/browse/LUCENE-5944 Project: Lucene - Core Issue Type: Improvement Reporter: Robert Muir Fix For: 5.0 In order to actually add real features (as opposed to just spending 24/7 fixing bugs and back compat), I need a trunk that doesn't have the back compat handcuffs. In the meantime, we should rename the current trunk (which is totally tied down in back compat already, without even a single release!) to branch_5x while you guys (I won't be doing any back compat anymore) figure out what you want to do with the back compat policy. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/ibm-j9-jdk7) - Build # 11281 - Still Failing!
Uwe fixed this one ... Mike McCandless http://blog.mikemccandless.com On Thu, Sep 18, 2014 at 10:41 AM, Policeman Jenkins Server jenk...@thetaphi.de wrote: Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11281/ Java: 64bit/ibm-j9-jdk7 -Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;} 1 tests failed. REGRESSION: org.apache.lucene.util.TestVersion.testParseExceptions Error Message: Stack Trace: java.lang.AssertionError at __randomizedtesting.SeedInfo.seed([9253AB30E3046055:9503F9004509D07A]:0) at org.junit.Assert.fail(Assert.java:92) at org.junit.Assert.fail(Assert.java:100) at org.apache.lucene.util.TestVersion.testParseExceptions(TestVersion.java:201) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55) at java.lang.reflect.Method.invoke(Method.java:619) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at java.lang.Thread.run(Thread.java:853) Build Log: [...truncated 1543 lines...] [junit4] Suite: org.apache.lucene.util.TestVersion [junit4] 2 NOTE: reproduce with: ant test -Dtestcase=TestVersion -Dtests.method=testParseExceptions -Dtests.seed=9253AB30E3046055 -Dtests.multiplier=3 -Dtests.slow=true
[jira] [Updated] (SOLR-6534) Multipolygon query problem with datelineRule=ccwRect
[ https://issues.apache.org/jira/browse/SOLR-6534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jon H updated SOLR-6534: Description: We are currently upgrading from Solr 4.1 to 4.9 and have observed some odd behavior with multipolygon queries now. It is difficult to describe what is happening so I took a screenshot with the documents and query area plotted on a map. You can see it here: [http://imgur.com/iBpYLMh] The blue areas represent the multipolygon and the purple areas represent the document footprints. The query being used is as follows: {quote} geo:Intersects(MULTIPOLYGON(((-3 2,4 2,4 8,-3 8,-3 2)),((-3 -11,4 -11,4 -4,-3 -4,-3 -11 {quote} This query returns all results when it should be returning only 8. If I run two separate queries with each individual polygon, I get 4 hits each as expected. I've narrowed this down to a problem with using 'datelineRule=ccwRect'. If I remove this setting, the query returns with the expected results. Unfortunately, this setting is required for our software though, since handling large polygon queries (spanning 180 degrees) are a requirement. Here are the relevant schema details: {quote} field name=geo type=location_rpt indexed=true stored=false/ fieldType name=location_rpt class=solr.SpatialRecursivePrefixTreeFieldType spatialContextFactory=com.spatial4j.core.context.jts.JtsSpatialContextFactory geo=true distErrPct=0.1 maxDistErr=0.09 units=degrees datelineRule=ccwRect normWrapLongitude=true autoIndex=true/ {quote} was: We are currently upgrading from Solr 4.1 to 4.9 and have observed some odd behavior with multipolygon queries now. It is difficult to describe what is happening so I took a screenshot with the documents and query area plotted on a map. You can see it here: [http://imgur.com/iBpYLMh] The blue areas represent the multipolygon and the purple areas represent the document footprints. 
The query being used is as follows: {quote} geo:Intersects(MULTIPOLYGON(((-3 2,4 2,4 8,-3 8,-3 2)),((-3 -11,4 -11,4 -4,-3 -4,-3 -11 {quote} This query returns all results when it should be returning only 8. If I run two separate queries with each individual polygon, I get 4 hits each as expected. I've narrowed this down to a problem with using 'datelineRule=ccwRect'. If I remove this setting, the query returns with the expected results. Unfortunately, this setting is required for our software though, since handling large polygon queries (spanning 180 degrees) are a requirement. Here are the relevant schema details: {quote} field name=geo type=location_rpt indexed=true stored=false/ fieldType name=location_rpt class=solr.SpatialRecursivePrefixTreeFieldType spatialContextFactory=com.spatial4j.core.context.jts.JtsSpatialContextFactory geo=true distErrPct=0.1 maxDistErr=0.09 units=degrees datelineRule=ccwRect normWrapLongitude=true autoIndex=true/ {quote} Multipolygon query problem with datelineRule=ccwRect Key: SOLR-6534 URL: https://issues.apache.org/jira/browse/SOLR-6534 Project: Solr Issue Type: Bug Components: spatial Affects Versions: 4.9 Environment: Windows 7, Oracle JDK 1.7.0_45 Reporter: Jon H We are currently upgrading from Solr 4.1 to 4.9 and have observed some odd behavior with multipolygon queries now. It is difficult to describe what is happening so I took a screenshot with the documents and query area plotted on a map. You can see it here: [http://imgur.com/iBpYLMh] The blue areas represent the multipolygon and the purple areas represent the document footprints. The query being used is as follows: {quote} geo:Intersects(MULTIPOLYGON(((-3 2,4 2,4 8,-3 8,-3 2)),((-3 -11,4 -11,4 -4,-3 -4,-3 -11 {quote} This query returns all results when it should be returning only 8. If I run two separate queries with each individual polygon, I get 4 hits each as expected. I've narrowed this down to a problem with using 'datelineRule=ccwRect'. 
If I remove this setting, the query returns with the expected results. Unfortunately, this setting is required for our software though, since handling large polygon queries (spanning 180 degrees) are a requirement. Here are the relevant schema details: {quote} field name=geo type=location_rpt indexed=true stored=false/ fieldType name=location_rpt class=solr.SpatialRecursivePrefixTreeFieldType spatialContextFactory=com.spatial4j.core.context.jts.JtsSpatialContextFactory geo=true distErrPct=0.1 maxDistErr=0.09 units=degrees datelineRule=ccwRect normWrapLongitude=true autoIndex=true/ {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org