[JENKINS] Lucene-Solr-tests-only-trunk - Build # 8686 - Failure

2011-06-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-tests-only-trunk/8686/

1 tests failed.
REGRESSION:  org.apache.solr.client.solrj.TestLBHttpSolrServer.testSimple

Error Message:
expected:<3> but was:<2>

Stack Trace:
junit.framework.AssertionFailedError: expected:<3> but was:<2>
  at org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1362)
  at org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1280)
  at org.apache.solr.client.solrj.TestLBHttpSolrServer.testSimple(TestLBHttpSolrServer.java:127)




Build Log (for compile errors):
[...truncated 8454 lines...]



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-3180) Can't delete a document using deleteDocument(int docID) if using IndexWriter AND IndexReader

2011-06-08 Thread Danny Lade (JIRA)
Can't delete a document using deleteDocument(int docID) if using IndexWriter 
AND IndexReader


 Key: LUCENE-3180
 URL: https://issues.apache.org/jira/browse/LUCENE-3180
 Project: Lucene - Java
  Issue Type: Bug
  Components: core/index
Affects Versions: 3.2
 Environment: Windows 
Reporter: Danny Lade


It is currently impossible to delete a document with 
reader.deleteDocument(scoreDoc.doc).

using:
  Directory directory = FSDirectory.open(new File("lucene"));
  
  writer = new IndexWriter(directory, config);
  reader = IndexReader.open(writer, true);

results in:
  Exception in thread "main" java.lang.UnsupportedOperationException: This IndexReader cannot make any changes to the index (it was opened with readOnly = true)
  at org.apache.lucene.index.ReadOnlySegmentReader.noWrite(ReadOnlySegmentReader.java:23)
  at org.apache.lucene.index.ReadOnlyDirectoryReader.acquireWriteLock(ReadOnlyDirectoryReader.java:43)
  at org.apache.lucene.index.IndexReader.deleteDocument(IndexReader.java:1067)
  at de.morpheum.morphy.ImpossibleLuceneCode.main(ImpossibleLuceneCode.java:60)

and using:
  Directory directory = FSDirectory.open(new File("lucene"));
  
  writer = new IndexWriter(directory, config);
  reader = IndexReader.open(directory, false);
  
results in:
  org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@S:\Java\Morpheum\lucene\write.lock
  at org.apache.lucene.store.Lock.obtain(Lock.java:84)
  at org.apache.lucene.index.DirectoryReader.acquireWriteLock(DirectoryReader.java:765)
  at org.apache.lucene.index.IndexReader.deleteDocument(IndexReader.java:1067)
  at de.morpheum.morphy.ImpossibleLuceneCode.main(ImpossibleLuceneCode.java:69)

A workaround is:
  for (ScoreDoc scoreDoc : hits) {
    Document document = reader.document(scoreDoc.doc);
    writer.addDocument(document);
  }

  writer.deleteDocuments(query);

But this executes the query twice and may produce inconsistent data (the newly 
re-added documents may be deleted as well).
On the other hand, I can't call writer.deleteDocuments(query) first, because 
I need the documents for some updates.
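For reference, the pattern usually suggested for this update-then-delete situation is to replace each hit atomically with IndexWriter.updateDocument(Term, Document), so the query runs only once and the re-added documents are never matched a second time. This is a sketch under assumptions: it presumes every document carries a unique stored "id" field and reuses the writer/query setup from the report; it is not code from the attached ImpossibleLuceneCode.java.

```java
// Sketch, assuming a unique stored "id" field on every document (an assumption;
// the reporter's schema is not shown) plus the writer and query from above.
IndexSearcher searcher = new IndexSearcher(IndexReader.open(writer, true));
for (ScoreDoc scoreDoc : searcher.search(query, 100).scoreDocs) {
    Document document = searcher.doc(scoreDoc.doc);
    // ... apply the needed changes to "document" here ...
    // updateDocument atomically deletes the old version (matched by the term)
    // and adds the new one, so the query never has to be executed twice.
    writer.updateDocument(new Term("id", document.get("id")), document);
}
writer.commit();
```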


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (LUCENE-3180) Can't delete a document using deleteDocument(int docID) if using IndexWriter AND IndexReader

2011-06-08 Thread Danny Lade (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Lade updated LUCENE-3180:
---

Attachment: ImpossibleLuceneCode.java

example code





[jira] [Updated] (LUCENE-3180) Can't delete a document using deleteDocument(int docID) if using IndexWriter AND IndexReader

2011-06-08 Thread Danny Lade (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Lade updated LUCENE-3180:
---

Attachment: ImpossibleLuceneCode.java





[jira] [Updated] (LUCENE-3180) Can't delete a document using deleteDocument(int docID) if using IndexWriter AND IndexReader

2011-06-08 Thread Danny Lade (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Lade updated LUCENE-3180:
---

Description: 
It is currently impossible to delete a document with 
reader.deleteDocument(scoreDoc.doc).

using:
{code:java}
  Directory directory = FSDirectory.open(new File("lucene"));
  
  writer = new IndexWriter(directory, config);
  reader = IndexReader.open(writer, true);
{code}

results in:
{code:java}
  Exception in thread "main" java.lang.UnsupportedOperationException: This IndexReader cannot make any changes to the index (it was opened with readOnly = true)
  at org.apache.lucene.index.ReadOnlySegmentReader.noWrite(ReadOnlySegmentReader.java:23)
  at org.apache.lucene.index.ReadOnlyDirectoryReader.acquireWriteLock(ReadOnlyDirectoryReader.java:43)
  at org.apache.lucene.index.IndexReader.deleteDocument(IndexReader.java:1067)
  at de.morpheum.morphy.ImpossibleLuceneCode.main(ImpossibleLuceneCode.java:60)
{code}

and using:
{code:java}
  Directory directory = FSDirectory.open(new File("lucene"));
  
  writer = new IndexWriter(directory, config);
  reader = IndexReader.open(directory, false);
{code}
  
results in:
{code:java}
  org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@S:\Java\Morpheum\lucene\write.lock
  at org.apache.lucene.store.Lock.obtain(Lock.java:84)
  at org.apache.lucene.index.DirectoryReader.acquireWriteLock(DirectoryReader.java:765)
  at org.apache.lucene.index.IndexReader.deleteDocument(IndexReader.java:1067)
  at de.morpheum.morphy.ImpossibleLuceneCode.main(ImpossibleLuceneCode.java:69)
{code}

A workaround is:
{code:java}
  for (ScoreDoc scoreDoc : hits) {
    Document document = reader.document(scoreDoc.doc);
    writer.addDocument(document);
  }

  writer.deleteDocuments(query);
{code}

But this executes the query twice and may produce inconsistent data (the newly 
re-added documents may be deleted as well).
On the other hand, I can't call writer.deleteDocuments(query) first, because 
I need the documents for some updates.



[JENKINS] Lucene-Solr-tests-only-trunk - Build # 8687 - Still Failing

2011-06-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-tests-only-trunk/8687/

1 tests failed.
REGRESSION:  org.apache.lucene.index.TestAddIndexes.testAddIndexesWithRollback

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError:
  at org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1362)
  at org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1280)
  at org.apache.lucene.index.TestAddIndexes.testAddIndexesWithRollback(TestAddIndexes.java:932)




Build Log (for compile errors):
[...truncated 3254 lines...]






[jira] [Updated] (LUCENE-3180) Can't delete a document using deleteDocument(int docID) if using IndexWriter AND IndexReader

2011-06-08 Thread Danny Lade (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Lade updated LUCENE-3180:
---

Description: 
It is currently impossible to delete a document with 
reader.deleteDocument(scoreDoc.doc).

using:
{code:java}
writer = new IndexWriter(directory, config);
reader = IndexReader.open(writer, true);
{code}

results in:
{code:java}
  Exception in thread "main" java.lang.UnsupportedOperationException: This IndexReader cannot make any changes to the index (it was opened with readOnly = true)
  at org.apache.lucene.index.ReadOnlySegmentReader.noWrite(ReadOnlySegmentReader.java:23)
  at org.apache.lucene.index.ReadOnlyDirectoryReader.acquireWriteLock(ReadOnlyDirectoryReader.java:43)
  at org.apache.lucene.index.IndexReader.deleteDocument(IndexReader.java:1067)
  at de.morpheum.morphy.ImpossibleLuceneCode.main(ImpossibleLuceneCode.java:60)
{code}

and using:
{code:java}
writer = new IndexWriter(directory, config);
reader = IndexReader.open(directory, false);
{code}
  
results in:
{code:java}
  org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@S:\Java\Morpheum\lucene\write.lock
  at org.apache.lucene.store.Lock.obtain(Lock.java:84)
  at org.apache.lucene.index.DirectoryReader.acquireWriteLock(DirectoryReader.java:765)
  at org.apache.lucene.index.IndexReader.deleteDocument(IndexReader.java:1067)
  at de.morpheum.morphy.ImpossibleLuceneCode.main(ImpossibleLuceneCode.java:69)
{code}






[jira] [Commented] (LUCENE-3180) Can't delete a document using deleteDocument(int docID) if using IndexWriter AND IndexReader

2011-06-08 Thread Danny Lade (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13045817#comment-13045817
 ] 

Danny Lade commented on LUCENE-3180:


A workaround is:
{code:java}
  for (ScoreDoc scoreDoc : hits) {
    Document document = reader.document(scoreDoc.doc);
    writer.addDocument(document);
  }

  writer.deleteDocuments(query);
{code}

But this executes the query twice and may produce inconsistent data (the newly 
re-added documents may be deleted as well).
On the other hand, I can't call writer.deleteDocuments(query) first, because 
I need the documents for some updates.






[jira] [Updated] (LUCENE-3180) Can't delete a document using deleteDocument(int docID) if using IndexWriter AND IndexReader

2011-06-08 Thread Danny Lade (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Lade updated LUCENE-3180:
---

Comment: was deleted

(was: example code)





[jira] [Commented] (LUCENE-91) IndexWriter ctor does not release lock on exception

2011-06-08 Thread Adam Ahmed (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-91?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13045819#comment-13045819
 ] 

Adam Ahmed commented on LUCENE-91:
--

A colleague just noticed that the timeout isn't native. That is, by default 
Lucene will attempt to create the lock once, but won't actually time out, 
because it uses tryLock(). So the problem is not Lucene (it is guaranteed 
to give you at least one fair shot, even on a slow FS), but that the lock is 
already held.

So this goes back to being decidedly not-a-bug.  Apologies.
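The comment above hinges on tryLock() semantics: a single non-blocking attempt, never a wait. A minimal self-contained illustration using plain java.nio (this is a sketch of the semantics, not Lucene's NativeFSLock code):

```java
import java.io.File;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.StandardOpenOption;

public class TryLockSketch {
    // One non-blocking lock attempt: tryLock() either acquires the lock
    // immediately or returns null -- it never waits, so no timeout elapses.
    public static boolean tryOnce(File lockFile) throws Exception {
        try (FileChannel channel = FileChannel.open(lockFile.toPath(),
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            FileLock lock = channel.tryLock();
            if (lock == null) {
                return false; // already held by another process
            }
            lock.release();
            return true;
        }
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("write", ".lock");
        f.deleteOnExit();
        // Uncontended, so the single attempt succeeds.
        System.out.println(tryOnce(f));
    }
}
```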

 IndexWriter ctor does not release lock on exception
 ---

 Key: LUCENE-91
 URL: https://issues.apache.org/jira/browse/LUCENE-91
 Project: Lucene - Java
  Issue Type: Bug
  Components: core/index
Affects Versions: 1.2
 Environment: Operating System: All
 Platform: All
Reporter: Alex Staubo
Assignee: Lucene Developers

 If IndexWriter construction fails with an exception, the write.lock lock is 
 not released.
 For example, this happens if one tries to open an IndexWriter on an 
 FSDirectory which does not contain a Lucene index. FileNotFoundException will 
 be thrown by org.apache.lucene.store.FSInputStream, after which the write 
 lock will remain in the directory, and nobody can open the index.
 I have been using this pattern -- doing IndexWriter(..., false), catching 
 FileNotFoundException and doing IndexWriter(..., true) -- in my code to 
 initialize the index on demand, because the app never knows if the index 
 already exists.




[JENKINS] Lucene-Solr-tests-only-trunk - Build # 8669 - Still Failing

2011-06-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-tests-only-trunk/8669/

1 tests failed.
FAILED:  org.apache.lucene.index.TestAddIndexes.testAddIndexesWithRollback

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError:
  at org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1362)
  at org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1280)
  at org.apache.lucene.index.TestAddIndexes.testAddIndexesWithRollback(TestAddIndexes.java:932)




Build Log (for compile errors):
[...truncated 3240 lines...]






[jira] [Updated] (LUCENE-2793) Directory createOutput and openInput should take an IOContext

2011-06-08 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated LUCENE-2793:
--

Attachment: LUCENE-2793.patch

I just put up the IOContext class. If this is looking good then I'll make the 
necessary changes to the other classes.

 Directory createOutput and openInput should take an IOContext
 -

 Key: LUCENE-2793
 URL: https://issues.apache.org/jira/browse/LUCENE-2793
 Project: Lucene - Java
  Issue Type: Improvement
  Components: core/store
Reporter: Michael McCandless
Assignee: Varun Thacker
  Labels: gsoc2011, lucene-gsoc-11, mentor
 Attachments: LUCENE-2793.patch, LUCENE-2793.patch, LUCENE-2793.patch, 
 LUCENE-2793.patch, LUCENE-2793.patch, LUCENE-2793.patch, LUCENE-2793.patch


 Today for merging we pass down a larger readBufferSize than for searching 
 because we get better performance.
 I think we should generalize this to a class (IOContext), which would hold 
 the buffer size, but then could hold other flags like DIRECT (bypass OS's 
 buffer cache), SEQUENTIAL, etc.
 Then, we can make the DirectIOLinuxDirectory fully usable because we would 
 only use DIRECT/SEQUENTIAL during merging.
 This will require fixing how IW pools readers, so that a reader opened for 
 merging is not then used for searching, and vice/versa.  Really, it's only 
 all the open file handles that need to be different -- we could in theory 
 share del docs, norms, etc, if that were somehow possible.
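The quoted proposal could look roughly like this. Everything here is a hypothetical sketch: the class shape, field names, hints, and buffer sizes are assumptions, not the actual patch attached to LUCENE-2793.

```java
// Hypothetical sketch of the IOContext idea described above: one object that
// carries the read buffer size plus flags like DIRECT and SEQUENTIAL, so a
// merge can request different I/O behavior than a search.
public class IOContextSketch {
    enum UsageHint { READ, MERGE, FLUSH }  // assumed set of hints

    static final class IOContext {
        final UsageHint hint;
        final int readBufferSize;
        final boolean direct;      // bypass the OS buffer cache
        final boolean sequential;  // sequential-access hint

        IOContext(UsageHint hint, int readBufferSize, boolean direct, boolean sequential) {
            this.hint = hint;
            this.readBufferSize = readBufferSize;
            this.direct = direct;
            this.sequential = sequential;
        }
    }

    // Merging reads large sequential runs, so it asks for a bigger buffer and
    // direct, sequential I/O; searching keeps a small default buffer.
    static IOContext forMerge()  { return new IOContext(UsageHint.MERGE, 64 * 1024, true, true); }
    static IOContext forSearch() { return new IOContext(UsageHint.READ, 1024, false, false); }

    public static void main(String[] args) {
        System.out.println(forMerge().readBufferSize > forSearch().readBufferSize);
    }
}
```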




[jira] [Updated] (LUCENE-2868) It should be easy to make use of TermState; rewritten queries should be shared automatically

2011-06-08 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright updated LUCENE-2868:


Attachment: LUCENE-2868.patch

Updated for latest trunk

 It should be easy to make use of TermState; rewritten queries should be 
 shared automatically
 

 Key: LUCENE-2868
 URL: https://issues.apache.org/jira/browse/LUCENE-2868
 Project: Lucene - Java
  Issue Type: Improvement
  Components: core/query/scoring
Reporter: Karl Wright
Assignee: Simon Willnauer
 Attachments: LUCENE-2868.patch, LUCENE-2868.patch, LUCENE-2868.patch, 
 lucene-2868.patch, lucene-2868.patch, query-rewriter.patch


 When you have the same query in a query hierarchy multiple times, tremendous 
 savings can now be had if the user knows enough to share the rewritten 
 queries in the hierarchy, due to the TermState addition.  But this is clumsy 
 and requires a lot of coding by the user to take advantage of.  Lucene should 
 be smart enough to share the rewritten queries automatically.
 This can be most readily (and powerfully) done by introducing a new method to 
 Query.java:
 Query rewriteUsingCache(IndexReader indexReader)
 ... and including a caching implementation right in Query.java which would 
 then work for all.  Of course, all callers would want to use this new method 
 rather than the current rewrite().
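The proposed rewriteUsingCache(IndexReader) could be sketched as follows. This is a simplified, self-contained stand-in: the Query and IndexReader classes and the cache layout are invented for illustration and are not Lucene's real classes or the attached patch.

```java
import java.util.IdentityHashMap;
import java.util.Map;

// Sketch of the rewriteUsingCache idea: a per-reader cache in the Query base
// class, so identical subqueries in a hierarchy share one rewrite instead of
// rewriting repeatedly.
public class RewriteCacheSketch {
    static class IndexReader { }  // stand-in for the real class

    static abstract class Query {
        private final Map<IndexReader, Query> rewriteCache = new IdentityHashMap<>();

        abstract Query rewrite(IndexReader reader);

        // Callers use this instead of rewrite(); the expensive rewrite runs at
        // most once per reader, and later calls return the cached result.
        final Query rewriteUsingCache(IndexReader reader) {
            return rewriteCache.computeIfAbsent(reader, this::rewrite);
        }
    }

    static class TermQuery extends Query {
        int rewriteCalls = 0;
        @Override Query rewrite(IndexReader reader) { rewriteCalls++; return this; }
    }

    public static void main(String[] args) {
        IndexReader reader = new IndexReader();
        TermQuery q = new TermQuery();
        q.rewriteUsingCache(reader);
        q.rewriteUsingCache(reader); // second call hits the cache
        System.out.println(q.rewriteCalls); // prints 1
    }
}
```

A real implementation would also need to bound or clear the cache when readers close; the sketch ignores that.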




[jira] [Commented] (LUCENE-3180) Can't delete a document using deleteDocument(int docID) if using IndexWriter AND IndexReader

2011-06-08 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13045846#comment-13045846
 ] 

Simon Willnauer commented on LUCENE-3180:
-

Danny, this is all expected behavior. I am going to close this issue now since 
there is no bug here whatsoever. Could you briefly describe what you are 
trying to do? I am happy to explain best practice on the user list. Thanks.

 Can't delete a document using deleteDocument(int docID) if using IndexWriter 
 AND IndexReader
 

 Key: LUCENE-3180
 URL: https://issues.apache.org/jira/browse/LUCENE-3180
 Project: Lucene - Java
  Issue Type: Bug
  Components: core/index
Affects Versions: 3.2
 Environment: Windows 
Reporter: Danny Lade
 Attachments: ImpossibleLuceneCode.java


 It is impossible to delete a document with reader.deleteDocument(docID) if 
 using an IndexWriter too.
 using:
 {code:java}
 writer = new IndexWriter(directory, config);
 reader = IndexReader.open(writer, true);
 {code}
 results in:
 {code:java}
   Exception in thread "main" java.lang.UnsupportedOperationException: This 
 IndexReader cannot make any changes to the index (it was opened with readOnly 
 = true)
   at 
 org.apache.lucene.index.ReadOnlySegmentReader.noWrite(ReadOnlySegmentReader.java:23)
   at 
 org.apache.lucene.index.ReadOnlyDirectoryReader.acquireWriteLock(ReadOnlyDirectoryReader.java:43)
   at 
 org.apache.lucene.index.IndexReader.deleteDocument(IndexReader.java:1067)
   at 
 de.morpheum.morphy.ImpossibleLuceneCode.main(ImpossibleLuceneCode.java:60)
 {code}
 and using:
 {code:java}
 writer = new IndexWriter(directory, config);
 reader = IndexReader.open(directory, false);
 {code}
   
 results in:
 {code:java}
   org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: 
 NativeFSLock@S:\Java\Morpheum\lucene\write.lock
   at org.apache.lucene.store.Lock.obtain(Lock.java:84)
   at 
 org.apache.lucene.index.DirectoryReader.acquireWriteLock(DirectoryReader.java:765)
   at 
 org.apache.lucene.index.IndexReader.deleteDocument(IndexReader.java:1067)
   at 
 de.morpheum.morphy.ImpossibleLuceneCode.main(ImpossibleLuceneCode.java:69)
 {code}




[jira] [Resolved] (LUCENE-3180) Can't delete a document using deleteDocument(int docID) if using IndexWriter AND IndexReader

2011-06-08 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer resolved LUCENE-3180.
-

Resolution: Invalid





[jira] [Commented] (LUCENE-3180) Can't delete a document using deleteDocument(int docID) if using IndexWriter AND IndexReader

2011-06-08 Thread Danny Lade (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13045866#comment-13045866
 ] 

Danny Lade commented on LUCENE-3180:


I tried to update some documents returned by a query. The main problem is that not 
all of the documents I found have to be updated (because the information about the 
changes is calculated outside the index).

It looks like in the appended example:

{code:java}
for (ScoreDoc scoreDoc : hits) {
    Document document = reader.document(scoreDoc.doc);

    // calculate changes; if there are none, skip this document
    if (!hasChanged) {
        continue;
    }

    // update the document
    // - remove / add fields
    // - etc.

    // update the index
    reader.deleteDocument(scoreDoc.doc);
    writer.addDocument(document);
}
{code}

So how do I update (or delete and re-add) a collection of documents when no 
query or term is given?
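
For reference, the standard Lucene answer is to give every document a stored unique-id field and call IndexWriter.updateDocument(Term, Document), which atomically deletes by term and adds, with no IndexReader.deleteDocument(docID) and no write.lock conflict. The toy model below only illustrates those semantics; the classes are plain-Java stand-ins, not Lucene's.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of IndexWriter.updateDocument(Term, Document): updating is a
// delete-by-unique-term followed by an add. Stand-in classes, not Lucene's.
public class UpdateByTerm {
    public static class Document {
        public final String id;
        public String body;
        public Document(String id, String body) { this.id = id; this.body = body; }
    }

    private final Map<String, Document> index = new LinkedHashMap<String, Document>();

    public void addDocument(Document doc) { index.put(doc.id, doc); }

    // Semantics of updateDocument(new Term("id", id), doc): any existing
    // document matching the term is deleted, then the new one is added.
    public void updateDocument(String id, Document doc) {
        index.remove(id);
        index.put(doc.id, doc);
    }

    public int numDocs() { return index.size(); }
    public Document get(String id) { return index.get(id); }
}
```

With the real API the loop above becomes `writer.updateDocument(new Term("id", id), document)` per changed document, leaving the reader read-only.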






[jira] [Assigned] (SOLR-2579) UIMAUpdateRequestProcessor ignore error fails if text.length() < 100

2011-06-08 Thread Koji Sekiguchi (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Sekiguchi reassigned SOLR-2579:


Assignee: Koji Sekiguchi

 UIMAUpdateRequestProcessor ignore error fails if text.length() < 100
 --

 Key: SOLR-2579
 URL: https://issues.apache.org/jira/browse/SOLR-2579
 Project: Solr
  Issue Type: Bug
Affects Versions: 3.2
Reporter: Elmer Garduno
Assignee: Koji Sekiguchi
Priority: Minor
 Fix For: 3.3

 Attachments: SOLR-2579.patch


 If UIMAUpdateRequestProcessor is configured to ignore errors, an exception is 
 raised when logging the error and text.length() < 100.
 {code:java}
   if (solrUIMAConfiguration.isIgnoreErrors())
     log.warn(new StringBuilder("skip the text processing due to ")
       .append(e.getLocalizedMessage()).append(optionalFieldInfo)
       .append(" text=\"").append(text.substring(0, 100)).append("...\"").toString());
   else {
     throw new SolrException(ErrorCode.SERVER_ERROR,
       new StringBuilder("processing error: ")
         .append(e.getLocalizedMessage()).append(optionalFieldInfo)
         .append(" text=\"").append(text.substring(0, 100)).append("...\"").toString(), e);
   }
 {code}
 I'm submitting a patch.
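
The guard such a patch would presumably add can be sketched as a clamped substring (a hedged sketch; the helper name and exact shape are illustrative, not necessarily the attached patch's code):

```java
// Safe snippet-building: never call text.substring(0, 100) on text shorter
// than 100 characters, which throws StringIndexOutOfBoundsException.
// Append "..." only when the text was actually truncated.
public class SafeSnippet {
    public static String snippet(String text) {
        if (text.length() <= 100) {
            return text;
        }
        return text.substring(0, 100) + "...";
    }
}
```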




[jira] [Updated] (SOLR-2579) UIMAUpdateRequestProcessor ignore error fails if text.length() < 100

2011-06-08 Thread Koji Sekiguchi (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Sekiguchi updated SOLR-2579:
-

Fix Version/s: 3.2.1

Good catch, Elmer! I'm traveling now. I'll look at this later today.

 UIMAUpdateRequestProcessor ignore error fails if text.length() < 100
 --

 Key: SOLR-2579
 URL: https://issues.apache.org/jira/browse/SOLR-2579
 Project: Solr
  Issue Type: Bug
Affects Versions: 3.2
Reporter: Elmer Garduno
Assignee: Koji Sekiguchi
Priority: Minor
 Fix For: 3.2.1, 3.3

 Attachments: SOLR-2579.patch


 If UIMAUpdateRequestProcessor is configured to ignore errors, an exception is 
 raised when logging the error and text.length() < 100.
 {code:java}
   if (solrUIMAConfiguration.isIgnoreErrors())
     log.warn(new StringBuilder("skip the text processing due to ")
       .append(e.getLocalizedMessage()).append(optionalFieldInfo)
       .append(" text=\"").append(text.substring(0, 100)).append("...\"").toString());
   else {
     throw new SolrException(ErrorCode.SERVER_ERROR,
       new StringBuilder("processing error: ")
         .append(e.getLocalizedMessage()).append(optionalFieldInfo)
         .append(" text=\"").append(text.substring(0, 100)).append("...\"").toString(), e);
   }
 {code}
 I'm submitting a patch.




[JENKINS] Lucene-Solr-tests-only-3.x - Build # 8696 - Failure

2011-06-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-tests-only-3.x/8696/

1 tests failed.
REGRESSION:  org.apache.lucene.index.TestAddIndexes.testAddIndexesWithRollback

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: 
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1227)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1145)
at 
org.apache.lucene.index.TestAddIndexes.testAddIndexesWithRollback(TestAddIndexes.java:923)




Build Log (for compile errors):
[...truncated 4970 lines...]






[jira] [Updated] (SOLR-2564) Integrating grouping module into Solr 4.0

2011-06-08 Thread Martijn van Groningen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn van Groningen updated SOLR-2564:


Attachment: SOLR-2564.patch

Attached a new patch that should select the term based collectors now more 
often:
* In the case of grouping by function and the value source is 
StrFieldValueSource.
* All field types that produce a StrFieldValueSource (in getValueSource) now use 
the term based collectors, so any custom field type can also be supported. Both 
TextField and StrField produce a StrFieldValueSource. I'm not sure if this is 
the right approach, but it was the easiest way to implement. 

 Integrating grouping module into Solr 4.0
 -

 Key: SOLR-2564
 URL: https://issues.apache.org/jira/browse/SOLR-2564
 Project: Solr
  Issue Type: Improvement
Reporter: Martijn van Groningen
Assignee: Martijn van Groningen
 Fix For: 4.0

 Attachments: LUCENE-2564.patch, SOLR-2564.patch, SOLR-2564.patch, 
 SOLR-2564.patch, SOLR-2564.patch, SOLR-2564.patch


 Since work on grouping module is going well. I think it is time to wire this 
 up in Solr.
 Besides the current grouping features Solr provides, Solr will then also 
 support second pass caching and total count based on groups.




[jira] [Issue Comment Edited] (SOLR-2564) Integrating grouping module into Solr 4.0

2011-06-08 Thread Martijn van Groningen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13045904#comment-13045904
 ] 

Martijn van Groningen edited comment on SOLR-2564 at 6/8/11 11:12 AM:
--

{quote}
I've been checking out the performance, and it generally seems fine. But of 
course we normally short circuit based on comparators and often don't get 
beyond that... so to exercise & isolate the rest of the code, I tried a 
worst-case scenario where the short circuit wouldn't work (sort=_docid_ desc) and 
solr trunk with this patch is ~16% slower than without it. Any ideas what the 
problem might be?
{quote}

What might be the problem is that the trunk is using (Grouping.java 589):
{code}
SearchGroup smallest = orderedGroups.pollLast();
{code}

Whilst the AbstractFirstPassGroupingCollector (line 217) is using:
{code}
final CollectedSearchGroup<GROUP_VALUE_TYPE> bottomGroup = orderedGroups.last();
orderedGroups.remove(bottomGroup);
{code}
The same pattern also occurs around line 271.

I haven't checked this out, but I think it is the most likely explanation for the 
difference between those two implementations. Retrieving the bottom group will be 
done in almost all cases when the short circuit doesn't work 
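
The difference can be sketched with a plain TreeSet (a Java 6 NavigableSet; method names below are illustrative, not the collectors' actual code). Both forms remove and return the largest element, but pollLast() finds and unlinks it in a single tree traversal, while last() plus remove() performs two:

```java
import java.util.TreeSet;

// Removing the "bottom" (largest-sorting) group from an ordered set:
// one tree traversal vs. two. Results are identical; only cost differs.
public class BottomGroup {
    // Grouping.java style: single traversal via NavigableSet.pollLast()
    public static Integer takeBottomFast(TreeSet<Integer> ordered) {
        return ordered.pollLast();
    }

    // AbstractFirstPassGroupingCollector style: last() then remove(),
    // i.e. two traversals of the red-black tree
    public static Integer takeBottomSlow(TreeSet<Integer> ordered) {
        Integer bottom = ordered.last();
        ordered.remove(bottom);
        return bottom;
    }
}
```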





[jira] [Commented] (LUCENE-3176) TestNRTThreads test failure

2011-06-08 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13045917#comment-13045917
 ] 

Simon Willnauer commented on LUCENE-3176:
-

I can reproduce this easily, even if I set search threads to 0 and index 
threads to 1. I forced the IW to use OpenMode.CREATE and suddenly the tests 
stopped failing. It seems the tempdir is not cleaned up, since it is always 
the second run that fails for me, never the first.

this is not a DWPT issue, phew!

 TestNRTThreads test failure
 ---

 Key: LUCENE-3176
 URL: https://issues.apache.org/jira/browse/LUCENE-3176
 Project: Lucene - Java
  Issue Type: Bug
 Environment: trunk
Reporter: Robert Muir
Assignee: Michael McCandless

 hit a fail in TestNRTThreads running tests over and over:




[jira] [Commented] (LUCENE-3176) TestNRTThreads test failure

2011-06-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13045918#comment-13045918
 ] 

Robert Muir commented on LUCENE-3176:
-

the test cleans itself up in afterClass(), so there is in fact an issue.





[jira] [Commented] (SOLR-2564) Integrating grouping module into Solr 4.0

2011-06-08 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13045920#comment-13045920
 ] 

Yonik Seeley commented on SOLR-2564:


Ah, good call Martijn - it must be that pollLast was replaced with two map 
operations.
Too bad Lucene isn't on Java6 yet!





[jira] [Commented] (SOLR-2564) Integrating grouping module into Solr 4.0

2011-06-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13045924#comment-13045924
 ] 

Robert Muir commented on SOLR-2564:
---

just send an email to the dev list... lots of people will +1, uwe will -1, but 
I don't see why this is any issue for 4.0





Re: [jira] [Commented] (SOLR-2564) Integrating grouping module into Solr 4.0

2011-06-08 Thread Simon Willnauer
On Wed, Jun 8, 2011 at 2:27 PM, Robert Muir (JIRA) j...@apache.org wrote:

    [ 
 https://issues.apache.org/jira/browse/SOLR-2564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13045924#comment-13045924
  ]

 Robert Muir commented on SOLR-2564:
 ---

 just send an email to the dev list... lots of people will +1, uwe will -1, 
 but I dont see why this is any issue for 4.0

+1 :)








[jira] [Commented] (LUCENE-3176) TestNRTThreads test failure

2011-06-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13045945#comment-13045945
 ] 

Robert Muir commented on LUCENE-3176:
-

taking a look at this, I don't like the way _TestUtil.getTempDir(String desc) 
was working before... it was basically desc + LTC.random.nextInt(xxx), so if 
you wired the seed like I did, and somehow stuff doesn't totally clean up, then 
it's easy to see how it could return an already-created dir.

I changed this method to use _TestUtil.createTempFile... I think this is much 
safer.
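
A sketch of the safer scheme (names are illustrative, not _TestUtil's actual code): let File.createTempFile pick a name guaranteed to be fresh, then convert it to a directory, instead of `desc + random.nextInt(n)`, which repeats when the test seed is fixed and cleanup fails.

```java
import java.io.File;
import java.io.IOException;

// Collision-free temp directories: createTempFile atomically reserves a
// unique path on disk, so a fixed random seed can never hand back an
// already-created directory.
public class TempDirs {
    public static File createTempDir(String desc) throws IOException {
        File f = File.createTempFile(desc, "tmp");
        // turn the reserved file into a directory of the same name
        if (!f.delete() || !f.mkdir()) {
            throw new IOException("could not create temp dir: " + f);
        }
        f.deleteOnExit();
        return f;
    }
}
```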





[JENKINS] Lucene-Solr-tests-only-3.x - Build # 8699 - Failure

2011-06-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-tests-only-3.x/8699/

All tests passed

Build Log (for compile errors):
[...truncated 15396 lines...]






[jira] [Resolved] (SOLR-2137) Function Queries: and() or() not()

2011-06-08 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-2137.


   Resolution: Duplicate
Fix Version/s: 4.0

This was folded into SOLR-2136

 Function Queries: and() or() not()
 --

 Key: SOLR-2137
 URL: https://issues.apache.org/jira/browse/SOLR-2137
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 1.4.1
Reporter: Jan Høydahl
 Fix For: 4.0


 Add logical function queries for AND, OR and NOT.
 These can then be used in more advanced conditional functions. May be modeled 
 after OpenOffice Calc functions: 
 http://wiki.services.openoffice.org/wiki/Documentation/How_Tos/Calc:_Logical_functions
 Example:
 and(color==red, or(price>100, price<200), not(soldout))
 This function will return true if field color is red and price is between 
 100-200, and the field soldout is not true.




[jira] [Created] (SOLR-2580) Create a new Search Component to alter queries based on business rules.

2011-06-08 Thread JIRA
Create a new Search Component to alter queries based on business rules. 


 Key: SOLR-2580
 URL: https://issues.apache.org/jira/browse/SOLR-2580
 Project: Solr
  Issue Type: New Feature
Reporter: Tomás Fernández Löbbe


The goal is to be able to adjust the relevance of documents based on 
user-defined business rules.

For example, in an e-commerce site, when the user chooses the shoes category, 
we may be interested in boosting products from a certain brand. This can be 
expressed as a rule in the following way:

rule "Boost Adidas products when searching shoes"
when
    $qt : QueryTool()
    TermQuery(term.field == "category", term.text == "shoes")
then
    $qt.boost("{!lucene}brand:adidas");
end

The QueryTool object should be used to alter the main query in an easy way. Even 
more human-like rules can be written:

rule "Boost Adidas products when searching shoes"
when
    Query has term "shoes" in field "product"
then
    Add boost query "{!lucene}brand:adidas"
end

These rules are written in a text file in the config directory and can be 
modified at runtime. Rules will be managed using JBoss Drools: 
http://www.jboss.org/drools/drools-expert.html

In a first stage, it will allow adding boost queries or changing sorting fields 
based on the user query, but it could be extended to allow more options.




[jira] [Created] (LUCENE-3181) changes2html.pl should collect release dates from JIRA REST API

2011-06-08 Thread Steven Rowe (JIRA)
changes2html.pl should collect release dates from JIRA REST API
---

 Key: LUCENE-3181
 URL: https://issues.apache.org/jira/browse/LUCENE-3181
 Project: Lucene - Java
  Issue Type: Improvement
  Components: general/website
Reporter: Steven Rowe
Assignee: Steven Rowe
Priority: Minor


Since LUCENE-3163 removed release dates from CHANGES.txt, the Changes.html 
generated by changes2html.pl no longer contains release dates, except those 
older release dates that are hard-coded in the script itself.

JIRA exposes a REST API through which project info is available.  For Lucene - 
java, lots of info is available in JSON format through 
https://issues.apache.org/jira/rest/api/2.0.alpha1/project/LUCENE , including a 
full list of each version's release date.






Re: [jira] [Created] (SOLR-2580) Create a new Search Component to alter queries based on business rules.

2011-06-08 Thread Muhannad
I think you can boost specific brands based on the criteria you use, 
specifically when using the DisMax query handler.

2011/6/8 Tomás Fernández Löbbe (JIRA) j...@apache.org





-- 
Eng. Muhannad al Hariri
Software Developer
email: muh.a...@gmail.com
twitter: @muh_acit http://twitter.com/muh_acit
Skype: muh.hari
phone: Jordan +962 78 677 5125

"They held us to account and scrutinized, then showed grace and set us free; 
such is the way of kings, dealing gently with their subjects." (translated 
from Arabic)

[jira] [Closed] (SOLR-2137) Function Queries: and() or() not()

2011-06-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-2137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl closed SOLR-2137.
-


Thanks





[jira] [Created] (LUCENE-3182) TestAddIndexes reproducible test failure on trunk

2011-06-08 Thread selckin (JIRA)
TestAddIndexes reproducible test failure on trunk
-

 Key: LUCENE-3182
 URL: https://issues.apache.org/jira/browse/LUCENE-3182
 Project: Lucene - Java
  Issue Type: Bug
Reporter: selckin


trunk: r1133385

{code}
[junit] Testsuite: org.apache.lucene.index.TestAddIndexes
[junit] Tests run: 2843, Failures: 1, Errors: 0, Time elapsed: 137.121 sec
[junit]
[junit] - Standard Output ---
[junit] java.io.FileNotFoundException: _cy.fdx
[junit] at 
org.apache.lucene.store.RAMDirectory.fileLength(RAMDirectory.java:121)
[junit] at 
org.apache.lucene.store.MockDirectoryWrapper.fileLength(MockDirectoryWrapper.java:606)
[junit] at 
org.apache.lucene.index.SegmentInfo.sizeInBytes(SegmentInfo.java:294)
[junit] at 
org.apache.lucene.index.TieredMergePolicy.size(TieredMergePolicy.java:633)
[junit] at 
org.apache.lucene.index.TieredMergePolicy.useCompoundFile(TieredMergePolicy.java:611)
[junit] at 
org.apache.lucene.index.IndexWriter.addIndexes(IndexWriter.java:2459)
[junit] at 
org.apache.lucene.index.TestAddIndexes$CommitAndAddIndexes3.doBody(TestAddIndexes.java:847)
[junit] at 
org.apache.lucene.index.TestAddIndexes$RunAddIndexesThreads$1.run(TestAddIndexes.java:675)
[junit] java.io.FileNotFoundException: _cx.fdx
[junit] at 
org.apache.lucene.store.RAMDirectory.fileLength(RAMDirectory.java:121)
[junit] at 
org.apache.lucene.store.MockDirectoryWrapper.fileLength(MockDirectoryWrapper.java:606)
[junit] at 
org.apache.lucene.index.SegmentInfo.sizeInBytes(SegmentInfo.java:294)
[junit] at 
org.apache.lucene.index.TieredMergePolicy.size(TieredMergePolicy.java:633)
[junit] at 
org.apache.lucene.index.TieredMergePolicy.useCompoundFile(TieredMergePolicy.java:611)
[junit] at 
org.apache.lucene.index.IndexWriter.addIndexes(IndexWriter.java:2459)
[junit] at 
org.apache.lucene.index.TestAddIndexes$CommitAndAddIndexes3.doBody(TestAddIndexes.java:847)
[junit] at 
org.apache.lucene.index.TestAddIndexes$RunAddIndexesThreads$1.run(TestAddIndexes.java:675)
[junit] -  ---
[junit] - Standard Error -
[junit] NOTE: reproduce with: ant test -Dtestcase=TestAddIndexes 
-Dtestmethod=testAddIndexesWithRollback 
-Dtests.seed=9026722750295014952:2645762923088581043 -Dtests.multiplier=3
[junit] NOTE: test params are: codec=RandomCodecProvider: {id=SimpleText, 
content=SimpleText, d=MockRandom, c=SimpleText}, locale=fr, 
timezone=Africa/Kigali
[junit] NOTE: all tests run in this JVM:
[junit] [TestAddIndexes]
[junit] NOTE: Linux 2.6.39-gentoo amd64/Sun Microsystems Inc. 1.6.0_25 
(64-bit)/cpus=8,threads=1,free=68050392,total=446234624
[junit] -  ---
[junit] Testcase: 
testAddIndexesWithRollback(org.apache.lucene.index.TestAddIndexes):   FAILED
[junit]
[junit] junit.framework.AssertionFailedError:
[junit] at 
org.apache.lucene.index.TestAddIndexes.testAddIndexesWithRollback(TestAddIndexes.java:932)
[junit] at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1362)
[junit] at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1280)
[junit]
[junit]
[junit] Test org.apache.lucene.index.TestAddIndexes FAILED
{code}


Fails randomly in my while(1) test run; it fails after a few minutes of running: 
{code}
ant test -Dtestcase=TestAddIndexes 
-Dtests.seed=9026722750295014952:2645762923088581043 -Dtests.multiplier=3 
-Dtests.iter=200 -Dtests.iter.min=1
{code}




Re: [jira] [Created] (SOLR-2580) Create a new Search Component to alter queries based on business rules.

2011-06-08 Thread Tomás Fernández Löbbe
Yes, you can specify a boost query in dismax, but this component will allow
you to add different boost queries based on business rules and the user
query.
Think of this component as something like the Query Elevation Component, but
based on business rules instead of the straight user query, and one that can
boost documents instead of elevating them to the top of the results.

We'll upload a small POC of this soon to show the idea.
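In the meantime, the idea can be sketched independently of Solr (the rule representation, parameter names, and matching logic below are illustrative assumptions, not the actual component's API):

```python
# Toy rule-based query alteration: if the user query matches a rule's
# condition, append a boost query to the request parameters.
def apply_rules(params, rules):
    """params: Solr-style request params; rules: (field, term, boost_query)
    triples -- a simplified, assumed rule format."""
    out = dict(params)
    for field, term, boost in rules:
        # naive condition: the raw query string contains "field:term"
        if "%s:%s" % (field, term) in params.get("q", ""):
            out.setdefault("bq", []).append(boost)
    return out

rules = [("category", "shoes", "{!lucene}brand:adidas")]
boosted = apply_rules({"q": "category:shoes"}, rules)
print(boosted["bq"])  # ['{!lucene}brand:adidas']
```

A real implementation would run inside a SearchComponent and delegate condition matching to the Drools engine rather than to string containment.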

On Wed, Jun 8, 2011 at 11:49 AM, Muhannad muh.a...@gmail.com wrote:

 I think you can boost specific brands based on the criteria you use,
 specifically when using the dismax query handler.

 2011/6/8 Tomás Fernández Löbbe (JIRA) j...@apache.org

 Create a new Search Component to alter queries based on business rules.
 

 Key: SOLR-2580
 URL: https://issues.apache.org/jira/browse/SOLR-2580
 Project: Solr
  Issue Type: New Feature
Reporter: Tomás Fernández Löbbe


 The goal is to be able to adjust the relevance of documents based on
 user-defined business rules.

 For example, in an e-commerce site, when the user chooses the shoes
 category, we may be interested in boosting products from a certain brand.
 This can be expressed as a rule in the following way:

 rule "Boost Adidas products when searching shoes"
 when
     $qt : QueryTool()
     TermQuery(term.field == "category", term.text == "shoes")
 then
     $qt.boost("{!lucene}brand:adidas");
 end

 The QueryTool object should be used to alter the main query in an easy way.
 Even more human-like rules can be written:

 rule Boost Adidas products when searching shoes
  when
Query has term shoes in field product
  then
Add boost query {!lucene}brand:adidas
 end

 These rules are written in a text file in the config directory and can be
 modified at runtime. Rules will be managed using JBoss Drools:
 http://www.jboss.org/drools/drools-expert.html

 In a first stage, it will allow adding boost queries or changing sort
 fields based on the user query, but it could be extended to allow more
 options.





 --
 Eng. Muhannad al Hariri
 Software Developer
 email: muh.a...@gmail.com
 twitter: @muh_acit (http://twitter.com/muh_acit)
 Skype: muh.hari
 phone: Jordan +962 78 677 5125
 "They called us to account and scrutinized, then showed grace and set us free;
 such is the way of kings, dealing gently with their subjects." [translated from
 Arabic]



[jira] [Commented] (SOLR-2564) Integrating grouping module into Solr 4.0

2011-06-08 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13046002#comment-13046002
 ] 

Michael McCandless commented on SOLR-2564:
--

Ahh nice catch Martijn!

 Integrating grouping module into Solr 4.0
 -

 Key: SOLR-2564
 URL: https://issues.apache.org/jira/browse/SOLR-2564
 Project: Solr
  Issue Type: Improvement
Reporter: Martijn van Groningen
Assignee: Martijn van Groningen
 Fix For: 4.0

 Attachments: LUCENE-2564.patch, SOLR-2564.patch, SOLR-2564.patch, 
 SOLR-2564.patch, SOLR-2564.patch, SOLR-2564.patch


 Since work on the grouping module is going well, I think it is time to wire this 
 up in Solr.
 Besides the current grouping features Solr provides, Solr will then also 
 support second-pass caching and total count based on groups.




[JENKINS] Lucene-Solr-tests-only-trunk - Build # 8699 - Failure

2011-06-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-tests-only-trunk/8699/

1 tests failed.
REGRESSION:  org.apache.lucene.index.TestAddIndexes.testAddIndexesWithRollback

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: 
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1362)
at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1280)
at 
org.apache.lucene.index.TestAddIndexes.testAddIndexesWithRollback(TestAddIndexes.java:932)




Build Log (for compile errors):
[...truncated 3254 lines...]






Re: [JENKINS] Lucene-Solr-tests-only-trunk - Build # 8699 - Failure

2011-06-08 Thread Michael McCandless
I'll track this down.

Mike McCandless

http://blog.mikemccandless.com


On Wed, Jun 8, 2011 at 12:06 PM, Apache Jenkins Server
jenk...@builds.apache.org wrote:
 Build: https://builds.apache.org/job/Lucene-Solr-tests-only-trunk/8699/

 1 tests failed.
 REGRESSION:  org.apache.lucene.index.TestAddIndexes.testAddIndexesWithRollback

 Error Message:
 null

 Stack Trace:
 junit.framework.AssertionFailedError:
        at 
 org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1362)
        at 
 org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1280)
        at 
 org.apache.lucene.index.TestAddIndexes.testAddIndexesWithRollback(TestAddIndexes.java:932)




 Build Log (for compile errors):
 [...truncated 3254 lines...]









[jira] [Updated] (SOLR-2541) Plugininfo tries to load nodes of type long

2011-06-08 Thread Frank Wesemann (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Frank Wesemann updated SOLR-2541:
-

Attachment: PlugininfoTest.java

JUnit tests for PluginInfo

 Plugininfo tries to load nodes of type long
 -

 Key: SOLR-2541
 URL: https://issues.apache.org/jira/browse/SOLR-2541
 Project: Solr
  Issue Type: Bug
Affects Versions: 3.1
 Environment: all
Reporter: Frank Wesemann
 Attachments: PlugininfoTest.java, Solr-2541.patch


 As of version 3.1, PluginInfo adds all nodes whose types are not 
 lst, str, int, bool, arr, float or double to the children list.
 The type long is missing from the NL_TAGS set.
 I assume this is a bug because DOMUtil recognizes this type, so I consider it a 
 valid tag in solrconfig.xml.
 Maybe it's time for a DTD? Or one may define SolrConfig.nodetypes somewhere.
 I'll add a patch that extends the NL_TAGS set.




[jira] [Commented] (LUCENE-3176) TestNRTThreads test failure

2011-06-08 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13046066#comment-13046066
 ] 

Simon Willnauer commented on LUCENE-3176:
-

Robert, can you still reproduce this, or can we close this issue?

 TestNRTThreads test failure
 ---

 Key: LUCENE-3176
 URL: https://issues.apache.org/jira/browse/LUCENE-3176
 Project: Lucene - Java
  Issue Type: Bug
 Environment: trunk
Reporter: Robert Muir
Assignee: Michael McCandless

 hit a fail in TestNRTThreads running tests over and over:




[jira] [Commented] (LUCENE-3176) TestNRTThreads test failure

2011-06-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13046070#comment-13046070
 ] 

Robert Muir commented on LUCENE-3176:
-

I could never really reproduce it, but sometimes if I ran all tests with 
-Dtests.seed=0:0 it would happen.

The reason this test is not reproducible is that it uses 'n seconds' as a 
limit, so whether it passes or fails depends on what your computer is doing at 
the moment.

I think we must change it to limit itself by number of docs instead.
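The distinction can be sketched abstractly (a toy illustration, not the actual test code):

```python
import time

def run_time_bounded(work, seconds):
    """Non-reproducible: the iteration count depends on machine speed and
    load, so a given random seed exercises different work on each run."""
    n = 0
    deadline = time.time() + seconds
    while time.time() < deadline:
        work()
        n += 1
    return n  # varies from run to run

def run_doc_bounded(work, num_docs):
    """Reproducible: the same iteration count on every machine, so a
    failure can be replayed from its seed."""
    for _ in range(num_docs):
        work()
    return num_docs

print(run_doc_bounded(lambda: None, 100))  # 100
```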

 TestNRTThreads test failure
 ---

 Key: LUCENE-3176
 URL: https://issues.apache.org/jira/browse/LUCENE-3176
 Project: Lucene - Java
  Issue Type: Bug
 Environment: trunk
Reporter: Robert Muir
Assignee: Michael McCandless

 hit a fail in TestNRTThreads running tests over and over:




[jira] [Commented] (SOLR-2564) Integrating grouping module into Solr 4.0

2011-06-08 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13046072#comment-13046072
 ] 

Michael McCandless commented on SOLR-2564:
--

I think we should decouple the "should Lucene require Java 6" question (which I 
expect to be an involved discussion) from making progress here.

My feeling is we should still commit this.  The 16% slowdown is on a very 
synthetic case (MatchAllDocsQuery, sorting by reversed docID, grouping by 
random int field)... unless we also see unacceptable slowdowns in more 
realistic cases?  Also, net/net the user should see a speedup, typically, since 
caching is enabled by default.

We should still open an issue to cut this code back over to pollLast once we 
can use Java 6.

Another option is to allow the grouping module (separately from Lucene core) to 
use Java 6 code but even that could be involved :)

Yonik, how do you create the index used for this test? Somehow you generate an 
int field with 1000 random unique values -- do you have a client-side script you 
use to create random docs in Solr?

 Integrating grouping module into Solr 4.0
 -

 Key: SOLR-2564
 URL: https://issues.apache.org/jira/browse/SOLR-2564
 Project: Solr
  Issue Type: Improvement
Reporter: Martijn van Groningen
Assignee: Martijn van Groningen
 Fix For: 4.0

 Attachments: LUCENE-2564.patch, SOLR-2564.patch, SOLR-2564.patch, 
 SOLR-2564.patch, SOLR-2564.patch, SOLR-2564.patch


 Since work on the grouping module is going well, I think it is time to wire this 
 up in Solr.
 Besides the current grouping features Solr provides, Solr will then also 
 support second-pass caching and total count based on groups.




[jira] [Commented] (LUCENE-3176) TestNRTThreads test failure

2011-06-08 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13046074#comment-13046074
 ] 

Michael McCandless commented on LUCENE-3176:


bq. I think we must change it to limit itself by number of docs instead.

I agree: let's fix that.

 TestNRTThreads test failure
 ---

 Key: LUCENE-3176
 URL: https://issues.apache.org/jira/browse/LUCENE-3176
 Project: Lucene - Java
  Issue Type: Bug
 Environment: trunk
Reporter: Robert Muir
Assignee: Michael McCandless

 hit a fail in TestNRTThreads running tests over and over:




[jira] [Updated] (SOLR-2542) dataimport global session putVal blank

2011-06-08 Thread Frank Wesemann (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Frank Wesemann updated SOLR-2542:
-

Attachment: TestContext.java

Updated to trunk version. Uses changed constructor for SolrWriter.

 dataimport global session putVal blank
 --

 Key: SOLR-2542
 URL: https://issues.apache.org/jira/browse/SOLR-2542
 Project: Solr
  Issue Type: Bug
Affects Versions: 3.1
Reporter: Linbin Chen
  Labels: dataimport
 Fix For: 3.3

 Attachments: TestContext.java, TestContext.java, 
 dataimport-globalSession-bug-solr3.1.patch


 {code:title=ContextImpl.java}
   private void putVal(String name, Object val, Map map) {
 if(val == null) map.remove(name);
 else entitySession.put(name, val);
   }
 {code}
 change to 
 {code:title=ContextImpl.java}
   private void putVal(String name, Object val, Map map) {
 if(val == null) map.remove(name);
 else map.put(name, val);
   }
 {code}
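The effect of the bug (a value put into the global session silently lands in the entity session instead) can be illustrated outside Solr with a minimal Python analogue; the names here are illustrative, not DIH's actual fields:

```python
# Analogue of the buggy ContextImpl.putVal: the target map is honored on
# the remove path but ignored on the put path.
def put_val_buggy(name, val, target, entity_session):
    if val is None:
        target.pop(name, None)
    else:
        entity_session[name] = val  # bug: should be target[name] = val

def put_val_fixed(name, val, target):
    if val is None:
        target.pop(name, None)
    else:
        target[name] = val

global_session, entity_session = {}, {}
put_val_buggy("k", 1, global_session, entity_session)
print(global_session)  # {} -- the value went to entity_session instead
put_val_fixed("k", 1, global_session)
print(global_session)  # {'k': 1}
```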




[jira] [Commented] (SOLR-2564) Integrating grouping module into Solr 4.0

2011-06-08 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13046091#comment-13046091
 ] 

Yonik Seeley commented on SOLR-2564:


bq. Another option is to allow the grouping module (separately from Lucene 
core) to use Java 6 code

+1

bq. Yonik, how do you create the index used for this test? Somehow you generate 
an int field w/ random 1000 unique values – do you have a client-side script 
you use to create random docs in Solr?

I have some CSV files lying around that I reuse for ad-hoc testing of a lot of 
stuff.  They were created with a simple Python script.
Then I simply index with:
{code}
URL=http://localhost:8983/solr
curl "$URL/update/csv?stream.url=file:/tmp/test.csv&overwrite=false&commit=true"
{code}

It was also my first reaction to think that this is a very synthetic case that 
people are unlikely to hit... until I thought about dates.  Indexing everything 
in date order is a pretty common thing to do, and so is sorting by date - which 
hits the exact same case.  Queries of *:* and simple filter queries on type, 
etc, also tend to be pretty common (i.e. full-text relevance/performance 
actually isn't an important feature for some users).

How complex must queries be for caching to generate a net benefit under load? I 
haven't tried to test this myself.

 Integrating grouping module into Solr 4.0
 -

 Key: SOLR-2564
 URL: https://issues.apache.org/jira/browse/SOLR-2564
 Project: Solr
  Issue Type: Improvement
Reporter: Martijn van Groningen
Assignee: Martijn van Groningen
 Fix For: 4.0

 Attachments: LUCENE-2564.patch, SOLR-2564.patch, SOLR-2564.patch, 
 SOLR-2564.patch, SOLR-2564.patch, SOLR-2564.patch


 Since work on the grouping module is going well, I think it is time to wire this 
 up in Solr.
 Besides the current grouping features Solr provides, Solr will then also 
 support second-pass caching and total count based on groups.




[jira] [Updated] (SOLR-2567) Solr should default to TieredMergePolicy

2011-06-08 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated SOLR-2567:
--

Attachment: SOLR-2567.patch

OK, here's a patch for committing.

After reviewing SOLR-2572, I think it's overkill at the moment.

This is because SolrPluginUtils.applySetters takes care of all possible 
parameters in all of our merge policies (including contrib) available 
today: all of their parameters are setters that take primitive types.

So, I'd like to commit this shortly and resolve SOLR-2572 as Won't Fix.

I added a test that configures some tiered-mp specific stuff.


 Solr should default to TieredMergePolicy
 

 Key: SOLR-2567
 URL: https://issues.apache.org/jira/browse/SOLR-2567
 Project: Solr
  Issue Type: Bug
  Components: update
Reporter: Robert Muir
 Fix For: 3.3, 4.0

 Attachments: SOLR-2567.patch, SOLR-2567.patch, SOLR-2567.patch


 even if we set luceneMatchVersion to >= 3.2 (SOLR-2557),
 Solr still defaults to LogByte




[jira] [Resolved] (SOLR-2572) improve mergepolicy configuration

2011-06-08 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved SOLR-2572.
---

Resolution: Won't Fix

See my note on SOLR-2567: at the moment all merge policies take simple setters 
with primitive types, so you can actually configure all of their parameters 
already.

Because of this, I think some factory interface would just be overkill, when 
you can already just do:
{noformat}
<mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
  <int name="maxMergeAtOnceExplicit">19</int>
  <int name="segmentsPerTier">9</int>
</mergePolicy>
{noformat}


 improve mergepolicy configuration
 -

 Key: SOLR-2572
 URL: https://issues.apache.org/jira/browse/SOLR-2572
 Project: Solr
  Issue Type: Improvement
Reporter: Robert Muir
 Fix For: 3.3, 4.0


 Spinoff from SOLR-2567.
 Currently, configuration of a merge policy in Solr is by Lucene class name 
 (which must have a no-arg constructor), and some merge-policy-specific 
 configuration parameters are not per merge policy but sit with the rest of 
 the index configuration.
 I think we should make this more pluggable, so that we can fully configure 
 things like TieredMergePolicy,
 and also so that if someone wants to plug in their own MP they can do that 
 too.




[jira] [Commented] (SOLR-2564) Integrating grouping module into Solr 4.0

2011-06-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13046103#comment-13046103
 ] 

Robert Muir commented on SOLR-2564:
---

{quote}
Another option is to allow the grouping module (separately from Lucene core) to 
use Java 6 code but even that could be involved 
{quote}

Personally I am against parts of the code being Java 6 and other parts being 
Java 5; we already have this situation today (the Solr part is Java 6, 
everything else is Java 5).

Come on, it's a major version, let's just cut everything over to Java 6. Java 5 
isn't even supported by Oracle anymore, so why the hell do we support it?


 Integrating grouping module into Solr 4.0
 -

 Key: SOLR-2564
 URL: https://issues.apache.org/jira/browse/SOLR-2564
 Project: Solr
  Issue Type: Improvement
Reporter: Martijn van Groningen
Assignee: Martijn van Groningen
 Fix For: 4.0

 Attachments: LUCENE-2564.patch, SOLR-2564.patch, SOLR-2564.patch, 
 SOLR-2564.patch, SOLR-2564.patch, SOLR-2564.patch


 Since work on the grouping module is going well, I think it is time to wire this 
 up in Solr.
 Besides the current grouping features Solr provides, Solr will then also 
 support second-pass caching and total count based on groups.




[jira] [Commented] (SOLR-2564) Integrating grouping module into Solr 4.0

2011-06-08 Thread Ryan McKinley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13046105#comment-13046105
 ] 

Ryan McKinley commented on SOLR-2564:
-

bq. Java 5 isn't even supported by oracle anymore, so why the hell do we 
support it?

+1  (though this discussion should happen elsewhere)

If the 16% slowdown is the worst case and under 'normal' use it would be 
equivalent or faster, I say let's move forward with that and push for Java 6 in a 
different issue.  Having a Java 6 version of a Lucene module (in Solr?) seems like 
a mess.



 Integrating grouping module into Solr 4.0
 -

 Key: SOLR-2564
 URL: https://issues.apache.org/jira/browse/SOLR-2564
 Project: Solr
  Issue Type: Improvement
Reporter: Martijn van Groningen
Assignee: Martijn van Groningen
 Fix For: 4.0

 Attachments: LUCENE-2564.patch, SOLR-2564.patch, SOLR-2564.patch, 
 SOLR-2564.patch, SOLR-2564.patch, SOLR-2564.patch


 Since work on the grouping module is going well, I think it is time to wire this 
 up in Solr.
 Besides the current grouping features Solr provides, Solr will then also 
 support second-pass caching and total count based on groups.




[jira] [Commented] (SOLR-2564) Integrating grouping module into Solr 4.0

2011-06-08 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13046106#comment-13046106
 ] 

Erik Hatcher commented on SOLR-2564:


Just for grins, here's a Ruby script that'll do it (provided you have the 
solr-ruby gem installed):

{code}
require 'solr'
solr = Solr::Connection.new
1.upto(1000) {|i| solr.add(:id => i, :single1000_i => i)}
solr.commit
{code}


 Integrating grouping module into Solr 4.0
 -

 Key: SOLR-2564
 URL: https://issues.apache.org/jira/browse/SOLR-2564
 Project: Solr
  Issue Type: Improvement
Reporter: Martijn van Groningen
Assignee: Martijn van Groningen
 Fix For: 4.0

 Attachments: LUCENE-2564.patch, SOLR-2564.patch, SOLR-2564.patch, 
 SOLR-2564.patch, SOLR-2564.patch, SOLR-2564.patch


 Since work on the grouping module is going well, I think it is time to wire this 
 up in Solr.
 Besides the current grouping features Solr provides, Solr will then also 
 support second-pass caching and total count based on groups.




[jira] [Resolved] (SOLR-2567) Solr should default to TieredMergePolicy

2011-06-08 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved SOLR-2567.
---

Resolution: Fixed
  Assignee: Robert Muir

 Solr should default to TieredMergePolicy
 

 Key: SOLR-2567
 URL: https://issues.apache.org/jira/browse/SOLR-2567
 Project: Solr
  Issue Type: Bug
  Components: update
Reporter: Robert Muir
Assignee: Robert Muir
 Fix For: 3.3, 4.0

 Attachments: SOLR-2567.patch, SOLR-2567.patch, SOLR-2567.patch


 even if we set luceneMatchVersion to >= 3.2 (SOLR-2557),
 Solr still defaults to LogByte




[jira] [Created] (LUCENE-3183) TestIndexWriter failure: AIOOBE

2011-06-08 Thread selckin (JIRA)
TestIndexWriter failure: AIOOBE
---

 Key: LUCENE-3183
 URL: https://issues.apache.org/jira/browse/LUCENE-3183
 Project: Lucene - Java
  Issue Type: Bug
Reporter: selckin


trunk: r1133486 
{code}
[junit] Testsuite: org.apache.lucene.index.TestIndexWriter
[junit] Testcase: 
testEmptyFieldName(org.apache.lucene.index.TestIndexWriter):  Caused an 
ERROR
[junit] CheckIndex failed
[junit] java.lang.RuntimeException: CheckIndex failed
[junit] at 
org.apache.lucene.util._TestUtil.checkIndex(_TestUtil.java:158)
[junit] at 
org.apache.lucene.util._TestUtil.checkIndex(_TestUtil.java:144)
[junit] at 
org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:477)
[junit] at 
org.apache.lucene.index.TestIndexWriter.testEmptyFieldName(TestIndexWriter.java:857)
[junit] at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1362)
[junit] at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1280)
[junit] 
[junit] 
[junit] Tests run: 39, Failures: 0, Errors: 1, Time elapsed: 17.634 sec
[junit] 
[junit] - Standard Output ---
[junit] CheckIndex failed
[junit] Segments file=segments_1 numSegments=1 version=FORMAT_4_0 [Lucene 
4.0]
[junit]   1 of 1: name=_0 docCount=1
[junit] codec=SegmentCodecs [codecs=[PreFlex], 
provider=org.apache.lucene.index.codecs.CoreCodecProvider@3f78807]
[junit] compound=false
[junit] hasProx=true
[junit] numFiles=8
[junit] size (MB)=0
[junit] diagnostics = {os.version=2.6.39-gentoo, os=Linux, 
lucene.version=4.0-SNAPSHOT, source=flush, os.arch=amd64, 
java.version=1.6.0_25, java.vendor=Sun Microsystems Inc.}
[junit] no deletions
[junit] test: open reader.OK
[junit] test: fields..OK [1 fields]
[junit] test: field norms.OK [1 fields]
[junit] test: terms, freq, prox...ERROR: 
java.lang.ArrayIndexOutOfBoundsException: -1

[junit] java.lang.ArrayIndexOutOfBoundsException: -1
[junit] at 
org.apache.lucene.index.codecs.preflex.TermInfosReader.seekEnum(TermInfosReader.java:212)
[junit] at 
org.apache.lucene.index.codecs.preflex.TermInfosReader.seekEnum(TermInfosReader.java:301)
[junit] at 
org.apache.lucene.index.codecs.preflex.TermInfosReader.get(TermInfosReader.java:234)
[junit] at 
org.apache.lucene.index.codecs.preflex.TermInfosReader.terms(TermInfosReader.java:371)
[junit] at 
org.apache.lucene.index.codecs.preflex.PreFlexFields$PreTermsEnum.reset(PreFlexFields.java:719)
[junit] at 
org.apache.lucene.index.codecs.preflex.PreFlexFields$PreTerms.iterator(PreFlexFields.java:249)
[junit] at 
org.apache.lucene.index.PerFieldCodecWrapper$FieldsReader$FieldsIterator.terms(PerFieldCodecWrapper.java:147)
[junit] at 
org.apache.lucene.index.CheckIndex.testTermIndex(CheckIndex.java:610)
[junit] at 
org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:495)
[junit] at 
org.apache.lucene.util._TestUtil.checkIndex(_TestUtil.java:154)
[junit] at 
org.apache.lucene.util._TestUtil.checkIndex(_TestUtil.java:144)
[junit] at 
org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:477)
[junit] at 
org.apache.lucene.index.TestIndexWriter.testEmptyFieldName(TestIndexWriter.java:857)
[junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[junit] at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
[junit] at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[junit] at java.lang.reflect.Method.invoke(Method.java:597)
[junit] at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
[junit] at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
[junit] at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
[junit] at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
[junit] at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:48)
[junit] at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
[junit] at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
[junit] at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76)
[junit] at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1362)
[junit] at 
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1280)
[junit] at 

TestAddIndexes failure

2011-06-08 Thread Ryan McKinley
Hit this while testing some solr changes...  I have not tried on a
clean trunk yet


[junit] Testsuite: org.apache.lucene.index.TestAddIndexes
[junit] Testcase:
testAddIndexesWithRollback(org.apache.lucene.index.TestAddIndexes):
   FAILED
[junit]
[junit] junit.framework.AssertionFailedError:
[junit] at
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1362)
[junit] at
org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1280)
[junit] at
org.apache.lucene.index.TestAddIndexes.testAddIndexesWithRollback(TestAddIndexes.java:932)
[junit]
[junit]
[junit] Tests run: 20, Failures: 1, Errors: 0, Time elapsed: 9.521 sec
[junit]
[junit] - Standard Output ---
[junit] java.io.FileNotFoundException: _cb_0.tib
[junit] at
org.apache.lucene.store.RAMDirectory.fileLength(RAMDirectory.java:121)
[junit] at
org.apache.lucene.store.MockDirectoryWrapper.fileLength(MockDirectoryWrapper.java:606)
[junit] at
org.apache.lucene.index.SegmentInfo.sizeInBytes(SegmentInfo.java:294)
[junit] at
org.apache.lucene.index.TieredMergePolicy.size(TieredMergePolicy.java:633)
[junit] at
org.apache.lucene.index.TieredMergePolicy.useCompoundFile(TieredMergePolicy.java:611)
[junit] at
org.apache.lucene.index.IndexWriter.addIndexes(IndexWriter.java:2459)
[junit] at
org.apache.lucene.index.TestAddIndexes$CommitAndAddIndexes3.doBody(TestAddIndexes.java:847)
[junit] at
org.apache.lucene.index.TestAddIndexes$RunAddIndexesThreads$1.run(TestAddIndexes.java:675)
[junit] java.io.FileNotFoundException: _c3.fdt
[junit] at
org.apache.lucene.store.RAMDirectory.fileLength(RAMDirectory.java:121)
[junit] at
org.apache.lucene.store.MockDirectoryWrapper.fileLength(MockDirectoryWrapper.java:606)
[junit] at
org.apache.lucene.index.SegmentInfo.sizeInBytes(SegmentInfo.java:294)
[junit] at
org.apache.lucene.index.TieredMergePolicy.size(TieredMergePolicy.java:633)
[junit] at
org.apache.lucene.index.TieredMergePolicy.useCompoundFile(TieredMergePolicy.java:611)
[junit] at
org.apache.lucene.index.IndexWriter.addIndexes(IndexWriter.java:2459)
[junit] at
org.apache.lucene.index.TestAddIndexes$CommitAndAddIndexes3.doBody(TestAddIndexes.java:847)
[junit] at
org.apache.lucene.index.TestAddIndexes$RunAddIndexesThreads$1.run(TestAddIndexes.java:675)
[junit] java.io.FileNotFoundException: _cu.fdt
[junit] at
org.apache.lucene.store.RAMDirectory.fileLength(RAMDirectory.java:121)
[junit] at
org.apache.lucene.store.MockDirectoryWrapper.fileLength(MockDirectoryWrapper.java:606)
[junit] at
org.apache.lucene.index.SegmentInfo.sizeInBytes(SegmentInfo.java:294)
[junit] at
org.apache.lucene.index.TieredMergePolicy.size(TieredMergePolicy.java:633)
[junit] at
org.apache.lucene.index.TieredMergePolicy.useCompoundFile(TieredMergePolicy.java:611)
[junit] at
org.apache.lucene.index.IndexWriter.addIndexes(IndexWriter.java:2459)
[junit] at
org.apache.lucene.index.TestAddIndexes$CommitAndAddIndexes3.doBody(TestAddIndexes.java:847)
[junit] at
org.apache.lucene.index.TestAddIndexes$RunAddIndexesThreads$1.run(TestAddIndexes.java:675)
[junit] -  ---
[junit] - Standard Error -
[junit] NOTE: reproduce with: ant test -Dtestcase=TestAddIndexes
-Dtestmethod=testAddIndexesWithRollback
-Dtests.seed=-4910267235658469444:-5699216116893861784
[junit] NOTE: test params are: codec=RandomCodecProvider:
{id=MockRandom, content=Standard,
d=MockVariableIntBlock(baseBlockSize=41),
c=MockFixedIntBlock(blockSize=61)}, locale=el_GR, timezone=Europe/Zurich
[junit] NOTE: all tests run in this JVM:
[junit] [TestAssertions, TestCharTermAttributeImpl, TestAddIndexes]
[junit] NOTE: Windows Vista 6.0 amd64/Sun Microsystems Inc.
1.6.0_13 (64-bit)/cpus=8,threads=1,free=124894328,total=220463104
[junit] -  ---
[junit] TEST org.apache.lucene.index.TestAddIndexes FAILED


ryan

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: TestAddIndexes failure

2011-06-08 Thread Robert Muir
In my opinion this test never worked quite right before; I mucked with
its params a bit in LUCENE-3175, and now I think the test is actually
testing :)

In general I found that while trying to 'speed up' the tests, changing
the parameters exposed some test bugs (at least). So I think there is
a problem with our tests: they use too many fixed parameters,
such as the number of documents.

I'm gonna open an issue for this soon and hopefully make all the tests
really really angry.
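One way to attack the fixed-parameter problem is to derive every such constant from the run's random seed, so counts vary across runs but any failure is reproducible. A generic sketch of that idea (the class name and the use of a tests.seed system property are illustrative only; Lucene's test framework has its own randomization infrastructure):

```java
import java.util.Random;

public class RandomizedParams {
  // Pick a document count from a range instead of hard-coding one.
  static int pickNumDocs(Random random) {
    return 50 + random.nextInt(451);  // 50..500 inclusive
  }

  public static void main(String[] args) {
    // One seed drives all parameters; printing it lets a failing run be
    // replayed exactly, as the "-Dtests.seed=..." reproduce lines do.
    long seed = Long.getLong("tests.seed", new Random().nextLong());
    Random random = new Random(seed);
    int numDocs = pickNumDocs(random);
    System.out.println("seed=" + seed + " numDocs=" + numDocs);
    // ... index numDocs documents and assert invariants that must hold for
    // any count, rather than expectations tied to one magic number ...
  }
}
```

The assertions then have to state properties that are true for any parameter choice, which is exactly what flushes out tests that only passed for one fixed value.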

On Wed, Jun 8, 2011 at 2:11 PM, Ryan McKinley ryan...@gmail.com wrote:
 Hit this while testing some solr changes...  I have not tried on a
 clean trunk yet


    [quoted junit output trimmed; identical to the original message above]


 ryan


[jira] [Commented] (LUCENE-3180) Can't delete a document using deleteDocument(int docID) if using IndexWriter AND IndexReader

2011-06-08 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13046120#comment-13046120
 ] 

Simon Willnauer commented on LUCENE-3180:
-

Hey Danny,

You can/should modify the Lucene index with only one writer at a time.
In your example the IndexReader needs to acquire the lock on the index, which
is already held by the IndexWriter. In order to modify the index via
IndexReader you need to open it writable too (pass false for readOnly).

Usually, to update a document, you use some kind of unique ID field and pass
the ID term plus the document to IndexWriter#updateDocument. This deletes all
previously indexed documents with the same ID term.

Hope that helps. You should get some help on the user list too.
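A minimal sketch of the update-by-ID pattern described above, against the Lucene 3.x API (the directory path and field names are made up for illustration):

```java
import java.io.File;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class UpdateByIdExample {
  public static void main(String[] args) throws Exception {
    Directory directory = FSDirectory.open(new File("lucene"));
    IndexWriterConfig config = new IndexWriterConfig(
        Version.LUCENE_32, new StandardAnalyzer(Version.LUCENE_32));
    IndexWriter writer = new IndexWriter(directory, config);

    Document doc = new Document();
    // The unique ID field is not analyzed, so the ID term matches exactly.
    doc.add(new Field("id", "42", Field.Store.YES, Field.Index.NOT_ANALYZED));
    doc.add(new Field("content", "updated text",
        Field.Store.YES, Field.Index.ANALYZED));

    // Deletes every previously indexed document whose "id" term matches,
    // then adds the new document, all through the single open writer.
    writer.updateDocument(new Term("id", "42"), doc);

    // To delete without replacing, use the same ID term:
    // writer.deleteDocuments(new Term("id", "42"));

    writer.close();
    directory.close();
  }
}
```

The point is that deletes and updates go through the IndexWriter by term, so no second write lock is ever needed; the read-only IndexReader opened from the writer stays read-only.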

 Can't delete a document using deleteDocument(int docID) if using IndexWriter 
 AND IndexReader
 

 Key: LUCENE-3180
 URL: https://issues.apache.org/jira/browse/LUCENE-3180
 Project: Lucene - Java
  Issue Type: Bug
  Components: core/index
Affects Versions: 3.2
 Environment: Windows 
Reporter: Danny Lade
 Attachments: ImpossibleLuceneCode.java


 It is impossible to delete a document with reader.deleteDocument(docID) if 
 using an IndexWriter too.
 using:
 {code:java}
 writer = new IndexWriter(directory, config);
 reader = IndexReader.open(writer, true);
 {code}
 results in:
 {code:java}
   Exception in thread main java.lang.UnsupportedOperationException: This 
 IndexReader cannot make any changes to the index (it was opened with readOnly 
 = true)
   at 
 org.apache.lucene.index.ReadOnlySegmentReader.noWrite(ReadOnlySegmentReader.java:23)
   at 
 org.apache.lucene.index.ReadOnlyDirectoryReader.acquireWriteLock(ReadOnlyDirectoryReader.java:43)
   at 
 org.apache.lucene.index.IndexReader.deleteDocument(IndexReader.java:1067)
   at 
 de.morpheum.morphy.ImpossibleLuceneCode.main(ImpossibleLuceneCode.java:60)
 {code}
 and using:
 {code:java}
 writer = new IndexWriter(directory, config);
 reader = IndexReader.open(directory, false);
 {code}
   
 results in:
 {code:java}
   org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: 
 NativeFSLock@S:\Java\Morpheum\lucene\write.lock
   at org.apache.lucene.store.Lock.obtain(Lock.java:84)
   at 
 org.apache.lucene.index.DirectoryReader.acquireWriteLock(DirectoryReader.java:765)
   at 
 org.apache.lucene.index.IndexReader.deleteDocument(IndexReader.java:1067)
   at 
 de.morpheum.morphy.ImpossibleLuceneCode.main(ImpossibleLuceneCode.java:69)
 {code}

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: TestAddIndexes failure

2011-06-08 Thread Michael McCandless
I'm currently digging on this... it's a doozie.  Robert fang'd up this test!!

Mike McCandless

http://blog.mikemccandless.com

On Wed, Jun 8, 2011 at 2:11 PM, Ryan McKinley ryan...@gmail.com wrote:
 Hit this while testing some solr changes...  I have not tried on a
 clean trunk yet


    [quoted junit output trimmed; identical to the original message above]


 ryan







[JENKINS] Lucene-Solr-tests-only-3.x - Build # 8706 - Failure

2011-06-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-tests-only-3.x/8706/

All tests passed

Build Log (for compile errors):
[...truncated 15366 lines...]






[jira] [Commented] (SOLR-2535) In Solr 3.1.0 the admin/file handler fails to show directory listings

2011-06-08 Thread Peter Wolanin (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13046122#comment-13046122
 ] 

Peter Wolanin commented on SOLR-2535:
-

This ought to be a trivial fix, so I hope we can get it in 3.1.1, or is 3.3 
going to be the next minor version?

 In Solr 3.1.0 the admin/file handler fails to show directory listings
 -

 Key: SOLR-2535
 URL: https://issues.apache.org/jira/browse/SOLR-2535
 Project: Solr
  Issue Type: Bug
  Components: SearchComponents - other
Affects Versions: 3.1, 4.0
 Environment: java 1.6, jetty
Reporter: Peter Wolanin
 Fix For: 3.3


 In Solr 1.4.1, going to the path solr/admin/file I see an XML-formatted 
 listing of the conf directory, like:
 {noformat}
 <response>
 <lst name="responseHeader"><int name="status">0</int><int name="QTime">1</int></lst>
 <lst name="files">
   <lst name="elevate.xml"><long name="size">1274</long><date name="modified">2011-03-06T20:42:54Z</date></lst>
   ...
 </lst>
 </response>
 {noformat}
 I can list the xslt sub-dir using solr/admin/files?file=/xslt
 In Solr 3.1.0, both of these fail with a 500 error:
 {noformat}
 HTTP ERROR 500
 Problem accessing /solr/admin/file/. Reason:
 did not find a CONTENT object
 java.io.IOException: did not find a CONTENT object
 {noformat}
 Looking at the code in class ShowFileRequestHandler, it seems like 3.1.0 
 should still handle directory listings if no file name is given, or if the 
 file is a directory, so I am filing this as a bug.




[jira] [Commented] (SOLR-2564) Integrating grouping module into Solr 4.0

2011-06-08 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13046125#comment-13046125
 ] 

Yonik Seeley commented on SOLR-2564:


bq. If the 16% slowdown is worst case

Actually, the worst case is twice as slow due to unneeded caching of a simple 
query.  Luckily this can be configured... but I still question the default, 
which can lead to surprisingly huge memory use (think up to a field cache entry 
or more allocated per-request).  One advantage to the dual-pass approach by 
default in the first place was avoiding surprisingly large memory usage by 
default (which can degrade less gracefully by causing OOM exceptions as people 
try to crank up the number of request threads).


 Integrating grouping module into Solr 4.0
 -

 Key: SOLR-2564
 URL: https://issues.apache.org/jira/browse/SOLR-2564
 Project: Solr
  Issue Type: Improvement
Reporter: Martijn van Groningen
Assignee: Martijn van Groningen
 Fix For: 4.0

 Attachments: LUCENE-2564.patch, SOLR-2564.patch, SOLR-2564.patch, 
 SOLR-2564.patch, SOLR-2564.patch, SOLR-2564.patch


 Since work on the grouping module is going well, I think it is time to wire 
 this up in Solr.
 Besides the current grouping features Solr provides, Solr will then also 
 support second pass caching and total count based on groups.




[jira] [Updated] (SOLR-2535) In Solr 3.2 and trunk the admin/file handler fails to show directory listings

2011-06-08 Thread Peter Wolanin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Wolanin updated SOLR-2535:


Affects Version/s: 3.2
  Summary: In Solr 3.2 and trunk the admin/file handler fails to 
show directory listings  (was: In Solr 3.1.0 the admin/file handler fails to 
show directory listings)

 In Solr 3.2 and trunk the admin/file handler fails to show directory listings
 -

 Key: SOLR-2535
 URL: https://issues.apache.org/jira/browse/SOLR-2535
 Project: Solr
  Issue Type: Bug
  Components: SearchComponents - other
Affects Versions: 3.1, 3.2, 4.0
 Environment: java 1.6, jetty
Reporter: Peter Wolanin
 Fix For: 3.3


 In Solr 1.4.1, going to the path solr/admin/file I see an XML-formatted 
 listing of the conf directory, like:
 {noformat}
 <response>
 <lst name="responseHeader"><int name="status">0</int><int name="QTime">1</int></lst>
 <lst name="files">
   <lst name="elevate.xml"><long name="size">1274</long><date name="modified">2011-03-06T20:42:54Z</date></lst>
   ...
 </lst>
 </response>
 {noformat}
 I can list the xslt sub-dir using solr/admin/files?file=/xslt
 In Solr 3.1.0, both of these fail with a 500 error:
 {noformat}
 HTTP ERROR 500
 Problem accessing /solr/admin/file/. Reason:
 did not find a CONTENT object
 java.io.IOException: did not find a CONTENT object
 {noformat}
 Looking at the code in class ShowFileRequestHandler, it seems like 3.1.0 
 should still handle directory listings if no file name is given, or if the 
 file is a directory, so I am filing this as a bug.




[jira] [Resolved] (SOLR-542) fl should be a multi-value param

2011-06-08 Thread Ryan McKinley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McKinley resolved SOLR-542.


   Resolution: Duplicate
Fix Version/s: 4.0

part of SOLR-2444

 fl should be a multi-value param
 --

 Key: SOLR-542
 URL: https://issues.apache.org/jira/browse/SOLR-542
 Project: Solr
  Issue Type: Improvement
  Components: search
 Environment: Java 6
 Linux (CentOS)
Reporter: Ezra Epstein
 Fix For: 4.0


 We've got the fq working in the appends section (lst/ element) of our 
 requestHandlers.  We'd like to add other attributes - in particular, the fl 
 attribute - so that, regardless of query, the user is ensured of getting some 
 minimum set of fields in the results.  Yet, when we add a setting for fl to 
 the appends section it has no effect.
 On a separate note, when a user specifies fl=score in the URL, the results 
 are those one should get for fl=*,score -- that is, all fields, not just the 
 score, are returned.




[jira] [Resolved] (SOLR-1566) Allow components to add fields to outgoing documents

2011-06-08 Thread Ryan McKinley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McKinley resolved SOLR-1566.
-

Resolution: Fixed
  Assignee: Ryan McKinley

This has been in trunk for a while -- any new problems should get their own 
JIRA issue

 Allow components to add fields to outgoing documents
 

 Key: SOLR-1566
 URL: https://issues.apache.org/jira/browse/SOLR-1566
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Noble Paul
Assignee: Ryan McKinley
 Fix For: 4.0

 Attachments: SOLR-1566-DocTransformer.patch, 
 SOLR-1566-DocTransformer.patch, SOLR-1566-DocTransformer.patch, 
 SOLR-1566-DocTransformer.patch, SOLR-1566-DocTransformer.patch, 
 SOLR-1566-DocTransformer.patch, SOLR-1566-PageTool.patch, 
 SOLR-1566-gsi.patch, SOLR-1566-rm.patch, SOLR-1566-rm.patch, 
 SOLR-1566-rm.patch, SOLR-1566-rm.patch, SOLR-1566-rm.patch, SOLR-1566.patch, 
 SOLR-1566.patch, SOLR-1566.patch, SOLR-1566.patch, SOLR-1566_parsing.patch


 Currently it is not possible for components to add fields to outgoing 
 documents which are not in the stored fields of the document.  This makes 
 it cumbersome to add computed fields/metadata.




[jira] [Resolved] (SOLR-1298) FunctionQuery results as pseudo-fields

2011-06-08 Thread Ryan McKinley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McKinley resolved SOLR-1298.
-

   Resolution: Fixed
Fix Version/s: (was: 3.3)
   4.0

This has been in trunk for a while -- any problems should get their own issue.

 FunctionQuery results as pseudo-fields
 --

 Key: SOLR-1298
 URL: https://issues.apache.org/jira/browse/SOLR-1298
 Project: Solr
  Issue Type: New Feature
Reporter: Grant Ingersoll
Assignee: Yonik Seeley
Priority: Minor
 Fix For: 4.0

 Attachments: SOLR-1298-FieldValues.patch, SOLR-1298.patch


 It would be helpful if the results of FunctionQueries could be added as 
 fields to a document. 
 Couple of options here:
 1. Run FunctionQuery as part of relevance score and add that piece to the 
 document
 2. Run the function (not really a query) during Document/Field retrieval




[jira] [Commented] (SOLR-2443) Solr DocValues should have objectVal(int doc)

2011-06-08 Thread Ryan McKinley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13046135#comment-13046135
 ] 

Ryan McKinley commented on SOLR-2443:
-

I think this has been committed, but the JIRA issue did not get updated?

 Solr DocValues should have objectVal(int doc)
 -

 Key: SOLR-2443
 URL: https://issues.apache.org/jira/browse/SOLR-2443
 Project: Solr
  Issue Type: Improvement
Reporter: Ryan McKinley
Assignee: Yonik Seeley
 Attachments: SOLR-2443-object-values.patch, SOLR-2443.patch


 DocValues has all versions of intVal, floatVal, strVal, but there is no 
 general way to know what the raw type is.
 We should add a general objectVal(int doc).




[jira] [Resolved] (SOLR-2443) Solr DocValues should have objectVal(int doc)

2011-06-08 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-2443.


   Resolution: Fixed
Fix Version/s: 4.0

Yep, has been committed for a while... my bad.

 Solr DocValues should have objectVal(int doc)
 -

 Key: SOLR-2443
 URL: https://issues.apache.org/jira/browse/SOLR-2443
 Project: Solr
  Issue Type: Improvement
Reporter: Ryan McKinley
Assignee: Yonik Seeley
 Fix For: 4.0

 Attachments: SOLR-2443-object-values.patch, SOLR-2443.patch


 DocValues has all versions of intVal, floatVal, strVal, but there is no 
 general way to know what the raw type is.
 We should add a general objectVal(int doc).




[jira] [Resolved] (SOLR-705) Distributed search should optionally return docID-shard map

2011-06-08 Thread Ryan McKinley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McKinley resolved SOLR-705.


   Resolution: Fixed
Fix Version/s: (was: 3.3)
   4.0

Added shard.url=xxx to distributed requests; the value is returned with a 
DocTransformer.

 Distributed search should optionally return docID-shard map
 

 Key: SOLR-705
 URL: https://issues.apache.org/jira/browse/SOLR-705
 Project: Solr
  Issue Type: Improvement
Affects Versions: 1.3
 Environment: all
Reporter: Brian Whitman
Assignee: Ryan McKinley
 Fix For: 4.0

 Attachments: SOLR-705.patch, SOLR-705.patch, SOLR-705.patch, 
 SOLR-705.patch, SOLR-705.patch, SOLR-705.patch


 SOLR-303 queries with shards parameters set need to return the docID-shard 
 mapping in the response. Without it, updating/deleting documents when the # 
 of shards is variable is hard. We currently set this with a special 
 requestHandler that filters /update and inserts the shard as a field in the 
 index but it would be better if the shard location came back in the query 
 response outside of the index.




[jira] [Commented] (LUCENE-3179) OpenBitSet.prevSetBit()

2011-06-08 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13046140#comment-13046140
 ] 

Paul Elschot commented on LUCENE-3179:
--

I did a bit of performance testing (sun java 1.6.0_xx, not the very latest one).

This is a typical output on my machine (the dummy can be ignored, it is only 
there to make sure that nothing is optimized away):
{noformat}
BitUtil nlz time: 5664 picosec/call, dummy: 11572915728
Long    nlz time: 8464 picosec/call, dummy: 7715277152
{noformat}

That means that the nlz code in the patch is definitely faster than 
Long.numberOfLeadingZeros for the test arguments used.
The test arguments are divided roughly evenly over the possible numbers of 
leading zero bits.
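The prevSetBit in the patch can be built on exactly this primitive: scan words downward and use nlz (here Long.numberOfLeadingZeros; the patch's faster table-based nlz would be a drop-in replacement) to find the highest set bit. A self-contained sketch of the idea, not the patch's actual code:

```java
public final class PrevSetBit {
  // Index of the highest set bit at or below 'index', or -1 if there is none.
  // Assumes index < bits.length * 64; bits uses the OpenBitSet word layout
  // (bit i lives in word i >> 6 at position i & 63).
  public static int prevSetBit(long[] bits, int index) {
    int wordIndex = index >> 6;
    int bitIndex = index & 63;
    // Keep only the bits at or below 'index' in the current word.
    long word = bits[wordIndex] & (-1L >>> (63 - bitIndex));
    while (true) {
      if (word != 0) {
        // 63 - nlz(word) is the position of the highest set bit in the word.
        return (wordIndex << 6) + 63 - Long.numberOfLeadingZeros(word);
      }
      if (wordIndex-- == 0) {
        return -1;  // scanned past word 0 without finding a set bit
      }
      word = bits[wordIndex];
    }
  }

  public static void main(String[] args) {
    long[] bits = {10L};  // bits 1 and 3 set
    System.out.println(prevSetBit(bits, 3));   // 3
    System.out.println(prevSetBit(bits, 2));   // 1
    System.out.println(prevSetBit(bits, 0));   // -1
  }
}
```

Since nlz is called once per non-empty word visited, the nlz speedup measured above translates directly into faster prevSetBit calls.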


 OpenBitSet.prevSetBit()
 ---

 Key: LUCENE-3179
 URL: https://issues.apache.org/jira/browse/LUCENE-3179
 Project: Lucene - Java
  Issue Type: Improvement
Reporter: Paul Elschot
Priority: Minor
 Fix For: 3.3

 Attachments: LUCENE-3179.patch


 Find a previous set bit in an OpenBitSet.
 Useful for parent testing in nested document query execution LUCENE-2454 .




[jira] [Updated] (LUCENE-3179) OpenBitSet.prevSetBit()

2011-06-08 Thread Paul Elschot (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Elschot updated LUCENE-3179:
-

Attachment: TestBitUtil.java

TestBitUtil.java as in the patch, extended with a testPerfNlz method.

 OpenBitSet.prevSetBit()
 ---

 Key: LUCENE-3179
 URL: https://issues.apache.org/jira/browse/LUCENE-3179
 Project: Lucene - Java
  Issue Type: Improvement
Reporter: Paul Elschot
Priority: Minor
 Fix For: 3.3

 Attachments: LUCENE-3179.patch, TestBitUtil.java


 Find a previous set bit in an OpenBitSet.
 Useful for parent testing in nested document query execution LUCENE-2454 .




[jira] [Resolved] (SOLR-388) Refactor ResponseWriters and Friends.

2011-06-08 Thread Ryan McKinley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McKinley resolved SOLR-388.


   Resolution: Duplicate
Fix Version/s: 4.0

The types of things this issue points to are taken care of in SOLR-1566.

 Refactor ResponseWriters and Friends.
 -

 Key: SOLR-388
 URL: https://issues.apache.org/jira/browse/SOLR-388
 Project: Solr
  Issue Type: Improvement
  Components: Response Writers
Affects Versions: 1.2
Reporter: Luke Lu
 Fix For: 4.0


 When developing custom request handlers, it's often necessary to create 
 corresponding response writers that extend existing ones. In our case, we 
 want to augment the result list (more attributes other than numFound and 
 maxScore, on-the-fly per-doc attributes that are not fields, etc.), only to 
 find JSONWriter and friends are private to the package. We could copy the 
 whole thing and modify it, but it wouldn't take advantage of recent fixes 
 like Yonik's FastWriter changes without tedious manual intervention. I hope 
 that we can *at least* extend it and override writeVal() to add a new 
 result type that calls writeMyType. 
 Ideally the ResponseWriter hierarchy could be rewritten to take advantage of 
 a double dispatching trick to get rid of the ugly "if something is instance of 
 someclass else ..." list, as it clearly doesn't scale well with the number of 
 types (_n_) and depth (_d_) of the writer hierarchy, as the complexity would 
 be O(_nd_), which is worse than the O(1) double dispatching mechanism. Some 
 pseudo code here:
 {code:title=SomeResponseWriter.java}
 // a list of overloaded write method
 public void write(SomeType t) {
   // implementation
 }
 {code}
 {code:title=ResponseWritable.java}
 // an interface for objects that support the scheme
 public interface ResponseWritable {
   public abstract void write(ResponseWriter writer);
 }
 {code}
 {code:title=SomeType.java}
 // Sometype needs to implement the ResponseWritable interface
 // to facilitate double dispatching
 public void write(ResponseWriter writer) {
   writer.write(this);
 }
 {code}
 So when adding a new MyType and MySomeResponseWriter, we only need to add 
 these two files without having to muck with the writeVal if-then-else list. 
 Note, you still need to use the if else list for builtin types and any types 
 that you can't modify in the write(Object) method. 
 {code:title=MyType.java}
 // implements the ResponseWritable interface
  public void write(ResponseWriter writer) {
   writer.write(this);
 }
 {code}
 {code:title=MySomeResponseWriter.java}
 //  only need to implement this method
 public void write(MyType t) {
   // implementation
 }
 {code}
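Pulled together, the double-dispatch scheme sketched in the pseudo code above can be written as one runnable file. The class names here are illustrative, not Solr's real writer hierarchy:

```java
// Illustrative double dispatch; names are hypothetical, not Solr's classes.
interface ResponseWriter {
    void write(Object o);   // fallback for builtin/unknown types
    void write(MyType t);   // one overload per writable type
}

// Types opt in to the scheme by implementing this interface.
interface ResponseWritable {
    void write(ResponseWriter writer);
}

class MyType implements ResponseWritable {
    final String value;
    MyType(String value) { this.value = value; }
    // First dispatch: virtual call on the runtime type of this object.
    public void write(ResponseWriter writer) {
        // Second dispatch: overload resolution picks write(MyType).
        writer.write(this);
    }
}

class MyResponseWriter implements ResponseWriter {
    final StringBuilder out = new StringBuilder();
    public void write(Object o) {
        if (o instanceof ResponseWritable) {
            ((ResponseWritable) o).write(this); // double-dispatch entry point
        } else {
            out.append(o);                      // builtin fallback
        }
    }
    public void write(MyType t) { out.append("myType:").append(t.value); }
}

public class DoubleDispatchDemo {
    public static void main(String[] args) {
        MyResponseWriter w = new MyResponseWriter();
        w.write((Object) new MyType("x")); // routed to write(MyType)
        w.write("plain");                  // routed to the Object fallback
        System.out.println(w.out);         // myType:xplain
    }
}
```

Adding a new type then means adding one ResponseWritable class and one overload, with no change to the central if-else list.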




[jira] [Commented] (SOLR-705) Distributed search should optionally return docID-shard map

2011-06-08 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13046144#comment-13046144
 ] 

Yonik Seeley commented on SOLR-705:
---

Something to keep in mind is the difference between logical shard and physical 
shard replica.
i.e. shard1 can be located at localhost:8983/solr/shard1 and 
localhost:7574/solr/shard1

Both pieces of info can be useful.

So with ryan's last commit, [shard] gets you something like 
localhost:8983/solr/shard1
We don't have to implement returning the other part now... but we should think 
about the naming.

In the code, I sometimes used slice to mean logical shard (a logical slice of 
the complete index), to avoid overloading shard... but I'm not sure that 
won't cause more confusion than it's worth.  So for a future logical shard 
name, perhaps [shard_id] ?



 Distributed search should optionally return docID-shard map
 

 Key: SOLR-705
 URL: https://issues.apache.org/jira/browse/SOLR-705
 Project: Solr
  Issue Type: Improvement
Affects Versions: 1.3
 Environment: all
Reporter: Brian Whitman
Assignee: Ryan McKinley
 Fix For: 4.0

 Attachments: SOLR-705.patch, SOLR-705.patch, SOLR-705.patch, 
 SOLR-705.patch, SOLR-705.patch, SOLR-705.patch


 SOLR-303 queries with shards parameters set need to return the docID-shard 
 mapping in the response. Without it, updating/deleting documents when the # 
 of shards is variable is hard. We currently set this with a special 
 requestHandler that filters /update and inserts the shard as a field in the 
 index but it would be better if the shard location came back in the query 
 response outside of the index.




[jira] [Issue Comment Edited] (LUCENE-3179) OpenBitSet.prevSetBit()

2011-06-08 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13046141#comment-13046141
 ] 

Paul Elschot edited comment on LUCENE-3179 at 6/8/11 7:22 PM:
--

TestBitUtil.java as in the patch and extended with a testPerfNlz method that 
gave the output above.

  was (Author: paul.elsc...@xs4all.nl):
TestBitUtil.java as the in patch and extended with a testPerfNlz method.
  
 OpenBitSet.prevSetBit()
 ---

 Key: LUCENE-3179
 URL: https://issues.apache.org/jira/browse/LUCENE-3179
 Project: Lucene - Java
  Issue Type: Improvement
Reporter: Paul Elschot
Priority: Minor
 Fix For: 3.3

 Attachments: LUCENE-3179.patch, TestBitUtil.java


 Find a previous set bit in an OpenBitSet.
 Useful for parent testing in nested document query execution LUCENE-2454 .




[jira] [Commented] (SOLR-2564) Integrating grouping module into Solr 4.0

2011-06-08 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13046150#comment-13046150
 ] 

Michael McCandless commented on SOLR-2564:
--

bq. Actually, the worst case is twice as slow due to unneeded caching of a 
simple query.

Sorry, what do you mean here?

bq. but I still question the default, which can lead to surprisingly huge 
memory use (think up to a field cache entry or more allocated per-request).

I agree; -1 is a dangerous default.

But I think caching should still default to on, just limited as a pctg
of the number of docs in the index.  Ie, by default we will cache the
result set if it's less than 20% (say) of total docs in your index.
Else we fallback to 2-pass.

I think this matches how Solr handles caching filters now?  Ie, filter
cache evicts by total filter count and not net MB right, I think?  So
that if you have more docs in your index you'll spending more RAM on
the caching...

Costly queries that return a smallish result set can see big gains
from the caching.
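The fallback policy described here could be sketched roughly like this; the class and method names (GroupingCachePolicy, shouldCache) are hypothetical, not Solr's actual API:

```java
// Sketch of the proposed caching policy: keep the first-pass result set
// cached only if it is small relative to the index. Names are hypothetical.
class GroupingCachePolicy {
    private final int maxCachePercent; // e.g. 20 means "cache up to 20% of maxDoc"

    GroupingCachePolicy(int maxCachePercent) {
        this.maxCachePercent = maxCachePercent;
    }

    /** true  = keep the collected docs cached for the second pass;
     *  false = discard them and re-execute the query (two-pass fallback). */
    boolean shouldCache(int resultSize, int maxDoc) {
        if (maxCachePercent <= 0) return false; // caching disabled
        long threshold = (long) maxDoc * maxCachePercent / 100;
        return resultSize <= threshold;
    }
}
```

With a 20% default, a costly query returning a smallish result set gets cached, while a match-most query falls back to two passes instead of pinning a field-cache-sized chunk of RAM per request.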


 Integrating grouping module into Solr 4.0
 -

 Key: SOLR-2564
 URL: https://issues.apache.org/jira/browse/SOLR-2564
 Project: Solr
  Issue Type: Improvement
Reporter: Martijn van Groningen
Assignee: Martijn van Groningen
 Fix For: 4.0

 Attachments: LUCENE-2564.patch, SOLR-2564.patch, SOLR-2564.patch, 
 SOLR-2564.patch, SOLR-2564.patch, SOLR-2564.patch


 Since work on grouping module is going well. I think it is time to wire this 
 up in Solr.
 Besides the current grouping features Solr provides, Solr will then also 
 support second pass caching and total count based on groups.




[jira] [Commented] (SOLR-2444) Update fl syntax to support: pseudo fields, AS, transformers, and wildcards

2011-06-08 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13046149#comment-13046149
 ] 

Yonik Seeley commented on SOLR-2444:


Yep, we get the full power/familiarity of local params, including param 
substitution (e.g. myvar=$other_request_param)

bq. I updated Transformers to take a Map&lt;String,String&gt; that is parsed using 
the LocalParams syntax.

In the template parsing code I committed first, I had used SolrParams... one 
reason being that for some time I've thought that we might want multi-valued 
parameters in localParams.  If back compat of transformers isn't a big deal, we 
can change Map&lt;String,String&gt; to Map&lt;String,String[]&gt; later... but it seems 
like the additional parsing logic of SolrParams might add enough value to use 
that instead of a bare Map anyway?
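The single- vs. multi-valued trade-off can be illustrated with a toy params wrapper (hypothetical names, not Solr's actual SolrParams API):

```java
import java.util.*;

// Toy illustration of single- vs multi-valued parameter access;
// not Solr's real SolrParams API.
class ToyParams {
    private final Map<String, String[]> map = new LinkedHashMap<>();

    void add(String name, String value) {
        String[] old = map.getOrDefault(name, new String[0]);
        String[] now = Arrays.copyOf(old, old.length + 1);
        now[old.length] = value;
        map.put(name, now);
    }

    /** First value only -- what a plain Map<String,String> would give you. */
    String get(String name) {
        String[] v = map.get(name);
        return v == null ? null : v[0];
    }

    /** All values -- the part a Map<String,String> cannot express. */
    String[] getParams(String name) {
        return map.get(name);
    }
}
```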

 Update fl syntax to support: pseudo fields, AS, transformers, and wildcards
 ---

 Key: SOLR-2444
 URL: https://issues.apache.org/jira/browse/SOLR-2444
 Project: Solr
  Issue Type: New Feature
Reporter: Ryan McKinley
 Attachments: SOLR-2444-fl-parsing.patch, SOLR-2444-fl-parsing.patch


 The ReturnFields parsing needs to be improved.  It should also support 
 wildcards




[jira] [Commented] (SOLR-705) Distributed search should optionally return docID-shard map

2011-06-08 Thread Ryan McKinley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13046154#comment-13046154
 ] 

Ryan McKinley commented on SOLR-705:


interesting.  What defines a 'slice'?  Could it be a system property, or 
something in SolrConfig?

If I understand what you are saying, it would be a label on cores that are 
logically identical.


 Distributed search should optionally return docID-shard map
 

 Key: SOLR-705
 URL: https://issues.apache.org/jira/browse/SOLR-705
 Project: Solr
  Issue Type: Improvement
Affects Versions: 1.3
 Environment: all
Reporter: Brian Whitman
Assignee: Ryan McKinley
 Fix For: 4.0

 Attachments: SOLR-705.patch, SOLR-705.patch, SOLR-705.patch, 
 SOLR-705.patch, SOLR-705.patch, SOLR-705.patch


 SOLR-303 queries with shards parameters set need to return the docID-shard 
 mapping in the response. Without it, updating/deleting documents when the # 
 of shards is variable is hard. We currently set this with a special 
 requestHandler that filters /update and inserts the shard as a field in the 
 index but it would be better if the shard location came back in the query 
 response outside of the index.




[jira] [Commented] (SOLR-2417) Allow explain info directly to response documents

2011-06-08 Thread Ryan McKinley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13046158#comment-13046158
 ] 

Ryan McKinley commented on SOLR-2417:
-

Syntax changed to:
{code}
?fl=id,[explain]
{code}
{code}
?fl=id,[explain style=text]
{code}
{code}
?fl=id,[explain style=nl]
{code}

 Allow explain info directly to response documents
 -

 Key: SOLR-2417
 URL: https://issues.apache.org/jira/browse/SOLR-2417
 Project: Solr
  Issue Type: New Feature
Reporter: Ryan McKinley
Assignee: Ryan McKinley
Priority: Minor
 Fix For: 4.0


 Currently explain information in displayed in the debugInfo part of the 
 response.  This requires clients to build a Map and link results later if 
 they want them displayed together.  It also does not nicely allow for 
 multiple queries in one result.
 As part of SOLR-1566, we can add the explain info directly to the result




[jira] [Commented] (SOLR-2444) Update fl syntax to support: pseudo fields, AS, transformers, and wildcards

2011-06-08 Thread Ryan McKinley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13046161#comment-13046161
 ] 

Ryan McKinley commented on SOLR-2444:
-

I used Map&lt;String,String&gt; because I figured most Transformers won't use the 
params anyway, so it is less work -- I don't feel strongly either way.  

I'll change it to SolrParams

 Update fl syntax to support: pseudo fields, AS, transformers, and wildcards
 ---

 Key: SOLR-2444
 URL: https://issues.apache.org/jira/browse/SOLR-2444
 Project: Solr
  Issue Type: New Feature
Reporter: Ryan McKinley
 Attachments: SOLR-2444-fl-parsing.patch, SOLR-2444-fl-parsing.patch


 The ReturnFields parsing needs to be improved.  It should also support 
 wildcards




[jira] [Commented] (SOLR-705) Distributed search should optionally return docID-shard map

2011-06-08 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13046164#comment-13046164
 ] 

Yonik Seeley commented on SOLR-705:
---

bq. interesting. What defines a 'slice'? Could it be a system property, or 
something in SolrConfig?

It's well defined within SolrCloud... you can actually add 
shards=shard1,shard2 and solr will do the mapping from those logical shards to 
physical shards (via cluster state in zookeeper) and do a load-balanced request 
across them.


 Distributed search should optionally return docID-shard map
 

 Key: SOLR-705
 URL: https://issues.apache.org/jira/browse/SOLR-705
 Project: Solr
  Issue Type: Improvement
Affects Versions: 1.3
 Environment: all
Reporter: Brian Whitman
Assignee: Ryan McKinley
 Fix For: 4.0

 Attachments: SOLR-705.patch, SOLR-705.patch, SOLR-705.patch, 
 SOLR-705.patch, SOLR-705.patch, SOLR-705.patch


 SOLR-303 queries with shards parameters set need to return the docID-shard 
 mapping in the response. Without it, updating/deleting documents when the # 
 of shards is variable is hard. We currently set this with a special 
 requestHandler that filters /update and inserts the shard as a field in the 
 index but it would be better if the shard location came back in the query 
 response outside of the index.




[jira] [Created] (LUCENE-3184) add LuceneTestCase.rarely()/LuceneTestCase.atLeast()

2011-06-08 Thread Robert Muir (JIRA)
add LuceneTestCase.rarely()/LuceneTestCase.atLeast()


 Key: LUCENE-3184
 URL: https://issues.apache.org/jira/browse/LUCENE-3184
 Project: Lucene - Java
  Issue Type: Test
Reporter: Robert Muir
 Fix For: 3.3, 4.0


in LUCENE-3175, the tests were sped up a lot by using reasonable number of 
iterations normally, but cranking up for NIGHTLY.
we also do crazy things more 'rarely' for normal builds (e.g. simpletext, 
payloads, crazy merge params, etc)
also, we found some bugs by doing this, because in general our parameters are 
too fixed.

however, it made the code look messy... I propose some new methods:
instead of some crazy code in your test like:
{code}
int numdocs = (TEST_NIGHTLY ? 1000 : 100) * RANDOM_MULTIPLIER;
{code}

you use:
{code}
int numdocs = atLeast(100);
{code}

this will apply the multiplier, also factor in nightly, and finally add some 
random fudge... so e.g. in local runs it's sometimes 127 docs, sometimes 113 
docs, etc.

additionally instead of code like:
{code}
if ((TEST_NIGHTLY && random.nextBoolean()) || (random.nextInt(20) == 17)) {
{code}

you do
{code}
if (rarely()) {
{code}

which applies NIGHTLY and also the multiplier (logarithmic growth).
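A minimal sketch of what such helpers might look like (illustrative only; the constants, system property names, and the committed implementation may well differ):

```java
import java.util.Random;

// Illustrative sketch of atLeast()/rarely(); the real LuceneTestCase
// implementation may differ in constants and details.
class TestScaling {
    static final Random random = new Random();
    static final boolean TEST_NIGHTLY = Boolean.getBoolean("tests.nightly");
    static final int RANDOM_MULTIPLIER = Integer.getInteger("tests.multiplier", 1);

    /** Returns at least i, scaled for nightly and the multiplier,
     *  plus some random fudge on top. */
    static int atLeast(int i) {
        int min = (TEST_NIGHTLY ? 5 * i : i) * RANDOM_MULTIPLIER;
        int max = min + min / 2; // up to +50% random fudge
        return min + random.nextInt(max - min + 1);
    }

    /** Rarely true for normal builds; more often for nightly builds
     *  or a higher multiplier (logarithmic growth). */
    static boolean rarely() {
        int p = TEST_NIGHTLY ? 10 : 5;                // base percentage
        p += (int) (p * Math.log(RANDOM_MULTIPLIER)); // grows logarithmically
        return random.nextInt(100) < Math.min(50, p);
    }
}
```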





[jira] [Commented] (LUCENE-3179) OpenBitSet.prevSetBit()

2011-06-08 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13046169#comment-13046169
 ] 

Dawid Weiss commented on LUCENE-3179:
-

You're not providing the required context -- what exact JVM and what exact 
processor did you test on? I've just run your test on my machine with the 
following result:

BitUtil nlz time: 3109 picosec/call, dummy: 20252602524
Long nlz time: 1279 picosec/call, dummy: 48220482200

I'm guessing yours didn't use the intrinsic inline at all (for whatever 
reason). My machine is a fairly old Intel i7 860 running 64-bit server HotSpot 
1.6.0_24-b07.



 OpenBitSet.prevSetBit()
 ---

 Key: LUCENE-3179
 URL: https://issues.apache.org/jira/browse/LUCENE-3179
 Project: Lucene - Java
  Issue Type: Improvement
Reporter: Paul Elschot
Priority: Minor
 Fix For: 3.3

 Attachments: LUCENE-3179.patch, TestBitUtil.java


 Find a previous set bit in an OpenBitSet.
 Useful for parent testing in nested document query execution LUCENE-2454 .




[jira] [Commented] (SOLR-705) Distributed search should optionally return docID-shard map

2011-06-08 Thread Ryan McKinley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13046171#comment-13046171
 ] 

Ryan McKinley commented on SOLR-705:


OK, that suggests that the parameter should be 'shard.id' rather than 
'shard.url' -- since in SolrCloud it is not a url.  Maybe we should also send 
shard.url so that we do know the URL even within SolrCloud.  Then we should add 
another transformer for [shard_url]  
 

 Distributed search should optionally return docID-shard map
 

 Key: SOLR-705
 URL: https://issues.apache.org/jira/browse/SOLR-705
 Project: Solr
  Issue Type: Improvement
Affects Versions: 1.3
 Environment: all
Reporter: Brian Whitman
Assignee: Ryan McKinley
 Fix For: 4.0

 Attachments: SOLR-705.patch, SOLR-705.patch, SOLR-705.patch, 
 SOLR-705.patch, SOLR-705.patch, SOLR-705.patch


 SOLR-303 queries with shards parameters set need to return the docID-shard 
 mapping in the response. Without it, updating/deleting documents when the # 
 of shards is variable is hard. We currently set this with a special 
 requestHandler that filters /update and inserts the shard as a field in the 
 index but it would be better if the shard location came back in the query 
 response outside of the index.




[jira] [Commented] (LUCENE-3184) add LuceneTestCase.rarely()/LuceneTestCase.atLeast()

2011-06-08 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13046172#comment-13046172
 ] 

Michael McCandless commented on LUCENE-3184:


Looks great!  I love the LTC.rarely, usually, atLeast methods :)

 add LuceneTestCase.rarely()/LuceneTestCase.atLeast()
 

 Key: LUCENE-3184
 URL: https://issues.apache.org/jira/browse/LUCENE-3184
 Project: Lucene - Java
  Issue Type: Test
Reporter: Robert Muir
 Fix For: 3.3, 4.0

 Attachments: LUCENE-3184.patch


 in LUCENE-3175, the tests were sped up a lot by using reasonable number of 
 iterations normally, but cranking up for NIGHTLY.
 we also do crazy things more 'rarely' for normal builds (e.g. simpletext, 
 payloads, crazy merge params, etc)
 also, we found some bugs by doing this, because in general our parameters are 
 too fixed.
 however, it made the code look messy... I propose some new methods:
 instead of some crazy code in your test like:
 {code}
 int numdocs = (TEST_NIGHTLY ? 1000 : 100) * RANDOM_MULTIPLIER;
 {code}
 you use:
 {code}
 int numdocs = atLeast(100);
 {code}
 this will apply the multiplier, also factor in nightly, and finally add some 
 random fudge... so e.g. in local runs its sometimes 127 docs, sometimes 113 
 docs, etc.
 additionally instead of code like:
 {code}
 if ((TEST_NIGHTLY && random.nextBoolean()) || (random.nextInt(20) == 17)) {
 {code}
 you do
 {code}
 if (rarely()) {
 {code}
 which applies NIGHTLY and also the multiplier (logarithmic growth).




[jira] [Updated] (LUCENE-3184) add LuceneTestCase.rarely()/LuceneTestCase.atLeast()

2011-06-08 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-3184:


Attachment: LUCENE-3184.patch

here's a patch, also includes some random speedups to some of these tests.

 add LuceneTestCase.rarely()/LuceneTestCase.atLeast()
 

 Key: LUCENE-3184
 URL: https://issues.apache.org/jira/browse/LUCENE-3184
 Project: Lucene - Java
  Issue Type: Test
Reporter: Robert Muir
 Fix For: 3.3, 4.0

 Attachments: LUCENE-3184.patch


 in LUCENE-3175, the tests were sped up a lot by using reasonable number of 
 iterations normally, but cranking up for NIGHTLY.
 we also do crazy things more 'rarely' for normal builds (e.g. simpletext, 
 payloads, crazy merge params, etc)
 also, we found some bugs by doing this, because in general our parameters are 
 too fixed.
 however, it made the code look messy... I propose some new methods:
 instead of some crazy code in your test like:
 {code}
 int numdocs = (TEST_NIGHTLY ? 1000 : 100) * RANDOM_MULTIPLIER;
 {code}
 you use:
 {code}
 int numdocs = atLeast(100);
 {code}
 this will apply the multiplier, also factor in nightly, and finally add some 
 random fudge... so e.g. in local runs its sometimes 127 docs, sometimes 113 
 docs, etc.
 additionally instead of code like:
 {code}
 if ((TEST_NIGHTLY && random.nextBoolean()) || (random.nextInt(20) == 17)) {
 {code}
 you do
 {code}
 if (rarely()) {
 {code}
 which applies NIGHTLY and also the multiplier (logarithmic growth).




[jira] [Commented] (SOLR-2444) Update fl syntax to support: pseudo fields, AS, transformers, and wildcards

2011-06-08 Thread Ryan McKinley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13046174#comment-13046174
 ] 

Ryan McKinley commented on SOLR-2444:
-

changed in r1133534

 Update fl syntax to support: pseudo fields, AS, transformers, and wildcards
 ---

 Key: SOLR-2444
 URL: https://issues.apache.org/jira/browse/SOLR-2444
 Project: Solr
  Issue Type: New Feature
Reporter: Ryan McKinley
 Attachments: SOLR-2444-fl-parsing.patch, SOLR-2444-fl-parsing.patch


 The ReturnFields parsing needs to be improved.  It should also support 
 wildcards




[jira] [Commented] (SOLR-1804) Upgrade Carrot2 to 3.2.0

2011-06-08 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13046188#comment-13046188
 ] 

David Smiley commented on SOLR-1804:


Quick question: why is Guava in solr/lib instead of solr/contrib/clustering/lib?
I did a search on trunk for uses of Guava and it is strictly limited to this
contrib module. Does placement of this lib here signal that use of Guava in
other parts of Solr is okay? (Guava is pretty cool, so that would be nice.)

 Upgrade Carrot2 to 3.2.0
 

 Key: SOLR-1804
 URL: https://issues.apache.org/jira/browse/SOLR-1804
 Project: Solr
  Issue Type: Improvement
  Components: contrib - Clustering
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Fix For: 3.1, 4.0

 Attachments: SOLR-1804-carrot2-3.4.0-dev-trunk.patch, 
 SOLR-1804-carrot2-3.4.0-dev.patch, SOLR-1804-carrot2-3.4.0-libs.zip, 
 SOLR-1804.patch, carrot2-core-3.4.0-jdk1.5.jar


 http://project.carrot2.org/release-3.2.0-notes.html
 Carrot2 is now LGPL free, which means we should be able to bundle the binary!




[jira] [Created] (LUCENE-3185) NRTCachingDirectory.deleteFile always throws exception

2011-06-08 Thread Michael McCandless (JIRA)
NRTCachingDirectory.deleteFile always throws exception
--

 Key: LUCENE-3185
 URL: https://issues.apache.org/jira/browse/LUCENE-3185
 Project: Lucene - Java
  Issue Type: Bug
  Components: modules/other
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 3.3, 4.0


Silly bug.




[jira] [Updated] (LUCENE-3185) NRTCachingDirectory.deleteFile always throws exception

2011-06-08 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-3185:
---

Attachment: LUCENE-3185.patch

Patch.

 NRTCachingDirectory.deleteFile always throws exception
 --

 Key: LUCENE-3185
 URL: https://issues.apache.org/jira/browse/LUCENE-3185
 Project: Lucene - Java
  Issue Type: Bug
  Components: modules/other
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 3.3, 4.0

 Attachments: LUCENE-3185.patch


 Silly bug.




[jira] [Commented] (SOLR-1804) Upgrade Carrot2 to 3.2.0

2011-06-08 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13046193#comment-13046193
 ] 

Yonik Seeley commented on SOLR-1804:


IIRC, Noble intended to use it and moved it to lib (see SOLR-1707 for one 
potential use).
IMO, its use in other parts of Solr is fine (just don't automatically assume 
it's faster/better ;-)



 Upgrade Carrot2 to 3.2.0
 

 Key: SOLR-1804
 URL: https://issues.apache.org/jira/browse/SOLR-1804
 Project: Solr
  Issue Type: Improvement
  Components: contrib - Clustering
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Fix For: 3.1, 4.0

 Attachments: SOLR-1804-carrot2-3.4.0-dev-trunk.patch, 
 SOLR-1804-carrot2-3.4.0-dev.patch, SOLR-1804-carrot2-3.4.0-libs.zip, 
 SOLR-1804.patch, carrot2-core-3.4.0-jdk1.5.jar


 http://project.carrot2.org/release-3.2.0-notes.html
 Carrot2 is now LGPL free, which means we should be able to bundle the binary!




[jira] [Commented] (SOLR-1804) Upgrade Carrot2 to 3.2.0

2011-06-08 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13046201#comment-13046201
 ] 

Dawid Weiss commented on SOLR-1804:
---

I agree with Yonik -- Guava is compact and neat to use and we use it all the 
time, but I'd be careful with automatic replacement of certain constructs in 
performance-critical loops. It's well written, but certain methods sacrifice 
some performance for code beauty.

 Upgrade Carrot2 to 3.2.0
 

 Key: SOLR-1804
 URL: https://issues.apache.org/jira/browse/SOLR-1804
 Project: Solr
  Issue Type: Improvement
  Components: contrib - Clustering
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Fix For: 3.1, 4.0

 Attachments: SOLR-1804-carrot2-3.4.0-dev-trunk.patch, 
 SOLR-1804-carrot2-3.4.0-dev.patch, SOLR-1804-carrot2-3.4.0-libs.zip, 
 SOLR-1804.patch, carrot2-core-3.4.0-jdk1.5.jar


 http://project.carrot2.org/release-3.2.0-notes.html
 Carrot2 is now LGPL free, which means we should be able to bundle the binary!




[jira] [Commented] (SOLR-1707) Use google collections immutable collections instead of Collections.unmodifiable**

2011-06-08 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13046206#comment-13046206
 ] 

David Smiley commented on SOLR-1707:


(The priority of this issue should be trivial or at least minor, not major)
Looking at the patch, it doesn't appear to be using Guava in 
performance-critical parts. I happen to like Guava API better, especially since 
it's updated for Java 5.

 Use google collections immutable collections instead of 
 Collections.unmodifiable**
 --

 Key: SOLR-1707
 URL: https://issues.apache.org/jira/browse/SOLR-1707
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 3.3

 Attachments: SOLR-1707.patch, TestPerf.java


 google collections offer true immutability and more memory efficiency




[jira] [Commented] (SOLR-1804) Upgrade Carrot2 to 3.2.0

2011-06-08 Thread Steven Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13046210#comment-13046210
 ] 

Steven Rowe commented on SOLR-1804:
---

FYI, Dawid's SOLR-2378 introduced use of Guava outside of the clustering 
contrib.

 Upgrade Carrot2 to 3.2.0
 

 Key: SOLR-1804
 URL: https://issues.apache.org/jira/browse/SOLR-1804
 Project: Solr
  Issue Type: Improvement
  Components: contrib - Clustering
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Fix For: 3.1, 4.0

 Attachments: SOLR-1804-carrot2-3.4.0-dev-trunk.patch, 
 SOLR-1804-carrot2-3.4.0-dev.patch, SOLR-1804-carrot2-3.4.0-libs.zip, 
 SOLR-1804.patch, carrot2-core-3.4.0-jdk1.5.jar


 http://project.carrot2.org/release-3.2.0-notes.html
 Carrot2 is now LGPL free, which means we should be able to bundle the binary!




[jira] [Updated] (LUCENE-3185) NRTCachingDirectory.deleteFile always throws exception

2011-06-08 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-3185:
---

Attachment: LUCENE-3185.patch

New patch, also fixes that we were not overriding the set/getLF methods.

With this patch, all Solr+Lucene tests pass if I use this dir wrapping a RAMDir.

Someday, hopefully, we can have our tests also randomly swap in impls from 
contrib/modules...

 NRTCachingDirectory.deleteFile always throws exception
 --

 Key: LUCENE-3185
 URL: https://issues.apache.org/jira/browse/LUCENE-3185
 Project: Lucene - Java
  Issue Type: Bug
  Components: modules/other
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 3.3, 4.0

 Attachments: LUCENE-3185.patch, LUCENE-3185.patch


 Silly bug.




[jira] [Commented] (SOLR-1804) Upgrade Carrot2 to 3.2.0

2011-06-08 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13046215#comment-13046215
 ] 

Dawid Weiss commented on SOLR-1804:
---

I honestly don't remember what it was I used Guava for in FSTLookup, but it's 
most likely the generic-less constructors for Lists or Maps... nothing fancy. 
And I think Robert might have removed the dependency when he moved FSTLookup to 
a module.

 Upgrade Carrot2 to 3.2.0
 

 Key: SOLR-1804
 URL: https://issues.apache.org/jira/browse/SOLR-1804
 Project: Solr
  Issue Type: Improvement
  Components: contrib - Clustering
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Fix For: 3.1, 4.0

 Attachments: SOLR-1804-carrot2-3.4.0-dev-trunk.patch, 
 SOLR-1804-carrot2-3.4.0-dev.patch, SOLR-1804-carrot2-3.4.0-libs.zip, 
 SOLR-1804.patch, carrot2-core-3.4.0-jdk1.5.jar


 http://project.carrot2.org/release-3.2.0-notes.html
 Carrot2 is now LGPL free, which means we should be able to bundle the binary!




[jira] [Resolved] (LUCENE-3185) NRTCachingDirectory.deleteFile always throws exception

2011-06-08 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-3185.


Resolution: Fixed

 NRTCachingDirectory.deleteFile always throws exception
 --

 Key: LUCENE-3185
 URL: https://issues.apache.org/jira/browse/LUCENE-3185
 Project: Lucene - Java
  Issue Type: Bug
  Components: modules/other
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 3.3, 4.0

 Attachments: LUCENE-3185.patch, LUCENE-3185.patch


 Silly bug.




[jira] [Assigned] (LUCENE-3182) TestAddIndexes reproducible test failure on trunk

2011-06-08 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless reassigned LUCENE-3182:
--

Assignee: Michael McCandless

 TestAddIndexes reproducible test failure on trunk
 -

 Key: LUCENE-3182
 URL: https://issues.apache.org/jira/browse/LUCENE-3182
 Project: Lucene - Java
  Issue Type: Bug
Reporter: selckin
Assignee: Michael McCandless

 trunk: r1133385
 {code}
 [junit] Testsuite: org.apache.lucene.index.TestAddIndexes
 [junit] Tests run: 2843, Failures: 1, Errors: 0, Time elapsed: 137.121 sec
 [junit]
 [junit] - Standard Output ---
 [junit] java.io.FileNotFoundException: _cy.fdx
 [junit] at 
 org.apache.lucene.store.RAMDirectory.fileLength(RAMDirectory.java:121)
 [junit] at 
 org.apache.lucene.store.MockDirectoryWrapper.fileLength(MockDirectoryWrapper.java:606)
 [junit] at 
 org.apache.lucene.index.SegmentInfo.sizeInBytes(SegmentInfo.java:294)
 [junit] at 
 org.apache.lucene.index.TieredMergePolicy.size(TieredMergePolicy.java:633)
 [junit] at 
 org.apache.lucene.index.TieredMergePolicy.useCompoundFile(TieredMergePolicy.java:611)
 [junit] at 
 org.apache.lucene.index.IndexWriter.addIndexes(IndexWriter.java:2459)
 [junit] at 
 org.apache.lucene.index.TestAddIndexes$CommitAndAddIndexes3.doBody(TestAddIndexes.java:847)
 [junit] at 
 org.apache.lucene.index.TestAddIndexes$RunAddIndexesThreads$1.run(TestAddIndexes.java:675)
 [junit] java.io.FileNotFoundException: _cx.fdx
 [junit] at 
 org.apache.lucene.store.RAMDirectory.fileLength(RAMDirectory.java:121)
 [junit] at 
 org.apache.lucene.store.MockDirectoryWrapper.fileLength(MockDirectoryWrapper.java:606)
 [junit] at 
 org.apache.lucene.index.SegmentInfo.sizeInBytes(SegmentInfo.java:294)
 [junit] at 
 org.apache.lucene.index.TieredMergePolicy.size(TieredMergePolicy.java:633)
 [junit] at 
 org.apache.lucene.index.TieredMergePolicy.useCompoundFile(TieredMergePolicy.java:611)
 [junit] at 
 org.apache.lucene.index.IndexWriter.addIndexes(IndexWriter.java:2459)
 [junit] at 
 org.apache.lucene.index.TestAddIndexes$CommitAndAddIndexes3.doBody(TestAddIndexes.java:847)
 [junit] at 
 org.apache.lucene.index.TestAddIndexes$RunAddIndexesThreads$1.run(TestAddIndexes.java:675)
 [junit] -  ---
 [junit] - Standard Error -
 [junit] NOTE: reproduce with: ant test -Dtestcase=TestAddIndexes 
 -Dtestmethod=testAddIndexesWithRollback 
 -Dtests.seed=9026722750295014952:2645762923088581043 -Dtests.multiplier=3
 [junit] NOTE: test params are: codec=RandomCodecProvider: {id=SimpleText, 
 content=SimpleText, d=MockRandom, c=SimpleText}, locale=fr, 
 timezone=Africa/Kigali
 [junit] NOTE: all tests run in this JVM:
 [junit] [TestAddIndexes]
 [junit] NOTE: Linux 2.6.39-gentoo amd64/Sun Microsystems Inc. 1.6.0_25 
 (64-bit)/cpus=8,threads=1,free=68050392,total=446234624
 [junit] -  ---
 [junit] Testcase: 
 testAddIndexesWithRollback(org.apache.lucene.index.TestAddIndexes):   
 FAILED
 [junit]
 [junit] junit.framework.AssertionFailedError:
 [junit] at 
 org.apache.lucene.index.TestAddIndexes.testAddIndexesWithRollback(TestAddIndexes.java:932)
 [junit] at 
 org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1362)
 [junit] at 
 org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1280)
 [junit]
 [junit]
 [junit] Test org.apache.lucene.index.TestAddIndexes FAILED
 {code}
 Fails randomly in my while(1) test run, and fails after a few minutes of running: 
 {code}
 ant test -Dtestcase=TestAddIndexes 
 -Dtests.seed=9026722750295014952:2645762923088581043 -Dtests.multiplier=3 
 -Dtests.iter=200 -Dtests.iter.min=1
 {code}




[jira] [Updated] (LUCENE-3182) TestAddIndexes reproducible test failure on trunk

2011-06-08 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-3182:
---

Attachment: LUCENE-3182.patch

Patch.

There was one test bug here (the test wasn't ignoring a FNFE, which can happen 
if you close(false) an IW while other threads are still doing stuff), but there 
were also spooky cases that could in fact corrupt your index!!
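The test-bug half of the fix follows a common pattern that can be sketched in isolation (all names here are illustrative, this is not the actual Lucene test code): once close(false) has aborted the writer, a concurrent addIndexes-style task may hit FileNotFoundException for segment files that were already deleted, and the test must treat that as benign rather than as a failure.

```java
import java.io.FileNotFoundException;
import java.util.concurrent.atomic.AtomicBoolean;

public class AbortTolerantTask {
    // Set when the writer has been aborted (simulates IndexWriter.close(false)).
    static final AtomicBoolean aborted = new AtomicBoolean(false);

    // A worker that tolerates FNFE only after the abort flag is set.
    static void addIndexesTask() {
        try {
            openSegmentFile("_cy.fdx");
        } catch (FileNotFoundException e) {
            if (!aborted.get()) {
                throw new RuntimeException(e); // a real failure: file vanished with no abort
            }
            // Expected after close(false): the aborted writer deleted its files. Ignore.
        }
    }

    // Simulates opening a segment file that the aborted writer already deleted.
    static void openSegmentFile(String name) throws FileNotFoundException {
        if (aborted.get()) {
            throw new FileNotFoundException(name);
        }
    }

    public static void main(String[] args) {
        aborted.set(true);   // simulate close(false) racing with the worker
        addIndexesTask();    // the FNFE is swallowed, not propagated
        System.out.println("ok");
    }
}
```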

 TestAddIndexes reproducible test failure on trunk
 -

 Key: LUCENE-3182
 URL: https://issues.apache.org/jira/browse/LUCENE-3182
 Project: Lucene - Java
  Issue Type: Bug
Reporter: selckin
Assignee: Michael McCandless
 Attachments: LUCENE-3182.patch


 trunk: r1133385
 {code}
 [junit] Testsuite: org.apache.lucene.index.TestAddIndexes
 [junit] Tests run: 2843, Failures: 1, Errors: 0, Time elapsed: 137.121 sec
 [junit]
 [junit] - Standard Output ---
 [junit] java.io.FileNotFoundException: _cy.fdx
 [junit] at 
 org.apache.lucene.store.RAMDirectory.fileLength(RAMDirectory.java:121)
 [junit] at 
 org.apache.lucene.store.MockDirectoryWrapper.fileLength(MockDirectoryWrapper.java:606)
 [junit] at 
 org.apache.lucene.index.SegmentInfo.sizeInBytes(SegmentInfo.java:294)
 [junit] at 
 org.apache.lucene.index.TieredMergePolicy.size(TieredMergePolicy.java:633)
 [junit] at 
 org.apache.lucene.index.TieredMergePolicy.useCompoundFile(TieredMergePolicy.java:611)
 [junit] at 
 org.apache.lucene.index.IndexWriter.addIndexes(IndexWriter.java:2459)
 [junit] at 
 org.apache.lucene.index.TestAddIndexes$CommitAndAddIndexes3.doBody(TestAddIndexes.java:847)
 [junit] at 
 org.apache.lucene.index.TestAddIndexes$RunAddIndexesThreads$1.run(TestAddIndexes.java:675)
 [junit] java.io.FileNotFoundException: _cx.fdx
 [junit] at 
 org.apache.lucene.store.RAMDirectory.fileLength(RAMDirectory.java:121)
 [junit] at 
 org.apache.lucene.store.MockDirectoryWrapper.fileLength(MockDirectoryWrapper.java:606)
 [junit] at 
 org.apache.lucene.index.SegmentInfo.sizeInBytes(SegmentInfo.java:294)
 [junit] at 
 org.apache.lucene.index.TieredMergePolicy.size(TieredMergePolicy.java:633)
 [junit] at 
 org.apache.lucene.index.TieredMergePolicy.useCompoundFile(TieredMergePolicy.java:611)
 [junit] at 
 org.apache.lucene.index.IndexWriter.addIndexes(IndexWriter.java:2459)
 [junit] at 
 org.apache.lucene.index.TestAddIndexes$CommitAndAddIndexes3.doBody(TestAddIndexes.java:847)
 [junit] at 
 org.apache.lucene.index.TestAddIndexes$RunAddIndexesThreads$1.run(TestAddIndexes.java:675)
 [junit] -  ---
 [junit] - Standard Error -
 [junit] NOTE: reproduce with: ant test -Dtestcase=TestAddIndexes 
 -Dtestmethod=testAddIndexesWithRollback 
 -Dtests.seed=9026722750295014952:2645762923088581043 -Dtests.multiplier=3
 [junit] NOTE: test params are: codec=RandomCodecProvider: {id=SimpleText, 
 content=SimpleText, d=MockRandom, c=SimpleText}, locale=fr, 
 timezone=Africa/Kigali
 [junit] NOTE: all tests run in this JVM:
 [junit] [TestAddIndexes]
 [junit] NOTE: Linux 2.6.39-gentoo amd64/Sun Microsystems Inc. 1.6.0_25 
 (64-bit)/cpus=8,threads=1,free=68050392,total=446234624
 [junit] -  ---
 [junit] Testcase: 
 testAddIndexesWithRollback(org.apache.lucene.index.TestAddIndexes):   
 FAILED
 [junit]
 [junit] junit.framework.AssertionFailedError:
 [junit] at 
 org.apache.lucene.index.TestAddIndexes.testAddIndexesWithRollback(TestAddIndexes.java:932)
 [junit] at 
 org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1362)
 [junit] at 
 org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:1280)
 [junit]
 [junit]
 [junit] Test org.apache.lucene.index.TestAddIndexes FAILED
 {code}
 Fails randomly in my while(1) test run, and fails after a few minutes of running: 
 {code}
 ant test -Dtestcase=TestAddIndexes 
 -Dtests.seed=9026722750295014952:2645762923088581043 -Dtests.multiplier=3 
 -Dtests.iter=200 -Dtests.iter.min=1
 {code}




[jira] [Commented] (SOLR-2580) Create a new Search Component to alter queries based on business rules.

2011-06-08 Thread Simon Rosenthal (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13046222#comment-13046222
 ] 

Simon Rosenthal commented on SOLR-2580:
---

Tomás:
I'm not sure why you would want to encapsulate these kinds of rules within Solr 
- an e-commerce site would always have an application layer between the UI and 
Solr, which seems like the logical place to apply business rules that modify 
the request by adding boosts, specifying sort order, etc. 

Also, is Drools separate from JBoss (which is used relatively infrequently in 
the Solr community)?


 Create a new Search Component to alter queries based on business rules. 
 

 Key: SOLR-2580
 URL: https://issues.apache.org/jira/browse/SOLR-2580
 Project: Solr
  Issue Type: New Feature
Reporter: Tomás Fernández Löbbe

 The goal is to be able to adjust the relevance of documents based on user 
 defined business rules.
 For example, in an e-commerce site, when the user chooses the "shoes" 
 category, we may be interested in boosting products from a certain brand. 
 This can be expressed as a rule in the following way:
 rule "Boost Adidas products when searching shoes"
 when
 $qt : QueryTool()
 TermQuery(term.field=="category", term.text=="shoes")
 then
 $qt.boost("{!lucene}brand:adidas");
 end
 The QueryTool object should be used to alter the main query in an easy way. 
 Even more human-like rules can be written:
 rule "Boost Adidas products when searching shoes"
  when
 Query has term "shoes" in field "product"
  then
 Add boost query "{!lucene}brand:adidas"
 end
 These rules are written in a text file in the config directory and can be 
 modified at runtime. Rules will be managed using JBoss Drools: 
 http://www.jboss.org/drools/drools-expert.html
 On a first stage, it will allow to add boost queries or change sorting fields 
 based on the user query, but it could be extended to allow more options.




[jira] [Commented] (LUCENE-3179) OpenBitSet.prevSetBit()

2011-06-08 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13046227#comment-13046227
 ] 

Paul Elschot commented on LUCENE-3179:
--

The java.vm.version value is 1.6.0_03-b05, and the java.vm.info value is mixed mode.
The processor is an Athlon II X3 450 at 800 MHz.

Since the Long version is about 2.5 times faster than the BitUtil version on a 
64-bit processor, I'll change the patch to use Long. When the hardware allows 
better performance, it should be used.
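To make the discussion concrete, here is a minimal sketch of the prevSetBit idea over a raw long[] word array, using Long.numberOfLeadingZeros (the intrinsic the "Long version" above refers to). The method name matches the issue title, but the body is my own illustration, not Paul's patch.

```java
public class PrevSetBitSketch {
    /**
     * Returns the index of the last set bit at or before {@code index},
     * or -1 if there is none. Illustrative sketch only.
     */
    static int prevSetBit(long[] words, int index) {
        int w = index >> 6;  // word containing `index` (64 bits per word)
        // Mask off all bits above `index` within its word.
        long word = words[w] & (~0L >>> (63 - (index & 63)));
        while (true) {
            if (word != 0) {
                // Highest set bit of this word = 63 - leading zero count.
                return (w << 6) + 63 - Long.numberOfLeadingZeros(word);
            }
            if (--w < 0) {
                return -1;   // scanned past word 0: no earlier set bit
            }
            word = words[w];
        }
    }

    public static void main(String[] args) {
        long[] bits = new long[2];  // 128 bits, all clear
        bits[0] |= 1L << 5;         // set bit 5
        bits[1] |= 1L << 1;         // set bit 65
        System.out.println(prevSetBit(bits, 127)); // 65
        System.out.println(prevSetBit(bits, 64));  // 5
        System.out.println(prevSetBit(bits, 4));   // -1
    }
}
```

On 64-bit hardware Long.numberOfLeadingZeros typically compiles to a single instruction, which is consistent with the 2.5x speedup reported over a table-driven BitUtil approach.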


 OpenBitSet.prevSetBit()
 ---

 Key: LUCENE-3179
 URL: https://issues.apache.org/jira/browse/LUCENE-3179
 Project: Lucene - Java
  Issue Type: Improvement
Reporter: Paul Elschot
Priority: Minor
 Fix For: 3.3

 Attachments: LUCENE-3179.patch, TestBitUtil.java


 Find a previous set bit in an OpenBitSet.
 Useful for parent testing in nested document query execution LUCENE-2454 .




[jira] [Commented] (SOLR-1804) Upgrade Carrot2 to 3.2.0

2011-06-08 Thread Steven Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13046228#comment-13046228
 ] 

Steven Rowe commented on SOLR-1804:
---

bq. if you do a search for com.google.common on any .java file in trunk, 
you'll only find it in the clustering contrib module.

Right: {{grep 'com\.google' $(find . -name '*.java')}} only returns files under 
{{solr/contrib/clustering/}}.

bq. I think Robert might have removed the dependency when he moved FSTLookup to 
a module.

Yup, e.g. Lists.newArrayList() -> new ArrayList<Entry>(): 
http://svn.apache.org/viewvc/lucene/dev/trunk/modules/suggest/src/java/org/apache/lucene/search/suggest/fst/FSTLookup.java?r1=1097216r2=1126642diff_format=h#l154.

So from April 14th through May 23rd, Solr *did* have a Guava dependency :).

 Upgrade Carrot2 to 3.2.0
 

 Key: SOLR-1804
 URL: https://issues.apache.org/jira/browse/SOLR-1804
 Project: Solr
  Issue Type: Improvement
  Components: contrib - Clustering
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Fix For: 3.1, 4.0

 Attachments: SOLR-1804-carrot2-3.4.0-dev-trunk.patch, 
 SOLR-1804-carrot2-3.4.0-dev.patch, SOLR-1804-carrot2-3.4.0-libs.zip, 
 SOLR-1804.patch, carrot2-core-3.4.0-jdk1.5.jar


 http://project.carrot2.org/release-3.2.0-notes.html
 Carrot2 is now LGPL free, which means we should be able to bundle the binary!




[jira] [Updated] (LUCENE-3179) OpenBitSet.prevSetBit()

2011-06-08 Thread Paul Elschot (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Elschot updated LUCENE-3179:
-

Attachment: LUCENE-3197.patch

BitUtil.nlz() and the performance test method (renamed to tstPerfNlz()) are 
still in the patch, even though they are not used.

I think committing this could wait until LUCENE-2454 is committed, and then 
that code can be changed to use prevSetBit() together with this.

 OpenBitSet.prevSetBit()
 ---

 Key: LUCENE-3179
 URL: https://issues.apache.org/jira/browse/LUCENE-3179
 Project: Lucene - Java
  Issue Type: Improvement
Reporter: Paul Elschot
Priority: Minor
 Fix For: 3.3

 Attachments: LUCENE-3179.patch, LUCENE-3197.patch, TestBitUtil.java


 Find a previous set bit in an OpenBitSet.
 Useful for parent testing in nested document query execution LUCENE-2454 .




[jira] [Updated] (LUCENE-3179) OpenBitSet.prevSetBit()

2011-06-08 Thread Paul Elschot (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-3179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Elschot updated LUCENE-3179:
-

Attachment: (was: LUCENE-3197.patch)

 OpenBitSet.prevSetBit()
 ---

 Key: LUCENE-3179
 URL: https://issues.apache.org/jira/browse/LUCENE-3179
 Project: Lucene - Java
  Issue Type: Improvement
Reporter: Paul Elschot
Priority: Minor
 Fix For: 3.3

 Attachments: LUCENE-3179.patch, LUCENE-3179.patch, TestBitUtil.java


 Find a previous set bit in an OpenBitSet.
 Useful for parent testing in nested document query execution LUCENE-2454 .



