[jira] Commented: (LUCENE-1609) Eliminate synchronization contention on initial index reading in TermInfosReader ensureIndexIsRead

2009-06-03 Thread Jed Wesley-Smith (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-1609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12716118#action_12716118 ]

Jed Wesley-Smith commented on LUCENE-1609:
--

We get hit by this too. We'd love to see a fix and we'd agree that up-front 
initialisation would work for us.

AFAICT there are a number of other potential subtle concurrency issues with 
{{TermInfosReader}}:

# lack of {{final}} on fields - a number of fields ({{directory}}, {{segment}}, 
{{fieldInfos}}, {{origEnum}}, {{enumerators}} etc.) are never written to after 
construction and should be declared {{final}} for better publication semantics
# unsafe publication of {{indexDivisor}} and {{totalIndexInterval}} - these 
fields are not written to under the lock and, in the worst case, could be 
unstable under use.
# {{close()}} calls {{enumerators.set(null)}} which only clears the value for 
the calling thread.

Making the {{TermInfosReader}} more immutable would address some of these 
issues.
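
Point 3 is easy to demonstrate in isolation. The sketch below (plain Java; the field name {{enumerators}} is borrowed for illustration, the rest is hypothetical) shows that calling {{set(null)}} on a {{ThreadLocal}} from one thread leaves every other thread's cached value in place:

```java
import java.util.concurrent.CountDownLatch;

// Demonstrates point 3: ThreadLocal.set(null) only clears the slot of the
// thread that calls it, so a close() on one thread cannot release other
// threads' enumerators. Illustrative only - not Lucene's actual code.
public class ThreadLocalClosePitfall {
    static final ThreadLocal<Object> enumerators = new ThreadLocal<Object>();

    // Returns true if the worker thread still sees its value after the
    // main thread performed the "close".
    static boolean demo() throws InterruptedException {
        final CountDownLatch populated = new CountDownLatch(1);
        final CountDownLatch closed = new CountDownLatch(1);
        final boolean[] survived = new boolean[1];

        Thread worker = new Thread(new Runnable() {
            public void run() {
                enumerators.set(new Object());   // worker caches its enumerator
                populated.countDown();
                try { closed.await(); } catch (InterruptedException e) { return; }
                survived[0] = enumerators.get() != null;
            }
        });
        worker.start();

        populated.await();
        enumerators.set(null);                   // "close()" on the main thread
        closed.countDown();
        worker.join();
        return survived[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo());              // prints "true"
    }
}
```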

As far as the root problem goes, uncontended synchronisation is generally _very 
fast_, but slows down significantly once a lock becomes contended. The kind of 
pattern employed here (do something quite expensive, but only once) is not an 
ideal use of synchronisation, as it commonly leads to a contended lock that 
remains slow well after the protection is required\*. That being said, it isn't 
easy to do correctly and performantly under Java 1.4. 

\* An alternative approach is something like this 
[LazyReference|http://labs.atlassian.com/source/browse/CONCURRENT/trunk/src/main/java/com/atlassian/util/concurrent/LazyReference.java?r=2242]
 class, although this kind of thing really requires Java5 for full value.
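
The linked class isn't reproduced here, but the idea can be sketched as follows (a simplified stand-in, not the Atlassian implementation; it relies on the Java 5 memory model, and the {{Supplier}} type pushes it to Java 8 purely for brevity):

```java
import java.util.function.Supplier;

// Simplified stand-in for a LazyReference: the expensive factory runs at
// most once, and once the value is published the fast path is a single
// volatile read - the lock is only contended during the first call(s).
public final class Lazy<T> {
    private final Supplier<T> factory;
    private volatile T value;

    public Lazy(Supplier<T> factory) { this.factory = factory; }

    public T get() {
        T result = value;                 // fast path: one volatile read
        if (result == null) {
            synchronized (this) {         // slow path: initial call(s) only
                result = value;
                if (result == null) {
                    result = factory.get();   // expensive work, done once
                    value = result;
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        final int[] calls = {0};
        Lazy<String> index = new Lazy<String>(() -> {
            calls[0]++;
            return "loaded";
        });
        // Repeated gets return the same value; the factory ran exactly once.
        System.out.println(index.get() + " " + index.get() + " " + calls[0]);
    }
}
```

The volatile write is what makes this safe: without it, the double-checked pattern has exactly the publication problems alluded to above.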

 Eliminate synchronization contention on initial index reading in 
 TermInfosReader ensureIndexIsRead 
 ---

 Key: LUCENE-1609
 URL: https://issues.apache.org/jira/browse/LUCENE-1609
 Project: Lucene - Java
  Issue Type: Improvement
  Components: Index
Affects Versions: 2.9
 Environment: Solr 
 Tomcat 5.5
 Ubuntu 2.6.20-17-generic
 Intel(R) Pentium(R) 4 CPU 2.80GHz, 2Gb RAM
Reporter: Dan Rosher
 Fix For: 2.9

 Attachments: LUCENE-1609.patch, LUCENE-1609.patch


 synchronized method ensureIndexIsRead in TermInfosReader causes contention 
 under heavy load.
 Simple to reproduce: e.g. under Solr, with all caches turned off, do a simple 
 range search, e.g. id:[0 TO 99], on even a small index (in my case 28K 
 docs) under a load/stress test application; later, examining the 
 thread dump (kill -3), many threads are blocked on 'waiting for monitor 
 entry' to this method.
 Rather than using double-checked locking, which is known to have issues, this 
 implementation uses a state pattern, where only one thread can move the 
 object from the IndexNotRead state to IndexRead, and in doing so alters the 
 object's behavior, i.e. once the index is loaded, access no longer needs a 
 synchronized method. 
 In my particular test, this increased throughput at least 30 times.
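
The state transition described above can be sketched in plain Java (illustrative only - not the actual LUCENE-1609 patch, and {{loadIndex}} is a stand-in for the real index read):

```java
// Sketch of the state-pattern idea: reads start in a synchronized
// "not yet read" state; the first caller loads the index and swaps in a
// lock-free "read" state, so steady-state access takes no monitor at all.
// Illustrative only - not the actual LUCENE-1609 patch.
public class StatefulIndex {
    interface State { long[] terms(); }

    private volatile State state = new NotRead();

    private final class NotRead implements State {
        public long[] terms() {
            synchronized (StatefulIndex.this) {
                if (state == this) {                  // only one thread loads
                    state = new Read(loadIndex());
                }
            }
            return state.terms();
        }
    }

    private static final class Read implements State {
        private final long[] terms;
        Read(long[] terms) { this.terms = terms; }
        public long[] terms() { return terms; }       // no synchronization
    }

    long[] loadIndex() {                              // stand-in for the real work
        return new long[] {1L, 2L, 3L};
    }

    public long[] terms() { return state.terms(); }   // volatile read picks the state

    public static void main(String[] args) {
        StatefulIndex index = new StatefulIndex();
        System.out.println(index.terms().length);     // prints "3"
    }
}
```

The volatile field is what makes the handover safe: once {{state}} points at the immutable {{Read}} instance, every subsequent reader sees fully constructed data without taking the lock.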

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: java-dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-dev-h...@lucene.apache.org



[jira] Commented: (LUCENE-1429) close() throws incorrect IllegalStateEx after IndexWriter hit an OOME when autoCommit is true

2008-10-28 Thread Jed Wesley-Smith (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-1429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12643360#action_12643360 ]

Jed Wesley-Smith commented on LUCENE-1429:
--

Thanks Michael, I'll try and work out the best policy for the client code that 
should notice OOME and react appropriately.

 close() throws incorrect IllegalStateEx after IndexWriter hit an OOME when 
 autoCommit is true
 -

 Key: LUCENE-1429
 URL: https://issues.apache.org/jira/browse/LUCENE-1429
 Project: Lucene - Java
  Issue Type: Bug
Affects Versions: 2.3, 2.3.1, 2.3.2, 2.4
Reporter: Michael McCandless
Assignee: Michael McCandless
Priority: Minor
 Fix For: 2.9


 Spinoff from 
 http://www.nabble.com/IllegalStateEx-thrown-when-calling-close-to20201825.html
 When IndexWriter hits an OOME, it records this and then if close() is
 called it calls rollback() instead.  This is a defensive measure, in
 case the OOME corrupted the internal buffered state (added/deleted
 docs).
 But there's a bug: if you opened IndexWriter with autoCommit true,
 close() then incorrectly throws an IllegalStateException.
 The fix is simple: allow rollback to be called even if autoCommit is
 true, internally during close.  (External calls to rollback with
 autoCommit true are still not allowed.)



[jira] Commented: (LUCENE-1282) Sun hotspot compiler bug in 1.6.0_04/05 affects Lucene

2008-07-11 Thread Jed Wesley-Smith (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-1282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12613018#action_12613018 ]

Jed Wesley-Smith commented on LUCENE-1282:
--

Sun has posted their evaluation on the bug above and accepted it as High 
priority.

 Sun hotspot compiler bug in 1.6.0_04/05 affects Lucene
 --

 Key: LUCENE-1282
 URL: https://issues.apache.org/jira/browse/LUCENE-1282
 Project: Lucene - Java
  Issue Type: Bug
  Components: Index
Affects Versions: 2.3, 2.3.1
Reporter: Michael McCandless
Assignee: Michael McCandless
Priority: Minor
 Fix For: 2.4

 Attachments: corrupt_merge_out15.txt, crashtest, crashtest.log, 
 hs_err_pid27359.log


 This is not a Lucene bug.  It's an as-yet not fully characterized Sun
 JRE bug, as best I can tell.  I'm opening this to gather all things we
 know, and to work around it in Lucene if possible, and maybe open an
 issue with Sun if we can reduce it to a compact test case.
 It's hit at least 3 users:
   
 http://mail-archives.apache.org/mod_mbox/lucene-java-user/200803.mbox/[EMAIL 
 PROTECTED]
   
 http://mail-archives.apache.org/mod_mbox/lucene-solr-user/200804.mbox/[EMAIL 
 PROTECTED]
   
 http://mail-archives.apache.org/mod_mbox/lucene-java-user/200805.mbox/[EMAIL 
 PROTECTED]
 It's specific to at least JRE 1.6.0_04 and 1.6.0_05; 1.6.0_03 works
 OK, and it's unknown whether 1.6.0_06 shows it.
 The bug affects bulk merging of stored fields.  When it strikes, the
 segment produced by a merge is corrupt because its fdx file (stored
 fields index file) is missing one document.  After iterating many
 times with the first user that hit this, adding diagnostic
 assertions, it seems that a call to fieldsWriter.addDocument sometimes
 either fails to run entirely, or fails to invoke its call to
 indexStream.writeLong.  It's as if, when hotspot compiles a method,
 there's some sort of race condition in cutting over to the compiled
 code whereby a single method call fails to be invoked (speculation).
 Unfortunately, this corruption is silent when it occurs and only later
 detected when a merge tries to merge the bad segment, or an
 IndexReader tries to open it.  Here's a typical merge exception:
 {code}
 Exception in thread "Thread-10" 
 org.apache.lucene.index.MergePolicy$MergeException: 
 org.apache.lucene.index.CorruptIndexException:
 doc counts differ for segment _3gh: fieldsReader shows 15999 but 
 segmentInfo shows 16000
 at 
 org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:271)
 Caused by: org.apache.lucene.index.CorruptIndexException: doc counts differ 
 for segment _3gh: fieldsReader shows 15999 but segmentInfo shows 16000
 at 
 org.apache.lucene.index.SegmentReader.initialize(SegmentReader.java:313)
 at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:262)
 at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:221)
 at 
 org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3099)
 at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:2834)
 at 
 org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:240)
 {code}
 and here's a typical exception hit when opening a searcher:
 {code}
 org.apache.lucene.index.CorruptIndexException: doc counts differ for segment 
 _kk: fieldsReader shows 72670 but segmentInfo shows 72671
 at 
 org.apache.lucene.index.SegmentReader.initialize(SegmentReader.java:313)
 at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:262)
 at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:230)
 at 
 org.apache.lucene.index.DirectoryIndexReader$1.doBody(DirectoryIndexReader.java:73)
 at 
 org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:636)
 at 
 org.apache.lucene.index.DirectoryIndexReader.open(DirectoryIndexReader.java:63)
 at org.apache.lucene.index.IndexReader.open(IndexReader.java:209)
 at org.apache.lucene.index.IndexReader.open(IndexReader.java:173)
 at 
 org.apache.lucene.search.IndexSearcher.init(IndexSearcher.java:48)
 {code}
 Sometimes, adding -Xbatch (forces up front compilation) or -Xint
 (disables compilation) to the java command line works around the
 issue.
 Here are some of the OS's we've seen the failure on:
 {code}
 SuSE 10.0
 Linux phoebe 2.6.13-15-smp #1 SMP Tue Sep 13 14:56:15 UTC 2005 x86_64 
 x86_64 x86_64 GNU/Linux 
 SuSE 8.2
 Linux phobos 2.4.20-64GB-SMP #1 SMP Mon Mar 17 17:56:03 UTC 2003 i686 
 unknown unknown GNU/Linux 
 Red Hat Enterprise Linux Server release 5.1 (Tikanga)
 Linux lab8.betech.virginia.edu 2.6.18-53.1.14.el5 #1 

[jira] Commented: (LUCENE-140) docs out of order

2007-01-10 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12463781
 ] 

Jed Wesley-Smith commented on LUCENE-140:
-

Michael, Doron, you guys are legends!

Indeed the problem is using only the IndexWriter with create true to recreate 
the directory. Creating a new Directory with create true does fix the problem. 
The javadoc for this constructor is fairly explicit that it should recreate the 
index for you (no caveat), so I would consider that a bug, but - given that 
head fixes it - not one that requires any action.

Thanks guys for the prompt attention, excellent and thorough analysis.

 docs out of order
 -

 Key: LUCENE-140
 URL: https://issues.apache.org/jira/browse/LUCENE-140
 Project: Lucene - Java
  Issue Type: Bug
  Components: Index
Affects Versions: unspecified
 Environment: Operating System: Linux
 Platform: PC
Reporter: legez
 Assigned To: Michael McCandless
 Attachments: bug23650.txt, corrupted.part1.rar, corrupted.part2.rar, 
 indexing-failure.log, LUCENE-140-2007-01-09-instrumentation.patch


 Hello,
   I can not find out, why (and what) it is happening all the time. I got an
 exception:
 java.lang.IllegalStateException: docs out of order
 at
 org.apache.lucene.index.SegmentMerger.appendPostings(SegmentMerger.java:219)
 at
 org.apache.lucene.index.SegmentMerger.mergeTermInfo(SegmentMerger.java:191)
 at
 org.apache.lucene.index.SegmentMerger.mergeTermInfos(SegmentMerger.java:172)
 at 
 org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:135)
 at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:88)
 at 
 org.apache.lucene.index.IndexWriter.mergeSegments(IndexWriter.java:341)
 at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:250)
 at Optimize.main(Optimize.java:29)
 It happens either in 1.2 and 1.3rc1 (anyway what happened to it? I can not 
 find
 it neither in download nor in version list in this form). Everything seems 
 OK. I
 can search through index, but I can not optimize it. Even worse after this
 exception every time I add new documents and close IndexWriter new segments is
 created! I think it has all documents added before, because of its size.
 My index is quite big: 500.000 docs, about 5gb of index directory.
 It is _repeatable_. I drop index, reindex everything. Afterwards I add a few
 docs, try to optimize and receive above exception.
 My documents' structure is:
   static Document indexIt(String id_strony, Reader reader, String 
 data_wydania,
 String id_wydania, String id_gazety, String data_wstawienia)
 {
 Document doc = new Document();
 doc.add(Field.Keyword("id", id_strony));
 doc.add(Field.Keyword("data_wydania", data_wydania));
 doc.add(Field.Keyword("id_wydania", id_wydania));
 doc.add(Field.Text("id_gazety", id_gazety));
 doc.add(Field.Keyword("data_wstawienia", data_wstawienia));
 doc.add(Field.Text("tresc", reader));
 return doc;
 }
 Sincerely,
 legez

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
https://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[jira] Updated: (LUCENE-140) docs out of order

2007-01-09 Thread Jed Wesley-Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jed Wesley-Smith updated LUCENE-140:


Attachment: indexing-failure.log

 docs out of order
 -

 Key: LUCENE-140
 URL: https://issues.apache.org/jira/browse/LUCENE-140
 Project: Lucene - Java
  Issue Type: Bug
  Components: Index
Affects Versions: unspecified
 Environment: Operating System: Linux
 Platform: PC
Reporter: legez
 Assigned To: Michael McCandless
 Attachments: bug23650.txt, corrupted.part1.rar, corrupted.part2.rar, 
 indexing-failure.log, LUCENE-140-2007-01-09-instrumentation.patch






[jira] Commented: (LUCENE-140) docs out of order

2007-01-09 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12463440
 ] 

Jed Wesley-Smith commented on LUCENE-140:
-

Hi Michael,

Thanks for the patch, applied and recreated. Attached is the log.

To be explicit, we are recreating the index via the IndexWriter ctor with the 
create flag set and then completely rebuilding the index. We are not completely 
deleting the entire directory. There ARE old index files (_*.cfs & _*.del) in 
the directory with updated timestamps that are months old. If I completely 
recreate the directory the problem does go away. This is a fairly trivial 
fix, but we are still investigating as we want to know if this is indeed the 
problem, how we have come to make it prevalent, and what the root cause is.

Thanks for all the help everyone.

 docs out of order
 -

 Key: LUCENE-140
 URL: https://issues.apache.org/jira/browse/LUCENE-140
 Project: Lucene - Java
  Issue Type: Bug
  Components: Index
Affects Versions: unspecified
 Environment: Operating System: Linux
 Platform: PC
Reporter: legez
 Assigned To: Michael McCandless
 Attachments: bug23650.txt, corrupted.part1.rar, corrupted.part2.rar, 
 indexing-failure.log, LUCENE-140-2007-01-09-instrumentation.patch






[jira] Commented: (LUCENE-140) docs out of order

2007-01-09 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12463470
 ] 

Jed Wesley-Smith commented on LUCENE-140:
-

BTW. We have looked at all the open files referenced by the VM when the 
indexing errors occur, and there does not seem to be any reference to the old 
index segment files, so I am not sure how those files are influencing this 
problem.

 docs out of order
 -

 Key: LUCENE-140
 URL: https://issues.apache.org/jira/browse/LUCENE-140
 Project: Lucene - Java
  Issue Type: Bug
  Components: Index
Affects Versions: unspecified
 Environment: Operating System: Linux
 Platform: PC
Reporter: legez
 Assigned To: Michael McCandless
 Attachments: bug23650.txt, corrupted.part1.rar, corrupted.part2.rar, 
 indexing-failure.log, LUCENE-140-2007-01-09-instrumentation.patch






[jira] Commented: (LUCENE-140) docs out of order

2007-01-08 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12463202
 ] 

Jed Wesley-Smith commented on LUCENE-140:
-

Hi Michael,

This is awesome, I have prepared a patched 1.9.1: 
http://jira.atlassian.com/secure/attachment/19390/lucene-core-1.9.1-atlassian-patched-2007-01-09.jar

Unfortunately we don't have a repeatable test for this so we will have to 
distribute to afflicted customers and - well, pray I guess. We have been seeing 
this sporadically in our main JIRA instance http://jira.atlassian.com so we 
will hopefully not observe it now.

We do only use the deleteDocuments(Term) method, so we are not sure whether 
this will truly fix our problem, but we note that that method calls 
deleteDocument(int) based on the TermDocs returned for the Term - and maybe 
they can be incorrect???

Out of interest, apart from changing from 1.4.3 to 1.9.1, in the JIRA 3.7 
release we changed our default merge factor to 4 from 10. We hadn't seen this 
problem before, and suddenly we have had a reasonable number of occurrences. 

 docs out of order
 -

 Key: LUCENE-140
 URL: https://issues.apache.org/jira/browse/LUCENE-140
 Project: Lucene - Java
  Issue Type: Bug
  Components: Index
Affects Versions: unspecified
 Environment: Operating System: Linux
 Platform: PC
Reporter: legez
 Assigned To: Michael McCandless
 Attachments: bug23650.txt, corrupted.part1.rar, corrupted.part2.rar






[jira] Commented: (LUCENE-140) docs out of order

2007-01-08 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12463203
 ] 

Jed Wesley-Smith commented on LUCENE-140:
-

Alas, this doesn't appear to be the problem. We are still getting it, but we do 
at least have a little more info. We added the doc and lastDoc to the 
IllegalArgEx and we are getting very strange numbers:

java.lang.IllegalStateException: docs out of order (-1764 < 0)
at 
org.apache.lucene.index.SegmentMerger.appendPostings([Lorg/apache/lucene/index/SegmentMergeInfo;I)I(SegmentMerger.java:335)
at 
org.apache.lucene.index.SegmentMerger.mergeTermInfo([Lorg/apache/lucene/index/SegmentMergeInfo;I)V(SegmentMerger.java:298)
at 
org.apache.lucene.index.SegmentMerger.mergeTermInfos()V(SegmentMerger.java:272) 
at 
org.apache.lucene.index.SegmentMerger.mergeTerms()V(SegmentMerger.java:236)
at org.apache.lucene.index.SegmentMerger.merge()I(SegmentMerger.java:89)
at 
org.apache.lucene.index.IndexWriter.mergeSegments(II)V(IndexWriter.java:681)
at 
org.apache.lucene.index.IndexWriter.mergeSegments(I)V(IndexWriter.java:658)
at 
org.apache.lucene.index.IndexWriter.maybeMergeSegments()V(IndexWriter.java:646)
at 
org.apache.lucene.index.IndexWriter.addDocument(Lorg/apache/lucene/document/Document;Lorg/apache/lucene/analysis/Analyzer;)V(IndexWriter.java:453)
 
at 
org.apache.lucene.index.IndexWriter.addDocument(Lorg/apache/lucene/document/Document;)V(IndexWriter.java:436)

where doc = -1764 and lastDoc is zero

 docs out of order
 -

 Key: LUCENE-140
 URL: https://issues.apache.org/jira/browse/LUCENE-140
 Project: Lucene - Java
  Issue Type: Bug
  Components: Index
Affects Versions: unspecified
 Environment: Operating System: Linux
 Platform: PC
Reporter: legez
 Assigned To: Michael McCandless
 Attachments: bug23650.txt, corrupted.part1.rar, corrupted.part2.rar






[jira] Commented: (LUCENE-140) docs out of order

2007-01-07 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12462949
 ] 

Jed Wesley-Smith commented on LUCENE-140:
-

We have now seen this in a number of customer sites since upgrading JIRA to use 
Lucene 1.9.1. The JIRA report is here: 
http://jira.atlassian.com/browse/JRA-11861

We only seem to have seen it since the upgrade from 1.4.3 to 1.9.1, we hadn't 
seen it before then.

This is now a major issue for us, it is hitting a number of our customers. I am 
trying to generate a repeatable test for it as a matter of urgency.

As a follow-up we sometimes see the old ArrayIndexOutOfBoundsEx in 
BitVector.get() (BitVector.java:63)

will post more if I find something worth sharing.

 docs out of order
 -

 Key: LUCENE-140
 URL: https://issues.apache.org/jira/browse/LUCENE-140
 Project: Lucene - Java
  Issue Type: Bug
  Components: Index
Affects Versions: unspecified
 Environment: Operating System: Linux
 Platform: PC
Reporter: legez
 Assigned To: Lucene Developers
 Attachments: bug23650.txt, corrupted.part1.rar, corrupted.part2.rar


 Hello,
   I can not find out, why (and what) it is happening all the time. I got an
 exception:
 java.lang.IllegalStateException: docs out of order
 at
 org.apache.lucene.index.SegmentMerger.appendPostings(SegmentMerger.java:219)
 at
 org.apache.lucene.index.SegmentMerger.mergeTermInfo(SegmentMerger.java:191)
 at
 org.apache.lucene.index.SegmentMerger.mergeTermInfos(SegmentMerger.java:172)
 at 
 org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:135)
 at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:88)
 at 
 org.apache.lucene.index.IndexWriter.mergeSegments(IndexWriter.java:341)
 at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:250)
 at Optimize.main(Optimize.java:29)
 It happens either in 1.2 and 1.3rc1 (anyway what happened to it? I can not 
 find
 it neither in download nor in version list in this form). Everything seems 
 OK. I
 can search through index, but I can not optimize it. Even worse after this
 exception every time I add new documents and close IndexWriter new segments is
 created! I think it has all documents added before, because of its size.
 My index is quite big: 500.000 docs, about 5gb of index directory.
 It is _repeatable_. I drop index, reindex everything. Afterwards I add a few
 docs, try to optimize and receive above exception.
 My documents' structure is:
   static Document indexIt(String id_strony, Reader reader, String 
 data_wydania,
 String id_wydania, String id_gazety, String data_wstawienia)
 {
 Document doc = new Document();
 doc.add(Field.Keyword("id", id_strony));
 doc.add(Field.Keyword("data_wydania", data_wydania));
 doc.add(Field.Keyword("id_wydania", id_wydania));
 doc.add(Field.Text("id_gazety", id_gazety));
 doc.add(Field.Keyword("data_wstawienia", data_wstawienia));
 doc.add(Field.Text("tresc", reader));
 return doc;
 }
 Sincerely,
 legez







[jira] Commented: (LUCENE-140) docs out of order

2007-01-07 Thread Jed Wesley-Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12462950
 ] 

Jed Wesley-Smith commented on LUCENE-140:
-

and we also see ArrayIndexOutOfBoundsEx in the SegmentReader.isDeleted() method:

java.lang.ArrayIndexOutOfBoundsException
at org.apache.lucene.index.SegmentReader.isDeleted(I)Z(Optimized Method)
at org.apache.lucene.index.SegmentMerger.mergeFields()I(Optimized 
Method)
at org.apache.lucene.index.SegmentMerger.merge()I(Optimized Method)
at 
org.apache.lucene.index.IndexWriter.mergeSegments(II)V(IndexWriter.java:681)

 docs out of order
 -

 Key: LUCENE-140
 URL: https://issues.apache.org/jira/browse/LUCENE-140
 Project: Lucene - Java
  Issue Type: Bug
  Components: Index
Affects Versions: unspecified
 Environment: Operating System: Linux
 Platform: PC
Reporter: legez
 Assigned To: Lucene Developers
 Attachments: bug23650.txt, corrupted.part1.rar, corrupted.part2.rar


 Hello,
   I can not find out, why (and what) it is happening all the time. I got an
 exception:
 java.lang.IllegalStateException: docs out of order
 at
 org.apache.lucene.index.SegmentMerger.appendPostings(SegmentMerger.java:219)
 at
 org.apache.lucene.index.SegmentMerger.mergeTermInfo(SegmentMerger.java:191)
 at
 org.apache.lucene.index.SegmentMerger.mergeTermInfos(SegmentMerger.java:172)
 at 
 org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:135)
 at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:88)
 at 
 org.apache.lucene.index.IndexWriter.mergeSegments(IndexWriter.java:341)
 at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:250)
 at Optimize.main(Optimize.java:29)
 It happens either in 1.2 and 1.3rc1 (anyway what happened to it? I can not 
 find
 it neither in download nor in version list in this form). Everything seems 
 OK. I
 can search through index, but I can not optimize it. Even worse after this
 exception every time I add new documents and close IndexWriter new segments is
 created! I think it has all documents added before, because of its size.
 My index is quite big: 500.000 docs, about 5gb of index directory.
 It is _repeatable_. I drop index, reindex everything. Afterwards I add a few
 docs, try to optimize and receive above exception.
 My documents' structure is:
   static Document indexIt(String id_strony, Reader reader, String 
 data_wydania,
 String id_wydania, String id_gazety, String data_wstawienia)
 {
 Document doc = new Document();
 doc.add(Field.Keyword("id", id_strony));
 doc.add(Field.Keyword("data_wydania", data_wydania));
 doc.add(Field.Keyword("id_wydania", id_wydania));
 doc.add(Field.Text("id_gazety", id_gazety));
 doc.add(Field.Keyword("data_wstawienia", data_wstawienia));
 doc.add(Field.Text("tresc", reader));
 return doc;
 }
 Sincerely,
 legez







[jira] Commented: (LUCENE-748) Exception during IndexWriter.close() prevents release of the write.lock

2006-12-18 Thread Jed Wesley-Smith (JIRA)
[ 
http://issues.apache.org/jira/browse/LUCENE-748?page=comments#action_12459489 ] 

Jed Wesley-Smith commented on LUCENE-748:
-

I guess, particularly in light of LUCENE-702 that this behavior is OK - and the 
IndexReader.unlock(dir) is a good suggestion. My real problem was that the 
finalize() method does eventually remove the write lock. 

My suggestion, then, would be to document the exceptional behaviour of the 
close() method (i.e. an exception means that changes haven't been written and the 
write lock is still held) and link to the IndexReader.unlock(Directory) method.

 Exception during IndexWriter.close() prevents release of the write.lock
 ---

 Key: LUCENE-748
 URL: http://issues.apache.org/jira/browse/LUCENE-748
 Project: Lucene - Java
  Issue Type: Bug
Affects Versions: 1.9
 Environment: Lucene 1.4 through 2.1 HEAD (as of 2006-12-14)
Reporter: Jed Wesley-Smith

 After encountering a case of index corruption - see 
 http://issues.apache.org/jira/browse/LUCENE-140 - when the close() method 
 encounters an exception in the flushRamSegments() method, the index 
 write.lock is not released (ie. it is not really closed).
 The writelock is only released when the IndexWriter is GC'd and finalize() is 
 called.




[jira] Commented: (LUCENE-748) Exception during IndexWriter.close() prevents release of the write.lock

2006-12-18 Thread Jed Wesley-Smith (JIRA)
[ 
http://issues.apache.org/jira/browse/LUCENE-748?page=comments#action_12459502 ] 

Jed Wesley-Smith commented on LUCENE-748:
-

Awesome, thanks!

 Exception during IndexWriter.close() prevents release of the write.lock
 ---

 Key: LUCENE-748
 URL: http://issues.apache.org/jira/browse/LUCENE-748
 Project: Lucene - Java
  Issue Type: Bug
Affects Versions: 1.9
 Environment: Lucene 1.4 through 2.1 HEAD (as of 2006-12-14)
Reporter: Jed Wesley-Smith
 Assigned To: Michael McCandless

 After encountering a case of index corruption - see 
 http://issues.apache.org/jira/browse/LUCENE-140 - when the close() method 
 encounters an exception in the flushRamSegments() method, the index 
 write.lock is not released (ie. it is not really closed).
 The writelock is only released when the IndexWriter is GC'd and finalize() is 
 called.




[jira] Commented: (LUCENE-140) docs out of order

2006-12-14 Thread Jed Wesley-Smith (JIRA)
[ 
http://issues.apache.org/jira/browse/LUCENE-140?page=comments#action_12458669 ] 

Jed Wesley-Smith commented on LUCENE-140:
-

We have seen this one as well. We don't have the same usage as above; we only 
ever delete documents with IndexReader.deleteDocuments(Term).

We are using Lucene 1.9.1

It occurs in two places, inside IndexWriter.addDocument():

java.lang.IllegalStateException: docs out of order
at 
org.apache.lucene.index.SegmentMerger.appendPostings([Lorg/apache/lucene/index/SegmentMergeInfo;I)I(Optimized
 Method)
at 
org.apache.lucene.index.SegmentMerger.mergeTermInfo([Lorg/apache/lucene/index/SegmentMergeInfo;I)V(Optimized
 Method)
at org.apache.lucene.index.SegmentMerger.mergeTermInfos()V(Optimized 
Method)
at org.apache.lucene.index.SegmentMerger.mergeTerms()V(Optimized Method)
at org.apache.lucene.index.SegmentMerger.merge()I(Optimized Method)
at 
org.apache.lucene.index.IndexWriter.mergeSegments(II)V(IndexWriter.java:681)
at 
org.apache.lucene.index.IndexWriter.mergeSegments(I)V(IndexWriter.java:658)
at 
org.apache.lucene.index.IndexWriter.maybeMergeSegments()V(IndexWriter.java:646)
at 
org.apache.lucene.index.IndexWriter.addDocument(Lorg/apache/lucene/document/Document;Lorg/apache/lucene/analysis/Analyzer;)V(IndexWriter.java:453)
at 
org.apache.lucene.index.IndexWriter.addDocument(Lorg/apache/lucene/document/Document;)V(IndexWriter.java:436)

and inside IndexWriter.close():

java.lang.IllegalStateException: docs out of order
at 
org.apache.lucene.index.SegmentMerger.appendPostings([Lorg/apache/lucene/index/SegmentMergeInfo;I)I(Optimized
 Method)
at 
org.apache.lucene.index.SegmentMerger.mergeTermInfo([Lorg/apache/lucene/index/SegmentMergeInfo;I)V(Optimized
 Method)
at org.apache.lucene.index.SegmentMerger.mergeTermInfos()V(Optimized 
Method)
at org.apache.lucene.index.SegmentMerger.mergeTerms()V(Optimized Method)
at org.apache.lucene.index.SegmentMerger.merge()I(Optimized Method)
at 
org.apache.lucene.index.IndexWriter.mergeSegments(II)V(IndexWriter.java:681)
at 
org.apache.lucene.index.IndexWriter.mergeSegments(I)V(IndexWriter.java:658)
at 
org.apache.lucene.index.IndexWriter.flushRamSegments()V(IndexWriter.java:628)
at org.apache.lucene.index.IndexWriter.close()V(IndexWriter.java:375)

The second one exposes a problem in the close() method: the index write.lock is 
not released when exceptions are thrown in close(), causing subsequent attempts 
to open an IndexWriter to fail.
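The defensive pattern implied here (and by the eventual LUCENE-748 fix) is to release the write lock in a finally block, so it is dropped on every exit path from close(). A minimal self-contained sketch of that pattern; WriteLock is a hypothetical stand-in, not Lucene's real lock API:

```java
public class LockReleaseDemo {
    // Hypothetical stand-in for Lucene's write.lock,
    // not the real org.apache.lucene.store.Lock API.
    static class WriteLock {
        private boolean held = true;
        void release() { held = false; }
        boolean isHeld() { return held; }
    }

    // Simulates IndexWriter.close() failing inside flushRamSegments().
    static void flushThatThrows() {
        throw new IllegalStateException("docs out of order");
    }

    // Returns true on a clean close; the finally block releases the lock
    // whether or not the flush throws.
    static boolean closeSafely(WriteLock lock) {
        try {
            flushThatThrows();
            return true;
        } catch (RuntimeException e) {
            return false;
        } finally {
            lock.release();
        }
    }

    public static void main(String[] args) {
        WriteLock lock = new WriteLock();
        boolean clean = closeSafely(lock);
        // Even though the "close" failed, the lock is no longer held.
        System.out.println("clean close: " + clean + ", lock held: " + lock.isHeld());
    }
}
```

With the pre-fix behaviour, by contrast, the lock stays held until finalize() runs, blocking any new IndexWriter on the same directory.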

 docs out of order
 -

 Key: LUCENE-140
 URL: http://issues.apache.org/jira/browse/LUCENE-140
 Project: Lucene - Java
  Issue Type: Bug
  Components: Index
Affects Versions: unspecified
 Environment: Operating System: Linux
 Platform: PC
Reporter: legez
 Assigned To: Lucene Developers
 Attachments: bug23650.txt, corrupted.part1.rar, corrupted.part2.rar


 Hello,
   I can not find out, why (and what) it is happening all the time. I got an
 exception:
 java.lang.IllegalStateException: docs out of order
 at
 org.apache.lucene.index.SegmentMerger.appendPostings(SegmentMerger.java:219)
 at
 org.apache.lucene.index.SegmentMerger.mergeTermInfo(SegmentMerger.java:191)
 at
 org.apache.lucene.index.SegmentMerger.mergeTermInfos(SegmentMerger.java:172)
 at 
 org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:135)
 at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:88)
 at 
 org.apache.lucene.index.IndexWriter.mergeSegments(IndexWriter.java:341)
 at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:250)
 at Optimize.main(Optimize.java:29)
 It happens either in 1.2 and 1.3rc1 (anyway what happened to it? I can not 
 find
 it neither in download nor in version list in this form). Everything seems 
 OK. I
 can search through index, but I can not optimize it. Even worse after this
 exception every time I add new documents and close IndexWriter new segments is
 created! I think it has all documents added before, because of its size.
 My index is quite big: 500.000 docs, about 5gb of index directory.
 It is _repeatable_. I drop index, reindex everything. Afterwards I add a few
 docs, try to optimize and receive above exception.
 My documents' structure is:
   static Document indexIt(String id_strony, Reader reader, String 
 data_wydania,
 String id_wydania, String id_gazety, String data_wstawienia)
 {
 Document doc = new Document();
 doc.add(Field.Keyword("id", id_strony));
 doc.add(Field.Keyword("data_wydania", data_wydania));
 doc.add(Field.Keyword("id_wydania", id_wydania));
 doc.add(Field.Text("id_gazety", id_gazety));
 doc.add(Field.Keyword("data_wstawienia", data_wstawienia));
 doc.add(Field.Text("tresc", reader));
 return doc;
 }
 Sincerely,
 legez


[jira] Created: (LUCENE-748) Exception during IndexWriter.close() prevents release of the write.lock

2006-12-14 Thread Jed Wesley-Smith (JIRA)
Exception during IndexWriter.close() prevents release of the write.lock
---

 Key: LUCENE-748
 URL: http://issues.apache.org/jira/browse/LUCENE-748
 Project: Lucene - Java
  Issue Type: Bug
Affects Versions: 1.9
 Environment: Lucene 1.4 through 2.1 HEAD (as of 2006-12-14)
Reporter: Jed Wesley-Smith


After encountering a case of index corruption - see 
http://issues.apache.org/jira/browse/LUCENE-140 - when the close() method 
encounters an exception in the flushRamSegments() method, the index write.lock 
is not released (ie. it is not really closed).

The writelock is only released when the IndexWriter is GC'd and finalize() is 
called.




[jira] Commented: (LUCENE-681) org.apache.lucene.document.Field is Serializable but doesn't have default constructor

2006-12-10 Thread Jed Wesley-Smith (JIRA)
[ 
http://issues.apache.org/jira/browse/LUCENE-681?page=comments#action_12457253 ] 

Jed Wesley-Smith commented on LUCENE-681:
-

worksforme

import java.io.*;
import org.apache.lucene.document.Field;

public class SerializationTest
{
    public static void main(String[] args) throws Exception
    {
        Field field = new Field("name", "value", Field.Store.YES,
                Field.Index.TOKENIZED);
        System.out.println(field);
        final Object field2 = new SerializationTest().serialize(field);
        System.out.println(field2);
        System.out.println(field == field2);
    }

    Object serialize(Object input) throws IOException, ClassNotFoundException
    {
        ByteArrayOutputStream outBytes = new ByteArrayOutputStream();
        ObjectOutputStream outObjects = new ObjectOutputStream(outBytes);
        outObjects.writeObject(input);

        ByteArrayInputStream inBytes = new
                ByteArrayInputStream(outBytes.toByteArray());
        ObjectInputStream inObjects = new ObjectInputStream(inBytes);
        return inObjects.readObject();
    }
}

It's a final class, dude; what does it need a default constructor for?

Consider closing.

 org.apache.lucene.document.Field is Serializable but doesn't have default 
 constructor
 -

 Key: LUCENE-681
 URL: http://issues.apache.org/jira/browse/LUCENE-681
 Project: Lucene - Java
  Issue Type: Bug
  Components: Other
Affects Versions: 1.9, 2.0.0, 2.1, 2.0.1
 Environment: doesn't depend on environment
Reporter: Elijah Epifanov
Priority: Critical

 when I try to pass Document via network or do anyhing involving 
 serialization/deserialization I will get an exception.
 the following patch should help (Field.java):
   public Field () {
   }
   private void writeObject (java.io.ObjectOutputStream out)
   throws IOException {
 out.defaultWriteObject ();
   }
   private void readObject (java.io.ObjectInputStream in)
   throws IOException, ClassNotFoundException {
 in.defaultReadObject ();
 if (name == null) {
  throw new NullPointerException("name cannot be null");
 }
 this.name = name.intern ();// field names are interned
   }
 Maybe other classes do not conform to Serialization requirements too...
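The readObject/defaultReadObject pattern proposed in the patch above can be shown working in a self-contained form. InternedName below is a hypothetical illustration class, not Lucene's actual Field, and the round-trip helper mirrors the SerializationTest posted earlier in this thread:

```java
import java.io.*;

// Hypothetical illustration of the readObject/defaultReadObject pattern;
// not Lucene's real Field class.
public class InternedName implements Serializable {
    private static final long serialVersionUID = 1L;
    private String name;

    public InternedName(String name) {
        if (name == null) throw new NullPointerException("name cannot be null");
        this.name = name.intern(); // field names are interned
    }

    public String name() { return name; }

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        if (name == null) throw new NullPointerException("name cannot be null");
        name = name.intern(); // restore the interning invariant on deserialization
    }

    // Serialize and deserialize an object entirely in memory.
    static Object roundTrip(Object input)
            throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(input);
        out.flush();
        ObjectInputStream in =
                new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()));
        return in.readObject();
    }

    public static void main(String[] args) throws Exception {
        InternedName copy = (InternedName) roundTrip(new InternedName("tresc"));
        // Because readObject re-interns, == against the interned literal holds.
        System.out.println(copy.name() == "tresc");
    }
}
```

Note that no default constructor is needed: Serializable classes are instantiated by the serialization machinery without calling any declared constructor, which is why the test above passes as-is.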
