Paolo, this is all very strange. I see a problem almost immediately. I normally run with IBM's JRE 6, but I have now tested with Oracle's JRE 6 and I still see the same problem. I also run on 64-bit, but I am running on Windows 7, not Linux. I find it hard to believe that would matter, but who knows. I checked, and I am pretty sure I have the HEAD of the TxTDB branch. To make sure, I also updated ARQ to the latest and made TxTDB depend directly on that project (instead of on the Maven repository), but I keep getting the same results.
Could you tell me how I can verify, in both ARQ and TDB, that I am effectively on the latest? E.g. something I should see on a certain line number in a certain class? Perhaps something is wrong in my setup.

Thanks,
Simon

From: "Paolo Castagna (JIRA)" <[email protected]>
To: [email protected]
Date: 09/14/2011 04:14 AM
Subject: [jira] [Commented] (JENA-91) extremely large buffer is being created in ObjectFileStorage

[ https://issues.apache.org/jira/browse/JENA-91?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13104341#comment-13104341 ]

Paolo Castagna commented on JENA-91:
------------------------------------

(Hi Simon, welcome back)

I have applied your patch to TestTransSystem to TxTDB trunk. I ran it a few times and I do not see any problem. Every time this is the output:

START (disk, 100 iterations)
000: ..........
[...]
090: ..........
DONE (100)
FINISH

I am using a 64-bit OS (i.e. Linux) and the Oracle JDK 1.6, 64-bit. Which OS and JVM are you using? (Could that be the reason why you see problems and I don't?)

> extremely large buffer is being created in ObjectFileStorage
> -------------------------------------------------------------
>
>          Key: JENA-91
>          URL: https://issues.apache.org/jira/browse/JENA-91
>      Project: Jena
>   Issue Type: Bug
>   Components: TDB
>     Reporter: Simon Helsen
>     Assignee: Andy Seaborne
>     Priority: Critical
>  Attachments: JENA-91_NodeTableTrans_r1159121.patch, TestTransSystem.patch, TestTransSystem2.patch, TestTransSystem3.patch, TestTransSystem4.patch
>
> I tried to debug the OME and check why a ByteBuffer is causing my native memory to explode in almost no time. It all seems to happen in this bit of code in com.hp.hpl.jena.tdb.base.objectfile.ObjectFileStorage (lines 243 onwards):
>
>     // No - it's in the underlying file storage.
>     lengthBuffer.clear() ;
>     int x = file.read(lengthBuffer, loc) ;
>     if ( x != 4 )
>         throw new FileException("ObjectFile.read("+loc+")["+filesize+"]["+file.size()+"]: Failed to read the length : got "+x+" bytes") ;
>     int len = lengthBuffer.getInt(0) ;
>     ByteBuffer bb = ByteBuffer.allocate(len) ;
>
> My debugger shows that x == 4. It also shows that lengthBuffer has the following content: [111, 110, 61, 95]. This amounts to the value len = 1869495647, which is rather a lot :-) Obviously, the next statement (ByteBuffer.allocate) causes the OME.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
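For reference, a minimal sketch of the arithmetic behind that number, assuming only the four byte values reported above and standard java.nio. The bounds check near the end is a hypothetical illustration of a fail-fast guard (fileSize is a stand-in for file.size()); it is not the actual TDB code or the fix.

    import java.nio.ByteBuffer;

    public class Jena91LengthDecode {
        public static void main(String[] args) {
            // The four bytes observed in lengthBuffer: [111, 110, 61, 95]
            // (as ASCII: 'o', 'n', '=', '_').
            ByteBuffer lengthBuffer = ByteBuffer.wrap(new byte[] {111, 110, 61, 95});

            // Same call as in ObjectFileStorage: read them as one big-endian int,
            // i.e. (111 << 24) | (110 << 16) | (61 << 8) | 95.
            int len = lengthBuffer.getInt(0);
            System.out.println(len);   // 1869495647, about 1.8 GB

            // ByteBuffer.allocate(len) with such a value is what triggers the OME.
            // A hypothetical sanity check against the known file size would fail
            // fast instead of attempting the allocation:
            long fileSize = 1_000_000L;   // stand-in for file.size()
            if (len < 0 || len > fileSize)
                throw new IllegalStateException("Implausible object length: " + len);

            ByteBuffer bb = ByteBuffer.allocate(len);
        }
    }

Run as written, this prints the decoded length and then throws before reaching the allocation; without the check, it attempts the roughly 1.8 GB allocation that shows up as the OME in the report above.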
