I just tested TestTransSystem using the August 31 build and I get
exactly the same symptoms. Because the only difference between your setup
and mine is Linux versus Windows, I decided to run the test in direct mode
(since Windows memory-mapped I/O is shaky - in fact, it is slower than on
Linux and may have other problems).

When I run in direct mode, the (patched) test runs longer, but eventually
fails with another problem. I am not sure if this helps. I am clearly
observing an issue, and the patched version of TestTransSystem shows the
problem in my environment, so it would help if someone else is able to
reproduce it. I have attached the complete TestTransSystem I am using (not
just the patch).

Simon



START (disk, 100 iterations)
000: ..........
010: ..........
020: ........com.hp.hpl.jena.tdb.TDBException: Different ids for "7293708485020687997"^^http://www.w3.org/2001/XMLSchema#long: allocated: expected [00000000000001EA], got [00000000000001A7]
        at com.hp.hpl.jena.tdb.transaction.NodeTableTrans.append(NodeTableTrans.java:178)
        at com.hp.hpl.jena.tdb.transaction.NodeTableTrans.writeNodeJournal(NodeTableTrans.java:210)
        at com.hp.hpl.jena.tdb.transaction.NodeTableTrans.commitPrepare(NodeTableTrans.java:190)
        at com.hp.hpl.jena.tdb.transaction.Transaction.prepare(Transaction.java:94)
        at com.hp.hpl.jena.tdb.transaction.Transaction.commit(Transaction.java:77)
        at com.hp.hpl.jena.tdb.DatasetGraphTxn.commit(DatasetGraphTxn.java:26)
        at com.ibm.team.jfs.rdf.internal.jenatdbtx.temp.TestTransSystem$Writer.call(TestTransSystem.java:209)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
        at java.util.concurrent.FutureTask.run(FutureTask.java:149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
        at java.lang.Thread.run(Thread.java:736)




From: "Paolo Castagna (JIRA)" <[email protected]>
To: [email protected]
Date: 09/14/2011 04:14 AM
Subject: [jira] [Commented] (JENA-91) extremely large buffer is being created in ObjectFileStorage

    [ https://issues.apache.org/jira/browse/JENA-91?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13104341#comment-13104341 ]

Paolo Castagna commented on JENA-91:
------------------------------------

(Hi Simon, welcome back)

I have applied your patch to TestTransSystem on TxTDB trunk.
I ran it a few times and I do not see any problem. Every time this is
the output:

START (disk, 100 iterations)
000: ..........
[...]
090: ..........

DONE (100)
FINISH

I am using a 64-bit OS (Linux) and the Oracle JDK 1.6, 64-bit.

Which OS and JVM are you using? (Could that be the reason why you see 
problems and I don't?)

> extremely large buffer is being created in ObjectFileStorage
> ------------------------------------------------------------
>
>                 Key: JENA-91
>                 URL: https://issues.apache.org/jira/browse/JENA-91
>             Project: Jena
>          Issue Type: Bug
>          Components: TDB
>            Reporter: Simon Helsen
>            Assignee: Andy Seaborne
>            Priority: Critical
>         Attachments: JENA-91_NodeTableTrans_r1159121.patch, 
TestTransSystem.patch, TestTransSystem2.patch, TestTransSystem3.patch, 
TestTransSystem4.patch
>
>
> I tried to debug the OOME and check why a ByteBuffer is causing my native memory to explode in almost no time. It all seems to happen in this bit of code in com.hp.hpl.jena.tdb.base.objectfile.ObjectFileStorage (lines 243 onwards):
>
>         // No - it's in the underlying file storage.
>         lengthBuffer.clear() ;
>         int x = file.read(lengthBuffer, loc) ;
>         if ( x != 4 )
>             throw new FileException("ObjectFile.read("+loc+")["+filesize+"]["+file.size()+"]: Failed to read the length : got "+x+" bytes") ;
>         int len = lengthBuffer.getInt(0) ;
>         ByteBuffer bb = ByteBuffer.allocate(len) ;
>
> My debugger shows that x == 4. It also shows that lengthBuffer has the following content: [111, 110, 61, 95]. This amounts to len = 1869495647, which is rather a lot :-) Obviously, the next statement (ByteBuffer.allocate) causes the OOME.
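(Editor's note: the four bytes reported above are all printable ASCII ('o', 'n', '=', '_'), which suggests the length read landed inside stored node data rather than on an actual length prefix. A small standalone sketch, plain Java and not Jena code, confirms that those bytes decode to exactly the reported length:)

```java
import java.nio.ByteBuffer;

public class LengthDecode {
    public static void main(String[] args) {
        // The bytes observed in lengthBuffer: ASCII 'o', 'n', '=', '_'
        ByteBuffer lengthBuffer = ByteBuffer.wrap(new byte[] { 111, 110, 61, 95 });
        // ByteBuffer defaults to big-endian, matching ObjectFileStorage's getInt(0)
        int len = lengthBuffer.getInt(0);
        System.out.println(len); // prints 1869495647, the value Simon reports
    }
}
```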

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

 

