Andy, others,
I tried to debug the OOME (OutOfMemoryError) and check why the ByteBuffer is
causing my native memory to explode in almost no time. It all seems to happen
in this bit of code in com.hp.hpl.jena.tdb.base.objectfile.ObjectFileStorage
(lines 243 onwards):
// No - it's in the underlying file storage.
lengthBuffer.clear() ;
int x = file.read(lengthBuffer, loc) ;
if ( x != 4 )
throw new FileException("ObjectFile.read("+loc+")["+filesize+
"]["+file.size()+"]: Failed to read the length : got "+x+" bytes") ;
int len = lengthBuffer.getInt(0) ;
ByteBuffer bb = ByteBuffer.allocate(len) ;
My debugger shows that x==4. It also shows that lengthBuffer has the
following content: [111, 110, 61, 95], yet the value of len is 1869495647,
which is rather a lot :-) Obviously, the next statement
(ByteBuffer.allocate) then causes the OOME.
So, looking into it, lengthBuffer.getInt(0)==1869495647, but
lengthBuffer.get(0)==111, which seems more correct. Then I noticed that
the ByteBuffer also says bigEndian==true. I am running all of this on
Windows 7, which is, to the best of my knowledge, little endian.
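To illustrate: the observed int value is exactly what you get when those four bytes are decoded big-endian, which is the ByteBuffer default. A minimal standalone demo (names are mine, not from the TDB code):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndianDemo {
    public static void main(String[] args) {
        // The four bytes my debugger showed in lengthBuffer
        ByteBuffer bb = ByteBuffer.wrap(new byte[] {111, 110, 61, 95});

        // Java ByteBuffers default to big-endian, regardless of platform
        System.out.println(bb.order());    // BIG_ENDIAN
        System.out.println(bb.getInt(0));  // 1869495647 == 0x6F6E3D5F

        // Reinterpreting the same bytes little-endian gives a different value
        bb.order(ByteOrder.LITTLE_ENDIAN);
        System.out.println(bb.getInt(0));  // 1597861487 == 0x5F3D6E6F
    }
}
```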
I think this can only work correctly if these ByteBuffers have their order
explicitly set to that of the platform. So, somewhere in the code, you'd
need to make sure to set
lengthBuffer.order(ByteOrder.nativeOrder())
and probably do the same for all other usages of ByteBuffer. That should
solve the problem on little-endian systems such as Windows.
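As a sketch of what I mean (my own illustrative helper, not the actual TDB code), the length buffer would be created something like this:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class NativeOrderBuffer {
    // Hypothetical helper: allocate the 4-byte length buffer with its
    // byte order set to the host platform's native order, as suggested.
    static ByteBuffer newLengthBuffer() {
        ByteBuffer b = ByteBuffer.allocate(4);
        b.order(ByteOrder.nativeOrder()); // LITTLE_ENDIAN on x86 Windows
        return b;
    }

    public static void main(String[] args) {
        System.out.println(newLengthBuffer().order());
    }
}
```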
Any thoughts or should I create a Jira Issue?
Simon