I have some code which uses the sqlite3_blob_write() function, but I 
observe some odd behaviour.
When I pass it a QVector (Qt's contiguous-memory container, similar to 
std::vector) of a class C, and the vector is of significant size (over 
200 entries), the data towards the end of the blob is just zeroes 
rather than the values of the vector's elements.
Class C consists of just six floats and a short (plus some member 
functions in its definition).  The precise point in the blob where this 
happens seems to vary between machines.
I have checked the SQLite limits, and the blob handle is of the desired 
size.  I pass the function the value of constData(), which returns a 
const pointer to the vector's data.

The call is as follows (body is a QVector<C>; i_blobHandle is a 
sqlite3_blob* where sqlite3_blob_bytes(i_blobHandle) == 
length of vector * size of class C in bytes):

  sqlite3_blob_write(i_blobHandle, body.constData(),
                     body.size() * SIZE_OF_C, 0);

If I immediately read the blob back afterwards as follows

  QVector<C> rmcv_d(body.size());

  // data() rather than constData(): sqlite3_blob_read needs a
  // writable buffer
  sqlite3_blob_read(i_blobHandle, rmcv_d.data(),
                    body.size() * SIZE_OF_C, 0);
  for ( int rmcv_a(0); rmcv_a < rmcv_d.size(); ++rmcv_a )
  {
    if ( rmcv_d[rmcv_a] != body[rmcv_a] )
    {
      // GET HERE AROUND THE rmcv_a == 230 MARK (though this varies)
    }
  }

Has anybody seen this type of behaviour before?  Is it a known 
limitation/bug with BLOB writing?  If I write the items one at a time 
in a loop, there is no problem; but I thought that, to save time, 
passing in the address of the vector's data would be enough - yet it 
behaves in this odd way.

Any feedback appreciated.

R



_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
