Hello.

I agree that it is too low-level. I also see the problem with my solution: a Blob could be invalidated while it is being read. Let's go with the BlobCallback interface that you sketched out, as it solves this problem.

Do you think I could implement BlobHolder.readBlob with plain JDBC (generated from the model), as I did in BlobFault.getBlob()? The reason is that I need to bypass the ExtendedType machinery and have full control over the selecting transaction. Or is it possible to get the same level of control with a Query?
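
Something along these lines is what I have in mind (a rough sketch only; the SQL, the primary key handling and the way the connection is obtained are placeholders, and error handling is kept minimal):

   import java.sql.Blob;
   import java.sql.Connection;
   import java.sql.PreparedStatement;
   import java.sql.ResultSet;
   import java.sql.SQLException;
   import javax.sql.DataSource;

   public class JdbcBlobHolder {

      private DataSource dataSource; // would really come from the DataNode
      private String sql;            // e.g. "SELECT CONTENT FROM DOCUMENT WHERE ID = ?"
      private Object pk;             // primary key of the row holding the BLOB

      public JdbcBlobHolder(DataSource dataSource, String sql, Object pk) {
         this.dataSource = dataSource;
         this.sql = sql;
         this.pk = pk;
      }

      // reads the BLOB on a single connection, so the Blob handed to the
      // callback stays valid for the duration of the call
      public void readBlob(BlobCallback callback) throws SQLException {
         Connection con = dataSource.getConnection();
         try {
            PreparedStatement st = con.prepareStatement(sql);
            st.setObject(1, pk);
            ResultSet rs = st.executeQuery();
            if (rs.next()) {
               callback.blobRead(rs.getBlob(1));
            }
            rs.close();
            st.close();
         }
         finally {
            con.close();
         }
      }
   }

(BlobCallback here is the interface from your sketch below.)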

Writing is pretty easy if you just provide a MemoryBlob and set it. I think a streaming insert would require some special handling for each JDBC driver.
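
For illustration only (the entity and property names are made up, and javax.sql.rowset.serial.SerialBlob is just one convenient in-memory Blob implementation that ships with the JDK):

   import javax.sql.rowset.serial.SerialBlob;

   byte[] bytes = readFileIntoBytes("picture.jpg");  // hypothetical helper
   picture.setImage(new SerialBlob(bytes));          // hypothetical mapped property
   picture.getDataContext().commitChanges();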

BlobHolder could probably act as the fault itself, like I did with BlobFault?

Regards,
 - Tore.


On Nov 22, 2006, at 5:20, Andrus Adamchik wrote:
My concern is that it seems too low-level: accessing the DataNode directly means you have to manage connection state and all that. But there is another, more fundamental problem: a Blob becomes invalid once the connection that produced it is closed.

So I wonder if, instead of mapping a column directly as a Blob, we should implement our own mapping primitive (BlobHolder?) that can re-read the Blob as many times as requested. The other piece is a callback interface that is handed a regular Blob instance guaranteed to be in a valid state (i.e. inside a Cayenne transaction). The callback can read the stream and do whatever it needs with it (like pushing the data to an HttpResponse):

// could be an interface or a class
interface BlobHolder {
   void readBlob(BlobCallback callback);
}

interface BlobCallback {
   // this is called within the transaction span and the Blob is guaranteed to be valid
   void blobRead(Blob blob);
}
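
For example, pushing the data to the servlet response could look roughly like this (just to show the intended usage; "response" is an HttpServletResponse and "blobHolder" is a BlobHolder instance obtained from a persistent object):

   // the Blob is only touched inside blobRead(), while the Cayenne
   // transaction (and underlying connection) is still open
   blobHolder.readBlob(new BlobCallback() {
      public void blobRead(Blob blob) {
         try {
            InputStream in = blob.getBinaryStream();
            OutputStream out = response.getOutputStream();
            byte[] buffer = new byte[8192];
            int len;
            while ((len = in.read(buffer)) != -1) {
               out.write(buffer, 0, len);
            }
            in.close();
         }
         catch (Exception e) {
            throw new RuntimeException(e);
         }
      }
   });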

We can use a BlobFault similar to what you wrote as a way to initialize BlobHolder with an object qualifier (so that it knows how to fetch itself).
