Hi,

I'm a developer on the Apache Jackrabbit project, and in the process of
making it clusterable we're facing a serious problem: all nodes in the
cluster share the repository data, and we use Derby as the database
back end, running as a standalone server. When retrieving large binary
data through the Derby client driver, I get OutOfMemoryErrors. The
"Derby Network Client" resource paper says:

 "The client driver fully materializes LOBS when the row is retrieved"

which would of course explain those errors when the data gets too
large. Is this limitation being addressed? Or is there some other way
of retrieving binary data from a standalone Derby database that
bypasses it?
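
To make this concrete, here is a minimal sketch of the kind of access
we do; the JDBC URL, table, and column names are simplified
placeholders, not the actual Jackrabbit schema:

    import java.io.InputStream;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class BlobReadExample {
        public static void main(String[] args) throws Exception {
            // Load the Derby network client driver and connect to the
            // standalone server (URL is a placeholder).
            Class.forName("org.apache.derby.jdbc.ClientDriver");
            Connection con = DriverManager.getConnection(
                    "jdbc:derby://localhost:1527/repository");
            PreparedStatement stmt = con.prepareStatement(
                    "SELECT DATA FROM BINVAL WHERE ID = ?");
            stmt.setString(1, "some-large-binary-id");
            ResultSet rs = stmt.executeQuery();
            if (rs.next()) {
                // We read the value as a stream precisely to avoid holding
                // it all in memory, but with the client driver the whole LOB
                // is apparently materialized as soon as the row is fetched,
                // so the error already occurs before we can stream anything.
                InputStream in = rs.getBinaryStream(1);
                byte[] buffer = new byte[8192];
                while (in.read(buffer) != -1) {
                    // copy the chunk to wherever it is needed
                }
                in.close();
            }
            rs.close();
            stmt.close();
            con.close();
        }
    }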

Thanks
Dominique
