I have a model with, say, a one-to-many relationship, where there may
be an enormous number of child records.  I see that there is thorough
documentation treatment on the subject of reading such objects FROM
the database, e.g. the fetch=FetchType.LAZY attribute and/or @LRS
(large result set), etc., but these seem to only optimize READING.
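
To make it concrete, here is roughly the kind of mapping I mean (the
entity and field names are placeholders of my own, not real code):

  import java.util.ArrayList;
  import java.util.Collection;
  import javax.persistence.CascadeType;
  import javax.persistence.Entity;
  import javax.persistence.FetchType;
  import javax.persistence.GeneratedValue;
  import javax.persistence.Id;
  import javax.persistence.ManyToOne;
  import javax.persistence.OneToMany;
  import org.apache.openjpa.persistence.LRS;

  @Entity
  public class Master {
      @Id @GeneratedValue
      private long id;

      // LAZY defers loading children until the collection is touched,
      // and OpenJPA's @LRS streams the backing result set instead of
      // materializing every child at once -- but as far as I can tell
      // both only help on the READ side.
      @OneToMany(mappedBy = "master", cascade = CascadeType.ALL,
                 fetch = FetchType.LAZY)
      @LRS
      private Collection<Child> children = new ArrayList<>();

      public Collection<Child> getChildren() { return children; }
  }

  @Entity
  class Child {
      @Id @GeneratedValue
      private long id;

      @ManyToOne
      private Master master;

      public void setMaster(Master m) { this.master = m; }
  }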

My concern is how I can achieve a similar, converse, optimization
when WRITING, i.e. inserting INTO the database.  For example, as I
understand it, once I persist the master object (enhanced, of
course), the collection field that holds the set of child records
uses transitive persistence to automatically write a child record
whenever a child object is added to the collection.  But here's the
thing: if I'm just doing an initial load of the master object and all
its children, and there could be a million children, I don't want
each newly added child to stay around in memory (in the master's
collection field) once it has been persisted via transitive
persistence; otherwise I'll run out of memory.
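
For illustration, the initial load would look something like this
sketch, using the placeholder entities above (nextChild() is just a
stand-in for wherever the data actually comes from):

  import javax.persistence.EntityManager;
  import javax.persistence.EntityTransaction;

  public class BulkLoad {

      void load(EntityManager em) {
          EntityTransaction tx = em.getTransaction();
          tx.begin();

          Master master = new Master();
          em.persist(master);

          for (int i = 0; i < 1_000_000; i++) {
              Child child = nextChild();   // stand-in data source
              child.setMaster(master);
              // Transitive persistence will INSERT this child at
              // flush/commit, but the instance stays managed -- and
              // referenced by the master's collection -- until then,
              // so heap use grows with every iteration.
              master.getChildren().add(child);
          }

          tx.commit();
      }

      private Child nextChild() { return new Child(); } // placeholder
  }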

Is there some mechanism to have child objects be removed from memory
once persisted?

Thanks,

Chris
