"Husek, Paul" <[EMAIL PROTECTED]> schrieb am 11.03.2005 17:01:45:
> I've been using Torque for almost a year now and am very happy with it.
> Recently though I found something that confuses me.
>
> All along I've been deleting all History books like:
>
> Criteria c = new Criteria();
> c.add(BookPeer.TYPE, "HISTORY");
> BookPeer.doDelete(c);
>
> And it works fine. But recently I tried this when there were over 100,000
> history books. I was greeted with a Java "out of memory" error. Is Torque
> trying to load all records before deleting each of them?
Yes, it does. This is the code from BasePeer.doDelete(Criteria criteria,
Connection con):
    tds.where(sqlSnippet);
    tds.fetchRecords();

    if (tds.size() > 1 && criteria.isSingleRecord())
    {
        handleMultipleRecords(tds);
    }

    for (int j = 0; j < tds.size(); j++)
    {
        Record rec = tds.getRecord(j);
        rec.markToBeDeleted();
        rec.save();
    }
> Why would it?
I am not sure about this. It seems that in its early days, Torque relied
heavily on the village library, and this is the way village handles
deletes. Not a convincing explanation, though. Perhaps some old Torque guru
can think of another reason...
> Is there a workaround?
>
I can think of two workarounds: either patch Torque, or build the SQL
yourself and run it with BasePeer.executeStatement(String).
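For the second option, here is a minimal sketch. Only
BasePeer.executeStatement(String) comes from Torque; the buildDelete helper
and the "book" table/column names are assumptions for illustration, and a
real implementation should use a PreparedStatement rather than string
concatenation to avoid SQL injection:

```java
// Sketch of the raw-SQL workaround: build a DELETE statement and hand it
// to Torque's BasePeer.executeStatement(String), so that no records are
// loaded into memory first. Helper and names are illustrative assumptions.
public class DeleteSketch {

    // Builds "DELETE FROM <table> WHERE <column> = '<value>'".
    // (Real code should bind the value via a PreparedStatement instead.)
    static String buildDelete(String table, String column, String value) {
        return "DELETE FROM " + table
                + " WHERE " + column + " = '" + value + "'";
    }

    public static void main(String[] args) {
        String sql = buildDelete("book", "TYPE", "HISTORY");
        System.out.println(sql);
        // With Torque on the classpath this would then be:
        // BasePeer.executeStatement(sql);
    }
}
```

This deletes all matching rows in a single statement on the database side,
so memory use no longer grows with the number of rows.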
I will ask on the dev list if anybody can think of a reason why the records
are loaded before they are deleted. If nobody has a reason for it, chances
are good that it will be changed.
Thomas
---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]