On Jul 30, 2008, at 00:19, Jeremias Maerki wrote:

On 28.07.2008 21:03:09 Andreas Delmelle wrote:

As Jeremias already noted, no obvious bugs present. All FOP does is
open a standard InputStream and properly close it when it is no
longer needed. After that, it's up to the InputStream implementation
to release the native file handle. It could be that this last step
does not occur until the InputStream itself is cleared by the GC. So,
it's possible that 'arbitrarily' simply means 'upon the next implicit
or explicit garbage collection'.

I consider it highly unlikely (and unwise) that this is implemented like that. Resources need to be released ASAP. That's what InputStream.close() is there for.
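
No argument there. Just so we're talking about the same thing, the pattern in question boils down to the usual close-in-finally idiom. A minimal sketch, nothing FOP-specific (file name made up):

  import java.io.FileInputStream;
  import java.io.IOException;
  import java.io.InputStream;

  public class CloseAsap {
      public static void main(String[] args) throws IOException {
          InputStream in = new FileInputStream("input.fo");
          try {
              byte[] buf = new byte[4096];
              int n;
              while ((n = in.read(buf)) != -1) {
                  // consume the n bytes read
              }
          } finally {
              // release the stream (and, one would hope, the native
              // file handle) immediately, instead of waiting for
              // finalization by the GC
              in.close();
          }
      }
  }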

Well, I'm not passing judgment, but I'm not ruling it out either: I have seen bug reports in those areas where people actually suggested tying the clearing of native file handles to the GC cycle (to release handles held due to deleteOnExit() where the file had already been deleted).

Besides, as I suspected, and Mathieu seemed to confirm (?), on *nix-based platforms this is not really a problem. I'm assuming that is because 'concurrent access to resources' is something that is already handled at the lowest levels of the kernel and filesystem. IIRC, on Unix it is possible to create a file, write to it, then delete it, without the file ever having existed physically on disk (if you're fast enough ;-)). On *nix you'll never notice, but on Windows one does notice that writing to a file *always* implies disk I/O.
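
A little experiment to illustrate that platform difference. A quick sketch only; the exact outcome depends on the OS and filesystem:

  import java.io.File;
  import java.io.FileOutputStream;
  import java.io.IOException;

  public class UnlinkWhileOpen {
      public static void main(String[] args) throws IOException {
          File f = File.createTempFile("fop-", ".tmp");
          FileOutputStream out = new FileOutputStream(f);
          try {
              out.write("written before the unlink".getBytes());
              // On *nix this returns true: the name is gone, but the
              // data stays reachable through the open handle until
              // close(). On Windows it typically returns false while
              // the handle is still held.
              System.out.println("deleted while open: " + f.delete());
              out.write(" -- still writable afterwards on *nix".getBytes());
          } finally {
              out.close();
          }
      }
  }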

Anyway, unfortunately it seems like it's not something we can
directly help you with. The only workaround may even be to explicitly
ask for Runtime.gc() after every run. Then again, I'm wondering
whether this could be a showstopper when running multiple
simultaneous FOP-sessions in the same environment. :-S

I don't think triggering garbage collection will help at all.
Runtime.gc() is not even guaranteed to initiate a GC run.

In theory indeed, but in practice the next run will follow ASAP after the call has been made. /Especially/ so on Windows, where thread-priority sometimes appears to be completely absent, or at least the deeper levels of the kernel seem to consider it a joke. What does it matter if you assign a priority to processes if, at the lowest levels, the code does not seem to take them into account? Why does this GUI 'hang' every once in a while, no matter how much RAM or how many CPUs you have at your disposal? Why does the system suddenly become unresponsive when you transfer large files? Because at some point, assumptions are made about priorities that seem very wise -- from a human point-of-view. Not if you think like a machine, though... ;-)
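
But to come back to the workaround: in code it would amount to no more than this (a sketch; whether the hint is honoured is entirely up to the VM):

  public class GcNudge {
      public static void main(String[] args) {
          // ... run the FOP transformation here ...

          // Both calls below are merely hints; the VM is free to
          // ignore them. In practice, though, a collection tends to
          // follow promptly.
          Runtime.getRuntime().gc();
          Runtime.getRuntime().runFinalization();
      }
  }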


OK, 'nuff ranting done. Back to work.

Cheers

Andreas
