Hi David,

Okay, I see what you're saying now -- I assumed that OS X was accurately reporting the size of Finale's temp files, whereas Windows was just giving you zero KB for the whole temp-files folder. But what you say makes sense. I would like some confirmation from Coda about this, though. Seriously, I wouldn't be surprised if the temp files were small enough to be stored in RAM, and the whole "temp file" system was just an inefficient holdover from Finale's earlier incarnations. (I mean, think of all the other aspects of Finale that meet that description!)

>> (1) Segregate the temp files so that they are not shared between
>> multiple documents.

> I've never quite understood the logic of the explanation we were long ago given for the combined temp files, as it's an optimization for something I never do (I just never copy data between files, or maybe once every 6 months).

I do it a little more frequently, but it's certainly not something I do every day. And I would *much* prefer waiting a little longer when copying between files to the current situation of taking a massive gamble every time I want to have multiple simultaneous files open -- which *is* something I do every day. (In fact, it astounds me that so many people on this list never have multiple documents open simultaneously.)
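
Just to make concrete what I mean by segregating the temp files, here's a hypothetical sketch in Python -- invented names, obviously not Coda's actual code. The idea is simply that each open document owns a private scratch file that no other document can touch:

    import os
    import tempfile

    # Hypothetical sketch only -- not Coda's actual scheme.  Each open
    # document gets its own private scratch file, so one document's
    # writes can never land in another document's temp data.
    class OpenDocument:
        def __init__(self, title):
            self.title = title
            # mkstemp guarantees a unique file per document
            fd, self.scratch_path = tempfile.mkstemp(
                prefix="fin-%s-" % title, suffix=".tmp")
            self.scratch = os.fdopen(fd, "r+b")

        def close(self):
            self.scratch.close()
            os.remove(self.scratch_path)

Inter-document copying would then have to read from one document's scratch file and write into another's, which is presumably the slowdown Coda is worried about.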


> No, it's because you now have good disk caching, so that the temp
> files are in RAM already (in the disk cache).

Ah, I see now. I guess that hadn't occurred to me -- probably because Finale's performance in Mac OS X is already so poor that I found it difficult to entertain the idea that things could be any *worse*.


Cheers,

- Darcy
-----
[EMAIL PROTECTED]
Brooklyn, NY
On 21 Jan 2005, at 7:45 PM, David W. Fenton wrote:

On 21 Jan 2005 at 19:32, Darcy James Argue wrote:

> On 21 Jan 2005, at 7:10 PM, David W. Fenton wrote:

>> I don't know what the actual filespace usage of Finale temp files on
>> disk is.

> Mine are typically 11-15 MB total, which is practically nothing. It seems silly to write files that small to disk, instead of to RAM.

That's about what Windows reports, too, but, as I explained, what your shell reports may or may not indicate how much file space is actually being used. Your shell may report the size of a file as 0 bytes when it's actually taking up hundreds of MB (if it's in the middle of being written to in a single operation, the reported size won't be updated until the whole write operation completes).
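
(For illustration, here's a rough Python sketch of the effect -- the file name is made up. Data parked in a userspace write buffer hasn't reached the file system yet, so a size check taken mid-write under-reports:)

    import os

    # Rough sketch: an 8 MB write parked in a 16 MB userspace buffer
    # hasn't reached the file system yet, so the reported size lags.
    path = "scratch.tmp"  # made-up name
    with open(path, "wb", buffering=16 * 1024 * 1024) as f:
        f.write(b"\0" * (8 * 1024 * 1024))                 # still buffered
        print("mid-write size:", os.path.getsize(path))   # typically 0
        f.flush()
        print("post-flush size:", os.path.getsize(path))  # 8388608
    os.remove(path)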

>> So, as I said, we don't really know what the actual filespace usage
>> of Finale's temp files really is.

>> So, my assumption is that Finale is solving the same kinds of
>> problems in temp files instead of in RAM precisely because the space
>> needed is quite huge, too large for standard RAM configurations.

> The space needed is trivial -- like I said, 11-15 MB. Temp files in Finale are simply a legacy from the days when 15 MB of RAM *was* a big deal. That's the only reason these are temp files and not written to RAM by default.

You're relying on information that is not necessarily accurate.

When Jet is writing its 100 MB files, the file system doesn't reflect
that until the file is finished writing, and in most cases the file
is deleted immediately after it's been used. That means that unless
you look in the brief window between when the file is written and
when it's deleted, you'll never see the *actual* disk space being
used.

So, you really don't *know* whether the amount of space the temp
files are using is trivial or not.
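
If you really wanted to know, you'd have to poll -- something like this rough Python sketch (the temp-file location is an assumption on my part; point it at wherever Finale actually keeps its temp files):

    import glob
    import os
    import time

    # Rough sketch: sample the temp directory every 100 ms and keep the
    # peak, since a short-lived balloon in file size is invisible to a
    # one-off directory listing.  Stop it with Ctrl-C.
    PATTERN = "/tmp/finale/*.tmp"   # assumed location, not Finale's real one
    peak = 0
    while True:
        total = 0
        for p in glob.glob(PATTERN):
            try:
                total += os.path.getsize(p)
            except OSError:          # file vanished between listing and stat
                pass
        peak = max(peak, total)
        print("now: %d bytes, peak: %d bytes" % (total, peak))
        time.sleep(0.1)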

>>> (Of course, they could also just fix the problem where the window
>>> handle/ID starts to map to the wrong Enigma Doc ID, but apparently
>>> that's easier said than done.)

>> Storing temp files in RAM (which is really not using temp files at all, of course) would do *nothing* to fix the problem if the problem is messed up pointers.

> No, but segregating temp files for each document -- instead of having the same temp file map to all open documents -- *would* solve the problem. . . .

True, but it has zilch to do with where the temporary data is stored (on disk or in RAM).

> . . . *That's* what I'm advocating -- or at least, that's a
> possibility I would like Coda to investigate.  Their stated reason for
> not doing so is that it would make inter-document copying slower.  I'm
> suggesting they try using segregated temp files for each document, but
> store them in RAM to make them faster.  Maybe then the speed would
> remain about the same, and we wouldn't have the corruption problems
> we're having now.

And I'm suggesting that the kind of database operations that the temp files are being used for actually take up orders of magnitude more disk space than you ever know about, and that this is why the files are *not* stored in RAM already.

> I'm perfectly aware that storing temp files in RAM *alone* would do
> nothing to fix the problem.  But I'm suggesting a (possible)
> two-pronged solution:
>
> (1) Segregate the temp files so that they are not shared between
> multiple documents.

I've never quite understood the logic of the explanation we were long ago given for the combined temp files, as it's an optimization for something I never do (I just never copy data between files, or maybe once every 6 months).

> (2) Store the temp files in RAM to alleviate the performance issues
> caused by (1).

My bet is that this would cause more performance problems than it would solve. If the temp files truly were that tiny, why in the world wouldn't those temporary operations be done in RAM already? It makes no sense to rely on the file system for working with data structures that would easily fit in RAM.

So, I can only conclude that these temp files are just stumps (the
smallest size the files ever get), and that when actively in use they
balloon to a much larger size and then shrink back to the stump size
as soon as whatever process was using them is complete.

> I have no idea if that's a workable solution, but I would like Coda to
> at least *try* it.  Since the problem with the corrupted pointers is
> proving elusive, maybe it's time to address the root of the problem --
> shared temp files.  If the same temp files weren't shared by multiple
> documents, then the file overwrite bug would never occur.  (Or at
> least, that's what Jari seems to be saying.)

I would agree on the shared temp files.

I disagree on moving temp files to RAM -- no programmer would write
to disk data that can easily be processed in RAM, so I can only
conclude that the amount of data being processed in the temp files is
actually much larger than is ever reported by the shells we are using
to examine the size of the temp files.

If all you're doing is changing whether the pointers point to files
or pages in RAM, you're not actually addressing the fundamental
problem at all.

You may get a performance increase.

You may not.

> Back in OS 9, we got a *dramatic* performance increase by storing temp files on RAM disks.

> This doesn't happen with the RAM disks available to us in OS X -- I
> suspect because of poor implementation.  (Also, I don't even know of a
> RAM disk that works with 10.3.  Most only support 10.2 and lower.)

No, it's because you now have good disk caching, so that the temp files are in RAM already (in the disk cache).

And that's another argument as to why moving the temp files to RAM
wouldn't result in much of a performance increase -- they are already
in RAM.
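
(A rough Python sketch of the cache effect, with a made-up file name -- repeated reads of the same file come out of the OS's disk cache in RAM, not off the disk, which is why a RAM disk buys so little on a modern OS:)

    import os
    import time

    # Rough sketch: write 32 MB, then read it back twice.  The reads are
    # served from the OS's disk cache -- in fact, since we just wrote the
    # file, even the first read is probably cached already; the gap is
    # starker on a file that hasn't been touched since boot.
    path = "cache_demo.dat"   # made-up name
    with open(path, "wb") as f:
        f.write(os.urandom(32 * 1024 * 1024))

    for label in ("first read", "second read"):
        t0 = time.time()
        with open(path, "rb") as f:
            f.read()
        print("%s: %.3f s" % (label, time.time() - t0))
    os.remove(path)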

--
David W. Fenton                        http://www.bway.net/~dfenton
David Fenton Associates                http://www.bway.net/~dfassoc

_______________________________________________
Finale mailing list
Finale@shsu.edu
http://lists.shsu.edu/mailman/listinfo/finale

