I just finished catching up on the discussion, and I keep coming back to the 
fragile nature of relying on retainCount. For now you do, indeed, have the 
option to vaporize your toes; later you may not have access to a ray gun at 
all, if Apple decides that it's in the best interests of all concerned that 
developers retain their toes, not their objects.

I don't know what your app architecture looks like, but I was wondering if you 
could build in a supportable way of managing what's in the cache with no more 
than a few localized changes.

One thought I had was to base all your cacheable objects on a class whose sole 
function is to notify the cache when the last reference goes away, i.e., when 
dealloc is called. If the cache kept track of all cached objects using an 
NSMapTable with weak references, then the cache itself wouldn't affect the 
retain count, so you'd never have to look at it. Reads from and writes to the 
NSMapTable would need to be protected for consistency. Something along the 
lines of the sketch below, perhaps.
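Very roughly, I'm picturing something like this (a sketch only; the class, 
method, and queue names are all made up, and a serial queue is just one way to 
get the consistency protection I mentioned):

#import <Foundation/Foundation.h>

// Hypothetical base class for cacheable document model objects.
@interface MyCacheableObject : NSObject
@property (nonatomic, copy) NSUUID *objectUUID;
@end

// The cache tracks objects weakly, keyed by UUID, so it never affects
// anyone's retain count.
@interface MyObjectCache : NSObject
+ (instancetype)sharedCache;
- (void)noteObject:(MyCacheableObject *)object;       // when an object comes into RAM
- (__kindof MyCacheableObject *)objectForUUID:(NSUUID *)uuid;
- (void)noteDeallocationOfUUID:(NSUUID *)uuid;        // called from -dealloc
@end

@implementation MyObjectCache {
    NSMapTable<NSUUID *, MyCacheableObject *> *_objects; // strong keys, weak values
    dispatch_queue_t _queue;                              // serializes access for consistency
}

+ (instancetype)sharedCache {
    static MyObjectCache *cache;
    static dispatch_once_t once;
    dispatch_once(&once, ^{ cache = [[self alloc] init]; });
    return cache;
}

- (instancetype)init {
    if ((self = [super init])) {
        _objects = [NSMapTable strongToWeakObjectsMapTable];
        _queue = dispatch_queue_create("object-cache", DISPATCH_QUEUE_SERIAL);
    }
    return self;
}

- (void)noteObject:(MyCacheableObject *)object {
    NSUUID *uuid = object.objectUUID;
    dispatch_sync(_queue, ^{ [self->_objects setObject:object forKey:uuid]; });
}

- (__kindof MyCacheableObject *)objectForUUID:(NSUUID *)uuid {
    __block MyCacheableObject *object;
    dispatch_sync(_queue, ^{ object = [self->_objects objectForKey:uuid]; });
    return object;   // nil means it has to be reloaded (or faulted in) from file storage
}

- (void)noteDeallocationOfUUID:(NSUUID *)uuid {
    dispatch_sync(_queue, ^{ [self->_objects removeObjectForKey:uuid]; });
}

@end

@implementation MyCacheableObject

- (void)dealloc {
    NSUUID *uuid = self.objectUUID;
    if (uuid) {
        // The weak entry in the cache has already been zeroed by this point;
        // this just removes the stale key. Passing only the UUID avoids
        // creating any new strong reference to self during deallocation.
        [[MyObjectCache sharedCache] noteDeallocationOfUUID:uuid];
    }
}

@end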

Does this sound like it could work?


> On May 19, 2015, at 12:55 AM, Britt Durbrow 
> <bdurb...@rattlesnakehillsoftworks.com> wrote:
> 
>> On May 18, 2015, at 8:59 PM, Quincey Morris 
>> <quinceymor...@rivergatesoftware.com> wrote:
>> 
>> Let me try and summarize where we are, in response to the several recent 
>> suggestions:
> 
> Close, but not quite:
> 
> I have several apps which are built on an existing, fully functional data 
> management layer that was originally built for a full desktop virtual-memory 
> environment (FWIW, it pre-dates the existence of Core Data). I now need to 
> move some of this to run under iOS. I also don’t want to make changes to this 
> data management layer that would require widespread changes to my other apps 
> that rely on it.
> 
> The objects in question are document model objects; retrieving them from 
> file storage is indeed non-trivial, and so should be minimized when possible. 
> It is not, however, the end of the world if some of them get evicted from 
> RAM and then have to be reloaded later (such is life on iOS).
> 
> Because these are document model objects, they must be unique in RAM and in 
> file storage, and maintain their identity across RAM/file-storage 
> transitions. If an object is evicted from RAM, its contents are written to 
> its file storage first, and later, when it is recreated in RAM, those 
> contents are reloaded from the file. I am using the UUID of each object to 
> establish the object’s identity across these transitions. However, the object 
> graph while in RAM references other in-memory objects by pointer, not UUID 
> (changing this basic fact of the system would require widespread rewrites of 
> all the applications that use this data management system, so I consider it 
> infeasible).
> 
>> In addition, because of the overhead of managing the uniqueness of the 
>> UUIDs, it’s too expensive to create new objects regularly on demand. The 
>> purpose of the cache is to extend the lifetime of otherwise unreferenced 
>> objects so that they can be reused rather than reincarnated. It does this by 
>> taking ownership of (retaining, keeping a strong reference to, however you 
>> want to think of it) every object in the cache.
> 
> It’s not maintaining the UUIDs as unique that makes for the expense; it’s the 
> loading and unloading of the document model objects that does so, and to a 
> lesser extent, keeping the object graph coherent. The objects in the system 
> are not interchangeable or reusable; each one must maintain its identity 
> even if its in-memory pointer address changes over time. The file storage 
> system also enforces this identity by storing and retrieving objects by UUID.
> 
>> This means that objects can exist in memory in one of these states:
> 
> The way I see it, any given object can be in one of these states:
> 
> * Not in RAM at all; only in file storage (and stored under its UUID)
>       
> * In RAM as a fault object. Faults are (as in Core Data) proxies for 
> objects in file storage that reserve RAM for the object but don’t have any 
> of the object’s contents loaded yet. Because they don’t have any contents, 
> they also don’t store any links to other objects in RAM. When any message is 
> sent to a fault object, it causes the object’s contents to be loaded from 
> file storage and the class to be swizzled back to its actual class (this 
> may in turn cause the creation of other fault objects in RAM for objects that 
> are not in RAM but referenced by the fault’s contents).
> 
> * In RAM as a fully “inflated” object. This is a regular Objective-C object, 
> with ivars, pointers, methods, and all the associated stuff.
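(Butting in on the fault description for a moment: a fault along those lines 
might be sketched roughly as below. Everything here is hypothetical, in 
particular the storage calls, and it assumes the fault is created from an 
allocation of the real class, so the memory is already big enough for the real 
ivars; that is what makes swizzling in place safe and keeps pointer identity 
intact.)

#import <Foundation/Foundation.h>
#import <objc/runtime.h>

// Hypothetical shared base for document model objects. The fault subclass
// adds no ivars of its own, so its layout is a prefix of every real subclass.
@interface MyModelObject : NSObject
@property (nonatomic, copy) NSUUID *objectUUID;
@property (nonatomic) Class realClass;   // remembered while faulted
- (void)populateFromStoredContents:(NSDictionary *)contents;
@end

@implementation MyModelObject
- (void)populateFromStoredContents:(NSDictionary *)contents {
    // Real subclasses override this to rebuild their ivars (and possibly to
    // create further faults for objects they reference).
}
@end

// Hypothetical storage facade; the real one would talk to file storage.
@interface MyFileStore : NSObject
+ (NSDictionary *)contentsForUUID:(NSUUID *)uuid;
@end

@implementation MyFileStore
+ (NSDictionary *)contentsForUUID:(NSUUID *)uuid {
    return @{};   // stub for the sketch only
}
@end

@interface MyFault : MyModelObject
@end

@implementation MyFault

// Turn a full-size allocation of the real class into a fault. Used both when
// creating a placeholder for a referenced-but-unloaded object and when
// evicting an inflated object after its contents have been written out.
+ (instancetype)faultFromObject:(MyModelObject *)object {
    object.realClass = object_getClass(object);
    object_setClass(object, self);
    return (MyFault *)object;
}

- (void)fireFault {
    NSDictionary *contents = [MyFileStore contentsForUUID:self.objectUUID];

    // Swizzle back to the real class in place: every existing pointer to this
    // object stays valid, and its UUID identity never changes.
    object_setClass(self, self.realClass);
    [self populateFromStoredContents:contents];
}

// Any message the fault can't answer itself fires the fault first, then
// re-dispatches to what is now the fully inflated object.
- (NSMethodSignature *)methodSignatureForSelector:(SEL)selector {
    NSMethodSignature *signature = [super methodSignatureForSelector:selector];
    return signature ?: [self.realClass instanceMethodSignatureForSelector:selector];
}

- (void)forwardInvocation:(NSInvocation *)invocation {
    [self fireFault];
    [invocation invokeWithTarget:self];
}

@end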
> 
> Additionally, the objects in RAM (fault or inflated) can be in one of these 
> states:
> 
> * Referenced. This is as you stated - something has a strong link to it (be 
> it in the graph of document objects, or some other object in the system, or 
> some variable on the stack someplace).
> 
> * Unreferenced. Also as you stated… except:
> 
> I’m not drawing a distinction between Referenced and Inaccessible. If there 
> is a link to it somewhere in the system, my reasoning goes, the object 
> pool controller shouldn’t let it go yet.
> 
> Obviously, if a given object is not in RAM, it falls in the Unreferenced 
> category, as there can’t be any links to it, strong or weak. :-) 
> 
>> I don’t have any other plausible housekeeping reasons, but I do know that 
>> “No one else should be using this object right now” is a common prelude to a 
>> self-inflicted injury.
> 
> I am aware that if I do use a retainCount-based system, I’m 
> aiming a ray gun at my foot and trying to shoot between my toes without 
> vaporizing them… :-)
> 
>> The easiest solution to conceptualize is to give the cache both a strong 
>> (owning) and a weak reference to each object. Then, to purge the cache of 
>> unreferenced objects, all that’s necessary is to nil the strong references. 
>> Thereafter the unreferenced objects will, we hope, become deallocated, 
>> though some may remain merely inaccessible. In particular, any objects 
>> referred to by an autorelease pool won’t get deallocated until the pool is 
>> drained. Once that’s done, as far as the app’s concerned there should be no 
>> more inaccessible objects, but in reality we don’t know this for sure — vid. 
>> "housekeeping reasons" — and we have no valid way of *reasoning* them out of 
>> existence.
> 
> Nitpick: if the system has any strong link to an object somewhere, it won’t 
> be immediately evicted from the object pool controller’s weak-pointer 
> collection; that will only happen when the system no longer holds those 
> strong links (FWIW, this is one of the reasons I am not drawing a 
> distinction between Referenced and Inaccessible).
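For concreteness, the strong-plus-weak bookkeeping Quincey describes might 
look something like the sketch below (names hypothetical, locking omitted for 
brevity). The strong table is the cache's ownership, the weak table is how it 
remembers identity without ownership, and purging empties the strong table, 
lets an autorelease pool drain, then re-owns whatever is still reachable.

#import <Foundation/Foundation.h>

// Sketch of the two-reference cache (hypothetical names; thread safety omitted).
@interface MyPurgeableCache : NSObject
- (void)addObject:(id)object forUUID:(NSUUID *)uuid;
- (void)purgeUnreferencedObjects;
@end

@implementation MyPurgeableCache {
    NSMapTable<NSUUID *, id> *_strongRefs; // keeps otherwise-unreferenced objects alive
    NSMapTable<NSUUID *, id> *_weakRefs;   // remembers identity without ownership
}

- (instancetype)init {
    if ((self = [super init])) {
        _strongRefs = [NSMapTable strongToStrongObjectsMapTable];
        _weakRefs   = [NSMapTable strongToWeakObjectsMapTable];
    }
    return self;
}

- (void)addObject:(id)object forUUID:(NSUUID *)uuid {
    [_strongRefs setObject:object forKey:uuid];
    [_weakRefs setObject:object forKey:uuid];
}

- (void)purgeUnreferencedObjects {
    // Call this at a point where any autorelease pools that might still hold
    // cached objects have already drained (say, at the top of a run-loop turn);
    // the nested pool below only covers references created inside this method.
    @autoreleasepool {
        // 1. Drop our ownership. Anything nobody else references can now go
        //    away, and its entry in the weak table goes nil.
        [_strongRefs removeAllObjects];

        // 2. Whatever is still reachable through the weak table survived
        //    because somebody else references it; take ownership of it again.
        for (NSUUID *uuid in [[_weakRefs keyEnumerator] allObjects]) {
            id object = [_weakRefs objectForKey:uuid];
            if (object) {
                [_strongRefs setObject:object forKey:uuid];
            }
        }
    }
}

@end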
> 
>> So, I’ve been trying to sell the idea that you only need one pointer — the 
>> weak one — and that the full effect of the strong pointer can be obtained by 
>> merely incrementing the retain count (once) when an object enters the cache. 
>> To purge the cache, decrement all the retain counts, drain the autorelease 
>> pools, then increment them again. There are a couple of ways of doing this 
>> in an ARC environment, but I (in a similar situation) use CFRetain (or 
>> perhaps CFBridgingRetain, can’t remember) because I’ve found it most 
>> readable.
>> 
>> Britt’s hesitant about this solution, too, because it apparently reasons 
>> about retain counts, which we’re told not to do, and I seem to have just 
>> finished saying so too. I claim that there’s no contradiction. It’s valid in 
>> this situation, because the only reasoning we’re doing is about *our* 
>> ownership reference, not about any others that we don’t control. Indeed, 
>> we’re not so much reasoning, as relying on the definition of ownership.
> 
> Ahh-HA! Now I get what you meant!
> 
> Hmm… I’ll have to think about the implementation details.
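For the record, the one-extra-retain variant Quincey describes would, as I 
understand it, look roughly like the sketch below (hypothetical names, locking 
omitted). I'm not exactly endorsing it, given my reservations above about 
reasoning from retain counts, but the only reasoning here is about the cache's 
own single ownership reference: one CFRetain when an object enters the cache, 
balanced and then re-taken during a purge.

#import <Foundation/Foundation.h>

// Sketch of the single-weak-table variant: the cache stores only weak
// references; its ownership is expressed as exactly one extra retain per
// cached object, taken with CFRetain.
@interface MyCountedCache : NSObject
- (void)addObject:(id)object forUUID:(NSUUID *)uuid; // assumes each object is added once
- (void)purgeUnreferencedObjects;
@end

@implementation MyCountedCache {
    NSMapTable<NSUUID *, id> *_weakRefs;
}

- (instancetype)init {
    if ((self = [super init])) {
        _weakRefs = [NSMapTable strongToWeakObjectsMapTable];
    }
    return self;
}

- (void)addObject:(id)object forUUID:(NSUUID *)uuid {
    [_weakRefs setObject:object forKey:uuid];
    CFRetain((__bridge CFTypeRef)object);   // the cache's one ownership reference
}

- (void)purgeUnreferencedObjects {
    NSArray<NSUUID *> *uuids = [[_weakRefs keyEnumerator] allObjects];

    // 1. Give back our single retain on every cached object, inside a local
    //    pool so autoreleased references created here also go away.
    @autoreleasepool {
        for (NSUUID *uuid in uuids) {
            id object = [_weakRefs objectForKey:uuid];
            if (object) {
                CFRelease((__bridge CFTypeRef)object);
            }
        }
    }
    // Objects nobody else referenced have now deallocated, and their weak
    // entries are nil. (Anything sitting in an outer autorelease pool will
    // hang on until that pool drains; the caller has to arrange for that.)

    // 2. Re-take ownership of the survivors and tidy up the stale keys.
    @autoreleasepool {
        for (NSUUID *uuid in uuids) {
            id object = [_weakRefs objectForKey:uuid];
            if (object) {
                CFRetain((__bridge CFTypeRef)object);
            } else {
                [_weakRefs removeObjectForKey:uuid];
            }
        }
    }
}

@end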
> 
> I’m still a bit hesitant about using the weak-link system, owing to the 
> overhead involved (there will potentially be hundreds of thousands of objects 
> in the documents this thing is supposed to handle; fortunately, they don’t 
> all have to be in RAM at once!). I guess I’m going to have to do some testing 
> and profiling tomorrow…
> 
>> I think the memory pressure we’re talking about here is something like a 
>> low-memory warning because *other* apps are trying to use all the memory and 
>> we’re trying to be helpful. We’re not really talking about the case where 
>> the cache itself is absolutely too big.
> 
> Um… actually, we are. The documents are large, much larger than available 
> memory on iOS. The engine is also supposed to be able to handle spaghetti-like 
> graph structures (that’s inherent to the problem space some of the apps 
> that use this engine work in), so the memory management needs to be 
> algorithmic, not heuristic (although I’m going to give it as many hints as 
> possible to try to maximize performance).
> 
> 
> **************************************************
>> I'm going to butt in here and say that if you've got so many objects that it 
>> is causing memory pressure, you really just need to reevaluate and blow up 
>> your model. 
>> Consider using a database or Core Data. 
> 
> 
> Changing the overall model is infeasible (this might have been possible 12 
> years ago, but by now I’ve got just way too much stuff that depends on it). 
> However, FWIW, the engine does use SQLite as the low-level storage system. 
> 
> 
> P.S. - I really appreciate the help of everybody who has slogged through this 
> mess with me! :-)

