Dale is pretty darn close, so I'll only hit on the differences, and touch on a few other emails I have seen in this thread. :)
* We do garbage collection on the asset system. We scan for references to other assets (and anything that looks like an asset ID, etc.) and move unreferenced objects off to the side. After some period of no requests for those assets, they are deleted. This is relatively effective at our current rate of asset creation; the actual growth rate of the asset server is well within reason at this point and is not unbounded. (I say "current" and "at this point" because this hasn't always been the case, and I've been involved in more than a couple of projects to reduce the rate at which we create new assets. :) )

* A large part of the "new asset for every save" is pure legacy. The system started out and was designed as "write once", and there are a lot of optimizations you can make if you know a specific UUID will always point to the exact same thing. Now our system is large and complex. We have worked on rewritable assets for various reasons (usually attachment related), but these projects are difficult and complicated and tend to break things you wouldn't expect.

* Content creators tend to think of notecards because notecards are what we have, and what we can store manually entered data into and read back from. They aren't optimal, though, even if we ignore the show-stopping asset creation rate in the current system. Every time you read from a notecard the simulator must fetch that notecard from the asset server and load it into memory. Sure, it keeps an LRU cache of cards around so it doesn't fetch for *every* read, but this is hardly the right way to go about script data storage. Also, this data isn't exactly random access; I think reading a notecard via LSL line by line is an O(n^2) operation (see the sketch at the end of this message). And another point: how do you handle the inevitable race conditions as two scripts read and write the same asset? SVC-1406, which Argent mentions, is a good example of better out-of-the-box thinking.

* I *hate* the project name "memory limits". While it is true that part of this project will be limiting total sim-wide script memory available, we are likely talking about levels that already significantly degrade simulator performance. As a content creator I am *really* looking forward to "memory limits", because with it we can introduce dynamic memory sizes for individual scripts. Forget the project that needs 10 scripts, where half of each script is the communication glue to make them work together; instead, have one script that can use the memory it needs. I don't have all the details on the final design, and I'm sure it will be adjusted with every round of statistics we collect, but you could look at how we handle URLs for HTTP-In as a starting point.

In short, writable notecards just aren't going to happen. It would be a horrible hack anyway, with crappy performance. We already have llHTTPRequest, which is ideal for accessing and storing data on an external host; I have even seen services specifically for LSL that will store a small amount of data for free. Perhaps http_request, maybe specifically because it can do obj->obj, will open new options in 1.27 (on aditi now!). And so, probably, will the "script memory limits" project. Rough sketches of the notecard read, the llHTTPRequest approach, and the HTTP-In pattern follow below.
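To make the notecard point concrete, here is a rough sketch (untested, typed into mail) of the usual line-by-line notecard read in LSL; the notecard name "config" is just an example. Every line is a separate dataserver round trip, there is no way to seek, and of course there is no way to write anything back:

  string card = "config";   // example notecard name
  integer line = 0;
  key query;

  default
  {
      state_entry()
      {
          query = llGetNotecardLine(card, line);
      }

      dataserver(key id, string data)
      {
          if (id != query) return;
          if (data == EOF)
          {
              llOwnerSay("done reading " + card);
              return;
          }
          llOwnerSay("line " + (string)line + ": " + data);
          line += 1;
          query = llGetNotecardLine(card, line);   // one dataserver hop per line
      }
  }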
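And here is the llHTTPRequest direction I'd point people toward instead. The URL and the ?set=/?get= parameters below are made up for illustration; substitute whatever API your external host or storage service actually provides:

  string base_url = "http://example.com/lsldata";   // hypothetical storage service
  key fetch_req;

  default
  {
      touch_start(integer n)
      {
          // store a value (fire and forget)
          llHTTPRequest(base_url + "?set=score&value=" + llEscapeURL("42"),
                        [HTTP_METHOD, "GET"], "");
          // read it back
          fetch_req = llHTTPRequest(base_url + "?get=score", [], "");
      }

      http_response(key id, integer status, list meta, string body)
      {
          if (id == fetch_req && status == 200)
              llOwnerSay("stored value: " + body);
      }
  }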
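For the obj->obj case, the HTTP-In half looks roughly like this (again a sketch, not the final design): the object asks for a URL and then answers http_request events against it, which is exactly the kind of tiny read/write service people keep trying to fake with notecards. Another script reads or writes it with llHTTPRequest against the granted URL.

  string stored;   // the one value this object serves
  key url_req;

  default
  {
      state_entry()
      {
          url_req = llRequestURL();
      }

      http_request(key id, string method, string body)
      {
          if (method == URL_REQUEST_GRANTED)
          {
              // 'body' is the URL this object was granted; publish it however you like
              llOwnerSay("my URL: " + body);
          }
          else if (method == URL_REQUEST_DENIED)
          {
              llOwnerSay("URL request denied");
          }
          else if (method == "POST")
          {
              stored = body;                     // "write"
              llHTTPResponse(id, 200, "saved");
          }
          else
          {
              llHTTPResponse(id, 200, stored);   // "read" (e.g. GET)
          }
      }
  }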
- Kelly

> Saving a notecard makes a new one. The old data hangs around in the asset server apparently forever, so if you have 16k of temporary data and you change 1 byte of it and save the change, you're burning 16 kb per write. Over time this could end up being gigabytes to terabytes of wasted space in the asset system across thousands of program write operations, and LL has to provide expensive RAID and do tape archiving, etc. etc. of all that.
>
> UUIDs are not reused when objects are modified because of some design concept regarding data-access efficiency and caching. Woohoo, don't you love clear and direct answers? I think this was discussed a long while back, but I don't see anything in my SLDev keyword searches.
>
> Okay, um, if I recall right, saving UUID changes would make the cached asset state stale, and then you need to add mechanisms to keep the cache up to date with the main db, or for the main db to notify cache siblings of UUID state changes.
>
> If you don't ever allow UUID changes, then the cached state never needs to be checked and can always be handed out to clients at much greater speed than if state checks were needed, but this speed comes at the price of the poor storage efficiency of not recovering space used by changed UUIDs that may never be accessed ever again.
>
> There's no way to assess whether a saved asset was just temporary data that will never be used again vs. a UUID that just won't be used again for a very, very long time. LL can prune "infrequently used" UUIDs out of the main db running on expensive 15,000 RPM SAS drives and move the infrequent assets to slower, less-expensive "nearline storage", but LL can't ever really delete anything since there's no way to know if it might be needed by a user, or really never again.
>
> The slack from unused objects that were temporary and will never be accessed probably accounts for a sizable chunk of those terabytes of growing asset storage you sometimes hear about.
>
> - Dale Mahalko / Scalar Tardis
>
> On Wed, Jun 10, 2009 at 7:35 PM, Fire <[email protected]> wrote:
>> Just wondering (and I am sure this question has been asked before), but why, oh why, hasn't LL implemented the feature of writing to notecards?
>>
>> Wouldn't this solve a lot of our scripting nightmares? Ie: list memory limits etc?

_______________________________________________
Policies and (un)subscribe information available here:
http://wiki.secondlife.com/wiki/SLDev
Please read the policies before posting to keep unmoderated posting privileges
