For the few other people who are interested, and since it's 3 AM and I can't sleep, and as long as we're talking long-term directions, let me suggest a goal to work towards for the next decade or so -- an asset service which gets faster with use and takes up less absolute space as more and more people use it. Imagine an algorithm that actually improves as the number of users N increases, rather than one that merely gets worse less rapidly than a competitor as N increases.

Picture that, instead of an asset service that hits a ceiling, like Second Life's, where doubling the number of users would probably mean it takes 8 minutes to retrieve a pair of shoes from inventory.

At a minimum, imagine breaking the barrier from lossless storage (for images, say TIFF or PNG) to lossy storage (JPG) at user-controlled acceptable quality levels. With images we're talking about reducing some 10 MB images to 50 KB in size, for example. And with successive resolution, we're talking about improving BOTH retrieval speed AND storage space, with models, such as meshes, that come in "low", "medium", and "high" resolutions, using the lowest one that works for, say, a speeding flyby.
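
To make the successive-resolution idea concrete, here's a minimal sketch (nothing to do with actual OpenSimulator code) of choosing the cheapest level of detail that still suffices for the current view. The thresholds, function names, and asset sizes are all illustrative assumptions:

```python
# Hypothetical sketch: pick the lowest mesh resolution that "works"
# for the current view, so a speeding flyby never fetches the 10 MB asset.
# All thresholds and sizes below are made-up illustrations.

LODS = [
    ("low",    50_000),      # ~50 KB mesh, fine for a speeding flyby
    ("medium", 500_000),     # ~500 KB, mid-distance viewing
    ("high",   10_000_000),  # ~10 MB, only for close, slow inspection
]

def pick_lod(distance_m, speed_mps):
    """Return (name, approx_bytes) of the cheapest acceptable resolution."""
    if speed_mps > 20 or distance_m > 100:
        return LODS[0]
    if speed_mps > 5 or distance_m > 20:
        return LODS[1]
    return LODS[2]
```

With a scheme like this, both retrieval speed and effective storage traffic improve together, since most views never touch the full-resolution asset.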

But the real power would be in an artificially intelligent system that did far more than just identify exact duplicates: one that hashed objects into object categories, then conceptualized the object-oriented definitions and mapped, say, ALL chairs into a "CHAIR" category, where it only needed to specify the parameters that distinguish THIS instance of a chair from the base object. And over time, with a Google-sized memory, it would get better and better at recognizing familiar categories of objects that people use.
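
The "store only what distinguishes this chair" idea can be sketched in a few lines. This is a hypothetical encoding, not any existing OpenSimulator schema; the category name and parameters are invented for illustration:

```python
# Hypothetical sketch: an instance is stored as a category reference plus
# only the parameters that differ from the shared base object.
# The base-category table and parameter names are illustrative.

BASE_CATEGORIES = {
    "CHAIR": {"legs": 4, "height_cm": 90, "material": "wood", "color": "brown"},
}

def encode_instance(category, full_params):
    """Keep only the parameters that differ from the base object."""
    base = BASE_CATEGORIES[category]
    delta = {k: v for k, v in full_params.items() if base.get(k) != v}
    return {"category": category, "delta": delta}

def decode_instance(record):
    """Rebuild the full object from the base plus the stored delta."""
    obj = dict(BASE_CATEGORIES[record["category"]])
    obj.update(record["delta"])
    return obj

red_chair = encode_instance("CHAIR", {"legs": 4, "height_cm": 90,
                                      "material": "wood", "color": "red"})
# red_chair stores just {"color": "red"}, not the whole chair definition
```

The more chairs the system has seen, the better its base objects fit, and the smaller each stored delta becomes -- which is exactly the "improves with N" property.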

As with the ALICE chatbot strategy, even a dumb parsing algorithm might handle 80% of what people often do, and 98% of what people almost always do, which is 95% of the asset server.

Everyone uses "chairs", "vehicles", "walls", "shoes", "windows", "doors", etc., and they will predictably continue to do so for the next decade, so why not get better and better at implementing new sub-sub-flavors of such common classes? Or, for a creative user, why not use a "snap-to-grid" type solution? "Hey, are you trying to make a chair? How about THIS?" (Except please skip the annoying animated paperclip and puppy part!)

Of course, this would ALSO get vastly better if the entire system, conceptually, understood hierarchical objects with unlimited (or even 16) sub-groupings, so that the "CAR" category expected that part of most CARs would be something like "WHEELS", that there are only a finite number of types of wheels that would suffice, and, for that matter, that all four on a vehicle should be instances of each other, as the original no-duplicates proposal implements.
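
Combining hierarchy with the no-duplicates idea could look something like the following sketch, where repeated parts are stored once in a content-addressed store and referenced by hash -- so all four wheels of a car cost one stored asset. Everything here (names, data, the in-memory store) is illustrative:

```python
# Hypothetical sketch: a hierarchical object whose repeated sub-parts are
# stored once and referenced by content hash, as in the de-duplication
# proposal. The store and mesh data are stand-ins for illustration.

import hashlib

store = {}  # content hash -> asset bytes

def put(data: bytes) -> str:
    """Store data under its SHA-256 hash; duplicates cost nothing extra."""
    key = hashlib.sha256(data).hexdigest()
    store[key] = data
    return key

wheel = put(b"...mesh data for one wheel...")
body  = put(b"...mesh data for the car body...")

car = {
    "category": "CAR",
    "parts": {"body": body,
              "wheels": [wheel, wheel, wheel, wheel]},  # 4 references, 1 asset
}
# Four wheels on the car, but only two distinct assets in the store.
```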

Then, rather than creating a face with an artist's brush, the "rest of us" could use a police IDENTIKIT-type system to fill in, top-down, "Chair, office, ..." and in 15 mouse clicks have located in the marketplace, or "created", a new version of "chair" that meets our needs, and at the same time generated a database representation of it as the 12 parameters needed to generate it from known object classes, instead of storing each and every aspect of each and every chair as if there were no savings to be had in learning about "chairs" and generalizing.
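
The IDENTIKIT-style flow amounts to walking a taxonomy with a few menu choices and ending up with a tiny parameter record that can regenerate the object. A toy sketch, with an invented taxonomy and invented parameter names:

```python
# Hypothetical sketch of the top-down "IDENTIKIT" flow: a few choices
# down a taxonomy yield a compact parameter record instead of a full
# stored mesh. Taxonomy contents are made up for illustration.

TAXONOMY = {
    "chair": {
        "office": {"params": {"legs": 5, "casters": True,  "swivel": True}},
        "dining": {"params": {"legs": 4, "casters": False, "swivel": False}},
    },
}

def build(*choices):
    """Follow menu choices down the taxonomy; return the compact record."""
    node = TAXONOMY
    for choice in choices:
        node = node[choice]
    return {"path": list(choices), **node["params"]}

office_chair = build("chair", "office")
# A handful of parameters, not a full per-chair asset.
```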

Wade

On 3/3/12 1:35 AM, M.E. Verhagen wrote:
+1

Splitting up the asset table is brilliant.
Loading an IAR or OAR of a few hundred MB which was previously saved on the same grid would then not increase the database by those few hundred MB but by just a few hundred KB :)

On Saturday 3 March 2012, Justin Clark-Casey ([email protected] <mailto:[email protected]>) wrote:
> Hi folks. As we know, the space required for asset storage in OpenSimulator grows continuously over time.
>
> I think this is inevitable in a web-like virtual world - distributed garbage collection is practically impossible. However, the current OpenSimulator asset service stores much more data than it needs to since it fails to identify assets that are exact duplicates of each other.
>
> Previous work in places such as OSGrid, which now uses Dave Coyle's Simple Ruby Asset Service (SRAS) [1], reveals that preventing duplicate assets has a significant effect on storage requirements (I can't remember the exact figures but I think that it's >30%).
>
> Therefore, I propose to introduce a new core asset service (xassetservice) that will implement asset de-duplication via asset data hashing. This has already been shown to work by SRAS. I regard this feature as critical for future plans to extend IAR loading and to improve the 3-months-out-of-the-box OpenSimulator experience. It does not aim to replace external projects such as SRAS for heavy users.
>
> I already have a working implementation in the xassetservice git branch (configuration instructions at [2]). This should not be used in any way except for testing - it is still in the prototype stage and can change at any time. Only a MySQL implementation exists right now.
>
> The plan would be to have xassetservice exist alongside and independent of the existing asset service. Only one service can be used at a time and this is determined via config files. After considerable testing, xassetservice would become the default asset service for new OpenSimulator installations. The existing asset service would continue alongside for a very, very, very long time.
>
> Since asset datasets are so large and critical there would be no automatic migration between assetservice data and xassetservice. Instead, there would be an external migration tool.
>
> I may also take this opportunity to implement other asset service features, such as data storage on disk instead of in the database (possibly nicking stuff from kcozen's previous patch for this) and maybe compression (though I'm currently thinking that the cons of this outweigh the pros).
>
> More detail is at [3]. Comments or alternative implementation suggestions from developers, etc., are welcome.
>
> [1] https://github.com/coyled/sras
> [2] http://opensimulator.org/wiki/Feature_Proposals/Deduplicating_Asset_Service#Testing
> [3] http://opensimulator.org/wiki/Feature_Proposals/Deduplicating_Asset_Service
>
> --
> Justin Clark-Casey (justincc)
> http://justincc.org/blog
> http://twitter.com/justincc
> _______________________________________________
> Opensim-dev mailing list
> [email protected] <mailto:[email protected]>
> https://lists.berlios.de/mailman/listinfo/opensim-dev
>

--
Groningen en Hannover Opensims: secondlife://meverhagen.nl:8002:Hannover ZW/


_______________________________________________
Opensim-dev mailing list
[email protected]
https://lists.berlios.de/mailman/listinfo/opensim-dev

