Hi
Alessandro

Did you get a chance to read Florent's latest blog post?
http://blog.iks-project.eu/thesaurus-management-tool-linked-heritage-project/
You may also want to follow the work on the SKOS-js editor that our colleague 
here at Salzburg Research is developing; see https://github.com/tkurz/skosjs


Thanks
john 


On Apr 11, 2012, at 12:29 PM, Alessandro Adamou wrote:

> Hi Florent, all,
> 
> Just curious - have you been able to make any progress on using the Ontology 
> Manager for multi-user thesaurus management in your application?
> 
> Should you have any questions or inquiries please do not hesitate to ask.
> 
> Alessandro
> 
> 
> On 1/19/12 2:52 PM, florent andré wrote:
>> Hi Alessandro,
>> 
>> Thanks for answers.
>> 
>> What I clearly understand now is that everything is stored in the 
>> Stanbol/Clerezza store, and that I can store thesauri either directly via 
>> the repository or via OntoNet.
>> 
>> What is not totally clear for me now are the concepts of "spaces", "session" 
>> and "scope"...
>> 
>> In my use case I will have many users, each saving one or more thesauri:
>> - user A will store thesaurus 1 (TA1) and TA2
>> - user B will store thesaurus 1 (TB1) and TB2 and ...
>> 
>> When user C stores his TC1, he will choose to map it to one of the already 
>> existing thesauri, whichever is the most appropriate.
>> 
>> Let's say he selects TA2.
>> 
>> So the mapping will be done between TC1 and TA2 (and any other 
>> combinations can be done afterwards).
>> 
>> So...
>> 
>> On 01/17/2012 10:54 AM, Alessandro Adamou wrote:
>> ...
>>> 
>>>> - Users will be able to map concepts from one SKOS thesaurus to another.
>>> 
>>> Setting up one Session per active user, where the mappings are managed,
>>> should do the trick. To obtain the entities to map from and to, you
>>> could set up a "my-skos-thesaurus" scope, load SKOS in its core space
>>> and the thesaurus in its custom space.
>> 
>> ... for my user C:
>> - I create a sessionC
>> - I create a "C-skos-thesaurus" scope
>> - I load TC1 in "C-skos-thesaurus".coreSpace
>> - I load TA2 in "C-skos-thesaurus".customSpace
>> - then I store the resulting mappings in sessionC
>> 
>> Is that a good use of session, scope and space?
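As a sanity check of the intent, the steps above can be sketched in plain Python. This is only a mock of the concepts involved; the class and method names are illustrative and do not reflect the actual OntoNet Java or REST API:

```python
# Plain-Python mock of the per-user setup above. All names are illustrative;
# this is not the OntoNet API, just a model of scopes, spaces and sessions.

class Space:
    """A container of ontologies inside a scope."""
    def __init__(self):
        self.ontologies = []

    def load(self, ontology_id):
        self.ontologies.append(ontology_id)

class Scope:
    """A scope with a core space (stable vocabularies) and a custom space."""
    def __init__(self, scope_id):
        self.id = scope_id
        self.core_space = Space()
        self.custom_space = Space()

class Session:
    """A per-user workspace where mapping statements are collected."""
    def __init__(self, session_id):
        self.id = session_id
        self.mappings = []

    def add_mapping(self, source, prop, target):
        self.mappings.append((source, prop, target))

# User C's setup, mirroring the steps in the mail:
session_c = Session("sessionC")
scope_c = Scope("C-skos-thesaurus")
scope_c.core_space.load("TC1")       # C's own thesaurus
scope_c.custom_space.load("TA2")     # the thesaurus TC1 is mapped against
session_c.add_mapping("TC1:fresco", "skos:exactMatch", "TA2:fresco")
```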
>> 
>>> 
>>> Even better, if you think you can benefit from partitioning the
>>> thesaurus somehow, you can manage multiple scopes with one partition in
>>> the custom space of each. This usually comes into play if you need to
>>> perform some reasoning.
>>> 
>>>> - Standard user can only modify his maps ; power users can modify all
>>>> maps (latter requirement)
>>> 
>>> Rule of thumb (which however is currently not enforced by the framework)
>>> is:
>>> 
>>> * sessions are managed by unprivileged users or client applications
>>> * scopes can be read-accessed by anyone, but only privileged users or
>>> Stanbol plugins should create or tamper with them.
>>> 
>>> As a matter of fact, anyone can do anything right now because we've no
>>> REST API with authentication (yet? should we?)
>> 
>> Yep, I know that; let's see what happens on this subject... even without a 
>> framework-level solution, a little workaround should not be so hard to set up.
>> 
>>> 
>>>> - SKOS thesauri and concepts have to be dereferenceable.
>>> 
>>> OntoNet has a mechanism for "hijacking" every loaded ontology into
>>> Stanbol, and creating dynamic import statements. It is mainly designed
>>> for ontology collectors, but can also be applied to ontologies not
>>> loaded in a scope/session.
>>> 
>>> As for the *concepts*, there's no rewriting of entity IRIs, nor were we
>>> sure to do it as logically it would open a can of worms - that is,
>>> unless we add an OWL equivalence statement every time a concept is
>>> "moved", but even so all the "old" names should still be dereferenceable!
>> 
>> The thesauri I will import don't have prior IRIs (they are in CSV), 
>> so I can set them up as I want, in line with the server name.
>> 
>> Keeping old names is really problematic... only the current ones will be 
>> interesting...
>> Redirecting from old to current with the help of a modification history 
>> could be really useful...
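That redirect idea could work roughly as follows: keep the rename history as old-IRI to new-IRI pairs and chase the chain to the current name, then let an HTTP front end answer 301 redirects to the resolved IRI. A stdlib-only sketch, where the function name and IRIs are invented for illustration:

```python
# Sketch: resolve a historical concept IRI to its current one by following
# a rename history. Function name and IRIs are invented for illustration.
def resolve_current(iri, renames):
    """Follow old -> new rename records until no newer name exists."""
    seen = set()
    while iri in renames:
        if iri in seen:                       # guard against cyclic histories
            raise ValueError("cyclic rename history at " + iri)
        seen.add(iri)
        iri = renames[iri]
    return iri

history = {
    "http://example.org/TC1/v1/fresco": "http://example.org/TC1/v2/fresco",
    "http://example.org/TC1/v2/fresco": "http://example.org/TC1/v3/fresco",
}

print(resolve_current("http://example.org/TC1/v1/fresco", history))
```

An IRI with no rename record simply resolves to itself, so current names keep working unchanged.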
>> 
>>> 
>>>> I "feel" that OntoNet/KReS can be of great help for this; I read the
>>>> documentation I could find (mails and [1], essentially), but can't get a
>>>> clear picture of what is already there and what is not for this
>>>> use case...
>>> 
>>> More documentation is coming right these days, in the meantime I hope
>>> I've given you a clearer picture.
>>> 
>>> I'd have a few questions, too:
>>> 
>>> * what would your mappings look like? depending on the complexity, you
>>> could find Stanbol Rules to be of use too.
>> 
>> For now (it's not clearly defined, though), the mapping will be done with 
>> SKOS properties.
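For illustration, mappings expressed with SKOS mapping properties could be serialized as triples like this. A minimal stdlib-only sketch; all concept IRIs are invented placeholders, since the real ones will be minted from the CSV data:

```python
# Sketch of TC1 -> TA2 mappings as N-Triples, using SKOS mapping properties.
# All concept IRIs below are invented placeholders.
SKOS = "http://www.w3.org/2004/02/skos/core#"

def ntriple(s, p, o):
    """Serialize one triple of IRIs in N-Triples syntax."""
    return f"<{s}> <{p}> <{o}> ."

mappings = [
    # (concept in TC1, SKOS mapping property, concept in TA2)
    ("http://example.org/TC1/fresco", SKOS + "exactMatch",
     "http://example.org/TA2/fresco"),
    ("http://example.org/TC1/mural", SKOS + "broadMatch",
     "http://example.org/TA2/wall-painting"),
]

for s, p, o in mappings:
    print(ntriple(s, p, o))
```

Keeping the mappings in a separate graph from TC1 itself would make it easy to serve either the original thesaurus alone or the merged view.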
>> 
>> * Is it better to use rules in this case (mapping TC1 / TA2)?
>> Constraints (for now) are to be able to get:
>> - the original thesaurus (just TC1)
>> - or the complete one (TC1 and TA2 with mappings)
>> Being able to do some reasoning on it would also be of great added value.
>> 
>>> * do you have an insight into the size of your thesauri, in
>>> entries/triples? Is each a huge, undivided bulk or would it make sense to
>>> partition it?
>> 
>> No clear idea of the size of each individual thesaurus... The point here is 
>> more the number of thesauri...
>> IMO: 15+ not-so-big thesauri.
>> 
>>> * I assume you would interact with OntoNet via the REST API, or would
>>> you need to add some server-side interaction with the Java API using a
>>> new OSGi bundle or so?
>> 
>> Don't know for now; it depends on how I can answer the requirements...
>> 
>>> 
>>> Please feel free to write to the list, to my attention, for further
>>> inquiries.
>>> 
>>> Alessandro
>>> 
>> 
> 
> 
> -- 
> M.Sc. Alessandro Adamou
> 
> Alma Mater Studiorum - Università di Bologna
> Department of Computer Science
> Mura Anteo Zamboni 7, 40127 Bologna - Italy
> 
> Semantic Technology Laboratory (STLab)
> Institute for Cognitive Science and Technology (ISTC)
> National Research Council (CNR)
> Via Nomentana 56, 00161 Rome - Italy
> 
> 
> "I will give you everything, so long as you do not demand anything."
> (Ettore Petrolini, 1930)
> 
> Not sent from my iSnobTechDevice
> 
