Re: Microsoft Access for RDF?
Hello Stian,

On Fri, Feb 20, 2015 at 09:54:33AM, Stian Soiland-Reyes wrote:

> So if you tell the user his information is "just RDF", but neglect to mention the "and then some", he could wrongfully think that his list of, say, preferred presidents has its order preserved in any exposed RDF.

Then tell the user his information is just an RDF dataset.

> My apologies, I got the impression there was a suggestion to control ordering of triples without making any collection statements.

I would suggest doing that with named graphs. We are talking about a generic triple editor, and IMO most properties are not compatible with collections. Of course, there would also be a default graph mode in the editor that does not use named graphs and does not support ordering.

> Don't let the user encode information he considers important in a way that is not preserved semantically.

Named graphs can be queried via SPARQL. You can query the default (union) graph, where this information would be lost, or the named graphs, where it is preserved semantically and publicly accessible.

Regards,

Michael Brunnbauer

--
++ Michael Brunnbauer
++ netEstate GmbH
++ Geisenhausener Straße 11a
++ 81379 München
++ Tel +49 89 32 19 77 80
++ Fax +49 89 32 19 77 89
++ E-Mail bru...@netestate.de
++ http://www.netestate.de/
++
++ Sitz: München, HRB Nr. 142452 (Handelsregister B München)
++ USt-IdNr. DE221033342
++ Geschäftsführer: Michael Brunnbauer, Franz Brunnbauer
++ Prokurist: Dipl. Kfm. (Univ.) Markus Hendel
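One way to read the named-graph suggestion (this is a sketch of mine, not Michael's concrete design; all IRIs and the graph-naming convention are invented for illustration): the editor could keep each "ordered" statement in its own named graph whose IRI carries the position, so the order survives in the dataset and stays queryable, while the union graph simply loses it:

```sparql
PREFIX ex: <http://example.org/>

# Each statement of the "list" lives in its own named graph; the
# graph IRIs (e.g. ex:pref-1, ex:pref-2, ...) carry the position.
# Querying only the union (default) graph would return the same
# triples with no order at all.
SELECT ?president ?g
WHERE {
  GRAPH ?g { ex:me ex:prefersPresident ?president }
}
ORDER BY ?g
```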
Re: Microsoft Access for RDF?
Hello Paul,

On Thu, Feb 19, 2015 at 09:19:06PM +0100, Michael Brunnbauer wrote:

> Another case is where there really is a total ordering. For instance, the authors of a scientific paper might get excited if you list them in the wrong order. One weird old trick for this is RDF containers, which are specified in the XMP dialect of Dublin Core.

How do you bring this in line with "property rdfs:range datatype", especially "property rdfs:range rdf:langString"? I do not see a contradiction, but this makes things quite ugly. How about all the SPARQL queries that assume a literal as object and not an RDF container?

Another, simpler example would be "property rdfs:range foaf:Person". http://xmlns.com/foaf/spec/#term_Person says that "Something is a Person if it is a person." How can an RDF container of several persons be a person? If one can put a container where a container is not explicitly sanctioned by the semantics of the property, then I have missed something important.

Regards,

Michael Brunnbauer
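To make the clash concrete, here is a Turtle sketch (the ex: terms are invented for illustration): the rdf:Seq in object position is not itself a foaf:Person, which is exactly what trips up both the range declaration and queries expecting a single person:

```turtle
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix ex:   <http://example.org/> .

ex:author rdfs:range foaf:Person .   # the declared range

# The "container trick": the object is an rdf:Seq, not a person.
ex:paper ex:author [
    a rdf:Seq ;
    rdf:_1 ex:alice ;
    rdf:_2 ex:bob ;
    rdf:_3 ex:charlie
] .
```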
Re: Microsoft Access for RDF?
On 19 Feb 2015 21:42, Kingsley Idehen kide...@openlinksw.com wrote:

> > No, this is dangerous and is hiding the truth.
>
> What?

(Just to clarify my view, obviously you know this :) ) That RDF triples are not ordered in an RDF graph. They might be ordered in something else, but that is not part of the RDF graph. (Reification statements can easily also become "something else".)

So if you tell the user his information is "just RDF", but neglect to mention the "and then some", he could wrongfully think that his list of, say, preferred presidents has its order preserved in any exposed RDF. If you don't tell him it is RDF (this is now the trend of the Linked Data movement..), fine! It's just a technology - he doesn't need to know.

> You can describe collections using RDF statements, I don't have any idea how what I am talking about implies collection exclusion.

My apologies, I got the impression there was a suggestion to control ordering of triples without making any collection statements.

> > Don't let the user encode information he considers important in a way that is not preserved semantically.
>
> ??

I simply meant to not store such information out of band, e.g. by virtue of triple order or comments in a Turtle file, or by magic extra bits in some database that don't transfer along to other consumers of the produced RDF. It should be fine to store view-metadata out of band (e.g. which field was last updated) - but if it has a conceptual meaning to the user, I think it should also have meaning in the RDF and the vocabularies used.

If you are able to transparently do the right thing semantically, then hurray!

> Why do you think we've built an RDF editor without factoring in OWL?

Many people are still allergic to OWL :-( And also I am still eager to actually see what you are talking about rather than guessing! :-)

> I think we are better off waiting until we release our RDF Editor. We actually built this on the request of a very large customer. This isn't a speculative endeavor. It's actually being used by said organization as I type.

Looking forward to having a go. Great that you will open source it!
Re: Microsoft Access for RDF?
Sorry, now I forgot my strawman! Too late on a Friday..

So say the user of a triple-order-preserving UI says:

    :document prov:wasAttributedTo :alice, :charlie, :bob .

..and considers the order important because Bob didn't contribute as much to the document as Alice and Charlie. In that case the above statement is not detailed enough, and some new property or resource is needed to represent this distinction in RDF. Here I would think OWL fear combined with the desire to reuse existing vocabularies means that you don't get specific enough. It's OK to state the same relation with two different properties, and even better to make a new subproperty that explains the combination. In the strawman, using more specific properties like pav:authoredBy and prov:wasInfluencedBy would clarify the distinction much more than an ordered list with an unspecified ordering criterion.

In other cases the property is really giving a shortcut, say:

    :meeting :attendedBy :john, :alice, :charlie .

..and the user is also encoding arrival time at the meeting by the list order. But this is using :attendedBy to describe both who was there and when they arrived. In this case, the event of arriving could better be modelled separately with a partial ordering. If you don't like double housekeeping (most programmers know the pitfalls here), then using OWL or inference rules you can also infer attendance from the arrival events.
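The second strawman could be remodelled roughly like this (the ex: terms and the Arrival class/properties are invented for illustration; PROV offers comparable machinery with prov:Activity and prov:startedAtTime):

```turtle
@prefix ex:  <http://example.org/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

ex:meeting ex:attendedBy ex:john, ex:alice, ex:charlie .

# Arrival modelled as separate events, so the ordering is explicit
# data rather than being smuggled in via triple order.
ex:arrival1 a ex:Arrival ;
    ex:atMeeting ex:meeting ;
    ex:agent ex:john ;
    ex:time "2015-02-20T09:00:00Z"^^xsd:dateTime .

ex:arrival2 a ex:Arrival ;
    ex:atMeeting ex:meeting ;
    ex:agent ex:alice ;
    ex:time "2015-02-20T09:05:00Z"^^xsd:dateTime .
```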
Re: Microsoft Access for RDF?
On 2/20/15 1:19 PM, Graham Klyne wrote:

> Hi Stian,
>
> Thanks for the mention :)
>
> > Graham Klyne's Annalist is perhaps not quite what you are thinking of (I don't think it can connect to an arbitrary SPARQL endpoint), but I would consider it as falling under a similar category, as you have a user interface to define record types and forms, browse and edit records, with views defined for different record types. Under the surface it is however all RDF and REST - so you are making a schema by stealth. http://annalist.net/ http://demo.annalist.net/
>
> Annalist is still in its prototype phase, but it's available to play with if anyone wants to try stuff. See also https://github.com/gklyne/annalist for source. There's also a Dockerized version.
>
> It's true that Annalist does not currently connect to a SPARQL endpoint, but I have recently been doing some RDF data wrangling and starting to think about how to connect to public RDF (e.g. http://demo.annalist.net/annalist/c/CALMA_data/d/ is a first attempt at creating an editable version of some music data from your colleague Sean). In this case, the record types and views have been created automatically from the raw data, and are pretty basic - but that automatic extraction can serve as a starting point for subsequent editing. (The reverse of this, creating an actual schema from the defined types and views, is an exercise for the future, or maybe even for a reader :) )
>
> Internally, the underlying data access is isolated in a single module, intended to facilitate connecting to alternative backends, which could be via SPARQL access. (I'd also like to connect up with the linked data fragments work at some stage.)
>
> If this looks like something that could be useful to anyone out there, about now might be a good time to offer feedback. Once I have what I feel is a minimum viable product release, hopefully not too long now, I'm hoping to use feedback and collaborations to prioritize ongoing developments.
>
> #g

It is very good and useful, in my eyes!

My enhancement requests would be that you consider supporting at least one of the following, in regards to storage I/O:

1. LDP
2. WebDAV
3. SPARQL Graph Protocol
4. SPARQL 1.1 Insert, Update, Delete

As for access controls on the target storage destinations, don't worry about that in the RDF editor itself; leave that to the storage provider [1] that supports any combination of the protocols above.

Links:

[1] http://kidehen.blogspot.com/2014/07/loosely-coupled-read-write-interactions.html -- Loosely-Coupled Read-Write Web pattern example.

--
Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this
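Of the four storage options listed, SPARQL 1.1 Update is probably the smallest to wire up. A minimal sketch (graph and resource IRIs are invented placeholders) of the kind of request an editor backend could POST to an endpoint's update URL:

```sparql
PREFIX ex: <http://example.org/>

# Replace one field of a record in a named graph: remove the old
# label (if any) and write the new one, atomically.
DELETE { GRAPH ex:records { ex:record1 ex:label ?old } }
INSERT { GRAPH ex:records { ex:record1 ex:label "New label" } }
WHERE  { GRAPH ex:records { OPTIONAL { ex:record1 ex:label ?old } } }
```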
Re: Microsoft Access for RDF?
> This is what I meant in my earlier message when touching on collections. If the order of the resources (let's stick with foaf:Person) matters, then the property used should not have a range of (only) foaf:Person.

One problem is that, say in OWL, you don't really have an easy way to type collections, e.g. List<Person> in Java. The rdf:List is integrated in multiple serializations, but has had issues with queries (SPARQL property paths help) and usage in OWL, where you easily get complaints about the rdf: namespace being special. In OWL collection ontologies like CO you can use OWL restrictions to type collection members, but this does push OWL into AI land as mentioned earlier - OWL is not a schema language.

When we made prov:Collection it was meant as a generic upper type for a collection entity that could be used in place of its member entities.

http://www.w3.org/TR/prov-o/#Collection
http://www.w3.org/TR/2013/REC-prov-dm-20130430/Overview.html#term-collection

In the discussion for this, we considered statements about a Collection, say:

    :policy prov:wasAttributedTo :theBoard .
    :theBoard a prov:Collection, prov:Agent ;
        prov:hadMember :alice, :bob, :charlie .

Then you still can't conclude that:

    :policy prov:wasAttributedTo :bob .

as he might or might not have contributed to the policy document whilst on the board, but still is part-responsible for its creation (e.g. he didn't veto it).

In the extension PROV Dictionary we agreed that order within a collection was often important, and that arbitrary literal keys as commonly used in JSON maps can have a meaning, even if just programmatic and not semantically detailed. A list can be just a dictionary using non-negative integers as its keys. (But you would have no end-of-list markers or any guarantee that all keys were described.)

http://www.w3.org/TR/2013/NOTE-prov-dictionary-20130430/#dictionary-ontological-definition

You see the dictionary entries are here typed as prov:KeyValuePair, which implies that the value is a member of the collection.

http://www.w3.org/TR/2013/NOTE-prov-dictionary-20130430/#dmembership-cmembership-inference

This is very similar to how CO has done inference between hasElement and the property chain hasItem - hasItemContent using just OWL.

http://www.essepuntato.it/lode/owlapi/http://purl.org/co/#d4e76

One great advantage of CO collections is that they can easily be subclassed and typed by restrictions, e.g. hasElement only foaf:Person. Such collections can then be used as the range of a property in union with foaf:Person.

On 20 Feb 2015 19:32, Michael Brunnbauer bru...@netestate.de wrote:

> Hello Pat,
>
> On Fri, Feb 20, 2015 at 11:45:12AM -0600, Pat Hayes wrote:
>
> > > Another simpler example would be "property rdfs:range foaf:Person". http://xmlns.com/foaf/spec/#term_Person says that "Something is a Person if it is a person." How can an RDF container of several persons be a person?
> >
> > According to the US Supreme Court a corporation is a person, so I would guess that a mere container would have no trouble getting past the censors.
>
> I am seriously interested in your position on the topic. Do you say that anything goes as long as it stays satisfiable? Should I assume that some property applying to some container/collection also applies to its members (which seems to be the implicit assumption here)? Should I modify my SPARQL queries accordingly?
>
> Let me play the censor a bit more :-) Let's admit that Dan also means legal persons with "person". But not every group of individuals acting together is a legal person. The example here was a group of people co-authoring a paper. Also, the notion that foaf:Group is a subclass of foaf:Person does not make any sense to me. Why then introduce foaf:Group at all?
>
> Regards, Michael Brunnbauer
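The PROV Dictionary pattern discussed above can be sketched in Turtle roughly as follows. (The ex: terms are invented; the message above calls the pair class prov:KeyValuePair, while the linked Note spells it prov:KeyEntityPair with prov:pairKey/prov:pairEntity - consult the Note for the exact terms.)

```turtle
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:   <http://example.org/> .

# A list as a dictionary keyed by non-negative integers.
ex:authors a prov:Dictionary ;
    prov:hadDictionaryMember [
        a prov:KeyEntityPair ;
        prov:pairKey "0"^^xsd:nonNegativeInteger ;
        prov:pairEntity ex:alice
    ] , [
        a prov:KeyEntityPair ;
        prov:pairKey "1"^^xsd:nonNegativeInteger ;
        prov:pairEntity ex:charlie
    ] .
```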
Re: Microsoft Access for RDF?
> If you don't like double housekeeping (most programmers know the pitfalls here), then using OWL or inference rules you can also infer attendance from the arrival events.

Are most programmers who work for the Human Resources Department ignorant, or just really scary? It's Friday. Get thee to beer. Quickly.

On Fri, 2/20/15, Stian Soiland-Reyes soiland-re...@cs.manchester.ac.uk wrote:

Subject: Re: Microsoft Access for RDF?
To: Michael Brunnbauer bru...@netestate.de
Cc: public-lod@w3.org, Pat Hayes pha...@ihmc.us
Date: Friday, February 20, 2015, 3:53 PM

> Sorry, now I forgot my strawman! Too late on a Friday.. [remainder of quoted message snipped]
Re: Microsoft Access for RDF?
I find it funny that people on this list and semweb lists in general like discussing abstractions, ideas, desires, prejudices etc. However, when a concrete example is shown, which solves the issue discussed or at least comes close to that, it receives no response.

So please continue discussing the ideal RDF environment and its potential problems while we continue improving our editor for users who manage RDF already now.

Have a nice weekend everyone!

On Fri, Feb 20, 2015 at 4:09 PM, Paul Houle ontolo...@gmail.com wrote:

> So some thoughts here. OWL, so far as inference is concerned, is a failure and it is time to move on. [remainder of quoted message snipped]
Re: Microsoft Access for RDF?
On 2/20/15 4:54 AM, Stian Soiland-Reyes wrote:

> On 19 Feb 2015 21:42, Kingsley Idehen kide...@openlinksw.com wrote:
> > No, this is dangerous and is hiding the truth.
> What? (Just to clarify my view, obviously you know this :) ) That RDF triples are not ordered in an RDF graph.

Correct.

> They might be ordered in something else, but that is not part of the RDF graph.

You can produce an order using:

    SELECT * WHERE { GRAPH <named-graph-iri> { ?s ?p ?o } } ORDER BY ?p

OFFSET and LIMIT can be used to create a paging mechanism, if required. Subqueries can be used to further optimize when the named graph is very large, etc. This UI would be for the user to alter relation subjects or objects. Predicate alteration isn't an option in this UI. Basically, the user is focused on relationship entities for a specific relationship type.

> (Reification statements can easily also become something else)

UI leveraging reification: this provides an ability to let the user interact with a collection of statements in a UI where subject, predicate, and object can be altered. Basically, they have a UX oriented towards sentence editing.

> So if you tell the user his information is "just RDF", but neglect to mention the "and then some", he could wrongfully think that his list of, say, preferred presidents has its order preserved in any exposed RDF.

Not the intent here at all.

> If you don't tell him it is RDF (this is now the trend of the Linked Data movement..), fine! It's just a technology - he doesn't need to know.

In our case we are showcasing RDF as a language, and using the UI/UX to bolster that point of view, using different UI/UX patterns to address the different ways a user can create or alter relations.

> > You can describe collections using RDF statements, I don't have any idea how what I am talking about implies collection exclusion.
> My apologies, I got the impression there was a suggestion to control ordering of triples without making any collection statements.

An RDF editor has to allow users to create any kind of relation that's possible in the RDF language.

> > > Don't let the user encode information he considers important in a way that is not preserved semantically.
> > ??
> I simply meant to not store such information out of band, e.g. by virtue of triple order or comments in a Turtle file, or by magic extra bits in some database that don't transfer along to other consumers of the produced RDF.

Okay, we don't do that. In fact, exposing relations as groups of RDF statements grouped by predicate enables clients to leverage optimistic concurrency patterns, since they can make hash-based checksums on the predicate-based grouping that are tested prior to final persistence on the target store (SPARQL, WebDAV, LDP compliant).

> It should be fine to store view-metadata out of band (e.g. which field was last updated) - but if it has a conceptual meaning to the user I think it should also have meaning in the RDF and the vocabularies used.

We have a View that works with Controls en route to data persistence at a storage location that supports one of: SPARQL 1.1 Insert, Update, Delete; SPARQL Graph Protocol; LDP; and WebDAV. The editor we've built is Javascript based. It also makes use of rdfstore.js, our generic I/O layer (also in Javascript), and a few other bits (ontology lookups, some jQuery integration, etc.).

> If you are able to transparently do the right thing semantically, then hurray!

I think we do, but we'll see what everyone thinks once it's released :)

> > Why do you think we've built an RDF editor without factoring in OWL?
> Many people are still allergic to OWL :-(

We aren't :)

> And also I am still eager to actually see what you are talking about rather than guessing! :-)

I think we are better off waiting until we release our RDF Editor. We actually built this on the request of a very large customer. This isn't a speculative endeavor. It's actually being used by said organization as I type.

> Looking forward to having a go. Great that you will open source it!

Okay.
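The paging mechanism mentioned above (OFFSET/LIMIT on an ordered selection) might look like this for a page size of 50 (the graph IRI is a placeholder):

```sparql
# Page 3 of the statements in one named graph, 50 per page.
# ORDER BY gives a stable ordering; OFFSET/LIMIT select the page.
SELECT ?s ?p ?o
WHERE {
  GRAPH <http://example.org/named-graph> { ?s ?p ?o }
}
ORDER BY ?p ?s ?o
OFFSET 100
LIMIT 50
```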
Re: Microsoft Access for RDF?
So some thoughts here.

OWL, so far as inference is concerned, is a failure and it is time to move on. It is like RDF/XML. As a way of documenting types and properties it is tolerable. If I write down something in production rules, I can generally explain to an average joe what they mean. If I try to use OWL, it is easy for a few things, hard for a few things, then there are a few things Kendall Clark can do, and then there is a lot you just can't do. On paper OWL has good scaling properties, but in practice production rules win because you can infer the things you care about and not have to generate the large number of trivial or otherwise uninteresting conclusions you get from OWL.

As a data integration language OWL points in an interesting direction, but it is insufficient in a number of ways. For instance, it can't convert data types (canonicalize mailto:j...@example.com and j...@example.com), deal with trash dates (have you ever seen an enterprise system that didn't have trash dates?) or convert units. It also can't reject facts that don't matter, and as far as both time/space and accuracy are concerned, you do much better if you can cook things down to the smallest correct database.

The other one is that, as Kingsley points out, ordered collections do need some real work to square the circle between the abstract graph representation and things that are actually practical. I am building an app right now where I call an API and get back chunks of JSON which I cache, and the primary scenario is that I look them up by primary key and get back something with a 1:1 correspondence to what I got. Being able to do other kinds of queries and such is sugar on top, but being able to reconstruct an original record, ordered collections and all, is an absolute requirement.

So far my Infovore framework based on Hadoop has avoided collections, containers and all that, because these are not used in DBpedia and Freebase, at least not in the A-Box. The simple representation that each triple is a record does not work so well in this case, because if I just turn blank nodes into UUIDs and spray them across the cluster, the act of reconstituting a container would require an unbounded number of passes, which is no fun at all with Hadoop. (At first I thought the number of passes was the same as the length of the largest collection, but now that I think about it I think I can do better than that.) I don't feel so bad about most recursive structures, because I don't think they will get that deep, but I think LISP-lists are evil, at least when it comes to external memory and modern memory hierarchies.
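The pass-count problem comes from rdf:List's linked structure: each cons cell is a separate blank node, so with one triple per record a reconstruction job can only follow one rdf:rest hop per join. A minimal Turtle sketch of the shape involved (ex: names invented):

```turtle
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix ex:  <http://example.org/> .

# The collection shorthand builds a chain of cons cells; reassembling
# it from scattered triples means chasing rdf:rest one hop at a time.
ex:paper ex:authors ( ex:alice ex:bob ex:charlie ) .

# The shorthand above expands to:
# ex:paper ex:authors _:c1 .
# _:c1 rdf:first ex:alice   ; rdf:rest _:c2 .
# _:c2 rdf:first ex:bob     ; rdf:rest _:c3 .
# _:c3 rdf:first ex:charlie ; rdf:rest rdf:nil .
```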
Re: Microsoft Access for RDF?
On 2/20/15 10:23 AM, Martynas Jusevičius wrote:

> I find it funny that people on this list and semweb lists in general like discussing abstractions, ideas, desires, prejudices etc.

That's because dog-fooding hasn't yet become second nature across the aforementioned communities. Don't give up; just keep pushing the case via real examples etc..

> However when a concrete example is shown, which solves the issue discussed or at least comes close to that, it receives no response.

Yes, that is the general case, but don't give up. Keep pushing; things will change, they have to!

> So please continue discussing the ideal RDF environment and its potential problems while we continue improving our editor for users who manage RDF already now.

Please keep up your good work and overall effort in general. Don't get frustrated (I know that's easier said than done). RDF editors are vital in regards to bootstrapping a Read-Write Linked Open Data ecosystem. The more the merrier, as long as the solutions in question are based on open standards (RDF, SPARQL, LDP, HTTP, URIs, WebDAV etc..).

Kingsley

> Have a nice weekend everyone!
>
> On Fri, Feb 20, 2015 at 4:09 PM, Paul Houle ontolo...@gmail.com wrote:
> > So some thoughts here. OWL, so far as inference is concerned, is a failure and it is time to move on. [remainder of quoted message snipped]
Re: Microsoft Access for RDF?
Hello Martynas, sorry! You mean this one? http://linkeddatahub.com/ldh?mode=http%3A%2F%2Fgraphity.org%2Fgc%23EditMode Nice! Looks like a template but you still may have the triple object ordering problem. Do you? If yes, how did you address it? Regards, Michael Brunnbauer On Fri, Feb 20, 2015 at 04:23:14PM +0100, Martynas Jusevi??ius wrote: I find it funny that people on this list and semweb lists in general like discussing abstractions, ideas, desires, prejudices etc. However when a concrete example is shown, which solves the issue discussed or at least comes close to that, it receives no response. So please continue discussing the ideal RDF environment and its potential problems while we continue improving our editor for users who manage RDF already now. Have a nice weekend everyone! On Fri, Feb 20, 2015 at 4:09 PM, Paul Houle ontolo...@gmail.com wrote: So some thoughts here. OWL, so far as inference is concerned, is a failure and it is time to move on. It is like RDF/XML. As a way of documenting types and properties it is tolerable. If I write down something in production rules I can generally explain to an average joe what they mean. If I try to use OWL it is easy for a few things, hard for a few things, then there are a few things Kendall Clark can do, and then there is a lot you just can't do. On paper OWL has good scaling properties but in practice production rules win because you can infer the things you care about and not have to generate the large number of trivial or otherwise uninteresting conclusions you get from OWL. As a data integration language OWL points in an interesting direction but it is insufficient in a number of ways. For instance, it can't convert data types (canonicalize mailto:j...@example.com and j...@example.com), deal with trash dates (have you ever seen an enterprise system that didn't have trash dates?) or convert units. 
It also can't reject facts that don't matter, and as far as both time/space and accuracy are concerned you do much better if you can cook things down to the smallest correct database. The other one is that, as Kingsley points out, ordered collections do need some real work to square the circle between the abstract graph representation and things that are actually practical. I am building an app right now where I call an API and get back chunks of JSON which I cache, and the primary scenario is that I look them up by primary key and get back something with a 1:1 correspondence to what I got. Being able to do other kinds of queries and such is sugar on top, but being able to reconstruct an original record, ordered collections and all, is an absolute requirement. So far my infovore framework based on Hadoop has avoided collections, containers and all that because these are not used in DBpedia and Freebase, at least not in the A-Box. The simple representation that each triple is a record does not work so well in this case, because if I just turn blank nodes into UUIDs and spray them across the cluster, the act of reconstituting a container would require an unbounded number of passes, which is no fun at all with Hadoop. (At first I thought the # of passes was the same as the length of the largest collection, but now that I think about it I think I can do better than that.) I don't feel so bad about most recursive structures because I don't think they will get that deep, but I think LISP-lists are evil, at least when it comes to external memory and modern memory hierarchies. -- Michael Brunnbauer
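The reconstruction cost Paul describes can be sketched in plain Python (hypothetical ex: names, tuple triples, no RDF library): an rdf:Seq uses flat numbered membership properties and can be gathered in one scan of the triples, while an rdf:List is a linked structure whose reconstruction chases rdf:rest pointers hop by hop -- which, once the blank nodes are sprayed across a cluster, is where the multiple passes come from.

```python
# Sketch: one-pass reassembly of an rdf:Seq vs. pointer chasing for rdf:List.
# Triples are plain (subject, predicate, object) tuples; ex: names invented.

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

def gather_seq(triples, seq_node):
    """One pass: collect rdf:_1, rdf:_2, ... membership properties."""
    members = {}
    for s, p, o in triples:
        if s == seq_node and p.startswith(RDF + "_"):
            members[int(p[len(RDF) + 1:])] = o
    return [members[i] for i in sorted(members)]

def gather_list(triples, head):
    """Walk rdf:first/rdf:rest links. Each hop is a join; distributed over
    a cluster, a list of length n can cost up to n passes."""
    first = {s: o for s, p, o in triples if p == RDF + "first"}
    rest = {s: o for s, p, o in triples if p == RDF + "rest"}
    out = []
    while head != RDF + "nil":
        out.append(first[head])
        head = rest[head]
    return out

triples = [
    ("ex:authors", RDF + "_2", "ex:bob"),
    ("ex:authors", RDF + "_1", "ex:alice"),
    ("_:l1", RDF + "first", "ex:alice"), ("_:l1", RDF + "rest", "_:l2"),
    ("_:l2", RDF + "first", "ex:bob"), ("_:l2", RDF + "rest", RDF + "nil"),
]
print(gather_seq(triples, "ex:authors"))  # ['ex:alice', 'ex:bob']
print(gather_list(triples, "_:l1"))       # ['ex:alice', 'ex:bob']
```

Both yield the same ordered member list, but only the rdf:Seq version is a single filter over the triple stream; the rdf:List version needs the `first`/`rest` maps co-located, which is exactly what a naive Hadoop partitioning destroys.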
Re: Microsoft Access for RDF?
On 2/20/15 10:09 AM, Paul Houle wrote: So some thoughts here. OWL, so far as inference is concerned, is a failure and it is time to move on. It is like RDF/XML. I think that's a little too generic a comment. Describing the nature of relations using relations is vital. Not all of OWL is vital, at the onset. Basically, OWL doesn't need to be at the front door per se, but understanding its role, in regards to relations semantics description and exploitation, is important. RDF/XML's problems have tarnished OWL, as they have the notion of a Semantic Web in general. For starters, too many OWL usage examples (circa 2015) are *still* presented using RDF/XML :( The creation and management of RDF/XML is THE real problem. It messed up everything, and stayed at the forefront (as the sole official W3C RDF notation standard) for way too long. Exponential decadence++ par excellence! As a way of documenting types and properties it is tolerable. Methinks, very useful. If I write down something in production rules I can generally explain to an average joe what they mean. If I try to use OWL it is easy for a few things, hard for a few things, then there are a few things Kendall Clark can do, and then there is a lot you just can't do. On paper OWL has good scaling properties but in practice production rules win because you can infer the things you care about and not have to generate the large number of trivial or otherwise uninteresting conclusions you get from OWL. You need both, with rules being much clearer starting points for users and developers. It's a journey back to Prolog [1], i.e., the long-awaited 5GL == Webby Prolog. As a data integration language OWL points in an interesting direction but it is insufficient in a number of ways. For instance, it can't convert data types (canonicalize mailto:j...@example.com and j...@example.com), deal with trash dates (have you ever seen an enterprise system that didn't have trash dates?) 
or convert units. It also can't reject facts that don't matter, and as far as both time/space and accuracy are concerned you do much better if you can cook things down to the smallest correct database. Task better handled via rules. Links: [1] http://www.jfsowa.com/logic/prolog1.htm -- A Prolog to Prolog by John F. Sowa. -- Regards, Kingsley Idehen
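The "task better handled via rules" point can be illustrated with two toy condition/action rules in Python; the helper names and the year thresholds are invented for this sketch and come from no particular rule engine:

```python
# Illustrative data-cleaning rules of the kind OWL cannot express:
# canonicalize mailto: addresses and reject trash dates before reasoning.

def canonical_email(value):
    """Strip an optional mailto: prefix and lowercase the address."""
    v = value.strip()
    if v.lower().startswith("mailto:"):
        v = v[len("mailto:"):]
    return v.lower()

def plausible_year(date_str):
    """Reject obvious trash dates such as 0001-01-01 or 9999-12-31.
    The [1600, 2100] window is an arbitrary choice for the sketch."""
    try:
        year = int(date_str[:4])
    except ValueError:
        return False
    return 1600 <= year <= 2100

print(canonical_email("mailto:J...@Example.com"))  # j...@example.com
print(plausible_year("0001-01-01"))                # False
```

In a production-rule system each of these would be one rule firing on a triple before it enters the store, which is exactly the pre-reasoning cooking-down step the thread is discussing.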
Re: Microsoft Access for RDF?
On Feb 20, 2015, at 2:42 AM, Michael Brunnbauer bru...@netestate.de wrote: Hello Paul, On Thu, Feb 19, 2015 at 09:19:06PM +0100, Michael Brunnbauer wrote: Another case is where there really is a total ordering. For instance, the authors of a scientific paper might get excited if you list them in the wrong order. One weird old trick for this is RDF containers, which are specified in the XMP dialect of Dublin Core How do you bring this in line with property rdfs:range datatype, especially property rdfs:range rdf:langString? I do not see a contradiction but this makes things quite ugly. How about all the SPARQL queries that assume a literal as object and not an RDF container? Another simpler example would be property rdfs:range foaf:Person. http://xmlns.com/foaf/spec/#term_Person says that Something is a Person if it is a person. How can an RDF container of several persons be a person? According to the US Supreme Court a corporation is a person, so I would guess that a mere container would have no trouble getting past the censors. Pat If one can put a container where a container is not explicitly sanctioned by the semantics of the property, then I have missed something important. Regards, Michael Brunnbauer -- Pat Hayes, IHMC, 40 South Alcaniz St., Pensacola FL 32502; (850)434 8903 home, (850)202 4416 office, (850)202 4440 fax, (850)291 0667 mobile (preferred); pha...@ihmc.us http://www.ihmc.us/users/phayes
Re: Microsoft Access for RDF?
On 2/20/15 12:04 PM, Martynas Jusevičius wrote: Hey Michael, this one indeed. The layout is generated with XSLT from RDF/XML. The triples are grouped by resources. Not to criticize, but to seek clarity: What does the term resources refer to, in your usage context? In a world of Relations (this is what RDF is about, fundamentally) it's hard for me to understand what you mean by grouped by resources. What is the resource etc.? Within a resource block, properties are sorted alphabetically by their rdfs:labels retrieved from respective vocabularies. How do you handle the integrity of multi-user updates, without killing concurrency, using this method of grouping (which in and of itself is unclear due to the use of the resources term)? How do you minimize the user interaction space, i.e., reduce clutter -- especially if you have a lot of relations in scope, or the possibility that such becomes the reality over time? Kingsley On Fri, Feb 20, 2015 at 4:59 PM, Michael Brunnbauer bru...@netestate.de wrote: Hello Martynas, sorry! You mean this one? http://linkeddatahub.com/ldh?mode=http%3A%2F%2Fgraphity.org%2Fgc%23EditMode Nice! Looks like a template but you still may have the triple object ordering problem. Do you? If yes, how did you address it? Regards, Michael Brunnbauer On Fri, Feb 20, 2015 at 04:23:14PM +0100, Martynas Jusevičius wrote: I find it funny that people on this list and semweb lists in general like discussing abstractions, ideas, desires, prejudices etc. However when a concrete example is shown, which solves the issue discussed or at least comes close to that, it receives no response. So please continue discussing the ideal RDF environment and its potential problems while we continue improving our editor for users who manage RDF already now. Have a nice weekend everyone! On Fri, Feb 20, 2015 at 4:09 PM, Paul Houle ontolo...@gmail.com wrote: So some thoughts here. OWL, so far as inference is concerned, is a failure and it is time to move on. It is like RDF/XML. 
-- Regards, Kingsley Idehen
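The alphabetical-by-rdfs:label ordering Martynas describes could look roughly like this sketch; the label table is hand-written here, whereas a real editor would retrieve the labels from the respective vocabularies:

```python
# Sketch: within a resource block, order properties by the rdfs:label fetched
# from their vocabularies, falling back to the URI local name when no label
# is known. Label values below are invented for illustration.

labels = {
    "http://xmlns.com/foaf/0.1/name": "name",
    "http://xmlns.com/foaf/0.1/mbox": "personal mailbox",
    "http://purl.org/dc/terms/created": "Date Created",
}

def sort_key(prop_uri):
    label = labels.get(prop_uri)
    if label is None:
        # Fall back to the local name after the last '#' or '/'.
        label = prop_uri.rsplit("#", 1)[-1].rsplit("/", 1)[-1]
    return label.lower()

props = list(labels) + ["http://example.org/vocab#zzzUnlabelled"]
for p in sorted(props, key=sort_key):
    print(p)
```

Note the case-insensitive key: without `.lower()`, "Date Created" would sort before every lowercase label simply because of ASCII ordering, which is probably not what an end user expects.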
Re: Microsoft Access for RDF?
Hi All, The infrastructure used in [1,2] to get transparency and auditability may be of interest for this discussion. Thanks for comments, -- Adrian [1] www.astd.org/Publications/Magazines/The-Public-Manager/Archives/2013/Fall/Social-Knowledge-Transfer-Using-Executable-English [2] www.reengineeringllc.com/demo_agents/GrowthAndDebt1.agent On Fri, Feb 20, 2015 at 12:45 PM, Pat Hayes pha...@ihmc.us wrote: On Feb 20, 2015, at 2:42 AM, Michael Brunnbauer bru...@netestate.de wrote: Hello Paul, On Thu, Feb 19, 2015 at 09:19:06PM +0100, Michael Brunnbauer wrote: Another case is where there really is a total ordering. For instance, the authors of a scientific paper might get excited if you list them in the wrong order. One weird old trick for this is RDF containers, which are specified in the XMP dialect of Dublin Core How do you bring this in line with property rdfs:range datatype, especially property rdfs:range rdf:langString? I do not see a contradiction but this makes things quite ugly. How about all the SPARQL queries that assume a literal as object and not an RDF container? Another simpler example would be property rdfs:range foaf:Person. http://xmlns.com/foaf/spec/#term_Person says that Something is a Person if it is a person. How can an RDF container of several persons be a person? According to the US Supreme Court a corporation is a person, so I would guess that a mere container would have no trouble getting past the censors. Pat If one can put a container where a container is not explicitly sanctioned by the semantics of the property, then I have missed something important. Regards, Michael Brunnbauer
Re: Microsoft Access for RDF?
Hi Stian, Thanks for the mention :) Graham Klyne's Annalist is perhaps not quite what you are thinking of (I don't think it can connect to an arbitrary SPARQL endpoint), but I would consider it as falling under a similar category, as you have a user interface to define record types and forms, browse and edit records, with views defined for different record types. Under the surface it is, however, all RDF and REST - so you are making a schema by stealth. http://annalist.net/ http://demo.annalist.net/ Annalist is still in its prototype phase, but it's available to play with if anyone wants to try stuff. See also https://github.com/gklyne/annalist for source. There's also a Dockerized version. It's true that Annalist does not currently connect to a SPARQL endpoint, but I have recently been doing some RDF data wrangling and have started to think about how to connect to public RDF (e.g. http://demo.annalist.net/annalist/c/CALMA_data/d/ is a first attempt at creating an editable version of some music data from your colleague Sean). In this case, the record types and views have been created automatically from the raw data, and are pretty basic - but that automatic extraction can serve as a starting point for subsequent editing. (The reverse of this, creating an actual schema from the defined types and views, is an exercise for the future, or maybe even for a reader :) ) Internally, the underlying data access is isolated in a single module, intended to facilitate connecting to alternative backends, which could be via SPARQL access. (I'd also like to connect up with the linked data fragments work at some stage.) If this looks like something that could be useful to anyone out there, about now might be a good time to offer feedback. Once I have what I feel is a minimum viable product release, hopefully not too long now, I'm hoping to use feedback and collaborations to prioritize ongoing developments. #g --
Re: Microsoft Access for RDF?
Pat, as far as "a corporation is a person" goes, that is what we have foaf:Agent for. A corporation can sign contracts and be an endpoint for communication and payments the same as a person, so to model the world of law, business, finance and stuff, that is a very real thing. If you take that idea too literally, however, it conflicts with "a person is an animal" in terms of physiology, but that too can be modelled. Cristoph, the trouble with OWL is that things that almost work have a way of displacing things that do work, particularly in a community that has the incentive structures that the semweb community has. We have the problem of a really bad rep in many circles. I see people say stuff like this all the time: http://lemire.me/blog/archives/2014/12/02/when-bad-ideas-will-not-die-from-classical-ai-to-linked-data/ and I have to admit that back in 2004 I was the guy who stood in the back of the conference room and said "isn't this like the stuff they tried in the 80's that didn't work?" A lot of people believe that guff, and combine that with the road rage of people who look for US states in DBpedia and find that 3 of them got dropped on the floor, and it can be very hard to get taken seriously. Lemire's unconstructive criticism displaces real criticism, but that kind of criticism could be displaced by constructive criticism about the standards we have. For instance, I think RDF Data Shapes is a great idea, but I needed it back in 2007 and it is just astonishing to me that it took so long for it to happen. (Now I must admit I am most curious about why it is that standards for rules interchange, i.e. the RuleML family, KIF, and a few others, have had such a hard time gaining traction, whereas you find things like Drools, Blaze Advisor, and iLog running many real world systems.) 
On Fri, Feb 20, 2015 at 12:45 PM, Pat Hayes pha...@ihmc.us wrote: On Feb 20, 2015, at 2:42 AM, Michael Brunnbauer bru...@netestate.de wrote: Hello Paul, On Thu, Feb 19, 2015 at 09:19:06PM +0100, Michael Brunnbauer wrote: Another case is where there really is a total ordering. For instance, the authors of a scientific paper might get excited if you list them in the wrong order. One weird old trick for this is RDF containers, which are specified in the XMP dialect of Dublin Core How do you bring this in line with property rdfs:range datatype, especially property rdfs:range rdf:langString? I do not see a contradiction but this makes things quite ugly. How about all the SPARQL queries that assume a literal as object and not an RDF container? Another simpler example would be property rdfs:range foaf:Person. http://xmlns.com/foaf/spec/#term_Person says that Something is a Person if it is a person. How can an RDF container of several persons be a person? According to the US Supreme Court a corporation is a person, so I would guess that a mere container would have no trouble getting past the censors. Pat If one can put a container where a container is not explicitly sanctioned by the semantics of the property, then I have missed something important. Regards, Michael Brunnbauer -- Paul Houle Expert on Freebase, DBpedia, Hadoop and RDF (607) 539 6254, paul.houle on Skype, ontolo...@gmail.com http://legalentityidentifier.info/lei/lookup
1st Summer School on Smart Cities and Linked Open Data (LD4SC-15)
http://smartcity.linkeddata.es/LD4SC/ *Registration deadline: 15th March* The 1st Summer School on Smart Cities and Linked Open Data (LD4SC-15) will be held from June 7th to 12th 2015 at Residencia Lucas Olazábal of Universidad Politécnica de Madrid in Cercedilla, a municipality of the autonomous community of Madrid in central Spain. The LD4SC-15 summer school has the main goal of teaching people from industry and academia, in an easy and guided way, how to use Linked Open Data technologies in the domain of smart cities, facilitating through a simple approach a first contact with these technologies. This summer school is the first one organized on this topic worldwide and is supported by the READY4SmartCities (http://www.ready4smartcities.eu/) FP7 Coordination and Support Action. By the end of the summer school, students will: * Understand the role of Open Data and Linked Open Data in smart cities. * Know how to generate and publish Linked Open Data from some existing data source. * Know how to define and reuse vocabularies that can be used to represent Linked Data. * Know about the different existing open data portals and be able to use them. * Know the different alternatives for using Linked Data in the context of smart cities. * Have followed the whole process of generating and publishing Linked Open Data with some existing data set. With the objective of avoiding passive learning, the summer school will contain three types of lessons: * Keynotes to show novel aspects and discuss selected topics. * Theoretical lessons to introduce the basic foundations of each topic, methods, and technologies. * Hands-on sessions where students will follow the whole process of generating and publishing Linked Open Data with some existing data set. Students are encouraged to bring to the summer school some data set produced by their organizations in order to leave the summer school with the data set transformed into Linked Data. 
Confirmed Invited Speakers and Tutors = * Leandro Madrazo (Universitat Ramon Llull) * Freddy Lecue (IBM Research Dublin) * Edward Curry (Insight Centre for Data Analytics) * Jerome Euzenat (INRIA) * Pieter Pawels (Ghent University) * Alvaro Sicilia (Universitat Ramon Llull) Registration We welcome students from anywhere in the world, coming from industry or academia. Some basic acquaintance with software development and Web technologies is required. Students are expected to participate fully in the activities of the school until its conclusion. Due to the practical orientation of the summer school and to the effort needed to supervise and tutor students, the number of students in the summer school will be limited to 50. The cost of attending the LD4SC Summer School is 450€, including lectures and hands-on sessions, accommodation, meals, social events and excursion. Organizers == Raúl García Castro (Universidad Politécnica de Madrid) Dimitrios Tzovaras (Informatics and Telematics Institute - CERTH)
Re: Microsoft Access for RDF?
Hello Pat, On Fri, Feb 20, 2015 at 11:45:12AM -0600, Pat Hayes wrote: Another simpler example would be property rdfs:range foaf:Person. http://xmlns.com/foaf/spec/#term_Person says that Something is a Person if it is a person. How can an RDF container of several persons be a person? According to the US Supreme Court a corporation is a person, so I would guess that a mere container would have no trouble getting past the censors. I am seriously interested in your position on the topic. Do you say that anything goes as long as it stays satisfiable? Should I assume that some property applying to some container/collection also applies to its members (which seems to be the implicit assumption here)? Should I modify my SPARQL queries accordingly? Let me play the censor a bit more :-) Let's grant that Dan also means "legal person" by "person". But not every group of individuals acting together is a legal person. The example here was a group of people co-authoring a paper. Also, the notion that foaf:Group is a subclass of foaf:Person does not make any sense to me. Why then introduce foaf:Group at all? Regards, Michael Brunnbauer
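Michael's worry about SPARQL queries that assume a literal object can be shown with a toy matcher over tuple triples. The data and helper functions are invented for the sketch; the query text in the comment is only indicative of what a real store would run:

```python
# Sketch: a query written for literal objects silently misses values that
# have been wrapped in an rdf:Seq container. A real SPARQL version might be
#   SELECT ?c WHERE { ?paper dc:creator ?c . FILTER(isLiteral(?c)) }
# Literals are marked here by surrounding double quotes.

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
DC_CREATOR = "http://purl.org/dc/elements/1.1/creator"

triples = [
    ("ex:paper1", DC_CREATOR, '"Alice"'),   # plain literal object
    ("ex:paper2", DC_CREATOR, "_:seq1"),    # container object instead
    ("_:seq1", RDF + "_1", '"Bob"'),
    ("_:seq1", RDF + "_2", '"Carol"'),
]

def literal_creators(ts):
    """What the FILTER(isLiteral(?c)) query would return."""
    return [o for s, p, o in ts if p == DC_CREATOR and o.startswith('"')]

def all_creators(ts):
    """A query rewritten to also descend one level into containers."""
    out = []
    for s, p, o in ts:
        if p != DC_CREATOR:
            continue
        if o.startswith('"'):
            out.append(o)
        else:
            out.extend(m for s2, p2, m in ts
                       if s2 == o and p2.startswith(RDF + "_"))
    return out

print(literal_creators(triples))  # ['"Alice"'] -- misses Bob and Carol
print(all_creators(triples))      # ['"Alice"', '"Bob"', '"Carol"']
```

This is the ugliness the mail describes: every consumer has to anticipate both shapes of the data, or half the creators quietly disappear from the result set.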