Re: RDFa vs RDF/XML and content negotiation
Thank you for the excellent questions, Bill. Right now, IMHO, the best bet is probably just to pick whichever format you are most comfortable with (yup, it depends) and use that as the single source, perhaps transforming with scripts to generate the alternate representations for conneg. As far as I'm aware we don't yet have an easy templating engine for RDFa, so I suspect having that as the source is probably a good choice for typical Web applications. As mentioned already, GRDDL is available for transforming on the fly, though I'm not sure of the level of client engine support at present. Likewise, providing a SPARQL endpoint is another way of maximising the surface area of the data. But the key step has clearly been taken: the decision to publish data directly, without needing the human element to interpret it. I claim *win* for the Semantic Web, even if it'll still be a few years before we see applications exploiting it in a way that provides real benefit for the end user. My 2 cents. Cheers, Danny.
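An editorial sketch of the single-source approach Danny describes: keep the RDFa page as the master, pre-generate the alternate serializations with scripts, and have the server pick one per request. The media types are real, but the selection logic below is a deliberately simplified stand-in for full Accept-header negotiation (no wildcard subtypes, no full tie-breaking rules).

```python
# Supported representations -> extension of the pre-generated file.
# text/html is the RDFa source itself; the others are derived from it.
REPRESENTATIONS = {
    "application/rdf+xml": ".rdf",
    "text/turtle": ".ttl",
    "text/html": ".html",
}

def parse_accept(header):
    """Parse an Accept header into (media_type, q) pairs, highest q first."""
    entries = []
    for part in header.split(","):
        bits = part.strip().split(";")
        mtype = bits[0].strip()
        q = 1.0
        for param in bits[1:]:
            name, _, value = param.strip().partition("=")
            if name == "q":
                try:
                    q = float(value)
                except ValueError:
                    q = 0.0
        entries.append((mtype, q))
    entries.sort(key=lambda e: e[1], reverse=True)
    return entries

def choose_representation(accept_header, default="text/html"):
    """Return the best supported media type for the given Accept header."""
    for mtype, q in parse_accept(accept_header or "*/*"):
        if q <= 0:
            continue              # q=0 means "explicitly not acceptable"
        if mtype in REPRESENTATIONS:
            return mtype
        if mtype == "*/*":        # client accepts anything: serve the default
            return default
    return default
```

In a real deployment this decision is usually delegated to the web server (e.g. Apache type maps or mod_rewrite rules, as in Ivan's .htaccess example) rather than application code.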
Redundancy (was Re: RDFa vs RDF/XML and content negotiation)
2009/6/24 Ivan Herman i...@w3.org: With the increasing popularity of RDFa, our system guys have already complained about sudden server request surges on that service. I.e., although it is fine to use the service as it is in the .htaccess example (with full URIs, though), if you (or anybody else) use it with a large number of calls, it is better to install the service locally and run it from there (it is a bunch of python files, it should not be difficult to install). Ivan, do you know of any easy, transparent way for an agent to choose another equivalent service if there are load issues? -- http://danny.ayers.name
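There is no standard, transparent mechanism for an agent to fail over between equivalent services (as Ivan confirms later in the thread), but the pattern Danny is asking about can be sketched as an ordered mirror list with a health probe. The mirror URIs and the is_healthy callable here are illustrative assumptions, not real endpoints or a real API.

```python
def pick_service(mirrors, is_healthy):
    """Return the first mirror that passes the health check, else None.

    mirrors    -- ordered list of service base URIs (preferred first)
    is_healthy -- callable taking a URI, returning True if usable
    """
    for uri in mirrors:
        try:
            if is_healthy(uri):
                return uri
        except Exception:
            continue  # treat a failing probe as an unhealthy mirror
    return None
```

In practice is_healthy might issue a HEAD request with a short timeout; the caller decides how aggressively to retry the preferred mirror.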
Re: LOD Data Sets, Licensing, and AWS
Hi, 2009/6/23 Kingsley Idehen kide...@openlinksw.com: All, As you may have noticed, AWS still haven't made the LOD cloud data sets -- that I submitted eons ago -- public. Basically, the hold-up comes down to discomfort with the lack of license clarity re. some of the data sets. Yes, this is an issue that Amazon mentioned when I discussed mirroring data from the Connected Commons with them a few months ago. It's a reasonable concern as, being a large organization, they are the obvious target for any potential lawsuit w.r.t. licensing or copyright infringement. Other organizations may have similar concerns, and we need to anticipate that. I'm glad that this issue is starting to get more attention, and there's been some useful discussion so far. Licensing and rights waivers are topics that need to be addressed if we are to move forward with building a sustainable infrastructure that can be reliably and legally used for both commercial and non-commercial purposes. As Ian mentioned, a tutorial proposal has been submitted to ISWC by representatives of the Open Data Commons, Science Commons, and Talis on precisely these topics, and will cover both legal and social frameworks that relate to open data publishing. I hope that we'll also be able to provide some clear advice on what is and isn't covered by copyright and database licensing law, to ensure that people scraping and converting facts from existing websites have a clearer understanding of what they legally can and can't do. I think as the discussion proceeds we need to be clear about several different issues: what mechanisms exist for waiving or granting licenses to data and content and their applicability, and the social norms that should underpin a community of good data reusers; attribution is one of these. At the moment many datasets are either not explicitly licensed or incorrectly licensed, e.g. using a CC-By-SA license for data.
The latter typically expresses the wishes or intentions of the data publisher (please acknowledge my efforts) but is not legally enforceable. Cheers, L. -- Leigh Dodds Programme Manager, Talis Platform Talis leigh.do...@talis.com http://www.talis.com
Re: LOD Data Sets, Licensing, and AWS
On 24 Jun 2009, at 00:04, Peter Ansell wrote: 2009/6/24 Ian Davis li...@iandavis.com: On Tue, Jun 23, 2009 at 11:11 PM, Kingsley Idehen kide...@openlinksw.com wrote: Using licensing to ensure the data provider's URIs are always preserved delivers low cost and implicit attribution. This is what I believe CC-BY-SA delivers. There is nothing wrong with granular attribution if compliance is low cost. Personally, I think we are on the verge of an Attribution Economy, and said economy will encourage contributions from a plethora of high quality data providers (esp. from the traditional media realm). Regardless of any attribution economy, CC-BY-SA is basically unenforceable for data, so is not appropriate. You can't copyright the diameter of the moon. Ian Interestingly, there is a large economy involved with patenting gene sequences. Aren't they facts also? Why is patenting different to copyright in this respect? #random_aside_about_copyright_and_patent Patents and Copyright differ in many respects. Firstly, Copyright protection is given to creative works automatically, with no need to register. Simply by authoring something that shows a basic level of creative expression, I am granted Copyright protection over that work. This is fairly uniform throughout countries that trade with the US, as the US has pushed very hard to unify the protection of its own IP globally. Copyright only applies to the work I've done, though; characters, ideas and many other aspects are not covered. Patents, on the other hand, require a successful patent application, and (though this is debatable in many cases) a rigorous set of rules about the novelty of the invention is applied. In the case of gene sequences it is not the sequence alone that is patented, but an inventive description of the possible treatments, cures or other benefits of manipulating the gene (http://www.guardian.co.uk/science/2000/nov/15/genetics.theissuesexplained ). That is, Patent protection covers the idea where Copyright does not.
The other major difference is in how they apply to what you do. If you create something that is very similar to somebody else's work, but can show that the original work was not referenced in any way, then you have not infringed the copyright of that work (of course, that's difficult to show). With a patent, however, the idea is protected exclusively for the original inventor, even if you came up with the same idea completely independently. rob Cheers, Peter Rob Styles tel: +44 (0)870 400 5000 fax: +44 (0)870 400 5001 mobile: +44 (0)7971 475 257 msn: m...@yahoo.com irc: irc.freenode.net/mrob,isnick web: http://www.talis.com/ blog: http://www.dynamicorange.com/blog/ blog: http://blogs.talis.com/panlibus/ blog: http://blogs.talis.com/nodalities/ blog: http://blogs.talis.com/n2/
Re: http://ld2sd.deri.org/lod-ng-tutorial/
While we could have countless arguments over the appropriateness of DL (or OWL 2) in the Web environment, the bottom line is whether or not owl:imports adds useful information - it seems hard to see a problem with that, whether agents can reason or not. The follow-your-nose thing. What's the problem with more data? -- http://danny.ayers.name
Re: Redundancy (was Re: RDFa vs RDF/XML and content negotiation)
2009/6/24 Ivan Herman i...@w3.org: Unfortunately, no:-( concise, but to the point, thanks :) -- http://danny.ayers.name
Re: LOD Data Sets, Licensing, and AWS
On Jun 23, 2009, at 7:04 PM, Peter Ansell wrote: Interestingly, there is a large economy involved with patenting gene sequences. Aren't they facts also? Why is patenting different to copyright in this respect? It isn't. I don't know of any gene sequence patent that was just that and withstood being challenged in court. The gene sequence patents that I'm aware of and that are active aren't for the sequence, but for an application of the sequence, such as a diagnostic of a certain disease, or a drug target for a certain indication, or a biological therapeutic. Those kinds of discoveries aren't typically facts of nature, and hence are eligible for intellectual property. -hilmar -- === : Hilmar Lapp -:- Durham, NC -:- hlapp at gmx dot net : ===
RE: RDFa vs RDF/XML and content negotiation
Ivan, Thanks very much. I'll take a look at your python scripts, which should be very useful. Cheers Bill From: Ivan Herman [mailto:i...@w3.org] Sent: Wed 24-6-2009 9:14 To: Bill Roberts CC: public-lod@w3.org Subject: Re: RDFa vs RDF/XML and content negotiation Bill, a while ago I wrote a blog post on how I do it on the Semantic Web Activity home page: http://www.w3.org/QA/2008/05/using_rdfa_to_add_information.html The post is from the early days of RDFa; some of the specific issues may be different today (see below), but the overall line, I believe, works well. It may be helpful... What is different or should be different: - The .htaccess example refers to the RDFa distiller at W3C (which, well, I wrote, so of course I had to eat my own dogfood:-). With the increasing popularity of RDFa, our system guys have already complained about sudden server request surges on that service. I.e., although it is fine to use the service as it is in the .htaccess example (with full URIs, though), if you (or anybody else) use it with a large number of calls, it is better to install the service locally and run it from there (it is a bunch of python files, it should not be difficult to install). (Of course, an alternative is to run the script only once, when updating the html file. But, if not done manually, this needs some server magic...) - I use http://www.w3.org/2001/sw/ as an example, though _that_ one has changed a little bit and is more complicated today. (Essentially, the HTML file has become too large and I had to cut it into several files, so I have to merge the RDF graphs. This is something different...) Cheers Ivan Bill Roberts wrote: Thanks everyone who replied. It seems that there's a lot of support for the RDFa route in that (perhaps not statistically significant) sample of opinion.
But to summarise my understanding of your various bits of advice: since there aren't currently so many applications out there consuming RDF, a good RDF publisher should provide as many options as possible. Therefore, rather than deciding for either RDFa or a content-negotiated approach, why not do both (and provide a dump file too)? Cheers Bill -- Ivan Herman, W3C Semantic Web Activity Lead Home: http://www.w3.org/People/Ivan/ mobile: +31-641044153 PGP Key: http://www.ivan-herman.net/pgpkey.html FOAF: http://www.ivan-herman.net/foaf.rdf
Re: RDFa vs RDF/XML and content negotiation
Ivan, two words: more python! 2009/6/24 bill.robe...@planet.nl: Ivan, Thanks very much. I'll take a look at your python scripts, which should be very useful. Cheers Bill [snip - quote of Ivan's earlier message] -- http://danny.ayers.name
Re: LOD Data Sets, Licensing, and AWS
On Wed, Jun 24, 2009 at 4:05 PM, Kingsley Idehen kide...@openlinksw.com wrote: My comments are still fundamentally about my preference for CC-BY-SA. Hence the transcopyright reference :-) I want Linked Data to have its GPL equivalent; a license scheme that: Have you read the licenses at http://opendatacommons.org/ ? Ian
Re: LOD Data Sets, Licensing, and AWS
2009/6/24 Kingsley Idehen kide...@openlinksw.com: My comments are still fundamentally about my preference for CC-BY-SA. Hence the transcopyright reference :-) Unfortunately, your preference doesn't actually make it legally applicable to data and databases. The problem, as I see it, at the moment is that this is what the majority of people are doing: using a CC license to capture their desire or intent with respect to licensing, rights waivers, attribution, intended uses, etc. The disconnect is between what people want to do with the license, and what's actually supported in law. I want Linked Data to have its GPL equivalent; a license scheme that: 1. protects the rights of data contributors; 2. is easy to express; 3. is easy to adhere to; 4. is easy to enforce. Then the best way to do this is to engage with the communities that are attempting to do exactly that: the Open Data Commons and Creative Commons. We shouldn't be encouraging people to do the wrong thing and use licenses and waivers that don't actually do what they want them to do. The Science Commons protocol is a good example of best practices w.r.t. data licensing that are being agreed to within a specific community; one that has a long-standing culture of citation and attribution. IMHO much of the advice and reasoning that has gone into the definition and publishing of the Science Commons protocol is applicable to the web of data as a whole. Convergence on a commons -- which can still support and encourage attribution through community norms -- is a Good Thing. As I stated during one of the Semtech 2009 sessions, HTTP URIs provide a closed loop re. the above. When you visit my data space you leave your fingerprints in my HTTP logs. I can follow the log back to your resources to see if you are conforming with my terms. I can compare the data in your resource against mine and sniff out whether you are attributing your data sources (what you got from me) correctly.
If all the major media companies grok the above, there will be far less resistance to publishing linked data, since they will actually have a better comprehension of its inherent virtues and positive impact on their bottom line. I'm not sure that understanding the value of a unique URI for every resource, and the benefits of a larger surface area for their website, is the primary barrier to entry for those companies. One might build similar arguments around SEO and APIs. IMO, the understanding has to come through the network effects created by opening up the data for the widest possible reuse. Clear and liberal licensing is a part of that. Cheers, L. -- Leigh Dodds Programme Manager, Talis Platform Talis leigh.do...@talis.com http://www.talis.com
Re: LOD Data Sets, Licensing, and AWS
Leigh Dodds wrote: [snip - quote of Leigh's message above] To save time, etc.: What is the URI of a license that effectively enables data publishers to express and enforce how they are attributed? Whatever that is, I am happy with it. Whatever that is will be vital to attracting curators of high quality data to the LOD fold. If you have an example URI, even better. [snip] Take a look at Freebase, and how they are effectively doing what I espouse. Google uses Freebase URIs, and they attribute by URI. I see Freebase using CC-BY-SA to effectively propagate their URIs. I also see all consumers of Freebase URIs honoring the terms without any issues. Kingsley -- Regards, Kingsley Idehen Weblog: http://www.openlinksw.com/blog/~kidehen President CEO OpenLink Software Web: http://www.openlinksw.com
Re: Contd LOD Data Sets, Licensing, and AWS
Leigh Dodds wrote: Hi, 2009/6/24 Kingsley Idehen kide...@openlinksw.com: Kingsley Idehen wrote: Leigh Dodds wrote: Hi, 2009/6/24 Kingsley Idehen kide...@openlinksw.com: [snip - Kingsley's "To save time" question, quoted above] You can choose from several at http://www.opendatacommons.org/ Take a look at Freebase, and how they are effectively doing what I espouse. Google uses Freebase URIs, and they attribute by URI. I have. I've read the licensing, terms and policies of a number of different websites. I see Freebase using CC-BY-SA to effectively propagate their URIs. I also see all consumers of Freebase URIs honoring the terms without any issues. Really? I'm not trying to be unfair, but where on: You're not being unfair. We are trying to get to the bottom of something that is really important. http://lod.openlinksw.com/ Or http://lod.openlinksw.com/describe/?url=http%3A%2F%2Ffreebase.com%2Fguid%2F9202a8c04000641f883d84dd The URIs are in full view. Use a Linked Data aware user agent against the URIs and you end up in the originating Freebase data space. This is my fundamental point re. preservation of original URIs. The fact that the URIs are in full view accords with your view of the URI as the sole means of attribution, but it's irrelevant as far as the Freebase terms go. Where's the text, logo, etc. that they're asking for? That's how the rights holder is asking to be attributed. I stand by my position: we are adhering to their terms. What they seek is de-referenceable via their URIs, which remain in scope at both the data presentation and representation layers. I am sure Jamie and the folks at Freebase are party to this conversation and would chime in should we be violating the terms of their license, etc. re: specific ODC license.
I think the ODBL license does what you want. Or PDDL with specified community norms. ODBL license URI please. Kingsley Cheers, L. -- Regards, Kingsley Idehen Weblog: http://www.openlinksw.com/blog/~kidehen President CEO OpenLink Software Web: http://www.openlinksw.com
Re: LOD Data Sets, Licensing, and AWS
Alan Ruttenberg wrote: Kingsley, Encouraging attribution by URI is a bad idea because it encourages people or organizations to create URIs where perfectly good ones exist, solely so that they can get their attribution. Were this no cost, I wouldn't mind. But having more than one URI for a resource causes real trouble for data integration. Let's try to look at this matter slightly differently, putting some of the labels in this conversation to one side for a second. Scenario: I am the New York Times or the Times of London, and I've decided to expose my treasure troves to the Web (high quality data assembled since day one of our existence) in line with the guidelines intrinsic to the Linked Data meme. But I am wary of the fact that anyone can come along to my newly unveiled Linked Data space, grab my data, and reconstitute it in a new Linked Data space on the Web without any reference back to me. Incidentally, there is a legal difference between attribution and citation. Virtually all of academic credit is based on citation, not attribution. Hence my request to put the labels aside (above). It might be that what I am seeking via HTTP URIs is a citation/attribution hybrid (like the Reference/Access duality inherent to HTTP URIs re. the Linked Data meme) that acknowledges data sources via their originating URIs, thereby bringing citation and attribution together coherently. Ultimately, owners of high quality databases have to realize the following re. their data and publication on the Web: 1. Separation of value from medium of value exchange; 2. HTTP URIs are effective mediums of value exchange on the Web. The NYT, London Times, and others of this ilk are more likely to contribute their quality data to the LOD cloud if they know there is a vehicle (e.g., a license scheme) that ensures their HTTP URIs are protected, i.e., always accessible to user agents at the data representation (HTML, XML, N3, RDF/XML, Turtle, etc.)
level, thereby ensuring citation and attribution requirements are honored. Attribution is the kind of thing one gives as the result of a license requirement, in exchange for permission to copy. In the academic world, for journal articles, this doesn't come into play at all, since there is no copying (in the usual case). Instead, people cite articles because the norms of their community demand it. Yes, and the HTTP URI ultimately delivers the kind of mechanism I believe most traditional media companies seek (as stated above). They ultimately want people to use their data, with low cost citation and attribution intrinsic to the medium of value exchange. btw - how are you dealing with this matter re. the neurocommons.org linked data space? How do you ensure your valuable work is fully credited as it bubbles up the value chain? -Alan -- Regards, Kingsley Idehen Weblog: http://www.openlinksw.com/blog/~kidehen President CEO OpenLink Software Web: http://www.openlinksw.com
The Public Domain (was Re: LOD Data Sets, Licensing, and AWS)
On Wed, Jun 24, 2009 at 9:56 PM, Kingsley Idehen kide...@openlinksw.com wrote: The NYT, London Times, and others of this ilk are more likely to contribute their quality data to the LOD cloud if they know there is a vehicle (e.g., a license scheme) that ensures their HTTP URIs are protected, i.e., always accessible to user agents at the data representation (HTML, XML, N3, RDF/XML, Turtle, etc.) level, thereby ensuring citation and attribution requirements are honored. I agree with that, but it only covers a small portion of what is needed. You fail to consider the situations where people publish data about other people's URIs, as reviews or annotation. The foaf:primaryTopic mechanism isn't strong enough if the publisher requires full attribution for use of their data. If I use SPARQL to extract a subset of reviews to display on my site, then in all likelihood I have lost that linkage with the publishing document. Attribution is the kind of thing one gives as the result of a license requirement, in exchange for permission to copy. In the academic world, for journal articles, this doesn't come into play at all, since there is no copying (in the usual case). Instead, people cite articles because the norms of their community demand it. Yes, and the HTTP URI ultimately delivers the kind of mechanism I believe most traditional media companies seek (as stated above). They ultimately want people to use their data, with low cost citation and attribution intrinsic to the medium of value exchange. The BBC is a traditional media company. Its data is licensed only for personal, non-commercial use: http://www.bbc.co.uk/terms/#3 btw - how are you dealing with this matter re. the neurocommons.org linked data space? How do you ensure your valuable work is fully credited as it bubbles up the value chain?
I found this linked from the RDF Distribution page on neurocommons.org : http://svn.neurocommons.org/svn/trunk/product/bundles/frontend/nsparql/NOTICES.txt Everyone should read it right now to appreciate the complexity of aggregating data from many sources when they all have idiosyncratic requirements of attribution. Then read http://sciencecommons.org/projects/publishing/open-access-data-protocol/ to see how we should be approaching the licensing of data. It explains in detail the motivations for things like CC-0 and PDDL which seek to promote open access for all by removing restrictions: Thus, to facilitate data integration and open access data sharing, any implementation of this protocol MUST waive all rights necessary for data extraction and re-use (including copyright, sui generis database rights, claims of unfair competition, implied contracts, and other legal rights), and MUST NOT apply any obligations on the user of the data or database such as “copyleft” or “share alike”, or even the legal requirement to provide attribution. Any implementation SHOULD define a non-legally binding set of citation norms in clear, lay-readable language. Science Commons have spent a lot of time and resources to come to this conclusion, and they tried all kinds of alternatives such as attribution and share alike licences (as did Talis). The final consensus was that the public domain was the only mechanism that could scale for the future. Without this kind of approach, aggregating, querying and reusing the web of data will become impossibly complex. This is a key motivation for Talis starting the Connected Commons programme ( http://www.talis.com/platform/cc/ ). We want to see more data that is unambiguously reusable because it has been placed in the public domain using CC-0 or the Open Data Commons PDDL. So, I urge everyone publishing data onto the linked data web to consider waiving all rights over it using one of the licenses above. 
As Kingsley points out, you will always be attributed via the URIs you mint. Ian PS. This was the subject of my keynote at code4lib 2009, "If you love something, set it free", which you can view here: http://www.slideshare.net/iandavis/code4lib2009-keynote-1073812
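Ian's point about losing the linkage to the publishing document when extracting a subset can be made concrete with a toy model: if each triple is stored as a quad carrying its source document, any extracted subset still knows which URIs to attribute. This plain-Python class is purely illustrative (the ex: names and example URLs are made up); a real store would use SPARQL named graphs or an rdflib-style quad store for the same effect.

```python
class QuadStore:
    """Toy store keeping the source document alongside each triple."""

    def __init__(self):
        self.quads = []   # list of (subject, predicate, obj, source_doc)

    def add(self, s, p, o, source):
        self.quads.append((s, p, o, source))

    def extract(self, predicate):
        """Select triples by predicate, keeping their source documents."""
        return [(s, p, o, src) for s, p, o, src in self.quads if p == predicate]

    def sources(self, subset):
        """The set of publishing documents to attribute for a subset."""
        return {src for _, _, _, src in subset}
```

If the extraction step only returned bare (s, p, o) triples, the sources() step would be impossible, which is exactly the lost linkage Ian describes.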
Re: Contd LOD Data Sets, Licensing, and AWS
Ian Davis wrote: On Wed, Jun 24, 2009 at 7:40 PM, Kingsley Idehen kide...@openlinksw.com wrote: I stand by my position: we are adhering to their terms. What they seek is de-referenceable via their URIs, which remain in scope at both the data presentation and representation layers. I am sure Jamie and the folks at Freebase are party to this conversation and would chime in should we be violating the terms of their license, etc. I think the onus is on the consumer to ensure they abide by the supplier's wishes, not the other way round. It's really a matter of respect and politeness to give people the credit they ask for. Sadly, there lies the root of most problems re. present and prior economies :-) We end up doing the wrong thing for a myriad of reasons, and the net result is a completely broken value chain. I believe you can define terms of data use and enforce them at minimum cost, courtesy of HTTP URIs. We've done it with software (eons ago, re. our data access drivers) and it will also work fine for Linked Data, and on this statement I am ready to stake anything :-) re: specific ODC license. I think the ODBL license does what you want. Or PDDL with specified community norms. ODBL license URI please. http://www.opendatacommons.org/licenses/odbl/ I'll take a look. Kingsley Ian -- Regards, Kingsley Idehen Weblog: http://www.openlinksw.com/blog/~kidehen President CEO OpenLink Software Web: http://www.openlinksw.com
Re: Contd LOD Data Sets, Licensing, and AWS
2009/6/25 Ian Davis li...@iandavis.com: I think the onus is on the consumer to ensure they abide by the supplier's wishes, not the other way round. It's really a matter of respect and politeness to give people the credit they ask for. Certainly in principle, but the supplier should know what they are doing. It would be their loss, after all. -- http://danny.ayers.name