Hosting your WebID with WordPress
Hi folks, I am glad to announce version 0.3 of my WordPress plugin wp-linked-data. The new version allows you to host a WebID profile within your WordPress blog. If you already have one, you can instead link it with your blog account. You may add a public key in your profile section, so that your WordPress-hosted WebID can be used to authenticate on the web. It is also possible to add custom RDF triples. I hope you find this useful! The plugin can be found in the plugin repository: http://wordpress.org/plugins/wp-linked-data/ Best regards, Angelo
Re: Publishing Wordpress contents as linked data
On 20.04.2013 10:25, Angelo Veltens wrote: The plugin is not yet available in the WordPress plugin repo, but on GitHub: https://github.com/angelo-v/wp-linked-data Now it is available in the official repository. Just search for wp-linked-data or Linked Data in your WordPress admin backend. Plugin site: http://wordpress.org/extend/plugins/wp-linked-data/ Have fun! Best regards, Angelo
Re: Fwd: Publishing Wordpress contents as linked data
On 22.04.2013 13:18, Phillip Lord wrote: Interesting. I've been meaning to get Wordpress content negotiating for a long time, so I shall take a look at this to see how you have done it. I hook into the 'wp' action to do my work, before anything is rendered: https://github.com/angelo-v/wp-linked-data/blob/master/src/wp-linked-data.php#L36 The RequestInterceptor then does the content negotiation and responds with RDF if negotiated; otherwise it lets WordPress do its normal work: https://github.com/angelo-v/wp-linked-data/blob/master/src/request/RequestInterceptor.php#L35 You might also be interested in a couple of tools that we have written. Kblog-metadata also adds metadata in a variety of formats to wordpress, with support for per-post authors, container titles (and dates, although this is not released yet). http://wordpress.org/extend/plugins/kblog-metadata/ Thanks, will take a look at it [...] We weren't able to scrape that much from your page, incidentally. http://greycite.knowledgeblog.org/?uri=http://datenwissen.de/2013/04/wordpress-bloginhalte-als-linked-data/ We need to check for content negotiation; I'm not clear, though, how we are supposed to know what forms of content are available. Is there any way we can tell from your website that content negotiation is possible? Content negotiation is always done between the client and the server. The question is: what's in your Accept header? If you prefer HTML you will get HTML. If you want other types, like 'application/rdf+xml', ask for it and you might get it. I don't know if there is a way to ask the server "hey, before I request something, what content types do you support for the resource http://...?" Perhaps anyone else here can answer this? Would it also be possible to add some meta tag to the HTML to inform about other representations? Best regards, Angelo
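The decision the RequestInterceptor makes can be sketched as a small pure function. This is a simplified illustration in Python, not the plugin's actual PHP code; the function name and the simplification (ignoring quality values and wildcards) are mine:

```python
# Media types the plugin can serve as RDF (per the posts above).
RDF_TYPES = ['text/turtle', 'application/rdf+xml']

def negotiate(accept_header):
    """Return the RDF type to serve, or None to let WordPress render HTML.

    Very simplified sketch: picks the first offered RDF type that appears
    verbatim in the Accept header; ignores q-values and wildcards.
    """
    accepted = [part.split(';')[0].strip() for part in accept_header.split(',')]
    for rdf_type in RDF_TYPES:
        if rdf_type in accepted:
            return rdf_type
    return None  # fall through to normal HTML rendering

print(negotiate('text/turtle'))                      # → text/turtle
print(negotiate('text/html,application/xhtml+xml'))  # → None
```

The point of the sketch is the fall-through: if no RDF type is asked for, the interceptor does nothing and WordPress serves its usual HTML.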
Publishing Wordpress contents as linked data
Hi, I coded a small WordPress plugin that enables linked data publishing of blog post and author data. The plugin is installed on my blog http://datenwissen.de. Feel free to request Linked Data via an application/rdf+xml or text/turtle Accept header. The data of my latest blog post in the QD RDF Browser: http://graphite.ecs.soton.ac.uk/browser/?uri=http%3A%2F%2Fdatenwissen.de%2F2013%2F04%2Fwordpress-bloginhalte-als-linked-data%23it Blog authors get a FOAF profile that I plan to extend to a fully functional WebID: http://graphite.ecs.soton.ac.uk/browser/?uri=http%3A%2F%2Fdatenwissen.de%2Fauthor%2Fangelo%23me The plugin is not yet available in the WordPress plugin repo, but on GitHub: https://github.com/angelo-v/wp-linked-data Contributions and feedback are welcome. Kind regards, Angelo Veltens
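To try the negotiation from code rather than a browser, setting an explicit Accept header is all that is needed. A sketch using Python's standard library; the URL is the blog mentioned above, and the actual fetch is left commented out:

```python
import urllib.request

# Ask the blog for Turtle instead of HTML (or 'application/rdf+xml').
req = urllib.request.Request(
    'http://datenwissen.de/',
    headers={'Accept': 'text/turtle'},
)

# The header is what drives the server-side negotiation; the fetch itself
# would simply be:
#   with urllib.request.urlopen(req) as resp:
#       body = resp.read()

print(req.get_header('Accept'))  # → text/turtle
```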
Re: Publishing Wordpress contents as linked data
On 20.04.2013 13:26, KANZAKI Masahide wrote: Hi, thanks for the nice plugin. Content negotiation seems quite useful. Re the post_resource description: there is no such property as dc:content in the dcterms namespace, unfortunately. You may want to use something like schema:articleBody instead. Yeah, you are totally right... Where the hell did I get that from? oO I will re-evaluate the used ontologies and predicates for the next releases; I focused on the general architecture for the first release. If anyone sees further problems concerning ontologies, please add a comment to this issue: https://github.com/angelo-v/wp-linked-data/issues/1 Or open a new one if something else is wrong ;-) Looking forward to seeing progress in the plugin. Me too :-) Contributions are welcome. The next step for me is to allow users to publish an RSA public key in their profiles, so that they can use it as a WebID. But this will take some time, since I have much other work to do in the next weeks. If anyone has further ideas I appreciate pull requests on GitHub. Best regards, Angelo
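For illustration, the suggested fix might look like this in Turtle. This is a hypothetical sketch, not output of the plugin; the subject URI is the post resource from the earlier announcement and the body text is a placeholder:

```turtle
@prefix schema: <http://schema.org/> .

<http://datenwissen.de/2013/04/wordpress-bloginhalte-als-linked-data#it>
    schema:articleBody "The full text of the blog post ..." .
```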
groovyrdf 0.2 released
Hi all, I've just released groovyrdf version 0.2, a library to build and consume RDF with the Groovy programming language. In addition to building RDF, it is now possible to read and process RDF data from linked data resources in an easy manner:

// Declare namespace
def foaf = new RdfNamespace('http://xmlns.com/foaf/0.1/')

// Load resource via RdfLoader
RdfLoader rdfLoader = new JenaRdfLoader()
RdfResource person = rdfLoader.loadResource('http://me.desone.org/person/aveltens#me')

println person(foaf.name) // Prints 'Angelo Veltens'

RdfLoader takes care of content negotiation and will load any RDF data it can get for the given URI. No matter if the server returns Turtle, RDF/XML or any other common syntax: groovyrdf will handle it for you and provide access to the data over an easy-to-use API. For further details take a look at the user guide [1], the source code [2], or ask me any questions. The groovyrdf JAR file can be downloaded from [3]. [1] http://angelo-v.github.com/groovyrdf/ [2] https://github.com/angelo-v/groovyrdf [3] http://datenwissen.de/groovyrdf Those of you who speak German may also take a look at my blog post about it: http://datenwissen.de/2012/12/groovyrdf-0-2-veroffentlicht/ Kind regards and happy coding, Angelo Veltens
RDF the groovy way - Domain-specific language for building RDF with Groovy
Hi all! I just released a Groovy library for building RDF data in a groovy way. Example:

RdfData rdfData = rdfBuilder {
  "http://example.com/resource/alice" {
    a "http://example.com/vocab/Person"
    "http://example.com/vocab/name" "Alice"
  }
}

is equivalent to the following RDF in Turtle syntax:

<http://example.com/resource/alice>
  a <http://example.com/vocab/Person> ;
  <http://example.com/vocab/name> "Alice" .

The benefit is that you can use all the features of the Groovy language to build your RDF dynamically. Imagine something like the following:

def person = new Person (...)

RdfData rdfData = rdfBuilder {
  "http://example.com/resource/${person.nick}" {
    a "http://example.com/vocab/Person"
    "http://example.com/vocab/name" person.name
    "http://example.com/vocab/knows" {
      person.friends.each { friend ->
        "http://example.com/resource/${friend.nick}" {}
      }
    }
  }
}

More examples and explanations can be found in the user guide: http://datenwissen.de/projekte/groovyrdf/userguide/ The source code is available at GitHub (participation welcome): https://github.com/angelo-v/groovyrdf I am awaiting your feedback! Kind regards, Angelo Veltens
Re: Linked Data Thesaurus online
On 12.11.2010 12:25, Angelo Veltens wrote: Hi there, I have used skos:Concept for the synonym sets now and skos-xl:Label for the terms. I linked one of the terms as prefLabel to a synset, and its synonyms as altLabel. But you can try it yourself - it's online! Example query: http://thesaurus.datenwissen.de/offen#term Do not forget to change the Accept header to application/rdf+xml or text/turtle. Otherwise you will get the original XML data from openthesaurus.de. Awaiting your feedback :-) Opinions, anyone? ;-) Kind regards, Angelo
Linked Data Thesaurus online (was: Re: synonym / thesaurus data)
Hi there, I have used skos:Concept for the synonym sets now and skos-xl:Label for the terms. I linked one of the terms as prefLabel to a synset, and its synonyms as altLabel. But you can try it yourself - it's online! Example query: http://thesaurus.datenwissen.de/offen#term Do not forget to change the Accept header to application/rdf+xml or text/turtle. Otherwise you will get the original XML data from openthesaurus.de. Awaiting your feedback :-) Kind regards, Angelo Am 21.10.2010 15:25, schrieb Andreas Blumauer (punkt. netServices): Hi Angelo, great idea to publish openthesaurus as linked data! Regarding your question, here are my opinions: 1) should skos:Concept be used for a term? Yes! 2) is skos:closeMatch a good predicate to define synonyms? Synonyms should rather be expressed via skos:altLabel (see: http://www.w3.org/TR/skos-reference/skos.html#altLabel) since each concept has exactly one URI but may have one or many altLabels 3) can skos:closeMatch relate to skos:Collection or only to other skos:Concepts? Only to other skos:Concepts - domain and range of closeMatch are both skos:Concept Greetings, Andreas *From: *Angelo Veltens angelo.velt...@online.de *To: *public-...@w3.org *Sent: *Thursday, 21 October 2010 15:16:57 *Subject: *synonym / thesaurus data Hi, I am going to transform the data from http://openthesaurus.de to linked data and want to discuss how to organize it. openthesaurus.de is a German thesaurus that can expose its data as XML. Example request for the term "lustig" (funny): Web access: http://www.openthesaurus.de/synonyme/search?q=lustig API access: http://www.openthesaurus.de/synonyme/search?q=lustig&format=text/xml In the XML, the synonyms are grouped in synsets. Each term in the synsets is a synonym of the requested term, but the different synsets have different meanings.
This is my idea to model this as linked data (example at the end of the mail): A term is identified like this: http://localhost:8080/thesaurus/lustig#term URI of the RDF document: http://localhost:8080/thesaurus/lustig I model a term as a skos:Concept. I group the terms of a synset in a skos:Collection. I relate these collections with a skos:closeMatch to the requested term. What I am not sure about: 1) should skos:Concept be used for a term? 2) is skos:closeMatch a good predicate to define synonyms? 3) can skos:closeMatch relate to skos:Collection or only to other skos:Concepts? Kind regards, Angelo Example:

<#term> a <http://www.w3.org/2004/02/skos/core#Concept> ;
  <http://www.w3.org/2004/02/skos/core#prefLabel> "lustig" ;
  <http://www.w3.org/2004/02/skos/core#closeMatch> [
    a <http://www.w3.org/2004/02/skos/core#Collection> ;
    <http://www.w3.org/2004/02/skos/core#member>
      <http://localhost:8080/thesaurus/possierlich#term> ,
      <http://localhost:8080/thesaurus/drollig#term> ,
      <http://localhost:8080/thesaurus/herzig#term>
  ] ;
  <http://www.w3.org/2004/02/skos/core#closeMatch> [
    a <http://www.w3.org/2004/02/skos/core#Collection> ;
    <http://www.w3.org/2004/02/skos/core#member>
      <http://localhost:8080/thesaurus/fidel#term> ,
      <http://localhost:8080/thesaurus/beschwingt#term> ,
      <http://localhost:8080/thesaurus/mopsfidel#term>
  ] ;
  <http://www.w3.org/2004/02/skos/core#closeMatch> [
    a <http://www.w3.org/2004/02/skos/core#Collection> ;
    <http://www.w3.org/2004/02/skos/core#member>
      <http://localhost:8080/thesaurus/lustig#term> ,
      <http://localhost:8080/thesaurus/humorig#term> ,
      <http://localhost:8080/thesaurus/scherzhaft#term>
  ] .
Re: ANNOUNCE: lod-announce list
Hi, Ian Davis wrote: Hi all, Now we are getting a steady growth in the number of Linked Data sites, products and services I thought it was time to create a low-volume announce list for Linked Data related announcements so people can keep up to date without needing to wade through the LOD discussion. You can join the list at http://groups.google.com/group/lod-announce Sounds fine, but is it possible to subscribe to the list without a Google account? Kind regards, Angelo
Re: ANNOUNCE: lod-announce list
Dan Brickley wrote: On Sun, Jun 13, 2010 at 7:44 PM, Angelo Veltens angelo.velt...@online.de wrote: Hi, Ian Davis wrote: Hi all, Now we are getting a steady growth in the number of Linked Data sites, products and services I thought it was time to create a low-volume announce list for Linked Data related announcements so people can keep up to date without needing to wade through the LOD discussion. You can join the list at http://groups.google.com/group/lod-announce Sounds fine, but is it possible to subscribe to the list without a Google account? Yes. The Google Groups site doesn't make it particularly easy to find from the lod-announce group homepage, but see http://groups.google.com/support/bin/answer.py?answer=46606cbid=-o2vzb2h0iyxwsrc=cblev=index Thanks for the hint! But one of the moderators added me directly. Kind regards, Angelo
Re: Organization ontology
Dave Reynolds wrote: We would like to announce the availability of an ontology for description of organizational structures including government organizations. Great! This comes just in time :-) I was just looking for something like that. I'll take a deeper look at it. Kind regards, Angelo
Cool URIs (was: Re: Java Framework for Content Negotiation)
On 27.05.2010 15:51, Richard Cyganiak wrote: On 27 May 2010, at 10:47, Angelo Veltens wrote: What I am going to implement is this: http://www.w3.org/TR/cooluris/#r303uri I think this is the way DBpedia works, and it seems a good solution to me. It's the way DBpedia works, but it's by far the worst solution of the three presented in the document. DBpedia has copied the approach from D2R Server. The person who came up with it and designed and implemented it for D2R Server is me. This was back in 2006, before the term Linked Data was even coined, so I didn't exactly have a lot of experience to rely on. With what I know today, I would never, ever again choose that approach. Use 303s if you must; but please do me a favour and add that generic document, and please do me a favour and name the different variants foo.html and foo.rdf rather than page/foo and data/foo. Thanks a lot for sharing your experience with me. I will follow your advice. So if I'm going to implement what is described in section 4.2, I have to:

- serve HTML at http://www.example.org/doc/alice if text/html wins content negotiation, and set the Content-Location header to http://www.example.org/doc/alice.html
- serve RDF/XML at http://www.example.org/doc/alice if application/rdf+xml wins content negotiation, and set the Content-Location header to http://www.example.org/doc/alice.rdf
- always serve HTML at http://www.example.org/doc/alice.html
- always serve RDF/XML at http://www.example.org/doc/alice.rdf

Right? By the way: is there any defined behavior for the client regarding what to do with the Content-Location information? Do browsers take account of it? The DBpedia guys are probably stuck with my stupid design forever, because changing it now would break all sorts of links. But the thing that really kills me is how lots of newbies copy that design just because they saw it on DBpedia and therefore think that it must be good. I think the problem is not only that DBpedia uses that design, but that it is described in many examples as a possible or even cool solution, e.g. http://www4.wiwiss.fu-berlin.de/bizer/pub/LinkedDataTutorial/ (one of the first documents I stumbled upon). If we want to prevent people from using that design, it should be clarified that, and why, it is a bad choice. Kind regards and thanks for your patience, Angelo
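The four rules from section 4.2 boil down to a small dispatch. A minimal sketch in Python; the function name and return shape are mine, not part of any framework:

```python
def respond(path, negotiated_type):
    """Return (served_content_type, content_location_header) for the
    generic-document pattern of Cool URIs section 4.2.

    `negotiated_type` is the winner of content negotiation on the generic
    URI; the .html/.rdf URIs always serve their fixed format and need no
    Content-Location header.
    """
    if path.endswith('.html'):
        return ('text/html', None)
    if path.endswith('.rdf'):
        return ('application/rdf+xml', None)
    # Generic document: serve the negotiated variant and point
    # Content-Location at its format-specific URI.
    if negotiated_type == 'application/rdf+xml':
        return ('application/rdf+xml', path + '.rdf')
    return ('text/html', path + '.html')

print(respond('/doc/alice', 'application/rdf+xml'))
# → ('application/rdf+xml', '/doc/alice.rdf')
```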
Java Framework for Content Negotiation
Hello, I am just looking for a framework to do content negotiation in Java. Currently I am checking the HttpServletRequest myself, quick and dirty. Perhaps someone can recommend a framework/library that has solved this already. Thanks in advance, Angelo
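For reference, the hand-rolled check should at least honour quality values, which a naive substring test on the header misses. A quick language-neutral sketch (in Python; real libraries additionally handle wildcards and tie-breaking rules):

```python
def parse_accept(header):
    """Parse an Accept header into (media_type, q) pairs, best first.

    Simplified: q defaults to 1.0; malformed q parameters are ignored;
    wildcards like */* are treated as ordinary media types.
    """
    entries = []
    for part in header.split(','):
        pieces = part.strip().split(';')
        media_type = pieces[0].strip()
        q = 1.0
        for param in pieces[1:]:
            name, _, value = param.strip().partition('=')
            if name == 'q':
                try:
                    q = float(value)
                except ValueError:
                    pass  # keep the default on malformed input
        entries.append((media_type, q))
    return sorted(entries, key=lambda e: e[1], reverse=True)

print(parse_accept('text/html;q=0.8,application/rdf+xml'))
# → [('application/rdf+xml', 1.0), ('text/html', 0.8)]
```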
Re: Java Framework for Content Negotiation
On 20.05.2010 12:18, Michael Hausenblas wrote: There's also Jersey [1] ... +1 to Jersey - had overall very good experience with it. If you want to have a quick look (not saying it's beautiful/exciting, but it might help to kick-start things) see [1] for my hacking with it. Cheers, Michael [1] http://bitbucket.org/mhausenblas/sparestfulql/ Mmh, I have been thinking about using a REST web service already, but there is one thing I'm quite unsure about: I might have a non-information resource http://example.org/resource/foo I could place a REST web service there and do content negotiation with @GET / @Produces annotations. But this does not seem correct to me, because it is a non-information resource and not an HTML or RDF/XML document. So it should never return HTML or RDF/XML but do a 303 redirect to an information resource instead, shouldn't it? Kind regards, Angelo
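That intuition matches the usual 303 pattern: the non-information resource itself never serves a representation, only a redirect to a document about it. A framework-independent sketch in Python; the example.org URI scheme and the (status, location) return shape are hypothetical:

```python
def handle(uri):
    """A non-information resource never serves HTML or RDF itself;
    it answers 303 See Other, redirecting to a document about it.
    Returns (status, location) as a sketch of the handler's decision."""
    prefix = 'http://example.org/resource/'
    if uri.startswith(prefix):
        # Redirect to a single generic document; content negotiation
        # then happens on the document URI, not on the resource URI.
        return (303, 'http://example.org/doc/' + uri[len(prefix):])
    return (200, None)  # information resources are served directly

print(handle('http://example.org/resource/foo'))
# → (303, 'http://example.org/doc/foo')
```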
Re: Fwd: Preventing SPARQL injection
Davide Palmisano wrote: I'm not sure I well understood your problem. Anyway, it may be worth giving a look to this: http://clarkparsia.com/weblog/2010/02/03/empire-0-6/ QuerySolutionMap was exactly what I needed at the moment, but Empire seems to be very interesting beyond that. I will take a look at it some time. Thanks and kind regards, Angelo
Preventing SPARQL injection
Hi all, my name is Angelo Veltens; I'm studying computer science in Germany. I am using the Jena framework with SDB for a student research project. I'm just wondering how to prevent SPARQL injections. It seems to me that I have to build my queries from plain strings and do the sanitizing on my own. Isn't there something like prepared statements, as in SQL/JDBC? That would be less risky. Kind regards, Angelo Veltens
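The key idea, whatever the library, is to never splice raw user input into the query string: either bind values through the API (as Jena's QuerySolutionMap, mentioned in the reply, does) or escape them before concatenation. A language-neutral illustration in Python with a hypothetical escaping helper; this is a sketch of the principle, not Jena code:

```python
def escape_sparql_literal(value):
    """Escape a string for use inside a double-quoted SPARQL literal.
    Covers the common cases: backslash first, then quote and newlines."""
    replacements = [('\\', '\\\\'), ('"', '\\"'), ('\n', '\\n'), ('\r', '\\r')]
    for old, new in replacements:
        value = value.replace(old, new)
    return value

def name_query(user_input):
    # Safe: the input is escaped before it enters the query string, so a
    # payload like  Alice" } . ?s ?p ?o . { "  stays inside the literal.
    return ('SELECT ?s WHERE { ?s <http://xmlns.com/foaf/0.1/name> "%s" }'
            % escape_sparql_literal(user_input))

print(name_query('Alice" } . ?s ?p ?o . { "'))
```

Binding via the API is still preferable to hand escaping, for the same reason prepared statements beat string building in JDBC: the value never passes through the query parser as syntax.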