One thing I have done is write a set of JUnit integration tests in which the queries are embedded, so you can check that each query returns the right answers.
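A minimal sketch of what such a test can look like, assuming Apache Jena and JUnit 4 on the classpath (the vocabulary namespace and class name below are made up for illustration):

```java
import static org.junit.Assert.assertEquals;

import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.ResultSet;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.vocabulary.RDF;
import org.junit.Test;

public class VocabularyQueryTest {

    private static final String SKOS = "http://www.w3.org/2004/02/skos/core#";

    // One of the "typical" queries, embedded directly in the test.
    private static final String CONCEPT_COUNT_QUERY =
        "PREFIX skos: <" + SKOS + "> " +
        "SELECT (COUNT(?c) AS ?n) WHERE { ?c a skos:Concept }";

    @Test
    public void typicalQueryReturnsExpectedCount() {
        // Build a small in-memory model; in a real project you would
        // instead read() the vocabulary file shipped with the repository.
        Model model = ModelFactory.createDefaultModel();
        Resource conceptClass = model.createResource(SKOS + "Concept");
        model.add(model.createResource("http://example.org/vocab#Cat"),
                  RDF.type, conceptClass);
        model.add(model.createResource("http://example.org/vocab#Dog"),
                  RDF.type, conceptClass);

        try (QueryExecution qe =
                 QueryExecutionFactory.create(CONCEPT_COUNT_QUERY, model)) {
            ResultSet results = qe.execSelect();
            int n = results.next().getLiteral("n").getInt();
            // The test documents the expected answer for the query.
            assertEquals(2, n);
        }
    }
}
```

Tests like this double as executable documentation: anyone who checks out the repository can see the queries next to the answers they are expected to produce.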
On Sun, Apr 26, 2015 at 4:14 PM, Neubert, Joachim <j.neub...@zbw.eu> wrote:

> Hi Niklas,
>
> Github (and similar services) offer a great platform to publish
> vocabularies and queries, particularly if they are evolving in sync and
> are backed with a corresponding endpoint. How we managed to complement
> this with a "SPARQL-IDE", which allows people to experiment with queries
> and immediately see the results, is described here:
>
> http://zbw.eu/labs/en/blog/publishing-sparql-queries-live
>
> The approach is used extensively in the skos-history project
> (https://github.com/jneubert/skos-history).
>
> Cheers, Joachim
>
> > -----Original Message-----
> > From: Niklas Petersen [mailto:peter...@cs.uni-bonn.de]
> > Sent: Sunday, April 26, 2015 1:01 PM
> > To: semantic-...@w3.org; public-lod@w3.org
> > Subject: Best practices on how to publish SPARQL queries?
> >
> > Hi all,
> >
> > I am currently developing a vocabulary which has "typical" queries
> > related to it. I am wondering if there exist any "best practices" for
> > publishing them together with the vocabulary?
> >
> > The best practices on publishing Linked Data [1] only focus on the
> > endpoints, not on the queries.
> >
> > Has anyone else been in that situation?
> >
> > Best regards,
> > Niklas Petersen
> >
> > [1] http://www.w3.org/TR/ld-bp/#MACHINE
> >
> > --
> > Niklas Petersen,
> > Organized Knowledge Group @Fraunhofer IAIS, Enterprise Information
> > Systems Group @University of Bonn.

--
Paul Houle
*Applying Schemas for Natural Language Processing, Distributed Systems,
Classification and Text Mining and Data Lakes*
(607) 539 6254
paul.houle on Skype
ontolo...@gmail.com
https://legalentityidentifier.info/lei/lookup