Re: Which datatype to use for time intervals
An interesting new bit of news on the topic today, from http://semanticweb.com/schema-org-chat-googles-r-v-guha/ (The Semantic Web Blog):

  Where do challenges still lie for schema.org?

  Guha: We have to get to the next level, to represent time, which is always a challenge in plain old RDF. And we are working with the W3C folks on trying to come up with ways to represent time.

Bob DuCharme

On 11/12/2013 10:04 AM, Thomas Kurz wrote:

Hi Lars!

Maybe this is what you are searching for: http://ceur-ws.org/Vol-665/CorrendoEtAl_COLD2010.pdf

Best regards,
Thomas

On 12.11.2013 at 15:55, Martynas Jusevičius marty...@graphity.org wrote:

Lars,

I'm using the Time ontology for this purpose: http://www.w3.org/TR/owl-time/

Martynas
graphityhq.com

On Tue, Nov 12, 2013 at 3:47 PM, Svensson, Lars l.svens...@dnb.de wrote:

Is there a standard (recommended) datatype to use when I want to specify a time interval (e.g. 2013-11-13--2013-11-14)? The XML Schema types [1] don't include a time interval format (unless you want to encode it as starting time + duration). There seems to be a way to encode it using ISO 8601: Wikipedia says that intervals can be expressed as "Start and end, such as 2007-03-01T13:00:00Z/2008-05-11T15:30:00Z" [2], but I haven't found a formally defined datatype to use with RDF data.

[1] www.w3.org/TR/xmlschema-2/
[2] http://en.wikipedia.org/wiki/ISO_8601#Time_intervals

Thanks for any help,
Lars

-
Thomas Kurz
Knowledge and Media Technologies
Salzburg Research
Tel: +43/662/2288-253
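In the absence of a formally defined interval datatype, one pragmatic option suggested by the thread is to store the ISO 8601 "start/end" form as a plain string literal and split it in application code. A minimal Python sketch (the interval value here is just an illustration, not a standardized RDF datatype):

```python
from datetime import date

# ISO 8601 "start/end" interval form, stored as a plain string literal
interval = "2013-11-13/2013-11-14"

# Split on the "/" separator into the two endpoint dates
start_str, end_str = interval.split("/")
start = date.fromisoformat(start_str)
end = date.fromisoformat(end_str)

print(start, end)  # 2013-11-13 2013-11-14
```

This keeps the literal human-readable while still letting a script recover the endpoints for comparison or sorting.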
Re: Basic OWL editor/viewer?
Hi Mike,

TopQuadrant's TopBraid Composer is the leading tool in the industry for this: http://topquadrant.com/products/TB_Composer.html. The Maestro and Standard editions offer additional levels of features such as connectivity to other data sources (e.g. Oracle) and application development, but the free edition is great for editing RDF data and OWL (and RDFS) data models. (Full disclosure: I work for TopQuadrant, but was using the free edition well before I started working for them.)

Bob

On 10/16/2012 7:50 PM, Mike Liebhold wrote:

Hello,

A colleague and I are beginning a project to investigate the roles of Internets of Things in Smart Cities. We are finding pretty decent ontologies in both domains, and want to create some hybrid representations. Neither of us is deeply skilled in SemWeb data structures, but we are looking for some simple tools to get started editing existing structures. My colleague (Scott Minneman, cc'd here) is asking: can anyone here recommend a good, basic OWL editor/viewer or other related simple tools to get started? Many thanks in advance for any pointers.

Mike

Michael Liebhold
Senior Researcher, Distinguished Fellow
Institute for the Future
@mikeliebhold @iftf
Re: SPARQL 1.1 query question
You can try this yourself with ARQ. Your query uses the property abc:hasMember, which makes sense in the context of each triple, but your data uses the property abc:hasMembers, so the query won't find them. Once those were corrected in the data and the prefixes were declared, the following query did what you wanted:

PREFIX abc: <http://www.example.com/ns/abc#>
SELECT ?league (COUNT(?member) AS ?membercount)
WHERE { ?league abc:hasMember ?member . }
GROUP BY ?league

Bob

On 10/7/2010 12:02 PM, Michael Ransom wrote:

Hello All,

I have a question about SPARQL 1.1 queries. If I have the following triples:

:LeagueA abc:hasMembers :Alice, :Bob, :Carol .
:LeagueB abc:hasMembers :Dante, :Edward .

and I want the following table of results:

?league   ?membercount
LeagueA   3
LeagueB   2

given the data and my desired results, will the following SPARQL 1.1 query work?

SELECT ?league (COUNT(?member) AS ?membercount)
WHERE {
  SELECT ?league ?member
  WHERE { ?league abc:hasMember ?member . }
  GROUP BY ?league
}

Whether or not this query works, is there a way I can write this query without a subquery? Thank you.
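To see what the GROUP BY plus COUNT combination computes, here is the same aggregation sketched in plain Python over the thread's example data (with the property name corrected to hasMember); the string identifiers stand in for the full URIs:

```python
from collections import Counter

# The thread's triples, with the property corrected to hasMember
triples = [
    ("LeagueA", "hasMember", "Alice"),
    ("LeagueA", "hasMember", "Bob"),
    ("LeagueA", "hasMember", "Carol"),
    ("LeagueB", "hasMember", "Dante"),
    ("LeagueB", "hasMember", "Edward"),
]

# GROUP BY ?league + COUNT(?member): count matching triples per subject
membercount = Counter(s for s, p, o in triples if p == "hasMember")

print(membercount)  # Counter({'LeagueA': 3, 'LeagueB': 2})
```

The grouping happens over the whole pattern's solutions, which is why no subquery is needed in the SPARQL version.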
Re: Organization ontology
Is any sample instance data available, whether it's using real or fake organizations?

thanks,

Bob
Re: Organization ontology
Dave,

Does this mean that no sample data has been created yet, or that samples used in the course of development are not data that you are free to share?

thanks,

Bob

Dave Reynolds wrote:

On Thu, 2010-06-03 at 09:29 -0400, Bob DuCharme wrote: Is any sample instance data available, whether it's using real or fake organizations?

Not yet, but there will be.

Dave
Re: Tools for transforming data to RDF
As Irene said, http://esw.w3.org/topic/ConverterToRdf is the best place to start, but I thought I'd ramble a bit about some of the broader issues.

If the data to convert is in a file, as opposed to being delivered from a server with an interface that you can write to (as D2RQ and OpenLink do for relational data), then the first step is to parse the input, so tools will be built around parsers for each input format. Any modern programming language can parse CSV easily, and most tools that advertise the ability to convert spreadsheets to RDF actually expect CSV input. (TopQuadrant's tools can read binary Excel files. Full disclosure: I work for them.)

When your input is XML (which can include HTML if you use TagSoup or Tidy to clean it up), XSLT is a popular way to create triples. This is the principle behind GRDDL (http://www.w3.org/2004/01/rdxh/spec). TopQuadrant also has a more general-purpose XML-to-RDF converter that takes the structure of the input document into account so that it can round-trip the RDF back to XML.

With plain text, something needs to identify structure within the text so that it can work out what the subjects, predicates, and objects are, and that structure depends on the needs of the application. (That actually applies to CSV and XML as well, but commas and tags give you more to go on if you understand the purpose of the input data.) Semweb meetups are seeing more interest from the Natural Language Processing community--I think the NYC semweb meetup actually has a subgroup of people dedicated to NLP issues--so there could be more interesting work coming from them in the future. Thomson Reuters Calais is the most well-known example that comes to mind of a tool that takes plain text as input and returns it with embedded RDF.

Bob

Alasdair Logan wrote:

Hey all,

I was wondering if anyone is familiar with tools to convert data into RDF triples and Linked Data. They can be for any data format, i.e. XML, CSV, plain text, etc. I'm doing this as part of a pilot study for my Master's project, so I'm just trying to get a general view of any tools used.

Thanks in advance,
Ally
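As a toy illustration of the parse-then-map pattern described above, here is a sketch that turns a small CSV table into N-Triples lines. The namespaces and the column-to-property mapping are made up for the example; a real converter would take them from configuration or a mapping vocabulary:

```python
import csv
import io

# Hypothetical input: one row per resource, first column is an identifier
csv_text = "id,name,city\n1,Alice,Boston\n2,Bob,Austin\n"

BASE = "http://example.org/id/"      # assumed subject namespace
VOCAB = "http://example.org/vocab/"  # assumed property namespace

triples = []
for row in csv.DictReader(io.StringIO(csv_text)):
    subject = f"<{BASE}{row['id']}>"
    for column in ("name", "city"):  # each column becomes a property
        triples.append(f'{subject} <{VOCAB}{column}> "{row[column]}" .')

print("\n".join(triples))
```

The same structure-to-triples mapping step is what the XSLT and NLP approaches have to do for XML and plain text, just with more work to find the structure first.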
Re: RESTful API for accessing DBpedia
Monika Solanki wrote:

I am looking for a REST based API for programmatically accessing DBpedia's SPARQL end point. Any pointers much appreciated.

A SPARQL endpoint is by its nature already a REST-based API. You send it HTTP GETs, and it returns data laid out in a specific protocol (http://www.w3.org/TR/2008/REC-rdf-sparql-protocol-20080115/). To create the URL for the GET for DBpedia, you can escape the SPARQL query (most programming languages have a function for this, but http://www.xs4all.nl/~jlpoutre/BoT/Javascript/Utils/endecode.html is nice for experiments) and append it to the following:

http://dbpedia.org/sparql?format=XML&default-graph-uri=

For example, doing this with this query

SELECT ?p ?o WHERE { <http://dbpedia.org/resource/IBM> ?p ?o }

gets you this URL, which you can paste into your browser:

http://dbpedia.org/sparql?format=XML&default-graph-uri=http%3A%2F%2Fdbpedia.org&query=SELECT%20%3Fp%20%3Fo%20%20%20WHERE%20%7B%20%3Chttp%3A%2F%2Fdbpedia.org%2Fresource%2FIBM%3E%20%3Fp%20%3Fo%20%7D

Virtuoso provides the dbpedia endpoint, so you'll see more doc on this at http://virtuoso.openlinksw.com/dataspace/dav/wiki/Main/VOSSparqlProtocol.

Or am I misunderstanding what you're looking for?

Bob
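The URL-building step described above is easy to script. A minimal Python sketch using the standard library's urlencode, which handles the escaping; the query is the IBM example from the message (the sketch only builds the URL, it doesn't fetch it):

```python
from urllib.parse import urlencode

endpoint = "http://dbpedia.org/sparql"
query = "SELECT ?p ?o WHERE { <http://dbpedia.org/resource/IBM> ?p ?o }"

# urlencode escapes each value and joins the parameters with "&"
url = endpoint + "?" + urlencode(
    {"format": "XML", "default-graph-uri": "", "query": query}
)

print(url)
```

Passing the resulting url to any HTTP client (urllib.request, curl, wget) then gives you the SPARQL results as XML.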
Re: Querying dbpedia from the command line?
Richard Cyganiak wrote:

1. SPARQL is great, but too verbose for the command line.

I don't worry about this much, because I'm not interested in using it from the command line per se as much as the ability to use a script to retrieve data from a SPARQL endpoint, and doing it from the command line is the first step toward that. Doing it from a python/perl/etc. script and loading it into data structures from these languages is the next step I want to pursue.

3. We are all waiting for SPARQL processors that federate multiple SPARQL endpoints transparently into a single endpoint. Progress is being made in this area, but it's slow. Meanwhile, there is a very nice 80/20 solution to this problem: Andy Seaborne has implemented a SERVICE keyword for his extended variant of SPARQL, which allows you to address parts of a SPARQL query to a specific endpoint. This seems like an easy win for data integration demos.

Definitely, and to help the concept of Linked Data live up to its name. I had been planning a blog post titled "Linking Linked Data" but wasn't sure how I was going to go about it. I look forward to playing with the SERVICE keyword.

Bob
Re: Querying dbpedia from the command line?
Sergio Fernández wrote:

Did you try to quote the URL?

Yes, see http://lists.w3.org/Archives/Public/public-lod/2008Sep/0032.html. It may be one of those things that has a different effect on the Windows and Linux command lines.

Bob
Re: Querying dbpedia from the command line?
Richard Cyganiak wrote:

I'm not sure if it works on every SPARQL endpoint, but try this:

curl -F 'query=SELECT * WHERE { ?s ?p ?o } LIMIT 3' http://dbpedia.org/sparql

The key is using curl's -F parameter (which takes a key-value pair and urlencodes the value), and putting the 'query=...' part into quotes.

Bingo! As written, this query didn't work in Windows and did in Linux, but when I changed the single quotes to double quotes, it worked in Windows both from the command line and from a batch file. It's also great because it's so much terser and more readable than all the escaped versions.

As a side note, I think this is going to be very big, because while Linked Data (and much of the semantic web) is theoretically about exposing data to programs instead of to eyeballs like the traditional web, most of the linked data and semantic web demos I see out there are about visual browsing of linked data--displaying it to eyeballs. When we can grab the results of a linked data SPARQL query with a script, then we can really start doing new and interesting things with it.

Bob
Querying dbpedia from the command line?
Has anyone managed to pass a URL with a SPARQL query to wget or curl and successfully retrieved data from dbpedia? When I take the wget example at http://groups.yahoo.com/group/govtrack/message/570 and paste the query at dbpedia's SNORQL interface, it works [1], and when I paste the wget command on that page without any carriage returns onto a (Windows) command line [2], it doesn't get any errors, which certainly felt like a minor victory, but the result set was empty. Any suggestions?

thanks,

Bob

[1] http://dbpedia.org/snorql/?query=PREFIX+db%3A%0D%0A%3Chttp%3A%2F%2Fdbpedia.org%2Fproperty%2F%3E%0D%0ASELECT+*+WHERE+{%0D%0A%3Fcity+db%3AleaderName+%3Fleader+%3B%0D%0Adb%3AsubdivisionName+%3Fsubdiv+%3B%0D%0Adb%3Aelevation+%3Felevation+.%0D%0A}+LIMIT+5

[2] wget -O temp.txt http://DBpedia.org/sparql?query=PREFIX dbpedia2: <http://dbpedia.org/property/> PREFIX skos: <http://www.w3.org/2004/02/skos/core#> SELECT ?presName, ?birthday, ?startDate WHERE { ?presName skos:subject <http://dbpedia.org/resource/Category:Presidents_of_the_United_States>. ?presName dbpedia2:birth ?birthday. ?presName dbpedia2:presidentStart ?startDate. }
Re: Querying dbpedia from the command line?
Bob DuCharme [EMAIL PROTECTED] wrote:

Has anyone managed to pass a URL with a SPARQL query to wget or curl and successfully retrieved data from dbpedia?

Peter Ansell wrote:

You need to URLEncode the SPARQL query. I am not sure how to do that with a command line or wget, but that is what the browser has done to the query when it submitted the form using HTTP GET.

I had tried several variations on that. (First, I realized that [2] in my original email had the wrong URL; it should have said the following, without carriage returns:)

wget -O temp.txt http://DBpedia.org/sparql?query=PREFIX db: <http://dbpedia.org/property/> SELECT * WHERE { ?city db:leaderName ?leader ; db:subdivisionName ?subdiv ; db:elevation ?elevation . } LIMIT 5

Using the very handy escape/unescape page at http://www.xs4all.nl/~jlpoutre/BoT/Javascript/Utils/endecode.html, I made this escaped version:

wget -O temp.txt http://DBpedia.org/sparql?query%3D%22PREFIX%20db%3A%3Chttp%3A%2F%2Fdbpedia.org%2Fproperty%2F%3E%20SELECT%20*%20WHERE%20%7B%20%3Fcity%20db%3AleaderName%20%3Fleader%20%3B%20db%3AsubdivisionName%20%3Fsubdiv%20%3B%20db%3Aelevation%20%3Felevation%20.%20%7D%20LIMIT%205%22

and when trying to run it I just got error messages, whether I quoted the URL or put the line in a batch file and tried it from there. (At least the unescaped version gave me well-formed XML as a response, even if the result set was empty.)

After trying several more combinations of escaped vs. unescaped, wget vs. curl, quoting vs. not quoting, Windows vs. Linux, and using the command directly from the command line vs. putting it in a batch file or shell script and calling that, I just got it to work calling the unescaped version with wget from a shell script on a Linux box. I'd still like to hear about any luck others might have had.

Bob
Re: fans of Linked Open Data and the NFL, NHL, or Major League Baseball?
Kingsley Idehen wrote:

Today we have ESPN and co. offering analysis via the traditional one-way TV medium. Tomorrow, I envisage a conversation space connected by analytic insights from Joe Public the analyst. Also, what's good for sports applies to Politics, Finance, Soap Operas, and other realms.

What I'd really like to do as a next step is to identify a Linked Open Data advocate who happens to watch a lot of one of the sports for which more data is becoming available, and then encourage that person to get in touch with one of the stats-oriented communities within that sport to help make these stats available as a SPARQL endpoint. If it was soccer, Uche would be an obvious candidate, but judging by http://www.sportsstandards.org/oc the NHL, NFL, or MLB seem to be the best places to start. At the next Boston/Cambridge semweb/LOD meetup, I suggest you keep your eye out for Red Sox, Bruins, or Pats-themed clothing...

Bob
fans of Linked Open Data and the NFL, NHL, or Major League Baseball?
One thing missing from Richard Cyganiak's Linking Open Data Set Cloud interactive diagram at http://richard.cyganiak.de/2007/10/lod/ (which every other presentation at Linked Data Planet included in their slides) is some sports data. If people are arguing about numbers in a bar anywhere in the world, they're probably sports-related numbers, and open sports data that anyone can query and manipulate would be a great contribution to the LOD movement.

I emailed an old New York XML friend, Alan Karben (founder of http://www.xmlteam.com), to ask about publicly available sports data, and he pointed me to http://www.sportsstandards.org/oc. For now, the NHL link is broken, and I've mentioned this to him. Meanwhile, there are apparently some organized efforts to make large amounts of NFL and Major League Baseball data available to the public, so it would be a nice project for anyone who's a big fan of LOD and one of these sports to work toward creating a SPARQL endpoint for one of these data sets. As http://www.sportsstandards.org/oc makes clear, they'd be more than happy to publicize data-gathering efforts for other sports.

Bob DuCharme
www.snee.com/bobdc.blog