Web Based RDFa Editor

2009-05-16 Thread richard . hancock
Hi all,

I am looking for an RDFa XHTML Editor to embed in Drupal and Wordpress.

Currently I am trialling WYMeditor ( http://www.wymeditor.org/ ) in
Drupal. WYMeditor is based on JQuery and provides an RDFa proof of concept
http://files.wymeditor.org/wymeditor/trunk/src/examples/15-rdfa-editor.html.

If anyone has done something similar, i.e. embedded an RDFa XHTML editor in
Drupal or WordPress, I'd be interested in hearing how well it has worked,
whether using WYMeditor or an alternative.

Cheers,

Richard Hancock

Blog: http://blog.3kbo.com





Re: bootstrapping decentralized sparql

2009-05-16 Thread Peter Ansell
2009/5/17 Giovanni Tummarello :
>> for graphs which use a (specific) FOAF term.  It's a bit like
>> PingTheSemanticWeb or Sindice, but decentralized based on the ontologies
>> used.
>
> []
>
> Isn't this like saying "why set up an infrastructure with
> professional developers and administrators, in a backed-up, UPSed
> datacenter ...
> ... when you can ask each ontology creator to do the same?"

Why spend resources on creation and upkeep of a triple store and
search service when an ontology creator can distribute the information
more efficiently using references to endpoints in their ontologies?

>> The result here will be that a query for a foaf:Person with a
>> foaf:firstName of "Sandro" can be *complete*, at least across all graphs
>> which choose to register themselves as having data about instances of
>> the foaf:Person class and triples using the foaf:firstName property.
>
> please explain how this that you describe is different from what's
> possible already
>
> http://sindice.com/search?q=foaf%3Aname&qv=Sandro+Hawke&qt=ifp

It doesn't rely on a single search engine. What happens when the RDF
web equivalents of Google go down? ;) Granted, keeping ontology
authors in the loop for distributing query information is not perfect,
but it shows there are other approaches to the discovery and query
federation problem than just brute force and large data stores.

Another alternative to predicate-based query federation is the URI
prefix solution I have been developing for the Bio2RDF server
(although it is applicable to any domain). The set of providers at [1]
can be used in a similar way to the predicate and type federations
Sandro describes here. If you configure queries in [1] without regard
to the URI prefixes, the result is similar to predicate-based
distribution systems. The configuration information is also
lightweight enough to be distributed easily.

For almost any query that is resolvable with the Bio2RDF server, you
can fetch the instructions using URIs like [2] and perform the query
resolution yourself using the same algorithm the server would have
used. The process is simply repeated for any RDF URIs that come back
from [2] with the rdf:type [3]. Having several servers that provide
RDF query planning, without needing to perform any resolution or data
storage themselves, might be a good step towards common federation
models in a lightweight manner.

Cheers,

Peter

[1] http://qut.bio2rdf.org/admin/configuration/rdfxml
[2] http://qut.bio2rdf.org/queryplan/pageoffset1/links/geneid:11234
[3] http://bio2rdf.org/ns/querybundle#QueryBundle
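The client-side resolution loop above can be sketched in a few lines of
Python. This is a toy, not Bio2RDF's actual code: fetch() stands in for
dereferencing a /queryplan/ URI like [2] and parsing the returned RDF into
triples, and the urn:ex:endpoint predicate and the toy plan data are
invented for illustration; only the QueryBundle type URI comes from [3].

```python
# Sketch of client-side query-plan resolution. Assumptions: fetch() is a
# stand-in for HTTP GET + RDF parsing; the urn:ex:endpoint predicate and
# the toy triples are hypothetical. Only QUERY_BUNDLE is from [3].

RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"
QUERY_BUNDLE = "http://bio2rdf.org/ns/querybundle#QueryBundle"

def fetch(uri):
    """Stand-in for dereferencing a /queryplan/ URI and parsing the RDF
    into (subject, predicate, object) triples."""
    return [
        ("urn:plan:bundle1", RDF_TYPE, QUERY_BUNDLE),
        ("urn:plan:bundle1", "urn:ex:endpoint", "http://example.org/sparql"),
        ("urn:plan:other", "urn:ex:endpoint", "http://example.org/ignored"),
    ]

def plan_endpoints(plan_uri):
    """Collect the endpoints attached to every resource typed as a
    QueryBundle; the client then runs the query against those itself."""
    triples = fetch(plan_uri)
    bundles = {s for (s, p, o) in triples
               if p == RDF_TYPE and o == QUERY_BUNDLE}
    return [o for (s, p, o) in triples
            if s in bundles and p == "urn:ex:endpoint"]

print(plan_endpoints(
    "http://qut.bio2rdf.org/queryplan/pageoffset1/links/geneid:11234"))
# → ['http://example.org/sparql']
```

The point is that the plan server only ships instructions; the data fetching
and joining happen on the client.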



Re: bootstrapping decentralized sparql

2009-05-16 Thread Giovanni Tummarello
> for graphs which use a (specific) FOAF term.  It's a bit like
> PingTheSemanticWeb or Sindice, but decentralized based on the ontologies
> used.

[]

Isn't this like saying "why set up an infrastructure with
professional developers and administrators, in a backed-up, UPSed
datacenter ...
... when you can ask each ontology creator to do the same?"


> The result here will be that a query for a foaf:Person with a
> foaf:firstName of "Sandro" can be *complete*, at least across all graphs
> which choose to register themselves as having data about instances of
> the foaf:Person class and triples using the foaf:firstName property.

please explain how this that you describe is different from what's
possible already

http://sindice.com/search?q=foaf%3Aname&qv=Sandro+Hawke&qt=ifp

Giovanni



Re: [ANN] Linking Open Data Triplification Challenge 2009

2009-05-16 Thread Kingsley Idehen

Michael Hausenblas wrote:

All,

We'd like to draw your attention to the second edition of the Linking Open
Data Triplification Challenge, again collocated with the I-Semantics [1]
conference. 


Please see http://triplify.org/Challenge/2009 for details on submission and
prizes. Note that the submission deadline is 9 August 2009.

With the recent uptake of structured data/RDF by major players such as
Google, the motivation for exposing relational data and other structured data
sources on the Web has entered a new stage. We encourage participants to
publish existing structured (relational) representations, which already back
most existing Web sites, and to demonstrate useful and usable applications
on top of them.

The challenge awards attractive prizes (a MacBook Air or 1000€, among
others) to the most innovative and promising triplifications. We thank our
sponsors Ontos AG [2] and Punkt.NetServices [3] for supporting this
challenge.

Cheers, 
   Michael (on behalf of the Organizing Committee)


[1] http://www.i-semantics.at
[2] http://www.ontos.com/
[3] http://www.punkt.at/

  

Michael et al.,

Note that it is now possible to produce a Data Source Ontology and
Instance Data from ODBC- and JDBC-accessible data sources using a
wizard hosted in the Virtuoso Conductor (Admin UI); I'll make a screencast
demonstrating this feature very soon. You don't have to write a single
line of code; it's now completely automated.


The steps are as follows (assuming your SQL data isn't Virtuoso
hosted, i.e. is external):


1. Use the Virtual Database feature to Link any ODBC or JDBC accessible 
DBMS into Virtuoso

2. Go to the RDF Schema Tab
3. Select the list of tables or views that you seek to expose in RDF 
Linked Data form

4. Click "Generate"
5. Answer questions presented re. deployment endpoint
6. Done!

You will end up with the following:

1. Data Source Ontology (with de-referencable URIs for Classes and 
Properties) derived from the Relational Schema

2. Instance Data for the Data Source Ontology
3. Re-write rules for handling resource and resource description 
disambiguation aspect of Linked Data URI de-referencing.


RDF Views still provide significant performance advantages over RDF
stores when the source is relational. Also note that you can further
finesse the generated ontology (e.g., mesh it with others) and the
instance data scripts for more complex mappings.



Also, note the following about Virtuoso's RDF Views:

1. They are based on SQL for generating the data used in the RDF Views.
2. They are based on SPARQL for the actual graph-pattern declarations.

We describe the above as SPASQL-based because the SPARQL inside the
SQL is processed by the SQL processor (i.e. you can execute these
constructs via ODBC, JDBC, or ADO.NET sessions against Virtuoso instances).
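For illustration, a SPASQL call is just an ordinary SQL statement whose
body is SPARQL, issued over any ODBC/JDBC/ADO.NET session; the graph IRI
below is hypothetical:

```sql
-- Routed to the SPARQL processor by the leading SPARQL keyword;
-- the rest is a normal SPARQL query. The graph name is hypothetical.
SPARQL
SELECT ?s ?p ?o
FROM <http://example.org/my-rdf-view>
WHERE { ?s ?p ?o }
LIMIT 10;
```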



You can also use the Sponger to produce RDF Linked Data from SQL and 
other HTTP accessible data sources [2][3].


Links:

1. http://virtuoso.openlinksw.com/dataspace/dav/wiki/Main/VOSSQL2RDF
2. http://virtuoso.openlinksw.com/dataspace/dav/wiki/Main/VirtSponger
3. 
http://virtuoso.openlinksw.com/Whitepapers/html/VirtSpongerCartridgeProgrammersGuide.html


--


Regards,

Kingsley Idehen   Weblog: http://www.openlinksw.com/blog/~kidehen
President & CEO 
OpenLink Software Web: http://www.openlinksw.com








Re: bootstrapping decentralized sparql

2009-05-16 Thread Kingsley Idehen

Sandro Hawke wrote:

The interesting question is: can we have stateless SPARQL servers that
distribute the query to other SPARQL servers, and what metadata do they
need to do that well? I guess voiD is supposed to address that; I don't
know how well it does it, etc. (I haven't had a chance to follow this
work much recently.)
  
Yes, VoiD graphs cover that. The thing we need to do is standardize the 
auto-discovery patterns so that smart federated SPARQL is feasible :-)


Example of a VoiD graph: http://lod.openlinksw.com/void/Dataset .
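As a pointer for anyone unfamiliar with VoiD, a dataset description is
just a small RDF graph; a minimal sketch in Turtle (the dataset and
endpoint URIs are hypothetical; the void: terms are from the VoiD
vocabulary):

```turtle
@prefix void:    <http://rdfs.org/ns/void#> .
@prefix dcterms: <http://purl.org/dc/terms/> .

# Hypothetical dataset URI and endpoint, for illustration only.
<http://example.org/dataset/mydata>
    a void:Dataset ;
    dcterms:title "My Dataset" ;
    void:sparqlEndpoint <http://example.org/sparql> ;
    void:vocabulary <http://xmlns.com/foaf/0.1/> .
```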



Thanks.  Yeah, I looked at VoiD, briefly, after we talked about it
Tuesday, although I don't fully understand it.

But I think I'm picturing something a little different.  (I think.)  The
key part I'm imagining is back-links (or track-backs).  I think folks
who publish ontologies ought, generally, to keep track (on a voluntary,
automatic, delegated basis) of who is using them.

For example, I suggest the RDF graph at "http://xmlns.com/foaf/0.1/"
(which introduces all the FOAF terms) should include some triples like:
  <> rx:tracker 
  <> rx:tracker 

... and those two trackers should be (REST) services where folks can
report a graph which uses a (specific) FOAF term and folks also query
for graphs which use a (specific) FOAF term.  It's a bit like
PingTheSemanticWeb or Sindice, but decentralized based on the ontologies
used.
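Spelled out in Turtle, the idea might look like this (the rx: namespace
and the tracker URIs are hypothetical, since no such vocabulary exists
yet):

```turtle
@prefix rx: <http://example.org/ns/rx#> .

# Published inside the FOAF ontology document itself;
# <> denotes that document.
<> rx:tracker <http://tracker-a.example.org/foaf> ,
              <http://tracker-b.example.org/foaf> .
```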

Obviously there are some scaling details to work out, but my sense is
it's generally doable.  It may be that some terms (like rdf:type) are
too common to be worth indexing.  And some sites will have complex,
dynamic graph structures and will want to make sure they are registered
properly.  (For instance, livejournal should probably register one
SPARQL endpoint instead of its 10+ million dynamically-generated foaf
files.)

The result here will be that a query for a foaf:Person with a
foaf:firstName of "Sandro" can be *complete*, at least across all graphs
which choose to register themselves as having data about instances of
the foaf:Person class and triples using the foaf:firstName property.

I think running the tracker for an ontology should fundamentally be the
responsibility of the ontology hosters/maintainers (e.g. Dan and Libby for
FOAF), although I would expect there to be public tracking services, so
all they really have to do is sign up with one or more and point at them
with some rx:tracker triples.

(My apologies if someone has already proposed this, or even built it.  I
can't come close to following everything going on.)
  
What you suggest and where this is ultimately heading are in sync. We
just need to make a loose federation of SPARQL endpoints that expose
stats about what they have part of the eventual solution. From this,
we can build a federation of lookup and sync services (the RDFSync
protocol has been lying in wait for a while now). Thus, rest assured
that what you describe above will be part of the final solution, before
the standardization process commences :-)


Also note that SPARQL endpoints can be discovered via DNS [1]. We need to
be able to discover, describe, and then sync stats across data spaces.


Links:

1. http://blogs.talis.com/nodalities/2009/04/discovering-sparql.php

Kingsley

  -- Sandro

  



--


Regards,

Kingsley Idehen   Weblog: http://www.openlinksw.com/blog/~kidehen
President & CEO 
OpenLink Software Web: http://www.openlinksw.com








Re: [ANN] Linking Open Data Triplification Challenge 2009

2009-05-16 Thread Ian Davis
On Sat, May 16, 2009 at 10:03 AM, Michael Hausenblas <
michael.hausenb...@deri.org> wrote:


> With the recent uptake of structured data/RDF by major players such as
> Google, the motivation for exposing relational data and other structured
> data sources on the Web has entered a new stage. We encourage participants
> to publish existing structured (relational) representations, which already
> back most existing Web sites, and to demonstrate useful and usable
> applications on top of them.
>
>
Entrants to the competition might find the Talis Connected Commons scheme
useful. It provides free hosting and services such as full-text search,
faceting, and SPARQL for public-domain datasets of up to 50 million triples.

See http://www.talis.com/platform/cc/ for details

Ian


[ANN] Linking Open Data Triplification Challenge 2009

2009-05-16 Thread Michael Hausenblas

All,

We'd like to draw your attention to the second edition of the Linking Open
Data Triplification Challenge, again collocated with the I-Semantics [1]
conference. 

Please see http://triplify.org/Challenge/2009 for details on submission and
prizes. Note that the submission deadline is 9 August 2009.

With the recent uptake of structured data/RDF by major players such as
Google, the motivation for exposing relational data and other structured data
sources on the Web has entered a new stage. We encourage participants to
publish existing structured (relational) representations, which already back
most existing Web sites, and to demonstrate useful and usable applications
on top of them.

The challenge awards attractive prizes (a MacBook Air or 1000€, among
others) to the most innovative and promising triplifications. We thank our
sponsors Ontos AG [2] and Punkt.NetServices [3] for supporting this
challenge.

Cheers, 
   Michael (on behalf of the Organizing Committee)

[1] http://www.i-semantics.at
[2] http://www.ontos.com/
[3] http://www.punkt.at/

-- 
Dr. Michael Hausenblas
DERI - Digital Enterprise Research Institute
National University of Ireland, Lower Dangan,
Galway, Ireland, Europe
Tel. +353 91 495730
http://sw-app.org/about.html
http://webofdata.wordpress.com/