[Fwd: 2nd CFP: ISWC'09 workshop on Ontology Matching (OM-2009)]

2009-07-08 Thread Dan Brickley


I don't normally forward conference CFPs, but it seems it would be 
useful to build some links with this community. Aw crap, can't believe I 
typed that. But you know what I mean...


Dan

-------- Original Message --------
Subject: 2nd CFP: ISWC'09 workshop on Ontology Matching (OM-2009)
Date:   Wed, 8 Jul 2009 09:28:34 +0200
From:   Pavel Shvaiko pa...@dit.unitn.it
To: pavel.shva...@infotn.it



Apologies for cross-postings

--
CALL FOR PAPERS
--


The Fourth International Workshop on
ONTOLOGY MATCHING
(OM-2009)
http://om2009.ontologymatching.org/
October 25, 2009, ISWC'09 Workshop Program, Fairfax, near Washington
DC, USA


BRIEF DESCRIPTION AND OBJECTIVES
Ontology matching is a key interoperability enabler for the Semantic Web,
as well as a useful tactic in some classical data integration tasks.
It takes ontologies as input and determines as output an alignment,
that is, a set of correspondences between the semantically
related entities of those ontologies. These correspondences can be used
for various tasks, such as ontology merging and data translation.
Thus, matching ontologies enables the knowledge and data expressed
in the matched ontologies to interoperate.

The workshop has three goals:
1. To bring together leaders from academia, industry and user institutions
to assess how academic advances are addressing real-world requirements.
The workshop will strive to improve academic awareness of industrial
and end-user needs, and thereby direct research towards those needs.
Simultaneously, the workshop will serve to inform industry and user
representatives about existing research efforts that may meet their
requirements. The workshop will also investigate how ontology
matching technology is going to evolve.

2. To conduct an extensive and rigorous evaluation of ontology matching
approaches through the OAEI (Ontology Alignment Evaluation Initiative)
2009 campaign: http://oaei.ontologymatching.org/2009/
This year's OAEI campaign introduces two new tracks about
oriented alignments and about instance matching (a timely topic for
the linked data community). Therefore, the ontology matching evaluation
initiative itself will provide a solid ground for discussion of how well
the current approaches are meeting business needs.

3. To examine similarities and differences from database schema matching,
which has received decades of attention but is just beginning to transition
to mainstream tools.


TOPICS of interest include but are not limited to:
Business cases for matching;
Requirements for matching from specific domains;
Application of matching techniques in real-world scenarios;
Formal foundations and frameworks for ontology matching;
Large-scale ontology matching evaluation;
Performance of matching techniques;
Matcher selection and self-configuration;
Uncertainty in ontology matching;
User involvement (including both technical and organizational aspects);
Explanations in matching;
Social and collaborative matching;
Alignment management;
Reasoning with alignments;
Matching for traditional applications (e.g., information integration);
Matching for dynamic applications (e.g., peer-to-peer, web-services).



SUBMISSIONS
Contributions to the workshop can be made as technical papers and
posters/statements of interest addressing different issues of ontology
matching, as well as by participating in the OAEI 2009 campaign. Technical
papers should be no longer than 12 pages, using the LNCS style:
http://www.springeronline.com/sgw/cda/frontpage/0,11855,5-164-2-72376-0,00.html
Posters/statements of interest should not exceed 2 pages and
should be handled according to the guidelines for technical papers.
All contributions should be prepared in PDF format and should be submitted
through the workshop submission site at:

http://www.easychair.org/conferences/?conf=om20090

Contributors to the OAEI 2009 campaign have to follow the campaign
conditions and schedule at http://oaei.ontologymatching.org/2009/.


IMPORTANT DATES FOR TECHNICAL PAPERS:
August 11, 2009: Deadline for the submission of papers.
September 6, 2009: Deadline for the notification of acceptance/rejection.
October 2, 2009: Workshop camera ready copy submission.
October 25, 2009: OM-2009, Westfields Conference Center, Fairfax, near
Washington DC, USA.


ORGANIZING COMMITTEE
1. Pavel Shvaiko (Main contact)
TasLab, Informatica Trentina SpA, Italy

2. Jérôme Euzenat
INRIA & LIG, France

3. Fausto Giunchiglia
University of Trento, Italy

4. Heiner Stuckenschmidt
University of Mannheim, Germany

5. Natasha Noy
Stanford Center for Biomedical Informatics Research, USA

6. Arnon Rosenthal
The MITRE Corporation, USA


PROGRAM COMMITTEE
Yuan An, Drexel University, USA
Zohra Bellahsene, LIRMM, France
Paolo Besana, University of Edinburgh, UK
Olivier Bodenreider, National Library of Medicine, USA

Re: Minting clean URLs to be hosted at a 3rd party

2009-07-08 Thread Leigh Dodds
Hi Christopher,

2009/7/7 Christopher St John ckstj...@gmail.com:
 On a vaguely similar topic as the .htaccess discussion, I ran into
 a problem using a third party service to host some triples. I want
 my URLs to be pretty, and based on a domain that I control.
 Something like:

  http://nrhpdata.daytripr.com/site/72001552

 (which describes a United States National Register of Historic
 Places site.) But when hosting at Talis[1] (or anywhere but
 nrhpdata.daytripr.com) I end up with something like:

  http://api.talis.com/stores/ckstjohn-dev1/meta?about=http://nrhpdata.daytripr.com/site/72001552

The describe service (/meta?about=...) provides a default linked data
description for any URI that is mentioned in a store. I agree that you
don't necessarily want to expose the mechanism for generating the RDF,
so generally speaking you'll want to hide this from consumers.

The approach that we, and some of the other people who are hosting data
with us, have taken is to proxy the URLs through from a more specific
domain. This is relatively easy to do, although it obviously requires
some technical smarts to set things up. We should shortly be publishing
some recipes to illustrate how that's done in a number of ways.
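As a rough sketch (assuming mod_rewrite and mod_proxy are enabled, and
reusing the store name from your example), the virtual host for
nrhpdata.daytripr.com might look something like this:

    # Hypothetical virtual host: proxy the pretty URIs through to the
    # store's describe service, so only nrhpdata.daytripr.com is visible.
    <VirtualHost *:80>
        ServerName nrhpdata.daytripr.com
        RewriteEngine On
        RewriteRule ^/site/(.*)$ http://api.talis.com/stores/ckstjohn-dev1/meta?about=http://nrhpdata.daytripr.com/site/$1 [P,L]
    </VirtualHost>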

We're also planning to roll out domain hosting to help further lower
the barrier to publishing linked data (in fact we set one of these up
for a customer yesterday).

The redirector also looks like a useful service!

Let me know when you're ready to move the data to the Connected Commons :)

Cheers,

L.

-- 
Leigh Dodds
Programme Manager, Talis Platform
Talis
leigh.do...@talis.com
http://www.talis.com



Re: .htaccess a major bottleneck to Semantic Web adoption / Was: Re: RDFa vs RDF/XML and content negotiation

2009-07-08 Thread Martin Hepp (UniBW)

Google has just changed the wording of the documentation:

http://knol.google.com/k/google-rich-snippets/google-rich-snippets/32la2chf8l79m/1#

The mention of the cloaking risk has been removed. While this is not
final clearance, it is a nice sign that our concerns are heard.

Best
Martin


Martin Hepp (UniBW) wrote:

Dear all:
Fyi - I am in contact with Google regarding the clarification of what
kind of empty div/span elements are considered acceptable in the
context of RDFa. It may take a few days to get an official statement.
Just so that you know it is being taken care of...


Martin



Mark Birbeck wrote:

Hi Martin,

 
b) download RDFa snippet that just represents the RDF/XML content
(i.e. such that it does not have to be consolidated with the
presentation-level part of the Web page).



By coincidence, I just read this:

  Hidden div's -- don't do it!
  It can be tempting to add all the content relevant for a rich snippet
  in one place on the page, mark it up, and then hide the entire block
  of text using CSS or other techniques. Don't do this! Mark up the
  content where it already exists. Google will not show content from
  hidden div's in Rich Snippets, and worse, this can be considered
  cloaking by Google's spam detection systems. [1]

Regards,

Mark

[1] 
http://knol.google.com/k/google-rich-snippets/google-rich-snippets/32la2chf8l79m/1# 



  




--
--
martin hepp
e-business & web science research group
universitaet der bundeswehr muenchen

e-mail:  mh...@computer.org
phone:   +49-(0)89-6004-4217
fax: +49-(0)89-6004-4620
www: http://www.unibw.de/ebusiness/ (group)
http://www.heppnetz.de/ (personal)
skype:   mfhepp 
twitter: mfhepp


Check out the GoodRelations vocabulary for E-Commerce on the Web of Data!


Webcast:
http://www.heppnetz.de/projects/goodrelations/webcast/

Talk at the Semantic Technology Conference 2009:
"Semantic Web-based E-Commerce: The GoodRelations Ontology"

http://tinyurl.com/semtech-hepp

Tool for registering your business:
http://www.ebusiness-unibw.org/tools/goodrelations-annotator/

Overview article on Semantic Universe:
http://tinyurl.com/goodrelations-universe

Project page and resources for developers:
http://purl.org/goodrelations/

Tutorial materials:
Tutorial at ESWC 2009: The Web of Data for E-Commerce in One Day: A Hands-on 
Introduction to the GoodRelations Ontology, RDFa, and Yahoo! SearchMonkey

http://www.ebusiness-unibw.org/wiki/GoodRelations_Tutorial_ESWC2009







Re: .htaccess a major bottleneck to Semantic Web adoption / Was: Re: RDFa vs RDF/XML and content negotiation

2009-07-08 Thread Pat Hayes


On Jul 5, 2009, at 10:16 AM, Hugh Glaser wrote:


OK, I'll have a go :-)
Why did I think this would be fun to do on a sunny Sunday morning  
that has turned into afternoon?

Here are the instructions:



And here is why I cannot follow them.



1.  Create a web-accessible directory, let's say foobar, with all  
your .rdf, .ttl, .ntriples and .html files in it.

2.  Copy lodpub.php and path.php into it.


OK so far...


3.  Access path.php from your web server.


I can see this file, but I cannot access it. Attempting to do so gives  
me the message


"Can not open file .htaccess
Reason: Could not download file (403: HTTP/1.1 403 Forbidden)"

I have checked with my system admin, and they tell me, "Yes, that is
correct. You cannot access your .htaccess file. You cannot modify it
or paste anything into it. Only we have access to it. No, we will not
change this policy for you, no matter how important you think you are."
Although they do not say it openly, the implicit message is, "we don't
give a damn what the W3C thinks you ought to be able to do on our
website."


Now, has anyone got any OTHER ideas?  An idea that does not involve  
changing any actual code, and so can be done using a text editor on an  
HTML text file, would be a very good option.


Pat Hayes



4.  Follow the instruction to paste that text into .htaccess
5.  You can remove path.php if you like, it was only there to help  
you get the .htaccess right.


That should be it.
The above text and files are at
http://www.rkbexplorer.com/blog/?p=11

Of course, I expect that you can tell me all sorts of problems/ 
better ways, but I am hoping it works for many.


Some explanation:
We use a different method, and I have tried to extract the essence,  
and keep the code very simple.
We trap all 404 (File not Found) in the directory, and then any  
requests coming in for non-existent files will generate a 303 with  
an extension added, depending on the Accept header.
Note that you probably need the leading "/" followed by the full
path from the domain root, otherwise it will just print out the text
"lodpub.php"
(that is not what the Apache specs seem to say, but it is what seems
to happen).
If you get "Additionally, a 404 Not Found error was encountered
while trying to use an ErrorDocument to handle the request.", then
it means that the web server is not finding your ErrorDocument.
Put the file path.php in the same directory and point your browser  
at it - this will tell you what the path should be.
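As a concrete sketch (assuming your directory really is served at
/foobar/, as in step 1), the text you paste amounts to a one-liner:

    # Trap 404s in this directory and hand them to the conneg script,
    # which replies with a 303 to the right .rdf/.ttl/.html variant.
    ErrorDocument 404 /foobar/lodpub.php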


Note that the httpd.conf (in /etc/httpd/conf) may not let you
override, if your admins have tied things down really tight.

Mine says:
   AllowOverride All

Finally, note that at the moment I think the Apache default does
not put the correct MIME type on .rdf files, but that is a separate
issue, and it makes no difference to whether the 303 happens.


Best
Hugh

On 05/07/2009 01:52, Pierre-Antoine Champin swlists-040...@champin.net wrote:



On 03/07/2009 15:14, Danny Ayers wrote:

2009/7/2 Bill Roberts b...@swirrl.com:
I thought I'd give the .htaccess approach a try, to see what's
involved in actually setting it up. I'm no expert on Apache, but I
know the basics of how it works, I've got full access to a web server,
and I can read the online Apache documentation as well as the next person.


I've tried similar, even stuff using PURLs - incredibly difficult to
get right. (My downtime overrides all, so I'm not even sure if I got
it right in the end)

I really think we need a (copy & paste) cheat sheet.

Volunteers?


(raising my hand) :)*

Here is a quick python script that makes it easier (if not completely
immediate). It may still require a one-liner .htaccess, but one that
(I think) is authorized by most webmasters.

I guess a PHP version would not even require that .htaccess, but
sorry, I'm not fluent in PHP ;)

So, assuming you want to publish a vocabulary with an RDF and an HTML
description at http://example.com/mydir/myvoc, you need to:

1. Make `myvoc` a directory at the place where your HTTP server will
   serve it at the desired URI.
2. Copy the script in this directory as 'index.cgi' (or 'index.wsgi'
   if your server has WSGI support).
3. In the same directory, put two files named 'index.html' and
   'index.rdf'.

If it does not work now (it didn't for me), you have to tell your HTTP
server that the directory index is index.cgi (or index.wsgi). In Apache,
this is done by creating (if not present) a `.htaccess` file in the
`myvoc` directory, and adding the following line::

DirectoryIndex index.cgi

(or `index.wsgi`, accordingly)

There are more docs in the script itself. I think the more recipes
(including for other httpds) we can provide with the script, the more
useful it will be. So feel free to propose other ones.
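(As a rough idea of the mechanism -- this is not Pierre-Antoine's
actual code, just a minimal CGI sketch of the same conneg-and-303 idea,
with hypothetical host and path defaults:)

    #!/usr/bin/env python
    # Minimal sketch: 303-redirect to index.rdf or index.html in this
    # directory, depending on the client's Accept header.
    import os

    accept = os.environ.get("HTTP_ACCEPT", "")
    # Very naive negotiation: prefer RDF/XML when it is asked for.
    target = "index.rdf" if "application/rdf+xml" in accept else "index.html"

    host = os.environ.get("HTTP_HOST", "example.com")
    script = os.environ.get("SCRIPT_NAME", "/mydir/myvoc/index.cgi")
    base = "http://" + host + script.rsplit("/", 1)[0]

    print("Status: 303 See Other")
    print("Location: %s/%s" % (base, target))
    print()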

 enjoy

  pa


Attachments: path.php, lodpub.php



IHMC (850)434 8903 or (650)494 3973
40 South Alcaniz St.   (850)202 

Re: .htaccess a major bottleneck to Semantic Web adoption / Was: Re: RDFa vs RDF/XML and content negotiation

2009-07-08 Thread Mark Birbeck
Hi Pat,

 I have checked with my system admin, and they tell me, "Yes, that is
 correct. You cannot access your .htaccess file. You cannot modify it or paste
 anything into it. Only we have access to it. No, we will not change this
 policy for you, no matter how important you think you are." Although they do
 not say it openly, the implicit message is, "we don't give a damn what the
 W3C thinks you ought to be able to do on our website."

I agree that this seems to be getting like Groundhog Day. :)

The original point of this thread seemed to me to be saying that if
.htaccess is the key to the semantic web, then it's never going to
happen.

I.e., .htaccess is a major bottleneck.

The initial discussion around that theme was then followed by all
sorts of discussions about how people could create scripts that would
choose between different files, and deliver the correct one to the
user. But the fact remained -- as you rightly point out here -- that
you still need to modify .htaccess.


 Now, has anyone got any OTHER ideas?  An idea that does not involve changing
 any actual code, and so can be done using a text editor on an HTML text
 file, would be a very good option.

:)

Did I mention RDFa?
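For instance (a minimal sketch with hypothetical names, using RDFa 1.0
attributes): the data rides along in the page you already have, with no
server configuration at all:

    <!-- FOAF data inline in an ordinary XHTML page via RDFa;
         no .htaccess and no content negotiation required. -->
    <div xmlns:foaf="http://xmlns.com/foaf/0.1/"
         about="#alice" typeof="foaf:Person">
      <span property="foaf:name">Alice</span> knows
      <a rel="foaf:knows" href="#bob">Bob</a>.
    </div>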

Regards,

Mark

-- 
Mark Birbeck, webBackplane

mark.birb...@webbackplane.com

http://webBackplane.com/mark-birbeck

webBackplane is a trading name of Backplane Ltd. (company number
05972288, registered office: 2nd Floor, 69/85 Tabernacle Street,
London, EC2A 4RR)



Re: .htaccess a major bottleneck to Semantic Web adoption / Was: Re: RDFa vs RDF/XML and content negotiation

2009-07-08 Thread Pierre-Antoine Champin

Mark,

disclaimer: I have nothing against the RDFa solution; I just don't think 
that one size fits all :)


OK, the solutions proposed here (by myself and others) still involve
editing the .htaccess. However, compared to configuring HTTP
redirections using mod_rewrite, they have two advantages:

- they are shorter and hopefully easier to adapt
- they are more likely to be allowed for end users

So I think it is progress.

Furthermore, some of the recipes may work without even touching the
.htaccess file, provided that:

- executable files are automatically considered as CGI scripts
- index.php is automatically considered as a directory index

One size does not fit all; that is why we should provide several simple
recipes, so that people may find the one that works for them.


This is why I'm asking (again) for IIS users, and users of other
httpds, to provide non-Apache recipes as well.

Of course, the "publish it in RDFa" recipe is a perfectly legal one!

  pa

On 08/07/2009 15:13, Mark Birbeck wrote:

Hi Pat,


I have checked with my system admin, and they tell me, "Yes, that is
correct. You cannot access your .htaccess file. You cannot modify it or paste
anything into it. Only we have access to it. No, we will not change this
policy for you, no matter how important you think you are." Although they do
not say it openly, the implicit message is, "we don't give a damn what the
W3C thinks you ought to be able to do on our website."


I agree that this seems to be getting like Groundhog Day. :)

The original point of this thread seemed to me to be saying that if
.htaccess is the key to the semantic web, then it's never going to
happen.

I.e., .htaccess is a major bottleneck.

The initial discussion around that theme was then followed by all
sorts of discussions about how people could create scripts that would
choose between different files, and deliver the correct one to the
user. But the fact remained -- as you rightly point out here -- that
you still need to modify .htaccess.



Now, has anyone got any OTHER ideas?  An idea that does not involve changing
any actual code, and so can be done using a text editor on an HTML text
file, would be a very good option.


:)

Did I mention RDFa?

Regards,

Mark






Re: .htaccess a major bottleneck to Semantic Web adoption / Was: Re: RDFa vs RDF/XML and content negotiation

2009-07-08 Thread Toby Inkster
On Wed, 2009-07-08 at 15:13 +0100, Mark Birbeck wrote:
 The original point of this thread seemed to me to be saying that if
 .htaccess is the key to the semantic web, then it's never going to
 happen.

It simply isn't the key to the semantic web though.

.htaccess is a simple way to configure Apache to do interesting things.
It happens to give you a lot of power in deciding how requests for URLs
should be translated into responses of data. If you have hosting which
allows you such advanced control over your settings, and you can create
nicer URLs, then by all means do so - and not just for RDF, but for all
your URLs. It's a Good Thing to do, and in my opinion, worth switching
hosts to achieve.

But all that isn't necessary to publish linked data. If you own
example.com, you can upload foaf.rdf and give yourself a URI like:

http://example.com/foaf.rdf#alice

(Or foaf.ttl, foaf.xhtml, whatever.)

No, that's not as elegant as http://example.com/alice with a
content-negotiated 303 redirect to representations in various
formats, but it does work, and it won't break anything.
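A minimal sketch of such a file in Turtle (hypothetical contents,
uploaded as-is to http://example.com/foaf.ttl):

    # Alice's URI is then http://example.com/foaf.ttl#alice
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .

    <#alice> a foaf:Person ;
        foaf:name "Alice" ;
        foaf:homepage <http://example.com/> .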

Let's not blow this all out of proportion.

-- 
Toby A Inkster
mailto:m...@tobyinkster.co.uk
http://tobyinkster.co.uk




[Ann] LinkedGeoData.org

2009-07-08 Thread Sören Auer

Dear Colleagues,

On behalf of the AKSW research group [1] I'm pleased to announce the 
first public version of the LinkedGeoData.org datasets and services.


LinkedGeoData is a comprehensive dataset derived from the OpenStreetMap
database, comprising RDF descriptions of more than 350 million spatial
features (i.e. nodes, ways, relations).


LinkedGeoData currently comprises RDF dumps, Linked Data and REST 
interfaces, links to DBpedia as well as a prototypical user interface 
for linked-geo-data browsing and authoring.


More information can be found at: http://linkedgeodata.org

Best,

Sören Auer


[1] http://aksw.org

--
Sören Auer, AKSW/Computer Science Dept., University of Leipzig
http://www.informatik.uni-leipzig.de/~auer,  Skype: soerenauer



Re: [Ann] LinkedGeoData.org

2009-07-08 Thread Ian Davis
On Wednesday, July 8, 2009, Sören Auer a...@informatik.uni-leipzig.de wrote:
 Dear Colleagues,

 On behalf of the AKSW research group [1] I'm pleased to announce the first 
 public version of the LinkedGeoData.org datasets and services.

 LinkedGeoData is a comprehensive dataset derived from the OpenStreetMap 
 database covering RDF descriptions of more than 350 million spatial features 
 (i.e. nodes, ways, relations).

 LinkedGeoData currently comprises RDF dumps, Linked Data and REST interfaces, 
 links to DBpedia as well as a prototypical user interface for linked-geo-data 
 browsing and authoring.


Very nice. How long do you think it will take for the entire dataset
to be available?

OpenStreetMap is voting soon on whether to adopt the Open Data
Commons share-alike database license. If they adopt it, will you also
adopt it for this data?



 Sören Auer


Ian

 [1] http://aksw.org

 --
 Sören Auer, AKSW/Computer Science Dept., University of Leipzig
 http://www.informatik.uni-leipzig.de/~auer,  Skype: soerenauer





Re: [Ann] LinkedGeoData.org

2009-07-08 Thread Sören Auer

Ian Davis wrote:
 Very nice. How long do you think it will take for the entire dataset
 to be available?

That might take another week or so, but for most use cases the elements 
data set should be sufficient, since it contains the most interesting 
information.
I guess the complete dataset will be a real challenge for most triple 
stores - not that they won't be able to store the data, but efficient 
querying will be very challenging and I even have some doubts that it is 
reasonable to use this data with a triple store at all. But we will try 
to make it available anyway ;-)


 OpenStreetMap is voting soon on whether to adopt the Open Data
 Commons share-alike database license. If they adopt it, will you also
 adopt it for this data?

Sure!


--Sören



Re: .htaccess a major bottleneck to Semantic Web adoption / Was: Re: RDFa vs RDF/XML and content negotiation

2009-07-08 Thread Ian Davis
On Wednesday, July 8, 2009, Toby Inkster t...@g5n.co.uk wrote:
 On Wed, 2009-07-08 at 15:13 +0100, Mark Birbeck wrote:
 The original point of this thread seemed to me to be saying that if
 .htaccess is the key to the semantic web, then it's never going to
 happen.

 It simply isn't the key to the semantic web though.

 .htaccess is a simple way to configure Apache to do interesting things.
 It happens to give you a lot of power in deciding how requests for URLs
 should be translated into responses of data. If you have hosting which
 allows you such advanced control over your settings, and you can create
 nicer URLs, then by all means do so - and not just for RDF, but for all
 your URLs. It's a Good Thing to do, and in my opinion, worth switching
 hosts to achieve.

 But all that isn't necessary to publish linked data. If you own
 example.com, you can upload foaf.rdf and give yourself a URI like:

         http://example.com/foaf.rdf#alice

 (Or foaf.ttl, foaf.xhtml, whatever.)

This just works and is how the HTML web grew. Write a document and
save it into a public space. Fancy stuff like pretty URIs needs more
work but is not at all necessary for linked data or the semantic web.



 Let's not blow this all out of proportion.

Hear hear!

 --
 Toby A Inkster
 mailto:m...@tobyinkster.co.uk
 http://tobyinkster.co.uk



Re: .htaccess a major bottleneck to Semantic Web adoption / Was: Re: RDFa vs RDF/XML and content negotiation

2009-07-08 Thread David Booth
On Wed, 2009-07-08 at 15:50 +0100, Pierre-Antoine Champin wrote:
[ . . . ]
 ok, the solutions proposed here (by myself and others) still involve 
 editing the .htaccess. 

Once again, use of a 303-redirect service such as
http://thing-described-by.org/ or http://t-d-b.org/
does not require *any* configuration or .htaccess editing.  It does not
address the problem of setting the content type correctly, but it *does*
provide an easy way to generate 303 redirects, in conformance with
"Cool URIs for the Semantic Web":
http://www.w3.org/TR/cooluris/#r303gendocument

Hmm, I thought the use of a 303-redirect service was mentioned in "Cool
URIs for the Semantic Web", but in looking back, I see it was in "Best
Practice Recipes for Publishing RDF Vocabularies":
http://www.w3.org/TR/swbp-vocab-pub/#redirect
Maybe it should be mentioned in a future version of the Cool URIs
document as well.


-- 
David Booth, Ph.D.
Cleveland Clinic (contractor)

Opinions expressed herein are those of the author and do not necessarily
reflect those of Cleveland Clinic.




Re: .htaccess a major bottleneck to Semantic Web adoption / Was: Re: RDFa vs RDF/XML and content negotiation

2009-07-08 Thread Hugh Glaser
Sorry to hear that, Pat.

On 08/07/2009 14:51, Pat Hayes pha...@ihmc.us wrote:

 
 
 On Jul 5, 2009, at 10:16 AM, Hugh Glaser wrote:
 
 OK, I'll have a go :-)
 Why did I think this would be fun to do on a sunny Sunday morning
 that has turned into afternoon?
 Here are the instructions:
 
 
 And here is why I cannot follow them.
 
 
 1.  Create a web-accessible directory, let's say foobar, with all
 your .rdf, .ttl, .ntriples and .html files in it.
 2.  Copy lodpub.php and path.php into it.
 
 OK so far...
 
 3.  Access path.php from your web server.
 
 I can see this file, but I cannot access it. Attempting to do so gives
 me the message
 
  "Can not open file .htaccess
  Reason: Could not download file (403: HTTP/1.1
  403 forbidden)"
Just a clarification, which probably doesn't help you, but just might.
When you try to access path.php, you should either get some text in which
the string "htaccess" appears (success), or some indication that you cannot
access path.php or run php.
I see no reason why you would get the message above trying to access
path.php.
(Unless somehow the attempt to run php has resulted in an attempt to access
.htaccess because of a local issue, in which case the system is badly
configured in its error reporting.)
I guess that what you have seen is the result of creating a file called
.htaccess on your local machine, and then trying to upload it to the server,
using some sort of web-based upload facility?
Best
Hugh
 
 I have checked with my system admin, and they tell me, "Yes, that is
 correct. You cannot access your .htaccess file. You cannot modify it
 or paste anything into it. Only we have access to it. No, we will not
 change this policy for you, no matter how important you think you are."
 Although they do not say it openly, the implicit message is, "we don't
 give a damn what the W3C thinks you ought to be able to do on our
 website."
 
 Now, has anyone got any OTHER ideas?  An idea that does not involve
 changing any actual code, and so can be done using a text editor on an
 HTML text file, would be a very good option.
 
 Pat Hayes
 
 

 4.  Follow the instruction to paste that text into .htaccess
 5.  You can remove path.php if you like, it was only there to help
 you get the .htaccess right.
 
 That should be it.
 The above text and files are at
 http://www.rkbexplorer.com/blog/?p=11
 
 Of course, I expect that you can tell me all sorts of problems/
 better ways, but I am hoping it works for many.
 
 Some explanation:
 We use a different method, and I have tried to extract the essence,
 and keep the code very simple.
 We trap all 404 (File not Found) in the directory, and then any
 requests coming in for non-existent files will generate a 303 with
 an extension added, depending on the Accept header.
 Note that you probably need the leading "/" followed by the full
 path from the domain root, otherwise it will just print out the text
 "lodpub.php"
 (that is not what the Apache specs seem to say, but it is what seems
 to happen).
 If you get "Additionally, a 404 Not Found error was encountered
 while trying to use an ErrorDocument to handle the request.", then
 it means that the web server is not finding your ErrorDocument.
 Put the file path.php in the same directory and point your browser
 at it - this will tell you what the path should be.
 
 Note that the httpd.conf (in /etc/httpd/conf) may not let you
 override, if your admins have tied things down really tight.
 Mine says:
AllowOverride All
 
 Finally, at the moment, note that I think that apache default does
 not put the correct MIME type on rdf files, but that is a separate
 issue, and it makes no difference that the 303 happened.
 
 Best
 Hugh
 
 On 05/07/2009 01:52, Pierre-Antoine Champin swlists-040...@champin.net wrote:
 
 On 03/07/2009 15:14, Danny Ayers wrote:
 2009/7/2 Bill Roberts b...@swirrl.com:
 I thought I'd give the .htaccess approach a try, to see what's
 involved in
 actually setting it up.  I'm no expert on Apache, but I know the
 basics of
 how it works, I've got full access to a web server and I can read
 the online
 Apache documentation as well as the next person.
 
 I've tried similar, even stuff using PURLs - incredibly difficult to
 get right. (My downtime overrides all, so I'm not even sure if I got
 it right in the end)
 
 I really think we need a (copy & paste) cheat sheet.
 
 Volunteers?
 
 (raising my hand) :)*
 
 Here is a quick python script that makes it easier (if not completely
 immediate). It may still require a one-liner .htaccess, but one that
 (I think) is authorized by most webmasters.
 
 I guess a PHP version would not even require that .htaccess, but
 sorry,
 I'm not fluent in PHP ;)
 
 So, assuming you want to publish a vocabulary with an RDF and an HTML
 description at http://example.com/mydir/myvoc, you need to:
 
 1. Make `myvoc` a directory at the place where your HTTP server will
serve it at the desired URI.
 2. Copy the script in this directory as 'index.cgi' (or
 'index.wsgi' if

Re: Re: .htaccess a major bottleneck to Semantic Web adoption / Was: Re: RDFa vs RDF/XML and content negotiation

2009-07-08 Thread David Booth
 On Wed 08/07/09  5:08 PM , Olivier Rossel olivier.ros...@gmail.com sent:
 Do you mean that all dereferenceable URIs of an RDF document should have
 their domain name end with t-d-b.org, so that their resolution leads to
 the TDB server, which redirects to the final location?

No, I'm not suggesting that *all* dereferenceable RDF URIs should use t-d-b.org.
I'm just pointing out that it is an alternative if you cannot configure your
own server to do 303 redirects.  Using it does require putting
"http://t-d-b.org?" at the beginning of your URI, so if you do not want to do
that then you should use a different approach.  To be clear, if you use this
approach, then instead of writing a URI such as

  http://example/mydata.rdf

you would write it as

 http://t-d-b.org?http://example/mydata.rdf

and if that URI is dereferenced, the 303-redirect service will automatically
return a 303 redirect to

http://example/mydata.rdf

David Booth

 
 On Wednesday, July 8, 2009, David Booth da...@dbooth.org wrote:
  On Wed, 2009-07-08 at 15:50 +0100, Pierre-Antoine Champin wrote:
  [ . . . ]
   ok, the solutions proposed here (by myself and others) still involve
   editing the .htaccess.

  Once again, use of a 303-redirect service such as
  http://thing-described-by.org/ or http://t-d-b.org/
  does not require *any* configuration or .htaccess editing.  It does not
  address the problem of setting the content type correctly, but it *does*
  provide an easy way to generate 303 redirects, in conformance with
  "Cool URIs for the Semantic Web":
  http://www.w3.org/TR/cooluris/#r303gendocument

  Hmm, I thought the use of a 303-redirect service was mentioned in "Cool
  URIs for the Semantic Web", but in looking back, I see it was in "Best
  Practice Recipes for Publishing RDF Vocabularies":
  http://www.w3.org/TR/swbp-vocab-pub/#redirect
  Maybe it should be mentioned in a future version of the Cool URIs
  document as well.

  --
  David Booth, Ph.D.
  Cleveland Clinic (contractor)

  Opinions expressed herein are those of the author and do not necessarily
  reflect those of Cleveland Clinic.



Re: .htaccess a major bottleneck to Semantic Web adoption / Was: Re: RDFa vs RDF/XML and content negotiation

2009-07-08 Thread Toby A Inkster

On 8 Jul 2009, at 19:58, Seth Russell wrote:

Is it not true that everything past the hash (#alice) is not
transmitted back to the server when a browser clicks on a
hyperlink? If that is true, then the server would not be able to
serve anything different if a browser clicked upon
http://example.com/foaf.rdf or if they clicked upon
http://example.com/foaf.rdf#alice .


Indeed - the server doesn't see the fragment.

If that is true, and it probably isn't, then is not the Semantic
Web crippled from using that technique to distinguish between
resources and at the same time hyperlinking between those
different resources?



Not at all.

Is the web of documents crippled because the server can't distinguish
between requests for http://example.com/document.html and
http://example.com/document.html#part2 ? Of course it isn't - the server
doesn't need to distinguish between them - it serves up the same web
page either way and lets the user agent distinguish.


Hash URIs are very valuable in linked data, precisely *because* they  
can't be directly requested from a server - they allow us to bypass  
the whole HTTP 303 issue.
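Concretely, when a user agent dereferences
http://example.com/foaf.rdf#alice, the fragment never goes on the wire;
the server only ever sees:

    GET /foaf.rdf HTTP/1.1
    Host: example.com

and #alice is resolved client-side against the returned document.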


--
Toby A Inkster
mailto:m...@tobyinkster.co.uk
http://tobyinkster.co.uk




Re: .htaccess a major bottleneck to Semantic Web adoption / Was: Re: RDFa vs RDF/XML and content negotiation

2009-07-08 Thread Hugh Glaser
On 09/07/2009 00:38, Toby A Inkster t...@g5n.co.uk wrote:

 On 8 Jul 2009, at 19:58, Seth Russell wrote:
 
 Is it not true that everything past the hash (#alice) is not
 transmitted back to the server when a browser clicks on a
 hyperlink? If that is true, then the server would not be able to
 serve anything different if a browser clicked upon
 http://example.com/foaf.rdf or if they clicked upon
 http://example.com/foaf.rdf#alice .
 
 Indeed - the server doesn't see the fragment.
 
 If that is true, and it probably isn't, then is not the Semantic
 Web crippled from using that technique to distinguish between
 resources and at the same time hyperlinking between those
 different resources?
 
 
 Not at all.
 
 Is the web of documents crippled because the server can't distinguish
 between requests for http://example.com/document.html and
 http://example.com/document.html#part2 ? Of course it isn't - the server
 doesn't need to distinguish between them - it serves up the same web
 page either way and lets the user agent distinguish.
 
 Hash URIs are very valuable in linked data, precisely *because* they
 can't be directly requested from a server - they allow us to bypass
 the whole HTTP 303 issue.
Mind you, it does mean that you should make sure that you don't put too many
LD URIs in one document.
If DBpedia decided to represent all its RDF in one document, and then use
hash URIs, it would be somewhat problematic.
 
 --
 Toby A Inkster
 mailto:m...@tobyinkster.co.uk
 http://tobyinkster.co.uk
 
 
 




Re: .htaccess a major bottleneck to Semantic Web adoption / Was: Re: RDFa vs RDF/XML and content negotiation

2009-07-08 Thread Roberto García
Hi Martin, all,

I would like to point to something that might be useful for RDF data
publishing. The ReDeFer RDF2HTML service
(http://rhizomik.net/redefer/) renders input RDF/XML data as HTML for
user interaction (e.g. as used in http://rhizomik.net/rhizomer/). Now,
it also embeds RDFa that facilitates retrieving the source RDF back.

I've tested it with a pair of GoodRelations examples:
http://rhizomik.net/redefer-services/rdf2html?rdf=http://www.heppnetz.de/projects/goodrelations/minimalExampleGoodRelations.owl
http://rhizomik.net/redefer-services/rdf2html?rdf=http://www.heppnetz.de/projects/goodrelations/goodrelationsExamplesPrimerFinalOWL.owl

I've been able to check that it works for these examples by comparing
the triples generated by RDFa Distiller and RDFa Bookmarklet from the
previous HTML+RDFa pages to those generated by any23 and Triplr from
the original OWL files.

The generated HTML+RDFa can then be used in order to publish RDF just
by Cut&Paste, e.g. using an online editor like FCKEditor. This has
been the procedure followed in order to publish the RDF in
http://rhizomik.net/redefer/rdf2html/minimalExampleGoodRelations/

The HTML+RDFa view might be customised using CSS and made more usable
if the source RDF contains rdfs:labels for the involved resources,
which are used instead of the last part of the URIs if available.

In any case, if it is not to be shown to the user, it is easier to just
model triples using hidden spans instead of using this service...

Best regards,


Roberto García
http://rhizomik.net/~roberto

PS: Caution, this is work in progress. Feedback appreciated :-)



On Wed, Jul 8, 2009 at 12:59 PM, Martin Hepp (UniBW)
martin.h...@ebusiness-unibw.org wrote:
 Google has just changed the wording of the documentation:

 http://knol.google.com/k/google-rich-snippets/google-rich-snippets/32la2chf8l79m/1#

 The mention of the cloaking risk has been removed. While this is not final
 clearance, it is a nice sign that our concerns are heard.

 Best
 Martin


 Martin Hepp (UniBW) wrote:

 Dear all:
 Fyi - I am in contact with Google regarding the clarification of what kind of
 empty div/span elements are considered acceptable in the context of RDFa. It
 may take a few days to get an official statement. Just so that you know it
 is being taken care of...

 Martin



 Mark Birbeck wrote:

 Hi Martin,



 b) download RDFa snippet that just represents the RDF/XML content (i.e.
 such that it does not have to be consolidated with the presentation-level
 part of the Web page).


 By coincidence, I just read this:

  Hidden div's -- don't do it!
  It can be tempting to add all the content relevant for a rich snippet
  in one place on the page, mark it up, and then hide the entire block
  of text using CSS or other techniques. Don't do this! Mark up the
  content where it already exists. Google will not show content from
  hidden div's in Rich Snippets, and worse, this can be considered
  cloaking by Google's spam detection systems. [1]

 Regards,

 Mark

 [1]
 http://knol.google.com/k/google-rich-snippets/google-rich-snippets/32la2chf8l79m/1#




 --
 --
 martin hepp
 e-business & web science research group
 universitaet der bundeswehr muenchen

 e-mail:  mh...@computer.org
 phone:   +49-(0)89-6004-4217
 fax:     +49-(0)89-6004-4620
 www:     http://www.unibw.de/ebusiness/ (group)
        http://www.heppnetz.de/ (personal)
 skype:   mfhepp
 twitter: mfhepp

 Check out the GoodRelations vocabulary for E-Commerce on the Web of Data!
 

 Webcast:
 http://www.heppnetz.de/projects/goodrelations/webcast/

 Talk at the Semantic Technology Conference 2009: Semantic Web-based
 E-Commerce: The GoodRelations Ontology
 http://tinyurl.com/semtech-hepp

 Tool for registering your business:
 http://www.ebusiness-unibw.org/tools/goodrelations-annotator/

 Overview article on Semantic Universe:
 http://tinyurl.com/goodrelations-universe

 Project page and resources for developers:
 http://purl.org/goodrelations/

 Tutorial materials:
 Tutorial at ESWC 2009: The Web of Data for E-Commerce in One Day: A Hands-on
 Introduction to the GoodRelations Ontology, RDFa, and Yahoo! SearchMonkey

 http://www.ebusiness-unibw.org/wiki/GoodRelations_Tutorial_ESWC2009








Re: RDFa vs RDF/XML and content negotiation

2009-07-08 Thread Sergey Chernyshev
In MySemanticProfile I use both RDFa and XHTML + RDF/XML using content
negotiation (N3/Turtle will be there at some point) plus it also contains
Microformats (when applicable).

I think that if your goal is to publish it to the public, publish in all
formats, including CSV or vCard, as long as there is at least one tool that
will potentially consume this information.

Now, the question of why somebody would use any of the formats is a
different story, and it applies to every format including HTML (after all,
I have a regular homepage, so I don't need another HTML page to display
data to people, just to computers).

Thank you,

Sergey


--
Sergey Chernyshev
http://www.sergeychernyshev.com/


On Tue, Jun 23, 2009 at 7:09 AM, bill.robe...@planet.nl wrote:

  I've been trying to weigh up the pros and cons of these two approaches to
 understand more clearly when you might want to use each.  I hope that the
 list members will be able to provide me with the benefit of their experience
 and insight!

 So the situation is that I have some information on a topic and I want to
 make it available both in machine readable form and in human readable form,
 for example a company wanting to publish information on its products, or a
 government department wanting to publish some statistics.

 I can either:
 1) include 'human' and 'machine' representations in the same web page using
 RDFa
 2) have an HTML representation and a separate RDF/XML representation (or N3
 or whatever) and decide which to provide via HTTP content negotiation.

 So which should I use? I suppose it depends on how the information will be
 produced, maintained and consumed.  Some generic requirements/wishes:

 - I only want to have one place where the data is managed.
 - I want people to be able to browse around a nicely formatted
 representation of the information, ie a regular web page, probably
 incorporating all sorts of other stuff as well as the data itself.
 - I don't want to type lots of XHTML or XML.
 - I want the data to be found and used by search engines and aggregators.


 The approach presented by Halb, Raimond and Hausenblas (
 http://events.linkeddata.org/ldow2008/papers/06-halb-raimond-building-linked-data.pdf)
 seems attractive: to summarise crudely, auto-generate some RDFa from your
 database, but provide an RDF/XML dump too.

 On the other hand I find that RDFa leads to rather messy markup - I prefer
 the 'cleanliness' of the separate representations.

 For any non-trivial amount of data, we will need a templating engine
 of some sort for either approach.  I suppose what may tip the balance is
 that Yahoo and Google are starting to make use of RDFa, but AFAIK they are
 not (yet) doing anything with classic content-negotiated linked data.

 Anyone care to argue for one approach or the other?  I suppose the answer
 may well be "it depends" :-)  But if so, what does it depend on?

 Thanks in advance

 Bill Roberts