Re: New github project for RDFizer scripts

2010-02-03 Thread Kingsley Idehen

Melvin Carvalho wrote:

One for the collection?

http://code.google.com/p/lindenb/source/browse/trunk/src/xsl/linkedin2foaf.xsl


On 21 May 2009 19:53, Kingsley Idehen wrote:


All,

The 30+ XSLT stylesheets [1] used by our collection of Sponger
Cartridges are now available for community development and
enhancement via a GitHub project [2].

Links:

1.

http://virtuoso.openlinksw.com/dataspace/dav/wiki/Main/ClickableVirtSpongerCloud
2. http://tr.im/m0PT


Sure!

Kingsley



--

Regards,

Kingsley Idehen	  
President & CEO 
OpenLink Software 
Web: http://www.openlinksw.com

Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter: kidehen 









Re: New github project for RDFizer scripts

2010-02-03 Thread Melvin Carvalho
One for the collection?

http://code.google.com/p/lindenb/source/browse/trunk/src/xsl/linkedin2foaf.xsl


On 21 May 2009 19:53, Kingsley Idehen  wrote:

> All,
>
> The 30+ XSLT stylesheets [1] used by our collection of Sponger Cartridges
> are now available for community development and enhancement via a GitHub
> project [2].
>
> Links:
>
> 1.
> http://virtuoso.openlinksw.com/dataspace/dav/wiki/Main/ClickableVirtSpongerCloud
> 2. http://tr.im/m0PT
>
>
> --
>
>
> Regards,
>
> Kingsley Idehen   Weblog: 
> http://www.openlinksw.com/blog/~kidehen
> President & CEO OpenLink Software Web: http://www.openlinksw.com
>
>
>
>
>
>


Re: [fresnel] Re: Fresnel: State of the Art?

2010-02-03 Thread Pierre-Antoine Champin
On 02/02/2010 16:20, Leo Sauermann wrote:
> * you won't find anything else that really fits RDF, because of the
> "subclass/multiclass/missing properties/too many properties" dynamics
> you have in RDF. Templating languages are not good for this. Also,
> Fresnel data can spread and grow on the web like RDF - there are no
> security problems associated with it (as there might be with templating)

Could you elaborate on that, or point me to a document that does?

> sure, it is bad for many cases and could be improved, but the general
> concept of Lenses/Views/display/hide properties and ordering properties
> is essential and working.
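
For readers who haven't seen Fresnel, the lens concept Leo describes amounts to only a few triples. A minimal sketch in N3 (the foaf:Person domain and the property selection are illustrative choices, not taken from any particular deployment):

```turtle
@prefix fresnel: <http://www.w3.org/2004/09/fresnel#> .
@prefix foaf:    <http://xmlns.com/foaf/0.1/> .
@prefix ex:      <http://example.org/lenses#> .

# Show name and homepage (in that order) for any foaf:Person; properties
# not listed in fresnel:showProperties are simply not displayed.
ex:personLens a fresnel:Lens ;
    fresnel:classLensDomain foaf:Person ;
    fresnel:showProperties ( foaf:name foaf:homepage ) .
```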

It is a very elegant and powerful approach, indeed, but it also has
quite an overhead. Template languages have a "quick and dirty" quality,
which may appeal to some users... -- and I attach no pejorative
connotation to "quick and dirty" ;)

  pa



HPC-accelerated entity recognition

2010-02-03 Thread Joe Presbrey
Hi all,

Has anyone seen good papers on entity recognition, topic extraction,
or similar correlation algorithms in a semantic context using high
performance computing tech: FPGAs, GPUs, Cuda, OpenCL?

I have done a fair bit of searching but must be using the wrong
keywords.  Thanks for any pointers!

--
Joe Presbrey

On Tue, Feb 2, 2010 at 3:50 PM, Tim Finin  wrote:
> This is closely related to the task of the Knowledge Base Population
> track [1] that was run as part of the NIST 2009 Text Analysis
> Conference [2].  The KBP track required systems to do two tasks:
> entity linking and slot filling.



[JOB] 2 Doctorate and 1 PostDoc position at AKSW / Uni Leipzig

2010-02-03 Thread Sören Auer
For collaborative research projects in the area of Linked Data 
technologies and the Semantic Web, the research group Agile Knowledge 
Engineering and Semantic Web (AKSW) at Universität Leipzig is opening 
positions for:



 *1 Postdoctoral Researcher (TV-L E13/14)*

The ideal candidate holds a doctoral degree in Computer Science or a 
related field and is able to combine theoretical and practical aspects 
in her/his work. The candidate is expected to build up a small team by 
successfully competing for funding, supervising doctoral students, and 
collaborating with industry. Fluent English communication and software 
technology skills are fundamental requirements. The candidate should 
have a background in at least one of the following fields:


* semantic web technologies and linked data
* knowledge representations and ontology engineering
* database technologies and data integration
* HCI and user interface design for Web/multimedia content

The position starts as soon as possible, is open until filled, and will 
initially be granted for two years with the possibility of extension.



 *2 Doctoral Students (50% TV-L E13 or equivalent stipend)*

The ideal candidate holds an MS degree in Computer Science or a related 
field and is able to consider both theoretical and practical 
implementation aspects in her/his work. Fluent English communication and 
programming skills are fundamental requirements. The candidate should 
have experience and commitment to work on a doctoral thesis in one of 
the following fields:


* semantic web technologies and linked data
* knowledge representations and ontology engineering
* database technologies and data integration
* HCI and user interface design for Web/multimedia content

The position starts as soon as possible and will initially be granted 
for one year, with possible extension to three years overall.



HOW TO APPLY

Excellent candidates are invited to apply with:
* Curriculum vitae and copies of degree certificates/transcripts,
* Writing samples/copies of relevant scientific papers (e.g. thesis),
* Letters of recommendation.

Further information can also be found at: http://aksw.org/Jobs

Please send your application in PDF format, indicating in the subject
'Application for PhD/PostDoc position', to a...@uni-leipzig.de.


--
Sören Auer - University of Leipzig - Dept. of Computer Science
http://www.informatik.uni-leipzig.de/~auer, +49 (341) 97-32323



Re: [fresnel] Fresnel: State of the Art?

2010-02-03 Thread Axel Rauschmayer
> Our goal with the first release of the Fresnel vocabulary in 2006 was to have 
> more people (beyond us) play with it in different contexts and get feedback 
> so that the language could be enhanced iteratively. Maybe it is now time to 
> do such an iteration?

I am working on my own Fresnel 2. The spec should be finished in the coming 3 
months. It strips Fresnel down to the features I consider minimal and adds 
other things that I've found useful, including editing features. If anyone is 
interested, I can make this spec public once it is finished and then everyone 
can comment on it. If someone thinks that I've left out an important feature, 
we now have the advantage of concrete use cases when adding it back in. That 
way, we should arrive at a streamlined new Fresnel. I would argue in favor of 
breaking compatibility, for the sake of simplicity. A script could be used to 
translate F1 to F2.

I do not want to impose, and if what I do proves too controversial, I can 
always fork.

If there is to be a version 2 of Fresnel, a small group of people (5 max) 
should have the final word, to avoid "design by committee", where one tries to 
fulfill all wishes, but ends up fulfilling none. All this after carefully 
considering all community input, obviously.

Greetings,

Axel

-- 
axel.rauschma...@ifi.lmu.de
http://www.pst.ifi.lmu.de/~rauschma/






Re: DBpedia-based entity recognition service / tool?

2010-02-03 Thread Gunnar Aastrand Grimnes
Matthias,

Our Epiphany project will also do part of what you want, although it is
geared towards producing RDFa output, i.e. annotating your HTML page
with links to DBpedia etc.

http://projects.dfki.uni-kl.de/epiphany/

- Gunnar

Ivan Herman wrote:
> Not providing an answer, but... if such tools are around, I would love
> to see them added to the SWSWiki[1]. At the moment, there is a generic
> category 'Tagging', with the following input:
> 
> http://www.w3.org/2001/sw/wiki/Category:Tagging
> 
> More would be good...
> 
> Ivan
> 
> [1] http://www.w3.org/2001/sw/wiki/
> 
> On 2010-2-2 13:26 , Matthias Samwald wrote:
>> Dear LOD community,
>>
>> I would be glad to hear your advice on how to best accomplish a simple
>> task: extracting DBpedia entities (identified by DBpedia URIs) from a
>> string of text, with good accuracy and recall, and possibly with options
>> to constrain the recognized entities to some subset of DBpedia, based on
>> categories. The tool or service should be performant enough to process
>> large numbers of strings in a reasonable amount of time.
>> Given the prolific creation of tiny tools and services in this community,
>> I am puzzled by my inability to find anything that accomplishes this
>> task.
>> Could you point me to something like that? Are there tools/services for
>> Wikipedia that I could use?
>> Zemanta seems to be too much geared towards 'enhanced blogging', while
>> OpenCalais does not return Wikipedia/DBpedia identifiers. Please correct
>> me if I am wrong.
>>
>> Cheers,
>> Matthias
>>
> 
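
Absent an off-the-shelf service, the task Matthias describes can be approximated with a simple gazetteer: look up surface strings in a label-to-URI table and prefer the longest match at each position. A rough sketch under that assumption (the three-entry table and its URIs are illustrative placeholders for a real rdfs:label dump):

```python
import re

# Toy label -> URI table. In practice this would be loaded from a DBpedia
# labels dump (rdfs:label triples), optionally filtered by category; the
# entries below are illustrative placeholders.
LABELS = {
    "Semantic Web": "http://dbpedia.org/resource/Semantic_Web",
    "RDF": "http://dbpedia.org/resource/Resource_Description_Framework",
    "Linked Data": "http://dbpedia.org/resource/Linked_data",
}

def extract_entities(text):
    """Return (surface form, URI) pairs found in text, left to right."""
    matches = []
    for label in LABELS:
        for m in re.finditer(re.escape(label), text):
            matches.append((m.start(), label, LABELS[label]))
    # Prefer longer matches at the same position, then drop any match
    # whose span overlaps one that has already been accepted.
    matches.sort(key=lambda t: (t[0], -len(t[1])))
    result, covered = [], set()
    for start, label, uri in matches:
        span = range(start, start + len(label))
        if not covered.intersection(span):
            covered.update(span)
            result.append((label, uri))
    return result

print(extract_entities("Linked Data builds on RDF and the Semantic Web."))
```

Accuracy and recall on real text would of course require disambiguation and category filtering on top of this, which is exactly where a dedicated service would earn its keep.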


-- 
Gunnar Aastrand Grimnes
gunnar.grimnes [AT] dfki.de

DFKI GmbH
Knowledge Management
Trippstadter Strasse 122
D-67663 Kaiserslautern
Germany

Office: +49 631 205 75-117
Mobile: +49 177 277 4397





Re: foaf dataset

2010-02-03 Thread Melvin Carvalho
http://wiki.foaf-project.org/w/DataSources

If anyone replies, it would be great if you could add any other sources
you find to the wiki.

On 3 February 2010 03:29, Jie Bao  wrote:

> Dear LODers
>
> We are looking for a FOAF dataset. UMBC collected one some years ago
> [1]. Does anyone know of a newer or bigger dataset?
>
> Thanks!
>
> [1] http://ebiquity.umbc.edu/blogger/2005/01/25/foaf-dataset-available/
>
>
> -
> Jie Bao
> http://www.cs.rpi.edu/~baojie 
>
>


3rd CfP: LDOW2010 - 3rd International Workshop on Linked Data on the Web, at WWW2010, Raleigh, USA

2010-02-03 Thread Michael Hausenblas
Dear LODers,

This is a reminder that the deadline for submissions to LDOW2010,
the 3rd International Workshop on Linked Data on the Web, is coming up
in less than two weeks' time:

Submission deadline: 15th February 2010, 23.59 Hawaii time

We are looking forward to receiving your submissions for LDOW2010 - hope to
see you in Raleigh!

Cheers,

Chris Bizer
Tom Heath
Tim Berners-Lee
Michael Hausenblas

==  Call for Papers  ===

Linked Data on the Web (LDOW2010) Workshop at WWW2010

=

April 27th, 2010, Raleigh, North Carolina, USA

=

Objectives

The Web is increasingly understood as a global information space
consisting not just of linked documents, but also of linked data. More
than just a vision, the resulting Web of Data has been brought into
being by the maturing of the Semantic Web technology stack, and by the
publication of large datasets according to the principles of Linked
Data. To date, the Web of Data has grown to a size of roughly 13.1
billion RDF triples, with contributions coming increasingly from
companies, government and public sector projects, as well as from
individual Web enthusiasts. In addition to publishing and interlinking
datasets, there is intensive work on Linked Data browsers, Web of Data
search engines and other applications that consume Linked Data from
the Web.

LDOW2010 follows the successful LDOW2008 workshop at WWW2008 in
Beijing and the LDOW2009 workshop at WWW2009 in Madrid. As the
publication of Linked Data on the Web continues apace, the need
becomes more pressing for principled research in the areas of user
interfaces for the Web of Data as well as on issues of quality, trust
and provenance in Linked Data. We also expect to see a number of
submissions related to current areas of high Linked Data activity,
such as government transparency, life sciences and the media industry.
The goal of this workshop is to provide a forum for exposing high
quality, novel research and applications in these (and related) areas.
In addition, by bringing together researchers in this field, we expect
the event to further shape the ongoing Linked Data research agenda.

=

Topics of Interest

Topics of interest for the workshop include, but are not limited to,
the following:

1. Linked Data Application Architectures
   * crawling, caching and querying Linked Data
   * dataset dynamics and synchronization
   * Linked Data mining

2. Data Linking and Data Fusion
   * linking algorithms and heuristics, identity resolution
   * Web data integration and data fusion
   * link maintenance
   * performance of linking infrastructures/algorithms on Web data

3. Quality, Trust and Provenance in Linked Data
   * tracking provenance and usage of Linked Data
   * evaluating quality and trustworthiness of Linked Data
   * profiling of Linked Data sources

4. User Interfaces for the Web of Data
   * approaches to visualizing and interacting with distributed Web data
   * Linked Data browsers and search engines

5. Data Publishing
   * tools for publishing large data sources as Linked Data on the Web
(e.g. relational databases, XML repositories)
   * embedding data into classic Web documents (e.g. RDFa, Microformats)
   * describing data on the Web (e.g. voiD, Semantic Site Map)
   * licensing issues in Linked Data publishing

6. Business models for Linked Data publishing and consumption

=

Submissions

We seek three kinds of submissions:

1. Full technical papers: up to 10 pages in ACM format
2. Short technical and position papers: up to 5 pages in ACM format
3. Demo description: up to 2 pages in ACM format

Submissions must be formatted using the WWW2010 templates available at
http://www2010.org/www/authors/submissions/formatting-guidelines/.

Submissions will be peer reviewed by three independent reviewers.
Accepted papers will be presented at the workshop and included in the
workshop proceedings.

Please submit your paper via EasyChair at
http://www.easychair.org/conferences/?conf=ldow2010

=

Important Dates

Submission deadline: 15th February 2010, 23.59 Hawaii time
Notification of acceptance: 8th March 2010
Camera-ready versions of accepted papers: 21st March 2010
Workshop date: 27th or 28th April 2010

=

Organising Committee

Christian Bizer, Freie Universität Berlin, Germany
Tom Heath, Talis Information Ltd, UK
Tim Berners-Lee, MIT CSAIL, USA
Michael Hausenblas, DERI, NUI Galway, Ireland

=

Programme Committee

Alexandre Passant, DERI, NUI Galway, Ireland
Andreas Langegger, University of Linz, Austria
Andy Seaborne, Talis Information Ltd, UK
Axel Polleres, DERI, NUI Galway, Ireland
Bernhard Schandl, University of Vienna, Austria
Christopher Brewster, Aston Business School, UK
Dave Reynolds, Epimorph

Re: [fresnel] Fresnel: State of the Art?

2010-02-03 Thread Emmanuel Pietriga
On Feb 1, 2010, at 3:09 PM, Axel Rauschmayer wrote:

> I think, it would make sense at some point in time to work on "Fresnel 2".

[...]

> Axel



On Feb 2, 2010, at 8:18 PM, Hugh Glaser wrote:

[...]

> However, it is now a bit fragile to use - not because of the software (we
> use Jfresnel), but by the time you have over 800 lines of fresnel n3 with
> terms coming from more than 15 ontologies, it becomes a bit like writing
> machine code. And as hard to debug.
> 
> I keep wanting to write a system to generate or maintain it, but can't find
> the time.
> Mind you, not sure what it would look like in Protégé - maybe that is the
> answer? But then would need to find the time to investigate, and in the end
> it ain't broke so I haven't fixed it. :-)
> 
> But it is certainly an appropriate component in the scheme of the Web of
> Data, and a polishing might be beneficial, especially if it resulted in
> support tools.

> 
> Best
> Hugh



Our goal with the first release of the Fresnel vocabulary in 2006 was to have 
more people (beyond us) play with it in different contexts and get feedback so 
that the language could be enhanced iteratively. Maybe it is now time to do 
such an iteration?

--
Emmanuel Pietriga
INRIA Saclay - Projet In Situ
http://www.lri.fr/~pietriga