Research Associate/PhD Student on Open Access Publishing in Social Science

2016-07-31 Thread Christoph LANGE
The Enterprise Information Systems (EIS) research group at the
University of Bonn, Germany, is searching for a

Research Associate (Wissenschaftliche(r) Mitarbeiter(in))

(initially 50% full-time equivalent TV-L 13)

to work in the OSCOSS (Opening Scholarly Communication in Social
Sciences) project at the Institute of Computer Science III.  Initial
appointment will be at least until the end of 2017 (or up to 3 years
if this is the start of your PhD studies), starting as soon as
possible.  A combination with contracts in other, related projects of
the EIS group is possible.

You will design a software architecture for collaboratively authoring,
reviewing and reading social science papers connected to research
datasets, source code repositories and publication databases.  You will
implement this architecture based on existing software systems and
support our application partner GESIS (Leibniz Institute for the Social
Sciences) in evaluating this implementation in the real-world scenario
of publication workflows for their journals.

You will be able to use the results you achieve in the project for
working towards a doctoral dissertation.

We offer:

● Close supervision by the senior members of the OSCOSS research team at
the University of Bonn and GESIS
● Financial support to attend relevant conferences
● Close interaction with colleagues working on related projects in the
fields of scientific information systems and open educational resources
(OpenAIRE2020, SlideWiki)
● The possibility to teach and supervise students on topics related to
the project
● The possibility to obtain a discounted public transport ticket

Requirements:

● A Master's degree in a relevant field (Computer Science, Information
Sciences or equivalent)
● Proficiency in spoken and written English.  Proficiency in German is a
plus.
● Proficiency in modern programming languages and software
engineering methodologies.  In particular, proficiency in
collaborative editing solutions with JavaScript frontends, PHP and
Python for web application backends, XML technologies and web services
is required for OSCOSS.
● Proven team software development skills.
● Familiarity with Digital Libraries, Semantic Web, Text Mining, Data
Analytics, and Social Science is an asset.

To apply, please send to the Enterprise Information Systems group
<eis-applicati...@lists.iai.uni-bonn.de> a CV, a Master's certificate or
university transcripts, a motivation letter clearly addressing each of
the requirements laid out above and including a short research plan
focused on the topics of the OSCOSS project, two letters of
recommendation, and an English writing sample (e.g. prior publication
or Master's thesis excerpt).  Please include "OSCOSS" in your email
subject and indicate whether you would like to do a PhD with us.
Please do not send emails larger than 10 MB.

Applications will be processed continuously.  Please indicate your
intent to apply as soon as possible. Complete applications arriving
before 12 August 2016 will receive full consideration.

Please direct informal enquiries to the same email address; see
http://eis.iai.uni-bonn.de/Projects/OSCOSS for further information
about the project.

The University of Bonn is an equal opportunities employer.

-- 
Dr. Christoph Lange, Enterprise Information Systems Department
Applied Computer Science @ University of Bonn; Fraunhofer IAIS
http://langec.wordpress.com/about, Skype duke4701

→ Please note: I will be on parental leave from 29 July to 28 October.
  Colleagues will stand in for me by project.



Research Associate/PhD Student on Open Access Publishing in Social Science

2015-10-26 Thread Christoph LANGE
The University of Bonn, Germany, is searching for a

Research Associate (Wissenschaftliche(r) Mitarbeiter(in))

(initially 50% full-time equivalent TV-L 13, up to 75% possible)

to work in the OSCOSS (Opening Scholarly Communication in Social
Sciences) project at the Institute of Computer Science III.  Initial
appointment will be for 2 years, starting as soon as possible.

You will design a software architecture for collaboratively authoring,
reviewing and reading social science papers connected to research
datasets, source code repositories and publication databases.  You will
implement this architecture based on existing software systems and
support our application partner GESIS (Leibniz Institute for the Social
Sciences) in evaluating this implementation in the real-world scenario
of publication workflows for their journals.

You will be able to use the results you achieve in the project for
working towards a doctoral dissertation.

We offer:

● Close supervision by the senior members of the OSCOSS research team at
the University of Bonn and GESIS
● Financial support to attend relevant conferences
● Close interaction with colleagues working on related projects in the
fields of scientific information systems and open educational resources
(OpenAIRE2020, SlideWiki)
● The possibility to teach and supervise students on topics related to
the project
● The possibility to obtain a discounted public transport ticket

Requirements:

● A Master's degree in a relevant field (Computer Science, Information
Sciences or equivalent)
● Proficiency in spoken and written English.  Proficiency in German is a
plus.
● Proficiency in modern programming languages and software
engineering methodologies.  In particular, proficiency in JavaScript,
PHP, Python, XML technologies and web services is required for OSCOSS.
● Familiarity with Digital Libraries, Semantic Web, Text Mining, Data
Analytics, and Social Science is an asset.

To apply, please send to Dr. Christoph Lange <lan...@cs.uni-bonn.de> a
CV, a Master's certificate or university transcripts, a motivation letter
including a short research plan focused on OSCOSS, two letters of
recommendation, and an English writing sample (e.g. prior publication or
Master's thesis excerpt).  Please include "OSCOSS" in your email subject
and indicate whether you would like to do a PhD with us.  Applications
arriving before 13 November 2015 will be given full consideration.

Please direct informal enquiries to the same email address or phone +49
2241 14-2428; see http://eis.iai.uni-bonn.de/Projects/OSCOSS for further
information about the project.

The University of Bonn is an equal opportunities employer.

-- 
Dr. Christoph Lange, Enterprise Information Systems Department
Applied Computer Science @ University of Bonn; Fraunhofer IAIS
http://langec.wordpress.com/about, Skype duke4701

→ Job offer: Research associate (with PhD option) on Transforming a Social
  Science Journal's Authoring/Reviewing/Publishing Workflow to Web
  Technology. Apply by 13 Nov. http://eis.iai.uni-bonn.de/Jobs.html#oscoss



Two fully funded PhD positions on Answering Questions using Web Data

2015-02-08 Thread Christoph LANGE
Fraunhofer IAIS is pleased to announce two PhD positions, fully funded 
within the EU research project “WDAqua: Answering Questions using Web 
Data”, which started in January 2015.


Research Area:

The project will undertake advanced fundamental and applied research 
into models, methods, and tools for data-driven question answering on 
the Web, spanning a diverse range of areas and disciplines (data 
analytics, data mining, information retrieval, social computing, cloud 
computing, large-scale distributed computing, Linked Data, and Web 
science). Potential topics for a PhD dissertation include, but are not 
limited to:


● Design of a cloud-based system architecture for question answering 
(QA), extensible by plugins for all stages of the QA and Web data processing pipeline
● High-quality interpretation of voice input and natural language text 
as database queries for question answering.
● Leveraging Web Data for advanced entity disambiguation and 
contextualisation of queries given as natural language.
● Question answering methods using ecosystems of heterogeneous data sets 
(structured, unstructured, linked, stream-like, uncertain).


Institution

The roughly 200 employees of the Fraunhofer Institute for Intelligent 
Analysis and Information Systems (IAIS; http://www.iais.fraunhofer.de) 
investigate and develop innovative systems for data analysis and 
information management. Specific areas of competence include information 
integration (represented by the IAIS department Organized Knowledge), 
big data (department Knowledge Discovery), and multimedia technologies 
(department NetMedia).


Requirements:
1. A Master's degree in Computer Science (or equivalent).
2. You must not have resided or worked in Germany for more than 12 
months in the 3 years immediately before your starting date.
3. Proficiency in spoken and written English. Proficiency in German is a 
plus but not required.
4. Proficiency in programming languages such as Java/Scala or JavaScript, 
and in modern software engineering methodologies.
5. Familiarity with Semantic Web technologies, Natural Language 
Processing, Speech Recognition, Indexing Technologies, Distributed 
Systems and Cloud Computing is an asset.


As a successful candidate for this award, you will:
1. Spend the majority of your time at Fraunhofer IAIS, where you will 
research and write a dissertation leading to a PhD (awarded by the 
University of Bonn).
2. Have a minimum of two academic supervisors from the WDAqua project.
3. Receive a full salary and a support grant to attend conferences, 
summer schools, and other events related to your research each year.
4. Engage with other researchers and participate in the training program 
offered by the WDAqua project, including internships at other partners 
in the project.


Further Information

For further information, please see the WDAqua homepage at 
http://www.iai.uni-bonn.de/~langec/wdaqua/.


How to apply

Applications should include a CV and a letter of motivation. Applicants 
should name two referees who may be contacted by the department, and are 
also invited to submit a research sample (publication or research 
paper). Applications will be evaluated on a rolling basis. For full 
consideration, please apply by 27 February 2015.


Applications should be sent to Dr. Christoph Lange-Bever.
E-Mail: christoph.lange-be...@iais.fraunhofer.de
Tel.: +49 2241/14-2428

--
Christoph Lange, Enterprise Information Systems Department
Applied Computer Science @ University of Bonn; Fraunhofer IAIS
http://langec.wordpress.com/about, Skype duke4701

→ WDAqua (Answering Questions using Web Data) Marie Skłodowska-Curie
Intl. Training Network (ITN) seeking 15 highly skilled PhD candidates.
Apply by February: http://www.iai.uni-bonn.de/~langec/wdaqua/



15 PhD positions on Question Answering in 4 EU countries; information event 21 Jan @ Bonn, Germany; apply by February

2014-12-05 Thread Christoph LANGE
 a short research plan targeted to one of the 
above projects, two letters of recommendation and an English writing 
sample (e.g. prior publication or Master's thesis excerpt). The individual 
organisations will announce specific requirements and guidelines.


== Questions? ==

The information event on 21 January is a good opportunity to ask 
questions. You may also, at any time, email or phone

Christoph Lange
WDAqua assistant coordinator
lan...@cs.uni-bonn.de
Phone +49 2241 14-2428
http://langec.wordpress.com/contact/

--
Christoph Lange, Enterprise Information Systems Department
Applied Computer Science @ University of Bonn; Fraunhofer IAIS
http://langec.wordpress.com/about, Skype duke4701




Postdoc position on Linked Data / Enterprise Information Integration at Uni Bonn & Fraunhofer IAIS

2014-10-09 Thread Christoph LANGE
*** One or more Post-doctoral Researcher / Research Group Leader 
positions in Linked Data / Enterprise Information Integration ***

at Uni Bonn & Fraunhofer IAIS (Bonn, Germany)

The Enterprise Information Systems (EIS) group at the University of Bonn 
[1] and the Organized Knowledge department at Fraunhofer IAIS [2] are 
hiring a postdoc [3].


We are running several research projects on applying Linked Data 
technology to information integration in enterprises and other 
organisations.


The ideal candidate holds a doctoral degree in computer science or a 
related field and is able to combine theoretical and practical aspects 
in her/his work.  The candidate should have a track record in at least 
three of the following areas, and be committed to expanding it further:


* publication in renowned journals/conferences
* proven software engineering skills
* successful student supervision
* close collaboration with other groups/companies/organisations
* successful competition for funding
* transfer and commercialisation of research results

*Fluent communication in English and German is a fundamental requirement.*

(If you do not speak German, note that we are also happy to support 
strong candidates in applying for a fellowship with us.)


The candidate should have experience and commitment to working at the 
forefront of research in one of the following fields:


* semantic web technologies and linked data
* knowledge representations and ontology engineering
* software engineering and modern application development
* database technologies and data integration
* HCI and user interface design for Web content
* data analytics

All details can be found at: http://eis.iai.uni-bonn.de/Jobs.html

We provide a scientifically and intellectually inspiring environment 
with an entrepreneurial mindset embedded in a world-leading university 
and one of the largest applied research organizations (Fraunhofer). 
Bonn, the former West German capital city on the banks of the Rhine 
river, is located right next to Germany's fourth-largest city, Cologne; it 
offers an outstanding quality of life, has developed into a hub of 
international cooperation, and is within easy reach of many European metropolises.


The position starts as soon as possible, is open until filled (for full 
consideration, please apply by 7 November) and will initially be granted 
for 2 years, with an extension envisioned.  Please send your CV, 
an English writing sample (e.g. your Master's thesis or a publication), a 
letter of reference and a short motivational statement (incl. research 
and technology interests) to eis-lead...@lists.iai.uni-bonn.de.  We are 
also always happy to support strong candidates in applying for Marie 
Skłodowska-Curie Individual Fellowships (MSCA-IF; next deadline 9 
September 2015).


[1] http://eis.iai.uni-bonn.de/
[2] https://www.iais.fraunhofer.de/index.php?id=5988&L=1
[3] http://eis.iai.uni-bonn.de/Jobs.html

--
Christoph Lange, Enterprise Information Systems Department
Applied Computer Science @ University of Bonn; Fraunhofer IAIS
http://langec.wordpress.com/about, Skype duke4701

→ Postdoc position on Linked Data / Enterprise Information Integration
  with the EIS group at Uni Bonn & Fraunhofer IAIS (Bonn, Germany)
  http://eis.iai.uni-bonn.de/Jobs – apply until 7 November



Several PhD positions on Linked Data / Enterprise Information Integration at Uni Bonn & Fraunhofer IAIS

2014-07-03 Thread Christoph LANGE
*** Several PhD positions in Linked Data / Enterprise Information 
Integration ***

at Uni Bonn & Fraunhofer IAIS (Bonn, Germany)

The Enterprise Information Systems (EIS) group at the University of Bonn 
[1] and the Organized Knowledge department at Fraunhofer IAIS [2] are 
hiring PhD students [3].


We will soon be starting several research projects on applying Linked 
Data technology to information integration in enterprises and other 
organisations.


The ideal candidate holds an MS degree in computer science or a related 
field and is able to consider both theoretical and practical 
implementation aspects in her/his work. Fluent English communication and 
a passion for developing modern software solutions (e.g. in Java/Scala, 
JavaScript and/or PHP) are fundamental requirements. Command of German 
language is a plus, but not a requirement. The candidate should have 
experience and commitment to working at the forefront of research in one 
of the following fields:


* semantic web technologies and linked data
* knowledge representations and ontology engineering
* software engineering and modern application development
* database technologies and data integration
* HCI and user interface design for Web content
* data analytics

All details can be found at: http://eis.iai.uni-bonn.de/#phd

We provide a scientifically and intellectually inspiring environment 
with an entrepreneurial mindset embedded in a world-leading university 
and one of the largest applied research organizations (Fraunhofer). 
Bonn, the former West German capital city on the banks of the Rhine 
river, is located right next to Germany's fourth-largest city, Cologne; it 
offers an outstanding quality of life, has developed into a hub of 
international cooperation, and is within easy reach of many European metropolises.


The positions start as soon as possible, are open until filled (for full 
consideration please apply by 31 July) and will be, after an initial 
probation period, granted for 3 years with the option of extension and 
promotion.  Please send your CV, an English writing sample (e.g. your 
Master's thesis or a publication), a letter of reference and a short 
motivational statement (incl. research and technology interests) to 
eis-lead...@cs.uni-bonn.de.


Rather interested in a postdoc with us? – We do not currently have 
vacancies, but we are happy to support strong candidates in applying for 
a Marie Skłodowska Curie Individual Fellowship with us (next deadline 11 
September [4]).


[1] http://eis.iai.uni-bonn.de/
[2] https://www.iais.fraunhofer.de/index.php?id=5988&L=1
[3] http://eis.iai.uni-bonn.de/#phd
[4] http://eis.iai.uni-bonn.de/#postdoc

--
Christoph Lange, Enterprise Information Systems Department
Applied Computer Science @ University of Bonn; Fraunhofer IAIS
http://langec.wordpress.com/about, Skype duke4701

→ Several PhD positions in Linked Data / Enterprise Information Integration
  with the EIS group at Uni Bonn & Fraunhofer IAIS (Bonn, Germany).
  http://eis.iai.uni-bonn.de/#phd – apply until 31 July for full 
consideration.





Computer science publisher needs help with RDFa/HTTP technical issue [Re: How are RDFa clients expected to handle 301 Moved Permanently?]

2013-10-25 Thread Christoph LANGE
Dear all,

it seems the RDFa mailing list is not very active any more, as I have
not received an answer to this question in two weeks.  As my question is
also related to LOD publishing, let me try asking it here.  We, the
publishers of CEUR-WS.org, are facing a technical issue involving RDFa
and hash vs. slash URIs/URLs.

I believe that when an open access publisher that is a big player, at
least in the field of computer science workshops, introduces RDFa, this
has the potential to become a very interesting use case for RDFa.
(Please see also our blog at http://ceurws.wordpress.com/ for further
planned innovations.)

While I think I have very good knowledge of RDFa, we are in an early
phase of implementing RDFa in the specific setting of CEUR-WS.org.
Therefore we would highly appreciate any input on how to get our RDFa
implementation right.  Please see below for the original message with
the gory technical details.

Cheers, and thanks in advance,

Christoph (CEUR-WS.org technical editor)

On 2013-10-10 16:54, Christoph LANGE wrote:
 Dear RDFa community,

 I am writing in the role of technical editor of the CEUR-WS.org open
 access publishing service (http://ceur-ws.org/), which many of you have
 used before.

 We provide a tool that allows proceedings editors to include RDFa
 annotations into their tables of content
 (https://github.com/clange/ceur-make).  FYI: roughly 1 in 6 proceedings
 volumes has been using RDFa recently.

 We are now possibly running into a problem by having changed the
 official URLs of our volume pages from, e.g.,
 http://ceur-ws.org/Vol-994/ into http://ceur-ws.org/Vol-994, i.e.
 dropping the trailing slash.  In short, RDFa requested from
 http://ceur-ws.org/Vol-994 contains broken URIs in outgoing links, as
 RDFa clients don't seem to follow the HTTP 301 "Moved Permanently"
 redirect, which points from the slash-less URL to the slashed URL (which
 still exists, as our server-side directory layout hasn't changed).  And I'm
 wondering whether that's something we should expect an RDFa client to
 do, or whether we need to fix our RDFa instead.

 Our rationale for dropping the trailing slash was the following:

 1. While at the moment all papers inside our volumes are PDF files, e.g.
 http://ceur-ws.org/Vol-994/paper-01.pdf, we are thinking about other
 content types (see
 http://ceurws.wordpress.com/2013/09/25/is-a-paper-just-a-pdf-file/), in
 particular directories containing accompanying data such as original
 research data, and the main entry point to such a paper could then be
 another HTML page in a subdirectory.

 2. As the user (here we mean a human using a browser) should not be
 responsible for knowing whether a paper, or a volume, is a file or a
 directory, we thought we'd use slash-less URLs throughout, and then let
 the server tell the browser (and thus the user) when some resource
 actually is a directory.

 (Do these considerations make sense?)

 This behaviour is implemented as follows (irrelevant headers stripped):

 $ wget -O /dev/null -S http://ceur-ws.org/Vol-1010
 --2013-10-10 16:33:57--  http://ceur-ws.org/Vol-1010
 Resolving ceur-ws.org... 137.226.34.227
 Connecting to ceur-ws.org|137.226.34.227|:80... connected.
 HTTP request sent, awaiting response...
HTTP/1.1 301 Moved Permanently
Location: http://ceur-ws.org/Vol-1010/
 Location: http://ceur-ws.org/Vol-1010/ [following]
 --2013-10-10 16:33:57--  http://ceur-ws.org/Vol-1010/
 Reusing existing connection to ceur-ws.org:80.
 HTTP request sent, awaiting response...
HTTP/1.1 200 OK
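
 The behaviour we would hope for from a client can be reproduced
 locally.  The sketch below (illustration only; the local server and
 handler names are hypothetical stand-ins for ceur-ws.org, not our
 actual setup) shows that a client which follows the 301 ends up with
 the slashed URL as its retrieval URL, which is then the correct base
 for resolving relative URIs:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class VolumeHandler(BaseHTTPRequestHandler):
    """Stand-in for ceur-ws.org: the slash-less volume URL 301-redirects
    to the slashed one, which serves the table of contents."""

    def do_GET(self):
        if self.path == "/Vol-1010":
            self.send_response(301)
            self.send_header("Location", "/Vol-1010/")
            self.end_headers()
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<html><!-- volume TOC with RDFa --></html>")

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), VolumeHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# urllib follows the 301 transparently; geturl() reports the final,
# post-redirect URL, which is the base an RDFa client should resolve
# relative URIs against.
resp = urllib.request.urlopen(f"http://127.0.0.1:{port}/Vol-1010")
final_base = resp.geturl()
server.shutdown()
print(final_base)  # ends in "/Vol-1010/"
```

 The question below is essentially whether RDFa extractors are expected
 to use this post-redirect URL as their base, as browsers do.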

 But now RDFa clients don't seem to respect this redirect.  Please try
 for yourself with http://www.w3.org/2012/pyRdfa/ and
 http://linkeddata.uriburner.com/.  These are two freely accessible RDFa
 extractors I could think of, and I think they are based on different
 implementations.  (Am I right?)

 When you enter a slashed URI, e.g. http://ceur-ws.org/Vol-1010/, you get
 correct RDFa, in particular outgoing links to, e.g.,
 http://ceur-ws.org/Vol-1010/paper-01.pdf.  When you enter the same URI
 without a slash, the relative URIs that point from index.html to the
 papers, like <ol rel="dcterms:hasPart"><li about="paper-01.pdf">, resolve
 to http://ceur-ws.org/paper-01.pdf.
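
 The resolution behaviour itself is just the standard relative-reference
 algorithm of RFC 3986; a quick check with Python's urllib.parse.urljoin
 (a stand-alone illustration, not part of our toolchain) shows why the
 base URL's trailing slash matters:

```python
from urllib.parse import urljoin

# Against the slashed base, a relative reference stays inside the volume:
print(urljoin("http://ceur-ws.org/Vol-1010/", "paper-01.pdf"))
# → http://ceur-ws.org/Vol-1010/paper-01.pdf

# Against the slash-less base, the last path segment ("Vol-1010") is
# replaced, which is exactly the breakage we observe:
print(urljoin("http://ceur-ws.org/Vol-1010", "paper-01.pdf"))
# → http://ceur-ws.org/paper-01.pdf
```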

 Now I have the following questions:

 Are these RDFa clients broken?

 If they are not broken, what is broken on our side, and how can we
fix it?

 Is it acceptable that RDFa retrieved from a slash-less URL is broken,
 whereas RDFa from the slashed URL works?

 Is it OK to say that the canonical URL of something should be
 slash-less, whereas the semantic identifier of the same thing (if
 that's what we mean by its RDFa URI) should have a slash?  Or should
 both be the same?  (Note: I am well aware of the difference between
 information resources and non-information resources, but IMHO this
 difference doesn't apply here, as we publish online proceedings.
 http://ceur-ws.org/Vol-1010 _is_ the workshop volume, which has editors
 and contains papers; it is not just a page

Semantic publications at CEUR-WS.org (well, at least RDFa-enhanced tables of contents)

2013-07-04 Thread Christoph LANGE
Dear LOD community,

for those who publish their workshop proceedings at CEUR-WS.org, there
is now a possibility to enrich the index.html table of contents of their
volume with RDFa in a convenient way.

The first workshop to make use of this is SePublica, the workshop on
Semantic Publishing (http://ceur-ws.org/Vol-994/).  You can try it by
feeding that URL into http://www.w3.org/2012/pyRdfa/.

Due to the complexity of these RDFa annotations, we do not recommend
creating them manually; instead you can use the
https://github.com/clange/ceur-make tool to create a CEUR-WS.org
compliant, RDFa-enriched index.html table of contents
semi-automatically.  This is particularly convenient for workshops that
use EasyChair to collect their submissions, as ceur-make
semi-automatically generates CEUR-WS.org proceedings volumes from
EasyChair proceedings downloads.

I'm sure there is room for further improvement.  Please let me know,
either in this thread, or at https://github.com/clange/ceur-make/issues
for more technical issues.  I have recently joined CEUR-WS.org as a
technical editor and will therefore be able to put some more things into
practice.

Cheers,

Christoph

-- 
Christoph Lange, School of Computer Science, University of Birmingham
http://cs.bham.ac.uk/~langec/, Skype duke4701

→ Intelligent Computer Mathematics, 8–12 July, Bath, UK.
  http://cicm-conference.org/2013/
→ Modular Ontologies (WoMO), 15 September, Corunna, Spain.
  Submission until 12 July; http://www.iaoa.org/womo/2013.html
→ Knowledge and Experience Management, 7-9 October, Bamberg, Germany.
  Submission until 15 July; http://minf.uni-bamberg.de/lwa2013/cfp/fgwm/
→ Mathematics in Computer Science Special Issue on “Enabling Domain
  Experts to use Formalised Reasoning”; submission until 31 October.
  http://cs.bham.ac.uk/research/projects/formare/pubs/mcs-doform/



CfP Knowledge & Experience Management (FGWM), Bamberg, Germany, Oct. 7-9; Deadline July 1

2013-06-04 Thread Christoph LANGE
===
CALL FOR PAPERS
KNOWLEDGE AND EXPERIENCE MANAGEMENT (FGWM-2013)
Track of LWA 2013 - http://www.minf.uni-bamberg.de/lwa2013/
===

The annual workshop on Knowledge and Experience Management is organized
by the Special Interest Group on Knowledge Management of the German
Informatics Society (GI), which aims to enable and promote the
exchange of innovative ideas and practical applications in the field of
knowledge and experience management.

IMPORTANT DATES
- Submission of papers:   July 1, 2013
- Notification of acceptance: July 29, 2013
- Camera-ready copy:   August 19, 2013
- Workshop FGWM@LWA: October 7-9, 2013

All submissions of current research on the following topics and
adjacent areas are welcome, in particular work-in-progress
contributions. The latter can serve as a basis for interesting
discussions among the participants and provide young researchers with
feedback. We also invite researchers to contribute to the workshop by
resubmitting conference papers and sharing their ideas with the research
community.

TOPICS OF INTEREST
- Experience & knowledge search and knowledge integration approaches:
case-based reasoning, logic-based approaches, text-based approaches,
semantic portals/wikis/blogs, Web 2.0, etc.
- Applications of knowledge and experience management (corporate
memories, e-commerce, design, tutoring/e-learning, e-government,
software engineering, robotics, medicine, etc.)
- Big Data and Knowledge Management (KM)
- (Semantic) Web Services for KM
- Agile approaches within the KM domain
- Agent-based & Peer-to-Peer KM
- Just-in-time retrieval and just-in-time knowledge capturing
- Knowledge representation (ontologies, similarity, retrieval, adaptive
knowledge, etc.)
- Support of authoring and maintenance processes (change management,
requirements tracing, (distributed) version control, etc.)
- Evaluation of KM systems
- Practical experiences (lessons learned) with IT aided KM approaches
- Integration of KM and business processes
- Introspection and explanation capabilities of KM systems
- Application of Linked Data
- Combination of KM with other systems and concepts (e.g. Decision
Support, Business Intelligence, etc.)

WORKSHOP CHAIRS
- Dr. Andrea Kohlhase
- Prof. Dr.-Ing. Bodo Rieger

More detailed information is available on the following website:
http://www.minf.uni-bamberg.de/lwa2013/cfp/fgwm/
If you have any questions regarding the organization of the workshop,
please don't hesitate to contact the organizer Axel Benjamins
(abenjam...@uos.de).

See you in Bamberg! :-)

-- 
Christoph Lange, School of Computer Science, University of Birmingham
http://cs.bham.ac.uk/~langec/, Skype duke4701

→ Intelligent Computer Mathematics, 8–12 July, Bath, UK.
  Work-in-progress deadline 7 June; http://cicm-conference.org/2013/
→ OpenMath Workshop, 10 July, Bath, UK.
  Submission deadline 7 June; http://cicm-conference.org/2013/openmath/



2nd CfP: OpenMath workshop at CICM (10 July, Bath, UK), submission deadline 7 June

2013-05-20 Thread Christoph LANGE
25th OpenMath Workshop
Bath, UK
10 July 2013
co-located with CICM 2013
Submission deadline 7 June

http://www.cicm-conference.org/2013/openmath/

OBJECTIVES

OpenMath (http://www.openmath.org) is a language for exchanging
mathematical formulae across applications (such as computer algebra
systems).  From 2010 its importance has increased in that OpenMath
Content Dictionaries were adopted as a foundation of the MathML 3 W3C
recommendation (http://www.w3.org/TR/MathML), the standard for
mathematical formulae on the Web.

Topics we expect to see at the workshop include

   * Feature Requests (Standard Enhancement Proposals) and Discussions
 for going beyond OpenMath 2;
   * Further convergence of OpenMath and MathML 3;
   * Reasoning with OpenMath;
   * Software using or processing OpenMath;
   * OpenMath on the Semantic Web;
   * New OpenMath Content Dictionaries;

Contributions can be either full research papers, Standard Enhancement
Proposals, or a description of new Content Dictionaries, particularly
ones that are suggested for formal adoption by the OpenMath Society.

IMPORTANT DATES (all times are anywhere on earth)

   * 7 June: Submission
   * 20 June: Notification of acceptance or rejection
   * 5 July: Final revised papers due
   * 10 July: Workshop

SUBMISSIONS

Submission is via EasyChair
(http://www.easychair.org/conferences?conf=om20131).  Final papers
must conform to the EasyChair LaTeX style.  Initial submissions in
this format are welcome but not mandatory; they should, however, be in PDF
and within the given limit of pages/words.

Submission categories:

   * Full paper: 5–10 EasyChair pages
   * Short paper: 1–4 EasyChair pages
   * CD description: 1-6 EasyChair pages; a .zip or .tgz file of the
 CDs must be attached, or a link to the CD provided.
   * Standard Enhancement Proposal: 1-10 EasyChair pages (as
 appropriate w.r.t. the background knowledge required); a .zip or
 .tgz file of any related implementation (e.g. a Relax NG schema)
 should be attached.

If not in EasyChair format, 500 words count as one page.

PROCEEDINGS

Electronic proceedings will be published with CEUR-WS.org.

ORGANISATION COMMITTEE

   * Christoph Lange (University of Birmingham, UK)
   * James Davenport (University of Bath, UK)
   * Michael Kohlhase (Jacobs University Bremen, Germany)

PROGRAMME COMMITTEE

   * Lars Hellström (Umeå Universitet, Sweden)
   * Jan Willem Knopper (Technische Universiteit Eindhoven, Netherlands)
   * Paul Libbrecht (Center for Educational Research in Mathematics
 and Technology, Martin-Luther-University Halle-Wittenberg)
   (to be completed)

Comments/questions/enquiries: to be sent to
openmath-works...@googlegroups.com

-- 
Christoph Lange, School of Computer Science, University of Birmingham
http://cs.bham.ac.uk/~langec/, Skype duke4701

→ Intelligent Computer Mathematics, 8–12 July, Bath, UK.
  Work-in-progress deadline 7 June; http://cicm-conference.org/2013/
→ OpenMath Workshop, 10 July, Bath, UK.
  Submission deadline 7 June; http://cicm-conference.org/2013/openmath/



CfP: OpenMath workshop at CICM (10 July, Bath, UK), submission deadline 7 June

2013-04-19 Thread Christoph LANGE
25th OpenMath Workshop
Bath, UK
10 July 2013
co-located with CICM 2013
Submission deadline 7 June

http://www.cicm-conference.org/2013/cicm.php?event=openmath

OBJECTIVES

OpenMath (http://www.openmath.org) is a language for exchanging
mathematical formulae across applications (such as computer algebra
systems).  Since 2010, its importance has increased, as OpenMath
Content Dictionaries were adopted as a foundation of the MathML 3 W3C
recommendation (http://www.w3.org/TR/MathML), the standard for
mathematical formulae on the Web.

Topics we expect to see at the workshop include

   * Feature Requests (Standard Enhancement Proposals) and Discussions
 for going beyond OpenMath 2;
   * Further convergence of OpenMath and MathML 3;
   * Reasoning with OpenMath;
   * Software using or processing OpenMath;
   * OpenMath on the Semantic Web;
   * New OpenMath Content Dictionaries;

Contributions can be either full research papers, Standard Enhancement
Proposals, or a description of new Content Dictionaries, particularly
ones that are suggested for formal adoption by the OpenMath Society.

IMPORTANT DATES (all times are anywhere on earth)

   * 7 June: Submission
   * 20 June: Notification of acceptance or rejection
   * 5 July: Final revised papers due
   * 10 July: Workshop

SUBMISSIONS

Submission is via EasyChair
(http://www.easychair.org/conferences?conf=om20131).  Final papers
must conform to the EasyChair LaTeX style.  Initial submissions in
this format are welcome but not mandatory – but they should be in PDF
and within the given limit of pages/words.

Submission categories:

   * Full paper: 5–10 EasyChair pages
   * Short paper: 1–4 EasyChair pages
   * CD description: 1–6 EasyChair pages; a .zip or .tgz file of the
 CDs must be attached, or a link to the CD provided.
   * Standard Enhancement Proposal: 1–10 EasyChair pages (as
 appropriate w.r.t. the background knowledge required); a .zip or
 .tgz file of any related implementation (e.g. a Relax NG schema)
 should be attached.

If not in EasyChair format, 500 words count as one page.

PROCEEDINGS

Electronic proceedings will be published with CEUR-WS.org.

ORGANISATION COMMITTEE

   * Christoph Lange (University of Birmingham, UK)
   * James Davenport (University of Bath, UK)
   * Michael Kohlhase (Jacobs University Bremen, Germany)

Comments/questions/enquiries: to be sent to
openmath-works...@googlegroups.com

-- 
Christoph Lange, School of Computer Science, University of Birmingham
http://cs.bham.ac.uk/~langec/, Skype duke4701

→ Intelligent Computer Mathematics, 7–12 July, Bath, UK.
  Work-in-progress deadline 7 June; http://cicm-conference.org/2013/



Participate: Enabling Domain Experts to use Formalised Reasoning (AISB 2013, Exeter, UK, 3-5 Apr 2013). Tutorials on Matching, Auctions, Finance.

2013-02-19 Thread Christoph LANGE
Do-Form: Enabling Domain Experts to use Formalised Reasoning
http://cs.bham.ac.uk/research/projects/formare/events/aisb2013

CALL FOR PARTICIPATION

Symposium at the annual convention of the
AISB (Society for the Study of
  Artificial Intelligence and Simulation of Behaviour;
  http://www.aisb.org.uk)
University of Exeter, UK
3-5 April 2013
http://emps.exeter.ac.uk/computer-science/research/aisb/
(early registration deadline 5 March)

HANDS-ON TUTORIAL SESSIONS (details below) with
* M. Utku Ünver (matching markets)
* Peter Cramton (auctions)
* Neels Vosloo (finance markets regulation)
(http://www.cs.bham.ac.uk/research/projects/formare/events/aisb2013/invited.php)

PAPER and DEMO PRESENTATIONS on
* environmental models
* controlled natural languages
* ontologies
* auction theory
* software verification
* formal specification
* autonomous systems
* self-explaining systems
(http://www.cs.bham.ac.uk/research/projects/formare/events/aisb2013/proceedings.php)

This symposium is motivated by the long-term VISION of making information
systems dependable.  In the past, even misrepresented units of
measurement have caused fatal ENGINEERING disasters.  In ECONOMICS, the
subtlety of issues involved in good auction design may have led to low
revenues in auctions of public goods such as the 3G radio spectra.
Similarly, banks' value-at-risk (VaR) models – the leading method of
financial risk measurement – are too large and change too quickly to be
thoroughly vetted by hand, the current state of the art; in the London
Whale incident of 2012, JP Morgan claimed that its exposures were $67mn
under one of its VaR models, and $129mn under another one.  Verifying a
model's properties requires formally specifying them; for VaR models, any
work would have to start with this most basic step, as regulators' current
desiderata are subjective and ambiguous.

We believe that these problems can be addressed by representing the
knowledge underlying such models and mechanisms in a formal, explicit,
machine-verifiable way.  Contemporary computer science offers a wide
choice of knowledge representation languages well supported by
verification tools.  Such tools have been successfully applied, e.g., for
verifying software that controls commuter rail or payment systems.  Still,
DOMAIN EXPERTS without a strong computer science background find it
challenging to choose the right tools and to use them.  This symposium
aims at investigating ways to support them.  Some problems can be
addressed now, others will bring new challenges to computer science.

THE SYMPOSIUM is designed to bring domain experts and formalisers into
close and fruitful contact with each other: domain experts will be able to
present their fields and problems to formalisers; formalisers will be
exposed to new and challenging problem areas. We will combine talks and
hands-on sessions to ensure close interaction among participants from both
sides.

World-class economists will offer HANDS-ON TUTORIAL SESSIONS on the
following topics:

* MATCHING MARKETS (M. Utku Ünver, Boston College): These include matching
  students to schools, interns to hospitals, and kidney donors to
  recipients. See the documentation for the 2012 Nobel Memorial Prize in
  Economic Sciences for more background information.

* AUCTIONS (Peter Cramton, University of Maryland): Peter has been working
  on auctions for Ofcom UK (4G spectrum auction), the UK Department of the
  Environment and Climate Change, and others – and most recently on the
  “applicant auctions” for the new top-level Internet domains issued by
  ICANN.

* FINANCE MARKETS REGULATION (Neels Vosloo, Financial Services Authority,
  UK): It is currently impossible for regulators to properly inspect risk
  management models. Test portfolios are a promising tool for identifying
  problems with risk management models. To what extent can techniques from
  mechanised reasoning automate some of the inspection process?

COMMENTS/QUESTIONS/ENQUIRIES to be sent to doform2...@easychair.org

-- 
Christoph Lange, School of Computer Science, University of Birmingham
http://cs.bham.ac.uk/~langec/, Skype duke4701

→ SePublica Workshop @ ESWC 2013.  Montpellier, France, 26-30 May.
  Deadline 4 Mar; http://sepublica.mywikipaper.org
→ Intelligent Computer Mathematics, 7–12 Jul, Bath, UK; Deadline 8 Mar
  http://cicm-conference.org/2013/
→ Enabling Domain Experts to use Formalised Reasoning @ AISB 2013
  3–5 April 2013, Exeter, UK.  3 Hands-on Tutorials on Economics
  http://cs.bham.ac.uk/research/projects/formare/events/aisb2013/



SePublica Semantic Publishing Workshop@ESWC (Montpellier 26-30 May); deadline 4 March

2013-01-18 Thread Christoph LANGE
Call for Participation: Sepublica 2013 – an ESWC Workshop

Machine-comprehensible Documents Bridging the Gap between Publications
and Data.

** May 26-30, 2013, Montpellier, France.

Workshop Web site: http://sepublica.mywikipaper.org/drupal/

*** Relevant dates ***

Submission Deadline: March 4, 2013
Acceptance Notification: April 1, 2013
Camera-Ready: April 15, 2013


*** Topics ***

Publishing of scholarly works is on the cusp of great change. Data is
now routinely published accompanied by, or in, some semantic form, but
this is not the case for scholarly works. Advances in technology have
made it possible for the scientific article to adopt electronic
dissemination channels, from paper-based journals to purely electronic
formats. Yet, despite the improvements in the distribution,
accessibility and retrieval of information, little has really changed in
the publishing of scholarly works compared to that of the data about
which scholarly works are written. The availability of data and the
open, digital form of scholarly works is leading to a drive to
semantically enable scholarly works to make the works themselves more
computationally useful as well as to link them intimately to the data
about which they are written. Sepublica is a forum in which to discuss
and present what is best and up and coming in semantic publishing.

How are new technologies changing scholarly communication? How do we
want scholarly communication to change? Where do we want it to go?
Semantics, within publication workflows, is usually added post hoc; how
could we support publications in being born semantic? At Sepublica we will
discuss and present new ways of publishing, sharing, linking, and
analyzing such scientific resources as well as reasoning over the data
to discover new links and scientific insights. Sepublica is not,
however, limited to the scientific domain; the humanities, cultural
industries, news, commerce etc. all have published works that can
benefit from semantic enhancement and data to which they can link; all
are welcome.

Topics include, but are not limited to:
* How could we realize a paper with an API? How could we have a
paper as a database, as a knowledge base?
* How is the paper an interface, gateway, to the web of data? How
could such an interface be delivered in a contextual manner?
* How are semantic scholarly works to be created?
* How are news agencies adopting technologies in support of their
publications? Has the delivered technology been adopted? What have the
experiences of news agencies been so far? Lessons learnt.
* How could semantic technologies be used to represent the knowledge
encoded in scientific documents and in general-interest media publications?
* Connecting scientific publications with underlying research data sets
* What semantics and ontologies do we need for representing
structural elements in a document?
* Moving from the bibliographic reference to the full content within
a linked environment?

*** Call for Papers ***

Sepublica 2013 is soliciting submissions of novel (not previously
published nor concurrently submitted) research papers in the areas of
the topics outlined above. The organizing committee is happy to discuss
possible submissions with authors.

Submissions will be welcome from a broad range of approaches to semantic
publishing. We are particularly keen on submissions that are themselves
examples of semantic publishing of scholarly works. LaTeX documents in
the LNCS format can, e.g., be annotated using SALT or sTeX. We also
invite submissions in XHTML+RDFa or in the format of YOUR semantic
publishing tool. However, to ensure a fair review procedure, authors
must additionally produce a narrative version as a PDF, submitted in the
normal way.

Submission is via EasyChair
(https://www.easychair.org/conferences/?conf=sepublica2013).
Papers must be formatted according to the LNCS format.

*** Submission Types ***

1. Full paper, 12 pages
2. Position paper, 5 pages.
3. Software demo papers, 2 pages
4. Late-breaking news, 1 page.

*** Contact ***

Please email sepublica2...@easychair.org for any enquiries.

*** Organizing Committee ***

Alexander Garcia Castro, alexgarc...@gmail.com, Florida State University
Christoph Lange, math.semantic@gmail.com, University of Birmingham
Phillip Lord, phillip.l...@newcastle.ac.uk, University of Newcastle
Robert Stevens, robert.stev...@manchester.ac.uk, University of Manchester




Deadline Extended to 1 June: OpenMath workshop at CICM (11 July, Bremen, Germany)

2012-05-24 Thread Christoph LANGE

24th OpenMath Workshop
Bremen, Germany
11 July 2012
co-located with CICM 2012
Submission deadline (EXTENDED) 1 June

http://www.informatik.uni-bremen.de/cicm2012/cicm.php?event=openmath

OBJECTIVES

OpenMath (http://www.openmath.org) is a language for exchanging
mathematical formulae across applications (such as computer algebra
systems).  Since 2010 its importance has increased, as OpenMath
Content Dictionaries were adopted as a foundation of the MathML 3 W3C
recommendation (http://www.w3.org/TR/MathML), the standard for
mathematical formulae on the Web.

Topics we expect to see at the workshop include

   * Feature Requests (Standard Enhancement Proposals) and Discussions
 for going beyond OpenMath 2;
   * Further convergence of OpenMath and MathML 3;
   * Reasoning with OpenMath;
   * Software using or processing OpenMath;
   * New OpenMath Content Dictionaries;

Contributions can be either full research papers, Standard Enhancement
Proposals, or a description of new Content Dictionaries, particularly
ones that are suggested for formal adoption by the OpenMath Society.

IMPORTANT DATES (all times are anywhere on earth)

   * 1 June: Submission (EXTENDED)
   * 20 June: Notification of acceptance or rejection
   * 04 July: Final revised papers due
   * 11 July: Workshop

SUBMISSIONS

Submission is via EasyChair
(http://www.easychair.org/conferences?conf=om20120).  Final papers
must conform to the EasyChair LaTeX style.  Initial submissions in
this format are welcome but not mandatory – but they should be in PDF
and within the given limit of pages/words.

Submission categories:

   * Full paper: 5–10 EasyChair pages
   * Short paper: 1–4 EasyChair pages
   * CD description: 1–6 EasyChair pages; a .zip or .tgz file of the
 CDs must be attached, or a link to the CD provided.
   * Standard Enhancement Proposal: 1–10 EasyChair pages (as
 appropriate w.r.t. the background knowledge required); a .zip or
 .tgz file of any related implementation (e.g. a Relax NG schema)
 should be attached.

If not in EasyChair format, 500 words count as one page.

PROCEEDINGS

Electronic proceedings will be published with CEUR-WS.org in time for
the conference.

ORGANISATION COMMITTEE

   * Christoph Lange (University of Bremen and Jacobs University
 Bremen, Germany)
   * James Davenport (University of Bath, UK)

Comments/questions/enquiries: to be sent to om201...@easychair.org



CfP: OpenMath workshop at CICM (11 July, Bremen, Germany), submission deadline 25 May

2012-04-02 Thread Christoph LANGE

24th OpenMath Workshop
Bremen, Germany
11 July 2012
co-located with CICM 2012
Submission deadline 25 May

http://www.informatik.uni-bremen.de/cicm2012/cicm.php?event=openmath

OBJECTIVES

OpenMath (http://www.openmath.org) is a language for exchanging
mathematical formulae across applications (such as computer algebra
systems).  Since 2010 its importance has increased, as OpenMath
Content Dictionaries were adopted as a foundation of the MathML 3 W3C
recommendation (http://www.w3.org/TR/MathML), the standard for
mathematical formulae on the Web.

Topics we expect to see at the workshop include

   * Feature Requests (Standard Enhancement Proposals) and Discussions
 for going beyond OpenMath 2;
   * Further convergence of OpenMath and MathML 3;
   * Reasoning with OpenMath;
   * Software using or processing OpenMath;
   * New OpenMath Content Dictionaries;

Contributions can be either full research papers, Standard Enhancement
Proposals, or a description of new Content Dictionaries, particularly
ones that are suggested for formal adoption by the OpenMath Society.

IMPORTANT DATES (all times are anywhere on earth)

   * 25 May: Submission
   * 20 June: Notification of acceptance or rejection
   * 04 July: Final revised papers due
   * 11 July: Workshop

SUBMISSIONS

Submission is via EasyChair
(http://www.easychair.org/conferences?conf=om20120).  Final papers
must conform to the EasyChair LaTeX style.  Initial submissions in
this format are welcome but not mandatory – but they should be in PDF
and within the given limit of pages/words.

Submission categories:

   * Full paper: 5–10 EasyChair pages
   * Short paper: 1–4 EasyChair pages
   * CD description: 1–6 EasyChair pages; a .zip or .tgz file of the
 CDs must be attached, or a link to the CD provided.
   * Standard Enhancement Proposal: 1–10 EasyChair pages (as
 appropriate w.r.t. the background knowledge required); a .zip or
 .tgz file of any related implementation (e.g. a Relax NG schema)
 should be attached.

If not in EasyChair format, 500 words count as one page.

PROCEEDINGS

Electronic proceedings will be published with CEUR-WS.org in time for
the conference.

ORGANISATION COMMITTEE

   * Christoph Lange (University of Bremen and Jacobs University
 Bremen, Germany)
   * James Davenport (University of Bath, UK)

Comments/questions/enquiries: to be sent to om201...@easychair.org



SePublica Submission Deadline Extended to March 18. Workshop@ESWC (May 27/28): Future of Scholarly Communication and Scientific Publishing

2012-03-04 Thread Christoph LANGE
 and economic aspects of Linked Data in science

--
Christoph Lange, Jacobs University Bremen
http://kwarc.info/clange, Skype duke4701

→ SePublica Workshop @ ESWC 2012.  Crete, Greece, 27/28 May 2012.
  Deadline 4 Mar.  http://sepublica.mywikipaper.org
→ I-SEMANTICS 2012.  Graz, Austria, 5-7 September 2012
  Abstract Deadline 2 April.  http://www.i-semantics.at



Re: ANN: SKOS implementation of the ACM Communication Classification System

2011-06-02 Thread Christoph Lange

Hi Leigh,

6/2/2011 2:07 PM Leigh Dodds:
 Very happy to see more data appearing, congrats :)

Actually there is prior work here (by Dragan Gasevic; search Google for 
"gasevic skos acm ccs"), but that implementation was not complete, in 
contrast to the RKB Explorer RDFS implementation.  So we would probably be 
the first one to release a complete SKOS implementation in a linked data 
compliant way.


 I won't reproduce the text here, and I'm not a lawyer, but the wording
 says ...to republish, to post on servers, or to redistribute to
 lists, requires prior specific permission and/or a fee. Request
 permission to republish from..[address+email].

That is a valid point -- even my publishing of the SKOS implementation 
can be considered a republication, posted on a server.  Therefore I 
have put the implementation offline, linking to this e-mail for 
explanation.


 I think one reasonable thing that someone may want to do is mirror the
 data, e.g. to provide a public SPARQL endpoint or other services.
 Currently it doesn't look like I can do that without contacting the
 ACM directly, which I assume you've also done.

Actually not, but I'm now sending them a request for permission.  I 
started quite naïvely.  The RKB Explorer RDFS implementation of the ACM 
CCS, which was my original source (before completing the missing 
information from the ACM site), was publicly available 
(http://acm.rkbexplorer.com/ontologies/acm#).  According to the homepage 
they had obtained permission from ACM to republish metadata about 
publications (http://acm.rkbexplorer.com/), but that is not obvious when 
accessing the ACM CCS implementation in a linked data way.


 It's not clear to me whether I could even copy parts of the data and
 index it to use in an application, as that potentially falls out side
 of the personal and classroom use.

Indeed, that argument convinces me, so I will ask them.

 I fully support arguments to the effect of use and seek forgiveness
 later when using data, but as we see more and more commercial usage
 of Linked Data, I think we really need to see clearer licensing around
 data. Otherwise feels like we're building on uncertain ground.

And if they answer something like "you can put it online, but it may 
only be used for classroom use", that is not really helpful, as there is 
not yet a practically working mechanism that would prevent linked data 
from being used for something else (e.g. joined with some e-business 
data in the same query).  An interesting research problem in itself, but 
here and now I am merely interested in using the ACM CCS for our own 
purposes, plus making our implementation available, so that others don't 
have to reimplement it over and over again.


Cheers,

Christoph

--
Christoph Lange, Coordinator CASE Research Center
http://www.jacobs-university.de/case/, http://kwarc.info/clange

Mathematical Wiki workshop at ITP 2011, August 27, Nijmegen, Netherlands
Submission deadline May 30, http://www.cs.ru.nl/mwitp/

Jacobs University Bremen gGmbH, Campus Ring 1, 28759 Bremen, Germany
Commercial registry: Amtsgericht Bremen, HRB 18117
CEO: Prof. Dr. Joachim Treusch
Chair Board of Governors: Prof. Dr. Karin Lochte



Deadline Extension (March 4): Semantic Publication Workshop SePublica@ESWC (May 30, Crete, Greece)

2011-02-25 Thread Christoph LANGE
 documents. LaTeX documents in
the LNCS format can, e.g., be annotated using SALT
(http://salt.semanticauthoring.org) or sTeX
(http://trac.kwarc.info/sTeX/). We also invite submissions in
XHTML+RDFa or in the format of YOUR semantic publishing tool.
However, to ensure a fair review procedure, authors must additionally
export them to PDF.  For submissions that are not in the LNCS PDF
format, 400 words count as one page. Submissions that exceed the page
limit will be rejected without review.

Depending on the number and quality of submissions, authors might
be invited to present their papers during a poster session.

Please submit your paper via EasyChair at
http://www.easychair.org/conferences/?conf=sepublica2011

The author list does not need to be anonymized, as we do not have a
double-blind review process in place.

Submissions will be peer reviewed by three independent
reviewers. Accepted papers have to be presented at the workshop
(requires registering for the ESWC conference and the workshop) and
will be included in the workshop proceedings that are published online
at CEUR-WS.

PROGRAM COMMITTEE

• Christopher Baker, University of New Brunswick, Saint John, Canada
• Paolo Ciccarese, Harvard Medical School, USA
• Tim Clark, Harvard Medical School, USA
• Oscar Corcho, Politecnica de Madrid, Spain
• Stéphane Corlosquet, Massachusetts General Hospital, USA
• Joe Corneli, Open University, UK
• Michael Dreusicke, PAUX Technologies, Germany
• Henrik Eriksson,  Linköping University, Sweden
• Benjamin Good, Genomic Institute, Novartis, USA
• Tudor Groza, University of Queensland, Australia
• Michael Kohlhase, Jacobs University, Germany
• Sebastian Kruk, knowledgehives.com, Poland
• Thomas Kurz, Salzburg Research, Austria
• Steve Pettifer, Manchester University, UK
• Matthias Samwald, Information Retrieval Facility, Austria
• Jodi Schneider, DERI, NUI Galway, Ireland
• Dagobert Soergel, University of Maryland, USA
• Robert Stevens, Manchester University, UK

ORGANIZING COMMITTEE

• Alexander García Castro, University of Bremen, Germany
• Christoph Lange, Jacobs University Bremen, Germany
• Anita de Waard, Elsevier, USA/Netherlands
• Evan Sandhaus, New York Times, USA

QUESTIONS? → sepubl...@googlegroups.com

-- 
Christoph Lange, Jacobs Univ. Bremen, http://kwarc.info/clange, Skype
duke4701
Semantic Publication workshop at ESWC 2011, May 30, Hersonissos, Crete,
Greece
Submission deadline March 4, http://SePublica.mywikipaper.org
LNCS Post-proceedings of selected submissions, Best Paper Award by Elsevier





ESWC 2011: AI Mashup Challenge – 2nd Call for Submissions (deadline April 1, event May 29-June 2)

2011-02-18 Thread Christoph Lange
---
* Second Call for Submissions and Papers *
---

AI Mashup Challenge 2011
http://sites.google.com/a/fh-hannover.de/aimashup11/

of the

8th Extended Semantic Web Conference (ESWC)
http://www.eswc2011.org/
May 29 - June 2, 2011, Heraklion, Greece

Topics of interest

A mashup is a lightweight (web) application that offers new
functionality by combining, aggregating and transforming resources and
services available on the web.
The AI mashup challenge accepts and awards intelligent mashups that
use AI technology, including but not restricted to
• machine learning and data mining
• machine vision
• natural language processing
• reasoning
• ontologies and the semantic web.
The emphasis is not on providing and consuming semantic markup, but
rather on using intelligence to mashup these resources in a more
powerful way.


Some examples:
• Information extraction or automatic text summarization to create a
task-oriented overview mashup for mobile devices.
• Semantic Web technology and data sources adapting to user and
task-specific configurations.
• Semantic background knowledge (such as ontologies, WordNet or Cyc)
to improve search and content combination.
• Machine translation for mashups that cross language borders.
• Machine vision technology for novel ways of aggregating images, for
instance mixing real and virtual environments.
• Intelligent agents taking over simple household planning tasks.
• Text-to-speech technology creating a voice mashup with intelligent
and emotional intonation.
• The display of PubMed articles on a map based on geographic entity
detection referring to diseases or health centers.

Awards

• € 1750 sponsored by Elsevier
• Speech outfit from Linguatec
• 10 O'Reilly e-books
• 2 x up to 5 mashup books from Addison-Wesley

Submission and deadline

The challenge tries to mediate between a grassroots bar-camp style and
standard conference organization. This means for submitters:
• You announce your mashup as soon as you are ready, simply sending an
email to the organizers (address below).
• The deadline is April 1, 2011.
• At a subpage of the mashup website provided by the organizers, you
explain your work and refer to its URL.
• Your mashup stays at your URL and under your control. You can go on
improving it.
• At review time (1st April 2011), reviewers need a 5-page paper (LNCS
format) that explains the mashup.
• The reviewers select the most interesting mashups for presentation
and vote during the conference.
• Voting is open to all conference participants, but reviewers' votes
make up a 40% quota.
• Be prepared to give a brief talk and a demo during the conference.
• Awards will be handed over during the conference, and everybody will
congratulate the winners!


Organizers

• Brigitte Endres-Niggemeyer, Hannover, Germany
       with the support of
• Pascal Hitzler, Dayton, OH


Program Committee

• Adrian Giurca, Brandenburg University, Cottbus, Germany
• Christoph Lange, Jacobs University, Bremen, Germany
• Emilian Pascalau, University of Potsdam, Germany
• Giuseppe Di Fabbrizio, AT&T Labs, Florham Park NJ, USA
• Jevon Wright, Massey University, Palmerston North, NZ
• Sven Windisch, Univ. of Leipzig, Germany
• Alexandre Passant, DERI Galway, Ireland
• Emanuele Della Valle, Politecnico di Milano, Italy
• Giovanni Tummarello, DERI Galway, Ireland
• Gregoire Burel, OAK, Univ. of Sheffield, UK
• Krzysztof Janowicz, Pennsylvania State University, USA
• Thorsten Liebig, Univ. of Ulm & derivo GmbH, Germany


Main Contact

• brigitte.endres-niggeme...@fh-hannover.de
• brigitt...@googlemail.com



-
Brigitte Endres-Niggemeyer
Hannover Univ. of Applied Sciences
Faculty 3, Media, Information and Design
Expo Plaza 12
30539 Hannover
+49 511 92 96 2641
brigitte.endres-niggeme...@fh-hannover.de
brigitt...@googlemail.com
http://sites.google.com/a/fh-hannover.de/brigitte-endres-niggemeyer/home

--
Christoph Lange, Jacobs Univ. Bremen, http://kwarc.info/clange, Skype
duke4701
Semantic Publication workshop, May 29 or May 30, Hersonissos, Crete, Greece
Submission deadline February 28, http://SePublica.mywikipaper.org
LNCS Post-proceedings of selected submissions, Best Paper Award by Elsevier



Call for Papers Demos: Semantic Publication Workshop SePublica@ESWC (May 29 or 30, Crete, Greece) – Deadline Feb 28

2011-01-11 Thread Christoph LANGE

The author list does not need to be anonymized, as we do not have a
double-blind review process in place.

Submissions will be peer reviewed by three independent
reviewers. Accepted papers have to be presented at the workshop
(requires registering for the ESWC conference and the workshop) and
will be included in the workshop proceedings that are published online
at CEUR-WS.

PROGRAM COMMITTEE

• Robert Stevens, Manchester University, UK
• Benjamin Good, Genomic Institute, Novartis, USA
• Michael Kohlhase, Jacobs University, Germany
• Oscar Corcho, Politecnica de Madrid, Spain
• Steve Pettifer, Manchester University, UK
• Jodi Schneider, DERI, NUI Galway, Ireland
• Sebastian Kruk, knowledgehives.com, Poland
• Henrik Eriksson,  Linköping University, Sweden
• Dagobert Soergel, University of Maryland, USA
• Tim Clark, Harvard Medical School, USA
• Paolo Ciccarese, Harvard Medical School, USA

ORGANIZING COMMITTEE

• Alexander García Castro, University of Bremen, Germany
• Christoph Lange, Jacobs University Bremen, Germany
• Anita de Waard, Elsevier, USA/Netherlands
• Evan Sandhaus, New York Times, USA

QUESTIONS? → sepubl...@googlegroups.com

-- 
Christoph Lange, Jacobs Univ. Bremen, http://kwarc.info/clange, Skype
duke4701
Semantic Publication workshop, May 29 or May 30, Hersonissos, Crete, Greece
Submission deadline February 28, http://SePublica.mywikipaper.org





303 redirect to a fragment – what should a linked data client do?

2010-06-10 Thread Christoph LANGE
Hi all,

  in our setup we are still somehow fighting with ill-conceived legacy URIs
from the pre-LOD age.  We heavily make use of hash URIs there, so it could
happen that a client, requesting http://example.org/foo#bar (thus actually
requesting http://example.org/foo) gets redirected to
http://example.org/baz#grr (note that I don't mean
http://example.org/baz%23grr here, but really the un-escaped hash).  I
observed that when serving such a result as XHTML, the browser (at least
Firefox) scrolls to the #grr fragment of the resulting page.

But what should an RDF-aware client do?  I guess it should still look out for
triples with the originally requested subject http://example.org/foo#bar,
e.g. <rdf:Description rdf:about="http://example.org/foo#bar">, or (assuming
xml:base="http://example.org/foo") for <rdf:Description rdf:ID="bar">.  Is my
assumption right?
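That assumption can be illustrated with a minimal sketch (stdlib only; a
plain XML walk stands in for a real RDF parser, and the document below is a
made-up stand-in for whatever the redirect chain delivers): the client keeps
searching the returned RDF/XML for descriptions whose rdf:about equals the
URI it originally requested, regardless of which document URL it was
redirected to.

```python
# Minimal sketch: after a 303 redirect, an RDF-aware client still looks up
# the *originally requested* subject in whatever document it received.
# (xml.etree stands in for a real RDF parser here.)
import xml.etree.ElementTree as ET

RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"

# Pretend this is the document the 303 redirect finally delivered.
doc = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description rdf:about="http://example.org/foo#bar"/>
  <rdf:Description rdf:about="http://example.org/baz#grr"/>
</rdf:RDF>"""

requested = "http://example.org/foo#bar"  # what the client asked for
root = ET.fromstring(doc)
matches = [d for d in root.iter(RDF + "Description")
           if d.get(RDF + "about") == requested]
print(len(matches))  # 1
```

Only the description of the originally requested resource is picked up; the
#grr description delivered alongside it is ignored.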

Thanks in advance for any help,

Christoph

-- 
Christoph Lange, Jacobs Univ. Bremen, http://kwarc.info/clange, Skype duke4701




Re: 303 redirect to a fragment – what should a linked data client do?

2010-06-10 Thread Christoph LANGE
2010-06-10 13:40 Christoph LANGE ch.la...@jacobs-university.de:
   in our setup we are still somehow fighting with ill-conceived legacy URIs
 from the pre-LOD age.  We heavily make use of hash URIs there, so it could
 happen that a client, requesting http://example.org/foo#bar (thus actually
 requesting http://example.org/foo) gets redirected to
 http://example.org/baz#grr (note that I don't mean
 http://example.org/baz%23grr here, but really the un-escaped hash).  I
 observed that when serving such a result as XHTML, the browser (at least
 Firefox) scrolls to the #grr fragment of the resulting page.

Update for those who are interested (all tested on Linux, test with
http://kwarc.info/lodtest#misc --303--
http://kwarc.info/clange/publications.html#inproc for yourself):

* Firefox: #inproc
* Chromium: #inproc
* Konqueror: #inproc
* Opera: #misc

That given, what would an _RDFa_-compliant client have to do?  I guess it
would have to do the same as an RDF client, i.e. look into @about attributes
if in doubt.

Cheers,

Christoph

-- 
Christoph Lange, Jacobs Univ. Bremen, http://kwarc.info/clange, Skype duke4701




Re: 303 redirect to a fragment – what should a linked data client do?

2010-06-10 Thread Christoph LANGE
Hi Nathan,

  thanks for your clarifying reply!  That gave me the confirmation that we
were on the right track.  Indeed I should not judge such issues from the
behavior of browsers that are not even RDF-aware.

Cheers,

Christoph

2010-06-10 14:24 Nathan nat...@webr3.org:
 ...
 
 long:
 I've asked this question and several related a few times over the past 
 few months (hence responding).
 
 From what I can tell, the URI identifier and the dereferencing process (and
 the Request/Response chain which follows) are entirely orthogonal issues.
 
 To illustrate, if you dereference http://dbpedia.org/resource/London 
 then the final RDF representation you get will be courtesy of 
 http://dbpedia.org/data/London.[n3/rdf/ttl], but the RDF will still 
 describe http://dbpedia.org/resource/London.
 
 If you consider from a client / code standpoint in a setup where you 
 have two modules abstracted from each other, an HTTP Client and an RDF 
 Parser, the RDF Parser will request something like:
rdf = HTTP-get( uri );
 What the HTTP Client does, the dereferencing process, the request/response 
 chain which follows, the values in the HTTP Header fields, is completely 
 abstracted, transparent to the RDF Parser and indeed of no concern.
 
 Thus regardless of how the HTTP request chain works out, if you try to 
 get an RDF description for http://example.org/foo#bar then you'll still 
 be looking for http://example.org/foo#bar in the RDF that you get back.

-- 
Christoph Lange, Jacobs Univ. Bremen, http://kwarc.info/clange, Skype duke4701




Re: 303 redirect to a fragment ­ what should a linked data client do?

2010-06-10 Thread Christoph LANGE
2010-06-10 14:01 Michael Hausenblas michael.hausenb...@deri.org:
 Are you aware of the respective HTTPbis ticket [1]?
 
 [1] http://trac.tools.ietf.org/wg/httpbis/trac/ticket/43

Thanks, good to know – no, I didn't know that.

Cheers,

Christoph

-- 
Christoph Lange, Jacobs Univ. Bremen, http://kwarc.info/clange, Skype duke4701




How to serve hash URIs from a (Virtuoso) SPARQL endpoint?

2010-05-19 Thread Christoph LANGE
Dear all,

  the data we would like to publish have hash URIs.  We have translated them
to RDF and store the RDF in a Virtuoso triple store with a SPARQL endpoint.

Now, when a client requests application/rdf+xml from a cool URI like
http://our.server/data/document#fragment, it actually makes a request for
http://our.server/data/document.  In the resulting RDF/XML it expects to find
the desired information under rdf:ID="fragment" or
rdf:about="http://...#fragment", i.e. resolving everything behind the # is up
to the client.  That is, the RDF/XML document the server returns for
http://our.server/data/document must contain all triples relevant for
http://our.server/data/document and for any
http://our.server/data/document#whatever – i.e. we would essentially like
our SPARQL endpoint to emulate the behavior of a stupid web server serving a
static http://.../document.rdf file, which contains all those triples.

So far, our solution is that we rewrite the cool URI into the following query
to the SPARQL endpoint:

CONSTRUCT { ?s ?p ?o }
WHERE { ?s ?p ?o .
  FILTER(
    ?s = <http://our.server/data/document>
    ||
    regex(str(?s), "^http://our.server/data/document#"))
}

That works, it's even efficient – but I wonder whether there is any better way
of doing it.
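For illustration, the URI-to-query rewrite can be sketched as a small query
generator (a hypothetical helper, not our actual code; it uses SPARQL 1.1's
STRSTARTS, which avoids having to regex-escape the URI):

```python
def construct_query_for(document_uri):
    # Emulate a static .rdf file for a hash-URI document: select the
    # document's own triples plus all triples about its #fragments.
    # STRSTARTS (SPARQL 1.1) replaces the regex of the original query.
    return (
        "CONSTRUCT { ?s ?p ?o }\n"
        "WHERE { ?s ?p ?o .\n"
        '  FILTER( ?s = <%(u)s> || STRSTARTS(STR(?s), "%(u)s#") )\n'
        "}" % {"u": document_uri}
    )

query = construct_query_for("http://our.server/data/document")
```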

Thanks for your feedback,

Christoph

-- 
Christoph Lange, Jacobs Univ. Bremen, http://kwarc.info/clange, Skype duke4701




Re: [Virtuoso-users] [Fwd: How to serve hash URIs from a (Virtuoso) SPARQL endpoint?]

2010-05-19 Thread Christoph LANGE
Hi Hugh,

[...@nathan, thanks for forwarding it to the Virtuoso list; I subscribed there
now, but I thought the question might also be of a more general interest
w.r.t.  deploying hash URIs.]

2010-05-19 14:51 Hugh Williams hwilli...@openlinksw.com:
 The following Virtuoso Linked Data Deployment Guide details how hash URIs can 
 be handled by the server using transparent content negotiation:
 
   
 http://www.openlinksw.com/virtuoso/Whitepapers/html/vdld_html/VirtDeployingLinkedDataGuide.html
 
 Which would also apply to the data you are attempting to publish ...

Thanks, I had looked there before, but got the impression that that guide only
deals with the very special case of the "#this" pseudo fragment ID, i.e. a
workaround of introducing hash URIs to facilitate content negotiation.  I got
that impression because the guide talks about http://.../ALFKI#this, where
ALFKI is the entity of interest.  Please let me know if I got something
wrong.

In our situation, we have many entities of interest, with the following URIs:

http://.../document (without fragment)
http://.../document#fragment1
http://.../document#fragment2
...

and when a client requests RDF/XML from http://.../document, the client should
get a document that contains all triples for http://.../document,
http://.../document#fragment1, http://.../document#fragment2, etc.

(Note that we were not free to choose this URI format; it was given before we
went linked data.)

Cheers, and thanks,

Christoph

-- 
Christoph Lange, Jacobs Univ. Bremen, http://kwarc.info/clange, Skype duke4701




Re: Linked data in packaged content (ePub)

2010-04-28 Thread Christoph LANGE
2010-04-27 22:40 Stuart A. Yeates syea...@gmail.com:
 I'm interested in putting linked data into eBooks published in the
 (open standard) ePub format (http://www.openebook.org/ ). The format
 is essentially a relocatable zip file of XHTML, associated media files
 and a few metadata files.
 
 ...
 
 Does anyone know of any other attempts to put linked data into
 packages like this?

The mere embedding is a current topic of interest of the RDFa WG (see
http://lists.w3.org/Archives/Public/public-rdfa-wg/2010Mar/0200.html), and I
suppose they will be quite interested in the further implications you
mentioned.

Cheers,

Christoph

-- 
Christoph Lange, Jacobs Univ. Bremen, http://kwarc.info/clange, Skype duke4701



Deadline Extension/Keynote: ESWC Workshop Ontology Repositories and Editors for the Semantic Web (ORES)

2010-02-27 Thread Christoph LANGE
ESWC 2010 Workshop on Ontology Repositories and Editors for the Semantic Web
ORES 2010 - Call for papers and system 
descriptions - http://www.ontologydynamics.org/od/index.php/ores2010/
DEADLINE EXTENDED TO  March 7, 2010


The deadline for the ORES workshop has been extended to March 7, 2010.


We would also like to announce the invited talk as part of the workshop, which 
will be given by Nigam Shah from the Stanford Center for Biomedical 
Informatics Research (http://www.stanford.edu/~nigam/).


The growing number of online ontologies makes the availability of ontology 
repositories, in which ontology practitioners can easily find, select and 
retrieve reusable components, a crucial issue. The recent emergence of several 
ontology repository systems is a further sign of this. However, in order for 
these systems to be successful, it is necessary to provide a forum for 
researchers and developers to discuss features and exchange ideas on the 
realization of ontology repositories in general and to consider explicitly 
their role in the ontology lifecycle. In addition, it is now critical to 
achieve interoperability between ontology repositories, through common 
interfaces, standard metadata formats, etc. ORES10 intends to provide such a 
forum.


Illustrating the importance of the problem, significant initiatives are now 
emerging. One example is the Open Ontology Repositories (OOR) working group 
set up by the Ontolog community. Within this effort regular virtual meetings 
are organized and actively attended by ontology experts from around the world; 
The Ontolog OOR 2008 meeting was held at the National Institute of Standards 
and Technology (NIST), generating a joint communiqué outlining requirements and 
paving the way for collaborations. Another example is the Ontology Metadata 
Vocabulary (OMV) Consortium, addressing metadata for describing ontologies. 
Despite these initial efforts, ontology repositories are hardly interoperable 
amongst themselves. Although sharing similar aims (providing easy access to 
Semantic Web resources), they diverge in the methods and techniques employed 
for gathering these documents and making them available; each interprets and 
uses metadata in a different manner. Furthermore, many features are still 
poorly supported, such as modularization and versioning, as well as the 
relationship between ontology repositories and ontology engineering 
environments (editors) to support the entire ontology lifecycle.


Submitting papers and system descriptions


We want to bring together researchers and practitioners active in the design, 
development and application of ontology repositories, repository-aware 
editors, modularization techniques, versioning systems and issues around 
federated ontology systems. We therefore encourage the submission of research 
papers, position papers and system descriptions discussing some of the 
following questions:


 * How can ontology repositories “talk” to each other? 
 * How can the abundant and complex knowledge contained in an ontology 
repository be made comprehensible for users? 
 * What is the role of ontology repositories in the ontology lifecycle? 
 * How can branching and versioning be managed in and across ontology 
repositories? 
 * How can ontology repositories interoperate with ontology editors, and other 
applications and legacy systems? 
 * How can connections across ontologies be managed within and across ontology 
repositories? 
 * How can modularity be better supported in ontology repositories and 
editors? 
 * How can ontology repositories and editors use distributed reasoning? 
 * How can ontology repositories support corporate, national and domain 
specific semantic infrastructures?  
 * How do ontology repositories support novel semantic applications? 
 * What measurements for describing and comparing ontologies can we use? How 
could ontology repositories use these?  


Research papers are limited to 12 pages and position papers to 5 pages. For 
system descriptions, a 5 page paper should be submitted. All papers and system 
descriptions should be formatted according to the LNCS format 
(http://www.springer.com/computer/lncs?SGWID=0-164-2-72376-0). Proceedings of 
the workshop will be published online. Depending on the number and quality of 
the submissions, authors might be invited to present their papers during a 
poster session.


Submissions can be realized through the easychair system 
at http://www.easychair.org/conferences/?conf=ores2010.


Important dates


Papers and demo submission: March 7, 2010 (23:59 Hawaii Time)
Notification: April 5, 2010
Camera ready version: April 18, 2010
Workshop: May 30 or 31, 2010


Organizing committee


Mathieu d'Aquin, the Open University, UK
Alexander García Castro, Bremen University, Germany
Christoph Lange, Jacobs University Bremen, Germany
Kim Viljanen, Aalto University, Finland 


Program committee


Ken Baclawski, Northeastern University, USA. 
Leo J. Obrst, MITRE

Re: Colors

2010-02-24 Thread Christoph LANGE
2010-02-24 08:31 Pat Hayes pha...@ihmc.us:
 Does anyone know of URIs which identify colors? Umbel has the general
 notion of Color, but I want the actual colors, like, you know, red,
 white, blue and yellow. I can make up my own, but would rather use
 some already out there, if they exist.

Do you really need URIs?  I.e. do you want to add further descriptions or
links to colors, such as color1 is nicer than color2, or color can be
produced from material – or do you just want to point to colors (thing has
color)?  In the latter case, wouldn't literals with datatypes be sufficient?
For literals, there is at least a standard for RGB colors: #RRGGBB.  Still,
here it's the standard _datatype_ that's missing.
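To make the typed-literal idea concrete (a sketch in Turtle-style notation;
the datatype URI below is made up, which is precisely the missing piece):

```python
import re

# A hypothetical datatype URI -- no standard one exists for RGB colors.
RGB_DATATYPE = "http://example.org/datatypes#rgbHex"

def rgb_literal(value):
    # Validate the #RRGGBB convention, then render a typed literal.
    if not re.fullmatch(r"#[0-9A-Fa-f]{6}", value):
        raise ValueError("not an #RRGGBB color: %r" % value)
    return '"%s"^^<%s>' % (value, RGB_DATATYPE)

rgb_literal("#FF0000")  # '"#FF0000"^^<http://example.org/datatypes#rgbHex>'
```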

Cheers,

Christoph

-- 
Christoph Lange, Jacobs Univ. Bremen, http://kwarc.info/clange, Skype duke4701




2nd CfP: Ontology Repositories and Editors for the Semantic Web (ORES2010 @ ESWC)

2010-02-11 Thread Christoph LANGE
ESWC 2010 Workshop on Ontology Repositories and Editors for the
Semantic Web
ORES 2010 - Call for papers and system descriptions -
http://www.ontologydynamics.org/od/index.php/ores2010/
Heraklion, Greece, May 31 - Submission Deadline: March 1, 2010

The growing number of online ontologies makes the availability of
ontology repositories, in which ontology practitioners can easily
find, select and retrieve reusable components, a crucial issue. The
recent emergence of several ontology repository systems is a further
sign of this. However, in order for these systems to be successful, it
is necessary to provide a forum for researchers and developers to
discuss features and exchange ideas on the realization of ontology
repositories in general and to consider explicitly their role in the
ontology lifecycle. In addition, it is now critical to achieve
interoperability between ontology repositories, through common
interfaces, standard metadata formats, etc. ORES10 intends to provide
such a forum.

Illustrating the importance of the problem, significant initiatives
are now emerging. One example is the Open Ontology Repositories (OOR)
working group set up by the Ontolog community. Within this effort
regular virtual meetings are organized and actively attended by
ontology experts from around the world; The Ontolog OOR 2008 meeting
was held at the National Institute of Standards and Technology (NIST),
generating a joint communiqué outlining requirements and paving the
way for collaborations. Another example is the Ontology Metadata
Vocabulary (OMV) Consortium, addressing metadata for describing
ontologies. Despite these initial efforts, ontology repositories are
hardly interoperable amongst themselves. Although sharing similar aims
(providing easy access to Semantic Web resources), they diverge in the
methods and techniques employed for gathering these documents and
making them available; each interprets and uses metadata in a
different manner. Furthermore, many features are still poorly
supported, such as modularization and versioning, as well as the
relationship between ontology repositories and ontology engineering
environments (editors) to support the entire ontology lifecycle.

Submitting papers and system descriptions

We want to bring together researchers and practitioners active in the
design, development and application of ontology repositories,
repository-aware editors, modularization techniques, versioning
systems and issues around federated ontology systems. We therefore
encourage the submission of research papers, position papers and
system descriptions discussing some of the following questions:

 * How can ontology repositories "talk" to each other?
 * How can the abundant and complex knowledge contained in an
ontology repository be made comprehensible for users?
 * What is the role of ontology repositories in the ontology lifecycle?
 * How can branching and versioning be managed in and across ontology
repositories?
 * How can ontology repositories interoperate with ontology editors,
and other applications and legacy systems?
 * How can connections across ontologies be managed within and across
ontology repositories?
 * How can modularity be better supported in ontology repositories
and editors?
 * How can ontology repositories and editors use distributed reasoning?
 * How can ontology repositories support corporate, national and
domain specific semantic infrastructures?
 * How do ontology repositories support novel semantic applications?
 * What measurements for describing and comparing ontologies can we
use? How could ontology repositories use these?

Research papers are limited to 12 pages and position papers to 5
pages. For system descriptions, a 5 page paper should be submitted.
All papers and system descriptions should be formatted according to
the LNCS format (http://www.springer.com/computer/lncs?SGWID=0-164-2-72376-0
). Proceedings of the workshop will be published online. Depending on
the number and quality of the submissions, authors might be invited to
present their papers during a poster session.

Submissions can be realized through the easychair system at
http://www.easychair.org/conferences/?conf=ores2010 .

Important dates

Papers and demo submission: March 1, 2010 (23:59 Hawaii Time)
Notification: April 5, 2010
Camera ready version: April 18, 2010
Workshop: May 31, 2010

Organizing committee

Mathieu d'Aquin, the Open University, UK
Alexander García Castro, Bremen University, Germany
Christoph Lange, Jacobs University Bremen, Germany
Kim Viljanen, Aalto University, Finland

Program committee

Ken Baclawski, Northeastern University, USA.
Leo J. Obrst, MITRE Corporation, USA.
Mark Musen, Stanford University, USA.
Natasha Noy, Stanford University, USA.
Li Ding, Rensselaer Polytechnic Institute, USA.
Mike Dean, BBN, USA.
John Bateman, Universität Bremen, Germany.
Michael Kohlhase, Jacobs University, Germany.
Tomi Kauppinen, University of Muenster, Germany.
Peter Haase, Fluid Operations

Re: Recommendations for serving backlinks when having hash URIs?

2010-02-10 Thread Christoph LANGE
Hi Richard,

2010-02-10 02:43 Richard Cyganiak rich...@cyganiak.de:
 On 9 Feb 2010, at 23:17, Christoph LANGE wrote:
  [lots of musings on how it could(n't) be done with hash URIs]
  
  ...
  
  Of course any reasonable approach to pick the most relevant triples
  depends on the specific vocabulary and application, but still, are there
  any guidelines?  Or should we rather consider ways of mapping hash URIs to
  slash URIs?  Are there standard approaches?  I could imagine that e.g. for
  foo27 the server could return only these triples:
 
  <foo27#bar1> owl:sameAs <slashland/foo27/bar1> .
  <foo27#bar2> owl:sameAs <slashland/foo27/bar2> .
  ...
 
 Maybe simpler:
 
 <foo27#bar1> rdfs:seeAlso <slashland/foo27/bar1.rdf> .
 <foo27#bar2> rdfs:seeAlso <slashland/foo27/bar2.rdf> .
 ...
 
 And then serve up the detailed description inside the *.rdf files,
 while still using the hash URIs inside these files. This limits the
 complication to the RDF file level, without requiring messing about
 with multiple URI aliases.

Thanks, that sounds very reasonable (both in the English and in the formal
sense)!

So far the MIME type for which we will most urgently need such a solution is
indeed RDF.  However, if we should also need the hash→slash redirection for
other MIME types, would you rather recommend adding e.g.

<foo27#bar1> rdfs:seeAlso <slashland/foo27/bar1.html> .

or would it then be preferable to resort to my initial approach and perform
content negotiation and 303 redirects on the slash URIs?

Another question is how to deal with the weak semantics of rdfs:seeAlso here.
In _our_ application we can of course hard-code that whenever a hash URI is
encountered, the rdfs:seeAlso link (if existing) must be followed first.  But
then how would other clients know that in this setup the rdfs:seeAlso is not
just anything optional/additional that you can follow or not, depending on
what the user is interested in, but that it _should_ _really_ be followed in
order to retrieve even basic information about the resources?  Is it safe to
expect that any reasonable linked data client, when the only triple that it
finds when crawling is rdfs:seeAlso, assumes a closed world and somehow
guesses that it _has_ to follow that link in order to find out more?

Cheers, and thanks,

Christoph

-- 
Christoph Lange, Jacobs Univ. Bremen, http://kwarc.info/clange, Skype duke4701



Re: Recommendations for serving backlinks when having hash URIs?

2010-02-10 Thread Christoph LANGE
2010-02-10 12:20 Richard Cyganiak rich...@cyganiak.de:
 I'd recommend:
 
 <foo27#bar1> rdfs:seeAlso <slashland/foo27/bar1> .
 
 and then perform standard (non-redirect) content negotiation at
 slashland/foo27/bar1, with variants at bar1.rdf, bar1.html etc.

OK, thanks, that makes sense.

 …
 
 at the end of the day it's always up to the clients wether they follow your
 links or not, no matter what you call your property (owl:sameAs,
 rdfs:seeAlso, my:mustFollowThisOrDie). The rdfs:seeAlso property at least is
 standard, and thus has a decent probability of being understood by a client.
 
 In practice, some clients will understand it and some won't. Hence it might
 be prudent to include some *very* basic information directly in your
 original file at foo27, let's say at least an rdfs:label and maybe an
 rdf:type for the foo27#barNNN URIs.

Indeed – that should be feasible.

Cheers,

Christoph

-- 
Christoph Lange, Jacobs Univ. Bremen, http://kwarc.info/clange, Skype duke4701



Recommendations for serving backlinks when having hash URIs?

2010-02-09 Thread Christoph LANGE
Dear all,

  are there any guidelines on how to reasonably restrict the number of served
RDF triples when

* the linked data served should also include backlinks (i.e. triples that have
  the requested resource as object), as is good practice
* hash URIs are used (and therefore a server response contains a lot more than
  what the client actually needs)
?

In our setting, we are bound to use hash URIs for certain resources, as the
URI syntax conventions predate the age of linked data.  Suppose we
have resources with URIs like fooX#barY, such as foo23#bar57.  Suppose that
the RDF on the server is not served from static files fooX.rdf, but from a
triple store, and thus the server has some flexibility w.r.t. what exactly to
serve.  Now suppose a client is interested in foo27#bar4 and therefore (hash
URIs!) has to request foo27 from the server.  Then, the server would at least
have to return a lot of triples having any of the foo27#barY as subject, e.g.

<foo27#bar1> :someprop <foo27#bar56> .
<foo27#bar2> :someprop <foo33#bar1> .
<foo27#bar4> :someprop <foo66#bar89> .

Additionally we would like to get some triples having foo27#barY as an object,
but again the server does not know that the client is only interested in
foo27#bar4.

Of course any reasonable approach to pick the most relevant triples depends
on the specific vocabulary and application, but still, are there any
guidelines?  Or should we rather consider ways of mapping hash URIs to slash
URIs?  Are there standard approaches?  I could imagine that e.g. for foo27 the
server could return only these triples:

<foo27#bar1> owl:sameAs <slashland/foo27/bar1> .
<foo27#bar2> owl:sameAs <slashland/foo27/bar2> .
...

and that then the client would issue a second request to slashland, where a
reasonable response of links and relevant backlinks could be computed more
easily.
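The intended server response for a document can be sketched like this (plain
tuples stand in for RDF triples; the URIs and property name are made up):

```python
# The server's answer for foo27 contains every triple whose subject or
# object is foo27 itself or any of its #barY fragments -- links and
# backlinks alike, since it cannot know which fragment the client wants.
TRIPLES = [
    ("http://ex.org/foo27#bar1", "someprop", "http://ex.org/foo27#bar56"),
    ("http://ex.org/foo27#bar2", "someprop", "http://ex.org/foo33#bar1"),
    ("http://ex.org/foo33#bar9", "someprop", "http://ex.org/foo27#bar4"),  # backlink
    ("http://ex.org/foo66#bar2", "someprop", "http://ex.org/foo66#bar3"),  # unrelated
]

def describe(doc_uri, triples):
    hit = lambda u: u == doc_uri or u.startswith(doc_uri + "#")
    return [t for t in triples if hit(t[0]) or hit(t[2])]

matches = describe("http://ex.org/foo27", TRIPLES)  # 3 of the 4 triples
```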

Cheers, and thanks in advance for any help,

Christoph

-- 
Christoph Lange, Jacobs Univ. Bremen, http://kwarc.info/clange, Skype duke4701




Re: Content negotiation: Why always redirect from non-information resource to information resource?

2010-01-26 Thread Christoph LANGE
Hi Georgi,

2010-01-27 01:00 Georgi Kobilarov georgi.kobila...@gmx.de:
 If a client asks the server: "please give me Berlin", the server must
 respond with "sorry, can't give you that, because Berlin is a city that
 can't be sent through the wire (non-information resource), but look over
 there, maybe that's of help". That's a 303 redirect.
 
 The server is only allowed to respond with an HTTP 200 if it can actually
 send what the client wants.

thanks, but what if I suppose that my client is software (HTML-aware browser
or RDF-aware agent) that does not have any understanding of a city, but just
of HTTP?

I.e. if I assume that my client just finds the URI
http://not-dbpedia.org/Berlin in some RDF on the web (not caring about whether
this is an information resource or not) and dereferences it,

* case 1: requesting RDF – if the server directly serves RDF from
  http://not-dbpedia.org/Berlin, this is what the client wants
* case 2: requesting HTML – then the server would understand that the content
  (i.e. the information resource) at http://not-dbpedia.org/Berlin is not what
  the client wants, it would 303-redirect the client to, say,
  http://html.not-dbpedia.org/Berlin, which is what the client wants.
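The two cases can be sketched as a toy dispatch function (purely
illustrative; no real server reduces content negotiation to a substring
check on the Accept header):

```python
def respond(accept_header):
    # case 1: the client asks for RDF -- serve it directly with 200 OK
    if "application/rdf+xml" in accept_header:
        return 200, "http://not-dbpedia.org/Berlin"
    # case 2: the client asks for HTML -- 303-redirect to the HTML variant
    return 303, "http://html.not-dbpedia.org/Berlin"
```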

 A nice workaround is #-URIs (which I prefer...)

In our setting we are not planning to rely on normal #-URIs, i.e. long
documents containing lots of fragments, for various reasons.  Cool URIs
recommends fake #-URIs for non-information resources, e.g.
http://not-dbpedia.org/Berlin#this (see
http://www.w3.org/TR/cooluris/#choosing), but here I don't understand what
they are good for.

So far, I don't understand why it is recommended to have separate URIs for the
non-information resource and the (RDF/XML) information resource.  I can
understand that it might be desirable for clear communication among humans to
have two separate URIs/URLs (that's why it might be hard for me to communicate
my point), and I can also understand that from a software engineering point of
view the designer of a web server application might not want to privilege a
certain data format (e.g. RDF/XML) over the others by making the arbitrary
decision to serve that format from the URL that is also used to denote the
abstract philosophical concept.

I.e. the reasoning in
http://www.w3.org/2001/tag/doc/httpRange-14/2007-05-31/HttpRange-14#iddiv1138805816
(essentially the same as what you said above) is clear to me from a
philosophical point of view, but not from a technical one.  I'm taking a more
pragmatic view here, as the information that we would like to serve is not
necessarily the one, true and only concept of Berlin, but rather something
more pragmatic, such as our view of Berlin, as we happen to define it (in
RDF) – and IMHO the latter can as well be unified with an information
resource.

Or am I entirely missing the point?

Cheers,

Christoph

-- 
Christoph Lange, Jacobs Univ. Bremen, http://kwarc.info/clange, Skype duke4701



Re: RDF, XML, XSLT: Grit

2010-01-20 Thread Christoph LANGE
Hi Axel,

2010-01-20 11:45 Axel Polleres axel.polle...@deri.org:
 reading the thread with interest. If I understand correctly, most of these
  approaches (Grit, RXR, etc.) only provide normalisation, which in my opinion
  is only ONE part of the story in making existing RDF data amenable to
  XSLT/XQuery transformations.
 
 What it doesn't address is that probably big (and, with the adoption of
  SPARQL, increasing) amounts of RDF data are residing in RDF stores... you
  don't want to dump that whole data into RDF/XML first and then query it
  with XSLT/XQuery if a SPARQL interface is already available, do you?
 
 To this end we have developed a combined query- and transformation language
  called XSPARQL [1,2,3] which should address this drawback. ...

Theoretically, I fully agree.  Practically, I partly agree.  I have been
following the development of XSPARQL, and I am looking forward to it becoming
more widely supported.  Using SPARQL for RDF queries and XQuery for XML
queries is definitely my preferred division of responsibilities.  But it is
not always possible for technical reasons.

The setting in which I'm currently using RXR is an XML database that natively
supports XQuery (http://trac.mathweb.org/tntbase/, based on Berkeley DB XML).
From this database, you can retrieve XML documents as they are, or, for
certain languages supported by the system, you can also retrieve documents
rendered to HTML via XSLT.  Now we wanted to enrich that HTML output by RDFa.
The developer of TNTBase was not in favor of installing a triple store and
SPARQL endpoint _only_ for the purpose of providing the RDF that was to be
integrated into the HTML output as RDFa, as that is currently only a minor
goodie, not the core feature of the system.  On the other hand it was not a
big deal in our setting to make the RDF data available as RXR, and to add some
code to the XSLT that queries RXR and transforms it into RDFa annotations.

I think what makes the difference in my setting is

1. that the RDF→XML transformation (here: providing RDF as RXR) is not a
   superfluous roundtrip.  Even if we could obtain the RDF from a SPARQL
   endpoint, we would eventually have to convert it to some XML representation
   (e.g. SPARQL Query Results) in order to feed it into the rendering XSLT.
   (IIRC your XSPARQL also uses the SPARQL Query Results format internally.)
2. that RXR is perfectly suitable for the task:  We do not do high-level
   queries, but only retrieve predicates and objects for a given subject.
   This is perfectly feasible with RDF represented as some normalized XML.
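The "one triple, one element" property that makes such lookups easy can be
illustrated as follows (element names are illustrative, not the exact RXR
schema):

```python
import xml.etree.ElementTree as ET

# In a normalized serialization there are no abbreviation choices:
# every triple becomes one element with subject/predicate/object
# children, so an XSLT/XQuery lookup by subject is a trivial path.
def triple_element(s, p, o):
    t = ET.Element("triple")
    ET.SubElement(t, "subject", uri=s)
    ET.SubElement(t, "predicate", uri=p)
    ET.SubElement(t, "object", uri=o)
    return t

xml = ET.tostring(
    triple_element("http://ex.org/s", "http://ex.org/p", "http://ex.org/o"),
    encoding="unicode")
```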

  is it really normalized RDF/XML that we want or don't we rather want to
  query RDF directly with SPARQL and XML with XQuery/XSLT?

So my conclusion is that direct queries are preferable theoretically, as well
as in many practical applications, but that there will always be other
practical applications, where it is more suitable to query normalized RDF as
XML.

Cheers,

Christoph

-- 
Christoph Lange, Jacobs Univ. Bremen, http://kwarc.info/clange, Skype duke4701



Re: RDF, XML, XSLT: Grit

2010-01-19 Thread Christoph LANGE
2010-01-19 20:04 Toby Inkster t...@g5n.co.uk:
 You may be interested in the RXR output plugin I wrote for ARC2 a few
 months ago:
 
 http://goddamn.co.uk/viewvc/demiblog-new/arc-patching/plugins/

I had already feared that RXR had been abandoned, so it's good to see that
there are up-to-date implementations supporting it :-)

Cheers,

Christoph

-- 
Christoph Lange, Jacobs Univ. Bremen, http://kwarc.info/clange, Skype duke4701



Re: RDF, XML, XSLT: Grit

2010-01-18 Thread Christoph LANGE
Hi Niklas,

2010-01-17 18:53 Niklas Lindström lindstr...@gmail.com:
 I made this primarily for using XSLT to produce (xhtml) documents from
 controlled sets of RDF, e.g. vocabularies and such. I've found it
 convenient enough to think that there may be general interest.

My feedback will be …

 I would love feedback if you find this to be interesting, either for just
 XSLT/XQuery etc., or even as (yet another..) RDF format …

… of that kind:  I have successfully done some XSLT processing with RXR
(http://wiki.oasis-open.org/office/RXR,
http://www.dajobe.org/papers/xmleurope2004/).  I found it very nice for XSLT
processing, as there is exactly one way of writing things down.  On the other
hand, it's a bit harder for humans to read, as it always uses full URIs, and
there is no syntax for anonymous bnodes; you always have to give bnodes an
ID.

Still, whatever syntax it will be in the end, I support any initiative towards
deprecating RDF/XML or at least introducing a machine-friendly XML syntax in
RDF 2.0.

Cheers,

Christoph

-- 
Christoph Lange, Jacobs Univ. Bremen, http://kwarc.info/clange, Skype duke4701