CFP: WaSABi - 3rd Workshop on Semantic Web Enterprise Adoption and Best Practice @ EKAW2014

2014-09-01 Thread Magnus Knuth

--
CALL FOR PAPERS
for the 3rd Workshop on

* SEMANTIC WEB ENTERPRISE ADOPTION AND BEST PRACTICE *
Special Issue on Linked Data Lifecycle Management
(WaSABi SI 2014)

http://wasabi-ws.org/
November 24th or 25th (tba), 2014, Linköping, Sweden
*** Paper submission deadline: September 19th, 2014 ***
--

There is a disconnect between the Semantic Web research and commercial 
practitioner communities, as evidenced by the limited uptake of these very 
promising technologies for managing knowledge and information in the broader 
industry. Researchers steadily improve upon modelling languages, reasoners, and 
triple stores - but all too often, these improvements are driven not by real 
business needs, but rather by the interests of researchers to solve interesting 
challenges. Conversely, practitioners are oftentimes unaware of how existing 
Semantic Web technologies can help solve their problems. Even in the cases that 
they do know about these Semantic Web solutions, most practitioners lack the 
knowledge about tooling, scalability issues, design patterns, etc., that is 
required to successfully apply them.

In order to bridge this gap, the WaSABi workshop provides an arena for 
discussing and developing ways of applying Semantic Web technologies to real 
world knowledge modelling problems. The workshop aims to develop a greater 
understanding of industrial organisations and their needs among academics 
(guiding them in selecting problems to work on that are of direct relevance to 
practitioner partners), and to discover or establish best practices for 
Semantic Web technology development and use, guiding practitioners who want to 
apply these technologies. 

TOPICS OF INTEREST
==
Authors are invited to consider the following (non-exhaustive) list of topics:

* Semantic technologies for practical knowledge management
* Semantic tools supporting knowledge acquisition 
* Surveys or case studies on Semantic Web technology in enterprise systems
* Comparative studies on the evolution of Semantic Web adoption
* Architectural overviews for Semantic Web systems
* Design patterns for semantic technology architectures and algorithms
* System development methods as applied to semantic technologies
* Semantic toolkits for enterprise applications
* Surveys on identified best practices based on Semantic Web technology

Of special interest are submissions that touch upon the issues discussed during 
the brainstorming sessions of the previous WaSABi workshop:

* Linked Data lifecycle management: How can the longevity of key URIs and 
namespaces be guaranteed? Are such resources too important as infrastructure or 
community assets to be left to commercial actors? How does one handle change 
management in a Linked Data context? How can software be analysed to find 
(potentially dangerous) dependencies on distributed Semantic Web resources?
* Software development for the Semantic Web: Are traditional software 
engineering methods well suited to the development of solutions for the 
Semantic Web? How can the complexity of the Semantic Web technology stack be 
abstracted or simplified (e.g. ORM for RDF)? Can Semantic Web software 
components run unchanged on cloud computing platforms, or how must they be 
adapted?

Additionally, industrial papers that focus on approaches, architectures, or 
tools demonstrating best practices in Semantic Web technologies are 
particularly encouraged.


SUBMISSIONS
==
Submission criteria are as follows:

* Papers must adhere to the LNCS format guidelines 
(http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0).
* Papers are limited to eight pages (including figures, tables and appendices).
* Papers are submitted in PDF format via EasyChair 
(https://easychair.org/conferences/?conf=wasabiekaw2014).

Accepted authors will be given a 15-minute presentation slot, followed by 5 
minutes of Q&A.


PROCEEDINGS
==

All accepted papers will be included in the WaSABi SI 2014 proceedings, to be 
published online via CEUR-WS.


IMPORTANT DATES
==
Unless otherwise stated, all deadlines are 23:59 Hawaii time.

• Submission - September 19, 2014
• Notification -  October 17, 2014
• Camera ready version - October 31, 2014
• Workshop - November 24 or 25 (tba), 2014 (half day)


ORGANIZING COMMITTEE
==

• Marco Neumann, KONA LLC
• Sam Coppens, IBM Research
• Karl Hammar, Jönköping University, Linköping University
• Magnus Knuth, Hasso Plattner Institute - University of Potsdam
• Dominique Ritze, University of Mannheim
• Miel Vander Sande, iMinds – Multimedia Lab – Ghent University

For enquiries, please contact the organizers at wasabiekaw2...@easychair.org

PROGRAM COMMITTEE
==
- Ghislain Atemezing, Eurecom
- Henrik Eriksson, Linköping University
- Daniel Garijo, 

Re: Does anyone know a good editor for RDF that plays nicely with HTTP

2014-09-01 Thread John Walker
Hi David, Laurens,

Thanks for the tips.

To be honest, Callimachus seems a bit too much for this; I was looking for 
something that 'just works', either browser- or desktop-based. I didn't plan to 
build an application.

Snapper seems to fit the bill, so I will see if I can get it talking to the 
graph store.

Regards,

John

On 30 Aug 2014, at 17:03, Laurens Rietveld laurens.rietv...@vu.nl wrote:

 Give Snapper (http://jiemakel.github.io/snapper/) a try as well, made by Eetu 
 Mäkelä. It is a completely client-side JavaScript Turtle editor that supports 
 uploading and downloading. 
 As far as I know, it requires a (CORS-enabled) SPARQL endpoint for updating 
 the triples (you can try sending a GitHub feature request if this does not 
 suit your use case).
 
 gr Laurens
 
 
 On Sat, Aug 30, 2014 at 3:52 PM, David Wood da...@3roundstones.com wrote:
 Hi John,
 
 The Callimachus Project (http://callimachusproject.org) does all of that.
 
 Regards,
 Dave
 --
 http://about.me/david_wood
 Sent from my iPad
 
  On Aug 30, 2014, at 8:36 AM, john.walker john.wal...@semaku.com wrote:
 
  Hi,
 
  I'm looking for an editor that can be used to easily modify RDF resources 
  on the web without needing to use curl to do the requests.
  So I want something where I can open a resource over HTTP with a GET request, 
  edit the RDF contents, and save my changes with a PUT request.
  Basically I want to be able to use the SPARQL 1.1 Graph Store HTTP Protocol 
  or LDP to access the resources.
 
  Probably easiest would be to use Turtle, so the relevant Accept and 
  Content-Type headers need to be sent with the request.
  Must support HTTP basic authentication too.
 
  Syntax highlighting/validation would be a bonus although simply being able 
  to edit a text is sufficient.
 
  Cheers,
  John
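
A minimal sketch (not from the thread) of the GET/PUT round-trip John describes, 
using the SPARQL 1.1 Graph Store HTTP Protocol with Turtle and HTTP basic 
authentication. The endpoint URL, graph URI, and credentials below are 
placeholders, not real services.

```python
# Build Graph Store Protocol requests for one named graph. The endpoint,
# graph URI, and credentials are hypothetical placeholders.
import base64
import urllib.parse
import urllib.request

GRAPH_STORE = "http://localhost:3030/ds/data"     # hypothetical endpoint
GRAPH_URI = "http://example.org/graph/people"     # hypothetical named graph

def graph_request(method, turtle=None, user=None, password=None):
    """Build a Graph Store Protocol request (GET or PUT) for one named graph."""
    url = GRAPH_STORE + "?graph=" + urllib.parse.quote(GRAPH_URI, safe="")
    headers = {"Accept": "text/turtle"}           # ask for Turtle back
    if turtle is not None:
        headers["Content-Type"] = "text/turtle"   # we send Turtle too
    if user is not None:
        token = base64.b64encode((user + ":" + password).encode()).decode()
        headers["Authorization"] = "Basic " + token   # HTTP basic auth
    body = turtle.encode() if turtle is not None else None
    return urllib.request.Request(url, data=body, headers=headers, method=method)

# GET the graph, edit the Turtle locally, then PUT the whole graph back:
get_req = graph_request("GET", user="john", password="secret")
put_req = graph_request("PUT",
                        turtle="<#me> a <http://xmlns.com/foaf/0.1/Person> .",
                        user="john", password="secret")
# urllib.request.urlopen(get_req) / urlopen(put_req) would perform the calls.
```

Note that with the Graph Store Protocol a PUT replaces the whole graph, so the 
editor has to round-trip the complete serialization rather than patch triples.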
 
 
 
 
 -- 
 VU University Amsterdam
 Faculty of Exact Sciences
 Department of Computer Science
 De Boelelaan 1081 A
 1081 HV Amsterdam
 The Netherlands
 www.laurensrietveld.nl
 laurens.rietv...@vu.nl
 
 Visiting address: 
 De Boelelaan 1081
 Science Building Room T312


RDF Quality, sorry what? [lightning talks in LDQ @ SEMANTiCS 2014]

2014-09-01 Thread Magnus Knuth
Since we didn’t get to bring a keynote speaker for the LDQ workshop, we invite 
all of you to share your RDF quality problem statements.

Tomorrow at 16:20 we’ll start a lightning talk session where all of you are 
invited to give a very quick talk on Linked Data Quality in the form of What 
(is my problem), Why (is it important), Who(m) (does it affect), and How 
(should it be tackled).

After the lightning talks end, we’ll start the good part: the discussion :)

Please add your name on the following Google Doc to get a slot
http://tinyurl.com/LDQ14LightningTalks

If you can’t make it in person, you can add your personal statement and follow 
the discussion in the Google Doc.

The LDQ2014 organizing team
Magnus, Dimitris, and Harald

-- 
Workshop on Linked Data Quality (LDQ2014) at SEMANTiCS 2014 (September 2, 2014 
– Leipzig, Germany)
http://ldq.semanticmultimedia.org/


Re: A proposal for two additional properties for LOCN

2014-09-01 Thread Gannon Dick
Hi Frans,

A complete and coherent coordinate system is a sine qua non for analysis of 
data, planning strategy and measuring performance.

Your questions ...
1) Are the semantics of the two properties really absent from the semantic 
web at the moment?

As long as the perception exists that translation parameters are an arbitrary 
coding option, 'absent' is exactly the right word, IMHO.

2) Is the Location Core Vocabulary an appropriate place to add them?

I believe so, because if not there, where ?

3)  Is the proposed way of modelling the two properties right? Could conflicts 
with certain use cases occur?

The amplitude of a Normal Distribution depends on sigma (the square root of the 
variance), the square root of 2, and the square root of pi. A universe 
where variance, 2, or pi has multi-valued roots is not a valid use case; 
it is modelling (i.e. Graphic Arts) malpractice.
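
To make the amplitude remark concrete, here is a small sketch (not part of the 
original mail): the peak of a Normal density is 1/(sigma*sqrt(2*pi)), fixed 
once sigma is fixed.

```python
# Sketch: the amplitude (peak) of a Normal density is 1/(sigma*sqrt(2*pi)).
# Once sigma is chosen there is no remaining freedom in the roots of 2 or pi,
# which is the "single-valued roots" point above.
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of N(mu, sigma^2) evaluated at x."""
    coeff = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

amplitude = normal_pdf(0.0)   # the peak sits at the mean (here mu = 0)
```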
-
If you want hard numbers ...

For strategy, planning and performance metric measurement (Strategy Markup 
Language - StratML) [1] the coordinate system is South Pole to North Pole 
(degrees) / East to West (degrees) / Year to Year+1 (degree-day) [2].

The spreadsheets - detailed calculations - are at [3].

For Work-Life Balance, sunrise and sunset detailed calculations are at [4].  
There is little or no difference between the results obtained by shifting the 
origin from the Winter Solstice to New Year's Day and marking seasonal 
transits over the Tropics.  These are Astronomy conventions versus Civil Time 
conventions.  However, vertical shifts (Work-Life Balance) are better 
visualized with a two-layer map. The time of day (layer) and sleep-wake cycle 
(layer) are not spin coupled, meaning that vector and raster scales 
(mentioned in your Wiki post) cannot be reconciled with an average. There is 
no Central Limit to Work Ethic.
-
If you want pictures ...
It is the Labor Day Holiday in the US, and like many I am mourning the passage 
of summer by eating too much in short pants I am too old to wear and avoiding 
anything like work.  The spreadsheets have charts :-)

Best,
Gannon


[1] http://xml.fido.gov/stratml/index.htm
[2] http://www.rustprivacy.org/2014/balance/opers/
[3] http://www.rustprivacy.org/2014/balance/opers/stratml-operations.zip
[4] http://www.rustprivacy.org/2014/balance/opers/true-up-wlb.zip




On Mon, 9/1/14, Frans Knibbe | Geodan frans.kni...@geodan.nl wrote:

 Subject: A proposal for two additional properties for LOCN

 Hello all,
 
 
 
 I have made a wiki page for a provisional proposal for the addition of two 
new properties to the Location Core Vocabulary: CRS and spatial resolution. I 
would welcome your thoughts and comments. 
  
 The proposal is based on earlier discussions on this list. I am not 
certain about any of it, but I think starting with certain definitions can help 
in eventually getting something that is good to work with. 
 
 Some questions that I can come up with are:
 
 1) Are the semantics of the two properties really absent from the semantic 
 web at the moment?
 2) Is the Location Core Vocabulary an appropriate place to add them?
 3) Is the proposed way of modelling the two properties right? Could 
 conflicts with certain use cases occur?
 
 More detailed questions are on the wiki page.
 
 
 Regards,
 Frans
 
 Frans Knibbe
 Geodan
 President Kennedylaan 1
 1079 MB Amsterdam (NL)
 T +31 (0)20 - 5711 347
 E frans.kni...@geodan.nl
 www.geodan.nl | disclaimer



Seeking Experienced Jena developer

2014-09-01 Thread Samuel Rose
The Prevention at Home project is seeking an experienced Apache Jena
developer for a 6-month contract of approximately 20 hours per week.

Desired knowledge includes:

Java programming experience, SPARQL query experience, OWL and RDF
ontology usage and creation experience (we are also working with
Protégé and Neologism to create and serve ontologies), and experience with
Jena Rules and Inference.
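
For readers unfamiliar with the "Jena Rules" item, a minimal rule file of the 
kind Jena's forward-chaining rule reasoner consumes looks roughly like the 
sketch below; the prefix and property names are hypothetical, not from the 
project.

```
@prefix ex: <http://example.org/ns#>.

# If ?p is a parent of ?c, infer the inverse ex:childOf triple.
[childRule: (?p ex:parentOf ?c) -> (?c ex:childOf ?p)]
```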

Please contact: samuel.r...@gmail.com if you are interested in this 6
month to 1 year contract position.


About the project:

http://innovation.cms.gov/initiatives/Health-Care-Innovation-Awards-Round-Two/Washington-DC.html

GEORGE WASHINGTON UNIVERSITY

Project Title: PREVENTION AT HOME: A Model for Novel use of Mobile
Technologies and Integrated Care Systems to Improve HIV Prevention and
Care While Lowering Cost
Geographic Reach: Washington D.C.
Estimated Funding Amount: $23,808,617

Summary: The George Washington University project will test a model
that will utilize mobile technologies and optimize the prevention and
care continuum (early detection, treatment adherence, retention in
care, viral load suppression, decreased hospitalizations) for HIV+
individuals.  The project will bring together a consortium of
stakeholders including community outreach organizations, clinical care
systems, a hospital, a managed care organization, the DC Department of
Health, and DC Medicaid to share integrated IT systems. Together these
systems will provide Medicaid members with the ability to receive
online education, the option of ordering home testing and home
specimen collection for sexually transmitted infections and HIV,
receive sexually transmitted infection and viral load test results,
receive e-prescriptions and support linking and relinking to care.
Additionally, the systems will provide community health workers (CHW)
with a mobile tool to collect recruitment data, to guide counseling,
testing and linkage services, and will provide CHW with a list of
active patients to provide care coordination who have detectable viral
load, missed clinic visits, missed medication refills, emergency room
visits or hospitalizations. Finally, the system will allow CHW and /or
patients to generate a care plan that will be integrated into the
primary care provider’s electronic health record, to facilitate
continuity of care.