RE: [Dbpedia-ontology] [Dbpedia-discussion] Advancing the DBpedia ontology

2015-02-25 Thread Vladimir Alexiev
 From: M. Aaron Bossert [mailto:maboss...@gmail.com]
 I am more than happy to work the ML problem with you.  

Hi Aaron!
It would be great to work with someone from Cray, but I don't have a good
idea of how to use ML here,
nor indeed much trust in using ML to produce or fix mappings.

E.g. see this exchange:
https://twitter.com/valexiev1/status/565814870973890560
Generating 30% wrong prop maps for the Ukrainian dbpedia is IMHO doing them a 
disservice!
Who's gonna clean up all this?

I guess I'm more of an MLab (Manual Labor) guy; I just learned they coined
that alias for crowdsourcing:
http://link.springer.com/chapter/10.1007%2F978-3-319-13704-9_14

   DBO: dbo:parent rdfs:range dbo:Person
   Wikipedia: | mother = [[Queen Victoria]] of [[England]]
 For your example of the dichotomy with the domain and range of mother and
 Queen Victoria being the mother, this begs for a contextual approach to that
 concept

She IS the mother, not sure what you mean.

Here a simple post-extraction cleanup can take care of it: 
remove all statements that violate range (so dbo:parent [[England]] will be 
removed).
But we dare not do it, because many of the ranges are imprecise, or set 
wishfully without regard to existing data / mappings.
(As usual, the real data is more complex than any model of it.)
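
For concreteness, such a cleanup could be a short SPARQL 1.1 Update along
these lines. A sketch only: it assumes the DBO T-Box and the instance data
sit in the same store, and it trusts the declared ranges, which is exactly
the problem below.

   PREFIX dbo:  <http://dbpedia.org/ontology/>
   PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

   # Drop dbo:parent statements whose object is not typed with the
   # declared range class, e.g. the dbo:parent [[England]] case above.
   DELETE { ?s dbo:parent ?o }
   WHERE {
     ?s dbo:parent ?o .
     dbo:parent rdfs:range ?range .
     FILTER NOT EXISTS { ?o a ?range }
   }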

So we need to check our Ontological Assumptions and make the domains/ranges
precise before such cleanup.
See example in 
http://vladimiralexiev.github.io/pres/20150209-dbpedia/dbpedia-problems-long.html#sec-6-7





Re: [Dbpedia-ontology] [Dbpedia-discussion] Advancing the DBpedia ontology

2015-02-25 Thread M. Aaron Bossert
Vladimir,

I'm thinking of trying to do some stats on the existing ontology and the 
mappings to see where there is room for improvement.  I'm tied up this week 
with a couple of deadlines that I seem to be moving towards at greater than light
speed, though my progress is not.

As soon as I get the rough cut done, I'll share the results with you and maybe 
we can discuss paths forward?

I'm with you on the 30% error rate...that doesn't help anyone.

Aaron

On Feb 25, 2015, at 08:02, Vladimir Alexiev vladimir.alex...@ontotext.com 
wrote:

 From: M. Aaron Bossert [mailto:maboss...@gmail.com]
 I am more than happy to work the ML problem with you.  
 
 Hi Aaron!
 It would be great to work with someone from Cray, but I don't have a good
 idea of how to use ML here,
 nor indeed much trust in using ML to produce or fix mappings.
 
 E.g. see this exchange:
 https://twitter.com/valexiev1/status/565814870973890560
 Generating 30% wrong prop maps for the Ukrainian dbpedia is IMHO doing them a 
 disservice!
 Who's gonna clean up all this?
 
 I guess I'm more of an MLab (Manual Labor) guy; I just learned they coined
 that alias for crowdsourcing:
 http://link.springer.com/chapter/10.1007%2F978-3-319-13704-9_14
 
 DBO: dbo:parent rdfs:range dbo:Person
 Wikipedia: | mother = [[Queen Victoria]] of [[England]]
 For your example of the dichotomy with the domain and range of mother and
 Queen Victoria being the mother, this begs for a contextual approach to that
 concept
 
 She IS the mother, not sure what you mean.
 
 Here a simple post-extraction cleanup can take care of it: 
 remove all statements that violate range (so dbo:parent [[England]] will be 
 removed).
 But we dare not do it, because many of the ranges are imprecise, or set 
 wishfully without regard to existing data / mappings.
 (As usual, the real data is more complex than any model of it.)
 
 So we need to check our Ontological Assumptions and make the domains/ranges
 precise before such cleanup.
 See example in 
 http://vladimiralexiev.github.io/pres/20150209-dbpedia/dbpedia-problems-long.html#sec-6-7
 
 



VoCamp in Energy measurement data in municipalities

2015-02-25 Thread María Poveda
(Apologies for cross-posting)

Please register at

http://goo.gl/X5XUJR

by April 1, 2015.


Call for Participation

VoCamp in Energy measurement data in municipalities

http://smartcity.linkeddata.es/LD4SC/VoCamp/


April 22-23, 2015

Austrian Institute of Technology - AIT

Vienna, Austria


Abstract - This VoCamp on Energy measurement data in municipalities is
focused on how municipalities can represent their data about energy
measurement in order to publish it online (e.g., as open data). The
interest in this question arises from expected benefits such as the
ability for third parties to easily reuse these data, or to link them to
other relevant data for further processing (e.g., building information
models, climate, occupancy).



Content - The goal of the VoCamp will be to obtain a common ontology that
can be used by municipalities to represent their energy measurement data in
order to publish such data online as Linked Data.

The event will be of a practical nature. A set of energy measurement
datasets will be selected and, dividing participants into groups, we will
define vocabularies that can be used to represent such datasets. Work will
start from a seed vocabulary and the idea is to end the VoCamp with a
common vocabulary that can be used by the different datasets. Apart from
defining this common vocabulary, we aim to analyse potential use cases for
these datasets+vocabularies as well as work on the localization of these
vocabularies.

VoCamp participants are encouraged to bring their own datasets to the
VoCamp; this way, after the event, they will have a vocabulary that can be
used with their data. To this end, datasets must be submitted to the
organizers 15 days before the VoCamp.


Venue

The event will be held at the Austrian Institute of Technology - AIT
(Energy Department), Giefinggasse 2, 1210 Vienna, Austria.

Registration

The VoCamp is open to all practitioners and researchers interested in the
application of Linked Data technologies for the publication of energy
measurement data at a municipality level.

The VoCamp event itself is free, and the organization will provide
lunches and coffee breaks. In addition, there is a limited budget for
travel and accommodation expenses, according to the legislation of the EC.
Apply for travel and accommodation reimbursement in the registration form.

Participants should register for the VoCamp through: http://goo.gl/X5XUJR

The deadline for registrations is April 1, 2015.

Important dates:

   - Registration: April 1, 2015
   - Datasets to be used: April 8, 2015
   - VoCamp: April 22-23, 2015


Organizing committee:

   - María Poveda Villalón, Universidad Politécnica de Madrid
   - Raúl García Castro, Universidad Politécnica de Madrid
   - Jan Peters-Anders, AIT
   - Andrea Cavallaro, D’Appolonia

Contact: vocamp-ene...@delicias.dia.fi.upm.es

-- 
María Poveda Villalón

PhD student
Ontology Engineering Group (OEG)
Universidad Politécnica de Madrid
Madrid, Spain

e-mail: mpov...@fi.upm.es
website: http://purl.org/net/mpoveda
http://delicias.dia.fi.upm.es/~mpoveda/
blog: http://thepetiteontologist.wordpress.com/


FGCT 2015

2015-02-25 Thread sara


Fourth International Conference on Future Generation Communication
Technologies (FGCT 2015)
University of Bedfordshire, Luton (near London) UK
July 29-31, 2015
www.socio.org.uk/fgct
(Technically co-sponsored by IEEE UK & RI)


In the last decade, a number of newer communication technologies have
evolved, which have had a significant impact on technology as a whole. The
impact ranges from incremental applications to dramatic breakthroughs in
society. Users rely heavily on broadcast technology, social media, mobile
devices, video games and other innovations to enrich the learning and
adoption process.

This conference is designed for teachers, administrators,
practitioners, researchers and scientists in the development arenas.
It aims to provide discussions and simulations of communication
technology at the broad level, and of broadcasting technology and related
technologies at the micro level. Through a set of research papers,
using an innovative and interactive approach, participants can expect to
share research that will prepare them to apply new
technologies to their work in teaching, research and educational
development amid this rapidly evolving landscape.

Topics discussed in this platform include, but are not limited to:

 Emerging cellular and new network architectures for 5G
 New antenna and RF technology for 5G wireless
 Internet of Everything
 Modulation algorithms
 Circuits, software and systems for 5G
 Convergence of multi-modes, multi-bands, multi-standards and multi-
applications in 5G systems
 Cognitive radio and collaborative transmissions in 5G
 Computing and processing platform for 5G
 Programming models and development tools to enable 5G systems
 Small cells and heterogeneous networks
 Metrics and Evaluation of 5G systems
 Standardization of 5G
 Broadcast technology
 Future Internet and networking architectures
 Future mobile communications
 Mobile Web Technology
 Mobile TV and multimedia phones
 Communication Security, Trust, Protocols and Applications
 Communication Interfaces
 Communication Modelling
 Satellite and space communications
 Communication software
 Future Generation Communication Networks
 Communication Network Security
 Communication Data Grids
 Collaborative Communication Technology
 Intelligence for future communication systems
 Forthcoming optical communication systems
 Communication Technology for E-learning, E-government, E-business
 Games and games designing
 Social technology devices, tools and applications
 Crowdsourcing and Human Computation
 Human-computer communication
 Pervasive Computing
 Grid, crowdsourcing and cloud computing
 Hypermedia systems
 Software and technologies for E-communication
 Intelligent Systems for E-communication
 Future Cloud for Communication
 Future warehousing
 Future communication for healthcare and medical devices applications
 Future communication for Mechatronic applications

All papers presented at the conference will be published in the
conference proceedings and submitted to the IEEE Xplore Digital
Library for inclusion.

The conference will have workshops on specific themes, industrial
presentations, invited talks and collaborative discussion forums.

Important Dates

Submission of Papers:   May 01, 2015
Notification of Acceptance: June 10, 2015
Camera Ready:   July 10, 2015
Conference Dates:   July 29-31, 2015


Selected papers, after extension and modification, will be published
in a number of peer-reviewed and indexed journals:

 Journal of Computer and System Sciences (ISI/Scopus)
 Journal of Digital Information Management (Scopus/EI)
 International Journal of Computational Science and Engineering
(Scopus and EI Indexed)
 Decision Analytics
 International Journal of Big Data Intelligence
 International Journal of Applied Decision Sciences (Scopus/EI)
 International Journal of Management and Decision Making (Scopus/EI)
 International Journal of Strategic Decision Sciences
 International Journal of Enterprise Information Systems (Scopus/EI)

Programme Committee

General Chair

Ezendu Ariwa, University of Bedfordshire, UK

Programme Chairs

Carsten Maple, Warwick University, UK
Yong Yue, University of Bedfordshire, UK
Hathairat Ketmaneechairat, King Mongkut’s University of Technology, Thailand


Programme Co-Chairs

Koodichimma Ibe-Ariwa, Cardiff Metropolitan University, UK
Gloria Chukwudebe, Federal University of Technology, Owerri, Nigeria

Submissions at: http://www.socio.org.uk/fgct/submission.php
Email: f...@socio.org.uk

--




Italian DBpedia 3.4. release

2015-02-25 Thread Marco Fossati

I am pleased to announce the 3.4 release of the Italian DBpedia chapter.
It includes links to Wikidata and exhaustive type coverage through DBTax.
Check out the blog post here:
http://it.dbpedia.org/2015/02/dbpedia-italiana-release-3-4-wikidata-e-dbtax/?lang=en

Cheers!
--
Marco Fossati
http://about.me/marco.fossati
Twitter: @hjfocs
Skype: hell_j




Re: Microsoft Access for RDF?

2015-02-25 Thread Graham Klyne

Hi Kingsley,

In https://lists.w3.org/Archives/Public/public-lod/2015Feb/0116.html
you said, re Annalist:

My enhancement requests would be that you consider supporting at
least one of the following, in regards to storage I/O:

1. LDP
2. WebDAV
3. SPARQL Graph Protocol
4. SPARQL 1.1 Insert, Update, Delete.

As for Access Controls on the target storage destinations, don't worry
about that in the RDF editor itself, leave that to the storage provider
[1] that supports any combination of the protocols above.


Thanks for your comments and feedback - I've taken note of them.

My original (and current) plan is to provide HTTP access (GET/PUT/POST/etc.) with
a little bit of WebDAV to handle directory content enumeration, which I think
is consistent with your suggestion (cf. [1]).  The other options you mention are
not ruled out.
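
For reference, a minimal sketch of what option 4 would look like: the body
of a request to a store's update endpoint would be a SPARQL 1.1 Update such
as the following (the graph and resource names here are purely
hypothetical):

   PREFIX ex: <http://example.org/ns#>

   # Create one record inside a named graph per collection.
   INSERT DATA {
     GRAPH <http://example.org/annalist/coll1> {
       <http://example.org/annalist/coll1/record1> a ex:Record ;
           ex:label "A sample record" .
     }
   }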


You say I shouldn't worry too much about access control, but leave that to the 
back-end store.  If by this you mean *just* access control, then that makes 
sense to me.


A challenge I face is to understand what authentication tokens are widely
supported by existing HTTP stores.  Annalist itself uses OpenID Connect (a la
Google+, etc.) as its main authentication mechanism, so I cannot assume that I
have access to original user credentials to construct arbitrary security tokens.


I had been thinking that something based on OAuth2 might be appropriate (I
looked at UMA [2], had some problems with it as a total solution, but I might be
able to use some of its elements).  I took a look at the link you provided, but
there seem to be a lot of moving parts and I couldn't really figure out what you
were describing there.


Thanks!

#g
--

[1] https://github.com/gklyne/annalist/issues/32

[2] http://en.wikipedia.org/wiki/User-Managed_Access, 
http://kantarainitiative.org/confluence/display/uma/Home






RE: [Dbpedia-discussion] Italian DBpedia 3.4. release

2015-02-25 Thread Marco Fossati
Hi Vladimir,

On 24 Feb 2015 at 08:38, Vladimir Alexiev vladimir.alex...@ontotext.com
wrote:

 Excellent, thanks!!

 1. Do you have a description of how this works?
 I can't even find your presentation from Dublin, 7 Feb
The paper describing the approach is under review at a top conference; I
will share it when the review period is over.

 2. How can I split out Drink from Food?
 E.g. Beer is in here: http://it.dbpedia.org/downloads/dbtax/A-Box/Food.ttl
 Not joking: this can help us in Europeana Food and Drink :-)
The T-Box may help you out.
You can check http://it.dbpedia.org/downloads/dbtax/T-Box.tsv
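
Purely as a hypothetical sketch of the kind of query that split could be,
once the real DBTax class URIs are looked up in T-Box.tsv (the dbtax:
namespace and class names below are assumptions, not the actual URIs):

   PREFIX dbtax: <http://example.org/dbtax/>

   # Select resources typed as both Food and Drink, so the Drink
   # subset can be pulled out of the Food dump.
   SELECT ?x WHERE {
     ?x a dbtax:Food , dbtax:Drink .
   }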

Hope this helps.
Marco

 Cheers! Vladimir




Enterprise information system

2015-02-25 Thread Hugh Glaser
So, here’s a thing.

Usually you talk to a company about introducing Linked Data technologies to
their existing IT infrastructure, emphasising that you can add stuff to work
with existing systems (low risk, low cost, etc.) to improve all sorts of stuff
(silo breakdown, comprehensive dashboards, etc.).

But what if you start from scratch?

So, the company wants to base all its stuff around Linked Data technologies, 
starting with information about employees, what they did and are doing, 
projects, etc., and moving on to embrace the whole gamut.
(Sort of like a typical personnel management core, plus a load of other related 
DBs.)

Let’s say for an organisation of a few thousand, roughly none of whom are 
technical, of course.

It’s a pretty standard thing to need, and gives great value.
Is there a solution out of the box for all the data capture from individuals, 
and reports, queries, etc.?
Or would they end up with a team of developers having to build bespoke things?
Or, heaven forfend!, would they end up using conventional methods for all the 
interface management, and then have the usual LD extra system?

Any thoughts?

-- 
Hugh Glaser
   20 Portchester Rise
   Eastleigh
   SO50 4QS
Mobile: +44 75 9533 4155, Home: +44 23 8061 5652





RE: [Dbpedia-ontology] [Dbpedia-discussion] Advancing the DBpedia ontology

2015-02-25 Thread John Flynn
It seems the first-level effort should be a requirements analysis for the
DBpedia ontology.
- What is the level of expressiveness needed in the ontology language: first-order
logic, some level of description logic, or a less expressive language?
- Based on the above, what specific ontology implementation language should
be used?
- Should the DBpedia ontology leverage an existing upper ontology, such as
SUMO, DOLCE, etc.?
- Should the DBpedia ontology architecture consist of a basic common core of
concepts (possibly in addition to the concepts in an upper ontology) that are
then extended by additional domain ontologies?
- How will the DBpedia ontology be managed?
- What are the hosting requirements for access loads on the ontology? How
many simultaneous users?

This is only a cursory cut at DBpedia ontology requirement issues. But it
seems the community needs to come to grips with this issue before
implementing specific changes to the existing ontology.

John Flynn
http://semanticsimulations.com   

-Original Message-
From: M. Aaron Bossert [mailto:maboss...@gmail.com] 
Sent: Wednesday, February 25, 2015 9:13 AM
To: vladimir.alex...@ontotext.com
Cc: dbpedia-ontology; Linked Data community; SW-forum;
dbpedia-discuss...@lists.sourceforge.net
Subject: Re: [Dbpedia-ontology] [Dbpedia-discussion] Advancing the DBpedia
ontology

Vladimir,

I'm thinking of trying to do some stats on the existing ontology and the
mappings to see where there is room for improvement.  I'm tied up this week
with a couple of deadlines that I seem to be moving towards at greater than light
speed, though my progress is not.

As soon as I get the rough cut done, I'll share the results with you and
maybe we can discuss paths forward?

I'm with you on the 30% error rate...that doesn't help anyone.

Aaron

 




Re: [Dbpedia-discussion] [Dbpedia-ontology] Advancing the DBpedia ontology

2015-02-25 Thread Mike Bergman

Hi John,

My thoughts are for DBpedia to stay close to the mission of extracting
quality data from Wikipedia, and no more. That quality extraction is an
essential grease for the linked data ecosystem, and of major benefit
to anyone needful of broadly useful structured data.


I think both Wikipedia and DBpedia have shown that crowdsourced entity
information and data work beautifully, but the ontologies or knowledge
graphs (category structures) that emerge from these efforts are mush.


DBpedia, or schema.org from that standpoint, should not be concerned so
much about coherent schemas, computable knowledge graphs, ontological
defensibility, or any such T-Box considerations. They have demonstrably
shown themselves not to be strong in these suits.


No one hears the term "folksonomy" any more because all its initial admirers
have seen that no crowd-sourced schema really works (from dmoz to
Freebase). A schema is not something to be universally agreed upon, but a
framework by which to understand a given domain. Yet the conundrum is that,
to organize anything globally, some form of conceptual agreement about a
top-level schema is required.


Look to what DBpedia now does strongly: extract vetted structured data
from Wikipedia for broader consumption on the Web of data.


My counsel is to not let DBpedia's mission stray into questions of
conceptual truth. Keep the ontology flat and simple, with no
aspirations other than "just the facts, ma'am".


Thanks, Mike

On 2/25/2015 10:33 PM, M. Aaron Bossert wrote:

John,

You make a good point...but are we talking about a complete tear-down of the 
existing ontology?  I'm not necessarily opposed to that notion, but want to make
sure that we are all in agreement as to the scope of work, as it were.

What would be the implications of a complete redo?  Would the benefit outweigh 
the impact to the community?  I would assume that there would be a ripple 
effect across all other LOD datasets that map to dbpedia, correct?  Or am I 
grossly overstating/misunderstanding how interconnected the ontology is?

Vladimir, your thoughts?

Aaron


On Feb 25, 2015, at 21:14, John Flynn jflyn...@verizon.net wrote:

It seems the first-level effort should be a requirements analysis for the
DBpedia ontology.
- What is the level of expressiveness needed in the ontology language: first-order
logic, some level of description logic, or a less expressive language?
- Based on the above, what specific ontology implementation language should
be used?
- Should the DBpedia ontology leverage an existing upper ontology, such as
SUMO, DOLCE, etc.?
- Should the DBpedia ontology architecture consist of a basic common core of
concepts (possibly in addition to the concepts in an upper ontology) that are
then extended by additional domain ontologies?
- How will the DBpedia ontology be managed?
- What are the hosting requirements for access loads on the ontology? How
many simultaneous users?

This is only a cursory cut at DBpedia ontology requirement issues. But it
seems the community needs to come to grips with this issue before
implementing specific changes to the existing ontology.

John Flynn
http://semanticsimulations.com

-Original Message-
From: M. Aaron Bossert [mailto:maboss...@gmail.com]
Sent: Wednesday, February 25, 2015 9:13 AM
To: vladimir.alex...@ontotext.com
Cc: dbpedia-ontology; Linked Data community; SW-forum;
dbpedia-discuss...@lists.sourceforge.net
Subject: Re: [Dbpedia-ontology] [Dbpedia-discussion] Advancing the DBpedia
ontology

Vladimir,

I'm thinking of trying to do some stats on the existing ontology and the
mappings to see where there is room for improvement.  I'm tied up this week
with a couple of deadlines that I seem to be moving towards at greater than light
speed, though my progress is not.

As soon as I get the rough cut done, I'll share the results with you and
maybe we can discuss paths forward?

I'm with you on the 30% error rate...that doesn't help anyone.

Aaron













Re: [Dbpedia-discussion] [Dbpedia-ontology] Advancing the DBpedia ontology

2015-02-25 Thread M. Aaron Bossert
The one thing I would say is that while I agree in general...the one thing that
keeps eating away at me is that there is tremendous potential in DBpedia for
bigger questions to be answered, but the more advanced analytics require that
some level of sanity exists within the ontology...much more so than now.  As an
example, I have created several different applications for customers that are
based on DBpedia...one of which is a recommender system.  The level of effort
required to simply say (in SPARQL, of course) "show me every living person that
is highly similar to person X, excluding politicians, athletes and actors" is
quite a tedious thing to do until after I have fixed all the erroneous and
missing properties associated with "things" in general...which person class do
I focus on?  Which living people?  Which politicians?  Perhaps legislators?  It
gets pretty ugly, pretty quickly.
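
Just the exclusion part of that query already turns into a pile of class
guesses. A rough sketch, where the choice of classes is exactly the
guesswork in question and the similarity scoring is left out entirely:

   PREFIX dbo: <http://dbpedia.org/ontology/>

   # "Living, and not a politician/athlete/actor" -- each FILTER is a
   # guess about which class actually carries the intended meaning.
   SELECT DISTINCT ?person WHERE {
     ?person a dbo:Person .
     FILTER NOT EXISTS { ?person dbo:deathDate ?d }
     FILTER NOT EXISTS { ?person a dbo:Politician }
     FILTER NOT EXISTS { ?person a dbo:Athlete }
     FILTER NOT EXISTS { ?person a dbo:Actor }
   }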

I'm not sure that the ontology needs to be completely rewritten, but surely it
can't be that difficult to clean it up a bit with a little common-sense logic
applied, such as: if a thing has a death date (never mind which one), then
surely they are not a living person...or if they hold a political office,
surely they must be a politician.
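
One such rule as a SPARQL 1.1 Update sketch. The "Living people" category
URI below is the usual DBpedia one, but treat it as an assumption:

   PREFIX dbo: <http://dbpedia.org/ontology/>
   PREFIX dct: <http://purl.org/dc/terms/>

   # Strip the Living_people category from anything carrying a death date.
   DELETE { ?p dct:subject <http://dbpedia.org/resource/Category:Living_people> }
   WHERE {
     ?p dbo:deathDate ?d ;
        dct:subject <http://dbpedia.org/resource/Category:Living_people> .
   }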

Aaron

 On Feb 26, 2015, at 00:19, Mike Bergman m...@mkbergman.com wrote:
 
 Hi John,
 
 My thoughts are for DBpedia to stay close to the mission of extracting
 quality data from Wikipedia, and no more. That quality extraction is an
 essential grease for the linked data ecosystem, and of major benefit to
 anyone needful of broadly useful structured data.
 
 I think both Wikipedia and DBpedia have shown that crowdsourced entity
 information and data work beautifully, but the ontologies or knowledge
 graphs (category structures) that emerge from these efforts are mush.
 
 DBpedia, or schema.org from that standpoint, should not be concerned so much
 about coherent schemas, computable knowledge graphs, ontological
 defensibility, or any such T-Box considerations. They have demonstrably shown
 themselves not to be strong in these suits.
 
 No one hears the term "folksonomy" any more because all its initial admirers
 have seen that no crowd-sourced schema really works (from dmoz to Freebase).
 A schema is not something to be universally agreed upon, but a framework by
 which to understand a given domain. Yet the conundrum is that, to organize
 anything globally, some form of conceptual agreement about a top-level schema
 is required.
 
 Look to what DBpedia now does strongly: extract vetted structured data from
 Wikipedia for broader consumption on the Web of data.
 
 My counsel is to not let DBpedia's mission stray into questions of conceptual
 truth. Keep the ontology flat and simple, with no aspirations other than
 "just the facts, ma'am".
 
 Thanks, Mike
 
 On 2/25/2015 10:33 PM, M. Aaron Bossert wrote:
 John,
 
 You make a good point...but are we talking about a complete tear-down of the 
 existing ontology?  I'm not necessarily opposed to that notion, but want to
 make sure that we are all in agreement as to the scope of work, as it were.
 
 What would be the implications of a complete redo?  Would the benefit 
 outweigh the impact to the community?  I would assume that there would be a 
 ripple effect across all other LOD datasets that map to dbpedia, correct?  
 Or am I grossly overstating/misunderstanding how interconnected the ontology 
 is?
 
 Vladimir, your thoughts?
 
 Aaron
 
 On Feb 25, 2015, at 21:14, John Flynn jflyn...@verizon.net wrote:
 
 It seems the first-level effort should be a requirements analysis for the
 DBpedia ontology.
 - What is the level of expressiveness needed in the ontology language: first-order
 logic, some level of description logic, or a less expressive language?
 - Based on the above, what specific ontology implementation language should
 be used?
 - Should the DBpedia ontology leverage an existing upper ontology, such as
 SUMO, DOLCE, etc.?
 - Should the DBpedia ontology architecture consist of a basic common core of
 concepts (possibly in addition to the concepts in an upper ontology) that are
 then extended by additional domain ontologies?
 - How will the DBpedia ontology be managed?
 - What are the hosting requirements for access loads on the ontology? How
 many simultaneous users?
 
 This is only a cursory cut at DBpedia ontology requirement issues. But it
 seems the community needs to come to grips with this issue before
 implementing specific changes to the existing ontology.
 
 John Flynn
 http://semanticsimulations.com
 
 -Original Message-
 From: M. Aaron Bossert [mailto:maboss...@gmail.com]
 Sent: Wednesday, February 25, 2015 9:13 AM
 To: vladimir.alex...@ontotext.com
 Cc: dbpedia-ontology; Linked Data community; SW-forum;
 dbpedia-discuss...@lists.sourceforge.net
 Subject: Re: [Dbpedia-ontology] [Dbpedia-discussion] Advancing the DBpedia
 ontology
 
 Vladimir,
 
 I'm thinking of trying to do some stats on the existing ontology and the

Re: [Dbpedia-ontology] [Dbpedia-discussion] Advancing the DBpedia ontology

2015-02-25 Thread M. Aaron Bossert
John,

You make a good point...but are we talking about a complete tear-down of the 
existing ontology?  I'm not necessarily opposed to that notion, but want to make
sure that we are all in agreement as to the scope of work, as it were.

What would be the implications of a complete redo?  Would the benefit outweigh 
the impact to the community?  I would assume that there would be a ripple 
effect across all other LOD datasets that map to dbpedia, correct?  Or am I 
grossly overstating/misunderstanding how interconnected the ontology is? 

Vladimir, your thoughts?

Aaron

 On Feb 25, 2015, at 21:14, John Flynn jflyn...@verizon.net wrote:
 
 It seems the first-level effort should be a requirements analysis for the
 DBpedia ontology.
 - What is the level of expressiveness needed in the ontology language: first-order
 logic, some level of description logic, or a less expressive language?
 - Based on the above, what specific ontology implementation language should
 be used?
 - Should the DBpedia ontology leverage an existing upper ontology, such as
 SUMO, DOLCE, etc.?
 - Should the DBpedia ontology architecture consist of a basic common core of
 concepts (possibly in addition to the concepts in an upper ontology) that are
 then extended by additional domain ontologies?
 - How will the DBpedia ontology be managed?
 - What are the hosting requirements for access loads on the ontology? How
 many simultaneous users?
 
 This is only a cursory cut at DBpedia ontology requirement issues. But it
 seems the community needs to come to grips with this issue before
 implementing specific changes to the existing ontology.
 
 John Flynn
 http://semanticsimulations.com   
 
 -Original Message-
 From: M. Aaron Bossert [mailto:maboss...@gmail.com] 
 Sent: Wednesday, February 25, 2015 9:13 AM
 To: vladimir.alex...@ontotext.com
 Cc: dbpedia-ontology; Linked Data community; SW-forum;
 dbpedia-discuss...@lists.sourceforge.net
 Subject: Re: [Dbpedia-ontology] [Dbpedia-discussion] Advancing the DBpedia
 ontology
 
 Vladimir,
 
 I'm thinking of trying to do some stats on the existing ontology and the
 mappings to see where there is room for improvement.  I'm tied up this week
 with a couple of deadlines that I seem to be moving towards at greater than light
 speed, though my progress is not.
 
 As soon as I get the rough cut done, I'll share the results with you and
 maybe we can discuss paths forward?
 
 I'm with you on the 30% error rate...that doesn't help anyone.
 
 Aaron
 
 
 



Re: Enterprise information system

2015-02-25 Thread Giovanni Tummarello
Hugh,

I think if you send them down a route where they have to write bespoke
software (which uses RDF concepts, and for which developers are hard to find
and retain) for purposes for which mature, widely tested and widely deployed
software exists, you'd be doing them a disservice.

Eventually they'll find someone showing them how these things are normally
done, and they'll say "hey, but this is what we really need - give it to us
now". At that point this could spoil both your reputation with them and
their perception of LD technologies, which could on the other hand be
useful if used in moderation, or in domains where data variability is
indeed extreme.

I would recommend looking for a good open source personnel or project
management system (groupware etc.) and seeing if it makes sense to introduce
concepts such as unique identifiers used across the organization (which could
be resolvable URIs, thus giving you a homepage for every core concept of the
company). But be flexible even in this case if you are to add any LD at
all: people often prefer type+number (e.g. personnel ID, project code) to
URIs, so if you do a global lookup interface for all, don't insist they must
use URIs to find something. However, if anything does in fact show up at a
stable and nice URI in their browser, they'll naturally refer to it when
passing each other references in emails etc., but this is the same as
what they would be doing with any reputable content management system.

my2c
Gio

On Wed, Feb 25, 2015 at 10:06 PM, Hugh Glaser h...@glasers.org wrote:

 So, here’s a thing.

 Usually you talk to a company about introducing Linked Data technologies
 to their existing IT infrastructure, emphasising that you can add stuff to
 work with existing systems (low risk, low cost etc.) to improve all sorts
 of stuff (silo breakdown, comprehensive dashboards, etc.)

 But what if you start from scratch?

 So, the company wants to base all its stuff around Linked Data
 technologies, starting with information about employees, what they did and
 are doing, projects, etc., and moving on to embrace the whole gamut.
 (Sort of like a typical personnel management core, plus a load of other
 related DBs.)

 Let’s say for an organisation of a few thousand, roughly none of whom are
 technical, of course.

 It’s a pretty standard thing to need, and gives great value.
 Is there a solution out of the box for all the data capture from
 individuals, and reports, queries, etc.?
 Or would they end up with a team of developers having to build bespoke
 things?
 Or, heaven forfend!, would they end up using conventional methods for all
 the interface management, and then have the usual LD extra system?

 Any thoughts?

 --
 Hugh Glaser
20 Portchester Rise
Eastleigh
SO50 4QS
 Mobile: +44 75 9533 4155, Home: +44 23 8061 5652