Re: Where are the Linked Data Driven Smart Agents (Bots) ?

2016-07-06 Thread Gannon Dick
Hi Ruben,

On Wed, 7/6/16, Kingsley Idehen  wrote:
"Smart Agents and Bots are now hot topics across the industry at large."

bullet point - Wants are getting a little ahead of wishes, as usual :(

What people already believe about Linked Data is that {an SQL  right outer join 
of Category Name Elements on Topic Name Elements in a homogeneous name space - 
e.g. counts grouped by Category Name} is a unique vector. SQL has fewer shades 
of promiscuity than SPARQL.
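Gannon's parenthetical can be made concrete. Below is a minimal sketch in stdlib Python/SQLite with invented table, category, and topic names (nothing here comes from a real dataset); the right outer join of topics on categories is written as the mirror-image LEFT JOIN so that older SQLite builds, which lack RIGHT JOIN, can run it:

```python
import sqlite3

# Hypothetical homogeneous name space: a Category table and a Topic
# table whose rows point at category names. All names are invented.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE category (name TEXT PRIMARY KEY);
    CREATE TABLE topic (name TEXT, category TEXT REFERENCES category(name));
    INSERT INTO category VALUES ('Physics'), ('History'), ('Music');
    INSERT INTO topic VALUES
        ('Exclusion Principle', 'Physics'),
        ('Spin', 'Physics'),
        ('Second Grade', 'History');
""")

# A right outer join of Topic on Category, expressed as the equivalent
# LEFT JOIN from category to topic: every category appears exactly once,
# topicless categories included, so the counts grouped by Category Name
# form one fixed-length vector over the category name space.
rows = con.execute("""
    SELECT c.name, COUNT(t.name)
    FROM category AS c
    LEFT JOIN topic AS t ON t.category = c.name
    GROUP BY c.name
    ORDER BY c.name
""").fetchall()
print(rows)  # → [('History', 1), ('Music', 0), ('Physics', 2)]
```

The "unique vector" property holds because the grouped result has exactly one row per category, whether or not any topic mentions it.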

This is not a commercial deal breaker ... rather: a long-ago verbal contract, 
possibly signed under duress, with your Second Grade Teacher.  When she said 
"2+2=4", substantiation would be forthcoming; her work product was 
guaranteed against material defects.  The contract is still in effect, 
world-wide.  No Nobel Prizes to be had here.  Wolfgang Pauli already won - the 
Pauli Exclusion Principle means that "2+2=5" is, in Pauli's words, "Not Even 
Wrong!".  Little justice in the prize judging, BTW.  Nuns (I had Dominicans) 
have been communicating the same message for centuries. Mistakes were made.  
Knuckles jumped in front of wooden rulers on a regular basis, etc. :)

Ruben ... 
"One of the main problems I see is how our community  (now particularly 
thinking about the scientific subgroup) receives submissions of novel work."

I think ...
Maybe the problem is the identification of "novel work".  Interoperability can 
depend on the novel nature of the work or Induced Knuckle PTSD. Just a guess, 
but not many Software Patents mention Nuns or knuckles.  Somebody might do a 
survey though.

Ruben ...
" We have evolved into an extremely quantitative-oriented view, where anything 
that can be measured with numbers is largely favored over anything that cannot."

I think ...
True enough.
In addition ...
1) The arbiters of taste (paying customers) went to Second Grade, and in 
subsequent steps the first professional trick they learned was writing an 
annual balance sheet. Content: (12 Monthly Values) + (4 Quarterly sums) + (1 
Annual sum) = 17 pigeon holes.

2) All of us folk, OTOH, may have encountered von Neumann Architecture first.  
A balance sheet is not von Neumann Architecture.  The 17 pigeon holes are 
"smart" or so you evangelize, but all will be for naught if you relabel the 
pigeon holes.   Actually, the birds won't care, but you will unsettle the 
pigeons mightily because they consider von Neumann Architecture a disruptive 
technology, and always will. 


It just seems to me that - knowing that web discovery is prone to at least 17 
pigeon-hole misdemeanors and probably all seven deadly sins as well - it is 
wise to proceed stepwise before accepting the Web of Things' identification of 
reputable brands of truth.

Some English guy said it a whole lot better, BTW ...
“When you have eliminated all which is impossible, then whatever remains, 
however improbable, must be the truth.”  ― Arthur Conan Doyle, The Case-Book of 
Sherlock Holmes

--Gannon


On Wed, 7/6/16, Ruben Verborgh  wrote:

 Subject: Re: Where are the Linked Data Driven Smart Agents (Bots) ?
 To: "Kingsley Idehen" 
 Cc: "public-lod" 
 Date: Wednesday, July 6, 2016, 11:38 AM
 
 Hi,
 
 This is a very important question for our community,
 given that smart agents once were an important theme.
 Actually, the main difference we could bring with the
 SemWeb
 is that our clients could be decentralized
 and actually run on the client side, in contrast to others.
 
 One of the main problems I see is how our community
 (now particularly thinking about the scientific subgroup)
 receives submissions of novel work.
 We have evolved into an extremely quantitative-oriented
 view,
 where anything that can be measured with numbers
 is largely favored over anything that cannot.
 
 Given that the smart agents / bots field is quite new,
 we don't know the right evaluation metrics yet.
 As such, it is hard to publish a paper on this
 at any of the main venues (ISWC / ESWC / …).
 This discourages working on such themes.
 
 Hence, I see much talent and time going to
 incremental research, which is easy to evaluate well,
 but not necessarily as ground-breaking.
 More than a decade of SemWeb research
 has mostly brought us intelligent servers,
 but not yet the intelligent clients we wanted.
 
 So perhaps we should phrase the question more broadly:
 how can we as a community be more open
 to novel and disruptive technologies?
 
 Best,
 
 Ruben



Re: Close to spam (was Re: [CfP] Reminder: Submit to ISWC2016 Doctoral Consortium!)

2016-05-19 Thread Gannon Dick
David, the part of my comment which was not confidential was some wisdom from a 
famous English Author with Copyright Troll experience ... 

-
Everybody knows this one:
"The first thing we do, let's kill all the lawyers." (2 Henry VI, 4.2.59)
William Shakespeare
-
But this one is more instructive for Linked Open Data and Open Source Web 
Applications:
DROMIO OF SYRACUSE. There's no time for a man to recover his hair that grows 
bald by nature.
ANTIPHOLUS OF SYRACUSE. May he not do it by fine and recovery?
DROMIO OF SYRACUSE. Yes, to pay a fine for a periwig and (meaning: "is to") 
recover the lost hair of another man.

(The Comedy of Errors, 2.2.71)
William Shakespeare
--

Forsooth, already, Phil :)

--Gannon

On Thu, 5/19/16, David Booth  wrote:

 Subject: Re: Close to spam (was Re: [CfP] Reminder: Submit to ISWC2016  
Doctoral Consortium!)
 To: "Phil Archer" , "Miel Vander Sande" 
, "semantic...@yahoogroups.com" 
, 
"service-orientated-architect...@yahoogroups.com" 
, "www-rdf-inter...@w3.org" 
, "www-rdf-lo...@w3.org" , 
"www-rdf-ru...@w3.org" , "public-sparql-...@w3.org" 
, "a...@listserv.heanet.ie" 
, "public-lod@w3.org" , 
"semantic-...@w3.org" 
 Date: Thursday, May 19, 2016, 11:41 AM
 
 Wow, that is strict, given that ISWC is clearly relevant to all of the
 lists below.
 
 David Booth
 
 On 05/19/2016 11:24 AM, Phil Archer wrote:
 > Miel,
 >
 > Thank you for including [CfP] - that's helpful and this is clearly a
 > relevant event. But you have sent this to
 >
 > www-rdf-inter...@w3.org
 > www-rdf-lo...@w3.org
 > www-rdf-ru...@w3.org
 > public-sparql-...@w3.org
 > public-lod@w3.org
 > semantic-...@w3.org
 >
 > One is enough, and that one should be semantic-...@w3.org.
 >
 > CfPs should not be sent to any of the other W3C ones.
 >
 > As you may have seen, we're cracking down on this.
 >
 > Thanks
 >
 > Phil
 >
 > On 19/05/2016 14:50, Miel Vander Sande wrote:
 >> Submit to ISWC2016 Doctoral Consortium!
 >> ==
 >> 15th International Semantic Web Conference (ISWC 2016)
 >> Kobe, Japan, October 17-21, 2016
 >>
 >> Website: http://iswc2016.semanticweb.org
 >> Facebook: https://www.facebook.com/groups/113652365383847
 >> Twitter: https://twitter.com/ISWC2016
 >>
 >> The ISWC 2016 Doctoral Consortium will take place as part of the 15th
 >> International Semantic Web Conference in Kobe, Japan. This forum will
 >> provide PhD students an opportunity to share and develop their
 >> research ideas in a critical but supportive environment, to get
 >> feedback from mentors who are senior members of the Semantic Web
 >> research community, to explore issues related to academic and research
 >> careers, and to build relationships with other Semantic Web PhD
 >> students from around the world.
 >>
 >> The Consortium aims to broaden the perspectives and to improve the
 >> research and communication skills of these students.
 >>
 >> The Doctoral Consortium is intended for students who have a specific
 >> research proposal and some preliminary results, but who have
 >> sufficient time prior to completing their dissertation to benefit from
 >> the consortium experience. Generally, students in their second or
 >> third year of PhD will benefit the most from the Doctoral Consortium.
 >> In the Consortium, the students will present their proposals and get
 >> specific feedback and advice on how to improve their research plan.
 >>
 >> All proposals submitted to the Doctoral Consortium will undergo a
 >> thorough reviewing process with a view to providing detailed and
 >> constructive feedback. The international program committee will select
 >> the best submissions for presentation at the Doctoral Consortium.
 >>
 >> We anticipate that students with accepted submissions at the Doctoral
 >> Consortium will receive travel fellowships to offset some of the
 >> travel costs.
 >>
 >> Submissions due: May 30, 2016
 >> Detailed info:
 >> http://iswc2016.semanticweb.org/pages/calls/doctoral-consortium.html
 >>
 >> Program Chairs
 >> * Philippe Cudré-Mauroux - University of Fribourg, Switzerland
 >> * Natasha Noy - Google Inc.
 >> * Riichiro Mizoguchi - JAIST, Japan
 >>
 >
 



Re: Temporal validity: alternative for dcterms:valid?

2015-12-24 Thread Gannon Dick
Hi Frans,

"a versioning scheme based on DCMI has a weak spot: the property for denoting 
temporal validity (dcterms:valid) is impractical to the point of being 
unusable."

Well, sort of.  If one is trying to rank temporal validity then one is trying 
to rank the negative pattern of the property.  One has to wake a sleeping 
naughty boy or girl up to determine  the naughtiness property.  In another 
familiar case, in a Hospital patients are woken up at all hours to take 
medications - you can be just as ill at home, with uninterrupted sleep.  This 
seems to me to be the same logical problem (post hoc ergo propter hoc).

Santa Claus, et al. can do this sort of thing without waking up naughty boys 
and girls.   An Artificial Intelligence which believes it can do what Santa 
Claus can do is quite unhinged, I think :)

--Gannon 

On Thu, 12/24/15, Frans Knibbe  wrote:

 Subject: Temporal validity: alternative for dcterms:valid?
 To: public-lod@w3.org
 Date: Thursday, December 24, 2015, 9:57 AM
 
 Hello again,
 
 The DCMI Metadata Terms vocabulary seems to have all the basic
 ingredients for building a versioning mechanism into a dataset (which
 is or should be a very common requirement). Objects in a dataset can
 have life spans (temporal validity), be versions
 (dcterms:hasVersion/dcterms:isVersionOf) of another resource and
 replace each other (dcterms:replaces/dcterms:isReplacedBy).
 
 But as Jeni Tennison has noted some time ago (see final section
 'Unanswered Questions'), a versioning scheme based on DCMI has a weak
 spot: the property for denoting temporal validity (dcterms:valid) is
 impractical to the point of being unusable. Dcterms:valid only takes
 literals (rdfs:Literal) as value, which makes it hard to use it for
 practical expressions of time intervals. Time intervals should be
 compound objects that are based on useful datatypes. For instance,
 xsd:dateTime (for dates) or xsd:integer (for years or seconds (e.g. in
 UNIX time)) could be used in SPARQL queries to filter or order
 temporal data. In a versioned dataset, queries like 'give me all
 changes between time T1 and time T2' or 'give me the state of the
 dataset at time T3' should be easy to create and to resolve. It seems
 to me that this requires proper and well supported datatypes. A text
 string notation for time intervals is recommended by DCMI:
 dcmi-period. It is easy and versatile enough, but the average triple
 store probably does not recognize this notation as temporal or
 numerical data. So I wonder if there is a good alternative for
 dcterms:valid somewhere that can be used to indicate temporal
 validity.
 
 I did find http://www.w3.org/ns/prov#invalidatedAtTime in PROV-O,
 which could be considered applicable, but a matching property to
 indicate the start of the time period of validity does not seem to
 exist in PROV-O. Also, its range is xsd:dateTime, which I think is too
 restrictive because the time needs to be known up to the level of
 seconds.
 
 Does this gap still need to be plugged? Or is the solution out there?
 
 Greetings,
 Frans
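Frans's core complaint (dcterms:valid holds an opaque literal, so a triple store cannot compare validity periods) is easiest to see next to a sketch with typed endpoints. Everything below is a hypothetical stand-in: plain Python datetimes in place of xsd:dateTime values, a list of dicts in place of a triple store, and invented property names that are not DCMI or PROV-O terms:

```python
from datetime import datetime

# Hypothetical versioned resources: each carries typed validity
# endpoints (stand-ins for a "valid from" / "valid until" property
# pair with xsd:dateTime values) instead of the opaque dcmi-period
# string that dcterms:valid would hold.
versions = [
    {"id": "doc/v1",
     "valid_from": datetime(2015, 1, 1), "valid_until": datetime(2015, 6, 1)},
    {"id": "doc/v2",
     "valid_from": datetime(2015, 6, 1), "valid_until": datetime(2016, 1, 1)},
]

def state_at(t):
    """'Give me the state of the dataset at time T3' reduced to an
    endpoint comparison - the query a SPARQL engine could answer with
    FILTER(?from <= ?t && ?t < ?until) if the endpoints were typed."""
    return [v["id"] for v in versions
            if v["valid_from"] <= t < v["valid_until"]]

print(state_at(datetime(2015, 7, 15)))  # → ['doc/v2']
```

With a single string value such as "start=2015-01-01; end=2015-06-01;" (dcmi-period), neither the comparison nor the ordering is available without custom parsing, which is exactly the gap Frans describes.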



Re: New Extension for revealing Structured Data Islands embedded in HTML pages

2015-12-05 Thread Gannon Dick
+1
Very useful.  There is a definite need for a GRDDL replacement :)

http://www.w3.org/TR/grddl-tests/


On Sat, 12/5/15, Kingsley Idehen  wrote:

 Subject: New Extension for revealing Structured Data Islands embedded in HTML  
pages
 To: "'W3C Web Schemas Task Force'" 
 Cc: "public-lod@w3.org" , ontolog-fo...@googlegroups.com
 Date: Saturday, December 5, 2015, 12:50 PM
 
 All,
 
 Here's a quick FYI about our recently released browser extension that
 simplifies discovery of metadata-oriented structured data embedded in
 HTML docs. Naturally, this extension also functions as a Linked Data
 exploration launch point for follow-your-nose discovery of related Web
 resources.
 
 A few key benefits:
 
 [1] Discovering use of schema.org terms across pages and web sites
 [2] Evaluating document metadata quality in relation to SEO optimization.
 
 Current browser support includes: Chrome, Opera, and Firefox (nightly
 builds only).
 
 Enjoy!
 
 Links:
 
 [1] http://osds.openlinksw.com -- Home Page
 [2]
 http://kidehen.blogspot.com/2015/12/openlink-structured-data-sniffer-osds.html
 -- Blog Post about extension
 [3] https://github.com/openlink/structured-data-sniffer/releases
 
 -- 
 Regards,
 
 Kingsley Idehen          
 Founder & CEO 
 OpenLink Software     
 Company Web: http://www.openlinksw.com
 Personal Weblog 1: http://kidehen.blogspot.com
 Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
 Twitter Profile: https://twitter.com/kidehen
 Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
 LinkedIn Profile: http://www.linkedin.com/in/kidehen
 Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this
 



Re: New Extension for revealing Structured Data Islands embedded in HTML pages

2015-12-05 Thread Gannon Dick
Wow.  It is even self-documenting, just like Groucho promised ...

"I refuse to join any club that would have me as a member." -- Groucho Marx

Hello gannon_d...@yahoo.com,

We're writing to let you know that the group you tried to contact 
(ontolog-forum) may not exist, or you may not have permission to post messages 
to the group. A few more details on why you weren't able to post ...

+1 with Oak Leaf Clusters, Kingsley

Sorry Alphabet, couldn't resist.


On Sat, 12/5/15, Gannon Dick <gannon_d...@yahoo.com> wrote:

 Subject: Re: New Extension for revealing Structured Data Islands embedded in  
HTML  pages
 To: "'W3C Web Schemas Task Force'" <public-voc...@w3.org>, "Kingsley Idehen" 
<kide...@openlinksw.com>
 Cc: "public-lod@w3.org" <public-lod@w3.org>, ontolog-fo...@googlegroups.com
 Date: Saturday, December 5, 2015, 3:23 PM
 
 +1
 Very useful.  There is a definite need for a GRDDL
 replacement :)
 
 http://www.w3.org/TR/grddl-tests/
 
 



RE: What Happened to the Semantic Web?

2015-11-16 Thread Gannon Dick
At the equator the Sun God works half-days.  Fine example for the little folk, 
that.

Plus another.

On Mon, 11/16/15, john.nj.dav...@bt.com  wrote:

 Subject: RE: What Happened to the Semantic Web?
 To: janow...@ucsb.edu, neil...@oilit.com, public-lod@w3.org, 
semantic-...@w3.org
 Date: Monday, November 16, 2015, 3:24 PM
 
 +1
 
 Bigdata as the new religion debunked
 
 -Original Message-
 From: Krzysztof Janowicz [mailto:janow...@ucsb.edu]
 
 Sent: 16 November 2015 17:41
 To: Neil McNaughton ;
 public-lod@w3.org;
 semantic-...@w3.org
 Subject: Re: What Happened to the Semantic
 Web?
 
 > In this context you might like to see what Google thinks
 >
 > https://www.google.com/trends/explore#q=rdf%2C%20%2Fm%2F0f2vj%2C%20%2Fm%2F076k0=q=Etc%2FGMT-1
 
 Or this link here ;-)
 
 http://www.wired.com/2015/10/can-learn-epic-failure-google-flu-trends/
 
 On 11/16/2015 01:59 AM, Neil McNaughton wrote:
 > In this context you might like to see what Google thinks
 >
 > https://www.google.com/trends/explore#q=rdf%2C%20%2Fm%2F0f2vj%2C%20%2Fm%2F076k0=q=Etc%2FGMT-1
 >
 > or
 >
 > https://books.google.com/ngrams/graph?content=rdf%2Csemantic+web%2Cresource+description+framework_start=2000_end=2008=15=0=_url=t1%3B%2Crdf%3B%2Cc0%3B.t1%3B%2Csemantic%20web%3B%2Cc0%3B.t1%3B%2Cresource%20description%20framework%3B%2Cc0
 >
 > Best regards
 > Neil McNaughton
 > Editor and Publisher, Oil IT Journal
 > Now in its 20th year!
 >
 > Oil IT Journal is published by The Data Room SARL
 > 7 Rue des Verrieres
 > 92310 Sevres, France
 > Cell - +336 7271 2642
 > Tel - +331 4623 9596
 > UK - +44 20 7193 1489
 > USA - +1 281 968 0752
 > i...@oilit.com / http://www.oilit.com
 >
 > -Original Message-
 > From: Michael Brunnbauer [mailto:bru...@netestate.de]
 > Sent: Friday, November 13, 2015 5:31 PM
 > To: Kingsley Idehen 
 > Cc: Ruben Verborgh ; public-lod@w3.org; semantic-...@w3.org
 > Subject: Re: What Happened to the Semantic Web?
 >
 > hi all,
 >
 > correct me if I am wrong:
 >
 > -Google CSE
 >   -cannot be queried programmatically without violating the Google TOS
 >   -will only accept a disjunctive list of schema.org classes as restriction
 >   -will only find pages mentioning things, not things
 >
 > -Google products generally will not recognize triples with classes or
 >  properties outside the schema.org namespace (with selected exceptions,
 >  e.g. Goodrelations). This is understandable, but:
 >
 > -There is no way to tell Google crawlers that your classes/properties
 >  are specializations of schema.org classes/properties.
 >
 > I would say we are not there yet.
 >
 > Regards,
 >
 > Michael Brunnbauer
 >
 
 
 -- 
 Krzysztof Janowicz
 
 Geography Department,
 University of California, Santa Barbara
 4830
 Ellison Hall, Santa Barbara, CA 93106-4060
 
 Email: j...@geog.ucsb.edu
 Webpage: http://geog.ucsb.edu/~jano/
 Semantic Web Journal: http://www.semantic-web-journal.net
 
 



Re: What Happened to the Semantic Web?

2015-11-11 Thread Gannon Dick
Hi All,

Melvin had a very good point about the vector *types*.  Some of the types are 
not so benign in implementation [1]. When the meta data sits atop a firewall as 
XML, it is shared, and the DOM is transferred intact. With XHTML this means 
unbalanced access to the name space subdivisions:

1) "http://www.w3.org/1999/xhtml#head" (ennoblement)
2) "http://www.w3.org/1999/xhtml#body" (entailment)

The challenges for meta data sets are 1) persistence 2) reliable visibility and 
3) provenance, all as observed from outside the perimeter.

--Gannon

[1] https://www.us-cert.gov/ncas/alerts/TA15-314A

On Wed, 11/11/15, Dave Raggett  wrote:

We’re evolving the Web from a Web of pages to a much bigger Web of things.
 
 
 
 On 11 Nov 2015, at 14:55, Melvin Carvalho  wrote:

  Really enjoyed.
 
 I think sem web has evolved on a number of vectors
 
 On 11 November 2015 at 15:17, Kingsley Idehen  wrote:

 I think I inadvertently forgot to share this blog post [1] with this community.
 

 



Re: What Happened to the Semantic Web?

2015-11-11 Thread Gannon Dick

On Wed, 11/11/15, Wouter Beek  wrote:

I find it difficult to see why centralization will not be the end game for the 
SW as it has been for so many other aspects of computing (search, email, social 
networking, even simple things like text chat).  The WWW shows that the 'soft 
benefits' of privacy, democratic potential, and data
ownership are not enough to make distributed solutions succeed.

-
fait accompli ?  No.  Just no. As Wolfgang Pauli put it "That's not even 
wrong!".



Re: What Happened to the Semantic Web?

2015-11-11 Thread Gannon Dick


On Wed, 11/11/15, Ruben Verborgh  wrote:

 Subject: Re: What Happened to the Semantic Web?
 To: "Kingsley Idehen" 
 Cc: public-lod@w3.org, "semantic-...@w3.org" 
 Date: Wednesday, November 11, 2015, 11:25 AM
 
 
To me, the Semantic Web is like Google, but then run on my machine: my client 
knows my preferences, doesn't share them, but uses them to find 
information on the Web for me. I still hope to see that. Then, we might be 
there.
 
--
Nicely put Ruben.

I live in the countryside now, but lived in large cities for a couple of 
decades.  I prefer the country, but it is a negative pattern.  In a large city 
one is never far from a place one would rather not be.  The unease is constant 
and persistent.  To the extent that Google insists I "experience" my kitchen on 
their terms I have no time for them.
 



Re: Are there any datasets about companies? ( DBpedia Open Data Initiative)

2015-11-06 Thread Gannon Dick
Hi all,

Organizational Identifiers are a bit dangerous for the little people to talk 
about :-)

1) First, some food for thought ... if FOAF identifies real people rigorously, 
one would think complexity less and convergence faster for many fewer 
organizations.  That would make no sense, unless (reads manual).
2) Second, an observation ... is the "Open World" assumption an HTML ordered 
list or an HTML unordered list ?  Who decides ? Hint: Moses had 10 
Commandments, but plainly meant an unordered list.  Even the most (hardened 
agnostic) developer should be able to admit that 10 Commandments in an 
unordered list and 10 Items in an ordered list is not a valid substitution 
pattern. Learning to love Turtle is not a resolution to this dilemma, BTW.
3) Strategy Markup Language (StratML) Collections resolve these issues by using 
a compound key:
a) Organization Name -> Acronym (caps of Proper Case)
b) Subdivision Name -> UUID, Acronym (caps of Proper Case)
c) (StratML (XML) File Name) ->   (Acronym from a) DOT (Acronym from b) DOT xml

This can enable styling within the Core by CSS or XSLT while maintaining 
Collection integrity because an OUTER JOIN on Organization Name preserves a 
collection of right-directed graphs.  If this sounds like slavery to you, take 
a nap, "they" can't own your dreams ;-)
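The naming rule in 3a-3c can be sketched in a few lines. The organization and subdivision names below are invented examples, and the UUID half of the subdivision key (rule 3b) is generated but, per rule 3c, only the acronyms reach the file name:

```python
import uuid

def acronym(proper_case_name):
    """Caps of Proper Case: the capital initials of a name's words."""
    return "".join(w[0] for w in proper_case_name.split() if w[0].isupper())

# Invented example names, not drawn from any real StratML collection.
org = "World Wide Web Consortium"          # rule 3a: Organization Name
sub = "Linking Open Data"                  # rule 3b: Subdivision Name
sub_key = (str(uuid.uuid4()), acronym(sub))  # UUID + Acronym per 3b

# Rule 3c: (Acronym from a) DOT (Acronym from b) DOT xml
file_name = f"{acronym(org)}.{acronym(sub)}.xml"
print(file_name)  # → WWWC.LOD.xml
```

Because the organization acronym leads every file name, a join on Organization Name groups each subdivision file under exactly one root, which is the right-directed-graph property the paragraph above relies on.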

--Gannon

On Thu, 11/5/15, Rolf Kleef  wrote:

 Subject: Re: Are there any datasets about companies? ( DBpedia Open Data  
Initiative)
 To: "Sebastian Hellmann" , "Kay Müller" 
, "Chris Taggart" 
, public-lod@w3.org
 Date: Thursday, November 5, 2015, 6:49 AM
 
 Hi Sebastian, Kay,
 
 If you haven't done it yet, I suggest getting in touch with Chris
 Taggart of Open Corporates (cc'd). He has years of experience doing
 this, and is also involved in cross-standards work on "organisational
 identifiers", crucial in the development of, for instance, the Open
 Contracting Data Standard and the International Aid Transparency
 Initiative:
 
 http://www.open-contracting.org/
 http://iatistandard.org/201/organisation-identifiers/
 
 ~~Rolf.
 
 On 03/11/15 16:17, Sebastian Hellmann wrote:
 > [Apologies for cross-posting]
 > 
 > Dear all,
 > this message is part announcement of an open data initiative and part
 > call for feedback and support.
 > 
 > We are considering to work on creating a free, open and interoperable
 > dataset on companies and organisations, which we are planing to
 > integrate into DBpedia+ and offer as dump download. As we are in a very
 > early phase of the endeavour, we would like to know whether there is
 > existing work in this area.
 > 
 > We are looking for any available datasets which have information about
 > companies and other organizations in any language and any country.
 > Ideally, the datasets are:
 > 1. downloadable as dump
 > 2. openly licensed, e.g. CC-BY following the http://opendefinition.org/
 > 3. in an easily parseable format, e.g. RDF or CSV and not PDF
 > 
 > But hey! Send around anything you know, and we will look at it and see
 > whether we can make use of it. You can reach us either by replying to
 > this email or send feedback directly to me and Kay Müller.
 > If you have any private/closed data, please contact us as well. We might
 > make use of it to cross-reference and validate public/open data with it.
 > Or just learn from it to build a good scheme.
 > 
 > We started a link collection here (and attached the current status at
 > the end of this email)
 > https://docs.google.com/document/d/1IaWSSt4_SZVhypvB1QzBlCtBuMQHv-q5Ti0n8xoZFIQ/edit
 > Also we started to collect potential identifiers for linking here:
 > https://docs.google.com/spreadsheets/d/1EMqemA1BlqvyOXGLzYbvY0IcBCAhaRd5XgYLMWIxGsA/edit#gid=0
 > 
 > Regards and thank you for any support on this,
 > Sebastian and Kay
 > 
 > ##
 > 
 > https://docs.google.com/document/d/1IaWSSt4_SZVhypvB1QzBlCtBuMQHv-q5Ti0n8xoZFIQ/edit
 > 
 > Open Company Data
 >   - Identifiers for companies/organisation
 >   - URIs (Linked Data/Semantic Web)
 >   - Downloadable Datasets with Company info (confirmed)
 >   - Portals with no bulk downloads

RE: [Caution: Message contains Redirect URL content] Re: Ontology to link food and diseases

2015-05-06 Thread Gannon Dick
+1, with a Science caveat ...

nano-_ is a marketing nonsense word and followed by ... the smallest 
unit of ... is nonsense x 10^5.  The atomic conversion of a serial number 
tagged (URI) decimal number (currency for example, bills and coins) is 10,000 
(URI) + D(0) = 1M (URN) + D(0) {D(0) = Domain)}.

AFAICT, there is no 10^4 prefix ... may I propose web = centi-mega-, which 
will at least enable the bookkeepers to count RDF Labels (and Quantum Groups) in 
pennies, Euro cents, etc.

--Gannon

On Wed, 5/6/15, Svensson, Lars l.svens...@dnb.de wrote:

 Subject: RE: [Caution: Message contains Redirect URL content] Re: Ontology  to 
link food and diseases
 To: Bernard Vatant bernard.vat...@mondeca.com, Marco Brandizi 
brand...@ebi.ac.uk
 Cc: Linking Open Data public-lod@w3.org
 Date: Wednesday, May 6, 2015, 10:07 AM
 

 
 Hi Marco,
 
 This sounds like a use case for nanopublications [1]. They define it as
 
 [[
 A nanopublication is the smallest unit of publishable information: an
 assertion about anything that can be uniquely identified and attributed
 to its author.
 
 Individual nanopublications can be cited by others and tracked for
 their impact on the community.
 ]]
 
 [1] http://nanopub.org/wordpress/
 
 Best,
 
 Lars
 
 *** Lesen. Hören. Wissen. Deutsche Nationalbibliothek ***
 --
 Dr. Lars G. Svensson
 Deutsche Nationalbibliothek
 Informationsinfrastruktur und Bestandserhaltung
 Adickesallee 1
 D-60322 Frankfurt am Main
 Telefon: +49-69-1525-1752
 Telefax: +49-69-1525-1799
 mailto:l.svens...@dnb.de
 http://www.dnb.de
 
 
 
 From: Bernard Vatant [mailto:bernard.vat...@mondeca.com]
 Sent: Tuesday, May 05, 2015 9:59 AM
 To: Marco Brandizi
 Cc: Linking Open Data
 Subject: [Caution: Message contains Redirect URL content] Re: Ontology to link food and diseases
 
 Hi Marco
 
 This is a very touchy domain, where vocabularies and data should be
 carefully wrapped within provenance, source, time stamp, authority.
 More than anywhere else, beware of any positivist, unique thought,
 truth-based approach ...
 
 The examples you give are not facts, but just statements which should
 be backed by literature. Exceptions and different viewpoints exist, etc.
 
 Think about the fact it will feed algorithms, at the end of the day.
 And if you make them public, end in Google Knowledge Graph ...
 
 See http://bvatant.blogspot.fr/2015/02/statements-are-only-statements.html
 
 2015-05-03 23:20 GMT+02:00 Marco Brandizi brand...@ebi.ac.uk:
 
 Hi all,
 
 I'm looking for an ontology/controlled vocabulary/alike that links
 food ingredients/substances/dishes to human diseases/conditions, like
 intolerances, allergies, diabetes etc.
 
 Examples of information I'd like to find coded (please assume they're
 true, I'm no expert):
   - gluten must be avoided by people affected by coeliac disease
   - omega-3 is good for people with high cholesterol
   - sugar should be avoided by people with diabetes risk
 
 I also would like linked data about commercial food products, but even
 an ontology without 'instances' would be useful.
 
 So far, I've found an amount of literature (eg, [1-3]) and vocabularies
 like AGROVOC[4], but nothing like the above.
 
 Thanks in advance for any help!
 Marco
 
 [1] http://fruct.org/publications/abstract14/files/Kol_21.pdf
 [2] http://www.researchgate.net/publication/224331263_FOODS_A_Food-Oriented_Ontology-Driven_System
 [3] http://www.hindawi.com/journals/tswj/aip/475410/
 [4] http://tinyurl.com/ndtdhwn
 
 --
 ===
 Marco Brandizi, PhD brand...@ebi.ac.uk, http://www.marcobrandizi.info
 
 Functional Genomics Group - Sr Software Engineer
 http://www.ebi.ac.uk/microarray
 
 European Bioinformatics Institute (EMBL-EBI)
 European Molecular Biology Laboratory
 Wellcome Trust Genome Campus, Hinxton, Cambridge CB10 1SD, United Kingdom
 
 Office V2-26, Phone: +44 (0)1223 492 613, Fax: +44 (0)1223 492 620
 
 --
 Bernard Vatant
 Vocabularies & Data Engineering
 Tel: +33 (0)9 71 48 84 59
 Skype: bernard.vatant
 http://google.com/+BernardVatant
 
 Mondeca
 35 boulevard de Strasbourg 75010 Paris
 www.mondeca.com
 Follow us on Twitter: @mondecanews




Re: Ontology to link food and diseases

2015-05-05 Thread Gannon Dick
In light of Bernard's comments (nice job, BTW) may I suggest StratML ...

(http://www.iso.org/iso/catalogue_detail.htm?csnumber=59859)

StratML helps prevent man-in-the-middle substitution attacks on code sets at 
the sub-domain (submitter) level.  These attacks are kin to SQL Injection 
Attacks .. a SPARQL Injection Attack, so to speak.  When the encoding formats 
are well defined and complete (with exception handling) - and this is certainly 
the case with Health practices - then the code set still follows the rules 
(http://en.wikipedia.org/wiki/Kerckhoffs%27s_principle) but excludes security 
by obscurity as a motivation.

It's not a bug, it's a feature.  No, it's a bug all right, and Biologists 
discovered it long ago ...

Dans les champs de l'observation le hasard ne favorise que les esprits 
préparés.
(In the fields of observation chance favors only the prepared mind.)
 -- Louis Pasteur (1822 – 1895)

Happy Cinco de Mayo

-- Gannon

On Tue, 5/5/15, Marco Brandizi brand...@ebi.ac.uk wrote:

 Subject: Re: Ontology to link food and diseases
 To: public-lod@w3.org
 Date: Tuesday, May 5, 2015, 4:39 AM
 
 
 Hi Bernard, 
 
 
 
 I've just given a few examples, to give an idea of
 which kind of
 formal representation I'm looking for. I agree with
 you that some
 form of provenance/evidence tracking would be useful
 (even something
 as simple as pointers to the provenance of a whole data
 set and
 criteria that were used to build it).
 
 
 
 Cheers,
 
 Marco.
 
 
 
 
 
 On 05/05/2015 08:58,
 Bernard Vatant
   wrote:
 
 
 
   
 
   
 Hi Marco
 
   
 
 
 This is a very touchy domain, where vocabularies
 and data
 should be carefully wrapped within provenance,
 source, time
 stamp, authority. More than anywhere else,
 beware of any
 positivist, unique thought, thruth-based
 approach ...
 
   
   The examples you give are not facts, but just
 statements which
   should be backed by literature. Exceptions and
 different
   viewpoints exist, etc.
 
 
 Think about the fact it will feed algorithms,
 at the end of
   the day. And if you make them public, end in
 Google Knowledge
   Graph ...
 
   
 
 
 See 
http://bvatant.blogspot.fr/2015/02/statements-are-only-statements.html
 
 
 
 
 
   
 
   
 
 2015-05-03 23:20 GMT+02:00
   Marco Brandizi brand...@ebi.ac.uk:
 
   
  Hi all, 
 
   
 
   I'm looking for an
 ontology/controlled
   vocabulary/alike that links food
   ingredients/substances/dishes to human
   diseases/conditions, like
 intolerances, allergies,
   diabetes etc. 
 
   
 
   Examples of information I'd like
 to find coded
   (please assume they're true,
 I'm no expert):
 
     - gluten must be avoided by people
 affected by
   coeliac disease
 
     - omega-3 is good for people with
 high
   cholesterol
 
     - sugar should be avoided by people
 with
   diabetes risk
 
   
 
   I also would like linked data about
 commercial
   food products, but even an ontology
 without
   'instances' would be useful. 
 
 
   
 
   So far, I've found an amount of
 literature (eg,
   [1-3]) and vocabularies like
 AGROVOC[4], but
   nothing like the above.
 
   
 
   Thanks in advance for any help!
 
   Marco
 
   
 
   [1]
 
 http://fruct.org/publications/abstract14/files/Kol_21.pdf
   
 
   [2] 
http://www.researchgate.net/publication/224331263_FOODS_A_Food-Oriented_Ontology-Driven_System
 
   [3] 
   http://www.hindawi.com/journals/tswj/aip/475410/
 
   [4] http://tinyurl.com/ndtdhwn
 
 
 
   
   -- 
 
 ===
 Marco Brandizi, PhD brand...@ebi.ac.uk,
 http://www.marcobrandizi.info
 
 Functional Genomics Group - Sr Software Engineer
 http://www.ebi.ac.uk/microarray
 
 European Bioinformatics Institute (EMBL-EBI)
 European Molecular Biology Laboratory
 Wellcome Trust Genome Campus, Hinxton, Cambridge CB10 1SD,
 United 

Re: Ontology to link food and diseases

2015-05-04 Thread Gannon Dick
Good idea, Marco.  A collated reverse lookup of medical literature for food 
incompatibilities would be helpful for the layman.  The US FDA and Centers for 
Disease Control maintain a Poison Hot Line (phone).  They may well have 
created a reverse lookup / ontology already.

You can obtain all the instances (citations) you could ever want here ...

https://pubmed.codeplex.com/

http://www.ncbi.nlm.nih.gov/sites/gquery
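As a sketch of pulling such citations programmatically, the NCBI E-utilities esearch endpoint can be queried over HTTP; the search term and result count below are illustrative choices, and the request itself is left to the caller:

```python
from urllib.parse import urlencode

# Build a PubMed E-utilities search URL; "term" and "retmax" are
# illustrative, not a recommendation.
base = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
params = {"db": "pubmed", "term": "coeliac disease AND gluten", "retmax": 20}
url = base + "?" + urlencode(params)
```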



On Sun, 5/3/15, Marco Brandizi brand...@ebi.ac.uk wrote:

 Subject: Ontology to link food and diseases
 To: public-lod@w3.org
 Date: Sunday, May 3, 2015, 4:20 PM
 
 
   
 
 
   
   
 Hi all, 
 
 
 
 I'm looking for an ontology/controlled
 vocabulary/alike that links
 food ingredients/substances/dishes to human
 diseases/conditions,
 like intolerances, allergies, diabetes etc. 
 
 
 
 Examples of information I'd like to find coded
 (please assume
 they're true, I'm no expert):
 
   - gluten must be avoided by people affected by
 coeliac disease
 
   - omega-3 is good for people with high cholesterol
 
   - sugar should be avoided by people with diabetes
 risk
 
 
 
 I also would like linked data about commercial food
 products, but
 even an ontology without 'instances' would be
 useful.  
 
 
 
 So far, I've found an amount of literature (eg,
 
 
 [1-3]) and vocabularies like AGROVOC[4], but nothing
 like the above.
 
 
 
 Thanks in advance for any help!
 
 Marco
 
 
 
 
 [1]
  
 http://fruct.org/publications/abstract14/files/Kol_21.pdf
 
 
 
 [2]
 
 
http://www.researchgate.net/publication/224331263_FOODS_A_Food-Oriented_Ontology-Driven_System
 
 [3] 
 
 http://www.hindawi.com/journals/tswj/aip/475410/
 
 [4] http://tinyurl.com/ndtdhwn
 
   
 
 
 
 
 -- 
 
 ===
 Marco Brandizi, PhD brand...@ebi.ac.uk,
 http://www.marcobrandizi.info
 
 Functional Genomics Group - Sr Software Engineer
 http://www.ebi.ac.uk/microarray
 
 European Bioinformatics Institute (EMBL-EBI)
 European Molecular Biology Laboratory
 Wellcome Trust Genome Campus, Hinxton, Cambridge CB10 1SD,
 United Kingdom 
 
 Office V2-26, Phone: +44 (0)1223 492 613, Fax: +44 (0)1223
 492 620  
 
   
 




Re: Algorithm evaluation on the complete LOD cloud?

2015-04-24 Thread Gannon Dick
@Gannon here.

Apologies Paul, my sarcasm went a bit over the top.

If only new creation of list labels (data definitions) is considered, then 
there is only one choice of structure for any large collection of *well* 
organized RDF data.

<rdf:list>
   <rdf:first>Sum partial fractions e.g. a Ground State</rdf:first>

   <rdf:rest>re-normalization group fraction</rdf:rest>
   <rdf:rest>re-normalization group fraction</rdf:rest>
   <rdf:rest>re-normalization group fraction</rdf:rest>
...
   <rdf:nil />
</rdf:list>

Semantic data does not need ground state change (Bayesian inference) to be 
useful.  Inflation as homage to the Open World Assumption does much harm to 
insight.  No need to subject the dynamics to continuous compounding (change of 
Radix in LOG Space), because it is already there.

--Gannon 

On Fri, 4/24/15, Paul Houle ontolo...@gmail.com wrote:

 Subject: Re: Algorithm evaluation on the complete LOD cloud?
 To: Gannon Dick gannon_d...@yahoo.com
 Cc: public-lod@w3.org public-lod@w3.org, SW-forum Web 
semantic-...@w3.org, Laurens Rietveld laurens.rietv...@vu.nl
 Date: Friday, April 24, 2015, 9:39 AM
 
 Here is my
 take.
 The Complete LOD
 cloud is a stand-in for any large collection of
 poorly organized RDF data.  If you believe that RDF
 is a good model for representing other sorts of data, you
 could imagine that some big organization like Citibank or
 the U.S. Military has a large number of different divisions
 that have all sorts of data of various quality.  In fact if
 I look at all the files I have on my SOHO network you could
 say the same is true for individuals and small biz
 too.
 Then the right
 question to ask is What Methods would one use to
 characterize such a data set with little prior
 knowledge?
 That
 is a carefully chosen phrase.  @Gannon rails against
 frequentism, and there are a number of ways to reach a
 similar conclusion,  such as
 * the grounding problem
 in classical semantics* the fact that any useful
 or interesting semantic system has to do something or other
 that is competitive with some way of doing something that is
 better in some way (i.e. if you don't know where you are
 going you are going to wind up nowhere)
 Also I find the no special
 hardware requirements thing to be strange,  probably
 because it ought to be defined in terms of I have a
 machine with these specific specifications.  For
 instance,  if you had a machine with 32GB of RAM (which is
 pretty affordable if you don't pay OEM prices) you could
 load a billion triples into a triple store.  If your
 machine is a hand-me-down laptop from a salesman who
 couldn't sell that has just 4GB of RAM you are in a very
 different situation.
 On Thu, Apr 23, 2015 at
 1:14 PM, Gannon Dick gannon_d...@yahoo.com
 wrote:
 Hi
 Laurens,
 
 
 
 Ignore the hecklers, I know what you mean.
 
 
 
 Look at the two solutions to the German Tank
 Problem: http://en.wikipedia.org/wiki/German_tank_problem
 
 
 
 The analyses illustrate the difference between
 frequentist inference and Bayesian inference.
 
 Estimating the population maximum based on a single sample
 yields divergent results, while the estimation based on
 multiple samples is an instructive practical estimation
 question whose answer is simple but not obvious.
 
 
 
 A complete LOD Cloud has frequentist inference
 labels, the LOD Cloud the hecklers want to build adds
 Bayesian inference (aka spam or spinning or
 semantic) labels.  So what's the right answer ?  The
 right answer is that the Bayesian inference folks want you
to speak <predicate>German</predicate> like them
 and frequentist inference folks just want to count Tanks
 correctly.
 
 
 
 The frequentist-istas  are boring, with just a single
 answer (transformation) and they insist on spewing normative
 information all over the Universe.  No wonder semantic
 hipsters mock them.  Newton, Einstein, Fermi, Dirac,
 Feynman ... all losers ... not smart enough to make up their
 own labels for things.  Chaos and Informative data sets
 FOREVER!
 
 
 
 --Gannon
 
 
 
 
 
 
 
 
 
 
 
 On Thu, 4/23/15, Laurens Rietveld laurens.rietv...@vu.nl
 wrote:
 
 
 
  Subject: Algorithm evaluation on the complete LOD
 cloud?
 
  To: public-lod@w3.org
 public-lod@w3.org,
 SW-forum Web semantic-...@w3.org
 
  Date: Thursday, April 23, 2015, 6:21 AM
 
 
 
  Hi all,
 
  I'm doing some research on evaluating
 
  algorithms on the complete LOD cloud (via http://lodlaundromat.org),
 
  and am looking for existing papers and algorithms to
 
  evaluate
 
   The criteria for such an algorithm are:
    - It should be open source
    - Domain independent
    - No dependency on third data sources, such as query logs or a gold standard
    - No particular hardware dependencies (e.g. a cluster)
    - The algorithm should take a dataset as input, and produce results as output
  
   Many thanks in advance for any suggestions
  
   Best, Laurens
 
 
 
 
 
  --
 
  VU University

Re: Algorithm evaluation on the complete LOD cloud?

2015-04-23 Thread Gannon Dick
Hi Laurens,

Ignore the hecklers, I know what you mean.

Look at the two solutions to the German Tank Problem: 
http://en.wikipedia.org/wiki/German_tank_problem

The analyses illustrate the difference between frequentist inference and 
Bayesian inference.
Estimating the population maximum based on a single sample yields divergent 
results, while the estimation based on multiple samples is an instructive 
practical estimation question whose answer is simple but not obvious.
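The frequentist estimate referred to here is short enough to state in code; a minimal sketch, using the illustrative serial numbers from the Wikipedia article:

```python
def frequentist_tank_estimate(serials):
    """Minimum-variance unbiased estimate of the population maximum:
    m + m/k - 1, where m = largest serial seen, k = sample size."""
    m, k = max(serials), len(serials)
    return m + m / k - 1

# Observing serials 19, 40, 42, 60 gives 60 + 60/4 - 1 = 74.0
estimate = frequentist_tank_estimate([19, 40, 42, 60])
```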

A complete LOD Cloud has frequentist inference labels, the LOD Cloud the 
hecklers want to build adds Bayesian inference (aka spam or spinning or 
semantic) labels.  So what's the right answer ?  The right answer is that the 
Bayesian inference folks want you to speak <predicate>German</predicate> like 
them and frequentist inference folks just want to count Tanks correctly.

The frequentist-istas  are boring, with just a single answer (transformation) 
and they insist on spewing normative information all over the Universe.  No 
wonder semantic hipsters mock them.  Newton, Einstein, Fermi, Dirac, Feynman 
... all losers ... not smart enough to make up their own labels for things.  
Chaos and Informative data sets FOREVER!

--Gannon

 



On Thu, 4/23/15, Laurens Rietveld laurens.rietv...@vu.nl wrote:

 Subject: Algorithm evaluation on the complete LOD cloud?
 To: public-lod@w3.org public-lod@w3.org, SW-forum Web 
semantic-...@w3.org
 Date: Thursday, April 23, 2015, 6:21 AM
 
 Hi all,
 I'm doing some research on evaluating
 algorithms on the complete LOD cloud (via http://lodlaundromat.org),
 and am looking for existing papers and algorithms to
 evaluate
 The criteria for such an algorithm are:
  - It should be open source
  - Domain independent
  - No dependency on third data sources, such as query logs or a gold standard
  - No particular hardware dependencies (e.g. a cluster)
  - The algorithm should take a dataset as input, and produce results as output
 
 Many thanks in advance for any suggestions
 
 Best, Laurens
 
 
 -- 
 VU University Amsterdam
 Faculty of Exact Sciences
 Department of Computer Science
 De Boelelaan 1081 A
 1081 HV Amsterdam
 The Netherlands
 www.laurensrietveld.nl
 laurens.rietv...@vu.nl
 Visiting address: De Boelelaan 1081, Science Building Room T312



Re: Looking for pedagogically useful data sets

2015-03-13 Thread Gannon Dick
SQL queries are a semantics killer ***for those situations where the 
uncertainty inherent in semantics (Bayesian inference) cannot be eliminated 
and the deterministic solution (frequentist inference) is unavailable***.

The terms Bayesian inference and frequentist inference are defined and 
conceptually validated here: (http://en.wikipedia.org/wiki/German_tank_problem).

SPARQL queries preserve semantics by sampling the as yet unrealized 
deterministic solution.  In the case where the deterministic solution is known 
(http://en.wikipedia.org/wiki/Birthday_problem), the apparently conflicting 
results are due to mere addition of quantum states.
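The birthday problem's deterministic solution mentioned above is easy to verify directly; a quick sketch:

```python
def p_shared_birthday(n, days=365):
    """Probability that at least two of n people share a birthday,
    assuming uniform, independent birthdays."""
    p_distinct = 1.0
    for i in range(n):
        p_distinct *= (days - i) / days
    return 1 - p_distinct

# The probability first exceeds 50% at n = 23, the well-known result.
```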

The difference in problems is a difference in conceptual motivation (initial 
conditions). This is a trivial solution to chaos effects on probability 
calculations, which involve a rounding at some tolerance of the constant 
sqrt(2*PI).

Kingsley is right.  To presume the infallibility of SPARQL is an ethical 
hazard.  To presume that a DBMS is complete when it is not is also an ethical 
hazard.  Let the debate continue, if it ends we all lose.

--Gannon



On Fri, 3/13/15, Kingsley Idehen kide...@openlinksw.com wrote:

 Subject: Re: Looking for pedagogically useful data sets
 To: public-lod@w3.org
 Date: Friday, March 13, 2015, 9:05 AM
 
 On 3/12/15 5:38 PM, Paul
 Houle wrote:
  The goal is to show that
 you can do the same things you do with a 
  relational database,  and maybe *just* a
 little bit more.
 
 Every RDF
 store is a relational database management system (RDBMS). As
 
 you know, an RDF compliant RDBMS simply
 group sets of RDF 3-tuples by 
 statement
 predicate.
 
 We can't
 continue to concede the notion of a relational database 
 management to SQL relational database
 management systems (sets of 
 n-tuples
 grouped by Table Name).
 
 Maybe we should start referring to SPARQL
 compliant RDF stores as SPARQL 
 Relational
 Database Management Systems, just like SQL Relational 
 Database Management Systems which have now
 become synonymous with 
 Relational Database
 Management System. Then just a little more
 becomes 
 much closer to demonstrable
 reconciliation of the truth, the whole 
 truth, and nothing but the truth, in regards to
 relations, databases, 
 and database
 management systems :)
 
 ACID has nothing to do with what constitutes an
 RDBMS either, that's an 
 a useful, but
 optional feature of any RDBMS. So don't fall for that
 
 baloney laden push-back when taking the
 SPARQL RDBMS position.
 
 We
 MUST end the SQL RDBMS power-grab! It has done a major
 disservice to 
 the entire DBMS industry,
 over the last 40+ years. You have a 
 multi-billion dollar industry that's
 fundamentally about companies and 
 individuals that are data-access-heavy and
 data-exploitation-challenged 
 i.e., they
 have tons of data (Big Data these days), but
 still can't 
 achieve basic agility goals
 in regards to: accessing, integrating, and 
 moving data effectively to the right people, at
 the right time, in the 
 right form, and in
 appropriate context etc..
 
 Links:
 
 [1] http://bit.ly/spasql-sql-querying-based-on-sparql-table-relation
 -- 
 demonstrating that relations are
 relations (even when the underlying 
 tuple
 organizations vary e.g., when organized as sql relational
 tables 
 or rdf statements graphs) .
 
 [2] http://www.openlinksw.com/c/9C5DNHYW --
 Relation .
 
 [3] http://www.openlinksw.com/c/9BVTLIAG --
 SQL Relation .
 
 [4] http://www.openlinksw.com/c/9BH3NH7S --
 RDF Relation.
 
 [5] http://www.openlinksw.com/c/9BDLVDX3 --
 Differentiating Database 
 (a
 Document comprised of sets of Relations [Data] ) from
 Database 
 Management System
 (software for indexing and querying culled from 
 Database Documents).
 
 -- 
 Regards,
 
 Kingsley Idehen    
 Founder  CEO
 OpenLink
 Software
 Company Web: http://www.openlinksw.com
 Personal Weblog 1: http://kidehen.blogspot.com
 Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
 Twitter Profile: https://twitter.com/kidehen
 Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
 LinkedIn Profile: http://www.linkedin.com/in/kidehen
 Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this
 




Re: Microsoft Access for RDF?

2015-02-20 Thread Gannon Dick
If you don't like double housekeeping (most programmers know the pitfalls 
here), then using OWL or inference rules  you can also infer attendance from 
the arrival events.

Are most programmers who work for the Human Resources Department ignorant or 
just really scary ?
It's Friday.  Get thee to beer.  Quickly.


On Fri, 2/20/15, Stian Soiland-Reyes soiland-re...@cs.manchester.ac.uk wrote:

 Subject: Re: Microsoft Access for RDF?
 To: Michael Brunnbauer bru...@netestate.de
 Cc: public-lod@w3.org, Pat Hayes pha...@ihmc.us
 Date: Friday, February 20, 2015, 3:53 PM
 
 Sorry, now I
 forgot my strawman! Too late on a Friday..
 
 So say the user of an triple-order-preserving
 UI says:
 document prov:wasAttributedTo :alice,
 :charlie, :bob.
 .. And consider the order important because Bob
 didn't contribute as much to the document as Alice and
 Charlie.
 In that case the above statements is not
 detailed enough and some new property or resource is needed
 to represent this distinction in RDF.
 Here I would think OWL fear combined with
 desire to reuse existing vocabularies mean that you
 don't get specific enough. Its OK to state the same
 relation with two different properties, and even better to
 make a new sub property that explains the combination.
 In the strawman, using more specific properties
 like pav:authoredBy and prov:wasInfluencedBy would clarify
 the distinction much more than an ordered list with an
 unspecified order criteria.
 
 In other cases the property is really giving a
 shortcut, say;
 meeting :attendedBy :john, :alice,
 :charlie .
 ..And the user is also encoding arrival time at
 the meeting by the list order. 
 But this is using :attendesBy to describe both
 who were there, and when they arrived. In this case, the
 event of arriving could better be modelled separately with a
 partial ordering.
 If you don't like double housekeeping (most
 programmers know the pitfalls here), then using OWL or
 inference rules  you can also infer attendance from the
 arrival events. 
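The inference Stian describes (attendance derived from arrival events, with ordering carried by timestamps rather than list position) can be sketched without any double bookkeeping; the meeting, people, and times below are invented for illustration:

```python
# Hypothetical arrival events: (meeting, person, arrival time).
arrivals = [
    ("meeting1", "ex:john", "09:00"),
    ("meeting1", "ex:alice", "09:05"),
    ("meeting1", "ex:charlie", "09:12"),
]

# Attendance is inferred from arrivals, so it is stored only once;
# the partial ordering comes from the timestamp, not list position.
attendees = {person for meeting, person, _ in arrivals if meeting == "meeting1"}
order = [person for _, person, _ in sorted(arrivals, key=lambda e: e[2])]
```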
 




Re: Microsoft Access for RDF?

2015-02-19 Thread Gannon Dick
Hi Paul,

Thanks for the detailed answer.

There is human culture, and language, etc..  All in all we are a complicated 
species, but somehow, at whatever Latitude we are, snow, rain, heat, gloom of 
night, etc. we maintain an average internal body temperature of about 37 
Centigrade (98.6 F). Humans all have this property, and we are the only animals 
with gadgets that locate us in time and space.  We wander around in an 
isotropic fashion like  Englishmen without dogs. It is not Brownian Motion, one 
can chart seasonal variation in location ... after all we might be 
Australians without dogs.

Here are the charts.  The variation over a year's time compared with tropical 
variation (tropics < 45 Degrees N and > -45 Degrees South) is tiny, tiny, tiny 
(< 13 arc minutes).
http://www.rustprivacy.org/2015/locating-humans-with-gadgets.pdf

Gadgets wildly over exaggerate their judgement as to our true location over 
time.  The commercial world desperately wants to believe they know something 
important and Linked Open Data cannot stop their believing. We can, however 
make sure we don't feed the beast.  It won't hurt RDF or Semantics a bit.

Stian said, and I would agree:
No, this is dangerous and is hiding the truth. Take the red pill and admit to 
the user that this particular property is unordered, for instance by always 
listing the values sorted (consistency is still king).
 

--Gannon

On Wed, 2/18/15, Paul Houle ontolo...@gmail.com wrote:

 Subject: Re: Microsoft Access for RDF?
 To: Gannon Dick gannon_d...@yahoo.com
 Cc: Linked Data community public-lod@w3.org
 Date: Wednesday, February 18, 2015, 6:10 PM
 
 Yes,
  there is the general project of capturing 100% of critical
 information in documents and that is a wider problem than
 the large amount of Linked Data which is basically
 RelationalDatabase++.
 Note
 that a lot of data in this world is in spreadsheets (like
 relational tables but often less discipline) and in formats
 like XML and JSON that are object-relational in the sense
 that a column can contain either a list or set of
 rows.
 Even before we
 tackle the problem of representing the meaning of written
 language (particularly when it comes to policies,
  standards documents,  regulations,  etc. as opposed to
 Finnegan's Wake or Mary had a little lamb)
 there is the slightly easier problem of understanding all
 the semi-structured data out there.
 Frankly I think the Bible gets
 things all wrong at the very beginning with In the
 beginning there was the word... because in the
 beginning we were animals and then we developed the language
 instinct,  which is probably a derangement in our ability
 to reason about uncertainty which reduces the sample space
 for learning grammar.
 Often people suggest that animals
 are morally inferior to humans and I think the bible has a
 point where in some level we screw it up,  because animals
 don't do the destructive behaviors that seem so
 characteristic of our species and that are often tied up
 with our ability to use language to construct contafactuals
 such as the very idea that there is some book that has all
 the answers because once somebody does that,  somebody else
 can write a different book and say the same thing and then
 bam your are living after the postmodern heat death,  even
 a few thousand years before the greeks.
 (Put simply:  you will find that
 horses,  goats,  cows,  dogs,  cats and other
 domesticated animals consistently express pleasure when you
 come to feed them,  which reinforces their feeding.  A
 human child might bitch that they aren't getting what
 they want,  which does not reinforce parental feeding
 behavior.  Call it moral or social reasoning or whatever
 but when it comes to maximizing one's utility function,
  animals do a good job of it.  The only reason I'm
 afraid to help an injured raccoon is that it might have
 rabies.)
 Maybe the
 logical problems you get as you try to model language have
 something to do with human nature,  but the language
 instinct is a peripheral of an animal and it can't be
 modeled without modeling the animal.
 There is a huge literature of first
 order logic,  temporal logic,  modal logic and other
 systems that capture more of what is in language and the
 question of what comes after RDF is interesting;
  the ISO Common Logic idea that we go back to the predicate
calculus and just let people make statements with arity > 2
in a way that expands RDF is great,  but we really
need ISO Common Logic* based on what we know now.  Also
there is no conceptual problem in introducing arity > 2
in SPARQL so we should just do it -- why convert relational
 database tables to triples and add the complexity when we
 can just treat them as tuples under the SPARQL
 algebra?
 Anyway there
 is a big way to go in this direction and I have thought
 about it deeply because I have stared into the
 commercialization valley of death for so long,  but I think
 an RDF

Re: Microsoft Access for RDF?

2015-02-18 Thread Gannon Dick
Hi Paul,

I'm detecting a snippy disturbance in the Linked Open Data Force :)

The text edit problem resides in the nature of SQL type queries vs. SPARQL type 
queries.  It's not in the data exactly, but rather in the processing 
(name:value pairs).  To obtain RDF from data in columns you want to do a parity 
shift rather than a polarity shift.

Given the statement:

Mad Dogs and Englishmen go out in the midday sun

(parity shift) Australians are Down Under Englishmen and just as crazy.
(polarity shift) Australians are negative Englishmen, differently crazy.

Mad Dogs ? Well, that's another Subject.

The point is, editing triples is not really any easier than editing columns, 
but it sometimes looks dangerously easy.

-Gannon

[1]  'Air and water are good, and the people are devout enough, but the food is 
very bad,' Kim growled; 'and we walk as though we were mad--or English. It 
freezes at night, too.'
--  Kim by Rudyard Kipling (Joseph Rudyard Kipling (1865-1936)), Chapter 
XIII, Copyright 1900,1901

On Wed, 2/18/15, Paul Houle ontolo...@gmail.com wrote:

 Subject: Microsoft Access for RDF?
 To: Linked Data community public-lod@w3.org
 Date: Wednesday, February 18, 2015, 2:08 PM
 
 I am looking at some
 cases where I have databases that are similar to Dbpedia and
 Freebase in character,  sometimes that big (ok,  those
 particular databases),   sometimes smaller.  Right now
 there are no blank nodes,  perhaps there are things like
 the compound value types from Freebase which are
 sorta like blank nodes but they have names,
 Sometimes I want to manually edit a few
 records.  Perhaps I want to delete a triple or add a few
 triples (possibly introducing a new subject.)
 It seems to me there could be some kind of system
 which points at a SPARQL protocol endpoint (so I can keep my
 data in my favorite triple store) and given an RDFS or OWL
 schema,  automatically generates the forms so I can easily
 edit the data.
 Is there something out there?
 
 -- 
 Paul Houle
 Expert on Freebase, DBpedia, Hadoop and RDF
 (607) 539 6254    paul.houle on Skype
 ontology2@gmail.com
 http://legalentityidentifier.info/lei/lookup




Re: linked open data and PDF

2015-01-22 Thread Gannon Dick
Wow, there's a blast from the past*.

--Gannon

* past = back when URL's looked like URI's and control freaks could keep 
their domain holdings and ontologies on the same ledger.  Not for a minute do I 
think this was a good thing, and in any case no fault of GRDDL. 

On Wed, 1/21/15, Paul Tyson phty...@sbcglobal.net wrote:

 Subject: Re: linked open data and PDF
 To: Norman Gray nor...@astro.gla.ac.uk
 Cc: Paul Houle ontolo...@gmail.com, Herbert Van de Sompel 
hvds...@gmail.com, jschnei...@pobox.com jschnei...@pobox.com, 
public-lod@w3.org public-lod@w3.org
 Date: Wednesday, January 21, 2015, 8:52 PM
 
 On Wed, 2015-01-21 at
 17:16 +, Norman Gray wrote:
 
  (also it's not even really about XMP;
 there are all sorts of ways of
  
 scraping metadata out of objects and turning it into
 something which
   an RDF parser can
 read, and from that point you can start being
   imaginative.  This is of course
 stupidly obvious to everyone on this
  
 list, but it's an aha! that many people haven't got
 yet).
 
 GRDDL, anyone? [1]
 
 I think the GRDDL spec was too
 narrowly scoped to XML resources. The
 concept is simple and ingenious, and applicable
 to any type of resource.
 Many years ago,
 inspired by the then-new GRDDL spec) I built a modest
 RDF gleaning framework for tracing software
 requirements through
 development and
 testing. It gleaned from requirements documents and
 functional specification (in MS Word format),
 design documents (in TeX),
 source code
 (c++), test results (in XML), and probably also plain
 text
 (csv) and MS Excel.
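The GRDDL hook itself is just an attribute on an XML document's root element pointing at a transformation; a minimal sketch of discovering it (the document and stylesheet URL are invented):

```python
import xml.etree.ElementTree as ET

# Hypothetical XML document declaring a GRDDL transformation via the
# data-view transformation attribute on the root element.
doc = """<report xmlns:dv="http://www.w3.org/2003/g/data-view#"
        dv:transformation="http://example.org/glean.xsl">
  <requirement id="R1">The system shall glean RDF.</requirement>
</report>"""

root = ET.fromstring(doc)
transform = root.get("{http://www.w3.org/2003/g/data-view#}transformation")
# Applying the XSLT at `transform` would yield RDF; an XSLT engine is
# not in the Python standard library, so that step is omitted here.
```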
 
 Regards,
 --Paul
 
 [1] http://www.w3.org/TR/2007/REC-grddl-20070911/
 
 




Re: Debates of the European Parliament as LOD

2014-11-12 Thread Gannon Dick
Juan,

The G8 Open Data Charter Technical Annex contains Topics and Categories which 
would be a good starting point [1].

I scraped it and linked it with URN's to the US Library of Congress ID Servers. 
 I can supply that table if it would save you some time [2].

--Gannon


[1] 
https://www.gov.uk/government/publications/open-data-charter/g8-open-data-charter-and-technical-annex
[2] http://www.rustprivacy.org/faca/samples/displayStratMLtopics.html

On Wed, 11/12/14, Hollink, L. l.holl...@vu.nl wrote:

 Subject: Re: Debates of the European Parliament as LOD
 To: Juan Sequeda juanfeder...@gmail.com
 Cc: public-lod@w3.org public-lod@w3.org, SW-forum Web 
semantic-...@w3.org
 Date: Wednesday, November 12, 2014, 2:51 AM
 
 
 On 11 Nov 2014, at 23:08, Juan Sequeda juanfeder...@gmail.com
 wrote:
 
 
 
 Laura,
 
 
 
 Is the data tagged or annotated somehow to specific
 topics? 
 
 
 
 
 
 
 
 
 No. What we do have in that direction is the title of the
 agenda item, for example “French Football Federation
 fine” or “Statement by Mr Prodi”. Also, we have info
 on the committee that the speakers work in. E.g. a speech
 from someone in the agriculture committee
  is likely to be about agriculture. However, people can be
 members of multiple committees. 
 
 
 
 Laura
 
 
 
 
 
 
 
 
 
 
 
 
 Juan Sequeda
 
 +1-575-SEQ-UEDA
 
 www.juansequeda.com
 
 
 
 On Tue, Nov 11, 2014
 at 11:00 AM, Hollink, L. 
 l.holl...@vu.nl
 wrote:
 
 
 On 08 Nov 2014, at 17:20, Tim
 Berners-Lee ti...@w3.org
 wrote:
 
 
 
 
 
  On 2014-11 -05, at 11:19, Hollink, L. l.holl...@vu.nl
 wrote:
 
 
 
  - Dataset announcement -
 
 
 
  We are happy to announce the release of a
 new linked dataset: the proceedings of the plenary debates
 of the European Parliament as Linked Open Data..
 
 
 
  The dataset covers all plenary debates held in the
 European Parliament (EP) between July 1999 and January 2014,
 and biographical information about the members of
 parliament. It includes: the monthly sessions of the EP, the
 agenda of debates, the spoken words
  and translations thereof in 21 languages; the speakers,
 their role and the country they represent; membership of
 national parties, European parties and commissions. The data
 is available though a SPARQL endpoint, see
 http://linkedpolitics.ops.few.vu.nl/
 for more details.
 
 
 
  Please note that this is a first version; we hope
 you will try it out and send us your feedback!
 
 
 
  - Access to the data -
 
  We provide access in three ways:
 
  • Through a SPARQL endpoint at 
 http://linkedpolitics.ops.few.vu.nl/sparql/
 
  • Using the ClioPatria web interface at 
 http://linkedpolitics.ops.few.vu.nl/
 
  • By downloading data dumps. See 
 http://linkedpolitics.ops.few.vu.nl/.
 
 
 
 
 
  I am on a plane so I can't check but I hope also
 the data is Linked Data,
 
  in that every URI used can be looked up directly by
 HTTP GET.
 
 
 
 
 
  TimBL
 
 
 
 
 
 Dear Tim,
 
 
 
 Yes it is. Thanks for looking into it!
 
 E.g.
 
         curl --location --header "Accept: application/rdf+xml" \
           http://purl.org/linkedpolitics/eu/plenary/1999-07-20/Speech_1
 
 gives:
 
 
 
  <?xml version='1.0' encoding='UTF-8'?>
  <!DOCTYPE rdf:RDF [
      <!ENTITY dc 'http://purl.org/dc/elements/1.1/'>
      <!ENTITY lp 'http://purl.org/linkedpolitics/vocabulary/'>
      <!ENTITY lp_eu 'http://purl.org/linkedpolitics/vocabulary/eu/plenary/'>
      <!ENTITY rdf 'http://www.w3.org/1999/02/22-rdf-syntax-ns#'>
      <!ENTITY xsd 'http://www.w3.org/2001/XMLSchema#'>
  ]>
  
  <rdf:RDF
      xmlns:dc="&dc;"
      xmlns:lp="&lp;"
      xmlns:lp_eu="&lp_eu;"
      xmlns:rdf="&rdf;"
      xmlns:xsd="&xsd;">
  
  <lp_eu:Speech rdf:about="http://purl.org/linkedpolitics/eu/plenary/1999-07-20/Speech_1">
    <lp:spokenText xml:lang="en">
                – I declare resumed the session of
  the European Parliament adjourned on 7 May 1999, [ …
  etc.]
 
 
 
 Laura
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 




Re: It is not the URI that bends, it is only ..

2014-10-17 Thread Gannon Dick


On Fri, 10/17/14, Sarven Capadisli i...@csarven.ca wrote:

 Subject: It is not the URI that bends, it is only ..
 To: Linking Open Data public-lod@w3.org, SW-forum semantic-...@w3.org
 Date: Friday, October 17, 2014, 12:32 PM
 
 Dear PhiloWebers,
 
 I think I have some questions, but I'm not sure:
 
 * If you have an URI about yourself and no one knows about it, does it  exist? 
Do you?
-
Depends ... If a Sarven falls in the forest does it make a sylvan-like sound ?
-
 * Do URIs dream of HTTP?
-
I think they are dreaming of A White-List Christmas, but I'd have to check with 
Bing Crosby. 
-
 
 * Where do URIs go when their HTTP response is 410?

They mope about in caches until 411 is available.  (4-1-1 is US phone Directory 
Assistance)
-
 -Sarven
 http://csarven.ca/#i
 



Re: How to model valid time of resource properties?

2014-10-16 Thread Gannon Dick


On Thu, 10/16/14, Frans Knibbe | Geodan frans.kni...@geodan.nl wrote:

... the current state of a resource might be part of the default graph (which 
can remain unnamed) and historical states associated with temporal graphs. 

You see this in the wild every day with (equal) Time Series Reports (Quarterly, 
Annual, etc). If numbers have been made public, then they have to add up.

Map and navigate with the Law of Tangents or the Law of Sines. If you use the 
Law of Cosines you are ranking, not measuring.  This is big mischief in 
Strategy and Planning.

http://www.rustprivacy.org/2014/balance/opers/

--Gannon




Re: Reference management

2014-10-09 Thread Gannon Dick
Good catch, Michael. LOL.

There is a nomenclature problem.  The confusion here is that the Chinese own 
the confusion.  You mustn't forget that Five Eyes have joint custody.

If your 20 random samples were *my* fingers and toes then certainly they would 
be distinct from *your* fingers and toes.  However, if the 20 random samples 
were *my* fingers and *your* toes then Five Eyes would conclude that my 
fingers are your toes.

As I said, Five Eyes has joint custody.

(punchline)
Which leads me to think that Five Eyes time would be better spent reading 
Chinese porn than pretending they understand how the Semantic Web works.  Also 
that my life would be richer reading <strike>Chinese porn</strike> well defined 
content.

--Gannon 

On Thu, 10/9/14, Michael Brunnbauer bru...@netestate.de wrote:

 Subject: Re: Reference management
 To: Phillip Lord phillip.l...@newcastle.ac.uk
 Cc: Simon Spero sesunc...@gmail.com, Gray, Alasdair 
a.j.g.g...@hw.ac.uk, semantic-...@w3.org, Peter F. Patel-Schneider 
pfpschnei...@gmail.com, Mark Diggory mdigg...@gmail.com, W3C LOD Mailing 
List public-lod@w3.org
 Date: Thursday, October 9, 2014, 7:57 AM
 
 
 Hello
 Phillip,
 
 On Thu, Oct 09, 2014 at 12:56:14PM +0100, Phillip Lord wrote:
 
  but most of the web appears to be chinese pornography.
 
 Let me test this hypothesis with our database of 1.7 billion websites.
 Here is a sample of 20 random sites:
 
 752.34567899.com                Host not found
 23407.qsdcgtu.com               Host not found
 59429.8419315.eu                Connection refused
 me8ft.yvqyxa.com                Host not found
 221.leewn.zzyinkw.com           Chinese gaming
 67i1u.bc.nujiangrcw.cn          Empty page
 ouboyuleyulecheng.coqe55nsf.com Empty page
 www.jesusisgod.com              JESUS IS GOD ALMIGHTY !
 7kpeo.bwinkaihu.com             Host not found
 ngcqsf.4r65ws4.com              Host not found
 79532.lbecttm.com               ??? A chinese ranting about a car rental bill?
 shengtaoshayulechengzhenrenyouxi.hw678.com   Host not found
 pcptcr.wenwumedia.com           Host not found
 91wxx.daikuan12.com             Connection refused
 kaianqingfuwufapiao.krtnh.in    Timeout
 771.hfzcdb.com                  Timeout
 1119.gzfeng88.in                Chinese fast food
 erhjhz.banbiyewenping.com       Host not found
 kaj7467.bbs.517taojin.com       Host not found
 www.retailconcept.de            Domain for sale
 
 I conclude that most of the Web is broken (we definitely have to do a
 DNS cleanup of this database again) and the rest is chinese ...um...
 whatever - I would not call it pornography :-). An eye should be kept
 on the Christian fundamentalists.
 
 Regards,
 
 Michael Brunnbauer
 
 -- 
 ++  Michael Brunnbauer
 ++  netEstate GmbH
 ++  Geisenhausener Straße 11a
 ++  81379 München
 ++  Tel +49 89 32 19 77 80
 ++  Fax +49 89 32 19 77 89 
 ++  E-Mail bru...@netestate.de
 ++  http://www.netestate.de/
 ++
 ++  Sitz: München, HRB Nr.142452 (Handelsregister B München)
 ++  USt-IdNr. DE221033342
 ++  Geschäftsführer: Michael Brunnbauer, Franz Brunnbauer
 ++  Prokurist: Dipl. Kfm. (Univ.) Markus Hendel



Re: Cost and access (Was Re: [ESWC 2015] First Call for Paper)

2014-10-03 Thread Gannon Dick
Hi Phillip, Eric, et al.

On Fri, 10/3/14, Phillip Lord phillip.l...@newcastle.ac.uk wrote:


 
 Eric Prud'hommeaux
 e...@w3.org
 writes:
 
  Let's work through the requirements and a plausible migration plan. We need:
 
  1 persistent storage: it's hard to beat books for a feeling of persistence.
  Contracts with trusted archival institutions can help but we might also
  want some assurances that the protocols and formats will persist as well.
 [snip] 
 Protocols and formats, yes, truly a problem. I think in an argument between 
HTML and PDF, it's hard to see that one has the advantage over the other. My 
experience is that HTML is easier to extract text from, which is always going 
to be the baseline.
---
Easier still is (X)HTML or XML written in plain text with Character Entities 
Hex Escaped.  Clipboards are owned by the OS and, for ordinary users, syntax 
errors are fatal; Bread & Butter (full employment) for Help Desks.  Personally, 
I am un-fond of that ideology.  XSLT 2.0 has a (flawless) translation mechanism 
which eases user pain.  I've used it several times for StratML projects.  If 
you want a copy of the transform, contact me off line.
 ---
 For what it is worth, there are archiving solutions, including archive.org and 
arxiv.org, both of which leap to mind.
 ---
The archiving solutions work well for the persistence of protocols and formats. 
 Persistence of Linked Data depends upon the ability of an archive to reduce 
owl:sameAs and rdfs:* to their *export* standards.  Professional credibility in 
all disciplines relies on how well one hefts the lingo - applies the schema 
labels to shared concepts.  Publishers are very sensitive to this concern, and 
it may be Linked Data that has the deaf ear.

[snip]
 Okay. I would like to know who made the decision that HTML is not acceptable 
and why.

This is a related issue.  The decision to ignore the separation of concerns 
issue mentioned above is a user acceptance impediment when protocols and 
formats are the only parameters considered.  In a few decades perhaps we will 
have real AI, Turing Machines, and academic disciplines will have their own 
Ontologies which speak to them.  As a container, I think HTML is fine.  I am 
not comfortable with RDFa decorations or /html/head meta data as absentee 
ownership of documents.

In the meantime, Archives will have to develop methods to recycle and reduce 
rdfs:Labels, and they will have to be (uncharacteristically) ruthless.  The 
statistics of RDF rely on a well known paradox 
(http://en.wikipedia.org/wiki/Birthday_problem).  Close matches between name 
spaces and Ontologies have an extreme bias toward high probability 
identification.  In the end, the probability is just a number, but it 
intimidates ordinary partial fractions who believe it is the smartest guy in 
the room.  That is rather a bad thing.
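The birthday-problem bias described above is easy to put a number on. A minimal 
sketch (the function name and the sample figures are illustrative, not from 
this thread):

```python
from math import prod

def collision_probability(n_items: int, n_bins: int) -> float:
    """Probability that at least two of n_items share a bin, assuming
    uniform, independent assignment (the classic birthday problem)."""
    if n_items > n_bins:
        return 1.0
    # Chance that all n_items land in distinct bins.
    p_all_distinct = prod((n_bins - i) / n_bins for i in range(n_items))
    return 1.0 - p_all_distinct

# 23 labels drawn from 365 bins already collide more often than not.
print(round(collision_probability(23, 365), 3))  # 0.507
```

With only 23 labels drawn from 365 bins the collision probability already 
exceeds one half, which is why "close matches" between large label sets are 
nearly inevitable even when the underlying concepts differ.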

Cheers,
Gannon 


 
 Phil
 
 



Re: Cost and access (Was Re: [ESWC 2015] First Call for Paper)

2014-10-03 Thread Gannon Dick


On Fri, 10/3/14, Simon Spero sesunc...@gmail.com wrote:

We never thought of making up imaginary people to cite stuff though.

Never mind that, imagine the automation possibilities
Huge numbers of imaginary people talking to themselves ...
(thanks for the laugh)

There is a lot of effort going in to making data citable in ways meaningful to 
funding agencies.

A few years ago, I wrote a page which enables Agencies of the US Government to 
discover like-interested peers so they can compare strategies and plans.  
Simply talking to each other would be a possible solution, but given that the 
Agencies compete for funds with the same funding agency - Congress - there is a 
reluctance to be too open with each other.  The output is Library of Congress 
MODS XML.

It is dated, but here it is: 
http://www.rustprivacy.org/faca/samples/displayStratMLcorrespondants.html

--Gannon
  



Re: [ESWC 2015] First Call for Paper

2014-10-01 Thread Gannon Dick
I agree, Laura.

If you are roaming overseas (in any direction) and a friend sends you a chunk 
of data which may not be accessible publicly, or wants to send you only Chapter 
416 of 837, then it teaches publishers nothing but may keep ISP's in Champagne 
for years to come.  Yes, London has free WiFi, and yes I'm jealous.  Rural 
Texas is, um, not quite there yet.  Calibre (http://calibre-ebook.com/) makes 
it easy to dodge local interconnection problem$ which have nothing whatsoever 
to do with Linked Data.  There is a practical aspect to this too.

--Gannon

On Wed, 10/1/14, Laura Dawson laura.daw...@bowker.com wrote:

 Subject: Re: [ESWC 2015] First Call for Paper
 To: Kingsley Idehen kide...@openlinksw.com, public-lod@w3.org 
public-lod@w3.org
 Date: Wednesday, October 1, 2014, 12:10 PM
 
 What about EPUB, which is
 xHTML and has support for Schema.org markup? It
 also provides for fixed-layout.
 
 On 10/1/14, 12:55 PM, Kingsley Idehen kide...@openlinksw.com wrote:
 
 On 10/1/14 12:35 PM, Sarven Capadisli wrote:
  On 2014-10-01 18:12, Fabien Gandon wrote:
  Dear Sarven,
 
  Thank you for your response Fabien.
 
  The scientific articles are presenting scientific achievements in a
  format that is suitable for human consumption.  Documents in a
  portable format remain the best way to do that for a conference today.
 
  I acknowledge the current state of matters for sharing scientific
  knowledge. However, the concern was whether ESWC was willing to
  promote Web native technologies for sharing knowledge, as opposed to
  solely insisting on Adobe's PDF, a desktop native technology.
 
  If my memory serves me correctly, the Web took off not because of
  PDF, but due to plain old simple HTML. You know just as well that
  HTML was intended for scientific knowledge sharing at large scale,
  for human as well as machine consumption.
 
  However:
  - all the metadata of the conference are published as linked data e.g.
  http://data.semanticweb.org/conference/eswc/2014/html
 
  This is great. But, don't you think that we can and ought to do
  better than just metadata?
 
  - authors are encouraged to publish the datasets and algorithms they
  use in their research on the Web following its standards.
 
  I think we all know too well that this is something left as optional
  that very few follow up on. There is no reproducibility police in
  SW/LD venues. Simply put, we can't honestly reproduce the research
  because all of the important atomic components that are discussed in
  the papers e.g., from hypothesis, variables, to conclusions, are not
  precisely identified or easily discoverable. Most of the time, one
  has to hunt down the authors for that information. IMHO, this
  severely limits scientific progress on Web Science.
 
  Will you compromise on the submission such that the submissions can
  be in PDF and/or in HTML(+RDFa)?
 
 +1
 
 We need to get over this hurdle. We can't expect to be taken seriously
 if we don't wire what we espouse into the fabric of our existence.
 
 -- 
 Regards,
 
 Kingsley Idehen   
 
 Founder  CEO
 OpenLink Software
 Company Web: http://www.openlinksw.com
 Personal Weblog 1: http://kidehen.blogspot.com
 Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
 Twitter Profile: https://twitter.com/kidehen
 Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
 LinkedIn Profile: http://www.linkedin.com/in/kidehen
 Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this
 
 
 




Re: Formats and icing (Was Re: [ESWC 2015] First Call for Paper)

2014-10-01 Thread Gannon Dick
"Not to pile on Sarven", said Gannon, piling on, but I recently tripped over 
the XSD validation tests I did a couple of years ago.  They are XHTML 1.1 + 
RDFa, all zipped up so you won't have to stitch the schema together (it is 20+ 
little files).  If anyone wants it please contact me off-board.
(If you are the person who stole my Nook, contact me.  Then run.)
--Gannon

On Wed, 10/1/14, Sarven Capadisli i...@csarven.ca wrote:

 Subject: Formats and icing (Was Re: [ESWC 2015] First Call for Paper)
 To: Laura Dawson laura.daw...@bowker.com, Kingsley Idehen 
kide...@openlinksw.com, public-lod@w3.org public-lod@w3.org
 Date: Wednesday, October 1, 2014, 1:42 PM
 
 On 2014-10-01 19:10, Laura Dawson wrote:
  What about EPUB, which is xHTML and has support for Schema.org
  markup? It also provides for fixed-layout.
 
 IMO, this particular discussion is not what we should be focusing on.
 And, it almost always deters from the main topic. There are a number
 of ways to get to Web friendly representations and presentations.
 EPUB? Sure. Whatever floats the author's boat. As long as we can
 precisely identify and be able to discover the items in research
 papers, that's all fine.
 
 I personally don't find the need to set any hard limitations on
 (X)HTML or which vocabularies to use. So, schema.org is not granular
 enough at this time. There are more appropriate ones out there, e.g.,
 http://lists.w3.org/Archives/Public/public-lod/2014Jul/0179.html, but
 that doesn't mean that we can't use them along with schema.org.
 
 I favour plain HTML+CSS+RDFa to get things going e.g.:
 
 https://github.com/csarven/linked-research
 
 (I will not dwell on the use of SVG, MathML, JavaScript etc. at this
 point, but you get the picture).
 
 The primary focus right now is to have SW/LD venues compromise i.e.,
 not insist only on Adobe's PDF, but welcome Web native technologies.
 
 Debating on which Doctype or vocabulary or whatever is like the icing
 on the cake. Can we first bring the flour into our kitchen?
 
 -Sarven
 http://csarven.ca/#i
 



Re: A Distributed Economy -- A blog involving Linked Data

2014-09-20 Thread Gannon Dick
Hello Paul and Michael,

IMHO, this is the fulcrum between Linked Data and Linked Open Data - the 
ethics of a Repository.  A Repository needs internal procedures to limit 
inflation due to redundancy.  You can not stop people from thinking of 
namespaces as brands but you can not stop unscrupulous data merchants from 
restocking the shelves with repackaged data either.  If you want to reuse RDF 
it is best to run it through an ethical Repository, as you would recycle a book 
through a Library.
--Gannon

On Sat, 9/20/14, Michael Brunnbauer bru...@netestate.de wrote:

 Subject: Re: A Distributed Economy -- A blog involving Linked Data
 To: Paul Houle ontolo...@gmail.com
 Cc: Gannon Dick gannon_d...@yahoo.com, Linked Data community 
public-lod@w3.org, Kingsley Idehen kide...@openlinksw.com
 Date: Saturday, September 20, 2014, 11:30 AM
 
 
 Hello Paul,
 
 or why don't we just acknowledge that whether your knowledge
 representation
 
 1) has a formal semantics
 2) has a formal syntax
 3) is more or less expressive
 
 is a matter of the task at hand.
 
 The whole Web already *is* linked data and I wonder if one day when we
 will have the perfect RDF, it will be obsolete. Of course - per the
 above - it would still have its use cases.
 
 Regards,
 
 Michael Brunnbauer
 
 On Sat, Sep 20, 2014 at 11:25:43AM -0400, Paul Houle wrote:
  Why don't we just reorganize RDF to look like the predicate calculus,
  let the arity > 2, and then say it is something new so we can escape
  the RDF name.
 
  In fact, let's just call it ISO Common Logic,
 
  http://en.wikipedia.org/wiki/Common_logic
 
  Most of the cool kids weren't around in the 1980s so they don't have
  the bad taste in their mouth left by the predicate calculus.
 
  I don't see any problem with extending SPARQL to arity > 2 facts, we
  can let OWL dry and blow away.
 
  On Sat, Sep 20, 2014 at 10:47 AM, Gannon Dick gannon_d...@yahoo.com wrote:
 
   +1, nicely put Kingsley
   --Gannon
 
   On Fri, 9/19/14, Kingsley Idehen kide...@openlinksw.com wrote:
 
    Subject: Re: A Distributed Economy -- A blog involving Linked Data
    To: public-lod@w3.org
    Date: Friday, September 19, 2014, 5:48 PM
 
    On 9/19/14 4:38 PM, Brent Shambaugh wrote:
 
    Manu Sporny's post titled Building Linked Data into the Core of
    the Web
    [http://lists.w3.org/Archives/Public/public-webpayments/2014Sep/0063.html]
    led to the question:
 
    is linked data and semantic web tech useful? I think so. I can
    only speak from my own perspective and experience.
 
    Brent,
 
    Pasting my reply to Manu (with some editing) here, as I think its
    important:
 
    Manu,
 
    It is misleading (albeit inadvertent in regards to your post
    above) to infer that Linked Data isn't already the core of the
    Web. The absolute fact of the matter is that Linked Data has been
    the core of the Web since it was an idea [1][2].
 
    The Web doesn't work at all if HTTP URIs aren't names for:
 
    [1] What exists on the Web
    [2] What exists, period.
 
    We just have the misfortune of poor communications mucking up
    proper comprehension of AWWW. For example, RDF should have been
    presented to the world as an effort by the W3C to standardize an
    existing aspect of the Web i.e., the ability to leverage HTTP
    URIs as mechanisms for:
 
    1. entity identification (naming)
 
    2. entity description using sentences or statements -- where (as
    is the case re., natural language) a sentence or statement is
    comprised of a subject, predicate, and object.
 
    Instead, we ended up with an incomprehensible, indefensible, and
    at best draconian narrative that has forever tainted the letters
    R-D-F.  And to compound matters, HttpRange-14 has become a
    censorship tool (based on its ridiculous history), that blurs
    fixing this horrible state of affairs.
 
    Links:
 
    [1] http://bit.ly/10Y9FL1
    -- Evidence that Linked Data was always at the core of the Web
    (excuse some instability on my personal data space instance, at
    this point in time, should you encounter issues looking up the
    document identified by this HTTP URI)
 
    [2] http://kidehen.blogspot.com/2014/03/world-wide-web-25-years-later.html
    -- World Wide Web, 25 years later
 
    [3] http://media-cache-ak0.pinimg.com/originals/04/b4/79/04b4794ccf2b6fd14ed3c822be26382f.jpg

Re: A proposal for two additional properties for LOCN

2014-09-01 Thread Gannon Dick
Hi Frans,

A complete and coherent coordinate system is a sine qua non for analysis of 
data, planning strategy and measuring performance.

Your questions ...
1)   Are the semantics of the two properties really absent from the  semantic 
web at the moment?

As long as the perception exists that translation parameters are an arbitrary 
coding option; absent is exactly the right word, IMHO.

2)  Is the Location Core Vocabulary an appropriate place to add  them?

I believe so, because if not there, where ?

3)  Is the proposed way of modelling the two properties right? Could conflicts 
with certain use cases occur?

The amplitude of a Normal Distribution depends on sigma (the square root of the 
Variance), the square root of 2, and the square root of PI().  A universe where 
(Variance, Two or PI) have multi-valued roots is not a valid use case; it is 
modeling (i.e. Graphic Arts) malpractice.
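For reference, the standard form of that density: the peak height involves 
exactly one value each of sigma, the square root of 2, and the square root of 
pi:

```latex
f(x) = \frac{1}{\sigma\sqrt{2\pi}}\,
       \exp\!\left(-\frac{(x-\mu)^{2}}{2\sigma^{2}}\right),
\qquad
f(\mu) = \frac{1}{\sigma\sqrt{2\pi}} \approx \frac{0.3989}{\sigma}
```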
-
If you want hard numbers ...

For strategy, planning and performance metric measurement (Strategy Markup 
Language - StratML) [1] the coordinate system is South Pole to North Pole 
(degrees) / East to West (degrees) / Year to Year+1 (degree-day) [2].

The spreadsheets - detailed calculations - are at [3].

For Work-Life Balance, sunrise and sunset detailed calculations are at [4].  
There is little or no difference between the results obtained by shifting the 
origin from the Winter Solstice to New Year's Day, and marking seasonal 
transits over the Tropics.  These are Astronomy conventions versus Civil Time 
conventions.  However, vertical shifts (Work-Life Balance) are better 
visualized with a two-layer map.  The time of day (layer) and sleep-wake cycle 
(layer) are not spin coupled, meaning that vector and raster scales 
(mentioned in your Wiki post) cannot be reconciled with an average.  There is 
no Central Limit to Work Ethic.
-
If you want pictures ...
It is the Labor Day Holiday in the US, and like many I am mourning the passage 
of summer by eating too much in short pants I am too old to wear and avoiding 
anything like work.  The spreadsheets have charts :-)

Best,
Gannon


[1] http://xml.fido.gov/stratml/index.htm
[2] http://www.rustprivacy.org/2014/balance/opers/
[3] http://www.rustprivacy.org/2014/balance/opers/stratml-operations.zip
[4] http://www.rustprivacy.org/2014/balance/opers/true-up-wlb.zip




On Mon, 9/1/14, Frans Knibbe | Geodan frans.kni...@geodan.nl wrote:

 Subject: A proposal for two additional properties for LOCN

 Hello all,
 
 
 
 I have made a wiki page for a provisional proposal for the addition of two 
new properties to the Location Core Vocabulary: CRS and spatial resolution. I 
would welcome your thoughts and comments. 
  
 The proposal is based on earlier discussions on this list. I am not 
certain about any of it, but I think starting with certain definitions can help 
in eventually getting something that is good to work with. 
 
 Some questions that I can come up with are:
   
   Are the semantics of the two properties really absent from the  semantic 
web at the moment?
   Is the Location Core Vocabulary an appropriate place to add  them?
   Is the proposed way of modelling the two properties right? Could 
conflicts with certain use cases occur?
 
 More detailed questions are on the wiki page.
 
 
 Regards,
 
   Frans
 
 
 
 
 
 
 
 Frans Knibbe
 
 Geodan
 
 President Kennedylaan 1
 
 1079 MB Amsterdam (NL)
 
 
 
 T +31 (0)20 - 5711 347
 
 E frans.kni...@geodan.nl
 
 www.geodan.nl | disclaimer
 
 
   
 
 
 
 
 



Labor Day: Work - Life Balance

2014-08-29 Thread Gannon Dick
One of the more tedious planning chores is populating time lines, or the 
related task of start and end points of intervals.  With a 4 high stacking, or 
a 4 wide packing (16x1) of quads or quarters in an appropriate coordinate 
system, this chore becomes considerably easier.

http://www.rustprivacy.org/2014/balance/opers/

There is a catch.  You have to be measuring scalar quantities not frequencies 
(the (square root 2)/2 is the scalar in that case).  Probability Distributions 
(Normal, Bell Curves) do not stack (the amplitude is 40% of Range).  This is 
a problem for Work-Life Balance because the 8 hr workday or the 12 hr 
Continental shift do not relate to sleep needs. A Physician is unable to 
prescribe rest and sleep in a meaningful way as it becomes a derived quantity 
(by subtraction).

http://www.rustprivacy.org/2014/balance/opers/work-life-balance.jpg

Anyway, for your Labor Day Weekend enjoyment...

--Gannon



Re: Linked SDMX Data

2014-08-15 Thread Gannon Dick
Hi Sarven,

Yes, I have implemented Artificial Bureaucracy, a derivative, so to speak, of 
Artificial Intelligence.  Much more importantly, it was also officially adopted 
by a reputable *cough* Spook Shop *cough* Authority on June 30, 2014 [1].

The two letter (country/domain) codes are affine [2] with all other systems of 
similar codes (e.g. ISO 3166-1 (2A), ISO 3166-2 (3A), MARC (2A), FIPS (2N), 
etc.).  Artificial Bureaucracy is shorthand for: There are only 676 names in 
the hat; draw one and put it back before you draw another one.  There are many 
subdivisions, and a RESTful *stovepipe* has an unlimited number of variations, 
but a RESTful *top level domain* does not.
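The "676 names in the hat" arithmetic can be sketched directly: the full AA-ZZ 
space is the code page, and any registered subset is "Present" (the sample 
codes below are illustrative, not a real registry extract):

```python
from string import ascii_uppercase as AZ

# The full two-letter "code page": 26 x 26 = 676 possible codes.
code_page = [a + b for a in AZ for b in AZ]
assert len(code_page) == 676

# An illustrative registered subset (hypothetical, not a registry).
registered = {"US", "DE", "FR", "CI", "AU"}

# Split the code page into Present and Missing, outer-join style.
present = [c for c in code_page if c in registered]
missing = [c for c in code_page if c not in registered]

print(len(present), len(missing))  # 5 671
```

Because the space is closed, adding a synonym column (e.g. a three-letter code 
for each two-letter code) never grows the page beyond 676 rows, which is the 
point of the outer join described in the earlier GeoNames rework.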

Display Language Codes (2A), Bibliographic Language Codes (2A or 3A) have the 
same property, although the US Library of Congress is the ISO Authority and 
Librarians in general are mean enough to use it on hapless Scholars caught in 
their web.  I doubt that the scheme has changed much since I linked it two 
years ago[3].

Anyway, Time Shifting (think: Quarterly Conference Calls with Analysts) and 
Shape Shifting (think: National Boundaries, disputed or not) are part and 
parcel of the same geometry [4], as I observed yesterday [5] (spreadsheets 
[6]).  At all costs you want to avoid inversion of the Work-Life-(rest) 
Balance.  This may be a deep question for Finance (money works 24x7), but it is 
a very simple principle for a local Grocery Store (is there somewhere we can 
send cabbage overnight where it could be sold = will be a performing asset?  
Um, no.).

None of this is an attempt to own Linked Data in the Embrace, Extend, 
Extinguish sense.  The NGA Undersea Feature is owl:sameAs the UN-LOCODE 
Installations in International Waters.

--Gannon

[1] http://geonames.nga.mil/namesgaz/
[2] http://en.wikipedia.org/wiki/Affine_transformation
[3] http://www.rustprivacy.org/2012/urn/lang/person/
[4] http://en.wikipedia.org/wiki/Pythagorean_theorem
[5] http://lists.w3.org/Archives/Public/public-egov-ig/2014Aug/0003.html
[6] spreadsheets 
http://lists.w3.org/Archives/Public/public-egov-ig/2014Aug/0001.html



On Fri, 8/15/14, Sarven Capadisli i...@csarven.ca wrote:

 Subject: Re: Linked SDMX Data
 To: Gannon Dick gannon_d...@yahoo.com, public-lod@w3.org
 Cc: KevinFord k...@loc.gov, public-loc...@w3.org public-loc...@w3.org, 
public-egov...@w3.org public-egov...@w3.org, public-open...@w3.org 
public-open...@w3.org
 Date: Friday, August 15, 2014, 4:12 AM
 
 On 2014-08-11 22:58, Gannon Dick wrote:
  Sorry for the x-post
 
 Don't be. It is a natural thing.
 
  Hi Sarven,
 
  I noticed you used GeoNames for the Australian Bureau of Statistics
  Linked Data hack mentioned below.  GeoNames does much useful work ...
  but everyone in the Linked Data business could use a little help.
 
  Domains - in theory, the countries of the world are a group of
  (federalized data set of ...) (groups of) Court Houses,
  Jurisdictions, keyed with two and three letter acronyms (ISO 3166).
  This set for all practical purposes is a Unicode Code Page, but
  instead of (16x16)=256 members there are (26x26)=676 Latin Alphabet
  Capital Letters.  Statistical metrics at the domain level are
  manipulated with Linear Algebra and Linear Programming.  Diacritics
  (Côte d'Ivoire) or alternate forms (Ivory Coast) do nothing
  semantically useful, the acronym is the leveler.
 
  So, I rewrote the GeoName table (http://www.geonames.org/countries/) to be:
  1) Unicode compliant for XML (HTML entities are HEX escaped)
  2) The Geo's, Country Profiles, whatever are local links.  I left
  those as is and included/matched the MARC System / US Library of
  Congress Linked Data Service URIs
  (http://id.loc.gov/vocabulary/countries.html).
  3) Finally, I used an SQL RDB to do an Outer Join on the Code Set -
  all 676 possibilities.  Adding a three character code synonym does
  not increase the code page size.  It is then possible to split this
  registry into lists of codes 1) Present, 2) Missing and 3) Slack (in
  the Linear Programming usage).
  4) Put the files in (FODS - (Flat XML) Open Document Spreadsheets
  format) so that European Civil Servants can not whine about data
  quality (got your back, DERI, you too ABS).
 
  http://www.rustprivacy.org/2014/balance/gts/geonames_domains.zip
 
  Unfortunately, when RDF Lists of Place Names are filtered through
  previously written applications the result is often unhelpful
  additions, however these steps should ameliorate the problem
  significantly.
 
  --Gannon
 
 Thanks Gannon. If I understand correctly, you got around to
 implementing your suggestion back in 2012Q1:
 
 http://lists.w3.org/Archives/Public/public-lod/2012Mar/0108.html
 
 Care to clarify what I should make of:
 
 http://www.rustprivacy.org/2012/urn-lex/artificial-bureaucracy.html
 
 ?
 
 -Sarven
 http://csarven.ca/#i
 




RE: (groan, not again): OGC Temporal DWG. Was: space and time

2014-08-14 Thread Gannon Dick
Hello Chris,
 
  Thank you for this. I think a lot of people have forgotten how much
  mathematics and science were engendered by trying to construct
  calendars and timescales, for whatever purposes, and I think you are
  helping carry that flag.
=
Yes
=
 
  However, in the OGC context of trying to produce a first Best
  Practice and some straightforward recommendations for future work,
  we have already ruled calendars, especially weeks and months, other
  than ISO8601 Gregorian, out of scope. And sadly, I think we will
  rule out true solar time too, though people's need for conversion to
  and from local solar time is well established, and has very
  important use cases, such as for aerial imagery shadow
  interpretation.
==
That is a spreadsheet versus javascript problem.  The *advantage* of ISO 8601 
formats is that they round off time predictably, and that feature would be very 
worthwhile to retain.

The spreadsheet versus javascript problem is that COBOL dates extend back to 
1600 AD and there are many base dates for *nix and PC time, none of which come 
close to 1600 AD.  The Julian Date extends back to 4713 BC.  All of these are 
*day* standards, with FUBAR time-of-day rounding which requires conversion to 
UTC.
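The epoch mismatch is easy to see in code.  A minimal sketch using Python's 
proleptic Gregorian ordinal (the fixed offset that re-anchors it to the Julian 
Date epoch is a well-known constant; the function name is illustrative):

```python
from datetime import date

def julian_date(d: date) -> float:
    """Julian Date at 00:00 UTC for a proleptic Gregorian calendar date.
    Python's ordinal counts days from 0001-01-01; adding 1721424.5
    re-anchors the count to the Julian Date epoch of 4713 BC."""
    return d.toordinal() + 1721424.5

print(julian_date(date(2000, 1, 1)))  # 2451544.5 (noon that day is JD 2451545.0)
print(julian_date(date(1970, 1, 1)))  # 2440587.5 (the Unix epoch)
print(julian_date(date(1600, 1, 1)))  # earliest COBOL-style base date
```

All three base dates land on a half-integer because each of these systems 
counts whole days from midnight, which is exactly the time-of-day rounding that 
forces conversion through UTC.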

There exist spreadsheets for local solar time[1].  As well as details of the 
calculation[2] and tables of observed (or forecast) seasonal set points 
(Solstice) and transits (Equinox) [3].

==
 
  I think we should capture that particular use case and incorporate
  details of the conversion of UTC to and from local solar time,
  perhaps as an engineering report, but I think the resources are not
  available for our stated delivery in December 2014.
 
  Do you know of any authoritative sources that we could reference?
===
see links below.
NOAA = (Earth System Research Laboratory, National Oceanic and Atmospheric 
Administration)
USNO = (Astronomical Applications Department of the U.S. Naval Observatory)
NASA, for authoritative purposes, plays with big, loud, cool toys.  Please 
don't tell them I said that.
===

my take:
I think of linked data as ... as you are driving by the Taj Mahal, the Turing 
Machine you are married to says "Look, the Taj Mahal" and then a little light 
on your AI equipped dashboard lights up and says "Taj Mahal, India".  Never 
contradict a Turing Machine, even when Linked Data agrees ;-)

My spreadsheet is a different, Linked Data specific use case.  Because Time of 
Day is not roundable, you risk an inversion in Work-Life Balance where 8 hours 
of Rest is isometric with 16 hours of Activity.  Only Overseas Banks and 
Government Mint Printing Presses work 24x7.

The Solar Time (workday) Standard at the Equator is 12 Hours (plus ca. 7.5 
Minutes) not 16 hours in mid-summer London.  Make of that what you will :-)
http://www.rustprivacy.org/2014/balance/gts/inversionEquator.jpg
http://www.rustprivacy.org/2014/balance/gts/inversion50N.jpg

Best Wishes,
Gannon

[1] http://www.esrl.noaa.gov/gmd/grad/solcalc/calcdetails.html
[2a] http://www.esrl.noaa.gov/gmd/grad/solcalc/solareqns.PDF
[2b] http://www.gandraxa.com/length_of_day.xml (this is someone's personal 
site, but the ability to include or exclude Twilight (Atmospheric Refraction) 
is important)
[3] http://aa.usno.navy.mil/data/docs/EarthSeasons.php


 
 Best wishes, Chris
 
 -Original Message-
 From: Gannon Dick [mailto:gannon_d...@yahoo.com]
 
 Sent: Wednesday, August 13, 2014 6:38 PM
 To: andrea.per...@jrc.ec.europa.eu; frans.kni...@geodan.nl; simon@csiro.au; Chris Beer; Little, Chris
 Cc: public-loc...@w3.org; public-egov...@w3.org; public-lod; tempo...@lists.opengeospatial.org; Piero Campalani; Matthias Müller
 Subject: (groan, not again): OGC Temporal DWG. Was: space and time
 
 Hi Chris,
 
 FWIW.
 
 While commerce depends upon Product Release Dates and Versioning, geographic
 information can default to the Julian Calendar harmonics.  Not having to
 deal with this administrative detail is a real, pardon the expression, time
 saver.  So, I did the math and made a spreadsheet (FODS or EXCEL), a
 prototype generic version Release Clock. This calendar is not in harmony
 with astronomical calculations, which use the Winter Solstice as an anchor
 rather than New Year's.  I apologize for this shameless attempt to curry
 favour with Champagne Manufacturers ;-)

 The calendar is by year, with anchors at New Year's and New Year+1.  There
 are three arc control points.  These are arbitrary but can be recognized
 holidays - and the key word is recognized.  For example I used Easter ~
 Passover ~ Jerusalem Tourist Season.  The control point labels (identity)
 have no Controlling Authority, that is, they have no effect on the graph
 (timeline).

 http://www.rustprivacy.org/2014/balance/gts/utct.zip

 Overseas Banks and Government Mint Printing Presses work over night.  Human
 Resources and a Retail Store's safe in the back room do not.  Neither

(groan, not again): OGC Temporal DWG. Was: space and time

2014-08-13 Thread Gannon Dick
Hi Chris,

FWIW.

While commerce depends upon Product Release Dates and Versioning, geographic 
information can default to the Julian Calendar harmonics.  Not having to deal 
with this administrative detail is a real, pardon the expression, time saver.  
So, I did the math and made a spreadsheet (FODS or EXCEL), a prototype generic 
version Release Clock. This calendar is not in harmony with astronomical 
calculations, which use the Winter Solstice as an anchor rather than New 
Year's.  I apologize for this shameless attempt to curry favour with Champagne 
Manufacturers ;-)

The calendar is by year, with anchors at New Year's and New Year+1.  There are 
three arc control points.  These are arbitrary but can be recognized holidays - 
and the key word is recognized.  For example I used Easter ~ Passover ~ 
Jerusalem Tourist Season.  The control point labels (identity) have no 
Controlling Authority, that is, they have no effect on the graph (timeline).

http://www.rustprivacy.org/2014/balance/gts/utct.zip

Overseas Banks and Government Mint Printing Presses work over night.  Human 
Resources and a Retail Store's safe in the back room do not.  Neither do many 
Cultural Heritage resources.  Work-Life Balance gets, um, unbalanced.

--Gannon

On Tue, 7/29/14, Gannon Dick gannon_d...@yahoo.com wrote:

 Subject: RE: OGC Temporal DWG. Was: space and time

 Date: Tuesday, July 29, 2014, 12:45 PM
 
 
 
 On Tue, 7/29/14, Little, Chris chris.lit...@metoffice.gov.uk wrote:

  And I agree that transparency about calendar algorithms is an issue, not
  just in their book. This is one thing that I hope that an OGC Best
  Practice document could help, in however a small way.
 
 
 
 Hi Chris,
 
 Maybe it is time to go big - Universal Coordinated Calendar Time (UTCT).
 In the near term, (this Julian Century) the Calendar has no unidentified
 shifts.  We know about Leap Days and the Calendar is ignorant of Leap
 Seconds.  So, it is possible.

 This presents a problem for Linked Data because even though Personal
 Identity is coupled to Occupation and Occupation is coupled to the Location
 of the Workplace, these are couplings not correlations.

 Mid-day, Noon, is a mean value, but one can't assume regression to the
 mean. At the Equator the Authority - Solar Noon - has a whopping 7 1/2
 minute time shift.  This is not hidden, but it is overwhelmed by the
 Equation of Time.  The shifts, on a day-to-day basis, do not accumulate to
 significance on a year-to-year basis. To determine coupling constants is a
 fool's errand.

 e.g. http://www.rustprivacy.org/2014/balance/utct.jpg

 When people triangulate in their heads they use 3,4,5 triangles to keep the
 math easy.  For this reason, the Axis length is 500%.  All shifts (events
 which impact Work-Life Balance) are vertical. Sorry, the Day indicator
 can't update automatically - it's a PDF.
 
 WDYT?
 
 Best,
 
 --Gannon (J.) Dick ;-) I'm not a commuter, I have a funny name.
 
 
  
 
 -Original Message-
 From: Gannon Dick [mailto:gannon_d...@yahoo.com]
 Sent: Thursday, July 24, 2014 5:24 PM
 To: andrea.per...@jrc.ec.europa.eu; frans.kni...@geodan.nl; simon@csiro.au; Chris Beer; Little, Chris
 Cc: public-loc...@w3.org; public-egov...@w3.org; public-lod; tempo...@lists.opengeospatial.org; Piero Campalani; Matthias Müller
 Subject: Re: OGC Temporal DWG. Was: space and time
  
 
 Hi Chris,

 who wrote:
 One concern that I have is that we do not re-invent the wheel, and do
 nugatory work, hence this email. I do not envisage that we will need to do
 much with Calendars, which have been covered so well by Dershowitz and
 Reingold.

 =
 No question the quality of the issue coverage (Calendars) is first rate.

 However, the computations are not transparently self-evident and the
 references you cite in the Wiki are not available on-line - or are they ?

 3. Calendrical Tabulations 1900-2200, Edward M. Reingold, Nachum
 Dershowitz. Hardcover: 636 pages. Publisher: Cambridge University Press
 (16 Sep 2002) Language: English ISBN-10: 0521782538 ISBN-13: 978-0521782531

 4. Calendrical Calculations, Nachum Dershowitz, Edward M. Reingold.
 Paperback: 512 pages. Publisher: Cambridge University Press; 3rd edition
 (10 Dec 2007) Language: English ISBN-10: 0521702380 ISBN-13: 978-0521702386

 Accessibility to Wheels known to have been invented is a Wiki issue, I
 think.

 --Gannon
 
 
  
  
  
 
 
 On Thu, 7/24/14, Little, Chris chris.lit...@metoffice.gov.uk wrote:

  Subject: OGC Temporal DWG. Was: space and time
  To: Gannon Dick gannon_d...@yahoo.com, andrea.per...@jrc.ec.europa.eu andrea.per...@jrc.ec.europa.eu, frans.kni...@geodan.nl frans.kni...@geodan.nl, simon@csiro.au simon@csiro.au, Chris Beer ch

Re: Linked SDMX Data

2014-08-11 Thread Gannon Dick
Sorry for the x-post

Hi Sarven,

I noticed you used GeoNames for the Australian Bureau of Statistics Linked 
Data hack mentioned below.  GeoNames does much useful work ... but everyone in 
the Linked Data business could use a little help.

Domains - in theory, the countries of the world are a group of (federalized 
data set of ...) (groups of) Court Houses, Jurisdictions, keyed with two and 
three letter acronyms (ISO 3166).  This set for all practical purposes is a 
Unicode Code Page, but instead of (16x16)=256 members there are (26x26)=676 
Latin Alphabet Capital Letters.  Statistical metrics at the domain level are 
manipulated with Linear Algebra and Linear Programming. Diacritics (Côte 
d'Ivoire) or alternate forms (Ivory Coast) do nothing semantically useful, the 
acronym is the leveler.

So, I rewrote the GeoName table (http://www.geonames.org/countries/) to be:
1) Unicode compliant for XML (HTML entities are HEX escaped)
2) The Geo's, Country Profiles, whatever are local links.  I left those as is 
and included/matched the MARC System / US Library of Congress Linked Data 
Service URI's (http://id.loc.gov/vocabulary/countries.html).
3) Finally, I used an SQL RDB to do an Outer Join on the Code Set - all 676 
possibilities.  Adding a three character code synonym does not increase the 
code page size.  It is then possible to split this registry into lists of 
codes 1) Present, 2) Missing and 3) Slack (in the Linear Programming usage).
4) Put the files in (FODS - (Flat XML) Open Document Spreadsheets format) so 
that European Civil Servants can not whine about data quality (got your back, 
DERI, you too ABS).

http://www.rustprivacy.org/2014/balance/gts/geonames_domains.zip
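The 676-slot code page and the outer-join partitioning described in step 3 can be sketched in a few lines; a minimal sketch, where the set of assigned codes is a tiny illustrative sample rather than the full ISO 3166 registry:

```python
from itertools import product
from string import ascii_uppercase

# The full "code page": all 26 x 26 = 676 possible two-letter codes.
code_page = {a + b for a, b in product(ascii_uppercase, repeat=2)}

# Hypothetical sample of assigned ISO 3166-1 alpha-2 codes; the real
# registry is much larger.
assigned = {"AU", "CI", "DE", "FR", "IE", "IT", "US"}

present = sorted(code_page & assigned)  # codes in use
slack = sorted(code_page - assigned)    # unassigned "slack" slots

print(len(code_page), len(present), len(slack))  # 676 7 669
```

Splitting the registry this way makes the Linear Programming "slack" explicit: every code that could exist but is not assigned is enumerable, and adding three-letter synonyms never grows the page.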

Unfortunately, when RDF Lists of Place Names are filtered through previously 
written applications the result is often unhelpful additions; however, these 
steps should ameliorate the problem significantly.

--Gannon






On Mon, 8/11/14, Sarven Capadisli i...@csarven.ca wrote:

 Subject: Re: Linked SDMX Data
 To: public-lod@w3.org
 Date: Monday, August 11, 2014, 7:20 AM
 
 On 2014-08-05 12:08, Sarven Capadisli wrote:
  On 2014-04-23 15:31, Sarven Capadisli wrote:
  On 2014-04-22 14:18, Sarven Capadisli wrote:
  On 2013-08-08 15:17, Sarven Capadisli wrote:
  On 03/08/2013 01:04 PM, Sarven Capadisli wrote:
  On 02/15/2013 02:42 PM, Sarven Capadisli wrote:

  Ahoy hoy,

  OECD Linked Data: http://oecd.270a.info/
  BFS Linked Data: http://bfs.270a.info/
  FAO Linked Data: http://fao.270a.info/
  Linked SDMX Data: http://csarven.ca/linked-sdmx-data
  ECB Linked Data: http://ecb.270a.info/
  IMF Linked Data: http://imf.270a.info/
  UIS Linked Data: http://uis.270a.info/
  FRB Linked Data: http://frb.270a.info/
  BIS Linked Data: http://bis.270a.info/
  ABS Linked Data: http://abs.270a.info/

  -Sarven
  http://csarven.ca/#i




Re: SemStats 2014 Call for Challenge

2014-08-06 Thread Gannon Dick
Hi Sarven,

The US Census Bureau has piles of data available via an API.

http://www.census.gov/developers/

You need a key.  I do not see anything in the terms of service about citizenship
http://www.census.gov/data/developers/about/terms-of-service.html

I think the key motivation is to avoid DOS attacks.  The US Census Bureau is 
known for its uncharacteristic moments of lucidity with respect to Government 
Work.

Economic Data release in the US has always been a sensitive subject.  The 
problems arose about a decade before the European Union, oops, I meant the 
Congress of Vienna ;-)
The story is here: http://www.census.gov/prod/2003pubs/conmono2.pdf
This document also explains in detail exactly what they mean by Statistical 
Safeguards for non-disclosure of confidential business information.

--Gannon



On Tue, 8/5/14, Sarven Capadisli i...@csarven.ca wrote:

 Subject: SemStats 2014 Call for Challenge
 To: Linking Open Data public-lod@w3.org, SW-forum semantic-...@w3.org
 Date: Tuesday, August 5, 2014, 2:48 AM
 
 SemStats 2014 Call for Challenge
 
 
 Second International Workshop on Semantic Statistics
 (SemStats 2014)
 Workshop website: http://semstats.org/
 Event hashtags: #SemStats #ISWC2014
 
 in conjunction with
 
 ISWC 2014
 The 13th International Semantic Web Conference
 Riva del Garda - Trentino, Italy, October 19-23, 2014
 http://iswc2014.semanticweb.org/
 
 Summary
 ---
 The SemStats Challenge is back with more action! It is
 organized in the 
 context of the SemStats 2014 workshop. Participants are
 invited to apply 
 statistical techniques and semantic web technologies within
 one of two 
 possible tracks, namely the Census Data Track and Open
 Track. Following 
 up on the success of last year's Challenge, this year, the
 Census Data 
 Track will have data from France, Italy, and Ireland. We
 would also like 
 to introduce the new Open Track, where any type of
 statistical data of 
 your choice may be used in the challenge.
 
 The challenge will consist in the realization of mashups or
 
 visualizations, but also on comparisons, analytics,
 alignment and 
 enrichment of the data and concepts involved in statistical
 data (see 
 below for the data made available and additional
 requirements).
 
 The deadline for participants to submit their short papers and
 application is Sun 7th September, 2014, 23:59 Hawaii Time.
 
 It is strongly suggested to all challenge participants to
 send contact 
 information to semstats2...@easychair.org
 in order to be kept informed 
 in case of any changes in the data provided. For any
 questions on the 
 challenge, please contact semstats2...@easychair.org.
 
 Census Data Track
 -
 We would like to point you to plenty of raw data. The
 conversion process 
 will be considered as part of the challenge.
 
 * Istat (Italian National Institute of Statistics) offers
 Census 1991, 
 2001, 2011 data and metadata: 
 http://www.istat.it/it/archivio/104317#variabili_censuarie
 (See 
 Variabili censuarie / Censimento della popolazione e delle
 
 abitazioni), which gives the population count by age range
 and sex at a 
 very detailed geographic level.
 
 * INSEE (National Institute of Statistics and Economic
 Studies) can 
 provide different things:
 
      1. Detailed results for Census
 2011: 
 
http://insee.fr/fr/themes/detail.asp?reg_id=0ref_id=fd-RP2011page=fichiers_detail/RP2011/telechargement.htm
 
 giving results on individuals only at the region level but
 with a great 
 number of other variables (see 
 
http://insee.fr/fr/ppp/bases-de-donnees/fichiers_detail/RP2011/doc/contenu_RP2011_INDREG.pdf)
 
      2. Detailed results for Census
 2010: 
 
http://insee.fr/fr/themes/detail.asp?reg_id=0ref_id=fd-RP2010page=fichiers_detail/RP2010/telechargement.htm
 
 with, for example, results on individuals at a smaller
 geographic level
 
      3. Key figures for Census 2011 on
 different themes at the 
 municipality level: 
 
http://insee.fr/fr/bases-de-donnees/default.asp?page=recensement/resultats/2011/donnees-detaillees-recensement-2011.htm
 
 * ABS (Australian Bureau of Statistics) offers Census 2011
 data at 
 http://stat.abs.gov.au/ . Data that is in particularly
 of interest to 
 this challenge can be found by navigating to: Social
 Statistics > 
 2011 Census of Population and Housing > Time Series
 Profiles (Local 
 Government Areas) > T03 Age by Sex (LGA)
 
 * CSO (Central Statistics Office) Ireland's Census 2011 data
 and 
 metadata available as Linked Data: http://data.cso.ie/.
 
 * You are welcome to use any other Census data whether it is
 Linked Data 
 based or not
 
 Open Track
 --
 There is one essential requirement for the Open Track:
 papers must 
 describe a publicly available application. We would love to
 see everyone 
 play and learn from what you have created. You are welcome
 to use any 
 statistical data whether it is already in 

Re: Call for Linked Research

2014-07-30 Thread Gannon Dick
A minimum amount of anything causes Occam's Razor to succeed, any more than 
that is a failure of reason.  The unsupported idea that the intellectual 
dishonesty of the Publishing Industry taints *your* data was the original 
target of the Wrath of Sarven(c).

Right Sarven ?

On Wed, 7/30/14, Paul Houle ontolo...@gmail.com wrote:

 I think it's a little more than tax avoidance.

 On Wed, Jul 30, 2014 at
 9:50 AM, Gannon Dick gannon_d...@yahoo.com
 wrote:
 
 
 
 
 
 On Wed, 7/30/14, Giovanni Tummarello g.tummare...@gmail.com
 wrote:
 
 
 
  So Sarvem let us be rational and pick Occam's razor
 style simplest explanation ...
 
 
 
 By lex parsimonae (Occam's Razor) Tax Avoidance is
 magic.
 
 By Bell's Theorem, Tax Avoidance is theft (of
 services).
 
 Theft of Software As A Service is ... making me dizzy and
 diz-interested.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 -- 
 Paul Houle
 Expert on Freebase,
 DBpedia, Hadoop and RDF
 (607) 539 6254  
  paul.houle on Skype   ontolo...@gmail.com




Re: Just what *does* robots.txt mean for a LOD site?

2014-07-27 Thread Gannon Dick


On Sat, 7/26/14, aho...@dcc.uchile.cl aho...@dcc.uchile.cl wrote:

 The difference in opinion remains to what extent Linked Data
 agents need to pay attention to the robots.txt file.
 
 As many others have suggested, I buy into the idea of any
 agent not relying document-wise on user input being subject to
 robots.txt.

=
+1
Just a comment.

Somewhere, sometime, somebody with Yahoo Mail decided that public-lod mail was 
spam, so every morning I dig it out because I value the content.

Of course, I could wish for a Linked Data Agent which does that for me, but 
that would be to complete a banal or vicious cycle, depending on the circle 
classification scheme in use.  I'm looking for virtuous cycles and in the case 
of robots.txt, The lady doth protest too much, methinks.
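For agents that do buy into the idea, honouring robots.txt takes nothing but the standard library; a minimal sketch with an inline, hypothetical robots.txt:

```python
from urllib import robotparser

# A hypothetical robots.txt that blocks one path for all agents.
rules = """
User-agent: *
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# A polite Linked Data agent checks before dereferencing any URI.
print(rp.can_fetch("LDAgent/1.0", "http://example.org/resource/42"))  # True
print(rp.can_fetch("LDAgent/1.0", "http://example.org/private/x"))    # False
```

In production the rules would come from `RobotFileParser.set_url(...)` and `read()` against the target host rather than an inline string.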
--Gannon





Re: OGC Temporal DWG. Was: space and time

2014-07-24 Thread Gannon Dick
Hi Chris,

who wrote:
One concern that I have is that we do not re-invent the
 wheel, and do nugatory work, hence this email. I do not
 envisage that we will need to do much with Calendars, which
 have been covered so well by Dershowitz and Reingold.

=
No question the quality of the issue coverage (Calendars) is first rate.

However, the computations are not transparently self-evident and the references 
you cite in the Wiki are not available on-line - or are they ?

3. Calendrical Tabulations 1900-2200, Edward M. Reingold, Nachum Dershowitz. 
Hardcover: 636 pages. Publisher: Cambridge University Press (16 Sep 2002) 
Language: English ISBN-10: 0521782538 ISBN-13: 978-0521782531

4. Calendrical Calculations, Nachum Dershowitz, Edward M. Reingold. Paperback: 
512 pages. Publisher: Cambridge University Press; 3 edition (10 Dec 2007) 
Language: English ISBN-10: 0521702380 ISBN-13: 978-0521702386 

Accessibility to Wheels known to have been invented is a Wiki issue, I think.

--Gannon





On Thu, 7/24/14, Little, Chris chris.lit...@metoffice.gov.uk wrote:

 Subject: OGC Temporal DWG. Was: space and time
 To: Gannon Dick gannon_d...@yahoo.com, andrea.per...@jrc.ec.europa.eu 
andrea.per...@jrc.ec.europa.eu, frans.kni...@geodan.nl 
frans.kni...@geodan.nl, simon@csiro.au simon@csiro.au, Chris 
Beer ch...@codex.net.au
 Cc: public-loc...@w3.org public-loc...@w3.org, public-egov...@w3.org 
public-egov...@w3.org, public-lod public-lod@w3.org, 
tempo...@lists.opengeospatial.org tempo...@lists.opengeospatial.org, Piero 
Campalani cmp...@unife.it, Matthias Müller matthias_muel...@tu-dresden.de
 Date: Thursday, July 24, 2014, 9:36 AM
 
 
 Dear Colleagues,
  
 OGC started a Temporal Domain Working Group last year
 to address a number of problems in the geospatial domain. In
 particular, that time is usually just viewed as Yet Another
 Attribute of Features, rather than a first class
 coordinate.
  
 We agreed earlier this year, in Geneva, that the OGC
 Naming Authority would have a branch to register Temporal,
 and index based, Coordinate Reference Systems, and we agreed
 on the fundamental attributes that a CRS should have to be
 registered. 
  
 We hope to produce a Best Practice document this year
 to help clarify many confusions between CRSs, notations,
 calendars, operations and calculations. I think that now we
 have a good enough understanding of the underlying
 conceptual issues and current
 geospatial standards.
  
 We have been accumulating info on an open wiki 
http://external.opengeospatial.org/twiki_public/TemporalDWG/WebHome
 and discussing via our
 mailing list, though we are not very disciplined about
 it.
  
 One concern that I have is that we do not re-invent the
 wheel, and do nugatory work, hence this email. I do not
 envisage that we will need to do much with Calendars, which
 have been covered so well by Dershowitz and Reingold.
  
 Best wishes, Chris
  
  
 Chris Little
 
 
 Co-Chair, OGC Meteorology & Oceanography Domain Working
 Group
 Co-Chair, OGC Temporal Domain
 Working Group
 
 
 
 IT Fellow -
 Operational Infrastructures
 
 
 Met Office  FitzRoy Road  Exeter  Devon  EX1 3PB 
 United Kingdom
 
 
 Tel: +44(0)1392 886278  Fax: +44(0)1392 885681  Mobile:
 +44(0)7753 880514
 
 
 E-mail: chris.lit...@metoffice.gov.uk  http://www.metoffice.gov.uk
  
 I am normally at work Tuesday,
 Wednesday and Thursday each week
  
  
  
  
  
  
  
  
 




Re: [help] semanticweb.org admin

2014-07-21 Thread Gannon Dick
Apparently SemanticWeb.Org is run by a Mr./Ms(?) W. Ki. First name Wi. He/she 
is quite friendly, but prefers to speak RDF, which I often mistake for a Pidgin 
Klingon variant.  Maybe it's just me.

In any case, here is the community residence home page: 
http://semanticweb.org/wiki/Main_Page

I suspect the community may be located in the Lake Wobegon District of Erewhon, 
Minnesota because everybody seems to have above average expertise and Wi Ki 
makes executive decisions.

HTH

--Gannon



On Mon, 7/21/14, Maxim Kolchin kolchin...@gmail.com wrote:

 Subject: Re: [help] semanticweb.org admin
 To: Stéphane Corlosquet scorlosq...@gmail.com
 Cc: a.l.gent...@dcs.shef.ac.uk, public-lod@w3.org public-lod@w3.org
 Date: Monday, July 21, 2014, 7:25 AM
 
 Hi Annalisa,
 
 Were you able to contact someone? I've sent emails to all people
 mentioned here, but no one responded to me.
 
 Thank you in advance!
 Maxim Kolchin
 PhD Student
 ITMO University (National Research
 University)
 E-mail: kolchin...@gmail.com
 Tel.: +7 (911) 199-55-73
 
 
 On Wed, Mar 26, 2014 at 6:17 PM, Stéphane Corlosquet scorlosq...@gmail.com wrote:
  Knud Möller was the main developer of this site:
  http://www.linkedin.com/in/knudmoeller / https://twitter.com/knudmoeller

 On Wed, Mar 26, 2014 at 9:34 AM, Anna Lisa Gentile a.l.gent...@dcs.shef.ac.uk wrote:

  Hi guys, just a quick question.
  Does anyone know who to contact for technical questions about http://data.semanticweb.org ?
  The admin contact ad...@data.semanticweb.org seems unreachable atm.
  Thank you!
  Annalisa

  --
  Anna Lisa Gentile
  Research Associate
  Department of Computer Science
  University of Sheffield
  http://staffwww.dcs.shef.ac.uk/people/A.L.Gentile
  office: +44 (0)114 222 1876

 --
  Steph.




Re: [help] semanticweb.org admin

2014-07-21 Thread Gannon Dick
Hmmm ... Postal Code DERI, we're making some progress :)

On Mon, 7/21/14, Bernard Vatant bernard.vat...@mondeca.com wrote:

 Subject: Re: [help] semanticweb.org admin
 To: Gannon Dick gannon_d...@yahoo.com
 Cc: public-lod@w3.org public-lod@w3.org, Stefan Decker 
stefan.dec...@deri.org
 Date: Monday, July 21, 2014, 8:51 AM
 
 According to http://whois.net/whois/semanticweb.org the DNS registrant is
 Stefan Decker, based somewhere in Galway, Ireland :)

 2014-07-21 15:40 GMT+02:00 Gannon Dick gannon_d...@yahoo.com:

 Apparently SemanticWeb.Org is run by a Mr./Ms(?) W. Ki. First name Wi.
 He/she is quite friendly, but prefers to speak RDF, which I often mistake
 for a Pidgin Klingon variant.  Maybe it's just me.

 In any case, here is the community residence home page: 
 http://semanticweb.org/wiki/Main_Page

 I suspect the community may be located in the Lake Wobegon District of
 Erewhon, Minnesota because everybody seems to have above average expertise
 and Wi Ki makes executive decisions.

 HTH

 --Gannon

 On Mon, 7/21/14, Maxim Kolchin kolchin...@gmail.com wrote:

  Subject: Re: [help] semanticweb.org admin
  To: Stéphane Corlosquet scorlosq...@gmail.com
  Cc: a.l.gent...@dcs.shef.ac.uk, public-lod@w3.org public-lod@w3.org
  Date: Monday, July 21, 2014, 7:25 AM

  Hi Annalisa,

  Were you able to contact someone? I've sent emails to all people
  mentioned here, but no one responded to me.

  Thank you in advance!
  Maxim Kolchin
  PhD Student
  ITMO University (National Research University)
  E-mail: kolchin...@gmail.com
  Tel.: +7 (911) 199-55-73

  On Wed, Mar 26, 2014 at 6:17 PM, Stéphane Corlosquet scorlosq...@gmail.com wrote:
   Knud Möller was the main developer of this site:
   http://www.linkedin.com/in/knudmoeller / https://twitter.com/knudmoeller

  On Wed, Mar 26, 2014 at 9:34 AM, Anna Lisa Gentile a.l.gent...@dcs.shef.ac.uk wrote:

   Hi guys, just a quick question.
   Does anyone know who to contact for technical questions about
   http://data.semanticweb.org ?
   The admin contact ad...@data.semanticweb.org seems unreachable atm.
   Thank you!
   Annalisa

   --
   Anna Lisa Gentile
   Research Associate
   Department of Computer Science
   University of Sheffield
   http://staffwww.dcs.shef.ac.uk/people/A.L.Gentile
   office: +44 (0)114 222 1876

  --
   Steph.

 -- 
 Bernard Vatant
 Vocabularies & Data Engineering
 Tel: +33 (0)9 71 48 84 59
 Skype: bernard.vatant
 http://google.com/+BernardVatant
 Mondeca
 35 boulevard de Strasbourg 75010 Paris
 www.mondeca.com
 Follow us on Twitter: @mondecanews




Re: Linked Data and Semantic Web CoolURIs, 303 redirects and Page Rank.

2014-07-18 Thread Gannon Dick

 On 18 July 2014 14:05, Mark Fallu m.fa...@griffith.edu.au wrote:

  I am attempting to understand how the CoolURI 303 redirect pattern for
  the semantic web (http://www.w3.org/TR/cooluris/) can be implemented
  without negative impact on search engines.

 Just a quick question:

 Is there any reason you want to use 303s?

 I personally consider it an anti-pattern.

Thank you, Melvin. I think so too.

short version: anti-pattern

long version:
Eastern Australia is 13 hours ahead of the Central United States so ...
On Saturday night in Dallas there is no semantic difference between praying 
Australians and liquored-up cowboys.  Bug or a Feature ? No, anti-pattern.
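Whatever one thinks of the pattern, its mechanics are small. A minimal WSGI sketch of the Cool URIs 303 arrangement under debate, where the /id/ versus /doc/ path split is illustrative rather than prescribed:

```python
def cool_uri_app(environ, start_response):
    """Minimal WSGI sketch of the Cool URIs 303 pattern:
    /id/... identifies the real-world thing, /doc/... describes it."""
    path = environ["PATH_INFO"]
    if path.startswith("/id/"):
        # The thing itself cannot be transferred over HTTP:
        # redirect the client to a document about it instead.
        start_response("303 See Other",
                       [("Location", "/doc/" + path[len("/id/"):])])
        return [b""]
    if path.startswith("/doc/"):
        start_response("200 OK", [("Content-Type", "text/turtle")])
        return [b"# description of " + path[len("/doc/"):].encode()]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]
```

The extra round trip per resource is exactly the cost (in latency and, arguably, in search-engine link equity) that the thread is weighing against hash URIs.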





Re: Real-world concept URIs

2014-07-17 Thread Gannon Dick
Hi Pieter,

I disagree, pending clarification.

If the transportation costs of (RESTful) URI's - an Ontology - between Top 
Level Domains TLD is Zero - more specifically exp(Zero)-1=Zero, then the 
URI's are entangled (as in Quantum Entanglement).  In this case, the  URI's 
are not broken, but rather the URL's are NOT entangled, they are distinct 
TLD's.

Alternate synthesis:
bicycle needs 2 URI's (ownership and fuel)
Porsche needs 2 URI's (ownership and fuel)

A bicycle with nobody to pedal it and a Porsche with no gas both obey Newton's 
First Law, which is quite Real World.

--Gannon

On Wed, 7/16/14, Pieter Colpaert pieter.colpa...@ugent.be wrote:

 Subject: Real-world concept URIs
 To: public-lod@w3.org community public-lod@w3.org
 Date: Wednesday, July 16, 2014, 8:55 AM
 
 Hi list,
 
 Short version:
 
 I want real-world concepts to be able to have a URI without a "http://".
 You cannot transfer any real-world concept over an Internet protocol
 anyway. My questions about changing this:
 
   * If you don't agree, why?
   * If you do agree, should we change the definition of a URI? Will this
 break existing Linked Data infrastructure?
 
 Long version:
 
 I'm overlooking the development of a hypermedia application*
 at a server 
 which redirects all http://{foobar} URIs towards
 https://{foobar}. 
 Furthermore, in order to make a distinction between
 real-world objects 
 and their representation, I have added #object at the end
 of the URIs 
 for the real-world objects in the store behind it.
 
 Now I have to explain these developers that each time a
 request is done 
 on the website, they will have to look up what the requested
 URI was, 
 then substitute https:// with http:// and then concatenate
 #object to 
 the URI, in order to be able to find statements which will
 be useful in 
 the application. The reason behind this is of course the
 real-world 
 objects which cannot be retrieved over HTTP, yet the
 representation has 
 a different URI, which is automatically generated as
 everything starting 
 at # gets deleted anyway.
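The lookup rule the developers are being asked to follow amounts to a one-line rewrite; a minimal sketch of that rule (example.org is a placeholder):

```python
def store_key(request_uri):
    """Hypothetical sketch of the lookup rule described above: the
    store holds statements about http://...#object URIs, while the
    site itself is served over https://."""
    uri = request_uri.replace("https://", "http://", 1)
    if not uri.endswith("#object"):
        uri += "#object"
    return uri

print(store_key("https://example.org/zebra"))  # http://example.org/zebra#object
```

The friction in the thread is precisely that every consumer has to know and re-implement this mapping, because the served URI and the stored URI differ in both scheme and fragment.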
 
 Now I also have to convince another reuser of the data, a
 native 
 application builder, that he should use these URIs with
 http:// and 
 #object. Inside his application, he does have his own
 style of 
 identifiers, which looks very close to URIs, the only thing
 that lacks 
 is the protocol. So I've asked him to add the protocol to
 the URIs for 
 real-world objects and add #object at the end. He ended up
 giving me 
 something with "https://" in the beginning. Which makes a lot 
 of sense: 
 that's what works on the Web, but sadly not in my store.
 
 This process could have been a lot simpler with a tiny
 change: allowing 
 URIs identifying real-world objects not to have a protocol.
 Why would 
 you add http:// to something you cannot GET anyway? What if
 we would 
 allow our real-world URI to be just {foobar} and the URI of
 the 
 representation being http://{foobar} or https://{foobar}?
 Now the 
 developers just have to remove the protocol in order to find
 useful 
 triples about what the client requested in the store.
 
 This would make sense in a lot of cases: e.g., my
 organization is 
 ugent.be, and its Web representation can be found at http://ugent.be.
 
 Filling out ugent.be in a browser will automatically refer
 you to its 
 representation.
 
 Your thoughts?
 
 Kind regards,
 
 Pieter
 
 * This application I'm working on: http://iRail.be
 
 




Re: Real-world concept URIs

2014-07-17 Thread Gannon Dick


   If we want to differentiate between
       I like the zebra;
       I don't like the document about the zebra.

  But why do they need to be on the same domain? Several parties on
  different domains can represent information about the animal zebra.
  They just seem like different things to me.

===
There is a "what's the problem again ?" component to the problem (rinse, 
repeat).

As evidence, I offer two factoids:
a) The EU has 24 Official languages (http://europa.eu/)
b) Americans speak 100+ languages at home 
(http://www.census.gov/hhes/socdemo/language/) and have one Official language.

It seems to me those are two solutions to the problem.
What's the problem again ? :-)

--Gannon
 




Re: Real-world concept URIs

2014-07-17 Thread Gannon Dick
Right you are Paul.  I live in Texas.  Mexico does not want us back after what 
we did to their food, but if they find out what we do to Spanish I predict they 
will be building fences at the border.

IMHO, this whole catastrophe could be avoided if Google Translate spoke Zebra.  
Is it just me?
--Gannon

On Thu, 7/17/14, Paul Houle ontolo...@gmail.com wrote:

 Subject: Re: Real-world concept URIs
 To: Gannon Dick gannon_d...@yahoo.com
 Cc: Ruben Verborgh ruben.verbo...@ugent.be, Luca Matteis 
lmatt...@gmail.com, Pieter Colpaert pieter.colpa...@ugent.be, 
public-lod@w3.org community public-lod@w3.org
 Date: Thursday, July 17, 2014, 3:15 PM
 
 I can't speak for other countries in North, South and Central America, but
 I can say that the United States does not have an official language, even
 though people who hate immigrants wish it did.
 
 On Thu, Jul 17, 2014 at
 3:30 PM, Gannon Dick gannon_d...@yahoo.com
 wrote:
 
 
     If we want to differentiate between
         I like the zebra;
         I don't like the document about the zebra.

     But why do they need to be on the same domain? Several parties on
     different domains can represent information about the animal zebra.
     They just seem like different things to me.

  ===
  There is a "what's the problem again ?" component to the problem
  (rinse, repeat).

  As evidence, I offer two factoids:
  a) The EU has 24 Official languages (http://europa.eu/)
  b) Americans speak 100+ languages at home
  (http://www.census.gov/hhes/socdemo/language/) and have one Official
  language.

  It seems to me those are two solutions to the problem.
  What's the problem again ? :-)

  --Gannon
 
 
 
 
 
 
 -- 
 Paul Houle
 Expert on Freebase,
 DBpedia, Hadoop and RDF
 (607) 539 6254   
 paul.houle on Skype   ontolo...@gmail.com




Re: Education

2014-07-12 Thread Gannon Dick
Hi Hugh,

Education being all about seeing the patterns, it's nice when the classical can 
be related to the new concepts.  So ...

Suggestion:
The US Government (NOAA) offers a spreadsheet which calculates Sunrise, Sunset, 
etc. from first principles (parabolics not hyperbolics).
There is a javascript implementation
http://www.esrl.noaa.gov/gmd/grad/solcalc/sunrise.html
And a spreadsheet
http://www.esrl.noaa.gov/gmd/grad/solcalc/calcdetails.html
and an explanation of the calculations in American (English available on 
request, I think)

The Linked Data connection is that the Julian Century is a quad of the Julian 
Day (which in turn is a big number) so quarter Great Years, quarter Years and 
quarter Days (6 hours) are in resonance.  The challenge is to square a 16/2=8 
hour workday with a 6*2=12 hour half-day. Adding Twilight as useable workspace 
is a harsh idea, it diverges limit up (all work, no sleep)).  What you really 
need is the triple (4:3) Harmonic. (http://www.gandraxa.com/length_of_day.xml)

The Julian Century and Julian Day are in quads (quaternions or Gradians (1/4 
Centigrades, a can of kelvins is a can of worms)), the other angles are in 
Degrees and Radians.  This data base might be helpful: 
http://lists.w3.org/Archives/Public/public-lod/2014Jul/0030.html

--Gannon
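Gannon's point that the Julian Century is derived from the Julian Day (a big number) can be checked in a few lines. A minimal sketch, assuming Python: the 1721425 offset and the J2000-based century formula (the one the NOAA spreadsheet uses) are standard; the helper names are mine.

```python
from datetime import date

def julian_day(d):
    # Julian Day Number (at noon UTC) for a proleptic Gregorian date;
    # date.toordinal() counts days from 0001-01-01, and 1721425 aligns
    # that count with the astronomical Julian Day scale.
    return d.toordinal() + 1721425

def julian_century(jd):
    # Julian Centuries of 36525 days since the J2000.0 epoch
    # (JD 2451545.0 = 2000-01-01 12:00 UTC), as in the NOAA spreadsheet.
    return (jd - 2451545.0) / 36525.0
```

For example, 2050-01-01 lands almost exactly half a Julian Century after J2000.0.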



On Sat, 7/12/14, Hugh Glaser h...@glasers.org wrote:

 Subject: Education
 To: Linked Data community public-lod@w3.org
 Date: Saturday, July 12, 2014, 6:02 AM
 
 The other day I was asked if I would
  like to run a Java module for some Physics & Astronomy
 students.
 I am so far from plain Java and that sort of thing now there
 was almost a cognitive dissonance.
 
 But it did cause me to ponder on about what I would do for
 such a requirement, given a blank sheet.
 
 For people whose discipline is not primarily technical, what
 would a syllabus look like around Linked Data as a focus,
 but also causing them to learn lots about how to just do
 stuff on computers?
 
 How to use a Linked Data store service as schemaless
 storage:
 bit of intro to triples as simply a primitive representation
 format;
 scripting for data transformation into triples - Ruby,
 Python, PHP, awk or whatever;
 scripting for http access for http put, delete to store;
 simple store query for service access (over http get);
 scripting for data post-processing, plus interaction with
 any data analytic tools;
 scripting for presentation in html or through visualisation
 tools.
 
 It would be interesting for scientists and, even more,
 social scientists, archeologists, etc (alongside their
 statistical package stuff or whatever).
 I think it would be really exciting for them, and they would
 get a lot of skills on the way - and of course they would
 learn to access all this Open Data stuff, which is becoming
 so important.
 I’m not sure they would go for it ;-)
 
 Just some thoughts.
  And does anyone know of such modules, or is anyone even teaching
  them?
 
 Best
 Hugh
 -- 
 Hugh Glaser
    20 Portchester Rise
    Eastleigh
    SO50 4QS
 Mobile: +44 75 9533 4155, Home: +44 23 8061 5652
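The workflow Hugh sketches (script data into triples, HTTP PUT them to a store) can be illustrated briefly. A minimal sketch, assuming Python: the base URI, the `vocab/` namespace, and the store endpoint are hypothetical placeholders, not a real service.

```python
import urllib.request

def row_to_ntriples(row, base="http://example.org/"):
    """Turn one dict-like record into N-Triples, one triple per non-id field."""
    subj = f"<{base}{row['id']}>"
    lines = []
    for key, value in row.items():
        if key == "id":
            continue
        pred = f"<{base}vocab/{key}>"  # hypothetical vocabulary namespace
        obj = '"' + str(value).replace('\\', '\\\\').replace('"', '\\"') + '"'
        lines.append(f"{subj} {pred} {obj} .")
    return "\n".join(lines)

def put_graph(endpoint_url, ntriples):
    """HTTP PUT the serialized triples to a (hypothetical) graph store URL."""
    req = urllib.request.Request(
        endpoint_url,
        data=ntriples.encode("utf-8"),
        method="PUT",
        headers={"Content-Type": "application/n-triples"},
    )
    return urllib.request.urlopen(req)  # requires a live store; not called here
```

Usage would be something like `put_graph("http://localhost:8080/store?graph=zebras", row_to_ntriples(row))` against whatever store the students were given.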
 
 




Re: Attribute or Property Ontology?

2014-07-11 Thread Gannon Dick
Hi Mike,

Maybe this is just too difficult to do, and that is the reason I'm not  finding 
any prior work. ;)
=
There is some prior work called science.
E=mc^2 means there is no "Edge of the Universe 1/4 mile ahead, please creep" 
sign. Ontologists are trying to find the sign, but in the meantime photons will 
just have to imagine it's there. 
The rest of us will have to use the (199*(2/2))xQuarters+(First+Last)/2 = 100 
Quarters scale: 
http://lists.w3.org/Archives/Public/public-lod/2014Jul/0030.html
--Gannon




space and time

2014-07-10 Thread Gannon Dick
Hi All,

Apologies for the cross-posting (I come in peace, with free stuff)

Some time ago Simon suggested that there might be some measures which should be 
taken with regard to historical time frames and the Julian Calendar. IIRC, this 
is something Chris asked me, some time ago, to do as well.

The general problem is: once you have overshot the target by you don't know how 
much, how much do you back up ?  The computation is not Quantum Mechanics or 
GPS, it is Linear Algebra (on intervals) and the shape of the frequency bin 
(box or circle in 2D, no other choices).

This problem comes up with Calendars frequently since there is no way to 
un-Integrate fractional time.

I wrote a DB of Calendar Quarters for the Julian Century (1950 - 2050).  Anyone 
is welcome to it.  There are XML and SQL versions. Other formats are a 
possibility.

http://www.rustprivacy.org/2014/balance/quads/

The same algebra (matrix decomposition by fours) also applies to Unicodes and 
Acronyms for Government work where the rdfs:label / of the elements in a Code 
Page does not change from quarter to quarter, year to year, century to century 
etc., but also for the money handling (accounting) aspects of Government. In 
the US, Revenue Collection is in Spring and the Fiscal Year starts in Fall 
- I use the quotes not because this is an American system but rather because 
Downunder, the Equinox labels mean different things.

It is sort of a strange idea:  Unicodes and Acronyms have no right to be 
forgotten.

--Gannon 







Re: Encoding an incomplete date as xsd:dateTime

2014-06-26 Thread Gannon Dick
Simon,

On Wed, 6/25/14, Simon Spero sesunc...@gmail.com wrote:
 [Reasoning about named calendar years in terms of
 intervals bounded by time points is painful, especially for
 years in the future.  Leap seconds can be added with only
 about six months notice ( http://en.wikipedia.org/wiki/Leap_second
 ). Leap seconds are a PITA. ] 


What possible use could Leap Seconds be if Time Series statistics (and Public 
Records) are compiled on a Quarterly basis ?  It is a bit cynical to convolute 
typographical errors with semantics, but in fact that is what you are doing is 
you use something other than (1461 Days / 16) = Calendar Quarter.

http://www.rustprivacy.org/2014/balance/gts/

and

http://www.rustprivacy.org/2014/balance/gts/StratML-GTS.html

--Gannon
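The (1461 Days / 16) = Calendar Quarter arithmetic above is easy to check. A minimal sketch, assuming Python; the function name is illustrative.

```python
# One cycle of 4 calendar years = 1461 days (including one leap day),
# divided into 16 equal Calendar Quarters as described above.
CYCLE_DAYS = 1461
QUARTER_DAYS = CYCLE_DAYS / 16  # 91.3125 days per quarter

def quarter_index(day_in_cycle):
    """Map a day offset 0..1460 within the 4-year cycle to a quarter 0..15."""
    return int(day_in_cycle // QUARTER_DAYS)
```

Sixteen quarters of 91.3125 days tile the 1461-day cycle exactly, which is the point of compiling time series on that grid.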
 
 



Re: Quantum Superposition and Democracy

2014-05-31 Thread Gannon Dick
Hello Milton and Michael,

I agree with what both of you have said.  There is a lot of similarity in the 
concepts.

Superposition can be seen as a statistical metaphor for fairness and equality. 
I would be thrilled to leave it at that, but I doubt the true believers in the 
high-frequency News Cycle will leave it at that.  The rewards are very high. 
However, they have assumed the existence of a quantum tunnel through and past 
Bell's Theorem.

The Rule of Law does not require this dodgy assumption; Classical Mechanics is 
enough, as Michael pointed out.  Because of Agency resource procurement and 
disbursement entanglements, the day-to-day workings of a bureaucracy do not 
require it either.  
Bell's Theorem does not threaten the legitimacy of organizations (including 
governments) on the one hand, nor should it be taken as proof that social 
inequality originates in either Classical Mechanics or Quantum Mechanics on the 
other.

--Gannon

On Fri, 5/30/14, ProjectParadigm-ICT-Program metadataport...@yahoo.com wrote:

 Subject: Re: Quantum Superposition and Democracy
 To: Michael Brunnbauer bru...@netestate.de, Gannon Dick 
gannon_d...@yahoo.com
 Cc: public-lod@w3.org public-lod@w3.org
 Date: Friday, May 30, 2014, 8:01 PM
 
 
 It is funny that you should mention eGovernance in relation to quantum
 superposition. I would like to venture the hypothesis that corruption, i.e.
 the lack of integrity, governance and rule of law, shows a lot of similarity
 in terms of quantum entanglement and Bell's theorem for the paired actors and
 paired processes involved.

 Milton Ponson
 GSM: +297 747 8280
 PO Box 1154, Oranjestad
 Aruba, Dutch Caribbean
 Project Paradigm: A structured approach to bringing the tools for sustainable
 development to all stakeholders worldwide by creating ICT tools for NGOs
 worldwide and: providing online access to web sites and repositories of data
 and information for sustainable development

 This email and any files transmitted with it are confidential and intended
 solely for the use of the individual or entity to whom they are addressed. If
 you have received this email in error please notify the system manager. This
 message contains confidential information and is intended only for the
 individual named. If you are not the named addressee you should not
 disseminate, distribute or copy this e-mail.
  
 
 On Friday, May 30, 2014 7:03 PM, Michael Brunnbauer bru...@netestate.de
 wrote:
 
 
  
 Hello Gannon,

 On Fri, May 30, 2014 at 02:50:47PM -0700, Gannon Dick wrote:
  Dutch researchers announced yesterday they had succeeded in reliably
  transmitting information between quantum bits separated by 3 meters.  This
  relies on Quantum Superposition, what Einstein called "spooky action at a
  distance." (he liked things more classical).

 From what I can find with Google, I guess that either the press has failed
 *miserably* on QM one more time or that Ronald Hanson is a quack.

 Quantum entanglement was demonstrated decades ago - refuting Einstein's
 view on QM: http://en.wikipedia.org/wiki/Bells_Theorem

 Most physicists assume that faster-than-light communication (non-localism)
 is not possible and prefer non-realism instead:
 http://en.wikipedia.org/wiki/Faster-than-light#Quantum_mechanics

 Note that Quantum information is not the same as information.

 If Ronald Hanson had proven non-localism, there would be a turmoil in the
 scientific community that I do not see reflected in the Web.

  For the Web this is exciting news

 Non-localism would be exciting news for the Web? Aha.

  Truth is that separate agencies of government acting autonomously are The
  Government in aggregate and use the principle of superposition on a
  regular basis.  It is also called The Rule of Law.  Please do next year
  what we tell you this year (to do next year) - eat, drink, be merry, land
  on Mars and oh, pay taxes.

 This can be perfectly explained with classical physics - no QM necessary.

  Naturally a Strategy Markup Language (StratML) needs the tools to see
  several steps ahead.
  This form generates a StratML Performance Plan template for various common
  time frames in use in the Public and Private Sectors.
  http://www.rustprivacy.org/2014/balance/gts/StratML-GTS.html

 The centrifugal force in this bend must be enormous.

 Regards,

 Michael Brunnbauer

 -- 
 ++  Michael Brunnbauer
 ++  netEstate GmbH
 ++  Geisenhausener Straße 11a
 ++  81379 München
 ++  Tel +49 89 32 19 77 80
 ++  Fax +49 89 32 19 77 89 
 ++  E-Mail bru...@netestate.de
 ++  http://www.netestate.de/
 ++
 ++  Sitz: München, HRB Nr.142452 (Handelsregister B München)
 ++  USt-IdNr. DE221033342
 ++  Geschäftsführer: Michael Brunnbauer, Franz Brunnbauer
 ++  Prokurist: Dipl. Kfm. (Univ.) Markus Hendel
 




Quantum Superposition and Democracy

2014-05-30 Thread Gannon Dick
Dutch researchers announced yesterday they had succeeded in reliably 
transmitting information between quantum bits separated by 3 meters.  This 
relies on Quantum Superposition, what Einstein called  “spooky action at a 
distance.” (he liked things more classical).

For the Web this is exciting news, geniuses (Einstein and Heisenberg matching 
wits for the future of the Universe), for governments, it is about 4 thousand 
years older, but still exciting technology.  

Truth is that separate agencies of government acting autonomously are The 
Government in aggregate and use the principle of superposition on a regular 
basis.  It is also called The Rule of Law.  Please do next year what we tell 
you this year (to do next year) - eat, drink, be merry, land on Mars and oh, 
pay taxes.

Naturally a Strategy Markup Language (StratML) needs the tools to see several 
steps ahead.

This form generates a StratML Performance Plan template for various common time 
frames in use in the Public and Private Sectors.

http://www.rustprivacy.org/2014/balance/gts/StratML-GTS.html

The connection to Government Business is also illustrated on this page.  There 
is a pitfall.  Human Resources - Civil Servants and Citizens - as well as 
natural disasters cannot wait until the next scheduled report.

Further general comments and methods about Seasonal Awareness are detailed 
here:
http://www.rustprivacy.org/2014/balance/gts/

Enjoy,
Gannon





Re: European court reaches verdict with profound impact in Internet

2014-05-17 Thread Gannon Dick


On Sat, 5/17/14, Michael Brunnbauer bru...@netestate.de wrote:

  But there is a way out of the verdict and it involves
 novel use of linked data and semantic web technologies.
 
 I very much doubt that triples can help here.

=

I agree, Michael, but would further like to point something out which is 
relevant to legal proceedings in the current age, and for evermore:

Sometime in the late 1800's the ability of a practitioner of Natural Science 
disciplines to read everything ever published in that discipline was exceeded - 
long before any of us were born.

When did this happen with the Law ?  Never, of course, the Law is bemoaned as 
behind the times.
When did this happen with Computer Science ?  Never, of course, all Computer 
Science is cutting edge.

Contempt of Microbiology is not an offense because Microbiology is not a 
legal argument.  On the other hand, reckless handling of a pathogen contrary to 
accepted standards is an offense. The scope of the Search Engine business is 
Computer Science.  To repeatedly make laughable legal arguments about the 
Search Engine Business in vacuo is contempt for the Court and contempt for us 
all.  

Cheers,
Gannon



Re: European court reaches verdict with profound impact in Internet

2014-05-16 Thread Gannon Dick
 I think it is high time that the creators, maintainers and
 developers of the platforms which collectively form the
 Internet sit down with search engine companies and work out
 some practical rules to make the option of the right to
 have some personal information forgotten, as stated in this
 European verdict, feasible.

I agree.  If I had my say, this chat would be decidedly less collegial 
than I think Milton has in mind though ... I would begin the conversation with: 
"Look you creepy jerks, if you keep hoovering up personal information then you 
will force Courts to force you to flatten the semantics out of the Semantic Web, 
rendering it inoperable.  Stop it.  Stop it right now."

Gannon



On Fri, 5/16/14, ProjectParadigm-ICT-Program metadataport...@yahoo.com wrote:

 Subject: European court reaches verdict with profound impact in Internet
 To: public-lod public-lod@w3.org, semantic-web semantic-...@w3.org
 Date: Friday, May 16, 2014, 3:09 PM
 
 The European Union Court has reached a verdict with a profound impact on the
 functioning of the Internet.
 See:
 http://curia.europa.eu/juris/document/document.jsf?docid=152065&mode=req&pageIndex=1&dir=&occ=first&part=1&text=&doclang=EN&cid=34297
 
 In essence when you Google your own
 name, the search results page is subject to European privacy
 laws which state that the individual whose name popped up
 has the right to correct or alter information appearing on
 the results page.
 
 Google by virtue of this verdict is now forced to create
 some mechanism to offer any European Union individual just
 that.
 
 Issues about verification of the individual requesting removal set aside, it
 also has profound implications for freedom of information issues.
 
 What about suspects in ongoing criminal or other court cases who would want
 to exercise their right to be presumed innocent until proven guilty - which
 would obviously benefit all criminals and corrupt individuals.
 
 I think it is high time that the creators, maintainers and
 developers of the platforms which collectively form the
 Internet sit down with search engine companies and work out
 some practical rules to make the option of the right to
 have some personal information forgotten, as stated in this
 European verdict, feasible.
 
 Otherwise an Orwellian future looms at the horizon where
 history is conveniently rewritten in cases where for freedom
 of information reasons this obviously should NOT.
 
 Milton Ponson
 GSM: +297 747 8280
 PO Box 1154, Oranjestad
 Aruba, Dutch Caribbean
 Project Paradigm: A
 structured approach to bringing the tools for sustainable
 development to all stakeholders worldwide by creating ICT tools for
 NGOs worldwide and: providing online access
 to web sites and repositories of data and information for
 sustainable development
 
 



Re: Lot's of eggs, but where is the chicken? Foundation of a Data ID Unit

2014-04-17 Thread Gannon Dick
Hi Sebastian,

If you want to bring metadata up to date ...

150 years ago or so, Gauss offered up an algorithm for calculating the date of 
Easter. The procedure is accurate to within 3 days. It can also be modified to 
compute the older Passover and lots of variants [1].  If you are neither 
Christian nor Jew, no problem, you probably know one.

Gauss (and his contemporary Thomson/Kelvin) may have been pious, but they were 
engineers and therefore sneaky bast'rds.  The precision is important as it 
enables you to link lunar month (tides) and the Spring Equinox without 
introducing those parameters into the harmonic calculation.  Gauss was 
interpolating the Spring Equinox as half the distance (backwards) from Next 
Summer to Last Winter. 

http://www.rustprivacy.org/2014/balance/Easter.ods
http://www.rustprivacy.org/2014/balance/Easter.xls

(use the ODS version if possible, there is no EASTERSUNDAY() function in EXCEL)

I am also a sneaky bas..., um, kindred spirit, and observe that if you compute 
the haversine(DOY)=(1-cos(DOY))/2=sin(DOY/2)^2 on the quarter, that is (1460+1 
(Leap) Day/16) then you get a Season phase (hour) angle ... 
Spring(.5),Summer(1),Fall(.5),Winter(0)  and furthermore, this works for any 
square codeset or set of acronyms.  The groups 
[Spring,Summer,Fall]=[Agricultural Year] and [Fall,Winter,Spring]=[Academic 
Year] are always present. There is no use concerning yourself with timestamps 
and fractional phase angles and the binomial distribution because, as Gauss 
knew, if you are correct within 3 days, Easter is the one that's a Sunday.

Happy Easter :-)
 
--Gannon

[1] http://www.staff.science.uu.nl/~gent0113/easter/eastercalculator.htm
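The Easter computation Gannon refers to fits in a dozen lines. A minimal sketch, assuming Python: this is the well-known anonymous Gregorian computus (the arithmetic refinement of Gauss's original method), not a transcription of the linked spreadsheet.

```python
def easter(year):
    """Gregorian Easter as (month, day), via the anonymous Gregorian
    computus descended from Gauss's algorithm."""
    a = year % 19                 # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)      # century and year-within-century
    d, e = divmod(b, 4)           # century leap-year corrections
    f = (b + 8) // 25
    g = (b - f + 1) // 3          # lunar orbit correction
    h = (19 * a + b - d - g + 15) % 30   # epact: days to the Paschal full moon
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7  # days to the following Sunday
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1
```

As Gannon notes, 3-day precision is all the interpolation needs: Easter is the candidate that falls on a Sunday.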


On Thu, 4/17/14, Sebastian Hellmann hellm...@informatik.uni-leipzig.de wrote:

 Subject: Lot's of eggs, but where is the chicken? Foundation of a Data ID  Unit
 To: public-lod public-lod@w3.org, public-gld-comme...@w3.org, 
datahub-disc...@lists.okfn.org datahub-disc...@lists.okfn.org
 Date: Thursday, April 17, 2014, 8:29 AM
 
 
   
 
 
   
   
 Dear all,

 I would like to wish you a Happy Easter. At the same time, I have an issue
 which concerns LOD and data on the web in general.

 As a community, we have all contributed to this image:
 http://lod-cloud.net/versions/2011-09-19/lod-cloud.html (which is now three
 years old)

 You can see a lot of eggs on it, but the meta data (the chicken) is:
 - inaccurate
 - out-dated
 - infeasible to maintain manually
 (this is my opinion. I find it hard to believe that we will start updating
 triple and link counts manually)

 Here one pertinent example:
 http://datahub.io/dataset/dbpedia
 - (this is still linking to DBpedia 3.5.1)

 Following the foundation of the DBpedia Association we would like to start
 to solve this problem with the help of a new group called DataIDUnit
 (http://wiki.dbpedia.org/coop/DataIDUnit) using rough consensus and working
 code as their codex: http://wiki.dbpedia.org/coop/

 The first goal will be to find some good existing vocabularies and then
 provide a working version for DBpedia.
 A student from Leipzig (Markus Freudenberg) will implement a push DataId to
 Datahub via its API feature.
 This will help us describe the chicken better, that laid all these eggs.

 Happy Easter, all feedback is welcome, we hope not to duplicate efforts.

 Sebastian

 -- 
 Sebastian Hellmann
 AKSW/NLP2RDF research group
 DBpedia Association
 Institute for Applied Informatics (InfAI)
 Events:
 * 21st March, 2014: LD4LT Kick-Off @European Data Forum
 * Sept. 1-5, 2014 Conference Week in Leipzig, including
 ** Sept 2nd, MLODE 2014
 ** Sept 3rd, 2nd DBpedia Community Meeting
 ** Sept 4th-5th, SEMANTiCS (formerly i-SEMANTICS)
 Come to Germany as a PhD: http://bis.informatik.uni-leipzig.de/csf
 Projects: http://dbpedia.org, http://nlp2rdf.org, http://linguistics.okfn.org,
 https://www.w3.org/community/ld4lt
 Homepage: http://aksw.org/SebastianHellmann
 Research Group: http://aksw.org
 Thesis:
 http://tinyurl.com/sh-thesis-summary
 http://tinyurl.com/sh-thesis




Re: Incorrect lang tags Re: Princeton WordNet RDF

2014-04-17 Thread Gannon Dick
Perhaps you want to take my suggestion for handling codes ...

http://lists.w3.org/Archives/Public/public-lod/2014Apr/0105.html

The codes are a shorthand for links and labels.  By using a lookup table with 
1461 (possibly duplicate entries) you can create a map (of synthetic bi-annual 
versions) which will keep the labels in sync with the permalinks, and update 
automatically.  The problem is that if you pare down the valid code list on a 
per application basis the abilities of the applications diverge.

For example, ET (upper case) is the Country Code for Ethiopia 
(http://id.loc.gov/vocabulary/countries/et) and et (lower case) is the 
ISO 639-1 code for Estonian (http://id.loc.gov/vocabulary/iso639-1/et).  There 
are no semantics involved, just a lot of confusion, when everybody knows only 
their favorite 10 codes of each.

There is an initial problem in adopting a single code set to begin with, but 
going forward, it would be worth the effort to fix inconsistencies.  RDF can be 
quite a mess without some forethought.

--Gannon
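The ET/et collision above can be made concrete with a case-sensitive lookup table. A minimal sketch, assuming Python: the two permalinks are the ones cited in the email, while the table shape and function name are illustrative.

```python
# Case-sensitive shorthand codes: "ET" and "et" are different keys that
# resolve to different labels and permalinks.
CODES = {
    ("country", "ET"): ("Ethiopia", "http://id.loc.gov/vocabulary/countries/et"),
    ("iso639-1", "et"): ("Estonian", "http://id.loc.gov/vocabulary/iso639-1/et"),
}

def resolve(code, fold_case=False):
    """Return every (codeset, label, uri) whose code matches; folding case
    merges the two distinct codes and manufactures ambiguity."""
    hits = []
    for (codeset, c), (label, uri) in CODES.items():
        match = c.lower() == code.lower() if fold_case else c == code
        if match:
            hits.append((codeset, label, uri))
    return hits
```

Case-exact lookup keeps the label in sync with the permalink; case-folding is exactly the kind of per-application paring-down that makes applications diverge.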

On Wed, 4/16/14, Bernard Vatant bernard.vat...@mondeca.com wrote:

 Subject: Incorrect lang tags Re: Princeton WordNet RDF
 To: John P. McCrae jmcc...@cit-ec.uni-bielefeld.de, Linking Open Data 
public-lod@w3.org
 Date: Wednesday, April 16, 2014, 4:56 PM
 
 John

 Looking at the data in more detail, it appears that the lang tags are
 systematically using ISO 639-2 codes (3-letter codes), even when the
 ISO 639-1 code exists and should be used, as per BCP 47.

 See e.g.,
 http://www.w3.org/RDF/Validator/rdfval?URI=http%3A%2F%2Fwordnet-rdf.princeton.edu%2Fwn31%2F109637345-n.rdf

 The W3C validator is right except when not up-to-date with the latest ISO
 639 values, as in:
 Error: {W116} ISO-639 does not define language: 'zsm'. [Line = 53, Column = 50]

 Nope, there is such a code in ISO 639-3 :)

 See http://www.lingvoj.org/languages/tag-zsm.html
 and source http://www-01.sil.org/iso639-3/documentation.asp?id=zsm

 Hope you can fix this easily!

 Bernard

 2014-04-16 15:30 GMT+02:00 John P. McCrae jmcc...@cit-ec.uni-bielefeld.de:

 Princeton University in collaboration with the Cognitive Interaction
 Technology Excellence Center of Bielefeld University are proud to announce
 the first RDF version of WordNet 3.1, now available at:

 http://wordnet-rdf.princeton.edu/

 This version, based on the current development of the WordNet project,
 intends to be a nucleus for the Linguistic Linked Open Data cloud and the
 global WordNet projects. The data are accessible in five formats (HTML+RDFa,
 RDF/XML, Turtle, N-Triples and JSON-LD) as well as by querying a SPARQL
 endpoint. The model is itself based on the lemon model and follows the
 guidelines of the W3C OntoLex Community Group.

 We have incorporated direct links to the previous W3C WordNets, UBY,
 Lexvo.org, VerbNet as well as translations collected by the Open
 Multilingual WordNet Project. Furthermore, we include links within the
 resource for previous versions of WordNets to further enable linking. We are
 interested in incorporating any resources that are linked to WordNet and
 would greatly appreciate suggestions.

 Regards,

 John P. McCrae, Christiane Fellbaum & Philipp Cimiano

 -- 
 Bernard Vatant
 Vocabularies & Data Engineering
 Tel: +33 (0)9 71 48 84 59
 Skype: bernard.vatant
 http://google.com/+BernardVatant
 Mondeca
 35 boulevard de Strasbourg 75010 Paris
 www.mondeca.com
 Follow us on Twitter: @mondecanews




Re: Inference for error checking [was Re: How to avoid that collections break relationships]

2014-04-07 Thread Gannon Dick
Hi Peter,

Data Sets all age at the same rate, (1460 Days + 1 Leap Day per 16 Calendar 
Quarters) or any scalar multiple of that single frequency.  The frequency is 
man-made.  Certainly error checking is good, but cross-domain data transfers 
are only a transportation service via a dumb pipe.  I am wary of added value 
in-transit claims.  They are a delusion that some may find in watches but are 
nowhere to be found in calendars.  

http://www.rustprivacy.org/2014/balance/CulturalHeritageVision.jpg

--Gannon

On Sun, 4/6/14, Peter F. Patel-Schneider pfpschnei...@gmail.com wrote:

 Subject: Re: Inference for error checking [was Re: How to avoid that 
collections  break relationships]
 To: David Booth da...@dbooth.org, Pat Hayes pha...@ihmc.us
 Cc: Markus Lanthaler markus.lantha...@gmx.net, public-hy...@w3.org, 
'public-lod@w3.org' (public-lod@w3.org) public-lod@w3.org, W3C Web Schemas 
Task Force public-voc...@w3.org, Dan Brickley dan...@danbri.org
 Date: Sunday, April 6, 2014, 8:07 PM
 
 Well, certainly, one could do this if one wanted to.  However, is this a
 useful thing to do, in general, particularly in the absence of constructs
 that actually sanction the inference, and particularly if the checking is
 done in a context where there is no way of actually getting the author to
 fix whatever problems are encountered?

 My feeling is that if you really want to do this, then the place to do it is
 during data entry or data importation.

 peter

 On 04/03/2014 03:12 PM, David Booth wrote:
  First of all, my sincere apologies to Pat, Peter and the rest of the
  readership for totally botching my last example, writing domain when
  I meant range *and* explaining it wrong.  Sorry for all the confusion it
  caused!

  I was simply trying to demonstrate how a schema:domainIncludes
  assertion could be useful for error checking even if it had no
  formal entailments, by making selective use of the CWA.  I'll
  try again.

  Suppose we are given these RDF statements, in which the author
  *may* have made a typo, writing ddd instead of ccc as the rdf:type
  of x:

    x ppp y .                        # Triple A
    x rdf:type ddd .                 # Triple B
    ppp schema:domainIncludes ccc .  # Triple C

  As given, these statements are consistent, so a reasoner
  will not detect a problem.  Indeed, they may or may
  not be what the author intended.  If the author later
  added the statement:

    ccc owl:equivalentClass ddd .    # Triple E

  then ddd probably was what the author intended
  in triple B.  OTOH if the author later added:

    ccc owl:disjointWith ddd .       # Triple F

  then ddd probably was not what the author intended
  in triple B.

  However, thus far we are only given triples {A,B,C}
  above, and an error checker wishes
  to check for *potential* typos by applying the rule:

    For all subgraphs of the form

      { x ppp y .
        ppp schema:domainIncludes ccc . }

    check whether

      { x rdf:type ccc . }

    is *provably* true.  If not, then fail the
    error check.  If all such subgraphs pass, then
    the error check as a whole passes.

  Under the OWA, the requirement:

      { x rdf:type ccc . }

  is neither provably true nor provably false given
  graph {A,B,C}.  But under the CWA it is
  considered false, because it is not provably true.

  This is how schema:domainIncludes can be
  useful for error checking even if it has no formal
  entailments: it tells the error checker which
  cases to check.

  I hope that now makes more sense.  Again, sorry to
  have screwed up my example so badly last time, and
  I hope I've got it right this time.  :)

  David

  On 04/02/2014 11:42 PM, Pat Hayes wrote:

   On Mar 31, 2014, at 10:31 AM, David Booth da...@dbooth.org wrote:

   On 03/30/2014 03:13 AM, Pat Hayes wrote:
   [...]
   What follows from knowing that

   ppp schema:domainIncludes ccc . ?

   Suppose you know this and you also know that

   x ppp y .

   Can you infer x rdf:type ccc? I presume not, since the domain might
   include other stuff outside ccc. So, what *can* be inferred about the
   relationship between x and ccc ? As far as I can see, nothing can be
   inferred. If I am wrong, please enlighten me. But if I am right, what
   possible utility is there in even making a schema:domainIncludes
   assertion?

   If inference is too strong, let me weaken my question: what
   possible utility **in any way whatsoever** is provided by knowing
   that schema:domainIncludes holds between ppp and ccc? What software
   can do what with this, that it could not do as well without this?

   I think I can answer this question quite easily, as I have seen it come
   up before in discussions of logic.

   ...

   Note that this categorization typically relies on making a closed world
   assumption (CWA), which is common for an application to make for 

Re: Semantic Web culture vs Startup culture

2014-03-31 Thread Gannon Dick
I agree, Kingsley.

Problems with SKOS (Lists) and RDF (Lists) are implementation problems, not 
processing problems.  It is very difficult to prevent people from perceiving a 
"first, rest, nil" sequence as a Monte Carlo integration of probability.  
From a young age we see that, if it is summer, winter is half a year forward or 
back and vice-versa.  What good is SKOS or RDF if the graphs do not provide a 
visualization of that seasonal straight-line depreciation accounting ?  

Dilemma answer: make up a virtuous bookkeeper's scale, divide it by 4 
(always possible), and call the divisions Quarterly Conference Calls and the 
last one an Annual Report. Profits ? Sorry, absolutely no telling when 
gravity=(1/1)=(2pi/2pi)=(360 Degrees/360 Degrees)=(Thing/sameAs), etc.). A 
bookkeeper is always virtuous, maybe because they are exactly congruent to 
virtue and maybe because they fear what a psychopathic authority might do to 
them if they fail to tell them the truth scaled to what they want to hear.  
That is not a probability either, it protects accomplices and keeps you and 
your friends safe. foaf:Person does not always make that my team-other team 
relation all present and accounted for.

http://www.rustprivacy.org/2014/balance/eCommerceVision.jpg
http://www.rustprivacy.org/2014/balance/CulturalHeritageVision.jpg

Superstitious, bigoted Scientists are virtuous bookkeepers who often have to 
decide if icebergs float because they are Witches or float because they are 
Queer.  You can't resolve that culture war by calling Alan Turing dirty names, 
and Implementers simply can not assume that an audience who knows what 
recursion is also knows what recursion does.  That is a semantic mistake.
--Gannon


On Sun, 3/30/14, Kingsley Idehen kide...@openlinksw.com wrote:

 Subject: Re: Semantic Web culture vs Startup culture
 To: public-lod@w3.org
 Date: Sunday, March 30, 2014, 1:00 PM
 
 On 3/29/14 1:41 PM, Luca Matteis wrote:
  Started a sort of Semantic Web vs Startup culture war on Hacker News:
  https://news.ycombinator.com/item?id=7491925
  
  Maybe you all can help me with some of the comments ;-)
  
 My comments, posted to the list:
 
 RDF is unpopular because it is generally misunderstood. This problem arises
 (primarily) from how RDF has been presented to the market in general.
 To understand RDF you first have to understand what Data actually is [1];
 once you cross that hurdle, two things [2][3] will become obvious:
 
 1. RDF is extremely useful in regards to all issues relating to Data
 2. RDF has been poorly promoted.
 
 Links:
 [1] http://slidesha.re/1epEyZ1 -- Understanding Data
 [2] http://bit.ly/1fluti1 -- What is RDF, Really?
 [3] http://bit.ly/1cqm7Hs -- RDF Relation (RDF should really stand for:
 Relations Description Framework).
 
 -- 
 Regards,
 
 Kingsley Idehen
 Founder & CEO
 OpenLink Software
 Company Web: http://www.openlinksw.com
 Personal Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter Profile: https://twitter.com/kidehen
 Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
 LinkedIn Profile: http://www.linkedin.com/in/kidehen
 
 
 
 
 




Re: Semantic Web culture vs Startup culture

2014-03-31 Thread Gannon Dick
Laff out loud, indeed.  All SPARQL (and the Open World Assumption) ever needed 
to play nice with the Central Limit Theorem is to stuff all the red herrings 
into Planck Scale boxes and pack them at the bottom of the rdf:List container.

I just invented a 4th Law of Thermodynamics - Conservation of Cat Videos.  I'm 
goin' to Hell for that.
Whatever.  It is impossible to re-invent a wheel which takes more than half the 
implementation time of the original invention ... If it does your watch is 
broken.

"You should call it entropy, for two reasons. In the first place your 
uncertainty function has been used in statistical mechanics under that name, so 
it already has a name. In the second place, and more important, no one really 
knows what entropy really is, so in a debate you will always have the 
advantage." -- John von Neumann to Claude Shannon

"With four parameters I can fit an elephant, and with five I can make him 
wiggle his trunk."
Also John von Neumann (apparently he was talking to himself. People were 
listening, but nobody was hearing.)

--Gannon

On Mon, 3/31/14, Paul Houle ontolo...@gmail.com wrote:

 Subject: Re: Semantic Web culture vs Startup culture
 To: Gannon Dick gannon_d...@yahoo.com
 Cc: Linked Data community public-lod@w3.org, Kingsley Idehen 
kide...@openlinksw.com
 Date: Monday, March 31, 2014, 12:48 PM
 
   What makes me laff is that the same people who think RDF sucks think Neo4J
  is the bee's knees.  (Even if they've never quite shipped an actual product
  with it, or if they did a demo it performs worse than the same demo I did
  with MySQL in 2002.)
  
  Somehow, SPARQL has never been seen as a NoSQL and I don't know why.
 
 
 On Mon, Mar 31, 2014 at 1:07 PM, Gannon Dick gannon_d...@yahoo.com
 wrote:
  I agree, Kingsley.
 
  snip/

Re: How to avoid that collections break relationships

2014-03-29 Thread Gannon Dick
Hi Thad and Niklas,

Apologies to all for the quantum entanglements, um, cross posting.

One schema.org is enough, sorry Niklas.  My point was that eCommerce and Web 
Scale are related in linear fashion, and an Ontology needs to reflect strict, 
non-proprietary procedure for revisionist history making.  The Accountants are 
counting cumulative petty cash as owl:Things and suggesting that foaf:Person 
follows the same geometric ratios as any other commercial product.

The problem is catastrophe accounting.  Everyone knows a successful product 
launch is due to an equilibrium of brilliant outside Advertising and the CEO's 
equally brilliant choice of donuts at early planning meetings ...  W3C 
Standards ? Pffft...

http://www.rustprivacy.org/2014/balance/LinkedOpenData.jpg

I can't afford a Smart Phone, so I ordered up a 5.1 Magnitude Earthquake in 
Southern California to illustrate my point (Sorry about the broken windows, 
BTW, glad no foaf:Person were hurt). In a year from now, the Accounting 
Department is going to say this cost a lot of money ... some multiple of 
$36,500.25 to be exact (Looking for change for a quarter).  This is 113.10% due 
to an outside Advertising Agency, so I'm going to fire 100% of the in-house 
people and sue for the other (n x $13.10).  Only 24.6% of the Marketing 
Department appears worthless.  I can fire them, but I can't shoot them.  That 
would be wrong.  But, it makes perfect sense in Social Networking terms if King 
Henry II had scrawled Will no one rid me of this turbulent priest? on Thomas 
à Becket's FaceBook Page.

When geologists say "5.1 Magnitude" they are saying that the Earthquake cost 
in energy equals 100% x [log(5.1 Magnitude, base 2) + log(accounting cost 1 year 
later, base 2)].  Note that 100% is also -1 x log(2/4 (years / reports per year), 
base 2).  You get what you pay for.
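
The 100% figure can be spot-checked with two lines of Python (just the arithmetic from the sentence above, nothing more):

```python
import math

# -1 * log(2/4, base 2) is exactly 1, i.e. 100%.
full_share = -1 * math.log2(2 / 4)
print(full_share)  # → 1.0
```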

The partial fractions are fixed:
41.16% The Advertiser's fault
27.44% The Salesman's fault
31.41% Unidentified somebody else
==
100.00% Somebody else's fault or Somebody else's Ontology.

That's why I say that one schema.org is enough Ontology to handle 50% of the 
blame for catastrophe, but if you want to handle 100% of the probability you 
have to compute the dimensions Web Scale differently.

--Gannon
  

On Fri, 3/28/14, Thad Guidry thadgui...@gmail.com wrote:

 Subject: Re: How to avoid that collections break relationships
 To: Gannon Dick gannon_d...@yahoo.com
 Cc: Niklas Lindström lindstr...@gmail.com, public-hy...@w3.org 
public-hy...@w3.org, public-lod@w3.org public-lod@w3.org, W3C Web 
Schemas Task Force public-voc...@w3.org
 Date: Friday, March 28, 2014, 6:45 PM
 
  It's too bad Niklas' opinions are in the minority... he could have his OWN
  Schema.org with thoughts like that. :)
 
 On Fri, Mar 28, 2014
 at 3:17 PM, Gannon Dick gannon_d...@yahoo.com
 wrote:
 
  snip/

Re: Semantic Web culture vs Startup culture

2014-03-29 Thread Gannon Dick
I just had to pull an Earthquake out of my own pocket to make a point about Web 
Scale.

Culture Wars are much cheaper.  Wish I would have thought of that.

--Gannon

On Sat, 3/29/14, Luca Matteis lmatt...@gmail.com wrote:

 Subject: Semantic Web culture vs Startup culture
 To: Linked Data community public-lod@w3.org
 Date: Saturday, March 29, 2014, 12:41 PM
 
 Started a sort of Semantic Web vs
 Startup culture war on Hacker News:
 https://news.ycombinator.com/item?id=7491925
 
 Maybe you all can help me with some of the comments ;-)
 
 



Re: How to avoid that collections break relationships

2014-03-28 Thread Gannon Dick
My 2 cents:

When schema.org was new, I mentioned to Dan B. that conflicting viewpoints 
between the sponsors might be a problem.  He agreed that I confused him 
entirely (I get that a lot).  From a business perspective, a Library is a 
rooftop of a building which collects fines for over due books.  From a semantic 
perspective a Library is a storehouse of Knowledge (in book title Collections) 
some of which may never be used.

The difference is that a shopkeeping business must be able to make change when 
they open for business in the morning.  The contents of the cash drawer at noon 
are a computed zero.  With this in mind, 
1) the absolute Right of an Advertiser to do business 24 x 7, Equals
2) the absolute Right of an eCommerce to do business 24 x 7, Equals
3) the absolute Right of a Merchant to keep change overnight, and compute the 
morning's (ante meridian) profits at noon if they wish to know the number.

Reporting bogus profit numbers in the Annual Report is absolutely not right in 
all three cases. 

The speed of the advertising business is constant with respect to the frequency 
of reporting.
The speed of the eCommerce business is constant with respect to the frequency 
of reporting.
The speed of shopkeeping is constant with respect to the frequency of reporting.
The speed of light is constant, but Physicists don't talk about that as having 
been just discovered this morning when the shop doors opened.  Instead they 
note that a Nurse has an absolute Right to wake you up to take a pill but 
only while you are in Hospital. An Advertiser or an eCommerce Merchant has no 
right to play Nurse or play alarm clock. 

An Ontology (and the Gold Standard for Linked Data) is a balanced (average) 
view of the profits reported in the Annual Report: numerically, about 13% 
below the Advertiser or eCommerce Merchant perspective and 25% above the 
Shopkeeper perspective.  The numbers come from the tangent half-angle 
substitution across the computed zero ((Time=Now)=petty cash).  When you query 
the Web you query 100% of an average of that, which does not mean 50% of the 
products sold or 50% of the products advertised.

http://www.rustprivacy.org/2014/balance/LinkedOpenData.jpg

You just have to be careful about the logic labels:  Advertising is useful.  
Data Products are useful. Spam is infinitely useless.  Your friends are useful 
therefore they are not Spam and not (only) Data Products.

--Gannon

3:24 AM, Niklas Lindström lindstr...@gmail.com wrote:
snip/
  Alternative suggestion from the spectator seat:
 
 In real-world situations with organizations coming from a different
 perspective, theory often clashes with practicality.  The lesser of two
 evils dictum would be to first make sure that something gets published,
 even if it is semantically fuzzy.  But, that's a perma-thread for a
 different day :)
 
 Cheers,
 Niklas
 
snip / 




Re: Vocabulary for encoding licensing/qualifications

2014-01-23 Thread Gannon Dick
Hi Dorian,

https://dvcs.w3.org/hg/gld/raw-file/default/adms/index.html

--Gannon

On Thu, 1/23/14, dorian taylor dorian.taylor.li...@gmail.com wrote:

 Subject: Vocabulary for encoding licensing/qualifications
 To: public-lod@w3.org
 Date: Thursday, January 23, 2014, 3:31 PM
 
 Hello,
 
 I'm having trouble finding an existing vocabulary for
 abstract
 certifications of (people, organizations).  I can find
 vocabs for
 cryptographic certificates, and some for academic degrees,
 but I'm
 looking to be able to represent things like business
 licenses and/or
 trade certifications (or even say a driver's license or
 fishing
 license).
 
  This is more or less the kind of thing I want to encode:
  
  c a li:Certification ;
      li:jurisdiction j ;
      li:qualification q ;
      li:issuer i ;
      li:subject s ;
      li:proof p ;
      dct:identifier "ABC12345" ;
      dct:valid v .
  
  j a geo:SpatialThing .  # (e.g.) the jurisdiction, either physical or virtual
  q a skos:Concept .      # whatever it is being licensed
  i a foaf:Agent .        # the issuing body
  s a foaf:Agent .        # the entity being licensed
  p a foaf:Document .     # some evidence (e.g. scan of a license or issuer confirmation)
  v a time:Interval .     # if/when the certification expires
 
 Any insights?
 
 Thanks,
 
 -- 
 Dorian Taylor
 http://doriantaylor.com/
 




Re: Vocabulary for encoding licensing/qualifications

2014-01-23 Thread Gannon Dick
That was the W3C version, this is the description from the EU Commission.
https://joinup.ec.europa.eu/asset/adms/description

--Gannon

On Thu, 1/23/14, dorian taylor dorian.taylor.li...@gmail.com wrote:

 Subject: Re: Vocabulary for encoding licensing/qualifications
 To: Gannon Dick gannon_d...@yahoo.com
 Cc: public-lod@w3.org
 Date: Thursday, January 23, 2014, 4:25 PM
 
 Thanks Gannon,
 
  Hm, I'm skimming over the document but I'm having trouble understanding.
  
  How would I use ADMS to express, say, that I have been granted the
  passport XY123456, by the Canadian government, which expires on 2015-10-16?
 
 
 
 On
 Thu, Jan 23, 2014 at 2:00 PM, Gannon Dick gannon_d...@yahoo.com
 wrote:
  Hi Dorian,
 
  https://dvcs.w3.org/hg/gld/raw-file/default/adms/index.html
 
  --Gannon
 
 -- 
 Dorian
 Taylor
 http://doriantaylor.com/
 
 



Re: [Final CfP]: Uncertainty and Imprecision on the Web of Data @ IPMU 2014

2013-12-20 Thread Gannon Dick
Hi Konstantin,

As an irony fan, I note that the conference on Uncertainty and Imprecision 
begins the day after Bastille Day.  Interesting comment on the revolutionary 
character of the Web of Data :-)

Uncertainty and imprecision can not be solved by disapproval.  If you took a 
stack of 10 Trillion Dollar Bills (10^14) (representing Water Molecules) then a 
Chemist would tell you that one bill in the pile has no picture of George 
Washington and one bill has two pictures of George Washington.  Quantum 
Mechanics is weird and it makes Economists' heads blow up.  Anyway ...  

I doubt I would be able to attend but can offer some organized, if not exactly 
structured (the American Public Domain is not structured, just free) data for 
visualizations.  

http://lists.w3.org/Archives/Public/public-egovernance/2013Dec/.html
(the problem)

http://www.rustprivacy.org/2013/education/fednet.html
(the why's of the broad strokes model)

http://www.rustprivacy.org/2013/education/
(applicability to local Education issues)

--Gannon

On Fri, 12/20/13, Konstantin Todorov konstantin@gmail.com wrote:

 Subject: [Final CfP]: Uncertainty and Imprecision on the Web of Data @ IPMU 
2014
 To: public-lod@w3.org
 Date: Friday, December 20, 2013, 7:00 AM
 
  Apologies for cross-postings
  
  --- CALL FOR PAPERS ---
  
  Uncertainty and Imprecision on the Web of Data
  July 15-19, 2014, Montpellier, France
  
  Special Session at the 15th International Conference on Information
  Processing and Management of Uncertainty in Knowledge-Based Systems
  http://www.ipmu2014.univ-montp2.fr
  Submission deadline: December 31, 2013
  
  *** Short description ***
 
 
  Phenomena related to uncertainty and imprecision are common on the Web
  of Data. On the one hand, data published as linked open data are often
  incomplete and of variable quality; we are frequently faced with missing,
  imperfect, vague and imprecise data in many real-world applications. On the
  other hand, these data's meta-models are often of an inherently uncertain
  or imprecise nature, and dealing with these resources requires a suitable
  framework. Although active research addressing these issues has been
  conducted recently, handling uncertainty and imprecision is still an
  open problem in the context of the Web of Data.
  
  The goal of this special session is to bring together researchers working
  in the field of imprecise/uncertain knowledge and data management and
  interested in linked open data technologies. The session will address
  problems related to handling imprecision and/or uncertainty of data and
  ontologies in the processes of publishing, interconnecting and querying
  data by following the Linked Data principles. Two major (partly
  intersecting) communities are targeted: (1) the community dealing with
  reasoning under uncertainty and (2) the community focusing on knowledge
  discovery, data mining, data integration and information retrieval when
  data are fuzzy, imperfect or imprecise.
 
 
  The topics of interest can be articulated along the following axes (the
  list being non-exhaustive):
  
  * Fuzzy ontological languages
  * Linking of imperfect/imprecise/vague data
  * Representation of uncertain links
  * Fuzzy/probabilistic/approximate ontology matching
  * Reasoning techniques under uncertainty and fuzziness
  * Imprecision and uncertainty in specific domains, e.g.:
      - the biological and bio-medical domains
      - the geo-spatial domain
      - trust, provenance and security
      - multimedia
      - multilingualism
  * Quality of open data
  * Fuzzy data mining and knowledge extraction
  * Querying warehouses opened on the web: imprecise queries and
    approximate answers
 
 
  *** Submissions ***
  Contributions to the special session can be made in terms of papers which
  will undergo the standard reviewing process of the IPMU 2014 conference.
  Complete information regarding the submission process can be found at the
  conference website: http://www.ipmu2014.univ-montp2.fr, more precisely in
  the section Program - Special Sessions. In the submission process, note
  that the name of the special session will appear (and has to be selected)
  in the list of conference tracks on the EasyChair site. The accepted
  papers will be published in the proceedings of IPMU 2014.
 
 
 
  *** Organizers ***
  
  Zohra Bellahsene
  Anne Laurent
  François Scharffe
  Konstantin Todorov (Main contact)
  LIRMM / University of Montpellier 2
  contact: {firstname.lastname}@lirmm.fr
 
 
 
 
  *** Program Committee ***
  
  Jamal Atif / Université Paris 11, France
  Zohra Bellahsene / University of Montpellier 2, France
  Isabelle Bloch / Télécom ParisTech – LTCI, France
  Fernando Bobillo / University of Zaragoza, Spain
  Silvia Calegari / University of Milano, Italy
  Nicola Fanizzi / University of Bari, Italy
  Peter Geibel / Charité 

Re: representing hypermedia controls in RDF

2013-11-25 Thread Gannon Dick
Hi Ruben,

I haven't been able to pull up your distributed affordance presentation but 
had a general question:

Are you thinking in terms of IPv4 or IPv6 ?
I think it makes a difference, since the Loop Back address space in IPv4 
(16,777,214 addresses) puts ca. 425 people (of about 7,126,653,500 people, 
today, US Census Estimate) on one anonymous node.  IPv6 is much different, but 
it seems to me that 425 people sharing a party line is a much different kind of 
communication than a private one.  Is privacy dead because we ran out of 
virtual mail boxes ? Yikes.
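
The 425 figure is reproducible arithmetic (assuming the 127.0.0.0/8 loopback block and the population estimate cited in the mail):

```python
# The 127.0.0.0/8 loopback block holds 2**24 - 2 usable addresses.
loopback = 2**24 - 2            # 16,777,214
population = 7_126_653_500      # US Census world estimate cited above
print(round(population / loopback))  # → 425
```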

--Gannon

On Mon, 11/25/13, Ruben Verborgh ruben.verbo...@ugent.be wrote:

 Subject: Re: representing hypermedia controls in RDF
 To: Kingsley Idehen kide...@openlinksw.com
 Cc: public-lod Data public-lod@w3.org
 Date: Monday, November 25, 2013, 1:33 PM
 
 Hi Kingsley,
 
  In my talks, I say that enabling is stronger than
 affording.
 
  Do you have a link to the talk in question?
 
 Well, it's something I always mention verbally, so
 enabling will not be on the slides.
 
 Nevertheless, here's a presentation on it for a wide
 audience:
 http://www.slideshare.net/RubenVerborgh/the-web-a-hypermedia-story
 On slides 41–46, I explain Fielding's definition of
 hypermedia,
 with slides 44–46 specifically focusing on affordance.
 
 And here are slides for my research project Distributed
 Affordance (what's in a name),
 which explains the topic in a similar way on slides 7–18:
 http://www.slideshare.net/RubenVerborgh/distributed-affordance-21175728
 
 Affordance is in my opinion the crucial word that defines
 the REST architectural style,
 as its loose conversational coupling is only possible
 because representations _afford_ subsequent actions;
 RPC-style interactions just _enable_ those actions.
 
 Best,
 
 Ruben




Re: representing hypermedia controls in RDF

2013-11-25 Thread Gannon Dick
I did see the presentation.  Apparently Dr. Hausenblas may have warned you 
about me too late :-)

My question can be rephrased thus: Does the theoretical size of the target 
audience for a distributed affordance matter ?  In IPv4, the target audience 
would be a community with a population of about 425.  In IPv6, probably much 
less than one (although I've not calculated it).  It seems to me that 
"distributed" entails that one size does not fit all clients, and that 
constrains the Open World ... so, it would be very hard to tell if the concept 
and architecture are in operation.  Maybe TBL needs a 6th Star.

On Mon, 11/25/13, Ruben Verborgh ruben.verbo...@ugent.be wrote:

 Subject: Re: representing hypermedia controls in RDF
 To: Gannon Dick gannon_d...@yahoo.com
 Cc: public-lod Data public-lod@w3.org
 Date: Monday, November 25, 2013, 4:05 PM
 
 Hi Gannon,
 
  Are you thinking in terms of IPv4 or IPv6 ?
 
  I'm sorry but I lost you here… how does IPv4/6 relate to this?
 
 Best,
 
 Ruben




Re: Dumb SPARQL query problem

2013-11-23 Thread Gannon Dick
Not sure if this helps multilingual pigs as much as it should, but I'm not much 
good before coffee and expect there are many fellow mammals who share my plight 
...

Language classification code reduction (in old fashioned SQL)
http://www.rustprivacy.org/faca/languages.php



On Sat, 11/23/13, Hugh Glaser h...@glasers.org wrote:

 Subject: Re: Dumb SPARQL query problem
 To: Richard Light rich...@light.demon.co.uk
 Cc: public-lod community public-lod@w3.org
 Date: Saturday, November 23, 2013, 9:17 AM
 
 Pleasure.
 Actually, I found this:
 http://answers.semanticweb.com/questions/3530/sparql-query-filtering-by-string
 
 I said it is a pig’s breakfast because you never know what
 the RDF publisher has decided to do, and need to try
 everything.
 So to match strings efficiently you need to do (at least)
 four queries:
 “cat”
 “cat”@en
 “cat”^^xsd:string
 “cat”@en^^xsd:string or “cat”^^xsd:string@en - I
 can’t remember which is right, but I think it’s only one
 of them :-)
 
  Of course if you are matching in SPARQL you can use “… ?o . FILTER (str(?o)
  = “cat”)…”, but that is likely to be much slower.
 
 This means that you may need to do a lot of queries.
 I built something to look for matching strings (of course! -
 finding sameAs candidates) where the RDF had been gathered
 from different sources.
 Something like
 SELECT ?a ?b WHERE { ?a ?p1 ?s . ?b ?p2 ?s }
 would have been nice.
 I’ll leave it as an exercise to the reader to work out how
 many queries it takes to genuinely achieve the desired
 effect without using FILTER and str.
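
As a toy illustration of what str() does to the four literal spellings listed above, here is a Python sketch; the regex is an assumption for illustration only, not a full Turtle literal parser:

```python
import re


def lexical_form(literal):
    """Strip a language tag or datatype suffix, keeping the quoted lexical value,
    mirroring what SPARQL's str() returns for a literal."""
    m = re.match(r'^"(.*)"(?:@[A-Za-z-]+|\^\^\S+)?$', literal)
    return m.group(1) if m else literal


variants = ['"cat"', '"cat"@en', '"cat"^^xsd:string']
print({lexical_form(v) for v in variants})  # → {'cat'}
```

All the variant spellings collapse to one lexical form, which is exactly why the FILTER(str(?o) = …) workaround matches where the plain-literal triple pattern does not.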
 
 Unfortunately it seems that recent developments have not
 been much help here, but I may be wrong:
 http://www.w3.org/TR/sparql11-query/#matchingRDFLiterals
 
 I guess that the truth is that other people don’t actually
 build systems that follow your nose to arbitrary Linked Data
 resources, so they don’t worry about it?
 Or am I missing something obvious, and people actually have
 a good way around this?
 
 To me the problem all comes because knowledge is being
 represented outside the triple model.
 And also because of the XML legacy of RDF, even though
 everyone keeps saying that is only a serialisation of an
 abstract model.
 Ah well, back in my box.
 
 Cheers.
 
 On 23 Nov 2013, at 11:00, Richard Light rich...@light.demon.co.uk
 wrote:
 
  
   On 23/11/2013 10:30, Hugh Glaser wrote:
   It’s the other bit of the pig’s breakfast.
   Try an @en
   
   Magic!  Thanks.
  
  Richard
  On 23 Nov 2013, at 10:18, Richard Light rich...@light.demon.co.uk
   wrote:
  
  
  Hi,
  
   Sorry to bother the list, but I'm stumped by what should be a simple
  SPARQL query.  When applied to the dbpedia end-point [1], this search:
   
   PREFIX foaf: <http://xmlns.com/foaf/0.1/>
   PREFIX dbpedia-owl: <http://dbpedia.org/ontology/>
   
   SELECT *
   WHERE {
       ?pers a foaf:Person .
       ?pers foaf:surname "Malik" .
       OPTIONAL { ?pers dbpedia-owl:birthDate ?dob }
       OPTIONAL { ?pers dbpedia-owl:deathDate ?dod }
       OPTIONAL { ?pers dbpedia-owl:placeOfBirth ?pob }
       OPTIONAL { ?pers dbpedia-owl:placeOfDeath ?pod }
   }
   LIMIT 100
   
   yields no results. Yet if you drop the '?pers foaf:surname "Malik" .'
  clause, you get a result set which includes a Malik with the desired
  surname property.  I'm clearly being dumb, but in what way? :-) 
   
   (I've tried adding ^^xsd:string to the literal, but no joy.)
  
  Thanks,
  
  Richard
  [1] 
  http://dbpedia.org/sparql
  
  -- 
  Richard Light
  
  
  -- 
  Richard Light
 
 -- 
 Hugh Glaser
    20 Portchester Rise
    Eastleigh
    SO50 4QS
 Mobile: +44 75 9533 4155, Home: +44 23 8061 5652
 
 
 




Making data collection less rude

2013-10-29 Thread Gannon Dick
The granularity of language encoding can be used to make data collection less 
rude. Depending on what data is being collected, and how it is being collected, 
systems of language codes can easily overreach their usefulness.  The primary 
function of Open Government websites is to attract data tourists.

For example, some US Government Agency (I forget the exact name) involved with 
geo-location also uses the ISO 639 three character codes (347 of them).  This 
is public information.  Open Government websites write in the ISO 639 two 
character codes (151 of them).  A sample of 150+ Government websites showed 
only about 68 of these two letter codes actually in use.  The codes can be 
reduced with an SQL table. There is very little need to attempt a SPARQL 
solution.
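
The reduction really is just a lookup table. A Python sketch (the four sample pairs are real ISO 639-2/639-1 correspondences; a production table would carry the full set, as in the downloads linked below):

```python
# Illustrative subset of the ISO 639-2 (three-letter) to ISO 639-1
# (two-letter) code table; only four sample rows are shown here.
ISO639_2_TO_1 = {"eng": "en", "fra": "fr", "deu": "de", "spa": "es"}


def reduce_code(code):
    """Reduce a language code to its two-letter form where one exists."""
    code = code.lower()
    if len(code) == 2:
        return code                  # already a two-letter code
    return ISO639_2_TO_1.get(code)   # None when no two-letter form exists


print(reduce_code("ENG"))  # → en
```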

http://www.rustprivacy.org/faca/languages.php

Drop Down Lists (for example) and SQL,XML,CSV versions of the table are 
available for download.

--Gannon 





Linked Data and Lists

2013-10-05 Thread Gannon Dick


This is a list of Top Level Domains on the Net.  This is not necessarily a list 
of nice places to live and work.  It is a complete list of places for data 
tourists to visit.

http://www.rustprivacy.org/faca/simTLD/
see also : http://www.rustprivacy.org/faca/simTLD/silk_road.php
see also : http://tinyurl.com/export-cert




"A person who is nice to you, but rude to the waiter, is not a nice 
person." -- Dave Barry

There are no nice Organizations. They all want tourists, data tourists 
included.  How they treat their employees when the customers aren't looking, is 
a different issue.  The list  is completely agnostic in that regard.  Code 
space intersections are agnostic by default: see 
http://www.rustprivacy.org/faca/simTLD/silk_road/silk_road_exp.php

That is my story, and I am sticking to it, in case anyone asks (Latitude: 38.89 
Longitude: -77.04 Timezone: UTC-05).

--Gannon

Re: Releasing RWW.IO

2013-09-22 Thread Gannon Dick
FWIW, this model may be of some help.


http://tinyurl.com/export-cert


(direct) http://www.rustprivacy.org/faca/simTLD/silk_road.php

Rather than a Central Mint, it has up to 676 Mints roughly corresponding to 
ccTLDs.  This can be used to prevent currency manipulation by third party tax 
authorities (really surtax authorities*) while data is in transit between Top 
Level Domains.

Indexes of the inter-tubes are a different architecture; that system includes 
both the mints and the localized subsystem of value (hard currency, including 
!(currency), plus locally used currency).

http://www.rustprivacy.org/faca/simTLD/

--Gannon

* When, for example, a Social Networking site collects marketing information on 
any user, they are imposing a surtax on local commerce.





 From: Melvin Carvalho melvincarva...@gmail.com
To: business-of-linked-data-b...@googlegroups.com 
Cc: public-webid public-we...@w3.org; public-...@w3.org 
public-...@w3.org; public-lod@w3.org public-lod@w3.org; 
federated-social-web federated-social-...@googlegroups.com 
Sent: Sunday, September 22, 2013 9:50 AM
Subject: Re: Releasing RWW.IO
 







On 20 September 2013 17:08, Kingsley Idehen kide...@openlinksw.com wrote:

On 9/20/13 4:43 AM, Luca Matteis wrote:

I was just wondering how RWW.io could be making some cash with the service 
it's offering.


Melvin,

Luca is trying to brainstorm a business model for RWW.io and similar services. 
Basically, a business model for Personal Data Spaces that implement RWW 
functionality using AWWW and related open standards. etc..

I think that's achievable without requiring each data space platform to dabble 
in the more complex realm of digital currency  :-)

Digital currency is just a ledger on a computer.  The ONLY way to have a 
business model is to use digital currency.  The double spend problem is 
traditionally solved by having a central mint.  In other words, you have a 
record keeping system on your website of everyone's account balances and 
quotas.  This is not complexity, there's no other way to do it.

I'm just saying that using linked data and URIs it becomes portable and, if you 
want, you can make it distributed so that you increase the value, and 
potentially generate more revenue...
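To make the central-mint idea above concrete, here is a minimal sketch (my own illustration, not code from RWW.io; all names are invented): a single authoritative ledger of balances, where rejecting overdrafts is exactly what solves the double-spend problem.

```python
# Illustrative sketch of a "central mint": one authoritative record of
# everyone's account balances, so the same unit can never be spent twice.
# All names are invented for this example.

class CentralMintLedger:
    def __init__(self):
        self.balances = {}  # account id -> integer balance

    def issue(self, account, amount):
        """The mint credits newly issued units to an account."""
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.balances[account] = self.balances.get(account, 0) + amount

    def transfer(self, sender, receiver, amount):
        """Move units between accounts; because the ledger is the single
        record of balances, rejecting an overdraft prevents a double spend."""
        if amount <= 0:
            raise ValueError("amount must be positive")
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance: double spend rejected")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

ledger = CentralMintLedger()
ledger.issue("alice", 10)
ledger.transfer("alice", "bob", 7)
```

A distributed design replaces the single ledger with a consensus mechanism, but the bookkeeping it must agree on is the same.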

 


-- 

Regards,

Kingsley Idehen 
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen







Re: Maphub -- RWW meets maps

2013-09-19 Thread Gannon Dick
FWIW, the University of Oxford has an 800th Birthday coming up soon.

http://www.rustprivacy.org/2012/roadmap/oxford-university-area-map.pdf

The geo coordinates, founding dates etc. for the Colleges and Halls are
available on the University site.  The lo-res sunrise and sunset data is
available in spreadsheets at 
http://www.esrl.noaa.gov/gmd/grad/solcalc/calcdetails.html

My offering has some *cough* complete lack of artistic promise and bandwidth 
crushing size *cough* limitations, but I had fun :-)  It would be nice to see 
this *cough* done well *cough* duplicated.

--Gannon







 From: Andy Turner a.g.d.tur...@leeds.ac.uk
To: 'Kingsley Idehen' kide...@openlinksw.com; public-...@w3.org 
public-...@w3.org; public-lod@w3.org public-lod@w3.org 
Cc: chippy2...@gmail.com chippy2...@gmail.com; 
suchith.an...@nottingham.ac.uk suchith.an...@nottingham.ac.uk 
Sent: Thursday, September 19, 2013 3:36 AM
Subject: RE: Maphub -- RWW meets maps
 

Interesting work. It's a way to go for linking OpenStreetMap data and Wikimapia 
data with Wikipedia and each other etc.. I don't know the state of play with 
how OpenStreetMap or Wikimapia are currently doing this, but I like to think 
that someone at the recent Maptember events in Nottingham, UK hopefully does 
and might provide some feedback...

Thanks,

Andy
http://www.geog.leeds.ac.uk/people/a.turner/


-Original Message-
From: Kingsley Idehen [mailto:kide...@openlinksw.com] 
Sent: 18 September 2013 19:44
To: public-...@w3.org; public-lod@w3.org
Subject: Re: Maphub -- RWW meets maps

On 9/18/13 1:40 PM, Melvin Carvalho wrote:
 A fantastic open source project maphub which uses linked data to read 
 and write to current and historical maps, using RDF and the open 
 annotations vocab.  There's even links to DBPedia!

 http://maphub.github.io/

 A great example of how to use the Read Write Web.  The video is well 
 worth watching!

Also publishes annotations in Linked Data form [1] :-)

[1] http://maphub.herokuapp.com/control_points/4 .

-- 

Regards,

Kingsley Idehen    
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen

Re: Maphub -- RWW meets maps

2013-09-19 Thread Gannon Dick
Thanks Andy, yes, that is where I obtained the geo data.

I don't want to be an alarmist, but with the rapid rate of eastern expansion 
since 1800 we could very well see intelligent life forms in London before the 
dawn of the 31st Century.  If Washington faced a similar threat who knows how 
we Americans would react, but I am sure the Home Office is aware of the danger  
;-)

--Gannon


 From: Andy Turner a.g.d.tur...@leeds.ac.uk
To: 'Gannon Dick' gannon_d...@yahoo.com; 'Kingsley Idehen' 
kide...@openlinksw.com; public-...@w3.org public-...@w3.org; 
public-lod@w3.org public-lod@w3.org 
Cc: chippy2...@gmail.com chippy2...@gmail.com; 
suchith.an...@nottingham.ac.uk suchith.an...@nottingham.ac.uk 
Sent: Thursday, September 19, 2013 9:03 AM
Subject: RE: Maphub -- RWW meets maps
 


http://www.oucs.ox.ac.uk/oxpoints/
 
Andy
http://www.geog.leeds.ac.uk/people/a.turner/
 
From:Gannon Dick [mailto:gannon_d...@yahoo.com] 
Sent: 19 September 2013 13:55
To: Andy Turner; 'Kingsley Idehen'; public-...@w3.org; public-lod@w3.org
Cc: chippy2...@gmail.com; suchith.an...@nottingham.ac.uk
Subject: Re: Maphub -- RWW meets maps
 
FWIW, the University of Oxford has an 800th Birthday coming up soon.

http://www.rustprivacy.org/2012/roadmap/oxford-university-area-map.pdf

The geo coordinates, founding dates etc. for the Colleges and Halls are
available on the University site.  The lo-res sunrise and sunset data is
available in spreadsheets at 
http://www.esrl.noaa.gov/gmd/grad/solcalc/calcdetails.html

My offering has some *cough* complete lack of artistic promise and bandwidth 
crushing size *cough* limitations, but I had fun :-)  It would be nice to see 
this *cough* done well *cough* duplicated.

--Gannon


 
 



From:Andy Turner a.g.d.tur...@leeds.ac.uk
To: 'Kingsley Idehen' kide...@openlinksw.com; public-...@w3.org 
public-...@w3.org; public-lod@w3.org public-lod@w3.org 
Cc: chippy2...@gmail.com chippy2...@gmail.com; 
suchith.an...@nottingham.ac.uk suchith.an...@nottingham.ac.uk 
Sent: Thursday, September 19, 2013 3:36 AM
Subject: RE: Maphub -- RWW meets maps

Interesting work. It's a way to go for linking OpenStreetMap data and Wikimapia 
data with Wikipedia and each other etc.. I don't know the state of play with 
how OpenStreetMap or Wikimapia are currently doing this, but I like to think 
that someone at the recent Maptember events in Nottingham, UK hopefully does 
and might provide some feedback...

Thanks,

Andy
http://www.geog.leeds.ac.uk/people/a.turner/


-Original Message-
From: Kingsley Idehen [mailto:kide...@openlinksw.com] 
Sent: 18 September 2013 19:44
To: public-...@w3.org; public-lod@w3.org
Subject: Re: Maphub -- RWW meets maps

On 9/18/13 1:40 PM, Melvin Carvalho wrote:
 A fantastic open source project maphub which uses linked data to read 
 and write to current and historical maps, using RDF and the open 
 annotations vocab.  There's even links to DBPedia!

 http://maphub.github.io/

 A great example of how to use the Read Write Web.  The video is well 
 worth watching!

Also publishes annotations in Linked Data form [1] :-)

[1] http://maphub.herokuapp.com/control_points/4 .

-- 

Regards,

Kingsley Idehen    
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen

Re: voting assistance applications and use of semantic technologies

2013-08-17 Thread Gannon Dick
I agree.  Nobody has told the surveillance state that "Enumerating Badness" is 
a dumb idea.
http://www.ranum.com/security/computer_security/editorials/dumb/
Then they have to get used to the idea that voter assistance is something 
different than counting voters or "Enumerating Goodness".  It's not easy being 
a surveillance state; the fads are unreliable :-)





 From: ProjectParadigm-ICT-Program metadataport...@yahoo.com
To: Gannon Dick gannon_d...@yahoo.com; semantic-web semantic-...@w3.org; 
public-lod@w3.org public-lod@w3.org 
Sent: Saturday, August 17, 2013 7:54 AM
Subject: Re: voting assistance applications and use of semantic technologies
 


The current state of the art favors the surveillance state, because the 
selection of the issue statements, which are at the heart of the matching done 
by the VAA application engine, does not use semantic technologies and is thus 
narrowly focused, as the literature indicates.


 
Milton Ponson
GSM: +297 747 8280
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean
Project Paradigm: A structured approach to bringing the tools for sustainable 
development to all stakeholders worldwide by creating ICT tools for NGOs 
worldwide and: providing online access to web sites and repositories of data 
and information for sustainable development

This email and any files transmitted with it are confidential and intended 
solely for the use of the individual or entity to whom they are addressed. If 
you have received this email in error please notify the system manager. This 
message contains confidential information and is intended only for the 
individual named. If you are not the named addressee you should not 
disseminate, distribute or copy this e-mail.




 From: Gannon Dick gannon_d...@yahoo.com
To: ProjectParadigm-ICT-Program metadataport...@yahoo.com; semantic-web 
semantic-...@w3.org; public-lod@w3.org public-lod@w3.org 
Sent: Friday, August 16, 2013 7:54 PM
Subject: Re: voting assistance applications and use of semantic technologies
 


Milton,

It's only my opinion, but when The Surveillance State seems quite convinced 
that voter suppression (the opposite of your goal) is possible with semantic 
methods, then perhaps a step backwards toward determinism might be wise.  I am 
not suggesting that you change your methods, only that politicians and 
political parties are well skilled in hiding events they do not want 
remembered.  The process is not random.  Semantic methods may be of some help, 
but the knowledge that statistical methods  *already* fail makes semantic 
methods potentially more helpful, at least as far as assessing the coverage of 
the search.

Something like this may help to ensure that the map of recent history does 
not have any missing names.  It does not make uncovering gaps in a particular 
CV any easier (sorry).

http://www.rustprivacy.org/faca/view/

--Gannon







 From: ProjectParadigm-ICT-Program metadataport...@yahoo.com
To: semantic-web semantic-...@w3.org; public-lod@w3.org public-lod@w3.org 
Sent: Friday, August 16, 2013 11:24 AM
Subject: voting assistance applications and use of semantic technologies
 


I am currently doing a small research project involving voting assistance 
applications (VAAs), which help voters reach informed decisions on which 
political parties or candidates best suit their own personal political 
preferences.

Since VAAs apply algorithms with weights and are based on analyzed texts 
searched for key concepts, the question arises: is there literature 
available which documents VAAs using semantic technologies to scour linked data 
for content, to draw up historical patterns of candidates and parties for use 
in VAAs?

Politicians should be judged on what they promise now and on what they have 
achieved, and for the latter we need analyzed historical track records, in 
which semantic technology use - e.g. in the analysis of linked data - may be of 
some use.

We welcome pointers to literature and projects, past, ongoing or planned on 
this issue.

 


Re: voting assistance applications and use of semantic technologies

2013-08-16 Thread Gannon Dick
Milton,

It's only my opinion, but when The Surveillance State seems quite convinced 
that voter suppression (the opposite of your goal) is possible with semantic 
methods, then perhaps a step backwards toward determinism might be wise.  I am 
not suggesting that you change your methods, only that politicians and 
political parties are well skilled in hiding events they do not want 
remembered.  The process is not random.  Semantic methods may be of some help, 
but the knowledge that statistical methods  *already* fail makes semantic 
methods potentially more helpful, at least as far as assessing the coverage of 
the search.

Something like this may help to ensure that the map of recent history does 
not have any missing names.  It does not make uncovering gaps in a particular 
CV any easier (sorry).

http://www.rustprivacy.org/faca/view/

--Gannon







 From: ProjectParadigm-ICT-Program metadataport...@yahoo.com
To: semantic-web semantic-...@w3.org; public-lod@w3.org public-lod@w3.org 
Sent: Friday, August 16, 2013 11:24 AM
Subject: voting assistance applications and use of semantic technologies
 


I am currently doing a small research project involving voting assistance 
applications (VAAs), which help voters reach informed decisions on which 
political parties or candidates best suit their own personal political 
preferences.

Since VAAs apply algorithms with weights and are based on analyzed texts 
searched for key concepts, the question arises: is there literature 
available which documents VAAs using semantic technologies to scour linked data 
for content, to draw up historical patterns of candidates and parties for use 
in VAAs?

Politicians should be judged on what they promise now and on what they have 
achieved, and for the latter we need analyzed historical track records, in 
which semantic technology use - e.g. in the analysis of linked data - may be of 
some use.

We welcome pointers to literature and projects, past, ongoing or planned on 
this issue.

 


Re: Sustainable governance for long-term preservation of RDF Vocabularies

2013-07-31 Thread Gannon Dick
Hi Ivan,

My 2 cents: http://lists.w3.org/Archives/Public/public-egov-ig/2013Jul/0026.html

Eventually every RDF file which identifies (people sameAs culture) will be 
about dead people.  An obvious key to sustainability is that  present or future 
discovery of RDF not depend upon the identification of people in the present to 
do whatever it is that the RDF is supposed to be doing.  Make the RDF Easter 
Egg Free to begin with and the new will never wear off.

--Gannon







 From: Ivan Herman i...@w3.org
To: W3C LOD Mailing List public-lod@w3.org; W3C Semantic Web IG 
semantic-...@w3.org 
Sent: Wednesday, July 31, 2013 10:20 AM
Subject: Sustainable governance for long-term preservation of RDF Vocabularies
 

Discussion paper for a special session at DC-2013, Lisbon, 3 September 2013
   http://wiki.dublincore.org/index.php/Vocabulary_Preservation_discussion_paper
   Authors: Tom Baker, Bernard Vatant, Pierre-Yves Vandenbussche

Special session is jointly sponsored by W3C and DCMI
   http://dcevents.dublincore.org/IntConf/index/pages/view/vocPres

As a foundation for data sources meant to be usable in the long term,
the value of any given vocabulary depends on the perceived certainty
that the vocabulary -- in both its machine-readable and human-readable
forms -- will remain reliably accessible over time and that its URIs
will not be sold, re-purposed, or simply forgotten. Vocabulary
maintainers move on to other projects or retire. Resources owned by
institutions may be neglected or become unavailable. As the givers of
meaning to datasets, vocabularies are of vital importance to the
scholarly record and cultural memory. However, their preservation will
not happen automatically; it must be planned.

Requirements for long-term preservation of RDF vocabularies

The requirements for long-term preservation must consider a timeframe
that stretches beyond the planning horizon of any institution that
exists today:

* Institutional guarantees for the persistence of URIs. One good
   first step is for owners of URI domains to publish a commitment that
   any URI coined for a term in a vocabulary will be used to refer to
   the same term in perpetuity and will not be repurposed.

* Persistence of documentation. Each term URI should remain
   resolvable to namespace documents -- descriptive documentation in
   HTML and/or machine- readable representations such as RDF schemas.
   Note that persistent URIs may redirect to documentation held at
   non-persistent locations. URIs and the associated documentation
   remain persistent to the extent that the link between the two is
   maintained as documentation is moved between servers.

* Appropriate versioning of vocabularies. Vocabularies evolve over
   time and need to be appropriately versioned. One versioning approach
   commonly used on the Web is to assign a dedicated URI per resource
   version (time-specific URI) and support an additional URI from which
   at any moment in time the current resource version is available
   (time-generic URI). This approach can be used for vocabularies. 
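A toy sketch of that versioning pattern (the URIs and version labels below are invented for illustration): the dated, time-specific URIs are immutable, while the time-generic URI always hands back whatever version is current, typically via an HTTP redirect.

```python
# Illustrative resolver for time-specific vs. time-generic vocabulary URIs.
# Example URIs are made up; in a real deployment the time-generic URI would
# issue an HTTP redirect to the current time-specific document.

versions = {
    "http://example.org/vocab/2012-06-14/": "schema v1",
    "http://example.org/vocab/2013-07-31/": "schema v2",
}
time_generic = "http://example.org/vocab/"
current = "http://example.org/vocab/2013-07-31/"

def resolve(uri):
    """Time-specific URIs always return the same document; the
    time-generic URI forwards to the current version."""
    if uri == time_generic:
        uri = current  # analogous to an HTTP redirect
    return versions[uri]
```

Citations that must stay stable point at a time-specific URI; tools that want the latest definitions follow the time-generic one.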

Towards a solution

* Cooperation among vocabulary maintainers. Vocabulary maintainers
   can take some measures to address the problem in the form of mutual aid.
   The Friend of a Friend (FOAF) Project, for example -- a vocabulary
   maintained by two private individuals -- has a cooperation agreement with
   the Dublin Core Metadata Initiative (DCMI) whereby DCMI maintains an
   up-to-date snapshot of the FOAF Vocabulary, may temporarily host the
   vocabulary by redirecting FOAF URIs to the DCMI website if needed, and
   could assume maintenance responsibility for the vocabulary if the FOAF
   Project should cease its normal activity [2]. The intent of this agreement
   is in part to promote the idea of long-term planning in the vocabulary
   maintenance community and to affirm best-practice principles and policies
   for RDF Vocabularies. Other individual vocabulary publishers, or short-term
   projects, should be able to engage in similar agreements.

* Cooperation between vocabulary maintainers and memory
   institutions. Creative Commons took off, as an idea, when it
   provided a set of options for standard, well-understood contracts
   detailing bundles of legal choices related to copyright. Imagine a
   menu of commitments about rights and duties regarding rapid
   interventions (e.g., redirecting URIs if disaster strikes) and the
   transfer of Internet domain names and of maintenance responsibility
   in the long-term. The involvement of major players in the Vocabulary
   Ecosystem could help ensure, at a minimum, that the popular
   vocabularies have such mechanisms in place.

* Flexible cooperation among memory institutions. No institution on
   the planet can possibly guarantee, today, that it will able to honor
   a preservation commitment of decades -- even in the case of the most
   well-endowed and politically secure national 

Re: Licensing advice

2013-07-25 Thread Gannon Dick
My two cents: Isn't Linked Data supererogatory in any Jurisdiction ?
http://plato.stanford.edu/entries/supererogation/
--Gannon  




 From: John Erickson olyerick...@gmail.com
To: Víctor Rodríguez Doncel vrodrig...@fi.upm.es 
Cc: Linked Data community public-lod@w3.org 
Sent: Thursday, July 25, 2013 8:03 AM
Subject: Re: Licensing advice
 

My two cents: In many legal regimes it has been successfully argued
that code is speech. The imperative vs declarative distinction is
likely to fail; if the code conveys information intended to control
the operation of another system, it can be argued that it is a form of
speech (and not merely data, for which different IP rules may
apply). Consider the DeCSS trials (and tribulations) of the last
decade http://digital-law-online.info/lpdi1.0/treatise50.html

People interested in this topic might enjoy Gabriella Coleman's Code
is Speech (2009) http://bit.ly/CodeIsSpeech and Coding Freedom
(2013) http://bit.ly/CodingFreedom

On Thu, Jul 25, 2013 at 8:30 AM, Víctor Rodríguez Doncel
vrodrig...@fi.upm.es wrote:

 Oh! I didn't know... but if you can insert a SQL expression then R2RML is
 certainly imperative.
 Now I am very curious about the Prolog question, too, and I would like to
 hear more opinions.

 To foster the discussion, I have posted about RDF Mappings and Licenses
 here: http://licensius.com/blog/MappingsAndLicenses

 Víctor

 El 25/07/2013 13:13, Barry Norton escribió:


 Interesting distinction, but I'm not sure I buy it.

 Does that mean software licenses don't apply to PROLOG code?

 I can actually make R2RML mappings more imperative than PROLOG cuts by using
 control flow features of SQL.

 Barry


 On 25/07/13 12:04, Víctor Rodríguez Doncel wrote:

 Dear Roberto, all

 Well, I have not heard about any case in a trial court about this and the
 legal texts seem somewhat ambiguous. Also, I have not heard other qualified
 opinions on this particular regard. So, this can be matter for a friendly
 discussion.

 But I still lean towards not considering a mapping (for example the R2RML
 below) as a computer program.
 The mapping is declarative, not imperative. They are not instructions, as
 required in the legal text.

 Think of HTML pages. I don't think they are regarded as software. People
 don't license them with a BSD license; they use Creative Commons licenses,
 intended for general works. You declare a table; a computer program will
 process it. (Yet, a Javascript piece would be made up of instructions).

 I hope I clarified my point.
 Víctor



 @prefix rr: <http://www.w3.org/ns/r2rml#> .
 @prefix ex: <http://example.com/ns#> .

 <#TriplesMap1>
     rr:logicalTable [ rr:tableName "EMP" ];
     rr:subjectMap [
         rr:template "http://data.example.com/employee/{EMPNO}";
         rr:class ex:Employee;
     ];
     rr:predicateObjectMap [
         rr:predicate ex:name;
         rr:objectMap [ rr:column "ENAME" ];
     ].



 El 25/07/2013 10:32, Roberto García escribió:

 Dear Víctor, Tom, all,

 Maybe I've missed something but if what is going to be licensed are R2RML
 mappings, for me this is code.

 As Víctor quoted, a computer program is (WIPO): "a set of instructions,
 which controls the operations of a computer in order to enable it to perform
 a specific task".

 This is just what happens with R2RML mappings, they are based on a
 metalanguage that is read by a computer using a R2RML interpreter
 (implemented using another programming language but just similar to a
 compiler) that at last executes a set of instructions that read data from a
 source and generate a data stream in the output...

 My 2c,


 Roberto


 On Wed, Jul 24, 2013 at 11:01 AM, Víctor Rodríguez Doncel
 vrodrig...@fi.upm.es wrote:


 Well, ODC data licenses include both copyrights and database rights.
 So you don't give up your claims for having made a creative work...

 Víctor

 El 24/07/2013 10:38, Tom Heath escribió:

 Just seen this thread, apols for the slow response Barry...

 Of course IANAL and all that, but I disagree with Victor's conclusion.

 I would argue that the individual mappings are creative works (as you
 say), and therefore a CC license would apply (better still, why not
 apply a public domain waiver so they're totally open?).

 The collection as a whole would probably qualify as a database, at
 which point Victor's points about a DB license would be relevant.

 As others have mentioned, the data created by the execution of these
 mappings is another issue altogether, which you seem to have covered.

 My 2p worth -- hope it helps :)

 Tom.


 On 12 July 2013 21:38, Víctor Rodríguez Doncel vrodrig...@fi.upm.es
 wrote:

 Barry,

 My opinion is the following:

 1. Code license NO. A computer program is (WIPO): "a set of instructions,
 which controls the operations of a computer in order to enable it to
 perform a specific task"
 2. Intellectual Property. I'd say no in this case. Some databases are
 protected by IP law. They are if they can be assumed to be 

Re: An ontology for adverts?

2013-07-25 Thread Gannon Dick


I think DOAP is the best bet for advertising a Root Node, but if you think 
about it in those terms, you are already there, so what's the point?

Actually, there is a point.  There are any number of things which can happen 
after a hyperlink is activated, and as Winston Churchill said: "Why, you can 
take the most gallant sailor, the most intrepid airman or the most audacious 
soldier, put them at a table together - what do you get? The sum of their 
fears."


1) You could add the DOAP as text with working links.  If you recognize DOAP, 
you know where to look for the link splash in your browser.
2) The W3C Validator does not validate DOAP; if it did, you could provide a 
revalidation link and badge to confirm the content.  Any other trusted 
validation service would do, though.

--Gannon





From: Melvin Carvalho melvincarva...@gmail.com
To: Linked Data community public-lod@w3.org 
Sent: Thursday, July 25, 2013 3:00 PM
Subject: An ontology for adverts?



Does anyone know if there is any ontology for adverts?

I'd like to take a product or service or project, then put up a banner link or 
a text ad on my site based on machine readable data

Is DOAP the best bet?

Re: Linked Data Glossary is published!

2013-07-02 Thread Gannon Dick
I agree Michael, but if you'll excuse the expression, I think we are arguing 
semantics.
 
Datasets often have domain specific names for meta data components.
 
For example, Data Dot Gov has ~44 terms-of-art.  To organizational outsiders 
they are "tag soup"; to organizational insiders, a flexible ontology-looking 
thingy (you have to squint).
 
http://www.rustprivacy.org/2013/egov/catalog/DataDotGovTagSoup.html
 
(directions: push the button)
 
  A URI is a right-directed graph whose left-most component is a transport 
protocol; it is also the root node of XML documents (a destination port).  
Below and to the right, they speak the domain's language.
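As a small standard-library illustration of that left-to-right reading (the URL here is just an example):

```python
# Split an example URI into its left-to-right components with the
# Python standard library.
from urllib.parse import urlsplit

parts = urlsplit("http://www.rustprivacy.org/2013/egov/catalog/")

scheme = parts.scheme  # the transport protocol, the "root" of the graph
host = parts.netloc    # the authority (host, and port if one is given)
path = parts.path      # everything below and to the right, in the
                       # domain's own language
```

Everything left of the path is the generic Web plumbing; everything in the path is named by the domain's own conventions.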
 


 From: Michael Miller michael.mil...@systemsbiology.org
To: KANZAKI Masahide mkanz...@gmail.com; John Erickson 
olyerick...@gmail.com 
Cc: Bernadette Hyland bhyl...@3roundstones.com; W3C public GLD WG WG 
public-gld...@w3.org; Linked Data community public-lod@w3.org; egov-ig 
mailing list public-egov...@w3.org; HCLS public-semweb-life...@w3.org 
Sent: Tuesday, July 2, 2013 10:23 AM
Subject: RE: Linked Data Glossary is published!
  


hi all, 
 
XML takes on many levels of machine readability.  i would argue that if XML 
came with a DTD/XML schema it is at least 3 star and possibly 4 star.  that at 
least was my experience with MAGE-ML (i'd say 3 star) and the clinical XML for 
the TCGA project (4 star) 
 
cheers, 
michael
  
Michael Miller
Software Engineer 
Institute for Systems Biology
  
 
From:KANZAKI Masahide [mailto:mkanz...@gmail.com] 
Sent: Monday, July 01, 2013 7:19 PM
To: John Erickson
Cc: Bernadette Hyland; W3C public GLD WG WG; Linked Data community; egov-ig 
mailing list; HCLS
Subject: Re: Linked Data Glossary is published! 
 
Hello John, thanks for reply, very much appreciated.
 
2013/7/2 John Erickson olyerick...@gmail.com 
Thus, I think we should distinguish between plain old XML and Office
Open XML/OOXML/OpenXML; based on my understanding and what I read 
OpenXML could be listed as an example three-star format. 
 
Well, that's true. I hope this distinction will be incorporated into this 
glossary, rather than simply showing XML as a 2-star example (which is 
misleading not only for me, but also for others around me). 
 
 
* I think the POINT is that the data should be published in a way
suited for machine consumption. A format should NOT be considered
machine readable simply because someone cooked up a hack on
Scraperwiki for getting the data out of an otherwise opaque data dump
on a site
 
Yes, it is desirable that data is published for machine consumption in the 
Linked Data space, though my point was that the term "machine readable" is too 
general to be redefined for the LD perspective.  
 
 
* The argument against having a separate term is simply that
(arguably) the common case for publishing machine readable data *is*
structured data, and adding a special "structured" category merely
confuses adopters.
* The argument for a new term is, if the reason we want machine
readable data is because we expect (and usually get) structured data,
then we should specify that what we REALLY want is machine readable
structured data... (and explain what that means)
 
Well, machine readable data is *not necessarily* structured in general, so 
the second argument seems more reasonable, although I'm not arguing to add a 
separate term; rather, I think it is not a good idea to redefine the term 
"machine readable" just for a specific community. 
 
 
Thank you very much for the discussion.
 
cheers, 
 
 
-- 
@prefix : <http://www.kanzaki.com/ns/sig> .  :from [:name
"KANZAKI Masahide"; :nick "masaka"; :email "mkanz...@gmail.com"]. 

Re: Linked Data Glossary is published!

2013-07-02 Thread Gannon Dick
Oops. http://www.rustprivacy.org/2013/egov/catalog/DataDotGovMetaTagSoup.html
 
should be MetaTagSoup
 


 From: Gannon Dick gannon_d...@yahoo.com
To: Michael Miller michael.mil...@systemsbiology.org; KANZAKI Masahide 
mkanz...@gmail.com; John Erickson olyerick...@gmail.com 
Cc: Bernadette Hyland bhyl...@3roundstones.com; W3C public GLD WG WG 
public-gld...@w3.org; Linked Data community public-lod@w3.org; egov-ig 
mailing list public-egov...@w3.org; HCLS public-semweb-life...@w3.org 
Sent: Tuesday, July 2, 2013 4:20 PM
Subject: Re: Linked Data Glossary is published!
  


I agree Michael, but if you'll excuse the expression, I think we are arguing 
semantics.
 
Datasets often have domain specific names for meta data components.
 
For example, Data Dot Gov has ~44 terms-of-art.  To organizational outsiders 
they are "tag soup"; to organizational insiders, a flexible ontology-looking 
thingy (you have to squint).
 
http://www.rustprivacy.org/2013/egov/catalog/DataDotGovMetaTagSoup.html
 
(directions: push the button)
 
  A URI is a right-directed graph whose left-most component is a transport 
protocol; it is also the root node of XML documents (a destination port).  
Below and to the right, they speak the domain's language.
 


 From: Michael Miller michael.mil...@systemsbiology.org
To: KANZAKI Masahide mkanz...@gmail.com; John Erickson 
olyerick...@gmail.com 
Cc: Bernadette Hyland bhyl...@3roundstones.com; W3C public GLD WG WG 
public-gld...@w3.org; Linked Data community public-lod@w3.org; egov-ig 
mailing list public-egov...@w3.org; HCLS public-semweb-life...@w3.org 
Sent: Tuesday, July 2, 2013 10:23 AM
Subject: RE: Linked Data Glossary is published!
  


hi all, 
 
XML takes on many levels of machine readability.  i would argue that if XML 
came with a DTD/XML schema it is at least 3 star and possibly 4 star.  that at 
least was my experience with MAGE-ML (i'd say 3 star) and the clinical XML for 
the TCGA project (4 star) 
 
cheers, 
michael
  
Michael Miller
Software Engineer 
Institute for Systems Biology
  
 
From:KANZAKI Masahide [mailto:mkanz...@gmail.com] 
Sent: Monday, July 01, 2013 7:19 PM
To: John Erickson
Cc: Bernadette Hyland; W3C public GLD WG WG; Linked Data community; egov-ig 
mailing list; HCLS
Subject: Re: Linked Data Glossary is published! 

Hello John, thanks for reply, very much appreciated.

2013/7/2 John Erickson olyerick...@gmail.com 
Thus, I think we should distinguish between plain old XML and Office
Open XML/OOXML/OpenXML; based on my understanding and what I read 
OpenXML could be listed as an example three-star format. 


Well, that's true. I hope this distinction will be incorporated into this 
glossary, rather simply showing XML as 2-stars example (which is misleading 
not only for me, but also for others around me). 


* I think the POINT is that the data should be published in a way
suited for machine consumption. A format should NOT be considered
machine readable simply because someone cooked up a hack on
Scraperwiki for getting the data out of an otherwise opaque data dump
on a site

Yes, it is desirable that data is published for machine consumption in Linked 
Data space, though my point was that the term "machine readable" is too general 
to be redefined from an LD perspective.  


* The argument against having a separate term is simply that
(arguably) the common case for publishing machine readable data *is*
structured data, and adding a special structured category merely
confuses adopters.
* The argument for a new term is, if the reason we want machine
readable data is because we expect (and usually get) structured data,
then we should specify that what we REALLY want is machine readable
structured data... (and explain what that means)

Well, machine readable data is *not necessarily* structured in general, so 
the second argument seems more reasonable, although I'm not arguing to add a 
separate term; rather, I think it is not a good idea to redefine the term 
"machine readable" just for a specific community. 


Thank you very much for the discussion.

cheers, 


-- 
@prefix : <http://www.kanzaki.com/ns/sig> .  :from [:name
"KANZAKI Masahide"; :nick "masaka"; :email "mkanz...@gmail.com"]. 

Re: Linked Data Glossary is published!

2013-06-28 Thread Gannon Dick
The Linked Data Glossary mentions the Metadata Object Description Schema (MODS), 
a Library of Congress (LOC) citation scheme.  As you know, Bernadette, I (heart) Linked Data 
to death, and (user) annotations are a great idea. Still for Government Policy, 
annotations as Crowd Sourced Opinion, Easter Eggs, Sponsored Links (whatever 
you want to call them) are not Democracy Adjectives.  They are easy to 
remove/redact from XML Citations (like MODS) with XSLT (also mentioned in the 
Glossary), provided that you intended to remove them at some point.  You simply 
enclose the material to be redacted as Tagged Marginalia (a Librarianesque term 
for "don't let me catch you writing in the margins!", keyword: catch).  In my 
example (MODS 3.4 + XHTML 1.0) the tag is <tm class="rust"/>.  Validation by 
XSD, partial redaction (to comments), etc. is possible.  I am having trouble 
uploading to my web site at the moment, so if you want a copy of the example, 
email me.

I am not sure what this feature looks like in JSON or JSON-LD.  I doubt it is 
quite that transparent.
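
The trick Gannon describes (wrap the marginalia in a dedicated tag, then drop that tag on output) is, in XSLT terms, an identity transform plus one empty template. A rough Python equivalent: the `<tm class="rust"/>` wrapper comes from his example, while the sample document and function name here are illustrative.

```python
import xml.etree.ElementTree as ET

SAMPLE = """<mods>
  <note>Official citation text.</note>
  <tm class="rust"><note>Crowd-sourced opinion to redact.</note></tm>
</mods>"""

def redact(xml_text, tag="tm", cls="rust"):
    """Drop every <tm class="rust"> element (tagged marginalia) from the tree."""
    root = ET.fromstring(xml_text)
    # ElementTree has no parent pointers, so build a child -> parent map first.
    parents = {child: parent for parent in root.iter() for child in parent}
    for el in list(root.iter(tag)):
        if el.get("class") == cls:
            parents[el].remove(el)
    return ET.tostring(root, encoding="unicode")

print(redact(SAMPLE))  # the official note survives; the marginalia is gone
```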





 From: Bernadette Hyland bhyl...@3roundstones.com
To: paoladimai...@googlemail.com 
Cc: public-ldp...@w3.org; egov-ig mailing list public-egov...@w3.org; W3C 
public GLD WG WG public-gld...@w3.org; Linked Data community 
public-lod@w3.org 
Sent: Friday, June 28, 2013 10:50 AM
Subject: Re: Linked Data Glossary is published!
 


Hi Paola,
Oh, I like that idea of annotations a lot!  I don't want to get ahead of myself 
here but assuming the Open Data Directory CG is happy to publish the glossary 
as LOD from the w3.org namespace, we can readily incorporate a new feature 
supporting the open annotations spec as nanopublications using Callimachus (the 
FLOSS data platform that the Open Data Directory is built on).  That would be a 
nice exemplar and certainly a good demonstration of LOD in the wild.

Thank you for the suggestion and we'll report back on progress this summer.  
Maybe you can test the future implementation & see if the workflow makes sense?


Cheers,

Bernadette

On Jun 27, 2013, at 12:54 PM, Paola Di Maio paola.dim...@gmail.com wrote:

yep


great work, B and all
a starting point, and hopefully a valuable learning /teaching resource as well
 #webscience #learning
-  could it be useful perhaps to find a way of quickly adding annotations to the 
terms, so that when things crop up, input from (registered?) users can be 
recorded, and considered in future iterations


cheers


P



On Thu, Jun 27, 2013 at 8:38 PM, Bernadette Hyland bhyl...@3roundstones.com 
wrote:

Hi,
On behalf of the editors, I'm pleased to announce the publication of the 
peer-reviewed Linked Data Glossary published as a W3C Working Group Note 
effective 27-June-2013.[1]  


We hope this document serves as a useful glossary containing terms defined 
and used to describe Linked Data, and its associated vocabularies and best 
practices for publishing structured data on the Web.  


The LD Glossary is intended to help foster constructive discussions between 
the Web 2.0 and 3.0 developer communities, encouraging all of us to appreciate 
the application of different technologies for different use cases.  We hope 
the glossary serves as a useful starting point in your discussions about data 
sharing on the Web.


Finally, the editors are grateful to David Wood for contributing the initial 
glossary terms from Linking Government Data, (Springer 2011).  The editors 
wish to also thank members of the Government Linked Data Working Group with 
special thanks to the reviewers and contributors: Thomas Baker, 
Hadley Beeman, Richard Cyganiak, Michael Hausenblas, Sandro Hawke, 
Benedikt Kaempgen, James McKinney, Marios Meimaris, Jindrich Mynarz and 
Dave Reynolds who diligently iterated the W3C  Linked Data Glossary in order to 
create a foundation of terms upon 
which to discuss and better describe the Web of Data.  If there is anyone that 
the editors inadvertently overlooked in this list, please accept our apologies. 


Thank you one & all!

Sincerely,
Bernadette Hyland, 3 Round Stones
Ghislain Atemezing, EURECOM
Michael Pendleton, US Environmental Protection Agency
Biplav Srivastava, IBM

W3C Government Linked Data Working Group
Charter: http://www.w3.org/2011/gld/ 


[1] http://www.w3.org/TR/ld-glossary/


Re: Civic apps and Linked Data

2013-06-26 Thread Gannon Dick






I would love to hear from anyone who knows of data sets that may help.  I am 
also, like you, curious to hear any other civic use cases for the use of this 
technology.


Really, Sands, you Librarians are taking all the fun out of world domination :-)
First, you could use the Library of Congress Linked Data Service 
(http://id.loc.gov/).  Sadly, that takes all the fun out of wheel reinvention 
too.
Second, and this is a bit counter-intuitive, you could consult the coding 
systems used in the CIA World Factbook 
(https://www.cia.gov/library/publications/the-world-factbook/) or UN-LOCODES 
(http://www.unece.org/cefact/codesfortrade/codes_index.html).  The 
counter-intuitive part is: of course the CIA and the UN are pushing agendas, 
just not the agendas you think.  They have no use for virtual worlds, one is 
complicated enough.
Third, if you like, the IANA Root Zone can be used 
(http://www.iana.org/domains/root/db), although you will find out that the 
counter-intuitive restraints of the CIA and the UN are a good thing after all.
Last, the web is good for wildly inflationary greedy matches.  That is not so 
good for a Librarian's career.  If you want to do it yourself, stay safe: c.f. 
http://www.rustprivacy.org/2012/roadmap/olympics/
--Gannon


- Sands Fish
- Senior Software Engineer / Data Scientist
- MIT Libraries
- sa...@mit.edu





 
From: alvaro.gra...@gmail.com [alvaro.gra...@gmail.com] on behalf of Alvaro 
Graves [alv...@graves.cl]
Sent: Wednesday, June 26, 2013 10:03 AM
To: Linked Data community
Subject: Civic apps and Linked Data


Hello everyone,

A few days ago I attended ABRELATAM'13 an unconference focused on Open Data in 
Latin America. I proposed a session about Open Data + Linked Data to discuss 
how semantics and LOD in general can help government and civic organizations. I 
want to share the main ideas that emerged from the conversation:

- SW/LOD sounds really cool and is the direction where things should move.
- However there are many technical aspects that remain unsolved
- Since for many people having a relatively good solution using CSV, JSON, etc. 
is easier, they don't want to use SW/LOD because it is an overkill and too 
complicated.

So my question is: Why don't we see lots of civic apps using Linked Data? Where 
are the SW activists? Why haven't we been able to demonstrate to the hacker 
community the benefits of using semantic technologies? Is it because they are 
hard to use? They don't scale well in many cases (as a Googler pointed out)? 
Are we too busy working in academia/businesses?

I know very few civic apps using semantic technologies and I don't think I have 
seen any that has made real impact in any country. I would love it if you can 
prove me wrong and if we can discuss how we can involve more activists and 
hackers in the SW/LOD community.


Alvaro Graves-Fuenzalida
Web: http://graves.cl - Twitter: @alvarograves

Re: The Great Public Linked Data Use Case Register for Non-Technical End User Applications

2013-06-24 Thread Gannon Dick
Spell Checkers, because there are some jobs Web Visionaries just won't do.

Unpaid volunteers have a plan for World Domination and it includes nice 
penmanship too :-) 



Reusing patterns does make it easier for tools to aggregate and present 
data. The perfect might be the enemy of the good, but sometimes a little 
effort to do things consistently is good.

Dave

[1] http://dir.w3.org/

Re: Are Topic Maps Linked Data?

2013-06-23 Thread Gannon Dick
Topic Maps help keep RDF NULLs honest.  So, they have a big impact on 
systemic transparency.  The emphasis is on the "O" in LOD, IMO. --Gannon



From: Dan Brickley dan...@danbri.org
To: public-lod public-lod@w3.org 
Sent: Sunday, June 23, 2013 9:04 AM
Subject: Are Topic Maps Linked Data?
 


Just wondering,

Dan

Re: Representing NULL in RDF

2013-06-14 Thread Gannon Dick
1. Trivial Use Case:  If you ask my big sister's age, *you* are the dead 
person.  Her motto is 39 'till the end of time.
2. Wonky version (Autoclass (A Bayesian Classifier from NASA) documentation)
Truncation error will often dominate measurement error.  Here the
classical example is human age: measurable to within a few minutes,
easily computable to within a few days, yet generally reported in years.
The reported value has been truncated to much less than its potential
precision.  Thus the error in that reported value is on the order of
half the least difference of the representation.  Truncation error can
arise from a variety of causes.  Its presence should be suspected
whenever measurements of intrinsically continuous properties are reported
as integers or limited precision floating point numbers.
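
The Autoclass passage can be made concrete with a toy calculation (the ages are made up; the point is the half-least-difference bound on truncation error):

```python
# Age is measurable to day precision, but conventionally reported in whole years.
true_age_years = (39 * 365 + 200) / 365   # hypothetical fine-grained measurement
reported_age = int(true_age_years)         # truncation to the integer year

error = true_age_years - reported_age
print(reported_age, round(error, 3))       # -> 39 0.548
# Truncation error can approach a full year and averages about half the least
# difference (0.5 year), dwarfing the day-level measurement error.
assert 0 <= error < 1
```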

Hope this helps.
--Gannon




- Original Message -
From: Andy Turner a.g.d.tur...@leeds.ac.uk
To: 'Karl Dubost' k...@la-grange.net; Juan Sequeda juanfeder...@gmail.com
Cc: public-lod public-lod@w3.org
Sent: Friday, June 14, 2013 5:35 AM
Subject: RE: Representing NULL in RDF

 Do dead persons have an age?

That depends. A body of a dead person belonged to a living person. The idea of 
a person can live on. Indeed there are still anniversary celebrations of famous 
people's life events. The work of a person can also live on. People are partly 
defined by their ideas and identity, but they are currently inherently linked 
to their body which is itself made of parts that are continually updating.

Toodle Pip!

Re: There's No Money in Linked Data

2013-06-07 Thread Gannon Dick
I agree, Andrea, and would further point out that how much money is a 
relativistic question.  Money has an associated Time Value.

Money, Light and Linked Data get no Birthday Party, sadly, which is to say they 
have no Birthday.  Money tries to cheat by having a Time Value but no Birthday. 
 Light can not cheat: One (1) light-year is 364+(2/364) light-days plus 1 
light-day (after) every four years. (1/365) is an approximation to 364 days + 
2 halves of the same measurement.  This is not a trivial point.

To paraphrase your question: What is the Banker's Return on the Time Value of 
Linked Data ?
Answer: Zero (intellectually honest answer).  But don't tell Bankers; they are 
ferocious when provoked.
--Gannon



 From: Andrea Splendiani andrea.splendi...@iscb.org
To: Prateek jainprat...@gmail.com 
Cc: public-lod@w3.org; Semantic Web semantic-...@w3.org 
Sent: Friday, June 7, 2013 4:10 AM
Subject: Re: There's No Money in Linked Data
 


Hi,

Let me get into this thread with a bit of a provocative statement.

I think the issue is not whether there is money or not in linked data, but: how 
much money is in linked data ?

Lot of money has been injected by research funds, maybe governments and maybe 
even industry. 
Is the business generated of less, more, or just about the same value ? 

Another point of view, perhaps more appropriate, is that Linked-Data is a bit 
like building highways. You can eventually measure the economic benefit of 
having them, but (at least in several countries) it's not something from which 
you expect a return.

ciao,
Andrea


Il giorno 06/giu/2013, alle ore 13:13, Prateek jainprat...@gmail.com ha 
scritto:

For some reason, my original post didn't appear in the mailing list archives. 
My apologies for duplicate posts, if they show up here.


-- Forwarded message --
From: Prateek jainprat...@gmail.com
Date: Wed, Jun 5, 2013 at 7:16 PM
Subject: Re: There's No Money in Linked Data
To: public-lod@w3.org, Semantic Web semantic-...@w3.org, 
a.bluma...@semantic-web.at



Hello All,

I am one of the authors of the work being discussed.

All the stuff I have seen till now is about Linked Data being great and useful 
for data integration within commercial settings. The work does not dispute 
that. I agree we didn't use the proper term, and from the reading of the work 
it becomes clear we didn't complain about this aspect. The work will be 
revised to correct the terminology and other feedback from the mailing list.


The issue pointed out in the work is with Linked Open Data Cloud data sets. 
This is getting limited or no attention in the discussions. It's like saying 
the technology is awesome, let's not worry so much about the 'open' data sets. 

In Andrea's blog he is saying the technology is mature now. That is great. But 
these technologies have been around for a while now.

The question still remains, what about the 'open' datasets amassed till now? 
The 300+ datasets which everyone uses in their slides.

In the blog


Yes, there is a critical mass of available LOD sources (for example UK 
Ordnance Survey) and also of high-quality thesauri and ontologies (for example 
Wolter Kluwer’s working law thesaurus) to be reused in corporate settings

But they have been around for about 6 yrs? Why haven't they been used till now 
besides academic playgrounds or for pure research? Are they not good enough to 
be used? In the hope it will happen one day? In your blog there is a link for 
use cases of Linked Data. Why don't we find the same thing for Linked Open Data?
 

(These are all questions which I have pondered about, not a criticism)


I have tried collecting the use cases before for LOD 
http://comments.gmane.org/gmane.org.w3c.public-lod/1575


The response was limited.


Happy to see the discussion, but I think the main issue seems to be getting 
sidelined.

Regards

Prateek


Note: The views expressed herein are my own and do not necessarily reflect the 
views of my co-authors of the work 'There's No Money in Linked Data' 
and my employer.

- - - - - - - - - - - - - - - - - - - 



Prateek Jain, Ph. D.
RSM
IBM T.J. Watson Research Center
1101 Kitchawan Road, 37-244
Yorktown Heights, NY 10598
Linkedin: http://www.linkedin.com/in/prateekj 


Re: There's No Money in Linked Data

2013-06-07 Thread Gannon Dick
Lots of people make lots of money from data, structured data and Linked Data.  
This is a good thing. But data is a perpetuity not an annuity.  The math works 
fine if correctly applied.  Don't expect your Smart Phone or Robotic Agent to 
have a Banker's expectations, they are much too logical for that :-)
--Gannon



 From: Kingsley Idehen kide...@openlinksw.com
To: public-lod@w3.org public-lod@w3.org 
Cc: Semantic Web semantic-...@w3.org 
Sent: Friday, June 7, 2013 9:59 AM
Subject: Re: There's No Money in Linked Data
 


On 6/7/13 10:47 AM, Gannon Dick wrote:

I agree, Andrea, and would further point out that how much money is a 
relativistic question.  Money has an associated Time Value.

Money, Light and Linked Data get no Birthday Party, sadly, which
is to say they have no Birthday.  Money tries to cheat by having
a Time Value but no Birthday.  Light can not cheat: One (1)
light-year is 364+(2/364) light-days plus 1 light-day (after)
every four years. (1/365) is an approximation to 364 days + 2
halves of the same measurement.  This is not a trivial point.

To paraphrase your question: What is the Banker's Return on the
Time Value of Linked Data ?
Answer: Zero (intellectually honest answer), But don't tell
Bankers, they are ferocious when provoked..
--Gannon
What about when you apply your formula to the Web? Basically, is
anyone (including Bankers) making money on the Web? 

Funnily enough, I just had a conversation with a Banker that went
something like this, as part of an identity verification process:

Banker said: "based on public records, which of the following
statements about you is true?" 

Was the outcome of interaction valuable to the banker? 

Was the outcome valuable to me? 

In either case, would money be potentially made or lost as a
result of that interaction? It took about 5 minutes :-) 

Kingsley 






Re: There's No Money in Linked Data

2013-06-07 Thread Gannon Dick
Very nice work, Kingsley.

FWIW, Vint Cerf recently made the same point ...

http://www.theregister.co.uk/2013/06/05/internet_pioneer_vince_cerf_big_data_apocalypse/

I'm not sure you can get him to agree that that was the point he was making.  
You are on your own there.

And I apologize in advance for not being an early adopter of your Service.  In 
my particular instance ID Control is well handled by the simple fact that 
nobody else is crazy enough to want a strange name like Gannon (J.) Dick.  
Firewalls have been trying to educate me for years without success.

--Gannon





 From: Kingsley Idehen kide...@openlinksw.com
To: public-lod@w3.org public-lod@w3.org 
Cc: Semantic Web semantic-...@w3.org; business-of-linked-data-bold 
business-of-linked-data-b...@googlegroups.com 
Sent: Friday, June 7, 2013 10:47 AM
Subject: Re: There's No Money in Linked Data
 


On 6/7/13 11:25 AM, Gannon Dick wrote:

Lots of people make lots of money from data, structured data and Linked Data.  
This is a good thing. But data is a perpetuity not an annuity. 

Depends on who is claiming the annuity. For instance, imagine a
world in which you charge the annuity for access to your master
profile data.

Master profile data? That's data curated by You and culled from a
plethora of sources that include those Web 2.0 social networks that
once thought the joke was on You, etc. 


The math works fine if correctly applied. 

Yes. Thus, flip the script :-)


Don't expect your Smart Phone or Robotic Agent to have a Banker's expectations, 
they are much too logical for that :-)

Not expecting that. I believe in the magic of being you!

Links: http://youid.openlinksw.com -- for a teaser !

Kingsley 


Re: Request for Help: US Government Linked Data

2013-05-22 Thread Gannon Dick
Hi Jürgen,

Thanks for the Cartoon.
A mixture of education/outreach has always served the web well.
The US Government is a special case because they issue a lot of integrated 
facts into the Public Domain.  Civil Servants (in theory) never have to debate 
the quality of work product.  This system produces frictions not unique to US 
Territory.  The link ( 
http://www.rustprivacy.org/2013/egov/roadmap/NoMoneyInGovernment.pdf) makes the 
general case:  You want both tolerance and precision, but both are not 
necessary.  It is not reasonable to land on Mars directly (without a few 
orbits) or even drive from Berlin to Bonn and back in *exactly* the same time.  
You have to let the average happen.  Both Lord Kelvin (Absolute Zero) and 
Professor Einstein (Mass and Energy) played tricks at the far ends of the range 
to make a return trip possible.  To put it another way, you can level the 
playing field (and you should), but if you build a high brick wall in front of 
the Goal, then the game you are playing is not Football (Soccer or American 
Football). 
 Actually, it is not a fun game at all.

StratML can specify successive approximations (steps) to the goal, SKOS and RDF 
determine a median which is presumed close to the mean (average).  Watches were 
invented to agree with each other, not the sun, which is never on time for 
lunch.  The average is what it is.  Policy goals are not stupid and not fish.

--Gannon


 From: Jürgen Jakobitsch j.jakobit...@semantic-web.at
To: Gannon Dick gannon_d...@yahoo.com 
Cc: Eric Mill konkl...@gmail.com; Luca Matteis lmatt...@gmail.com; David 
Wood da...@3roundstones.com; community public-lod@w3.org; eGov W3C 
public-egov...@w3.org 
Sent: Tuesday, May 21, 2013 6:17 PM
Subject: Re: Request for Help: US Government Linked Data
 

it's a fish of course, not a frog [1]... (excuse climb typo)

wkr j

[1] 
http://smrt.ccel.ca/files/2012/08/Albert-Einstein-everyone-is-a-genius-but-if-you-judge-a-fish-by-its-ability-to-climb-a-tree.jpeg



- Original Message -
From: Jürgen Jakobitsch j.jakobit...@semantic-web.at
To: Gannon Dick gannon_d...@yahoo.com
Cc: Eric Mill konkl...@gmail.com, Luca Matteis lmatt...@gmail.com, 
David Wood da...@3roundstones.com, community public-lod@w3.org, eGov 
W3C public-egov...@w3.org
Sent: Wednesday, May 22, 2013 1:13:10 AM
Subject: Re: Request for Help: US Government Linked Data

hi,

for clarification, my comment was about this line [1]. 
it was meant to give you an idea that the comparison made cannot be left just 
so, as it is full of suggestions and doesn't seem to be based on anything other
than emotion (as are most "is better than" debates).
the given judgement [1] suggests that it is the best solution for what 
it was actually made for (= void) [3], by first comparing it with something the 
main goal of which is to create knowledge representations in the form of a 
controlled vocabulary (thesaurus), and secondly comparing it with a whole data 
model of whose ability to create a schema suitable to handle the given use case 
i am convinced.

you know... if you judge a frog by it's ability to clime a tree...

wkr jürgen


[1] Sorry to say, for reasons given, that StratML seems the better choice
for Strategic Policy Representation (rather than SKOS and RDF).
[2] http://en.wikipedia.org/wiki/List_of_XML_markup_languages
[3] from [2]: StratML is an XML vocabulary and schema for strategic and 
performance plans and reports


Re: Request for Help: US Government Linked Data

2013-05-21 Thread Gannon Dick
FWIW, having been labelled confused multiple times through the magic of email 
forwarding ... :-) 




 From: Eric Mill konkl...@gmail.com
Subject: Re: Request for Help: US Government Linked Data
 


My (completely personal, unofficial) request of the LD community, as Project 
Open Data and its discussion threads grow, is to avoid a general summoning of 
the troops to this stuff. 
===
Yes, avoid circle the wagons too.  Data Models are important, and ... (cont.)

===

One of the things that was made obvious to me by that thread is how painfully 
easy it is for people who very much have the same awesome shared end goals in 
mind - more useful government data - to talk past each other. That only gets 
easier when comments get more emotional, and gauging one's success during a 
debate becomes a matter of quantity rather than quality.

As I said, I really valued the thread we had - and most especially, I love what 
POD is doing, and I think the US and Github are going to have a profound impact 
on how the world views policy making in the long run. The POD project is going 
to be looked at by governments around the world, and they're going to evaluate 
POD based on the quality of those discussions (not the outcomes).
===
... outcomes are a dodgy gauge of Policy with Open Data for well defined 
reasons - and you do not conflate "well defined" with "benefits the Smartest 
Guys in the Room most", obviously.  In the Commercial world, the square root of 
a dollar (or Euro) is 10 dimes, the square root of a dime is a penny and the 
square root of 2 dollars is $1.41.  Facts just don't understand simple 
Economics, or something like that.  That said, Open Data does use an odd 
Coordinate System.  Map Makers will tell you that these maps have little 
navigation value.  They also expect you to know that the voids cannot be flown 
over or tunnelled under without a theory to support such an operation.  
"Because I think it would be nice if I could" isn't a theory.  There are quite 
a few other non-theories around too.  The regions beyond the length of the 
Equator+Voids are real, not Outliers.  The shortest distance between two points 
is the long route around, sometimes.
http://en.wikipedia.org/wiki/File:MercTranSph_enhanced.png (no voids - 
Spherical)
http://upload.wikimedia.org/wikipedia/commons/9/93/MercTranEll.png (Elliptical 
with voids at the boundaries)

1) The voids never go away.  
2) Governments and the LD Community cannot treat Outliers as useless losers 
who don't get LD.

===

It's going to be great to have more of those discussions with everyone here, 
both about LD (and things other than LD!). There are a ton of unblazed trails 
here, and I'm just so excited to see where they go.

===
Me too.
===


-- Eric



On Mon, May 20, 2013 at 4:31 PM, Luca Matteis lmatt...@gmail.com wrote:

The pull request was merged. Great success! 


Let's continue this effort by submitting more LOD pull-requests.



On Sun, May 19, 2013 at 7:15 PM, Jürgen Jakobitsch SWC 
j.jakobit...@semantic-web.at wrote:

On Sun, 2013-05-19 at 08:19 -0700, Gannon Dick wrote:
 Dave,


 IMHO, the W3C Cookbook methods do not go far enough to define the
 short-term strategy game of which Americans are so fond.  The Federal
 Government must plan Social Policy from ante Meridian (AM) to post
 Meridian (PM).  Playing statistical games with higher frequencies or
 modified time spans is fun, but it is not Science (a Free Energy
 Calculation).


 http://www.rustprivacy.org/2013/egov/roadmap/NoMoneyInGovernment.pdf


 Sorry to say, for reasons given, that StratML seems the better choice
 for Strategic Policy Representation (rather than SKOS and RDF).

sorry, no offence but above are two lines of total confusion...

wkr j




 --Gannon



 __

 From: David Wood da...@3roundstones.com
 To: public-lod@w3.org community public-lod@w3.org
 Sent: Saturday, May 18, 2013 8:59 AM
 Subject: Re: Request for Help: US Government Linked Data


 Hi all,

 I take it back: Don't just comment.

 We need to introduce pull requests into the Project Open Data
 documents that add Linked Data terms, examples and guidelines to the
 existing material.

 There are a few scattered RDFa references in relation to schema.org,
 but most of the Linked Data material has been removed from the
 documents.  We need to get this back in existing Linked Data efforts
 within the US Government might very well be hurt.

 Please help.  Thanks.

 Regards,
 Dave
 --
 http://about.me/david_wood



 On May 18, 2013, at 09:16, David Wood da...@3roundstones.com wrote:

  Hi all,
 
  Parts of the US Government have been discussing the role of Linked
 Data in government agencies and whether Linked Data is what the Obama
 Administration meant when they mandated "machine readable data".
 Unsurprisingly, some people like to do things the old ways, with a
 three-tier architecture and without

Re: Request for Help: US Government Linked Data

2013-05-19 Thread Gannon Dick
Dave,

IMHO, the W3C Cookbook methods do not go far enough to define the short-term 
strategy game of which Americans are so fond.  The Federal Government must plan 
Social Policy from ante Meridian (AM) to post Meridian (PM).  Playing 
statistical games with higher frequencies or modified time spans is fun, but it 
is not Science (a Free Energy Calculation).

http://www.rustprivacy.org/2013/egov/roadmap/NoMoneyInGovernment.pdf

Sorry to say, for reasons given, that StratML seems the better choice for 
Strategic Policy Representation (rather than SKOS and RDF).

--Gannon




 From: David Wood da...@3roundstones.com
To: public-lod@w3.org community public-lod@w3.org 
Sent: Saturday, May 18, 2013 8:59 AM
Subject: Re: Request for Help: US Government Linked Data
 

Hi all,

I take it back: Don't just comment.

We need to introduce pull requests into the Project Open Data documents that 
add Linked Data terms, examples and guidelines to the existing material.

There are a few scattered RDFa references in relation to schema.org, but most 
of the Linked Data material has been removed from the documents.  We need to 
get this back in, or existing Linked Data efforts within the US Government might 
very well be hurt.

Please help.  Thanks.

Regards,
Dave
--
http://about.me/david_wood



On May 18, 2013, at 09:16, David Wood da...@3roundstones.com wrote:

 Hi all,
 
 Parts of the US Government have been discussing the role of Linked Data in 
 government agencies and whether Linked Data is what the Obama Administration 
 meant when they mandated "machine readable data".  Unsurprisingly, some 
 people like to do things the old ways, with a three-tier architecture and 
 without fostering reuse of the data.
 
 Please respond to the GitHub thread if you would like to support Linked Data:
  https://github.com/project-open-data/project-open-data.github.io/pull/21
 
 Regards,
 Dave
 --
 http://about.me/david_wood
 
 
 

Re: Given a university's name, retrieve URL for university's home page.

2013-05-14 Thread Gannon Dick
Affiliation (for the text name) and the domain of the lead Author's email should 
give you a little uncertainty with which to resolve DBpedia.  Their rules 
are very fussy and not as much uncertainty as you would like, but it is a 
start.  The REGEX to chop up email addresses is here: 
http://www.ietf.org/rfc/rfc2396.txt (page 28).
--Gannon
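RFC 2396 (page 28) gives a generic URI-splitting regex; for the "domain of the lead Author's email" part of the suggestion above, only a small piece of it is needed. A minimal sketch (the function name and simplified pattern are mine, not from the RFC or the thread):

```python
import re

# Simplified, assumption-laden pattern: everything after the '@' is the
# domain.  RFC 2396's full regex splits whole URIs; this handles just the
# mailbox case discussed in the thread.
_DOMAIN = re.compile(r"^[^@\s]+@([^@\s]+\.[^@\s]+)$")

def email_domain(address):
    """Return the lowercased domain of an email address, or None if malformed."""
    m = _DOMAIN.match(address.strip())
    return m.group(1).lower() if m else None
```

The domain can then be compared against a candidate university's website host when resolving against DBpedia.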





 From: Sam Kuper sam.ku...@uclmail.net
To: Gannon Dick gannon_d...@yahoo.com 
Cc: public-lod public-lod@w3.org 
Sent: Monday, May 13, 2013 11:25 PM
Subject: Re: Given a university's name, retrieve URL for university's home page.
 

  From: Sam Kuper sam.ku...@uclmail.net
 To: public-lod public-lod@w3.org
 Sent: Monday, May 13, 2013 1:39 PM
 Subject: Given a university's name, retrieve URL for university's home
 page.

 I wish to solve the following problem: given a string that represents
 one of perhaps several common orthographic representations of a
 university's name (e.g. "Cambridge University" might be given, instead
 of "University of Cambridge"), retrieve the URL of that university's
 home page on the WWW.

 My first attempt at a solution is a two-step process. It is to query
 the Wikipedia API in order to obtain, with any luck, the title for the
 university's article in Wikipedia
 [and then the]
 second step is to use that title to submit a SPARQL query to
 DBpedia in the hope of obtaining the university's website's URL[...]

 On 13/05/2013, Gannon Dick gannon_d...@yahoo.com wrote:
 The problem is already solved in fine detail, but the parameter names may be
 a little difficult to relate to LOD usage.

 http://www.ncbi.nlm.nih.gov/books/NBK25497/

 Good luck :-)

Hi Gannon, and thanks for the suggestion.

I've now had time to skim through the lists of searchable fields
available for almost all of the Entrez databases, using the list of
databases here:
http://eutils.ncbi.nlm.nih.gov/entrez/eutils/einfo.fcgi and the
advanced search page for each one, which - where available - lists
that database's searchable fields.

Unfortunately, I'm still rather unsure how the NCBI represents a
detailed solution to the problem I outlined. Perhaps I'm just being a
bit slow, but are you proposing I use a search on the Affiliation
field in Entrez to identify the university, and then cross-reference
this to a URL field in Entrez for that university? If so, then I
haven't yet found a practical way of doing this. In Entrez, an
Affiliation seems (forgive the OOP speak, which might not be
entirely appropriate here, but serves my meaning) to be an attribute
of an author/team rather than an object with a URL attribute.

If you're certain that the Entrez dataset *can* be used to solve the
problem I outlined, then please could you provide a demo, or at least
a bit more detail concerning the *way* it would be used in such a
solution, e.g. which field(s) of which database(s) would you propose I
query?

Thanks again,

Sam

Re: Given a university's name, retrieve URL for university's home page.

2013-05-14 Thread Gannon Dick
PubMed was assembled with those three assumptions (+ peer review).  The problem 
I referred to as solved was the Search Stack logic.  The result set is 
different from queries of Page-Rank-based data stores (Web Search Engines). You 
might have better luck with DDG (https://duckduckgo.com/, BTW).  Neither knowledge 
universe is absolutely complete, but the relationship overlap (we hope) yields 
valuable insights.  As a practical matter, a data store (PubMed) with only 22 
million entries would be a very lame basis for a Search Engine.  For a different 
reason, your mileage may vary.
--Gannon





 From: Sam Kuper sam.ku...@uclmail.net
To: public-lod public-lod@w3.org 
Sent: Tuesday, May 14, 2013 9:32 AM
Subject: Re: Given a university's name, retrieve URL for university's home page.
 

On 14/05/2013, Gannon Dick gannon_d...@yahoo.com wrote:
 Affiliation (for the text name) and domain of the lead Author's email should
 give you a little uncertainty with which to  resolve DBpedia.  Their rules
 are very fussy and not as much uncertainty as you would like, but it is a
 start.

IIUC, this strategy's success rests on (at least) the assumptions that:

[1] Each of the universities I'll be searching for is listed as an
affiliation in at least one publication within NCBI.
[2] For all such publications, the lead author's email address is
provided among the metadata for the publication.
[3] For all such publications, the lead author's email address
incorporates the domain of the affliated institution for which I
searched.

I may, as I say, be being a bit slow-minded, but these each strike me
as rather tenuous assumptions; and the likelihood of them all being
true seems even smaller.

Assumption [3], for instance, was false for the first test I ran:
affiliation searched for was "London School of Economics" but although
both authors of the first open access publication listed shared this
affiliation, the contact email's domain was popcouncil.org rather
than lse.ac.uk. Assumption [2] was false for the third test I ran:
affiliation searched for was "Royal Holloway", but only the
publication's third author's email address was provided (which
happened to be for the cam.ac.uk domain).

I suppose I could try to narrow down the results to those with only a
single author, but that still wouldn't automatically fulfil
assumptions [1]-[3].

Perhaps I am still failing to understand the crucial insight that
enabled you to state with confidence that "The problem is already
solved in fine detail" via the NCBI; if so, please could you share it?

Many thanks,

Sam

Re: Is science on sale this week?

2013-05-13 Thread Gannon Dick
If the game is knowledge, then the coaches don't matter, only the players do 
- the size of the playing field and the number of players are constant [1,2]. 

Nonetheless, RDF and SKOS list processing implies a non-negative dissociation 
constant [3].  If you integrate to an area outside the playing field, where the 
coaches stand, you see the (very small) effect - on the order of the 
hydrogen-ion concentration of water (10^-7).  pH is something water never 
lacks, however, and in effect the players are exhibiting equal value to the 
coaching teams - either conference organizers or publishers, who are selling 
their own (quite imaginary) brilliance.  



--Gannon

[1] http://www.soccer-fans-info.com/soccer-field-layout.html
[2] 
http://www.sportsknowhow.com/football/field-dimensions/nfl-football-field-dimensions.html

[3] http://lists.w3.org/Archives/Public/public-egov-ig/2013May/0018.html



 From: Sarven Capadisli i...@csarven.ca
To: Linking Open Data public-lod@w3.org; SW-forum semantic-...@w3.org; 
beyond-the-...@googlegroups.com beyond-the-...@googlegroups.com 
Sent: Monday, May 13, 2013 10:25 AM
Subject: Is science on sale this week?
 

Hi!

If we subscribe to science, free and open access to knowledge, what's 
the purpose of the arrangement between conferences and publishers?

-Sarven
http://csarven.ca/#i

Re: Is science on sale this week?

2013-05-13 Thread Gannon Dick


We, as content creators, are holding all the cards. This is worth bearing in 
mind when one has to deal with demands of e.g. specialist book editors - the 
publisher needs the content creator to survive, but the inverse is not so true 
these days.

-- 
Leon R A Derczynski
Research Associate, NLP Group

Department of Computer Science
University of Sheffield
Regent Court, 211 Portobello
Sheffield S1 4DP, UK

+45 5157 4948
http://www.dcs.shef.ac.uk/~leon/

I agree, with the proviso that creation can not be dissociated from content, 
but economic survival is optional*.
So, +1, by which I mean (approx.)
<plus>
   <rdf:first>1*exp(ln(6 + (28/100)))</rdf:first>
   <rdf:rest>rdf:nil</rdf:rest>
</plus>

[*]
 (Kepler's 2nd Law: A line joining a planet and the Sun sweeps out 
equal areas during equal intervals of time.) is not the same thing as  
... sweeps out equal areas during *consecutive* intervals of time.
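For the record, the footnote's distinction can be made precise: the second law says the areal velocity is constant, so equal areas correspond to *any* equal time intervals, consecutive or not (standard textbook form, not specific to this thread):

```latex
\frac{dA}{dt} = \frac{L}{2m} = \text{const.}
\qquad\Longrightarrow\qquad
\Delta A = \frac{L}{2m}\,\Delta t
\quad \text{for any interval of duration } \Delta t,
```

where $L$ is the planet's angular momentum and $m$ its mass.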
--Gannon  

Re: Given a university's name, retrieve URL for university's home page.

2013-05-13 Thread Gannon Dick
Hi Sam,

The problem is already solved in fine detail, but the parameter names may be a 
little difficult to relate to LOD usage.

http://www.ncbi.nlm.nih.gov/books/NBK25497/

Good luck :-)



 From: Sam Kuper sam.ku...@uclmail.net
To: public-lod public-lod@w3.org 
Sent: Monday, May 13, 2013 1:39 PM
Subject: Given a university's name, retrieve URL for university's home page.
 

Dear all,

As I am something of an LOD noob, please feel free to point me in the
direction of other mailing lists or sources of advice if you feel they
are more appropriate than public-lod is for my request below.

I wish to solve the following problem: given a string that represents
one of perhaps several common orthographic representations of a
university's name (e.g. "Cambridge University" might be given, instead
of "University of Cambridge"), retrieve the URL of that university's
home page on the WWW.

My first attempt at a solution is a two-step process. It is to query
the Wikipedia API in order to obtain, with any luck, the title for the
university's article in Wikipedia, e.g.:
http://en.wikipedia.org/w/api.php?action=query&list=search&srprop=score&srredirects=true&srlimit=1&format=json&srsearch=Cambridge%20University
yields 
{"query-continue":{"search":{"sroffset":1}},"query":{"searchinfo":{"totalhits":86254},"search":[{"ns":0,"title":"University of Cambridge"}]}}

The second step is to use that title to submit a SPARQL query to
DBpedia in the hope of obtaining the university's website's URL, e.g.
http://dbpedia.org/sparql?default-graph-uri=http%3A%2F%2Fdbpedia.org&query=SELECT+%3Fwebsite%0D%0AWHERE++{+%3Chttp%3A%2F%2Fdbpedia.org%2Fresource%2FUniversity_of_Cambridge%3E+dbpprop%3Awebsite+%3Fwebsite+.+}&format=text%2Fhtml&timeout=0
yields an HTML table containing the desired result.
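Sketched as code, the two steps look roughly like this (a hedged outline only: the helper names are mine, and DBpedia's SPARQL endpoint is assumed to predefine the dbpprop: prefix, as the raw query above does):

```python
import json
from urllib.parse import urlencode

WIKIPEDIA_API = "http://en.wikipedia.org/w/api.php"

def wikipedia_search_url(name):
    # Step 1: build the search request for the best-matching article title.
    params = {"action": "query", "list": "search", "srprop": "score",
              "srredirects": "true", "srlimit": "1", "format": "json",
              "srsearch": name}
    return WIKIPEDIA_API + "?" + urlencode(params)

def title_from_response(payload):
    # Pull the first hit's title out of the JSON reply.
    hits = json.loads(payload).get("query", {}).get("search", [])
    return hits[0]["title"] if hits else None

def dbpedia_website_query(title):
    # Step 2: SPARQL asking for the dbpprop:website of the matching resource.
    resource = "http://dbpedia.org/resource/" + title.replace(" ", "_")
    return "SELECT ?website WHERE { <%s> dbpprop:website ?website . }" % resource
```

The second function is pure parsing, so the shortcomings below (missing titles, unreliable JSON) surface as a None return rather than an exception.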

This attempt suffers from several shortcomings:

(1) Step 1 does not reliably yield a result unless the string is
varied slightly and resubmitted, e.g.
http://en.wikipedia.org/w/api.php?action=query&list=search&srprop=score&srredirects=true&srlimit=1&format=json&srsearch=Pennsylvania%20State%20University%20-%20University%20Park
does not yield an article title, but
http://en.wikipedia.org/w/api.php?action=query&list=search&srprop=score&srredirects=true&srlimit=1&format=json&srsearch=Pennsylvania%20State%20University-University%20Park
does.

(2) Step 2 does not reliably yield a result, even if step 1 is
successful and Wikipedia has a record of the university's website,
e.g. 
http://dbpedia.org/sparql?default-graph-uri=http%3A%2F%2Fdbpedia.org&query=SELECT+%3Fwebsite%0D%0AWHERE++{+%3Chttp%3A%2F%2Fdbpedia.org%2Fresource%2FHarvard_University%3E+dbpprop%3Awebsite+%3Fwebsite+.+}&format=text%2Fhtml&timeout=0
yields no URL.

(3) In step 2, I am using HTML output from the SPARQL query only
because the JSON output seems to be unreliable. For example,
http://dbpedia.org/sparql?default-graph-uri=http%3A%2F%2Fdbpedia.org&query=SELECT+%3Fwebsite%0D%0AWHERE++{+%3Chttp%3A%2F%2Fdbpedia.org%2Fresource%2FUniversity_of_California,_Los_Angeles%3E+dbpprop%3Awebsite+%3Fwebsite+.+}&format=text%2Fhtml&timeout=0
yields the desired URL in the output but
http://dbpedia.org/sparql?default-graph-uri=http%3A%2F%2Fdbpedia.org&query=SELECT+%3Fwebsite%0D%0AWHERE++{+%3Chttp%3A%2F%2Fdbpedia.org%2Fresource%2FUniversity_of_California,_Los_Angeles%3E+dbpprop%3Awebsite+%3Fwebsite+.+}&format=json&timeout=0
does not.

I therefore suspect that there are better approaches, e.g.: better
ways for me to use the APIs of the resources I am querying (i.e.
Wikipedia and DBpedia), or better resources to query, or some
combination of the two. If you can suggest any such improvements (or,
as I mentioned above, more appropriate sources of advice), I would be
grateful.

Many thanks in advance,

Sam

Re: predatory journals and conferences article in NY Times

2013-04-23 Thread Gannon Dick
+1
The Nominations in the Semantic Asset Utilization Category:
Best catch: Milton
Best Intelligent Life in Journalism Discovery: NOAA New York Times

Best Village Idiot Impersonation (don't believe it for a minute): Phil's "I'm 
just a ..."
Best Monte Carlo Simulation Marksmanship: Leon ... Mission Creep




 From: Leon Derczynski l...@dcs.shef.ac.uk
To: Phillip Lord phillip.l...@newcastle.ac.uk 
Cc: ProjectParadigm-ICT-Program metadataport...@yahoo.com; 
public-lod@w3.org public-lod@w3.org; semantic-web semantic-...@w3.org 
Sent: Tuesday, April 23, 2013 6:05 AM
Subject: Re: predatory journals and conferences article in NY Times
 


IIRC, impact factor was only ever intended as an heuristic for librarians when 
making marginal decisions over which journals to subscribe to on behalf of 
their institution. Everything else is but mission creep.

All the best,


Leon



On 23 April 2013 12:38, Phillip Lord phillip.l...@newcastle.ac.uk wrote:


It's high time universities stopped judging academics by *where* they
have published rather than *what*.

We already have a form of rating for journals. It's called impact
factor. It doesn't work, because judging papers by their place of
publication is nonsensical.

Linked data and semantic web technologies provide opportunities, I
think, to handle the metadata associated with scientific publication, to
represent the knowledge in academic publications, and to do so without
the necessity for a centralised authority.

But, then I am a researcher with a mental narrow focus, so what do I
know?

Phil


ProjectParadigm-ICT-Program metadataport...@yahoo.com writes:
 This is a problem which manifests itself in every discipline and it preys on
 basic human needs for recognition. The current publishing world of academia
 itself is to blame partially.

 Because in each field of science scientists and researchers usually have a
 short list of peer-reviewed journals and conferences in their mental narrow
 focus, only librarians typically have a (often not much) better overview of
 available reputable journals and conferences in respective fields.

 It is high time for a global registry of scientific publishers and their
 respective journals and a form of rating and grading them.

 Linked data and semantic web technologies provide opportunities to create 
 such
 rating and grading systems, and maybe an item for a separate W3C Community
 Group?

  
 Milton Ponson
 GSM: +297 747 8280
 PO Box 1154, Oranjestad
 Aruba, Dutch Caribbean
 Project Paradigm: A structured approach to bringing the tools for sustainable
 development to all stakeholders worldwide by creating ICT tools for NGOs
 worldwide and: providing online access to web sites and repositories of data
 and information for sustainable development

 This email and any files transmitted with it are confidential and intended
 solely for the use of the individual or entity to whom they are addressed. If
 you have received this email in error please notify the system manager. This
 message contains confidential information and is intended only for the
 individual named. If you are not the named addressee you should not
 disseminate, distribute or copy this e-mail.

--
Phillip Lord,                           Phone: +44 (0) 191 222 7827
Lecturer in Bioinformatics,             Email: phillip.l...@newcastle.ac.uk
School of Computing Science,            
http://homepages.cs.ncl.ac.uk/phillip.lord
Room 914 Claremont Tower,               skype: russet_apples
Newcastle University,                   twitter: phillord
NE1 7RU




-- 
Leon R A Derczynski
Research Associate, NLP Group

Department of Computer Science
University of Sheffield
Regent Court, 211 Portobello
Sheffield S1 4DP, UK

+45 5157 4948
http://www.dcs.shef.ac.uk/~leon/ 

Re: Restpark - Minimal RESTful API for querying RDF triples

2013-04-17 Thread Gannon Dick

would you also say that a ferrari is a slow car if you see it cruising
in a speed limit zone?

 Don't laugh, the US Census uses "Free Fall" distance for Commuting, because a 
Ferrari stopped in traffic has the same rest mass as my ... um ... an actual 
slow car.


just started to read this paper The Complexity of Evaluating Path Expressions 
in SPARQL [1]
i'll be back in seven and a half million years :)

 Better idea: Find ~1837 friends and you'll be done in a year.  Warning: do 
 not give anyone 13 eV to buy Pizza, or you'll never see them again :-)

Re: Triple Checker

2013-04-03 Thread Gannon Dick
Thanks Christopher.  "Not entirely useless" are all the initial conditions one 
needs to solve a differential equation by the Monte Carlo Differentiation 
Method I invented two days ago.  In retrospect, this inverse of Monte Carlo 
Integration leads me to believe there is a resonant point (not entirely making 
stuff up - "not entirely useless").  Where classic engineering has points of 
failure, linked data has "points of non-reproducibility", a not entirely 
useless concept in relation to The Scientific Method.

I am going to work in some utterly opaque notation(*) so important people 
will understand.


--Gannon

* see also: Talking to Myself (Private Communication)



 From: Christopher Gutteridge c...@ecs.soton.ac.uk
To: public-lod@w3.org public-lod@w3.org 
Sent: Wednesday, April 3, 2013 8:33 AM
Subject: Triple Checker
 
Hi, thanks for everyone's sense of humour about uri4uri.net -- I had fun 
writing it. I'm now working on taking out the silly bits and leaving it up 
indefinitely as I think it's not entirely useless. Suggestions welcome.

Now it's past April 1st, I'd like to show off a few more useful tools I've 
built:

http://graphite.ecs.soton.ac.uk/checker/
This catches common mistakes people (me, for example) make when producing RDF: 
it checks for minor typos in common namespaces, and for terms & classes which 
have a namespace which resolves to a schema/ontology but where the term in 
question isn't there. It's saved me loads of silly mistakes. It's on github if 
people want to suggest improvements.
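The namespace-typo check can be sketched in a few lines (a hedged illustration only: the whitelist, the similarity threshold, and the function name are my guesses, not the checker's actual code):

```python
import difflib

# A few well-known namespaces; the real tool's list is surely much longer.
KNOWN_NAMESPACES = [
    "http://xmlns.com/foaf/0.1/",
    "http://purl.org/dc/terms/",
    "http://www.w3.org/2004/02/skos/core#",
    "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
]

def suspicious_namespaces(used):
    """Flag namespaces that are close to, but not exactly, a known one."""
    flagged = []
    for ns in used:
        if ns in KNOWN_NAMESPACES:
            continue  # exact match: fine
        close = difflib.get_close_matches(ns, KNOWN_NAMESPACES, n=1, cutoff=0.9)
        if close:
            flagged.append((ns, close[0]))  # likely a typo of close[0]
    return flagged
```

A high cutoff keeps genuinely different namespaces from being flagged while still catching the classic dropped trailing slash.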

http://graphite.ecs.soton.ac.uk/browser/
I wrote this RDF browser as a lightweight alternative to the existing ones. 
It's aimed at developers wanting to see inside an RDF file with a bit less 
headache than raw RDF (or RDFa, etc). Again, suggestions welcome and you can 
run a local copy if you want, once again all the code is on github.

http://graphite.ecs.soton.ac.uk/geo2kml/
Looks for lat/long data in RDF and makes a Google Earth (or maps) KML file. 
Handy for spotting obvious mistakes in your data.
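The geo2kml idea, roughly: once the lat/long pairs have been pulled out of the RDF, write them as KML Placemarks. This sketch only does the KML-writing half, and the element layout is minimal standard KML, not the tool's actual output:

```python
from xml.sax.saxutils import escape

def to_kml(points):
    """points: iterable of (label, lat, lon) already extracted from RDF
    (e.g. geo:lat / geo:long values)."""
    placemarks = "".join(
        "<Placemark><name>%s</name>"
        "<Point><coordinates>%f,%f</coordinates></Point></Placemark>"
        % (escape(name), lon, lat)   # note: KML wants longitude first
        for name, lat, lon in points
    )
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2">'
            "<Document>%s</Document></kml>" % placemarks)
```

Loading the result into Google Earth makes misplaced coordinates (swapped lat/long, sign errors) visually obvious, which is the point of the tool.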

http://graphite.ecs.soton.ac.uk/stuff2rdf/
This is a bit of a personal swiss army knife if I need to quickly munge RDF 
between the common formats.

Share and enjoy!

-- Christopher Gutteridge -- http://users.ecs.soton.ac.uk/cjg

University of Southampton Open Data Service: http://data.southampton.ac.uk/
You should read the ECS Web Team blog: http://blogs.ecs.soton.ac.uk/webteam/

Re: Why is it bad practice to consume Linked Data and publish opaque HTML pages?

2013-03-30 Thread Gannon Dick
+1 verified, maybe a lot more if people took this advice :-)





 From: Kingsley Idehen kide...@openlinksw.com
To: public-lod@w3.org public-lod@w3.org 
Sent: Saturday, March 30, 2013 9:35 AM
Subject: Why is it bad practice to consume Linked Data and publish opaque  HTML 
pages?
 
All,

"Citing sources is useful for many reasons: (a) it shows that it isn't a 
half-baked idea I just pulled out of thin air, (b) it provides a reference for 
anybody who wants to dig into the subject, and (c) it shows where the ideas 
originated and how they're likely to evolve." -- John F. Sowa [1].

An HTTP URI is an extremely powerful citation and attribution mechanism. 
Incorporate Linked Data principles and the power increases exponentially.

It is okay to consume Linked Data from wherever and publish HTML documents 
based on source data modulo discoverable original sources Linked Data URIs.

It isn't okay, however, to consume publicly available Linked Data from sources 
such as the LOD cloud and then republish the extracted content using HTML 
documents where the original source Linked Data URIs aren't discoverable by 
humans or machines.

The academic community has always had a very strong regard for citations and 
source references. Thus, there's no reason why the utility of Linked Data URIs 
shouldn't be used to reinforce this best-practice, at Web-scale.

Links:

1. http://ontolog.cim3.net/forum/ontolog-forum/2013-03/msg00084.html -- ontolog 
list post.

-- 
Regards,

Kingsley Idehen    
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen

Re: Visualizing Linked Data - did we miss anything?

2013-03-27 Thread Gannon Dick
Thanks Oscar.  Your presentation is much nicer than mine.

re: * Matrices, parallel co-ordinates
* Timeline and topology plots, map and landscape views

A problem for visualizations (and a huge concern of mine) is that the 
underlying physics of the visualization be a Socio-Technical System.  That 
means that physical constants remain constant under iteration by degrees (0-90) 
and radians (0-PI/2) in a parallel coordinate system.  Economists and 
for-profit businesses regularly get this wrong, because the apparent result is 
mighty attractive, new and improved, shows growth and so forth, with some 
unwitting validation from Contract Law*.  Governments should not work this way 
- the apparent variability is 100% due to hidden fees (truncation on a 
chaotic boundary).  For example (#2), sunset and sunrise calculations are 
centered on mid-summer.  Solar Noon wanders around Local Noon according to 
The Equation of Time, but never more than about half an hour from clock noon - 
the center of lunch hour - or Siesta.  Siesta caused many Work-Life Balance 
problems on the real boundary with Family Time around sunrise and sunset.  The 
logical mistake was that a watch was registering the true constant labelled 
"noon" and the sun was being somehow disorderly - a Watch Ethic disguised as a 
Work Ethic; it only worked on Mid-Summer's Day.

Thanks for the confusing explanation, Gannon.  


Socio-Technical Systems fix this, the math is fairly straight forward.
If someone at KIT or FI.UPM.ES would like to help me improve the teach-ability 
please contact me off line.

--Gannon

* Example #1: A One Year Contract = a 365.242196 Day Contract according to 
Astronomers (Kepler), a 365.25 Day Contract according to Bankers, and a 365 
Day Contract according to the Payroll Department.  It's a tri-label, not a 
three (tri)-nomial. 
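The three year-lengths in the footnote diverge measurably over time; a quick check of the drift (plain arithmetic, nothing from the mail itself):

```python
# Three readings of "one year", per the footnote above.
ASTRONOMER = 365.242196   # tropical year, days (Kepler's camp)
BANKER     = 365.25       # Julian-style banking convention
PAYROLL    = 365          # calendar days, ignoring leap years

def drift_days(years):
    """Days by which the banker's and payroll's contracts drift from the
    astronomer's over the given number of years."""
    return ((BANKER - ASTRONOMER) * years, (ASTRONOMER - PAYROLL) * years)
```

Over 400 years the banker's convention overshoots by about 3.1 days (roughly the error the Gregorian calendar reform trims), while the flat 365-day payroll year falls nearly 97 days behind.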





 From: Oscar Corcho ocor...@fi.upm.es
To: Maria Maleshkova maria.maleshk...@kit.edu; public-lod@w3.org 
Cc: euclid-proj...@sti2.org 
Sent: Wednesday, March 27, 2013 5:43 PM
Subject: Re: Visualizing Linked Data - did we miss anything?
 

Hi Maria,

If you are interested in covering map-based visualizations, you may want to add 
Map4RDF (http://oegdev.dia.fi.upm.es/map4rdf/)

Oscar

-- 

Oscar Corcho
Ontology Engineering Group (OEG)
Departamento de Inteligencia Artificial
Facultad de Informática
Campus de Montegancedo s/n
Boadilla del Monte-28660 Madrid, España
Tel. (+34) 91 336 66 05
Fax  (+34) 91 352 48 19
From:  Maria Maleshkova maria.maleshk...@kit.edu
Date:  Wednesday, 27 March 2013 17:49
To:  public-lod@w3.org
Subject:  Visualizing Linked Data - did we miss anything?
Resent from:  public-lod@w3.org
Resent date:  Wed, 27 Mar 2013 21:28:39 +


Dear all,


we are trying to compile a survey of topics and tools for visualizing Linked 
Data. This is part of the contributions of the European project EUCLID 
(http://www.euclid-project.eu/), which aims to provide an educational 
curriculum for Linked Data practitioners. So far we have created training 
materials on introducing the Linked Data principles and application scenarios 
[1], and on querying Linked Data [2]. Currently we are working on covering 
visualization. If you are a developer or a user of methods or tools, which are 
relevant and we have missed, please let us know (direct reply to the email or 
euclid-proj...@sti2.org and on Twitter https://twitter.com/euclid_project).

All training materials produced by EUCLID are freely available [3] 
(Attribution) and can be reused for trainings and educational activities. 

*  Linked Data Visualization
* Visualisation Techniques
* Visualizing the Linked Data Cloud
* Requirement for Visualisation Tools
* Visualizing Different Data Dimensions
* Existing Linked Data Visualisations
* Simple bar and pie charts, histograms, line and scatterplots
* Node-link tree and graph visualisations, in both 2D and 3D
* Matrices, parallel co-ordinates
* Timeline and topology plots, map and landscape views
* Space-filling visualisations such as tree maps, rose diagrams, 
icicle, bubble and sunburst plots
* Iconography, including star and glyph plots
* Text-based
* Linked Data Browsers
* sig.ma, sindice, OpenLink RDF Browser, Marbles, Disco - Disco 
Hyperdata Browser, Piggy Bank, part of SIMILE, Zitgist DataViewer, iLOD, URI 
Burner
* Browsers with Visualisation Options
* Tabulator, IsaViz, OpenLink Data Explorer, RDF Gravity, RelFinder, 
DBpedia Mobile, LESS http://less.aksw.org/
* Further: SIMILE Exhibit, Haystack, FoaF Explorer, Humboldt, LENA, 
Noadster, mSpace, Revyv, RKBExplorer, Semanlink
* Visualisation toolkits
* Information Workbench Linked Open Data, Graves
* SPARQL Visualisation


Thank you for your feedback!

Visit our website for further resources: 
