Re: [ontolog-forum] Deborah L. McGuinness, keynote speaker at OCAS!!!

2011-10-23 Thread Patrick Durusau

John,

All true, but how many of the people in Deborah's audience could name one 
contemporary of Aristotle?


Deborah could and a few others.

Old ideas are being repackaged and sold as new. Property graphs are all 
the rage today, yet it was more than 20 years ago that relational 
databases were being implemented as hypergraphs.


Let them have their fun.

We are in no real danger of a new level of human understanding, 
democracy and universal peace. Never have been, never will be.


Hope you are at the start of a great week!

Patrick



On 10/23/2011 04:48 PM, John F. Sowa wrote:

On 10/23/2011 3:39 PM, Alexander Garcia Castro wrote:


  *Deborah L. McGuinness, keynote speaker at OCAS!!!*

At 9AM on the 24th Deborah will be opening the workshop, u cant miss 
it!!


Deborah's talks are usually very good, and I would recommend this one.

But I have some quibbles about the title:

   Ontologies come of Age in the Semantic Web

Ontology is as old as Aristotle.  Cyc has been developing ontologies
since 1984, and they devoted over 1000 person years to writing them.
That would make ontologies considerably older than the Semantic Web.

Furthermore, if you download OWL ontologies, you might notice that
the overwhelming majority of them don't use any logical features
beyond what can be stated more clearly and concisely in Aristotle's
original notation.
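John's point can be checked mechanically: scan an ontology serialization for OWL constructs that go beyond simple subsumption. A rough sketch, where the construct list and the sample snippet are illustrative assumptions rather than an exhaustive test:

```python
# Rough sketch: tally OWL constructs that exceed simple class/subclass
# assertions in an ontology serialization.  Construct list is illustrative.
BEYOND_SUBSUMPTION = [
    "owl:disjointWith", "owl:complementOf", "owl:unionOf",
    "owl:intersectionOf", "owl:hasValue", "owl:cardinality",
]

def construct_counts(owl_text):
    """Count occurrences of each 'beyond subsumption' construct."""
    return {c: owl_text.count(c) for c in BEYOND_SUBSUMPTION}

# A pure subclass hierarchy uses none of them.
sample = ('<owl:Class rdf:about="#Dog">'
          '<rdfs:subClassOf rdf:resource="#Animal"/></owl:Class>')
print(construct_counts(sample))
```

An ontology whose counts are all zero is, in the sense John describes, expressible in a simple subsumption hierarchy.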

John






--
Patrick Durusau
patr...@durusau.net
Chair, V1 - US TAG to JTC 1/SC 34
Convener, JTC 1/SC 34/WG 3 (Topic Maps)
Editor, OpenDocument Format TC (OASIS), Project Editor ISO/IEC 26300
Co-Editor, ISO/IEC 13250-1, 13250-5 (Topic Maps)

Another Word For It (blog): http://tm.durusau.net
Homepage: http://www.durusau.net
Twitter: patrickDurusau




Re: Where to put the knowledge you add

2011-10-12 Thread Patrick Durusau
SELECT DISTINCT ?capital WHERE {
 ?s owl:sameAs <http://dbpedia.org/resource/Belgium> .
 ?s owl:sameAs ?country .
 ?country <http://dbpedia.org/ontology/capital> ?capital .
}

As a URI:

http://dbpedia.org/snorql/?query=SELECT+DISTINCT+%3Fcapital+WHERE+%7B%0D%0A+%3Fs+owl%3AsameAs+%3Chttp%3A%2F%2Fdbpedia.org%2Fresource%2FBelgium%3E+.%0D%0A+%3Fs+owl%3AsameAs+%3Fcountry+.%0D%0A+%3Fcountry+%3Chttp%3A%2F%2Fdbpedia.org%2Fontology%2Fcapital%3E+%3Fcapital+.%0D%0A%7D%0D%0A

Output:
capital
http://dbpedia.org/resource/City_of_Brussels
http://dbpedia.org/resource/Maseru
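For what it's worth, the long snorql URL above is just the query text percent-encoded into the `query` parameter; a minimal Python sketch of that encoding (endpoint path taken from the URL above):

```python
from urllib.parse import quote_plus

# Sketch: build the snorql URL by percent-encoding the SPARQL query text
# into the "query" parameter.  CRLF line endings match the URL above.
query = (
    "SELECT DISTINCT ?capital WHERE {\r\n"
    " ?s owl:sameAs <http://dbpedia.org/resource/Belgium> .\r\n"
    " ?s owl:sameAs ?country .\r\n"
    " ?country <http://dbpedia.org/ontology/capital> ?capital .\r\n"
    "}\r\n"
)
url = "http://dbpedia.org/snorql/?query=" + quote_plus(query)
print(url)
```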


--
Hugh Glaser,
 Web and Internet Science
 Electronics and Computer Science,
 University of Southampton,
 Southampton SO17 1BJ
Work: +44 23 8059 3670, Fax: +44 23 8059 3045
Mobile: +44 75 9533 4155, Home: +44 23 8061 5652
http://www.ecs.soton.ac.uk/~hg/









Re: Where to put the knowledge you add

2011-10-12 Thread Patrick Durusau

Glenn,

On 10/12/2011 09:24 AM, glenn mcdonald wrote:


Who else would be able to make assertions about your notion of
Brussels vis-a-vis some other notion of Brussels with any more
authority than your own?


The person doing the integration of your dataset with some other 
dataset for some purpose of /theirs/. The kind of correspondence 
needed is a function of the purpose of combining the data.


And what of a purpose of yours? That is, when you are combining datasets 
for purposes of your own?


Ignoring owl:sameAs statements isn't an option?


Not if you have a mixture of owl:sameAs statements you /have/ to use 
and ones you /can't/. Whereas if you have x:correspondsTo, 
y:correspondsTo and z:correspondsTo, it /is/ easy to say that 
y:correspondsTo (but not the other two) is the same as owl:sameAs.

Sorry, who is using the x:correspondsTo? You make it sound like 
correspondsTo will be used so as to allow/enable the choice you want to 
make with regard to owl:sameAs.


Note I am not disagreeing that you will need to make the distinction; 
what I am missing is how correspondsTo (if not used with sufficient 
granularity) will help you get there. You can't choose to ignore 
owl:sameAs on the basis of the components that make up the relationship?
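The split the thread is circling can be made concrete: treat each correspondsTo predicate as a distinct assertion source and promote only the trusted one to owl:sameAs. A toy sketch, with all namespaces and resource names hypothetical:

```python
# Toy sketch: triples as (subject, predicate, object) strings; the three
# correspondsTo predicates and the ex: resource names are hypothetical.
triples = [
    ("ex:BrusselsA", "x:correspondsTo", "ex:BrusselsB"),
    ("ex:BrusselsA", "y:correspondsTo", "ex:BrusselsC"),
    ("ex:BrusselsA", "z:correspondsTo", "ex:BrusselsD"),
]

# Only y:correspondsTo is declared equivalent to owl:sameAs; the other
# two record weaker, purpose-specific correspondences and are left alone.
TRUSTED = {"y:correspondsTo"}

def promote(triples, trusted):
    return [(s, "owl:sameAs" if p in trusted else p, o)
            for s, p, o in triples]

print(promote(triples, TRUSTED))
```

The granularity question then becomes: which predicates land in the trusted set, and on what basis.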


Hope you are having a great day!

Patrick




Re: [foaf-protocols] How to make an idea popular

2011-09-19 Thread Patrick Durusau

Adam,

On 9/19/2011 9:29 AM, Adam Saltiel wrote:

I didn't follow the links yet. But I'm sure Kingsley means popular such as to 
gain traction and widespread use. This does seem inevitable. It is just that 
it has been a bit slow.

Why inevitable?

People make their webpages available because the benefit of being heard by 
a wider audience is worth the cost of admission.


The cost/benefit picture for creating RDF for the consumption of others 
isn't as clear.


The HTML involved only minimal effort in order to participate.

Perhaps a useful question to consider would be a comparison of the effort 
behind the average webpage versus Linked Data, RDF, or RDFa.


Such a study may already exist and if so, I would appreciate a reference 
to it.


Hope you are at the start of a great week!

Patrick



Am I right that algorithm-based social networks intervened in what might have 
been a more straightforward uptake?
I think we need to be clearer about the differences between machine curation on 
the basis of algorithms run on huge data sets and machine curation on the basis 
of type categories.
We need to know both the means and the intended ends of both approaches.
Br
Br

Adam

Sent from my iPhone

On 19 Sep 2011, at 02:49, Patrick Durusau patr...@durusau.net wrote:


Kingsley,

An idea being popular doesn't mean that it is feasible or even desirable.

Fascism for example. Quite popular a number of times in history.

Hope you are at the start of a great week!

Patrick



On 09/18/2011 03:19 PM, Kingsley Idehen wrote:

On 9/18/11 8:35 AM, Melvin Carvalho wrote:

http://www.ted.com/talks/simon_sinek_how_great_leaders_inspire_action.html

Enjoy! :)
___
foaf-protocols mailing list
foaf-protoc...@lists.foaf-project.org
http://lists.foaf-project.org/mailman/listinfo/foaf-protocols


Amen!

cc. some other mailing lists where members continue to be challenged about 
uptake of at least one of the following:

1. Linked Data
2. Semantic Web Project deliverables and their adoption beyond niches.












Re: [foaf-protocols] How to make an idea popular

2011-09-18 Thread Patrick Durusau

Kingsley,

An idea being popular doesn't mean that it is feasible or even desirable.

Fascism for example. Quite popular a number of times in history.

Hope you are at the start of a great week!

Patrick



On 09/18/2011 03:19 PM, Kingsley Idehen wrote:

On 9/18/11 8:35 AM, Melvin Carvalho wrote:
http://www.ted.com/talks/simon_sinek_how_great_leaders_inspire_action.html 



Enjoy! :)
___
foaf-protocols mailing list
foaf-protoc...@lists.foaf-project.org
http://lists.foaf-project.org/mailman/listinfo/foaf-protocols


Amen!

cc. some other mailing lists where members continue to be challenged 
about uptake of at least one of the following:


1. Linked Data
2. Semantic Web Project deliverables and their adoption beyond niches.









Re: Question: Authoritative URIs for Geo locations? Multi-lingual labels?

2011-09-09 Thread Patrick Durusau

Kingsley,

On 09/09/2011 10:20 AM, Kingsley Idehen wrote:

On 9/9/11 8:58 AM, Leigh Dodds wrote:

Hi,

As well as the others already mentioned there's also Yahoo Geoplanet:

http://beta.kasabi.com/dataset/yahoo-geoplanet

This has multi-lingual labels and is cross-linked to the Ordnance
Survey data, Dbpedia, but that could be improved.

As for a list, there are currently 34 geography-related datasets
listed in Kasabi here:

http://beta.kasabi.com/browse/datasets/results/og_category%3A147


Leigh,

Can anyone access these datasets or must they obtain a kasabi account 
en route to authenticated access?



Been a while. ;-)

From the Kasabi FAQ:


Why do you require API Keys?

An important part of Kasabi is letting data providers explore the 
potential (commercial and utility) of their data. API Keys let us 
track the actual usage of each dataset and API, giving us the ability 
to provide stats to the data providers and curators. With these stats, 
they are in a better position to understand how their data is being 
used, and to what extent it's being picked up.




also relevant:


Can I download a dataset?

No. Kasabi is a hosted service, and downloading data isn't a feature 
we're planning to support. The reasoning behind this is partly for 
data providers to be able to see how their data is being used, and 
partly because we see Kasabi's role in curation as being a valuable 
aspect of the marketplace. We can't keep download versions up to date, 
for example. Data providers may make datasets available for download 
on their own terms, but not via Kasabi.




Hope you are looking forward to a great weekend!

Patrick



Kingsley

Cheers,

L.

On 8 September 2011 15:38, M. Scott Marshall mscottmarsh...@gmail.com wrote:

It seems that dbpedia is a de facto source of URIs for geographical
place names. I would expect to find a more specialized source. I think
that I saw one mentioned here in the last few months. Are there
alternatives that are possible more fine-grained or designed
specifically for geo data? With multi-lingual labels? Perhaps somebody
has kept track of the options on a website?

-Scott

--
M. Scott Marshall
http://staff.science.uva.nl/~marshall

On Thu, Sep 8, 2011 at 3:07 PM, Sarven Capadisli i...@csarven.ca wrote:

On Thu, 2011-09-08 at 14:01 +0100, Sarven Capadisli wrote:

On Thu, 2011-09-08 at 14:07 +0200, Karl Dubost wrote:

# Using RDFa (not implemented in browsers)


<ul xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
    id="places-rdfa">
  <li><span about="http://www.dbpedia.org/resource/Montreal"
            geo:lat_long="45.5,-73.67">Montréal</span>, Canada</li>
  <li><span about="http://www.dbpedia.org/resource/Paris"
            geo:lat_long="48.856578,2.351828">Paris</span>, France</li>
</ul>

* Issue: Latitude and Longitude not separated
   (have to parse them with regex in JS)
* Issue: xmlns with <!doctype html>
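Splitting the combined geo:lat_long value is a one-liner in most languages; a Python sketch of the regex parse mentioned above (function name is mine):

```python
import re

# Minimal sketch: split a combined "lat,long" literal into two floats.
def split_lat_long(value):
    lat, long_ = re.match(
        r"\s*(-?\d+(?:\.\d+)?),\s*(-?\d+(?:\.\d+)?)\s*$", value).groups()
    return float(lat), float(long_)

print(split_lat_long("45.5,-73.67"))  # (45.5, -73.67)
```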


# Question

On RDFa vocabulary, I would really like a solution with geo:lat 
and geo:long, Ideas?
Am I overlooking something obvious here? There are lat and long 
properties in the wgs84 vocab. So,

<span about="http://dbpedia.org/resource/Montreal">
  <span property="geo:lat"
        content="45.5"
        datatype="xsd:float"/>
  <span property="geo:long"
        content="-73.67"
        datatype="xsd:float"/>
  Montreal
</span>

Tabbed for readability. You might need to get rid of whitespace.

-Sarven

Better yet:

<li about="http://dbpedia.org/resource/Montreal">
  <span property="geo:lat"
...


-Sarven














Re: CAS, DUNS and LOD (was Re: Cost/Benefit Anyone? Re: Vote for my Semantic Web presentation at SXSW)

2011-08-23 Thread Patrick Durusau

David,

On 8/22/2011 9:55 PM, David Booth wrote:

On Mon, 2011-08-22 at 20:27 -0400, Patrick Durusau wrote:
[ . . . ]

The use of CAS identifiers supports searching across vast domains of
*existing* literature. Not all, but most of it for the last 60 or so
years.

That is non-trivial and should not be lightly discarded.

BTW, your objection is that non-licensed systems cannot use CAS
identifiers? Are these commercial systems that are charging their
customers? Why would you think such systems should be able to take
information created by others?


Using the information associated with an identifier is one thing; using
the identifier itself is another.  I'm sure the CAS numbers have added
non-trivial value that should not be ignored.  But their business model
needs to change.  It is ludicrous in this web era to prohibit the use of
the identifiers themselves.

If there is one principle we have learned from the web, it is enormous
value and importance of freely usable universal identifiers.  URIs rule!
http://urisrule.org/

:)
Well, I won't take the bait on URIs, ;-), but will note that re-use of 
identifiers of a sort was addressed quite a few years ago.


See: /*Feist Publications, Inc., v. Rural Telephone Service Co.*/, 499 
U.S. 340 (1991) or follow this link:


http://en.wikipedia.org/wiki/Feist_v._Rural

The circumstances with CAS numbers are slightly different because to get 
access to the full set of CAS numbers I suspect you have to sign a 
licensing agreement on re-use, which makes it a matter of *contract* law 
and not copyright.


Perhaps they should increase the limits beyond 10,000 identifiers but 
the only people who want the full monty, as it were, are potential 
commercial competitors.


The people who publish the periodical Brain for example at $10,000 a 
year. Why should I want the complete set of identifiers to be freely 
available to help them?


Personally I think given the head start that the CAS maintainers have on 
the literature, etc., that different models for use of the identifiers 
might suit their purposes just as well. Universal identifiers change 
over time and my concern is with the least semantic friction and not as 
much with how we get there.


Hope you are having a great day!

Patrick









Re: CAS, DUNS and LOD (was Re: Cost/Benefit Anyone? Re: Vote for my Semantic Web presentation at SXSW)

2011-08-23 Thread Patrick Durusau

John

On 8/23/2011 9:05 AM, John Erickson wrote:

This is an important discussion that (I believe) foreshadows how
canonical identifiers are managed moving forward.

Both CAS and DUNS numbers are a good example. Consider the challenge
of linking EPA data; it's easy to create a list of toxic chemicals
that are common across many EPA datasets. Based on those chemical
names, its possible to further find (in most cases) references in
DBPedia and other sources, such as PubChem:

* ACETALDEHYDE
* http://dbpedia.org/page/Acetaldehyde
* http://pubchem.ncbi.nlm.nih.gov/summary/summary.cgi?cid=177
* etc...

Now, add to this a sensible agency-rooted URI design and a
DBPedia-like infrastructure and one has a very powerful hub that
strengthens the Linked Data ecosystem. It would arguably be stronger
if CAS identifiers were also (somehow) included, but even the bits of
linking shown above change the value proposition of traditional
proprietary naming schemes...
Quite so and I did not mean to imply otherwise. Yes, gathering 
government agency URI identifiers for toxic chemicals is a value-add 
proposition.


I am curious whether you find that different offices within agencies use 
the same URIs. Or did they have other identifiers in their records prior 
to the URIs?


That is, will the URIs map to the identifiers used in EPA datasets, for 
example?


Despite its obvious value, I don't agree that the project change[s] the 
value proposition of traditional proprietary naming schemes...


Mostly because it does not address the *prior* use of other identifiers 
in the published literature. However convenient it may be to pretend 
that we are starting off fresh, in fact we are not, in any information 
system.


The fact remains that even if we switched (miraculously) today to all 
new URI identifiers, we will be accessing literature using prior 
identifiers for a very long time. I suspect hundreds of years.


BTW, who bridges between the new URI schemes and the CAS identifiers? 
For searching traditional literature?



John
PS: At TWC we are about to go live with a registry called Instance
Hub that will demonstrate the association of agency-based URI schemes
--- think EPA, HHS, DOE, USDA, etc --- with instance data over which
the agency has some authority or interest...More very soon!

Looking forward to it!

Hope you are having a great day!

Patrick




On Tue, Aug 23, 2011 at 8:31 AM, Patrick Durusaupatr...@durusau.net  wrote:

David,

On 8/22/2011 9:55 PM, David Booth wrote:

On Mon, 2011-08-22 at 20:27 -0400, Patrick Durusau wrote:
[ . . . ]

The use of CAS identifiers supports searching across vast domains of
*existing* literature. Not all, but most of it for the last 60 or so
years.

That is non-trivial and should not be lightly discarded.

BTW, your objection is that non-licensed systems cannot use CAS
identifiers? Are these commercial systems that are charging their
customers? Why would you think such systems should be able to take
information created by others?

Using the information associated with an identifier is one thing; using
the identifier itself is another.  I'm sure the CAS numbers have added
non-trivial value that should not be ignored.  But their business model
needs to change.  It is ludicrous in this web era to prohibit the use of
the identifiers themselves.

If there is one principle we have learned from the web, it is enormous
value and importance of freely usable universal identifiers.  URIs rule!
http://urisrule.org/

:)

Well, I won't take the bait on URIs, ;-), but will note that re-use of
identifiers of a sort was addressed quite a few years ago.

See: Feist Publications, Inc., v. Rural Telephone Service Co., 499 U.S. 340
(1991) or follow this link:

http://en.wikipedia.org/wiki/Feist_v._Rural

The circumstances with CAS numbers are slightly different because to get
access to the full set of CAS numbers I suspect you have to sign a licensing
agreement on re-use, which makes it a matter of *contract* law and not
copyright.

Perhaps they should increase the limits beyond 10,000 identifiers but the
only people who want the whole monty as it were are potential commercial
competitors.

The people who publish the periodical Brain for example at $10,000 a year.
Why should I want the complete set of identifiers to be freely available to
help them?

Personally I think given the head start that the CAS maintainers have on the
literature, etc., that different models for use of the identifiers might
suit their purposes just as well. Universal identifiers change over time and
my concern is with the least semantic friction and not as much with how we
get there.

Hope you are having a great day!

Patrick





Re: CAS, DUNS and LOD (was Re: Cost/Benefit Anyone? Re: Vote for my Semantic Web presentation at SXSW)

2011-08-22 Thread Patrick Durusau

David,

On 8/22/2011 7:39 PM, David Wood wrote:

Hi all,

On Aug 19, 2011, at 06:37, Patrick Durusau wrote:
Case in point, CAS, http://www.cas.org/. Coming up on 62 million 
organic and inorganic substances given unique identifiers. What is 
the incentive for any of their users/customers to switch to Linked Data?


Well, for one thing, CAS (like DUNS) identifiers are proprietary. 
 They can't be reused for the purposes of identification in 
non-licensed systems.  That causes no end of trouble for researchers, 
government agencies and corporations who have bought into those 
proprietary identification schemes only to find out that they can't 
reuse the identifiers in new contexts.


Not quite correct. You can use up to 10,000 of the CAS identifiers 
before licensing restrictions kick in.


I think the EPA creating their own identifiers is the result of bad advice.

For the following reasons:

1) It simply dirties up the pond of identifiers for organic and 
inorganic substances with yet another identifier.


2) Users and other implementers will bear the added cost of supporting 
yet another set of identifiers.


3) The literature in the area will have yet another set of identifiers 
to either be discovered or mapped.


4) The expertise behind CAS numbers is well known and has a history of 
high quality work.


The use of CAS identifiers supports searching across vast domains of 
*existing* literature. Not all, but most of it for the last 60 or so years.


That is non-trivial and should not be lightly discarded.

BTW, your objection is that non-licensed systems cannot use CAS 
identifiers? Are these commercial systems that are charging their 
customers? Why would you think such systems should be able to take 
information created by others?


Hope you are having a great day!

Patrick



An example is the US Environmental Protection Agency, who uses CAS 
numbers.  They cannot reuse those identifiers when they publish open 
government data.  They are not thrilled about that.  The EPA is now 
publishing their own identifiers.  How long will CAS last as a 
standard?  How many ids has the Encyclopedia of Life developed?  Or 
Wikipedia?


DUNS numbers, another widely used proprietary identification scheme, 
are very similar.  Orgpedia [1] and similar approaches are and have 
been started just to break the deadlock of that scheme.


Face it:  People just hate being boxed in.  Sure, you can make a 
business model out of doing so, but don't expect anyone to love you 
for it.  The Web allows people to think about not boxing themselves 
in.  That is a direct threat to those older and less friendly business 
models, DUNS and CAS included.


Regards,
Dave

[1] http://dotank.nyls.edu/ORGPedia.html







Re: Cost/Benefit Anyone? Re: Vote for my Semantic Web presentation at SXSW

2011-08-19 Thread Patrick Durusau

Kingsley,

One more attempt.

The press release I pointed to was an example that would have to be 
particularized to a CIO or CTO in term of *their* expenses of 
integration, then showing *their* savings.


The difference in our positions, from my context, is that I am saying 
the benefit to enterprises has to be expressed in terms of *their* 
bottom line, over the next quarter, six months, year. I hear (your 
opinion likely differs) you saying there is a global benefit that 
enterprises should invest in with no specific ROI for their bottom line 
in any definite period.


Case in point, CAS, http://www.cas.org/. Coming up on 62 million organic 
and inorganic substances given unique identifiers. What is the incentive 
for any of their users/customers to switch to Linked Data?


As I said several post ago, your success depends upon people investing 
in a technology for your benefit. (In all fairness you argue they 
benefit as well, but they are the best judges of the best use of their 
time and resources.)


Hope you are looking forward to a great weekend!

Patrick

On 8/18/2011 10:09 PM, Kingsley Idehen wrote:

On 8/18/11 5:27 PM, Patrick Durusau wrote:

Kingsley,

Citing your own bookmark file hardly qualifies as market numbers. 


My own bookmark? I gave you a URL to a bookmark collection. The 
collection contains links for a variety of research documents.


People promoting technologies make up all sorts of numbers about what 
use of X will save. Reminds me of the music or software theft numbers. 


Er... and you posted a link to a press release. What's your point?


They have no relationship to any reality that I share.


But you posted an Informatica press release to make some kind of 
point. Or am I completely misreading and misunderstanding the purpose 
of that URL too?




It's been enjoyable as usual but without some common basis for 
discussion we aren't going to get any closer to a common understanding.


Correct :-)

Kingsley


Hope you are having a great week!

Patrick



On 8/18/2011 3:24 PM, Kingsley Idehen wrote:

On 8/18/11 2:50 PM, Patrick Durusau wrote:

Kingsley,

On 8/18/2011 1:52 PM, Kingsley Idehen wrote:

On 8/18/11 1:40 PM, Patrick Durusau wrote:

Kingsley,

From below:

This critical value only materializes via appropriate context 
lenses. For decision makers it is always via opportunity 
costs. If someone else is eating your lunch by disrupting your 
market, you simply have to respond. Thus, on this side of the 
fence its better to focus on eating lunch rather than warning 
about the possibility of doing so, or outlining how it could be 
done. Just do it! 


I appreciate the sentiment, Just do it! as my close friend Jack 
Park says it fairly often.


But Just do it! doesn't answer the question of cost/benefit.


I mean: just start eating the lunch i.e., make a solution that 
takes advantage of an opportunity en route to market disruption. 
Trouble with the Semantic Web is that people spend too much time 
arguing and postulating. Ironically, when TimBL worked on the 
early WWW, his mindset was: just do it! :-)



Still dodging the question I see. ;-)


Of course not.

You want market research numbers, see the related section at the end 
of this reply. I sorta assumed you would have found this 
serendipitously though? Ah! You don't quite believe in the utility 
of this Linked Data stuff etc..






It avoids it in favor of advocacy.


See my comments above. You are skewing my comments to match your 
desired outcome, methinks.



You reach that conclusion pretty frequently.


See my earlier comment.



I ask for hard numbers, you say that isn't your question and/or 
skewing your comments.


Yes. I didn't know this was about market research and numbers [1].





Example: Privacy controls and Facebook. How much would it cost to 
solve this problem?


I assume you know the costs of the above.
It won't cost north of a billion dollars to make a WebID-based 
solution. In short, such a thing has existed for a long time, 
depending on your context lenses.




I assume everyone here is familiar with: 
http://www.w3.org/wiki/WebID ?


So we need to take the number of users who have a WebID and 
subtract that from the number of FaceBook users.


Yes?


No!

Take the number of people that are members of a service that's 
ambivalent to the self-calibration of the vulnerabilities of its 
members, aka privacy.




The remaining number need a WebID or some substantial portion, yes?


Ultimately they need a WebID, absolutely! And do you know why? It 
will enable members to begin the inevitable journey towards 
self-calibration of their respective vulnerabilities.


I hope you understand that society is old and the likes of G+, FB 
are new and utterly immature. In society, one is innocent until 
proven guilty or not guilty. In the world of FB and G+ the 
fundamentals of society are currently being inverted. Anyone can 
ultimately say anything about you. Both parties are building cyber 
police states

Re: Cost/Benefit Anyone? Re: Vote for my Semantic Web presentation at SXSW

2011-08-19 Thread Patrick Durusau

Kingsley,

Correction: I have never accused you of being modest or of not being an 
accountant. ;-)


Nor have I said the costs you talk about in your accountant voice don't 
exist.


The problem is identifying the cost to a particular client, say of email 
spam, versus the cost of the solution for the same person.


For example, I picked a spam article at random that says a 100 person 
firm *could be losing* as much as $55,000 per year due to spam.


Think about that for a minute. That works out to $550 per person.

So, if your solution costs more than $550 per person, it isn't worth 
buying.
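A quick check of the arithmetic behind that figure:

```python
# Back-of-the-envelope check of the spam-cost figure cited above.
annual_loss = 55_000  # claimed annual loss for a 100-person firm, USD
staff = 100
per_person = annual_loss / staff
print(per_person)  # 550.0
```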


Besides, the $550 per person *isn't on the books.* Purchasing your 
solution is. As they say, spam is a hidden cost. Hidden costs are hard 
to quantify or get people to address.


Not to mention that your solution requires an investment before the 
software can exist for any benefit. That is an even harder sell.


Isn't investment to enable a return from another investment (software, 
later) something accountants can see?


Hope you are having a great day!

Patrick


PS: The random spam article: 
http://blogs.cisco.com/smallbusiness/the_big_cost_of_spam_viruses_for_small_business/



On 8/19/2011 9:57 AM, Kingsley Idehen wrote:

On 8/19/11 6:37 AM, Patrick Durusau wrote:

Kingsley,

One more attempt.

The press release I pointed to was an example that would have to be 
particularized to a CIO or CTO in term of *their* expenses of 
integration, then showing *their* savings.


Yes, and I sent you a link to a collection of similar documents from 
which you could find similar research depending on problem type. On 
the first page you should have seen a link to a research document 
about the cost of email spam, for instance.


CEO, CIOs, CTOs are all dealing with costs of:

1. Spam
2. Password Management
3. Security
4. Data Integration.

There isn't a shortage of market research material re. the above and 
their costs across a plethora of domains.




The difference in our positions, from my context, is that I am 
saying the benefit to enterprises has to be expressed in terms of 
*their* bottom line, over the next quarter, six months, year.
For what its worth I worked for many years as an accountant before I 
crossed over to the vendor realm during the early days of Open Systems 
-- when Unix was being introduced to enterprises. That's the reason 
why integration middleware and dbms technology has been my passion for 
20+ years. I fit a slightly different profile from the one you assume in 
your comments re. cost-benefit analysis.


I hear (your opinion likely differs) you saying there is a global 
benefit that enterprises should invest in with no specific ROI for 
their bottom line in any definite period.


See comment above. I live problems first, then architect technology to 
solve them. When I tell you about the costs of data integration to 
enterprises I am basically telling you that I've lived the problem for 
many years. My understanding is quite deep. Sorry, but this isn't an 
area where I can pretend to be modest :-)




Case in point, CAS, http://www.cas.org/. Coming up on 62 million 
organic and inorganic substances given unique identifiers. What is 
the incentive for any of their users/customers to switch to Linked Data?


I think the issue is more about: what would identifiers provide to 
this organization with regard to the obvious need to virtualize its 
critical data sources such that:


1. data sources are represented as fine grained data objects
2. every data object is endowed with an identifier
3. identifiers become superkeys that provide conduits to highly navigable 
data-object-based zeitgeists -- a single identifier should resolve to a 
graph pictorial representing all data associated with that specific 
identifier, plus any additional data that has been reconciled logically, 
e.g., by leveraging owl:sameAs and IFP (inverse functional property) logic.
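The owl:sameAs and IFP reconciliation described above can be sketched in a few lines of plain Python. This is a toy union-find over hypothetical triples, not any real reasoner API; all identifiers and data are illustrative:

```python
# Toy sketch of owl:sameAs + IFP (inverse functional property) identity
# reconciliation. Hypothetical identifiers; not drawn from any real dataset.
from collections import defaultdict

SAME_AS = "owl:sameAs"
IFPS = {"foaf:mbox"}  # assume foaf:mbox is declared inverse-functional

def reconcile(triples):
    """Return a mapping: identifier -> canonical identifier."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    # 1. owl:sameAs merges subject and object directly.
    for s, p, o in triples:
        if p == SAME_AS:
            union(s, o)

    # 2. IFP logic: subjects sharing a value for an inverse functional
    #    property denote the same individual.
    by_value = defaultdict(list)
    for s, p, o in triples:
        if p in IFPS:
            by_value[(p, o)].append(s)
    for subjects in by_value.values():
        for other in subjects[1:]:
            union(subjects[0], other)

    return {x: find(x) for x in parent}

triples = [
    ("ex:alice",  "owl:sameAs", "dbp:Alice"),
    ("ex:alice",  "foaf:mbox",  "mailto:alice@example.org"),
    ("fb:alice1", "foaf:mbox",  "mailto:alice@example.org"),
]
canon = reconcile(triples)
assert canon["dbp:Alice"] == canon["fb:alice1"]  # all three identifiers resolve together
```

In other words, a single identifier becomes a conduit to everything that has been logically merged with it, which is the "superkey" behavior described.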




As I said several posts ago, your success depends upon people 
investing in a technology for your benefit. (In all fairness you 
argue they benefit as well, but they are the best judges of the best 
use of their time and resources.)


Kingsley


Hope you are looking forward to a great weekend!

Patrick

On 8/18/2011 10:09 PM, Kingsley Idehen wrote:

On 8/18/11 5:27 PM, Patrick Durusau wrote:

Kingsley,

Citing your own bookmark file hardly qualifies as market numbers. 


My own bookmark? I gave you a URL to a bookmark collection. The 
collection contains links for a variety of research documents.


People promoting technologies make up all sorts of numbers about 
what use of X will save. Reminds me of the music or software theft 
numbers. 


Er. and you posted a link to a press release. What's your point?


They have no relationship to any reality that I share.


But you posted an Informatica press release to make some kind of 
point. Or am I completely misreading and misunderstanding the 
purpose of that URL too?




It's been enjoyable as usual but without some

Cost/Benefit Anyone? Re: Vote for my Semantic Web presentation at SXSW

2011-08-18 Thread Patrick Durusau

Kingsley,

Your characterization of problems is spot on:

On 8/18/2011 9:01 AM, Kingsley Idehen wrote:

snip

Linked Data addresses many real world problems. The trouble is that 
problems are subjective. If you haven't experienced a problem, it doesn't 
exist. If you don't understand a problem, it doesn't exist. If you 
don't know a problem exists, then again it doesn't exist in your context.




But you left out: The recognized problem must *cost more* than the 
cost of addressing it.


A favorable cost/benefit ratio has to be recognized by the people being 
called upon to make the investment in solutions.


That is, recognition of a favorable cost/benefit ratio by the W3C and 
company is insufficient.


Yes?

For the umpteenth time here are three real world problems addressed 
effectively by Linked Data courtesy of AWWW (Architecture of the World 
Wide Web):


1. Verifiable Identifiers -- as delivered via WebID (leveraging Trust 
Logic and FOAF)
2. Access Control Lists -- an application of WebID and Web Access 
Control Ontology
3. Heterogeneous Data Access and Integration -- basically taking us 
beyond the limits of ODBC, JDBC, etc.


Let's apply the items above to some contemporary solutions that 
illuminate the costs of not addressing the above:


1. G+ -- the real name debacle is WebID 101 re. pseudonyms, 
synonyms, and anonymity
2. Facebook -- all the privacy shortcomings boil down to not 
understanding the power of InterWeb scale verifiable identifiers and 
access control lists
3. Twitter -- inability to turn Tweets into structured annotations 
that are basically nano-memes
4. Email, Comment, Pingback SPAM -- a result of not being able to 
verify identifiers
5. Precision Find -- going beyond the imprecision of Search Engines 
whereby subject attribute and properties are used to contextually 
discover relevant things (explicitly or serendipitously).


The problem isn't really a shortage of solutions, far from it.

For the sake of argument only, conceding these are viable solutions, the 
question is:


Do they provide more benefit than they cost?

If that can't be answered favorably, in hard currency (or some other 
continuum of value that appeals to particular investors), no one is 
going to make the investment.


Economics 101.

That isn't specific to SemWeb but to any solution to a problem. The 
solution has to provide a favorable cost/benefit ratio or it won't be 
adopted. Or at least not widely.


Hope you are having a great day!

Patrick

--
Patrick Durusau
patr...@durusau.net
Chair, V1 - US TAG to JTC 1/SC 34
Convener, JTC 1/SC 34/WG 3 (Topic Maps)
Editor, OpenDocument Format TC (OASIS), Project Editor ISO/IEC 26300
Co-Editor, ISO/IEC 13250-1, 13250-5 (Topic Maps)

Another Word For It (blog): http://tm.durusau.net
Homepage: http://www.durusau.net
Twitter: patrickDurusau




Re: Cost/Benefit Anyone? Re: Vote for my Semantic Web presentation at SXSW

2011-08-18 Thread Patrick Durusau

Kingsley,

From below:

This critical value only materializes via appropriate context 
lenses. For decision makers it is always via opportunity costs.  If 
someone else is eating your lunch by disrupting your market, you simply 
have to respond. Thus, on this side of the fence it's better to focus 
on eating lunch rather than warning about the possibility of doing so, 
or outlining how it could be done. Just do it! 


I appreciate the sentiment, Just do it! as my close friend Jack Park 
says it fairly often.


But Just do it! doesn't answer the question of cost/benefit.

It avoids it in favor of advocacy.

Example: Privacy controls and Facebook. How much would it cost to solve 
this problem? Then, what increase in revenue will result from solving it?


Or if Facebook's lunch is going to be eaten, say by G+, then why doesn't 
G+ solve the problem?


Are privacy controls a non-problem?

Your context lenses.

True, you can market a product/service that no one has ever seen before. 
Like pet rocks.


And they just did it!

With one important difference.

Their *doing it* did not depend upon the gratuitous efforts of thousands 
if not millions of others.


Isn't that an important distinction?

Hope you are having a great day!

Patrick


On 8/18/2011 10:54 AM, Kingsley Idehen wrote:

On 8/18/11 10:25 AM, Patrick Durusau wrote:

Kingsley,

Your characterization of problems is spot on:

On 8/18/2011 9:01 AM, Kingsley Idehen wrote:

snip

Linked Data addresses many real world problems. The trouble is that 
problems are subjective. If you haven't experienced a problem, it 
doesn't exist. If you don't understand a problem, it doesn't exist. 
If you don't know a problem exists, then again it doesn't exist in 
your context.




But you left out: The recognized problem must *cost more* than the 
cost of addressing it.


Yes. Now in my case I assumed the above to be implicit when context is 
about a solution or solutions :-)


If a solution costs more than the problem, it is a problem^n matter. 
No good.




A favorable cost/benefit ratio has to be recognized by the people 
being called upon to make the investment in solutions.


Always! Investment evaluation 101 for any business oriented decision 
maker.




That is, recognition of a favorable cost/benefit ratio by the W3C and 
company is insufficient.


Yes?


Yes-ish. And here's why. Implementation cost is a tricky factor, one 
typically glossed over in marketing communications that more often 
than not blindside decision makers; especially those that are 
extremely technically challenged. Note, when I say technically 
challenged I am not referring to programming skills. I am referring 
to basic understanding of technology as it applies to a given domain 
e.g. the enterprise.


Back to the W3C and The Semantic Web Project. In this case, the big 
issue is that degree of unobtrusive delivery hasn't been a leading 
factor -- bar SPARQL where its deliberate SQL proximity is all about 
unobtrusive implementation and adoption. Ditto R2RML .


RDF is an example of a poorly orchestrated revolution at the syntax 
level that is implicitly obtrusive at adoption and implementation 
time. It is in this context I agree fully with you. There was a 
misconception that RDF would be adopted like HTML, just like that. As 
we can all see today, that never happened and will never happen via 
revolution.


What can happen, unobtrusively, is the use and appreciation of 
solutions that generate Linked Data (expressed using a variety of 
syntaxes and serialized in a variety of formats). That's why we've 
invested so much time in both Linked Data Middleware and DBMS 
technology for ingestion, indexing, querying, and serialization.


For the umpteenth time here are three real world problems addressed 
effectively by Linked Data courtesy of AWWW (Architecture of the 
World Wide Web):


1. Verifiable Identifiers -- as delivered via WebID (leveraging 
Trust Logic and FOAF)
2. Access Control Lists -- an application of WebID and Web Access 
Control Ontology
3. Heterogeneous Data Access and Integration -- basically taking us 
beyond the limits of ODBC, JDBC, etc.


Let's apply the items above to some contemporary solutions that 
illuminate the costs of not addressing the above:


1. G+ -- the real name debacle is WebID 101 re. pseudonyms, 
synonyms, and anonymity
2. Facebook -- all the privacy shortcomings boil down to not 
understanding the power of InterWeb scale verifiable identifiers and 
access control lists
3. Twitter -- inability to turn Tweets into structured annotations 
that are basically nano-memes
4. Email, Comment, Pingback SPAM -- a result of not being able to 
verify identifiers
5. Precision Find -- going beyond the imprecision of Search Engines 
whereby subject attribute and properties are used to contextually 
discover relevant things (explicitly or serendipitously).


The problem isn't really a shortage of solutions, far from it.

For the sake of argument only, conceding

Re: Cost/Benefit Anyone? Re: Vote for my Semantic Web presentation at SXSW

2011-08-18 Thread Patrick Durusau

Kingsley,

Here are some hard numbers on integration of data benefits:

Future Integration Needs: Emerging Complex Data - 
http://www.informatica.com/news_events/press_releases/Pages/08182011_aberdeen_b2b.aspx


*/Integration costs are rising/* -- As integration of external data 
rises, it continues to be a labor- and cost-intensive task, with 
organizations integrating external sources spending 25 percent of 
their total integration budget in this area. 


So I can ask a decision maker, what do you spend on integration now? 
Take 25% of that figure.


Compare to X cost for integration using my software Y.

Or better yet, selling the integrated data as a service.

Data that isn't in demand to be integrated, isn't.

Technique neutral, could be SemWeb, could be third-world coding shops, 
could be Watson.


Timely, useful, integrated results are all that count.
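The pitch above reduces to simple arithmetic. A back-of-the-envelope sketch, with entirely hypothetical figures (neither the budget nor the solution cost comes from the press release):

```python
# Back-of-the-envelope cost/benefit sketch for the pitch above.
# All dollar figures are hypothetical placeholders, not market data.

total_integration_budget = 2_000_000  # assumed annual integration spend, USD
external_share = 0.25                 # per Aberdeen: 25% goes to external sources
current_external_cost = total_integration_budget * external_share

proposed_solution_cost = 300_000      # assumed license + services for "software Y"
annual_saving = current_external_cost - proposed_solution_cost

assert current_external_cost == 500_000
assert annual_saving == 200_000       # positive saving => the pitch has a number behind it
```

If `annual_saving` comes out negative, the answer below ("no sale") follows immediately.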

Hope you are having a great day!

Patrick

On 8/18/2011 1:40 PM, Patrick Durusau wrote:

Kingsley,

From below:

This critical value only materializes via appropriate context 
lenses. For decision makers it is always via opportunity costs.  If 
someone else is eating your lunch by disrupting your market, you simply 
have to respond. Thus, on this side of the fence it's better to focus 
on eating lunch rather than warning about the possibility of doing 
so, or outlining how it could be done. Just do it! 


I appreciate the sentiment, Just do it! as my close friend Jack Park 
says it fairly often.


But Just do it! doesn't answer the question of cost/benefit.

It avoids it in favor of advocacy.

Example: Privacy controls and Facebook. How much would it cost to 
solve this problem? Then, what increase in revenue will result from 
solving it?


Or if Facebook's lunch is going to be eaten, say by G+, then why 
doesn't G+ solve the problem?


Are privacy controls a non-problem?

Your context lenses.

True, you can market a product/service that no one has ever seen 
before. Like pet rocks.


And they just did it!

With one important difference.

Their *doing it* did not depend upon the gratuitous efforts of 
thousands if not millions of others.


Isn't that an important distinction?

Hope you are having a great day!

Patrick


On 8/18/2011 10:54 AM, Kingsley Idehen wrote:

On 8/18/11 10:25 AM, Patrick Durusau wrote:

Kingsley,

Your characterization of problems is spot on:

On 8/18/2011 9:01 AM, Kingsley Idehen wrote:

snip

Linked Data addresses many real world problems. The trouble is that 
problems are subjective. If you haven't experienced a problem, it 
doesn't exist. If you don't understand a problem, it doesn't exist. 
If you don't know a problem exists, then again it doesn't exist in 
your context.




But you left out: The recognized problem must *cost more* than the 
cost of addressing it.


Yes. Now in my case I assumed the above to be implicit when context 
is about a solution or solutions :-)


If a solution costs more than the problem, it is a problem^n matter. 
No good.




A favorable cost/benefit ratio has to be recognized by the people 
being called upon to make the investment in solutions.


Always! Investment evaluation 101 for any business oriented decision 
maker.




That is, recognition of a favorable cost/benefit ratio by the W3C and 
company is insufficient.


Yes?


Yes-ish. And here's why. Implementation cost is a tricky factor, one 
typically glossed over in marketing communications that more often 
than not blind side decision makers; especially those that are 
extremely technically challenged. Note, when I say technically 
challenged I am not referring to programming skills. I am referring 
to basic understanding of technology as it applies to a given domain 
e.g. the enterprise.


Back to the W3C and The Semantic Web Project. In this case, the big 
issue is that degree of unobtrusive delivery hasn't been a leading 
factor -- bar SPARQL where its deliberate SQL proximity is all about 
unobtrusive implementation and adoption. Ditto R2RML .


RDF is an example of a poorly orchestrated revolution at the syntax 
level that is implicitly obtrusive at adoption and implementation 
time. It is in this context I agree fully with you. There was a 
misconception that RDF would be adopted like HTML, just like that. As 
we can all see today, that never happened and will never happen via 
revolution.


What can happen, unobtrusively, is the use and appreciation of 
solutions that generate Linked Data (expressed using a variety of 
syntaxes and serialized in a variety of formats). That's why we've 
invested so much time in both Linked Data Middleware and DBMS 
technology for ingestion, indexing, querying, and serialization.


For the umpteenth time here are three real world problems addressed 
effectively by Linked Data courtesy of AWWW (Architecture of the 
World Wide Web):


1. Verifiable Identifiers -- as delivered via WebID (leveraging 
Trust Logic and FOAF)
2. Access Control Lists -- an application of WebID and Web Access 
Control Ontology

Re: Cost/Benefit Anyone? Re: Vote for my Semantic Web presentation at SXSW

2011-08-18 Thread Patrick Durusau

Kingsley,

On 8/18/2011 1:52 PM, Kingsley Idehen wrote:

On 8/18/11 1:40 PM, Patrick Durusau wrote:

Kingsley,

From below:

This critical value only materializes via appropriate context 
lenses. For decision makers it is always via opportunity costs.  If 
someone else is eating your lunch by disrupting your market, you 
simply have to respond. Thus, on this side of the fence it's better 
to focus on eating lunch rather than warning about the possibility 
of doing so, or outlining how it could be done. Just do it! 


I appreciate the sentiment, Just do it! as my close friend Jack 
Park says it fairly often.


But Just do it! doesn't answer the question of cost/benefit.


I mean: just start eating the lunch i.e., make a solution that takes 
advantage of an opportunity en route to market disruption. Trouble 
with the Semantic Web is that people spend too much time arguing and 
postulating. Ironically, when TimBL worked on the early WWW, his 
mindset was: just do it! :-)



Still dodging the question I see. ;-)



It avoids it in favor of advocacy.


See my comments above. You are skewing my comments to match your 
desired outcome, methinks.



You reach that conclusion pretty frequently.

I ask for hard numbers, you say that isn't your question and/or skewing 
your comments.




Example: Privacy controls and Facebook. How much would it cost to 
solve this problem?


I assume you know the costs of the above.
It won't cost north of a billion dollars to make a WebID based 
solution. In short, such a thing has existed for a long time, 
depending on your context lenses .




I assume everyone here is familiar with: http://www.w3.org/wiki/WebID ?

So we need to take the number of users who have a WebID and subtract 
that from the number of FaceBook users.


Yes?

The remaining number need a WebID or some substantial portion, yes?

So who bears that cost? Each of those users? It costs each of them 
something to get a WebID. Yes?


What is their benefit from getting that WebID? Will it outweigh their 
cost in their eyes?



Then, what increase in revenue will result from solving it?


FB -- less vulnerability and bleed.

Startups or Smartups: massive opportunity to make sales by solving a 
palpable problem.




Or if Facebook's lunch is going to be eaten, say by G+, then why 
doesn't G+ solve the problem?


G+ is trying to do just that, but in the wrong Web dimension. That's 
why neither G+ nor FB have been able to solve the identity 
reconciliation riddle.



Maybe you share your observations with G and FB. ;-)

Seriously, I don't think they are as dumb as everyone seems to think.

It may well be they have had this very discussion and decided it isn't 
cost effective to address.




Are privacy controls a non-problem?

Your context lenses.

True, you can market a product/service that no one has ever seen 
before. Like pet rocks.


And they just did it!

With one important difference.

Their *doing it* did not depend upon the gratuitous efforts of 
thousands if not millions of others.


Don't quite get your point. I am talking about a solution that starts 
off with identity reconciliation, passes through access control lists, 
and ultimately makes virtues of heterogeneous data virtualization 
clearer re. data integration pain alleviation.


In the above we have a market place north of 100 Billion Dollars.



Yes, but your solution: ...starts off with identity reconciliation...

Sure, start with the critical problem already solved and you really are 
at a ...market place north of 100 Billion Dollars..., but that is all 
in your imagination.


Having a system of assigned and reconciled WebIDs isn't a zero-cost 
solution for users or businesses. It is going to cost someone to assign and 
reconcile those WebIDs. Yes?


Since it is your solution, may I ask who is going to pay that cost?



Isn't that an important distinction?


Yes, and one that has never been lost on me :-)



Interested to hear your answer since that distinction has never been 
lost on you.


Patrick



Kingsley


Hope you are having a great day!

Patrick


On 8/18/2011 10:54 AM, Kingsley Idehen wrote:

On 8/18/11 10:25 AM, Patrick Durusau wrote:

Kingsley,

Your characterization of problems is spot on:

On 8/18/2011 9:01 AM, Kingsley Idehen wrote:

snip

Linked Data addresses many real world problems. The trouble is 
that problems are subjective. If you haven't experienced a problem, 
it doesn't exist. If you don't understand a problem, it doesn't 
exist. If you don't know a problem exists, then again it doesn't 
exist in your context.




But you left out: The recognized problem must *cost more* than 
the cost of addressing it.


Yes. Now in my case I assumed the above to be implicit when context 
is about a solution or solutions :-)


If a solution costs more than the problem, it is a problem^n matter. 
No good.




A favorable cost/benefit ratio has to be recognized by the people 
being called upon to make the investment in solutions.


Always! Investment

Re: Cost/Benefit Anyone? Re: Vote for my Semantic Web presentation at SXSW

2011-08-18 Thread Patrick Durusau

Kingsley,

On 8/18/2011 2:25 PM, Kingsley Idehen wrote:

On 8/18/11 2:03 PM, Patrick Durusau wrote:

Kingsley,

Here are some hard numbers on integration of data benefits:

Future Integration Needs: Emerging Complex Data - 
http://www.informatica.com/news_events/press_releases/Pages/08182011_aberdeen_b2b.aspx


*/Integration costs are rising/* -- As integration of external data 
rises, it continues to be a labor- and cost-intensive task, with 
organizations integrating external sources spending 25 percent of 
their total integration budget in this area. 


So I can ask a decision maker, what do you spend on integration now? 
Take 25% of that figure.


Compare to X cost for integration using my software Y.

Or better yet, selling the integrated data as a service.

Data that isn't in demand to be integrated, isn't.

Technique neutral, could be SemWeb, could be third-world coding 
shops, could be Watson.


Timely, useful, integrated results are all that count.


Technique wouldn't be SemWeb. It would be Data Virtualization that 
leverages Semantic Web Project outputs such as:


1. Linked Data -- data homogenization (virtualization) mechanism
2. OWL  -- facilitator of reasoning against the virtualized substrate.

To the target customer the experience would go something like this:

1. Install Data Virtualization product
2. Identify heterogeneous data sources and their access methods -- 
these will typically be accessible via ODBC, JDBC (if RDBMS hosted), Web 
Services (SOAP-based or via RESTful patterns, what used to be called SOA), or 
URLs, especially if external data sources are in the mix

3. Bind to data sources
4. Virtualize
5. Show the new levels of agility that steps 1-4 afford, across all tools 
capable of consuming URLs.
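The five steps amount to putting a uniform, URI-addressable facade over heterogeneous back ends. A toy sketch of that idea, with hypothetical sources and a made-up `virtual://` scheme (no relation to the actual Virtuoso API):

```python
# Toy data-virtualization facade: two back ends with different shapes
# are exposed behind one uniform, URL-like lookup. Purely illustrative.

rdbms = {("customers", 42): {"NAME": "Acme", "CITY": "Ghent"}}  # ODBC/JDBC-style rows
service = {"orders/7": {"customer_id": 42, "total": 99.5}}      # REST-style resources

def resolve(uri):
    """Uniform access: virtual://source/key -> normalized dict."""
    _, _, source, key = uri.split("/", 3)
    if source == "rdbms":
        table, pk = key.split("/")
        row = rdbms[(table, int(pk))]
        return {k.lower(): v for k, v in row.items()}  # normalize column case
    if source == "service":
        return service[key]
    raise KeyError(uri)

# one navigation path across both back ends, via URIs alone
order = resolve("virtual://service/orders/7")
customer = resolve(f"virtual://rdbms/customers/{order['customer_id']}")
assert customer["name"] == "Acme"
```

The point of step 5 is that any tool which can consume a URI gets this cross-source navigation for free.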


What would you call such a product? At OpenLink Software we call it 
OpenLink Virtuoso :-)


I would call it *no sale* if OpenLink Virtuoso + services costs more 
than I am spending now.


Isn't that the pertinent question?

Patrick



Kingsley


Hope you are having a great day!

Patrick

On 8/18/2011 1:40 PM, Patrick Durusau wrote:

Kingsley,

From below:

This critical value only materializes via appropriate context 
lenses. For decision makers it is always via opportunity costs.  
If someone else is eating you lunch by disrupting your market you 
simply have to respond. Thus, on this side of the fence its better 
to focus on eating lunch rather than warning about the possibility 
of doing so, or outlining how it could be done. Just do it! 


I appreciate the sentiment, Just do it! as my close friend Jack 
Park says it fairly often.


But Just do it! doesn't answer the question of cost/benefit.

It avoids it in favor of advocacy.

Example: Privacy controls and Facebook. How much would it cost to 
solve this problem? Then, what increase in revenue will result from 
solving it?


Or if Facebook's lunch is going to be eaten, say by G+, then why 
doesn't G+ solve the problem?


Are privacy controls a non-problem?

Your context lenses.

True, you can market a product/service that no one has ever seen 
before. Like pet rocks.


And they just did it!

With one important difference.

Their *doing it* did not depend upon the gratuitous efforts of 
thousands if not millions of others.


Isn't that an important distinction?

Hope you are having a great day!

Patrick


On 8/18/2011 10:54 AM, Kingsley Idehen wrote:

On 8/18/11 10:25 AM, Patrick Durusau wrote:

Kingsley,

Your characterization of problems is spot on:

On 8/18/2011 9:01 AM, Kingsley Idehen wrote:

snip

Linked Data addresses many real world problems. The trouble is 
that problems are subjective. If you haven't experienced a problem, 
it doesn't exist. If you don't understand a problem, it doesn't 
exist. If you don't know a problem exists, then again it doesn't 
exist in your context.




But you left out: The recognized problem must *cost more* than 
the cost of addressing it.


Yes. Now in my case I assumed the above to be implicit when context 
is about a solution or solutions :-)


If a solution costs more than the problem, it is a problem^n 
matter. No good.




A favorable cost/benefit ratio has to be recognized by the people 
being called upon to make the investment in solutions.


Always! Investment evaluation 101 for any business oriented 
decision maker.




That is, recognition of a favorable cost/benefit ratio by the W3C 
and company is insufficient.


Yes?


Yes-ish. And here's why. Implementation cost is a tricky factor, 
one typically glossed over in marketing communications that more 
often than not blindside decision makers; especially those that 
are extremely technically challenged. Note, when I say technically 
challenged I am not referring to programming skills. I am 
referring to basic understanding of technology as it applies to a 
given domain e.g. the enterprise.


Back to the W3C and The Semantic Web Project. In this case, the 
big issue is that degree of unobtrusive delivery hasn't been a 
leading factor -- bar SPARQL where its

Re: Cost/Benefit Anyone? Re: Vote for my Semantic Web presentation at SXSW

2011-08-18 Thread Patrick Durusau

Kingsley,

Citing your own bookmark file hardly qualifies as market numbers. People 
promoting technologies make up all sorts of numbers about what use of X 
will save. Reminds me of the music or software theft numbers. They have 
no relationship to any reality that I share.


It's been enjoyable as usual but without some common basis for 
discussion we aren't going to get any closer to a common understanding.


Hope you are having a great week!

Patrick



On 8/18/2011 3:24 PM, Kingsley Idehen wrote:

On 8/18/11 2:50 PM, Patrick Durusau wrote:

Kingsley,

On 8/18/2011 1:52 PM, Kingsley Idehen wrote:

On 8/18/11 1:40 PM, Patrick Durusau wrote:

Kingsley,

From below:

This critical value only materializes via appropriate context 
lenses. For decision makers it is always via opportunity costs.  
If someone else is eating you lunch by disrupting your market you 
simply have to respond. Thus, on this side of the fence its better 
to focus on eating lunch rather than warning about the possibility 
of doing so, or outlining how it could be done. Just do it! 


I appreciate the sentiment, Just do it! as my close friend Jack 
Park says it fairly often.


But Just do it! doesn't answer the question of cost/benefit.


I mean: just start eating the lunch i.e., make a solution that takes 
advantage of an opportunity en route to market disruption. Trouble 
with the Semantic Web is that people spend too much time arguing and 
postulating. Ironically, when TimBL worked on the early WWW, his 
mindset was: just do it! :-)



Still dodging the question I see. ;-)


Of course not.

You want market research numbers, see the related section at the end 
of this reply. I sorta assumed you would have found this 
serendipitously though? Ah! You don't quite believe in the utility of 
this Linked Data stuff etc..






It avoids it in favor of advocacy.


See my comments above. You are skewing my comments to match your 
desired outcome, methinks.



You reach that conclusion pretty frequently.


See my earlier comment.



I ask for hard numbers, you say that isn't your question and/or 
skewing your comments.


Yes. I didn't know this was about market research and numbers [1].





Example: Privacy controls and Facebook. How much would it cost to 
solve this problem?


I assume you know the costs of the above.
It won't cost north of a billion dollars to make a WebID based 
solution. In short, such a thing has existed for a long time, 
depending on your context lenses .




I assume everyone here is familiar with: http://www.w3.org/wiki/WebID ?

So we need to take the number of users who have a WebID and subtract 
that from the number of FaceBook users.


Yes?


No!

Take the number of people that are members of a service that's 
ambivalent about the self-calibration of the vulnerabilities of its 
members, aka privacy.




The remaining number need a WebID or some substantial portion, yes?


Ultimately they need a WebID, absolutely! And do you know why? It will 
enable members to begin the inevitable journey towards self-calibration 
of their respective vulnerabilities.


I hope you understand that society is old and the likes of G+, FB are 
new and utterly immature. In society, one is innocent until proven 
guilty or not guilty. In the world of FB and G+ the fundamentals of 
society are currently being inverted. Anyone can ultimately say 
anything about you. Both parties are building cyber police states via 
their respective silos. Grr... don't get me going on this matter.


Every single netizen needs a verifiable identifier. That's the bottom 
line, and WebID (courtesy of Linked Data) and Trust Semantics nail 
the issue.




So who bears that cost? Each of those users? It costs each of them 
something to get a WebID. Yes?


Look, here is a real world example. Just google up on Wireshark re. 
Facebook and Google. Until the Wireshark episodes both peddled lame 
excuses for not using HTTPS. Today both use HTTPS. Do you want to know 
why? Simple answer: opportunity cost of not doing so became palpable.




What is their benefit from getting that WebID? Will it outweigh their 
cost in their eyes?


See comment above.

We've already witnessed Craigslist horrors. But all of this is child's 
play if identity isn't fixed on the InterWeb. If you think I need to 
give you market numbers for that too, then I think we are simply 
talking past each other (a common occurrence).





Then, what increase in revenue will result from solving it?


FB -- less vulnerability and bleed.

Startups or Smartups: massive opportunity to make sales by solving a 
palpable problem.




Or if Facebook's lunch is going to be eaten, say by G+, then why 
doesn't G+ solve the problem?


G+ is trying to do just that, but in the wrong Web dimension. That's 
why neither G+ nor FB have been able to solve the identity 
reconciliation riddle.



Maybe you share your observations with G and FB. ;-)


Hmm. wondering how you've concluded either way :-)



Seriously, I don't

Re: Dataset URIs and metadata.

2011-07-25 Thread Patrick Durusau

Michael,

On 7/22/2011 11:19 AM, Michael Hausenblas wrote:


Patrick,



So, perhaps one day it will be a standard, but not today.



Good catch! Did you join the Pedantic Web [1] group, yet? We need more 
people like you.



Just signed up yesterday.

I have the soul of an editor. ;-)

More than happy to inveigh on behalf of clean data but suspect being 
able to cope with dirty data is at least as important if not more so.


Hope you are at the start of a great week!

Patrick





Hope you are nearing a great weekend!


Yes, indeed, I plan to go to DERI FAWM now and allow my brain to be 
off-line till 15:00 UTC tomorrow, in case anyone cares ...



Cheers,
Michael

[1] http://pedantic-web.org/
--
Dr. Michael Hausenblas, Research Fellow
LiDRC - Linked Data Research Centre
DERI - Digital Enterprise Research Institute
NUIG - National University of Ireland, Galway
Ireland, Europe
Tel. +353 91 495730
http://linkeddata.deri.ie/
http://sw-app.org/about.html

On 22 Jul 2011, at 16:11, Patrick Durusau wrote:


Michael,

On 7/22/2011 10:42 AM, Michael Hausenblas wrote:




Probably VoID metadata/dataset URIs will be easier to discover once
the /.well-known/void trick (described in paragraph 7.2 of the W3C
VoID document) is widely adopted.


Agreed. But it's not a 'trick'. It's called a standard.


Is it?


Yes, I think that RFC5785 [1] can be considered a standard. Unless 
you want to suggest that RFCs are sorta not real standards :P


RFCs can be standards, but there is a path by which RFCs become 
standards.


As of today, the RFC 5785 header reads PROPOSED STANDARD.

So, perhaps one day it will be a standard, but not today.

Hope you are nearing a great weekend!

Patrick
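The /.well-known/void discovery mechanism under discussion can be sketched in a few lines. This is a minimal sketch, not a definitive implementation: per section 7.2 of the W3C VoID note, a client derives the well-known metadata URI from the authority of a dataset URI (the dbpedia.org example below is only illustrative).

```python
from urllib.parse import urlsplit, urlunsplit

def well_known_void_uri(dataset_uri: str) -> str:
    """Derive the RFC 5785 well-known VoID URI for a dataset's host.

    Section 7.2 of the W3C VoID note says clients may look for dataset
    metadata at /.well-known/void on the authority of the dataset URI.
    """
    parts = urlsplit(dataset_uri)
    # Keep scheme and authority, replace path; drop query and fragment.
    return urlunsplit((parts.scheme, parts.netloc, "/.well-known/void", "", ""))

print(well_known_void_uri("http://dbpedia.org/resource/Belgium"))
# → http://dbpedia.org/.well-known/void
```

A client would then dereference that URI and, on a 200 response, parse the result as a VoID description; adoption of the mechanism is exactly what the thread is debating.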



Cheers,
   Michael

[1] http://tools.ietf.org/html/rfc5785
--
Dr. Michael Hausenblas, Research Fellow
LiDRC - Linked Data Research Centre
DERI - Digital Enterprise Research Institute
NUIG - National University of Ireland, Galway
Ireland, Europe
Tel. +353 91 495730
http://linkeddata.deri.ie/
http://sw-app.org/about.html

On 22 Jul 2011, at 15:39, Dave Reynolds wrote:


On Fri, 2011-07-22 at 09:59 +0100, Michael Hausenblas wrote:

Frans,


[snip]


Probably VoID metadata/dataset URIs will be easier to discover once
the /.well-known/void trick (described in paragraph 7.2 of the W3C
VoID document) is widely adopted.


Agreed. But it's not a 'trick'. It's called a standard.


Is it?

There was me thinking it was an Interest Group Note.

Is there a newer version than:
http://www.w3.org/TR/2011/NOTE-void-20110303/

?

Dave







--
Patrick Durusau
patr...@durusau.net
Chair, V1 - US TAG to JTC 1/SC 34
Convener, JTC 1/SC 34/WG 3 (Topic Maps)
Editor, OpenDocument Format TC (OASIS), Project Editor ISO/IEC 26300
Co-Editor, ISO/IEC 13250-1, 13250-5 (Topic Maps)

Another Word For It (blog): http://tm.durusau.net
Homepage: http://www.durusau.net
Twitter: patrickDurusau







--
Patrick Durusau
patr...@durusau.net
Chair, V1 - US TAG to JTC 1/SC 34
Convener, JTC 1/SC 34/WG 3 (Topic Maps)
Editor, OpenDocument Format TC (OASIS), Project Editor ISO/IEC 26300
Co-Editor, ISO/IEC 13250-1, 13250-5 (Topic Maps)

Another Word For It (blog): http://tm.durusau.net
Homepage: http://www.durusau.net
Twitter: patrickDurusau




Re: Dataset URIs and metadata.

2011-07-25 Thread Patrick Durusau

Sebastian,

On 7/22/2011 11:42 AM, Sebastian Schaffert wrote:

Am 22.07.2011 um 17:19 schrieb Michael Hausenblas:


So, perhaps one day it will be a standard, but not today.


Good catch! Did you join the Pedantic Web [1] group, yet? We need more people 
like you.

I think the underlying point is here that a standard that is not widely adopted 
can claim to be a standard as much as it wants, but it simply isn't. ;-)

We need pragmatic solutions ... ;-)



Perhaps that was someone's point, but mine was the lesser one, about correct citation. 
I look up references when given, not simply to check them but to see 
what else I can learn at the reference. Sometimes I learn that the 
reference has little or nothing to do with the point in question. But I 
am still learning something.


On pragmatic solutions... I can only offer a comment that is often 
made about standards:


http://xkcd.com/927/

;-)

I don't have the answer but do think we need to frame the problem in 
terms of self-introspective change tracking.


That is, change tracking that documents the semantics of both data and 
structure, of information structures, including the change tracking 
system itself.


I say that because the history of information is one of change, which 
was less troublesome or perhaps less visible, when the agents of change 
(that would be us) were also the agents who consume the changes. Not 
perfect but with enough effort, a person can trace the changes in 
semantics (in both data and structure) to extract information they need.


We remain the agents of change, just glance over the literature on 
change tracking in DB schemas as an example, yet we want our benighted 
servants to track changes in semantics of both structure and data, that 
we *did not record.* That would be a good trick if we could do it. To be 
fair, we have trouble where we have failed to document semantics.


Would it be a pragmatic solution to look for principles that solutions 
could follow in documenting the semantics of both data and structure? So 
that other pragmatic solutions could re-use the previous work?


Hope you are at the start of a great week!

Patrick



--

Patrick Durusau
patr...@durusau.net
Chair, V1 - US TAG to JTC 1/SC 34
Convener, JTC 1/SC 34/WG 3 (Topic Maps)
Editor, OpenDocument Format TC (OASIS), Project Editor ISO/IEC 26300
Co-Editor, ISO/IEC 13250-1, 13250-5 (Topic Maps)

Another Word For It (blog): http://tm.durusau.net
Homepage: http://www.durusau.net
Twitter: patrickDurusau




Re: Dataset URIs and metadata.

2011-07-22 Thread Patrick Durusau

Michael,

On 7/22/2011 10:42 AM, Michael Hausenblas wrote:




Probably VoID metadata/dataset URIs will be easier to discover once
the /.well-known/void trick (described in paragraph 7.2 of the W3C
VoID document) is widely adopted.


Agreed. But it's not a 'trick'. It's called a standard.


Is it?


Yes, I think that RFC5785 [1] can be considered a standard. Unless you 
want to suggest that RFCs are sorta not real standards :P



RFCs can be standards, but there is a path by which RFCs become standards.

As of today, the RFC 5785 header reads PROPOSED STANDARD.

So, perhaps one day it will be a standard, but not today.

Hope you are nearing a great weekend!

Patrick



Cheers,
Michael

[1] http://tools.ietf.org/html/rfc5785
--
Dr. Michael Hausenblas, Research Fellow
LiDRC - Linked Data Research Centre
DERI - Digital Enterprise Research Institute
NUIG - National University of Ireland, Galway
Ireland, Europe
Tel. +353 91 495730
http://linkeddata.deri.ie/
http://sw-app.org/about.html

On 22 Jul 2011, at 15:39, Dave Reynolds wrote:


On Fri, 2011-07-22 at 09:59 +0100, Michael Hausenblas wrote:

Frans,


[snip]


Probably VoID metadata/dataset URIs will be easier to discover once
the /.well-known/void trick (described in paragraph 7.2 of the W3C
VoID document) is widely adopted.


Agreed. But it's not a 'trick'. It's called a standard.


Is it?

There was me thinking it was an Interest Group Note.

Is there a newer version than:
http://www.w3.org/TR/2011/NOTE-void-20110303/

?

Dave







--
Patrick Durusau
patr...@durusau.net
Chair, V1 - US TAG to JTC 1/SC 34
Convener, JTC 1/SC 34/WG 3 (Topic Maps)
Editor, OpenDocument Format TC (OASIS), Project Editor ISO/IEC 26300
Co-Editor, ISO/IEC 13250-1, 13250-5 (Topic Maps)

Another Word For It (blog): http://tm.durusau.net
Homepage: http://www.durusau.net
Twitter: patrickDurusau




Re: What is a URL? And What is a URI

2010-11-12 Thread Patrick Durusau
Dave,

Thanks!

I was working on a much longer and convoluted response. 

Best to refer to the canonical source and let it go.

Patrick


On Fri, 2010-11-12 at 09:22 +, Dave Reynolds wrote:
 On Thu, 2010-11-11 at 12:52 -0500, Kingsley Idehen wrote:
  All,
  
  As the conversation about HTTP responses evolves, I am inclined to
  believe that most still believe that:
  
  1. URL is equivalent to a URI
  2. URI is a fancier term for URL
  3. URI is equivalent to URL.
  
  I think my opinion on this matter is clear, but I am very interested
  in the views of anyone that don't agree with the following:
  
  1. URI is an abstraction for Identifiers that work at InterWeb scale
  2. A URI can serve as a Name
  3. A URI can serve as an Address
  4. A Name != Address
  5. We locate Data at Addresses
  6. Names can be used to provide indirection to Addresses i.e., Names
  can Resolve to Data.
 
 Why would this be a matter of opinion? :) 
 
 After all RFC3986 et al are Standards Track and have quite clear
 statements on what Identifier connotes in the context of URI.
 Such as:
 
 
 Identifier 
 
   An identifier embodies the information required to distinguish
   what is being identified from all other things within its scope of
   identification.  Our use of the terms identify and identifying
   refer to this purpose of distinguishing one resource from all
   other resources, regardless of how that purpose is accomplished
   (e.g., by name, address, or context).  These terms should not be
   mistaken as an assumption that an identifier defines or embodies
   the identity of what is referenced, though that may be the case
   for some identifiers.  Nor should it be assumed that a system
   using URIs will access the resource identified: in many cases,
   URIs are used to denote resources without any intention that they
   be accessed.
 
 
 Dave
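The name/address distinction RFC 3986 draws above can be made concrete with Python's standard `urllib` (a toy illustration; the URN below is just an example identifier that denotes without locating):

```python
from urllib.parse import urlsplit

# Both are URIs per RFC 3986; only one doubles as a locator (an address).
name = urlsplit("urn:isbn:0451450523")         # a pure name: nothing to fetch
address = urlsplit("http://www.durusau.net/")  # a name that also locates data

# The URN has no network authority, so there is no place to dereference it;
# the http URI carries an authority and thus can resolve to data.
print(name.scheme, name.netloc == "", address.netloc)
# → urn True www.durusau.net
```

Both identify; only one addresses, which is exactly point 4 in the list above (Name != Address).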
 
 





Re: What is a URL? And What is a URI

2010-11-12 Thread Patrick Durusau
Kingsley,

On Fri, 2010-11-12 at 10:12 -0500, Kingsley Idehen wrote:
 On 11/12/10 8:40 AM, Patrick Durusau wrote:
  Kingsley,
 
  On Fri, 2010-11-12 at 07:58 -0500, Kingsley Idehen wrote:
  On 11/12/10 5:59 AM, Patrick Durusau wrote:
  snip
 
 
 
  Patrick / Dave,
 
  I am hoping as the responses come in we might pick up something. There
  is certainly some confusion out there.
 
  Note my comments yesterday re. URIs and Referents. I believe this
  association to be 1:1, but others may not necessarily see it so.
 
  Isn't it that "...others may not necessarily see it so" that lies at
  the heart of semantic ambiguity?
 
 Yes!
 
 We are perpetuating ambiguity by conflating realms, ultimately. The Web 
 of URLs != Web of URIs. They are mutually inclusive (symbiotic).
 

Err, no, we are not perpetuating ambiguity. Ambiguity isn't a choice,
it is an operating condition. 

  Semantic ambiguity isn't going to go away. It is part and parcel of the
  very act of communication.
 
 This is why Context is King.
 
 You can use Context to reduce ambiguity.
 
 A good Comedian is a great Context flipper, for instance.
 
 Ambiguity exists in the real-world too, we use Context to disambiguate 
 every second of our lives.
 

Eh? True enough but context in the real-world (do computers exist in a
make believe world?) is as unbounded as the subjects we talk about.

It is the journal I am reading that is part of the context I am using
for a particular article or is it the author or is it the subject matter
or is it the sentence just before the one I am reading? 

All of those, at times some of those and at still other times, other
things will inform my context. 

  It is true that is very limited circumstances with very few semantics,
  such as TCP/IP, that is it possible to establish reliable communications
  across multiple recipients. (Or it might be more correct to say
  semantics of concern to such a small community that agreement is
  possible. I will have to pull Stevens off the shelf to see.)
 
  As the amount of semantics increases (or the size of the community), so
  does the potential for and therefore the amount of semantic ambiguity.
  (I am sure someone has published that as some ratio but I don't recall
  the reference.)
 
 So if a community believes in self-describing data, where the data is 
 the conveyor of context, why shouldn't it be able to express such beliefs 
 in its own best practice options? Basically, we can solve ambiguity in 
 the context of Linked Data oriented applications. Of course, that 
 doesn't apply to applications that don't grok Linked Data or buy into 
 the semantic fidelity expressed by the content of a structured data 
 bearing (carrying) resource e.g. one based on EAV model + HTTP URI based 
 Names.
 

Not to be offensive but are you familiar with begging the question? 

You are assuming that ...we can solve ambiguity in the context of
Linked Data oriented applications.*

That is the *issue* at hand and cannot be assumed to be true, lest we
all run afoul of begging the question issues.

Hope you are having a great day!

Patrick

*Your Linked Data Application* may supply context but that *is not*
interchangeable with other Linked Data Applications. 

Nor does it reduce ambiguity.

Why?

For the same reason in both cases, there is no basis on which context
can be associated with identification. Remember, the URI is the
identifier. (full stop)

Fix it so that URI plus *specified* properties in RDF graph identify a
subject, then you have a chance to reduce (not eliminate) ambiguity. Not
as a matter of personal whimsy but as part of a standard that everyone
follows. 

-- 
Patrick Durusau
patr...@durusau.net
Chair, V1 - US TAG to JTC 1/SC 34
Convener, JTC 1/SC 34/WG 3 (Topic Maps)
Editor, OpenDocument Format TC (OASIS), Project Editor ISO/IEC 26300
Co-Editor, ISO/IEC 13250-1, 13250-5 (Topic Maps)

Another Word For It (blog): http://tm.durusau.net
Homepage: http://www.durusau.net
Twitter: patrickDurusau
Newcomb Number: 1




Re: What is a URL? And What is a URI

2010-11-12 Thread Patrick Durusau
Kingsley,

Last one for today!

On Fri, 2010-11-12 at 16:05 -0500, Kingsley Idehen wrote:
 On 11/12/10 1:31 PM, Patrick Durusau wrote:
  

snip

  Not to be offensive but are you familiar with begging the question?
 
  You are assuming that ...we can solve ambiguity in the context of
  Linked Data oriented applications.*
 
 A Linked Data application is capable of perceiving an E-A-V graph 
 representation of data. That's context it can establish from content.
 

OK, same claim, different verse. ;-)

Now you are claiming that what is contained in an E-A-V graph is sufficient
to eliminate ambiguity. 

Another assumption for which you offer no evidence. 

Being mindful that graphs are going to vary from source to source, how
can you now claim that any E-A-V graph is going to be sufficient to
eliminate ambiguity?  

Repetition of the same claims doesn't advance the conversation. 

Hope you have started a great weekend by this point!

Patrick
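The point about E-A-V graphs can be shown with a toy example (hypothetical data: the entity URI and values below are invented for illustration). Merging two E-A-V graphs is mechanical, but nothing in the merged graph says which value is correct, so the ambiguity survives the merge.

```python
# Two sources publish E-A-V (entity-attribute-value) triples about the
# same entity URI, with conflicting values for the same attribute.
source_a = [("ex:Paris", "population", 2140000)]
source_b = [("ex:Paris", "population", 2161000)]

# The merge itself is trivial...
merged = source_a + source_b

# ...but the merged graph now asserts two different values, and carries
# no context telling a consumer which one to prefer.
values = {v for (e, a, v) in merged if e == "ex:Paris" and a == "population"}
print(sorted(values))
# → [2140000, 2161000]
```

Whether additional context (provenance, timestamps, a standard for identifying properties) can resolve this is the open question of the thread, not something the graph model answers by itself.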




Re: Is 303 really necessary?

2010-11-04 Thread Patrick Durusau
Dave,

On Thu, 2010-11-04 at 12:56 -0400, David Wood wrote:
 Hi all,

snip

 - Wide-spread mishandling of HTTP content negotiation makes it difficult if 
 not impossible to rely upon.  Until we can get browser vendors and server 
 vendors to handle content negotiation in a reasonable way, reliance on it is 
 not a realistic option.  That means that there needs to be an out-of-band 
 mechanism to disambiguate physical, virtual and conceptual resources on the 
 Web.  303s plus http-range-14 provide enough flexibility to do that; I'm not 
 convinced that overloading 200 does.
 

No mud, yet. ;-)

But curious if you can point to numbers on support for 303s and
http-range-14? Might have enough flexibility but if not widely
supported, so what? 

Hope you are having a great day!

Patrick
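For readers following the 303/http-range-14 discussion, the pattern at issue can be sketched as a tiny dispatch function (a sketch with hypothetical paths, not any deployed server's behavior): a request for a "thing" URI gets a 303 See Other pointing at a document about the thing, so the thing and the document never share a name.

```python
def respond(path: str):
    """Minimal httpRange-14 sketch: thing URIs 303-redirect to documents."""
    things = {"/id/belgium": "/doc/belgium"}  # thing URI -> document URI

    if path in things:
        # Non-information resource: redirect to a document describing it.
        return 303, {"Location": things[path]}, b""
    # Information resource: serve it directly with 200.
    return 200, {"Content-Type": "text/html"}, b"<html>About Belgium</html>"

status, headers, _ = respond("/id/belgium")
print(status, headers["Location"])
# → 303 /doc/belgium
```

The extra round trip per thing URI is the "network load" cost raised elsewhere in the thread; the question here is how widely servers actually implement the pattern.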

-- 
Patrick Durusau
patr...@durusau.net
Chair, V1 - US TAG to JTC 1/SC 34
Convener, JTC 1/SC 34/WG 3 (Topic Maps)
Editor, OpenDocument Format TC (OASIS), Project Editor ISO/IEC 26300
Co-Editor, ISO/IEC 13250-1, 13250-5 (Topic Maps)

Another Word For It (blog): http://tm.durusau.net
Homepage: http://www.durusau.net
Twitter: patrickDurusau
Newcomb Number: 1




Re: Is 303 really necessary?

2010-11-04 Thread Patrick Durusau
David,

On Thu, 2010-11-04 at 13:24 -0400, David Wood wrote:
 On Nov 4, 2010, at 13:17, Patrick Durusau wrote:

snip

  
  But curious if you can point to numbers on support for 303s and
  http-range-14? Might have enough flexibility but if not widely
  supported, so what? 
 
 
 Sure.  Both Persistent URLs (purlz.org, purl.org, purl.fdlp.gov and others) 
 support http-range-14 via 303 responses.   DOIs (doi.org) also depend on 303s.
 

Sure, but those aren't numbers, are they? Important places (I don't
usually cite articles other than by DOIs for commercial sources), but
that isn't indicative of widespread adoption.

Yes?

Compare to the adoption of Apache webservers for example?

 The issue that seems to be understated in this discussion is whether 
 something should be abandoned on the Web because it is not used by enough 
 people.  I claim (to the contrary) that something 'useful' need not be 
 'popular' to be important.
 

Sure, lots of things aren't popular that are important. 

Personally I think Akkadian and Sumerian parsers (my background in ANE
languages) are important even though they are not popular. 

But, when talking about language adoption (which is the real issue, does
the W3C TAG dictate the language everyone must use?) then numbers
(popularity) really do matter.

You may think that Esperanto was important, but it never was popular. 

I would rather please large numbers of users than the W3C TAG any day.
But that's just a personal opinion. 

Hope you are having a great day!

Patrick

-- 
Patrick Durusau
patr...@durusau.net
Chair, V1 - US TAG to JTC 1/SC 34
Convener, JTC 1/SC 34/WG 3 (Topic Maps)
Editor, OpenDocument Format TC (OASIS), Project Editor ISO/IEC 26300
Co-Editor, ISO/IEC 13250-1, 13250-5 (Topic Maps)

Another Word For It (blog): http://tm.durusau.net
Homepage: http://www.durusau.net
Twitter: patrickDurusau
Newcomb Number: 1




Re: Is 303 really necessary?

2010-11-04 Thread Patrick Durusau
Nathan,

On Thu, 2010-11-04 at 17:51 +, Nathan wrote:
 Ian Davis wrote:
  Hi all,
  
  The subject of this email is the title of a blog post I wrote last
  night questioning whether we actually need to continue with the 303
  redirect approach for Linked Data. My suggestion is that replacing it
  with a 200 is in practice harmless and that nothing actually breaks on
  the web. Please take a moment to read it if you are interested.
  
  http://iand.posterous.com/is-303-really-necessary
 
 Ian,
 
 Please, don't.
 
  303 is a PITA, and it has detrimental effects across the board from 
  network load through to server admin. Likewise #frag URIs have their own 
 set of PITA features (although they are nicer on the network and servers).
 
 However, and very critically (if you can get more critical than 
 critical!), both of these patterns / constraints are here to ensure that 
  different things have different names, and without that distinction 
 our data is junk.
 

Our data is already junk. What part of that seems unclear? 

See:

http://www.mkbergman.com/925/ontotext-sd-form-strategic-partnership/

For an unbiased assessment.

 This goes beyond your and my personal opinions, or those of anybody 
 here, the constraints are there so that in X months time when 
 multi-corp trawls the web, analyses it and releases billions of 
 statements saying like { /foo :hasFormat x; sioc:about 
 dbpedia:Whatever } about each doc on the web, that all of those 
 statements are said about documents, and not about you or I, or anything 
 else real, that they are said about the right thing, the correct name 
 is used.
 
 And this is critically important, to ensure that in X years time when 
 somebody downloads the RDF of 2010 in a big *TB sized archive and 
 considers the graph of RDF triples, in order to make sense of some parts 
 of it for something important, that the data they have isn't just 
 unreasonable junk.
 

Sorry, it is already junk. Remember? The failure to distinguish between
addresses in an address space and identifiers is a mistake that keeps on
giving.

Fix the mistake and all the cruft that is trying to cover over the
mistake will be unnecessary.

Our data may still be junk in that case but for different reasons. ;-)

snip

 
 But, for whatever reasons, we've made our choices, each has pro's and 
 cons, and we have to live with them - different things have different 
  names, and the giant global graph is usable. Please, keep it that way.
 

The problem is that you want *other* people to live with your choices.
Given the lack of adoption for some of them, those other people are
making other choices. 

Hope you are having a great day!

Patrick




Re: destabilizing core technologies: was Re: An RDF wishlist

2010-07-02 Thread Patrick Durusau
, and to realise that we don't need a single answer (a single
gamble) here. Part of the problem I was getting at earlier was of
dangerously elevated expectations... the argument that *all* data in
the Web must be in RDF. We can remain fans of the triple model for
simple factual data, even while acknowledging there will be other
useful formats (XMLs, JSONs). Some of us can gamble on lets use RDF
for everything. Some can retreat to the original, noble and neglected
metadata use case, and use RDF to describe information, but leave the
payload in other formats; others (myself at least) might spend their
time trying to use triples as a way of getting people to share the
information that's inside their heads rather than inside their
computers.

   


Well, yes and no. XML enabled users to do what they wanted to do.

RDF on the other hand offers a single identity model. Not exactly the 
same thing. Particularly not when you layer all the other parts onto it.



I am not advocating in favor of any specific changes. I am suggesting that
clinging to prior decisions simply because they are prior decisions doesn't
have a good track record. Learning from prior decisions, on the other hand,
such as the reduced (in my opinion) feature set of XML, seems to have a
better one. (Other examples left as an exercise for the reader.)
 

So, I think I'm holding an awkward position here:

* massive feature change (ie. not using triples, URIs etc); or rather
focus change: become a data sharing in the Web community not a
doing stuff with triples community
* cautious feature change (tweaking the triple model doesn't have many
big wins; it's simple already)

If we as a community stop overselling triples, embrace more
wholeheartedly their use as 'mere' metadata, fix up the tooling, and
find common cause with other open data advocacy groups who [heresy!]
are using non-triple formats, the next decade could be more rewarding
than the last.

If we obsess on perfecting triples-based data, we could keep cutting
and throwing out until there's nothing left. Most people don't read
the W3C specs anyway, but learn from examples and tools. So my main
claim is that, regardless of how much we trim and tidy, RDF's core
annoyingness levels won't change much since they're tied to the
open-world triples data model. RDF's incidental annoyingness levels,
on the other hand, offer a world of possibility for improvement.
Software libraries, datasets, frameworks for consuming all this data.
This being in the W3C world, we naturally look to the annoyingness in
the standards rather than the surrounding ecosystem. I'm arguing that
for RDF, we'd do better by fixing other things than the specs.

   


The most awkward part of your position (and mine to be honest) is that 
we are evaluating technologies from *our* perspective. What do *we* have 
to do to *our* models, etc.


What's missing? Oh, yeah, those folks that we want to use our models. 
;-) As Gomer Pyle would say, Surprise, surprise, surprise.


Semantic technologies (and I would include RDF, topic maps and other 
integration strategies) left users behind about a decade ago in our 
quest for the *right* answer. I think we have the public's verdict on 
the answers we have found.


That is not to say anything negative about RDF or topic maps or whatever 
the reader's personal flavor may be, but to point out that we haven't 
found the sweet spot that HTML 3.2 did.


I don't think there is a magic formula to determine that sweet spot but 
I do know that telling ourselves that:


1) We have invested too much time and effort to be wrong

2) With better tools users are going to use what they haven't for a decade

3) Better user education is the answer

4) We have been right all along

5) etc.

isn't the way to find the answer.

Nor am I advocating that we spent years in recrimination about who said 
what to who on what list and why that was at fault.


What I am advocating is that we fan out to try a variety of approaches 
and keep up the conversations so we can share what may be working, what 
looks like it is working, etc. Without ever tying ourselves into 
something that we have to sell to an unwilling market of buyers.


Apologies for the length but I did not have time to respond adequately 
yesterday and so have too long to think about it. ;-)


Hope you are having a great day!

Patrick



Hope you are having a great day!
 

Yes thanks, and likewise!

cheers,

Dan


   


--
Patrick Durusau
patr...@durusau.net
Chair, V1 - US TAG to JTC 1/SC 34
Convener, JTC 1/SC 34/WG 3 (Topic Maps)
Editor, OpenDocument Format TC (OASIS), Project Editor ISO/IEC 26300
Co-Editor, ISO/IEC 13250-1, 13250-5 (Topic Maps)

Another Word For It (blog): http://tm.durusau.net
Homepage: http://www.durusau.net
Twitter: patrickDurusau




Re: Show me the money - (was Subjects as Literals)

2010-07-02 Thread Patrick Durusau

Ian,

On 7/2/2010 3:39 AM, Ian Davis wrote:

On Fri, Jul 2, 2010 at 4:44 AM, Pat Hayespha...@ihmc.us  wrote:
   

Jeremy, your argument is perfectly sound from your company's POV, but not
from a broader perspective. Of course, any change will incur costs by those
who have based their assumptions upon no change happening. Your company took
a risk, apparently. IMO it was a bad risk, as you could have implemented a
better inference engine if you had allowed literal subjects internally in
the first place, but whatever. But that is not an argument for there to be
no further change for the rest of the world and for all future time. Who
knows what financial opportunities might become possible when this change is
made, opportunities which have not even been contemplated until now?

 

I think Jeremy speaks for most vendors that have made an investment in
the RDF stack. In my opinion the time for this kind of low level
change was back in 2000/2001 not after ten years of investment and
deployment. Right now the focus is rightly on adoption and fiddling
with the fundamentals will scare off the early majority for another 5
years. You are right that we took a risk on a technology and made our
investment accordingly, but it was a qualified risk because many of us
also took membership of the W3C to have influence over the technology
direction.

I would prefer to see this kind of effort put into n3 as a general
logic expression system and superset of RDF that perhaps we can move
towards once we have achieved mainstream with the core data expression
in RDF. I'd like to see 5 or 6 alternative and interoperable n3
implementations in use to iron out the problems, just like we have
with RDF engines (I can name 10+ and know of no interop issues between
them)

   
I make this point in another post this morning but is your argument that 
investment by vendors =


1) technology meets need perceived by users, and

2) technology meets the need in a way acceptable to users

??

What early majority? How long did it take HTML to take off? Or XML for 
that matter, at least in its simpler forms?


As I say in another post, I am not suggesting I have an alternative but 
am suggesting that we broaden the conversation to more than we have 
invested so much so we have to be right sort of reasoning.


Hope you are having a great day!

Patrick


--
Patrick Durusau
patr...@durusau.net
Chair, V1 - US TAG to JTC 1/SC 34
Convener, JTC 1/SC 34/WG 3 (Topic Maps)
Editor, OpenDocument Format TC (OASIS), Project Editor ISO/IEC 26300
Co-Editor, ISO/IEC 13250-1, 13250-5 (Topic Maps)

Another Word For It (blog): http://tm.durusau.net
Homepage: http://www.durusau.net
Twitter: patrickDurusau




Re: Subjects as Literals

2010-07-02 Thread Patrick Durusau

Pat,

On 7/1/2010 11:14 PM, Pat Hayes wrote:





snip
That is fine. Nobody mandates that your (or anyone else's) software 
must be able to handle all cases of RDF. But to impose an irrational 
limitation on a standard just because someone has spent a lot of money 
is a very bad way to make progress, IMO. Although, I believe that 
there are still people using COBOL, so you may have a point.




It was reported that the average American has nearly 4,000 interactions 
with COBOL-based transaction systems per year. 
http://www.itjungle.com/tfh/tfh060109-story03.html


So, yes, there are people still using COBOL.

Hope you are having a great day!

Patrick
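For context on the restriction being debated in this thread, here is a toy validity check (a sketch only; it uses crude string conventions, `ex:` prefixes for IRIs and `_:` for blank nodes, rather than real RDF term types): in the RDF abstract syntax as standardized, a subject may be an IRI or a blank node but not a literal.

```python
def valid_rdf_triple(s, p, o):
    """Sketch of the standard RDF term constraints: literal subjects are
    disallowed; predicates must be IRIs; objects may be any term."""
    def is_bnode(t):
        return isinstance(t, str) and t.startswith("_:")

    def is_iri(t):
        # Crude convention for this sketch: prefixed names count as IRIs.
        return isinstance(t, str) and ":" in t and not is_bnode(t)

    return (is_iri(s) or is_bnode(s)) and is_iri(p)

print(valid_rdf_triple("ex:COBOL", "ex:status", "still in use"))
# → True
print(valid_rdf_triple("still in use", "ex:describes", "ex:COBOL"))
# → False: literal subject, the case the thread wants allowed
```

The argument above is precisely whether that `False` branch should become `True` in a revised standard, given deployed engines built on the current rule.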

--
Patrick Durusau
patr...@durusau.net
Chair, V1 - US TAG to JTC 1/SC 34
Convener, JTC 1/SC 34/WG 3 (Topic Maps)
Editor, OpenDocument Format TC (OASIS), Project Editor ISO/IEC 26300
Co-Editor, ISO/IEC 13250-1, 13250-5 (Topic Maps)

Another Word For It (blog): http://tm.durusau.net
Homepage: http://www.durusau.net
Twitter: patrickDurusau




Re: destabilizing core technologies: was Re: An RDF wishlist

2010-07-02 Thread Patrick Durusau

Ian,

On 7/2/2010 5:25 AM, Ian Davis wrote:

Patrick,

Without disputing your wider point that HTML hit the sweet point of
usability and utility I will dispute the following:

   

HTML 3.2 did have:

1) *A need perceived by users as needing to be met*

 

Did users really know they wanted to link documents together to form a
world wide web? I spent much of the late nineties persuading companies
and individuals of the merits of being part of this new web thing and
then gritting my teeth when it came to actually showing them how to
get a page online - it was a painful confusion of text editors ( no
you can't use wordperfect ), fumbling in the dark ( no wysiwyg ),
dialup ( you mean I have to pay?)  and ftp! When MS frontpage came
along the users loved it because all that pain went away but they
could not understand why so many people laughed at the results.

   


Well, possibly. I am not sure that is how users saw the need.

That's the rub, I think it is hit or miss.

In the publishing area where I worked when the web came along, it was a 
question of being able to make low return material available to a wider 
audience for less distribution cost.


Not so much being part of a linked web as making material accessible.

How many users saw it that way I cannot say.


I think we all have short memories.

The advantage that HTML had was that people were able to use it before
creating their own, i.e. they were already reading websites so could at
some point say I want to make one of those. The problem RDF is
gradually overcoming is this bootstrapping stage. It has a harder time
because, to be frank, data is dull. But now people are seeing some of
the data being made available in browseable form e.g. at data.gov.uk
or dbpedia and saying, I want to make one of those.

   


Good point. But the basic tools to handle data have been around for a 
long time.


Why so long to get to the place where users can say: I want to make one 
of those. ?


Which I agree is a very good strategy.

Hope you are having a great day!

Patrick


Ian

   


--
Patrick Durusau
patr...@durusau.net
Chair, V1 - US TAG to JTC 1/SC 34
Convener, JTC 1/SC 34/WG 3 (Topic Maps)
Editor, OpenDocument Format TC (OASIS), Project Editor ISO/IEC 26300
Co-Editor, ISO/IEC 13250-1, 13250-5 (Topic Maps)

Another Word For It (blog): http://tm.durusau.net
Homepage: http://www.durusau.net
Twitter: patrickDurusau




Re: Show me the money - (was Subjects as Literals)

2010-07-02 Thread Patrick Durusau

Ian,

On 7/2/2010 5:27 AM, Ian Davis wrote:

On Fri, Jul 2, 2010 at 10:19 AM, Patrick Durusaupatr...@durusau.net  wrote:

   

I make this point in another post this morning but is your argument that
investment by vendors =

 

I think I just answered it there, before reading this message. Let me
know if not!

   
I think you made a very good point about needing examples so users can 
say: I want to do that.


Which was one of the strong points of HTML.

I am less convinced that argues in favor of vendor position that their 
investment equals how things have to be on the technical side.


Consider that when users see a large visualization of the WWW they 
think, I want to do that!, but when they see the graph code required, 
they become less interested. ;-)


I am less inclined to listen to vendors and much more inclined to listen 
to users.


A short story to illustrate the issue:

The Library of Congress Subject Headings, could be considered an 
ontology of sorts, has been under construction for decades. But until 
Karen Drabenstott (now Karen Marley) decided to ask the question of how 
effectively do users of the LCSH fare, no one had asked the question. I 
won't keep you in suspense, the results were:


Overall percentages of correct meanings for subject headings in the 
original order of subdivisions were as follows: children, 32%, adults, 
40%, reference 53%, and technical services librarians, 56%.


Ouch!

See Understanding Subject Heading in Library Catalogs 
http://www-personal.si.umich.edu/~ylime/meaning/meaning.pdf


It may be that the RDF stack is everything it is reported to be, but 
that does not mean it fits the needs of users as they see them. The 
only way to know is to ask. Asking the few users who mistakenly 
wander into working group meetings is probably insufficient.


Hope you are looking forward to a great weekend!

Patrick

--
Patrick Durusau
patr...@durusau.net
Chair, V1 - US TAG to JTC 1/SC 34
Convener, JTC 1/SC 34/WG 3 (Topic Maps)
Editor, OpenDocument Format TC (OASIS), Project Editor ISO/IEC 26300
Co-Editor, ISO/IEC 13250-1, 13250-5 (Topic Maps)

Another Word For It (blog): http://tm.durusau.net
Homepage: http://www.durusau.net
Twitter: patrickDurusau




Re: Show me the money - (was Subjects as Literals)

2010-07-02 Thread Patrick Durusau

Henry,

On 7/2/2010 6:03 AM, Henry Story wrote:

On 2 Jul 2010, at 11:57, Patrick Durusau wrote:

   

On 7/2/2010 5:27 AM, Ian Davis wrote:
 

On Fri, Jul 2, 2010 at 10:19 AM, Patrick Durusau patr...@durusau.net wrote:


   

I make this point in another post this morning but is your argument that
investment by vendors =


 

I think I just answered it there, before reading this message. Let me
know if not!


   

I think you made a very good point about needing examples so users can say: I want 
to do that.

Which was one of the strong points of HTML.
 

Ok, what users will want is the Social Web. And here is the way to convince 
people:
The Social Network Privacy Mess: Why we Need the Social Web

http://www.youtube.com/watch?v=994DvSJZyww&feature=channel


( This can of course be improved) The general ideas should be clear:

  dystopia: we cannot have all social data centralised on one server.
  utopia: there is a lot of money to be made in creating the social web, and 
thereby
 increasing democracy in the world.

  This can ONLY be done with linked data. And there is a real need for it.

   

Several presumptions:

1) there is a lot of money to be made creating the social web - ? On 
what economic model? Advertising? Can't simply presume that money can be 
made.


2) thereby increasing democracy in the world - ??? Not really sure what 
that has to do with social networks. However popular increasing 
democracy may be as a slogan, it is like fighting terrorism.


Different governments and populations have different definitions for 
both. I have my own preferences but realize there are different 
definitions used by others.


3) can ONLY be done with linked data. Really? Seems like the phone 
companies from your example did it long before linked data.


4) there is a real need for it. ? I get as annoyed as anyone with the 
multiple logins and universities do have some common logins for their 
internal systems but I am not sure I would describe it as a need. At 
least until some survey shows that a large number of users are willing 
to pay for such a service.


Hope you are looking forward to a great weekend!

Patrick


Henry
   


--
Patrick Durusau
patr...@durusau.net
Chair, V1 - US TAG to JTC 1/SC 34
Convener, JTC 1/SC 34/WG 3 (Topic Maps)
Editor, OpenDocument Format TC (OASIS), Project Editor ISO/IEC 26300
Co-Editor, ISO/IEC 13250-1, 13250-5 (Topic Maps)

Another Word For It (blog): http://tm.durusau.net
Homepage: http://www.durusau.net
Twitter: patrickDurusau




Re: destabilizing core technologies: was Re: An RDF wishlist

2010-07-02 Thread Patrick Durusau

Henry,

On 7/2/2010 5:58 AM, Henry Story wrote:

On 2 Jul 2010, at 11:39, Patrick Durusau wrote:

   

Good point. But the basic tools to handle data have been around for a long time.
 

The web could only get going in the 90ies when

   1) Windows 95 become (A GUI) widely deployed and relatively stable and had 
support for threads
   2) modems were cheap and available
   [3 the soviet unions had fallen, so the fear mongers had no security buttons 
to press]

In 1997 the SSL layer (https) gave an extra boost as it made commerce possible.

   
Err, you are omitting one critical fact, the one that led to TBL's 
paper being rejected from the hypertext conference: links could fail.


Reportedly, one of the critical failings of early hypertext 
systems was that links could not be allowed to fail. That blocked any 
sort of global scaling.


Hmmm, wonder what happens when links fail with RDF, considering that it 
relies on the yet-to-be-implemented 303 solution?
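The "303 solution" alluded to here is the W3C convention that a URI naming a real-world thing should answer GET with 303 See Other, redirecting the client to a document *about* the thing. A minimal sketch of that pattern, using a toy in-memory "web" instead of real HTTP (the names resolve and fake_web are invented for illustration, not any real API):

```python
def resolve(uri, responses, max_hops=10):
    """Follow 303 redirects in the style of the W3C httpRange-14 decision.

    `responses` maps a URI to a (status, payload) pair standing in for a
    real HTTP round trip; for a 303 the payload is the Location header.
    """
    for _ in range(max_hops):
        if uri not in responses:
            # the failed link Patrick wonders about: the chain just dies
            raise LookupError("link failed: nothing at %s" % uri)
        status, payload = responses[uri]
        if status == 303:
            # non-information resource: follow to a document describing it
            uri = payload
        elif status == 200:
            # information resource: we finally have a retrievable document
            return uri, payload
        else:
            raise RuntimeError("unexpected status %d at %s" % (status, uri))
    raise RuntimeError("too many redirects starting from %s" % uri)

# A thing-URI (Belgium the country) 303s to a data document about it.
fake_web = {
    "http://example.org/id/belgium": (303, "http://example.org/doc/belgium"),
    "http://example.org/doc/belgium": (200, "<rdf:Description about Belgium/>"),
}
doc, body = resolve("http://example.org/id/belgium", fake_web)
print(doc)  # http://example.org/doc/belgium
```

The point of the sketch is that every hop is a place the chain can break, which is exactly the dangling-link problem early hypertext systems refused to tolerate.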




Why so long to get to the place where users can say: I want to make one of 
those. ?
 


There are many reasons, but most of all is that people don't in fact 
understand hypertext, as being linked information. Or the people in charge of 
data don't think of it that way easily.

   

Sorry, I don't understand what you mean by:

that people in fact don't understand hypertext, as being linked 
information. ???



Engineers have for 50 years been educated in closed world systems, every 
programming language
including Prolog and lisp have local naming conventions that don't scale 
globally, and database people
make a fortune with SQL. The people interact only very lightly with the web. Usually 
there is a layer of Web Monkeys in between them and the web.

   


The reason I don't understand your earlier point, or this one, is that 
users have been familiar for hundreds of years with texts making 
references to other texts, which to my mind qualifies as hypertext, 
even if it did not have the mechanical convenience of HTML.


What else do you think hypertext would be?


So when you ask those engineers to build a global distributed information 
system, they
come up with the closest to what they know - which is remote method calls - and 
they invent XML/RPC which leads to SOAP.

   So it is not easy to get the knowledgeable people on board. The Web Monkeys 
are not very good at modelling, and the back end engineers don't understand the 
web. Finally the business people have problems understanding abstract concepts 
such as network effect.

   It just took time then to do a few demos, which the University of Berlin put 
together, slowly getting other people on board.


   It just takes time to rewire the brain of millions of people.

   


Well, I am not so sure that we need to rewire the brains of millions of 
people so much as we need to have our technologies adapt to them. Yes?


Granting that consumers can and do adapt to some technologies, the more 
consistent a technology is with how people think and work, the easier 
its adoption. Yes?


Hope you are looking forward to a great weekend!

Patrick

--
Patrick Durusau
patr...@durusau.net
Chair, V1 - US TAG to JTC 1/SC 34
Convener, JTC 1/SC 34/WG 3 (Topic Maps)
Editor, OpenDocument Format TC (OASIS), Project Editor ISO/IEC 26300
Co-Editor, ISO/IEC 13250-1, 13250-5 (Topic Maps)

Another Word For It (blog): http://tm.durusau.net
Homepage: http://www.durusau.net
Twitter: patrickDurusau




Re: Show me the money - (was Subjects as Literals)

2010-07-02 Thread Patrick Durusau

Henry,

Another reason why the SW is failing:


You don't see it as a need because you don't think of the options you are 
missing. Like people in 1800 did not think horses were slow, because they did 
not consider that they could fly. Or if they did think of that it was just as a 
dream.

Or closer to home, in the 80ies most people did not miss getting information 
quickly, the library was around the corner. Or they did not miss buying their 
tickets online.

You need a bit of imagination to see what you are missing. Which is why a lot 
of people stop dreaming.
It's painful.
   


I would reply with equally ad hominem remarks but it isn't worth the effort.

Patrick

On 7/2/2010 7:03 AM, Henry Story wrote:

On 2 Jul 2010, at 12:49, Patrick Durusau wrote:

   

Henry,

On 7/2/2010 6:03 AM, Henry Story wrote:
 

On 2 Jul 2010, at 11:57, Patrick Durusau wrote:


   

On 7/2/2010 5:27 AM, Ian Davis wrote:

 

On Fri, Jul 2, 2010 at 10:19 AM, Patrick Durusau patr...@durusau.net wrote:



   

I make this point in another post this morning but is your argument that
investment by vendors =



 

I think I just answered it there, before reading this message. Let me
know if not!



   

I think you made a very good point about needing examples so users can say: I want 
to do that.

Which was one of the strong points of HTML.

 

Ok, what users will want is the Social Web. And here is the way to convince 
people:
The Social Network Privacy Mess: Why we Need the Social Web

http://www.youtube.com/watch?v=994DvSJZyww&feature=channel


( This can of course be improved) The general ideas should be clear:

  dystopia: we cannot have all social data centralised on one server.
  utopia: there is a lot of money to be made in creating the social web, and 
thereby
 increasing democracy in the world.

  This can ONLY be done with linked data. And there is a real need for it.


   

Several presumptions:

1) there is a lot of money to be made creating the social web - ? On what 
economic model? Advertising? Can't simply presume that money can be made.
 

Look I could leave that to you as an exercise to the reader. I don't know why 
people want me
to give them answers also on how to make money. Sometimes you have to think for 
yourself.

Just think how much bigger a global social web is. Then think everyone 
connecting to everyone.
Then think that perhaps you could sell software to firms that have certain 
needs, to doctors and hospitals that have other needs, to universities, etc. 
etc...

It's up to your imagination really.

   

2) thereby increasing democracy in the world - ??? Not real sure what that has to do with social 
networks. However popular increasing democracy may be as a slogan, it is like fighting 
terrorism.
 

Because people can publish their own data, and control a lot more what 
they say and to whom they say it.

   

Different governments and populations have different definitions for both. I 
have my own preferences but realize there are different definitions used by 
others.
 

I don't care what dictators think about democracy frankly.


   

3) can ONLY be done with linked data. Really? Seems like the phone companies 
from your example did it long before linked data.
 


Phone companies do something very simple: connect phones. The internet connects 
computers. The web connects pages. You need the semantic web to connect things 
(and hence people)


   

4) there is a real need for it. ? I get as annoyed as anyone with the 
multiple logins and universities do have some common logins for their internal systems 
but I am not sure I would describe it as a need.
 

You don't see it as a need because you don't think of the options you are 
missing. Like people in 1800 did not think horses were slow, because they did 
not consider that they could fly. Or if they did think of that it was just as a 
dream.

Or closer to home, in the 80ies most people did not miss getting information 
quickly, the library was around the corner. Or they did not miss buying their 
tickets online.

You need a bit of imagination to see what you are missing. Which is why a lot 
of people stop dreaming.
It's painful.

   

At least until some survey shows that a large number of users are willing to 
pay for such a service.
 

I have never heard of an inventor making surveys to test things out. That is 
nonsense. At most what that can tell you is little details, ways to fine tune a 
system. It will never let you see the big changes coming.

   

Hope you are looking forward to a great weekend!
 

you too.

   

Patrick

 

Henry

   

--
Patrick Durusau
patr...@durusau.net
Chair, V1 - US TAG to JTC 1/SC 34
Convener, JTC 1/SC 34/WG 3 (Topic Maps)
Editor, OpenDocument Format TC (OASIS), Project Editor ISO/IEC 26300
Co-Editor, ISO/IEC 13250-1, 13250-5 (Topic Maps)

Another Word For It (blog): http://tm.durusau.net
Homepage: http

Re: destabilizing core technologies: was Re: An RDF wishlist

2010-07-02 Thread Patrick Durusau

Henry,

On 7/2/2010 7:11 AM, Henry Story wrote:


   

snip

Well, I am not so sure that we need to rewire the brain of millions of 
people. so much as we need to have our technologies adapt to them. Yes?
 

When it was discovered that the earth was round, the brains of everyone on 
earth had to be rewired.
Of course people only selectively did that. Those who believed it and 
understood the implications rewired the brains of just enough people so they 
could make fortunes, colonise whole continents and rule the world for centuries.

   
Actually, the "discovery" that the world was round (I assume you mean by 
Columbus) is a myth. It was well known that the earth was round long 
before then. The disagreement was about how large the earth was. In 
fact, Columbus had seriously underestimated the distance required to 
make the voyage.


Nor did he or any of the opponents to his voyage expect to discover a 
new continent.


The most accessible account of the myth about the world being flat is 
Umberto Eco's Serendipities: Language and Lunacy (1998); see the 
chapter The Force of Falsity.


Hope you are having a great day!

Patrick

--
Patrick Durusau
patr...@durusau.net
Chair, V1 - US TAG to JTC 1/SC 34
Convener, JTC 1/SC 34/WG 3 (Topic Maps)
Editor, OpenDocument Format TC (OASIS), Project Editor ISO/IEC 26300
Co-Editor, ISO/IEC 13250-1, 13250-5 (Topic Maps)

Another Word For It (blog): http://tm.durusau.net
Homepage: http://www.durusau.net
Twitter: patrickDurusau




Re: Organization ontology

2010-06-08 Thread Patrick Durusau

Greetings!

On 6/7/2010 11:27 PM, Todd Vincent wrote:


In the law, there are two concepts: (a) Person and (b) Entity. In 
simple terms:


A person is a human.

An entity is a non-human.


Well, yes, in simple terms but the law isn't always simple. ;-)

How would you handle municipalities that are considered to be persons 
for purposes of Title 42 Section 1983 actions? (civil rights)


It remains a municipality for any number of legal purposes but is also a 
person in other contexts.


I am sure a scan of the Federal Code (to say nothing of the case law) 
would turn up any number of nuances to the concept of person. Perhaps not 
as complex as the attribution-of-ownership rules in the IRC, but enough 
to be interesting.


The law-in-logic folks did a lot of work on legal concepts. One of the 
journals was Modern Uses of Logic in Law, which later became Jurimetrics.


Hope you are having a great day!

Patrick

Generally, these terms are used to distinguish who has the capacity to 
sue, be sued, or who lacks the capacity to sue or be sued.


A *person* (human) can sue or be sued in an individual capacity, with 
certain exceptions for juveniles, those who are legally insane, or who 
otherwise are deemed or adjudicated under the law to lack legal capacity.


An *entity* must exist as a legal person under the laws of a state.  
An entity's existence under the laws of a state occurs either through 
registration (usually with the secretary of state) or by operation of 
law (can happen with a partnership). Generally, anything else is not an 
entity.  For example, you cannot sue a group of people on a beach as 
an entity -- you would have to name each person individually. This is 
true because the group of people on a beach typically have done 
nothing to form a legally recognized entity.


From a legal perspective, calling something a Legal Entity is 
redundant; although from a non-legal perspective, it may provide 
clarity.  In contrast a legal person is not redundant because most 
legal minds would understand this to mean an entity (i.e., a person 
with the capacity to sue and be sued that is not a human person).


From a data modeling perspective, I find it straightforward to use the 
terms Person and Organization because (a) typically only lawyers 
understand Entity and (b) the data model for an organization tends 
to work for both (legal) entities and for organizations that might 
not fully meet the legal requirements for an entity.   Taking the 
example below, a large corporation or government agency (both of which 
are [legal] entities) might be organized into non-legal divisions, 
subdivisions, departments, groups, etc, that are not (legal) entities 
but still might operate like, and need to be named as, an 
organization.  Some companies have subsidiaries that are legal 
(entities).


By adding OrganizationType to the Organization data model, you 
provide the ability to modify the type of organization and can then 
represent both (legal) entities and (legally unrecognized) organizations.


Taxing authorities (e.g., the IRS) have different classifications for 
entities.  An S Corporation, C Corporation, and a Non-Profit 
Corporation are all (legal) entities, even though their tax status 
differs.
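Todd's modeling suggestion above can be sketched as one Organization shape whose organization_type field covers both legal entities and legally unrecognized divisions. This is a minimal illustration of the idea, not his actual schema; all field and type names are invented:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Organization:
    """One record shape for both (legal) entities and informal units."""
    name: str
    organization_type: str        # e.g. "C Corporation", "division"
    legal_entity: bool            # registered under the laws of a state?
    parent: Optional["Organization"] = None


# A registered corporation and an internal division it contains: both fit
# the same model, only organization_type and legal_entity differ.
acme = Organization("Acme Corp", "C Corporation", legal_entity=True)
sales = Organization("Acme Sales Division", "division",
                     legal_entity=False, parent=acme)
print(sales.parent.name)  # Acme Corp
```

The design choice here matches the point in the text: a lawyer can still recover the entity/non-entity distinction from the flags, while the rest of the model works the same way for both.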


Hope this is helpful for what it is worth.

Todd

See also U.S. Federal Rules of Civil Procedure, Rule 17.

*From:* public-egov-ig-requ...@w3.org 
[mailto:public-egov-ig-requ...@w3.org] *On Behalf Of *Patrick Logan

*Sent:* Monday, June 07, 2010 7:50 PM
*To:* Mike Norton
*Cc:* public-egov...@w3.org; Dave Reynolds; William Waites; Linked 
Data community; William Waites; Emmanouil Batsis (Manos)

*Subject:* Re: Organization ontology

Large corporations often have multiple legal entities and many 
informal, somewhat overlapping business organizations. Just saying. I 
wrangled with that. There're several different use cases for these for 
internal vs external, customer/vendor, financial vs operations, etc.


On Jun 7, 2010 3:19 PM, Mike Norton xsideofparad...@yahoo.com wrote:

I can see Manos' point.   It seems that LegalEntity rather the
Organization would work well under a sub-domain such as .LAW or
.DOJ or .SEC, but under other sub-domains such as .NASA, the
Organization element might be better served as ProjectName.   All
instances would help specify the Organization type, while keeping
Organization as the general unstylized element is probably ideal,
as inferred by William Waites.

Michael A. Norton



*From:* Emmanouil Batsis (Manos) ma...@abiss.gr


a) the way i get FormalOrganization, it could as well be called
LegalEntity to be more precise



--
Patrick Durusau
patr...@durusau.net
Chair, V1 - US TAG to JTC 1/SC 34
Convener, JTC 1/SC 34/WG 3 (Topic Maps)
Editor, OpenDocument Format TC (OASIS), Project Editor ISO/IEC 26300
Co

Re: Earthquake in Chile; Ideas? to help

2010-03-01 Thread Patrick Durusau

Aldo,

On 3/1/2010 3:09 PM, Aldo Bucchi wrote:

Hi,

On Mon, Mar 1, 2010 at 10:04 AM, Massimo Di Pierro
mdipie...@cs.depaul.edu  wrote:
   

This software was used for Haiti

  http://www.sahanapy.org/

Here it is in production for Haiti

  http://haiti.sahanafoundation.org/prod/

and here it is for Chile

  http://chile.sahanafoundation.org/live
 

Oh.
Who put this up?
I am impressed by (and thankful for) the amount of effort we are not aware of!
Now. The issue is quickly starting to become: data integration.

   

Data integration is always the issue.

Ever since there were two languages for communication. ;-)

Rather than attempting to choose (or force) a choice of an ontology, I 
would suggest creating a topic map to provide an integration layer over 
diverse data. That allows users to work with systems most familiar to 
them, which should allow them to get underway more quickly as well as 
more accurately. While allowing for integration of diverse sources.


Besides, as the relief effort evolves, things not contemplated at the 
outset are going arise. Topic maps can easily adapt to include such 
information.
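The integration-layer idea can be illustrated with a toy merge: records from different relief systems are combined when they share a subject identifier, and properties simply accumulate, so neither system is forced onto a single ontology up front. All identifiers and field names below are invented for the example; this is a sketch of the topic-map merging principle, not any real topic map engine:

```python
def merge_by_subject(*sources):
    """Merge dicts keyed by subject identifier; property values accumulate.

    Each source maps subject-id -> {property: value}. Records that share a
    subject identifier become one topic carrying everything said about it.
    """
    merged = {}
    for source in sources:
        for subject_id, props in source.items():
            topic = merged.setdefault(subject_id, {})
            for key, value in props.items():
                topic.setdefault(key, set()).add(value)
    return merged


# Two systems describe the same shelter under the same subject identifier,
# each with its own vocabulary; merging loses neither description.
sahana = {"urn:quake:shelter:17": {"name": "Refugio Maule", "beds": "120"}}
ngo_db = {"urn:quake:shelter:17": {"name": "Maule Shelter", "water": "yes"}}

topics = merge_by_subject(sahana, ngo_db)
print(sorted(topics["urn:quake:shelter:17"]["name"]))
# ['Maule Shelter', 'Refugio Maule']
```

New sources added later (the "things not contemplated at the outset") just become further arguments to the merge; nothing already integrated has to change.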


Hope you are having a great day!

Patrick

( I am sure someone has been saying this for eons. He's big, dark...
can't remember his name though ;).

I will fwd this to people on the team. If you have any idea, the list
of developers coordinating this is:
digitales-por-ch...@googlegroups.com

Thanks!

   

It is based on web2py and it would be trivial to add RDF tags since web2py
has native support for linked data :

   http://web2py.com/semantic
 

OK cool.
Now, just to be clear: semantic is not really a requirement. We just
need to make things better.

Thanks!
( Leo: I copy you directly cuz you're the python man here )

   

Currently the database schema of Sahanapy has not yet been tagged but they
always look for volunteers


http://groups.google.com/group/michipug/browse_thread/thread/e3e7700e7970059

Massimo


On Feb 28, 2010, at 8:50 PM, Aldo Bucchi wrote:

 

Hi,

As many of you probably know, we just had a mega quake here in Chile.
This next week will be critical in terms of logistics, finding lost
people... and as you probably know it is all about information in the
end.

In a scenario like this, everything is chaotic.

We will soon have a SPARQL endpoint available with all the data we can
find, hoping that people around the world can extract some insights.
In the meantime, I would love to hear any kind of ideas.

They needn't be high tech. Sometimes simple ideas go a long way!

Thanks!
A


--
Aldo Bucchi
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it
is
addressed and may contain information that is privileged and confidential.
If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us
immediately by
return e-mail.

   


 



   


--
Patrick Durusau
patr...@durusau.net
Chair, V1 - US TAG to JTC 1/SC 34
Convener, JTC 1/SC 34/WG 3 (Topic Maps)
Editor, OpenDocument Format TC (OASIS), Project Editor ISO/IEC 26300
Co-Editor, ISO/IEC 13250-1, 13250-5 (Topic Maps)