Re: Show me the money - (was Subjects as Literals)

2010-07-02 Thread Benjamin Nowack
On 02.07.2010 12:53:11, Richard Cyganiak wrote:
>But telling those user stories and marketing the solution bundles is
>not something that can realistically be done via the medium of *specs*.
Yes, full agreement here. That's why the thread felt so weird to me;
I think the entire focus is wrong. But I'm starting to realize that
this is apparently the wrong forum to state this ;) (It was the last
W3C list I was still subscribed to, though. Time to stop whining and
move on, I guess.)

>In all fairness, the workshop was specifically about figuring out what
>changes and additions to the RDF core specifications make sense, and
>it was limited by a tight schedule. Better learning material and
>similar work items were not in scope.
I see now, thanks, I just re-read the workshop page. I should have known
that this stuff is still not considered important enough to put on an
agenda...

>W3C is running a number of XGs and IGs, which are in a position to
>produce marketing and teaching materials. Do you have proposals for
>concrete things that the existing groups should tackle?
If the XGs can't figure that out on their own, we're pretty lost.
What about interactive tools that answer questions like "find the specs
relevant to me", "who is using this?", "what tools exist for this?",
"what are the dependencies?", "examples?", or "what should I read next?"
We'd need an annotation tool for the individual specs and spec sections.
But then we'd have to use our own boring technology for what we say
it's made for. Nah... just had this cool idea of sub-queries in
property paths, gotta work on that first.

;)
Benji






Re: Show me the money - (was Subjects as Literals)

2010-07-02 Thread Benjamin Nowack

On 01.07.2010 22:44:48, Pat Hayes wrote:
>Jeremy, your argument is perfectly sound from your company's POV, but
>not from a broader perspective. Of course, any change will incur costs

Well, I think the "broader perspective" that the RDF workshop
failed to consider is exactly companies' costs and spec
marketability. The message still being sent out is that of a crazy (or
"visionary" ;) research community creating spec after spec, with
no stability in sight. And with the W3C process not really
encouraging the quick or full refactoring of existing specs (like
getting rid of once recommended features), each spec adds *new*
features and increases the overall complexity of identifying
market-ready Recs: RIF seems to be a replacement for OWL, but
OWL2 was only just Rec'd. Which should I implement? RDFa 1.1 and
SPARQL 1.1 both look like implementation nightmares to me. Current
RDF stores can't even be used for semantic feed readers because of
poor "ORDER BY DESC(?date)" implementations, but the group is
already working on query federation. RDFa is becoming the new
RSS 1.0, with each publisher triggering the development of
dedicated parsers (one for SearchMonkey data, one for RichSnippets,
one for Facebook's OGP, etc., but a single interoperable one? Very
hard work.) Something is wrong here. Featuritis is the reason for
the tiny number of complete toolkits. It's extremely frustrating
when you know in advance that you won't be able to pass the tests
*and* have your own (e.g. performance) needs covered. Why start at
all then?
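
(To make the feed-reader point concrete: the kind of query such a reader
lives on is as trivial as the sketch below -- the vocabulary and the
20-item limit are just illustrative assumptions -- and yet fast, correct
ordering on it is still a problem in several stores.)

   PREFIX rss: <http://purl.org/rss/1.0/>
   PREFIX dc:  <http://purl.org/dc/elements/1.1/>
   # Newest items first; a semantic feed reader needs little more than this.
   SELECT ?item ?title ?date WHERE {
     ?item a rss:item ;
           rss:title ?title ;
           dc:date ?date .
   }
   ORDER BY DESC(?date)
   LIMIT 20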

The W3C groups still seem to believe that syntactic sugar is
harmless. We suffer from spec obesity, badly. If we really want to
improve RDF, then we should go, well, for a low-carb layer cake.
Or better, several new ones. One for each target audience. KR pros
probably need OWL 2.0 *and* RIF, others may already be amazed by
"scoped key-value storage with a universal API" (aka triples + SPARQL).
These groups are equally important, but have to be addressed
differently.
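
(A sketch of what I mean by that last bundle, with made-up example URIs:
a named graph as the scope, subject plus predicate as the key, the object
as the value.)

   PREFIX ex: <http://example.com/vocab#>
   # "Scoped key-value storage with a universal API":
   # graph = scope, subject + predicate = key, object = value.
   SELECT ?value WHERE {
     GRAPH <http://example.com/apps/myapp> {
       <http://example.com/users/alice> ex:theme ?value .
     }
   }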

Our problem is not lack of features (native literal subjects? c'mon!).
It is identifying the individual user stories in our broad community
and marketing respective solution bundles. The RDFa and LOD folks
have demonstrated that this is possible. Similar success stories are
probably RIF for the business rules market, OWL for the DL/KR sector,
and many more. (Mine is agile, flexi-schema website development.)

RDF "Next Steps" should be all about scoped learning material and
deployment. There were several workshop submissions (e.g. by Jeremy,
Lee, and Richard) that mentioned this issue, but the workshop outcome
seems to be purely technical. Too bad.

Benji

--
Benjamin Nowack
http://bnode.org/
http://semsol.com/




Re: RDF Update Feeds

2009-12-02 Thread Benjamin Nowack

On Fri, 20 Nov 2009 18:35:05 Alexandre Passant wrote:

>What about using RSS feeds (w/ RDF extensions) combined with RSSCloud
> or PubSubHubbub ?

+1

I started working on a PuSH/RDF implementation a few weeks ago. Any
recent project I'm involved in seems to require a more real-time-ish
experience, and PuSH/PSHB is a very elegant and pragmatic solution.

A very nice bonus is that it supports simple private update streams
(you can, for example, push profile updates to selected people in your
address book w/o too much hassle). What I also like is the ability to
hook into the wealth of RSS/Atom feeds already out there, even though
you still have to implement a polling mechanism for legacy feeds that
don't support pub/sub yet.

PuSH needs asynchronous processing and job queues, and writing a
compliant hub is not trivial. But compared to implementing RDF
specs, I dare say it's pure joy ;)

For resource updates and predicate-level changes (maybe even object-
level, not sure yet), I'm looking at Atom Activities[1]. They are a
bit like RDF, and more obvious than a directly embedded SPARQL/Update
query. Translating incoming RDF activities to Talis Changesets or
SPARQL INSERTs/DELETEs seems to be straightforward, but I'm not
there yet code-wise, so this is really just a guess so far.
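
(As a rough, purely hypothetical sketch of where I'd like to end up: an
incoming "post updated" activity might get translated into a predicate-level
change like the one below. Graph, resource, and property names are made up;
this is not the Atom Activities mapping itself, just the SPARQL/Update end
of my guess.)

   PREFIX ex: <http://example.com/vocab#>
   # Replace the old value of ex:status for the updated post.
   DELETE { GRAPH <http://example.com/feeds/alice> {
              <http://example.com/posts/1> ex:status ?old } }
   INSERT { GRAPH <http://example.com/feeds/alice> {
              <http://example.com/posts/1> ex:status "published" } }
   WHERE  { OPTIONAL { GRAPH <http://example.com/feeds/alice> {
              <http://example.com/posts/1> ex:status ?old } } }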

Cheers,
Benji

[1] http://activitystrea.ms/

--
Benjamin Nowack
http://bnode.org/
http://semsol.com/




Re: Dons flame resistant (3 hours) interface about Linked Data URIs

2009-07-10 Thread Benjamin Nowack

On 10.07.2009 10:53:32, Toby Inkster wrote:
>What would it mean for the file to have a dc:created property? Would the
>value of that property be my date of birth, or would it be the date I
>first uploaded my data?
>
>The classic example is that if I use the same URL to represent myself
>and my web page, then how can I state that I am the creator of my web
>page without also asserting that I'm my own father.

By simply using two different properties?

These are the typical (and correct) arguments, but they are grounded
in an AI/logics purism(?) that *maybe* shouldn't be taken too seriously
on the public SemWeb. They are of course practically motivated as well,
but the practitioner here is someone with a reasoning background, not
necessarily a web developer in a web agency.

We could most probably use Hugh's approach/idea and still solve all
our practical problems.

Why did we give URIs to properties? To tell us what types of resources
they relate. They should support us, not restrict us. So,

twitter:bengee is me (in Web 2.0 speak)

The page has a creation date:
   twitter:bengee ex1:created "2007" .
   (ex1:created relates a document to a date)

I have a birthday:
   twitter:bengee ex2:birthday "08-14" .
   (ex2:birthday relates an agent to a date)

The page has a creator:
   twitter:bengee ex1:author twitter:bengee .
   (ex1:author relates a document to an agent)

I have a father:
   twitter:bengee ex2:father "Bodo" .
   (ex2:father relates an agent to an agent)

Now, this is totally blasphemous RDF *in our current view*, but
heck would it make publishing easy. And with properly annotated
properties, it would be easy to detect whether a term refers to
a document or a NIR, and the syntax is pretty obvious about whether
we are talking about a resource or the label of a resource. And hey,
no more arguing about whether a vcard is a person or not. And we
could get rid of our über-complicated XFN converters ;)

Simple querying works easily, directly on the instance data,
and ontologies could be used for more automatic disambiguation.
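
(A hypothetical example of what that automatic disambiguation could look
like, given the "properly annotated properties" above; the annotation
property ex:relatesSubjectOfType is entirely made up.)

   PREFIX ex:      <http://example.com/annotations#>
   PREFIX twitter: <http://twitter.com/>
   # Pull out only the agent-ish facts about twitter:bengee, i.e. the
   # statements whose predicates are annotated as relating agents.
   SELECT ?p ?o WHERE {
     twitter:bengee ?p ?o .
     ?p ex:relatesSubjectOfType ex:Agent .
   }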

So, dc:created can't tell you whether it refers to a person or a
document? Predicate FAIL, not Subject fail, maybe?


This is all rather tongue-in-cheek, of course; we've been here a couple
of times, I'm happy with the current specs, and different URIs for NIRs
and docs make a lot of sense. But we as a community should be prepared
for people just using their homepages and OpenIDs as direct
identifiers (XFN, anyone?). Our apps will have to deal with that situation,
and it's actually not too difficult to implement such a disambiguation
step. When I read a blog post and drag an author link onto my address book,
I want to add a person, not a page, and my address book should not say
"Ey dude, not a person" (well, it would be cool if it could, though).

Benji

--
Benjamin Nowack
http://bnode.org/
http://semsol.com/




LOD-consumer SPARQLBot / query federation

2008-09-23 Thread Benjamin Nowack


Hi LODers,

just FYI, as some people discussed query federation and
commandline-like access to LOD.

SPARQLScript[1] (the scripting language SPARQLBot[2] is based on)
doesn't have declarative syntax for pattern federation (such as
Andy's SERVICE option), but it provides similar possibilities
through queries in loops and an ENDPOINT keyword. I used this
feature to spot links between CrunchBase and DBPedia. SPARQLBot
has a couple of commands that span multiple LOD sources as well.
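
(For comparison, here's roughly what the declarative SERVICE variant
mentioned above would look like in plain SPARQL; this is not SPARQLScript
syntax, and the label-based join is simplified for illustration.)

   PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
   # Join local CrunchBase data with DBpedia via a remote SERVICE block;
   # SPARQLScript does the same with a loop and an ENDPOINT switch.
   SELECT ?company ?dbpediaResource WHERE {
     ?company rdfs:label ?name .
     SERVICE <http://dbpedia.org/sparql> {
       ?dbpediaResource rdfs:label ?name .
     }
   }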

Early/experimental stuff, but it sort of works and can be deployed to
hosted web servers (ARC has a SPARQLScript implementation).


Cheers,
Benji


[1] http://arc.semsol.org/docs/v2/sparqlscript
[2] http://lists.w3.org/Archives/Public/semantic-web/2008Sep/0160.html

--
Benjamin Nowack
http://bnode.org/




ANN: Semantic CrunchBase (with initial mappings)

2008-09-12 Thread Benjamin Nowack


Heyup LODers,

Semantic CrunchBase[1] is now (more or less ;) stable. SPARQL endpoint,
ConNeg, and a local browser are in place, and I also activated a little
inferencer[2][3] that generates mappings to FOAF and geo, and also
to DBPedia. The latter are still pretty sparse, but I'll continue
looking for additional LOD hooks.
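
(To give an idea of what such a mapping rule boils down to, here is a
CONSTRUCT-style sketch; the cb: class and property names are guesses, not
necessarily what the importer actually produces.)

   PREFIX foaf: <http://xmlns.com/foaf/0.1/>
   PREFIX cb:   <http://cb.semsol.org/ns#>
   # Expose CrunchBase people as foaf:Person with a foaf:name.
   CONSTRUCT { ?person a foaf:Person ; foaf:name ?name . }
   WHERE     { ?person a cb:Person ; cb:name ?name . }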

You can get a complete quad dump by using "DUMP" as a SPARQL query[4].

Best,
Benji

[1] http://cb.semsol.org/
[2] http://bnode.org/blog/2008/09/12/writing-inference-rules-with-sparqlscript
[3] http://cb.semsol.org/lod-linker/
[4] http://bnode.org/blog/2008/07/03/spog-in-arc

--
Benjamin Nowack
http://bnode.org/




Re: Semantic CrunchBase

2008-07-23 Thread Benjamin Nowack


Hi,

A first version is now online[1]. The setup isn't complete yet (conneg is
still missing, and there are no mappings to other LOD datasets or even
known vocabs).

I took a slightly different approach to publishing the data. The resource
URIs are (fake-)hash-based (e.g. [2]), and the related doc contains a pointer
to an RDF/XML (or conneg'd RDF/JSON) version. The HTML pages contain (the
beginning of) a new rdf-in-html format (called poshRDF) I'm experimenting
with, but I could possibly turn them into RDFa as well.

WRT the attribution requirement, I've added links to the original source
at the resource level, so at least resource-oriented RDF browsers should
auto-render those back-links to crunchbase.com.
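
(For resource-oriented browsers, the back-link is just a plain triple on the
resource; something like the query below would find it. dc:source is only
one plausible property choice, the actual export may use a different one.)

   PREFIX dc: <http://purl.org/dc/elements/1.1/>
   # Fetch the attribution back-link to the original crunchbase.com page.
   SELECT ?source WHERE {
     <http://cb.semsol.org/company/facebook#self> dc:source ?source .
   }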

Cheers so far,
Benji


[1] http://cb.semsol.org/
[2] http://cb.semsol.org/company/facebook#self

--
Benjamin Nowack
http://bnode.org/






Semantic CrunchBase

2008-07-17 Thread Benjamin Nowack


Heyup,

Just a short heads-up that I'm working on an RDF export and SPARQL
thingy for the recently announced[1] CrunchBase API[2]. The data is
really cool and could probably make a great LOD node. I'm in loose[3]
contact with the folks behind the API, they seem to be pretty open (we
may have to inject attribution triples somehow). Anyway, I'm still
importing the 27K source graphs, but might need some help with
generating hooks into dbpedia (or whatever makes sense link-wise) once
I've set up browser and endpoint.

Cheers so far,
Benji

[1]
http://www.techcrunch.com/2008/07/15/crunchbase-now-has-an-api-so-grab-our-data/
[2] http://www.crunchbase.com/help/api
[3] http://twitter.com/crunchbase/statuses/861102925

--
Benjamin Nowack
http://bnode.org/