On 31 May 2016 at 11:01, Andy Seaborne <[email protected]> wrote:

> The "supply" side is doing OK. There are releases.  Sticking to a stable API
> is necessary to show it's ready.

Yes. Still, some people got scared off by the 0.x version numbers and the
"incubator" label, so we also need to formalize the stability.


> The "demand" side is weak.
> Discuss what it is not, not just what it is.

> Why does anyone need (not "want") a portability layer?
> Is it for using two systems at the same time or one at a time?
> Is it for new development or for fitting to existing applications?
> What are its advantages over other approaches? (what's its niche?)
> etc etc.

Thanks Andy, I think these are important questions.


For me, one advantage of Commons RDF is that it is very narrow - so
when making APIs such as

https://taverna.incubator.apache.org/javadoc/taverna-language/org/apache/taverna/scufl2/annotation/AnnotationTools.html

...then I don't want to send downstream users (who might not know much
about RDF) straight into Jena's or Sesame's comprehensive APIs - yet I
want to allow them to use those if they want to. I want to make it
easy to read or write just a tiny bit of RDF without learning a whole
framework.
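
To make that concrete, the kind of code I'd want a downstream user to
be able to write is roughly this (a minimal sketch against the simple
implementation; the exact factory class names have moved around
between our releases, so treat them as an approximation):

    import org.apache.commons.rdf.api.Graph;
    import org.apache.commons.rdf.api.IRI;
    import org.apache.commons.rdf.api.Literal;
    import org.apache.commons.rdf.api.RDF;
    import org.apache.commons.rdf.simple.SimpleRDF;

    public class TinyBitOfRdf {
        public static void main(String[] args) {
            // Any implementation can supply the factory; here the plain
            // in-memory one from commons-rdf-simple.
            RDF rdf = new SimpleRDF();

            IRI workflow = rdf.createIRI("http://example.com/workflow/1");
            IRI title = rdf.createIRI("http://purl.org/dc/terms/title");
            Literal label = rdf.createLiteral("My workflow");

            Graph graph = rdf.createGraph();
            graph.add(workflow, title, label);

            // The user only ever sees IRI/Literal/Graph/Triple -
            // never a Jena or Sesame class.
            System.out.println(graph.contains(workflow, title, label)); // true
            System.out.println(graph.size());                           // 1
        }
    }

And if they later need the full power of Jena or Sesame, the idea is
that the framework binding modules let them get at the underlying
objects rather than start over.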


Framework independence is also a good thing - not having to force
(particular versions of) the frameworks onto the classpath of
downstream users, and giving flexibility to change implementation for
performance or compatibility reasons. Of course there's the danger of
slf4j/log4j/commons-logging-style multiple layering here, as we're not
alone in that sphere.

For instance, I am (somewhat) maintaining a perhaps messy application I
inherited that is tied to a very old version of Sesame. I want to
update it to, say, get support for JSON-LD, but I would also like to
try Jena's StreamRDF mechanism and see how I can parse multiple files
concurrently. Currently I'm forced to choose which "update" path to go
down, or to pick a new "outer" framework like Clerezza. With Commons
RDF I can make the majority of the application deal with regular IRIs
and Triples through the Commons RDF API, and change just the code that
parses files to see if I can get a parsing performance boost or support
new file formats. So that would be the "one at a time" approach for an
existing application - something like the sketch below.
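
Roughly what I have in mind - only a sketch, and the Jena-to-Commons-RDF
conversion is hand-rolled here (a proper Jena binding module would do
this for us, and this version ignores datatypes and language tags):

    import org.apache.commons.rdf.api.BlankNodeOrIRI;
    import org.apache.commons.rdf.api.Graph;
    import org.apache.commons.rdf.api.RDF;
    import org.apache.commons.rdf.api.RDFTerm;
    import org.apache.commons.rdf.simple.SimpleRDF;
    import org.apache.jena.graph.Node;
    import org.apache.jena.riot.RDFDataMgr;
    import org.apache.jena.riot.system.StreamRDFBase;

    public class JenaBackedParser {

        private final RDF rdf = new SimpleRDF();

        /** The rest of the application only ever sees this Commons RDF Graph. */
        public Graph parse(String fileOrUri) {
            Graph graph = rdf.createGraph();
            // Jena's streaming parser (RIOT) pushes triples at us as it reads;
            // each one is converted to Commons RDF terms and added to the graph.
            RDFDataMgr.parse(new StreamRDFBase() {
                @Override
                public void triple(org.apache.jena.graph.Triple t) {
                    graph.add((BlankNodeOrIRI) term(t.getSubject()),
                              (org.apache.commons.rdf.api.IRI) term(t.getPredicate()),
                              term(t.getObject()));
                }
            }, fileOrUri);
            return graph;
        }

        private RDFTerm term(Node node) {
            if (node.isURI()) {
                return rdf.createIRI(node.getURI());
            } else if (node.isBlank()) {
                return rdf.createBlankNode(node.getBlankNodeLabel());
            } else {
                // Simplified: a real mapping would carry over datatype/language.
                return rdf.createLiteral(node.getLiteralLexicalForm());
            }
        }
    }

Swapping that parse() for a Sesame-backed one (or one that runs
StreamRDF over several files concurrently) would not touch the rest of
the application.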


As Commons RDF has invested a lot in cross-framework support (e.g. in
terms of equality), our potential is to be used as a bridge whenever
an application for some reason needs to combine multiple frameworks
(e.g. combining two libraries which made independent framework
choices) - avoiding things like serializing from Sesame to a byte
array just to parse it again in Jena within the same JVM process.
Perhaps because we cannot demonstrate this yet, we have not been
promoting this feature enough.
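
The kind of thing I would like us to be able to demonstrate is simply
this (hypothetical for now, since the Jena and Sesame binding modules
are still being worked on - the commented-out factory below is a
placeholder for whatever they end up exposing):

    import org.apache.commons.rdf.api.IRI;
    import org.apache.commons.rdf.api.RDF;
    import org.apache.commons.rdf.simple.SimpleRDF;

    public class CrossFrameworkEquality {
        public static void main(String[] args) {
            RDF simple = new SimpleRDF();
            // Placeholder for a Jena- or Sesame-backed factory, e.g.
            // RDF other = new org.apache.commons.rdf.jena.JenaRDF();
            RDF other = new SimpleRDF();

            IRI a = simple.createIRI("http://example.com/thing");
            IRI b = other.createIRI("http://example.com/thing");

            // The API contract defines equals()/hashCode() by the term itself,
            // not by which framework created it, so terms from different
            // implementations can live in the same Graph, Set or Map.
            System.out.println(a.equals(b));          // true
            System.out.println(a.ntriplesString());   // <http://example.com/thing>
        }
    }

No byte-array round trip, no re-parsing - the triples just move across
as objects.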


Yet we have not seen much interest in that - the discussions with
potential contributors on our lists have focused on the more exotic
topics, e.g. generalizing RDF onto any JVM class structure, or having
an API that is easy to use across JVM languages. This line of inquiry
- while very interesting - was at odds with the original "one set of
interfaces to rule them all" approach, and was perhaps dismissed a bit
too easily; but it is also difficult to achieve for a narrow Commons
component with few committers.




-- 
Stian Soiland-Reyes
Apache Taverna (incubating), Apache Commons
http://orcid.org/0000-0001-9842-9718
