>To do this, knowledge management techniques were required - but RDF
was not a good fit.  Though there are various environmental ontologies
(the best set I found being from NASA), the agency's needs were for
knowledge organized around the "themes" they were interested in and
the specific issues their scientists were identifying.  In other
words, they wanted to develop a dynamic knowledge model based on the
information they gathered, not on a static pre-existing set of
schemas.

That sounds like a good fit for RDF to me.  In fact, the only use I've
ever made of RDF has been with dynamic models.  I'd also say that the
set of data sets suitable for modelling with Topic Maps is essentially
equivalent to the set of data sets suitable for modelling with RDF ...
though Jan can probably speak more directly to that.

Anyhow, I missed Jan's question, so my answer to him would be that RDF
is a bad fit for time series data, especially at 1000 samples per
second.  For our data, that would have worked out to several hundred
thousand triples per minute - something that existing RDF tools have
no idea how to handle; see:

http://esw.w3.org/topic/LargeTripleStores
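To make the arithmetic concrete, here's a quick back-of-the-envelope
sketch.  The 4-triples-per-sample figure is my assumption (a
hypothetical observation node with type, series link, timestamp, and
value), not a claim about any particular modelling scheme:

```python
# Rough estimate of RDF triple volume for high-rate time series data.
# Assumes each sample becomes an observation node carrying ~4 triples
# (rdf:type, link to its series, timestamp, value) -- an assumed shape,
# but typical of how per-sample observations get reified in RDF.

SAMPLES_PER_SECOND = 1000   # the sample rate discussed above
TRIPLES_PER_SAMPLE = 4      # assumption: type + series + time + value

def triples_per_minute(sample_rate, triples_per_sample):
    """Triples generated per minute at the given sample rate."""
    return sample_rate * 60 * triples_per_sample

print(triples_per_minute(SAMPLES_PER_SECOND, TRIPLES_PER_SAMPLE))
# 240000 -- i.e. "several hundred thousand triples per minute"
```

Even with a leaner encoding (say, 2 triples per sample) you're still
well past 100,000 triples a minute, which is the point.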

Mark.