Hi,

I wonder about the limit of triples when accessing DBpedia URIs:

  $ rapper -c http://dbpedia.org/resource/Netherlands
  rapper: Parsing URI http://dbpedia.org/resource/Netherlands with parser rdfxml
  rapper: Parsing returned 2001 triples

When I access that URI in a browser I receive the complete data. This means that machines are underprivileged, even though they, rather than human users, are the ones capable of processing that amount of data.

Wouldn't it be nice:
1) whenever such a limit is applied, to return a triple stating that a limit has been applied, so that the machine knows it does not know everything there is to know; and
2) to include triples based on their expected relevance? For example, rdfs:label is generally of interest.
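The marker triple sketched in (1) could be as simple as the following Turtle. The ex: vocabulary and property names here are hypothetical, just to illustrate the idea; DBpedia emits no such triples today, and the limit value is illustrative:

```turtle
@prefix ex: <http://example.org/vocab#> .   # hypothetical vocabulary
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

<http://dbpedia.org/resource/Netherlands>
    ex:resultTruncated  true ;              # signals that the response is incomplete
    ex:tripleLimit      "2000"^^xsd:integer .  # the limit that was applied (illustrative)
```

A client seeing ex:resultTruncated could then decide to fetch the rest, e.g. via the SPARQL endpoint with OFFSET/LIMIT paging.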

Best regards,
Basil Ell
