Hello there, I am just starting off with Jena and am pretty new to it. I am
trying to load the DBpedia datasets so that I can have a local copy of
DBpedia running on my machine. I used the TDB loader to load the data sets,
specifying a directory into which to load them. I then used the following
code to query the dataset:
  String directory = "c:/dataset";
  // Open (or create) the TDB-backed dataset and get its default model
  Dataset dataset = TDBFactory.createDataset(directory);
  Model newModel = dataset.getDefaultModel();
  String q = "SELECT ?p ?o WHERE { "
      + "<http://dbpedia.org/resource/Mendelian_inheritance> ?p ?o . }";
  Query query = QueryFactory.create(q);
  QueryExecution qexec = QueryExecutionFactory.create(query, newModel);
  try {
    ResultSet results = qexec.execSelect();
    while (results.hasNext()) {
      QuerySolution result = results.nextSolution();
      // The query only binds ?p and ?o; the subject is fixed in the pattern
      RDFNode p = result.get("p");
      RDFNode o = result.get("o");
      System.out.println(" { <http://dbpedia.org/resource/Mendelian_inheritance> "
          + p + " " + o + " . }");
    }
  } finally {
    qexec.close();
  }
Now my question is this: the DBpedia data dumps come as several files. Do I
load all of these files into the same TDB directory, creating one big model,
or do I need to load them into different directories and therefore create
different models to query the data? Please note that I do not plan to load
the whole of the DBpedia datasets into the store, just the English versions
of Ontology Infobox Properties, Titles, and Ontology Infobox Types. Forgive
me for my very amateur question, but I am just getting started with it ;).
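In case it helps frame the question, here is how I invoked the loader; the TDB
command-line loader accepts several input files in one run against a single
`--loc` directory (the dump file names below are just examples of what the
DBpedia downloads look like, not the exact names I used):

```shell
# Load several DBpedia dump files into one TDB directory in a single pass
# (file names are illustrative placeholders for the actual dump files)
tdbloader --loc c:/dataset \
    infobox_properties_en.nt \
    labels_en.nt \
    instance_types_en.nt
```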