> I am currently downloading the latest TTL file on a 250 GB RAM machine. I 
> will see if that is sufficient to run the conversion; otherwise we have 
> another busy one with around 310 GB.

Thank you!

> For querying I use the Jena query engine. I have created a module called 
> HDTQuery, located at http://download.systemsbiology.nl/sapp/, which is a 
> simple program, still under development, that should be able to use the full 
> power of SPARQL and be more advanced than grep… ;)

Does this tool allow querying HDT files from the command line, with SPARQL, 
and without the need to set up a Fuseki endpoint?
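For reference, what I am hoping for is roughly the following: a minimal sketch 
that queries an HDT file directly with SPARQL through the hdt-java/hdt-jena 
bindings and Apache Jena, with no Fuseki endpoint involved. The file name and 
query below are placeholders, and this is not necessarily how HDTQuery itself 
works:

    import org.apache.jena.query.*;
    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.rdfhdt.hdt.hdt.HDT;
    import org.rdfhdt.hdt.hdt.HDTManager;
    import org.rdfhdt.hdtjena.HDTGraph;

    public class HdtSparqlExample {
        public static void main(String[] args) throws Exception {
            // Memory-map the HDT file (and its .index) instead of loading it all into RAM.
            HDT hdt = HDTManager.mapIndexedHDT("wikidata.hdt", null);
            Model model = ModelFactory.createModelForGraph(new HDTGraph(hdt));

            // Placeholder query: all statements about Q42, limited to 10 rows.
            String sparql = "SELECT ?p ?o WHERE { "
                    + "<http://www.wikidata.org/entity/Q42> ?p ?o } LIMIT 10";
            try (QueryExecution qe =
                    QueryExecutionFactory.create(QueryFactory.create(sparql), model)) {
                ResultSetFormatter.out(System.out, qe.execSelect());
            }
            hdt.close();
        }
    }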

> If this all works out I will check with our department whether we can set 
> up a weekly cron job to convert the TTL file, if it is still needed. But as 
> it is growing rapidly we might run into memory issues later?

Thank you!
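
In case it helps, the conversion step itself could also be scripted against 
the hdt-java API directly; the sketch below is only an illustration (the dump 
path and base URI are placeholders, not your actual setup), and this is the 
step that needs the large amount of RAM you mention:

    import org.rdfhdt.hdt.enums.RDFNotation;
    import org.rdfhdt.hdt.hdt.HDT;
    import org.rdfhdt.hdt.hdt.HDTManager;
    import org.rdfhdt.hdt.options.HDTSpecification;

    public class TtlToHdt {
        public static void main(String[] args) throws Exception {
            // Parse the Turtle dump and build the HDT structures in memory.
            try (HDT hdt = HDTManager.generateHDT(
                    "latest-all.ttl",                   // placeholder path to the dump
                    "http://www.wikidata.org/entity/",  // base URI (assumption)
                    RDFNotation.TURTLE,
                    new HDTSpecification(),             // default HDT options
                    null)) {                            // no progress listener
                // Write the compressed HDT file that can then be queried or grepped.
                hdt.saveToHDT("wikidata.hdt", null);
            }
        }
    }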

_______________________________________________
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata