On 31 Mar 2016, at 22:10, Kingsley Idehen <kide...@openlinksw.com> wrote:

> How are you arriving at data devoid of metadata about its origins?
Link rot... The endpoint is still working, but the dumps aren't.

> You would be better served, ultimately, instantiating a dedicated
> Virtuoso instance in the cloud for your specific needs. This instance
> could load datasets from wherever, using some of the existing endpoints
> (DBpedia and others) as a mechanism for exposing provenance data etc..

How would I do that? I'm not sure I fully understand. We already have a
Virtuoso instance in our local network for our working group. I usually
load it manually with the dumps of the datasets I want to run my
algorithms against, so that I don't impede the public endpoints'
operations or get blocked for making too many requests.

> There is no nice way of trying to dump all the data from an existing
> SPARQL endpoint.

If there is no nice way, is there any way at all? I mean, I can't be the
first one interested in all the triples a graph on a remote endpoint
contains (even if that graph is big).

Best,
Jörn

------------------------------------------------------------------------------
_______________________________________________
Virtuoso-users mailing list
Virtuoso-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/virtuoso-users
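[Editor's note for the archive: one commonly suggested brute-force answer to the question above is to page through the graph with `LIMIT`/`OFFSET` `CONSTRUCT` queries and concatenate the results. The sketch below illustrates the idea; the endpoint URL, graph IRI, page size, and accepted media type are placeholder assumptions, not details from this thread, and public endpoints may impose result-size caps or return non-deterministic pages that make this unreliable in practice.]

```python
# Hedged sketch: dump a named graph from a remote SPARQL endpoint by paging.
# Uses only the Python standard library. Endpoint/graph values are examples.
import urllib.parse
import urllib.request

ENDPOINT = "https://dbpedia.org/sparql"  # assumed example endpoint
GRAPH = "http://dbpedia.org"             # assumed example graph IRI
PAGE = 10000                             # triples per request; keep modest


def page_query(graph: str, limit: int, offset: int) -> str:
    """Build one page of a CONSTRUCT query over the named graph.

    ORDER BY makes paging deterministic; without it many endpoints may
    return overlapping or missing triples across pages."""
    return (
        f"CONSTRUCT {{ ?s ?p ?o }} "
        f"WHERE {{ GRAPH <{graph}> {{ ?s ?p ?o }} }} "
        f"ORDER BY ?s ?p ?o LIMIT {limit} OFFSET {offset}"
    )


def dump_graph(endpoint: str, graph: str, out_path: str) -> None:
    """Fetch the graph page by page and append the raw responses to a file.

    The Accept header requests N-Triples; which media types an endpoint
    actually honors varies, so check the server's content negotiation."""
    offset = 0
    with open(out_path, "wb") as out:
        while True:
            params = urllib.parse.urlencode(
                {"query": page_query(graph, PAGE, offset)}
            )
            req = urllib.request.Request(
                f"{endpoint}?{params}",
                headers={"Accept": "text/plain"},  # Virtuoso serves N-Triples for this
            )
            with urllib.request.urlopen(req) as resp:
                chunk = resp.read()
            if not chunk.strip():
                break  # empty page: everything has been read
            out.write(chunk)
            offset += PAGE
```

Even with `ORDER BY`, endpoints that cap result-set size or time out on deep offsets can silently truncate the dump, which is presumably why the reply above calls this "not nice" and recommends loading published dumps into a dedicated instance instead.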