Hello,
Speaking of horsepower on Linked Data, this could be a project to follow: http://linkeddatafragments.org/
Cheers,

Nicola Vitucci <nicola.vitu...@gmail.com> wrote:

Hi all,

sorry for the late reply. First of all, thanks for the encouraging
comments! To track suggestions and bugs more easily, I've created a
GitHub repo here:

https://github.com/nvitucci/wikisparql-project

I'll try to write things up as the project progresses, but feel free to
start using the issue tracker.

@Paul:

You can generally run those O.K. on the DBpedia SPARQL endpoint.
It would be nice to see a few more horsepower put behind this.

You're definitely right. As for "pure horsepower", the machine hosting
WikiSPARQL is not that big right now (32 GB of RAM, an 8-thread CPU, no
SSDs), but there are a couple of things I'd like to do on the backend
side to try to improve these kinds of queries.

@Markus:

By the way, if you can control the output UI completely, you may
consider adding this: instead of a property or item URI, always display
its label (in a selected language), and merely show the URI as a tooltip
or similar. This could be done after querying with Javascript so that
people don't need to do many join+filter parts in queries just to
retrieve the labels.

Showing a (localized) label in place of a URI is a neat idea, and I had
planned to do it. My main concern is queries that return a lot of URIs,
since fetching all of their labels might add some overhead - worth
trying, though.
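For what it's worth, one way to keep that overhead down is to fetch all the labels in a single follow-up query rather than one request per URI. A minimal sketch in Python (the function name and the example.org URIs are my own; it assumes the endpoint exposes rdfs:label, which both DBpedia-style endpoints do):

```python
def label_query(uris, lang="en"):
    """Build one SPARQL query that fetches labels for many URIs at once,
    so the client can swap URIs for labels after the main query returns.
    Hypothetical helper, not part of WikiSPARQL itself."""
    # VALUES binds ?uri to every URI from the result set in a single query.
    values = " ".join(f"<{u}>" for u in uris)
    return (
        "SELECT ?uri ?label WHERE { "
        f"VALUES ?uri {{ {values} }} "
        "?uri rdfs:label ?label . "
        f'FILTER(LANG(?label) = "{lang}") }}'
    )

q = label_query(["http://example.org/Q1", "http://example.org/Q2"])
```

The UI could then replace each URI with its label client-side and keep the URI itself as the tooltip, with one extra round trip per result page instead of one per URI.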

P.S. Your interface is very nice, but as Paul remarked it seems that
some of the queries are a little slow. Could you maybe rewire the SPARQL
execution to our endpoint at http://milenio.dcc.uchile.cl/sparql? It
seems to be faster. Of course, I understand if you want to use it for
your own testing, but if your main interest is in the UI and not the
backend, this might be a nice cooperation.

I decided to spend some time on the UI because it makes the endpoint
easier to use, but my main goal is to work out, together with the
community, how to make an endpoint as efficient as possible - not only
by tuning the right parameters (e.g. should there be a timeout on
queries? how much RAM should I make available?) but also by looking at
different technologies. It's great to have the chance to compare results
with other endpoints, though, and letting users choose which endpoint(s)
to query, as you suggested, might be a good idea, so I can see many
opportunities for cooperation. By the way, can I reuse your queries as
example queries?

@Kingsley:

SPARQL URLs should work across SPARQL endpoints. Basically, you should
only change the host part of the URL to execute the same query across
different endpoints.

Indeed, that would be pretty easy, provided that SPARQL "variants" (e.g.
the use of parentheses around aggregate expressions, or magic
properties) are handled correctly.
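As a sketch of that idea (the function name is my own; only the Python standard library is used), retargeting a SPARQL URL could be as simple as replacing its host component while leaving the path and the encoded query untouched:

```python
from urllib.parse import urlsplit, urlunsplit

def retarget(sparql_url, new_host):
    """Point the same SPARQL request at a different endpoint by
    replacing only the host part of the URL. Hypothetical helper."""
    parts = urlsplit(sparql_url)
    # Keep scheme, path, query string and fragment; swap the host.
    return urlunsplit((parts.scheme, new_host,
                       parts.path, parts.query, parts.fragment))

url = retarget("http://dbpedia.org/sparql?query=ASK%20%7B%7D",
               "milenio.dcc.uchile.cl")
# -> "http://milenio.dcc.uchile.cl/sparql?query=ASK%20%7B%7D"
```

This only works, of course, as long as both endpoints accept the same path (/sparql) and the same dialect of the query itself.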

Cheers,

Nicola

_______________________________________________
Wikidata-l mailing list
Wikidata-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata-l
Jean-Baptiste Pressac
Database processing and analysis
Centre de Recherche Bretonne et Celtique
20 rue Duquesne
CS 93837
29238 BREST cedex 3
tel : +33 (0)2 98 01 68 95
fax : +33 (0)2 98 01 63 93

