I feel like you are misrepresenting my request, and possibly trying to offend 
me as well.

My "UC", as you call it, is simply that I would like to have a local copy of 
Wikidata and query it using SPARQL. Nothing I've tried so far works on 
commodity hardware, since the database is so large. But HDT could work. So I 
asked whether an HDT dump could, please, be added to the other dumps that 
Wikidata periodically generates. I also told you already that *I AM* trying 
to use the one-year-old dump, but in order to use the HDT tools I'm told 
that I *MUST* first generate an additional index, which unfortunately I 
can't generate for the same reasons that I can't convert the Turtle dump to 
HDT. So what I was trying to say is that if Wikidata were to add an HDT 
dump, it should contain both the .hdt file and the .hdt.index file in order 
to be useful. That's about it, and it's not just about me: anybody who wants 
a local copy of Wikidata could benefit, since setting up a .hdt file seems 
much easier than loading a Turtle dump. So I don't understand why you're 
trying to blame me for this.
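For context, here is roughly what the two steps look like with the hdt-cpp 
command-line tools (a sketch only; I'm assuming the rdf2hdt and hdtSearch 
tool names from hdt-cpp, and the exact name of the generated index file 
varies between HDT versions):

    # Converting the Turtle dump to HDT -- this already fails on
    # commodity hardware because of the memory it needs:
    rdf2hdt -f turtle wikidata.ttl wikidata.hdt

    # The first query against the .hdt file builds the side index
    # next to it (e.g. wikidata.hdt.index*), which fails for the
    # same reason:
    hdtSearch wikidata.hdt

If the dumps shipped both files, a consumer could skip both memory-hungry 
steps and start querying immediately.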

If you are part of the Wikidata dev team, I'd greatly appreciate a 
"can/can't" or "don't care" response rather than the passive-aggressive game 
you played in your last email.


> Let me try to understand ... 
> You are a "data consumer" with the following needs:
>   - Latest version of the data
>   - Quick access to the data
>   - You don't want to use the current ways to access the data by the 
> publisher (endpoint, ttl dumps, LDFragments)
>  However, you ask for a binary format (HDT), but you don't have enough 
> memory to set up your own environment/endpoint.
> For that reason, you are asking the publisher to support both .hdt and 
> .hdt.index files. 
>  
> Do you think there are many users with your current UC?

_______________________________________________
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata