Looks like someone hasn't learned the lesson:
https://www.mail-archive.com/wikidata-l@lists.wikimedia.org/msg02588.html
On Thu, Feb 26, 2015 at 9:27 PM, Lukas Benedix wrote:
> I second this!
>
>
> btw: what is the status of the problem with the missing dumps with
> history? (latest available fro…

> …ast of our problems. The reuse
> of data is first to happen within our projects and THAT is not so much of a
> technical problem at all.
> Thanks,
> GerardM
>
> On 28 October 2014 11:26, Martynas Jusevičius wrote:
>>
>> Gerard,
>>
>> what about query f…

>> …the Wikidata statistics really hurt.
>>
>> So let's not spend time at this time on RDF. Let's ensure that what we have
>> works, works well, and plan carefully for a better RDF, but let's only have it
>> go in production AFTER we know that it works well.
>> Thanks,
>>

> …y for a better RDF, but let's only have it
> go in production AFTER we know that it works well.
> Thanks,
> GerardM
>
> On 28 October 2014 02:46, Martynas Jusevičius wrote:
>>
>> Hey all,
>>
>> so I see there is some work being done on mapping Wikidata data mo…

Hey all,
so I see there is some work being done on mapping Wikidata data model
to RDF [1].
Just a thought: what if you actually used RDF and Wikidata's concepts
modeled in it right from the start? And used standard RDF tools, APIs,
query language (SPARQL) instead of building the whole thing from scratch?
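
In case it helps to picture the suggestion, here is a minimal sketch of what
"using standard RDF tools and SPARQL" could look like, assuming Python with
rdflib; the property namespace and the single triple are invented for
illustration and are not Wikidata's actual RDF mapping.

    from rdflib import Graph, Namespace

    # Illustrative namespaces only -- not the real Wikidata RDF export.
    WD = Namespace("http://www.wikidata.org/entity/")
    EX = Namespace("http://example.org/prop/")  # hypothetical property namespace

    g = Graph()
    g.bind("wd", WD)
    g.bind("ex", EX)

    # One made-up statement: item Q42 has an example "placeOfBirth" of Q350.
    g.add((WD.Q42, EX.placeOfBirth, WD.Q350))

    # Query it back with standard SPARQL instead of a home-grown query API.
    for row in g.query("""
        PREFIX ex: <http://example.org/prop/>
        SELECT ?item ?place WHERE { ?item ex:placeOfBirth ?place . }
    """):
        print(row.item, row.place)
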
Jan,
my suspicion is that my predictions from last year hold true: it is a
far more complex task to design a scalable and performant data model,
query language and/or query engine solely for Wikidata than the
designers of this project anticipated - unless they did anticipate and
now knowingly fail…

Hey Lydia,
how about query access?
Martynas
graphityhq.com
On Wed, Nov 6, 2013 at 6:17 PM, Lydia Pintscher wrote:
> Hey everyone,
>
> Progress! We now have the long awaited new search backend up and
> running for testing on Wikidata. It will still need some tweaking but
> please do try it and g…

There was a long discussion not so long ago about using established
RDF tools for Wikipedia dumps instead of home-brewed ones, but I guess
someone hasn't learnt the lesson yet.
On Thu, Sep 26, 2013 at 2:22 PM, Kingsley Idehen wrote:
> All,
>
> See: https://www.wikidata.org/wiki/Q76
>
> The resour…

Yes, that is one of the reasons functional languages are getting popular:
https://www.fpcomplete.com/blog/2012/04/the-downfall-of-imperative-programming
With PHP and JavaScript being the most widespread (and still misused)
languages we will not get there soon, however.
On Mon, Jul 8, 2013 at 10:57…

Here's my approach to software code problems: we need less of it, not
more. We need to remove domain logic from source code and move it into
data, which can be managed and on which UI can be built.
In that way we can build generic, scalable software agents. That is the
way to the Semantic Web.
Martynas

You probably mean Linked Data?
On Tue, Jun 11, 2013 at 9:41 PM, David Cuenca wrote:
> While on the Hackathon I had the opportunity to talk with some people from
> sister projects about how they view Wikidata and the relationship it should
> have to sister projects. Probably you are already familiar…

> …d be very much
> interested in that.
>
> Cheers,
> Denny
>
>
>
>
> 2012/12/19 Martynas Jusevičius
>>
>> Hey wikidatians,
>>
>> occasionally checking threads in this list like the current one, I get
>> a mixed feeling: on one hand, it is sad to…

Hey wikidatians,
occasionally checking threads in this list like the current one, I get
a mixed feeling: on one hand, it is sad to see the efforts and
resources wasted as Wikidata tries to reinvent RDF, and now also
triplestore design as well as XSD datatypes. What's next, WikiQL
instead of SPARQL?…

>> …ple
>> SPARQL, which is doable but attempts have shown it doesn't scale. A
>> native RDF store is much more performant.
>
> Do you have a reference for this? I always thought it was exactly the
> opposite, i.e. SPARQL2SQL mappers performing better than native stores.
>

> …the right
> choice of modelling. Before going into the discussion any further [1], I
> think you should name an example where reification is really better than
> other options.
>
> All the best,
> Sebastian
>
> [1] http://ceur-ws.org/Vol-699/Paper5.pdf
>
>
> On 06/2…

Denny, the statement-level of granularity you're describing is achieved by
RDF reification. You describe it, however, as a "deprecated mechanism" of
provenance, without backing it up.
Why do you think there must be a better mechanism? Maybe you should take
another look at reification, or lower your…
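
Since reification keeps coming up, here is a small sketch of what classic RDF
reification looks like in practice, again assuming Python with rdflib; every
URI and value below is invented purely for illustration.

    from rdflib import Graph, Namespace, BNode, Literal, URIRef, RDF

    EX = Namespace("http://example.org/")
    g = Graph()

    # The reified statement gets its own node...
    stmt = BNode()
    g.add((stmt, RDF.type, RDF.Statement))
    g.add((stmt, RDF.subject, EX.Berlin))
    g.add((stmt, RDF.predicate, EX.population))
    g.add((stmt, RDF.object, Literal(3500000)))

    # ...to which statement-level provenance can be attached.
    g.add((stmt, EX.source, URIRef("http://example.org/census-2011")))

    print(g.serialize(format="turtle"))
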
John,
I pretty much second your concerns.
Do you know Edge Side Includes (ESI)? I was thinking about using them
with XSLT and Varnish to compose pages from remote XHTML fragments.
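
For readers who have not used ESI: the page template carries <esi:include>
tags, and the cache (Varnish, in the setup mentioned here) replaces each tag
with the referenced fragment. Below is a rough sketch of that substitution in
plain Python, with made-up URLs, only to show the mechanism; in practice
Varnish would do this at the edge, not application code.

    import re
    import requests

    # A made-up XHTML template with two ESI include tags.
    TEMPLATE = """<html><body>
    <esi:include src="http://example.org/fragments/header.xhtml"/>
    <p>Main content rendered locally.</p>
    <esi:include src="http://example.org/fragments/footer.xhtml"/>
    </body></html>"""

    def fetch_fragment(match):
        # Fetch the remote XHTML fragment referenced by the include tag.
        return requests.get(match.group(1), timeout=5).text

    page = re.sub(r'<esi:include src="([^"]+)"/>', fetch_fragment, TEMPLATE)
    print(page)
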
Regarding scalability -- I can only see these possible cases: either
Wikidata will not have any query language, or i…