Dear Vlad,
Ordering claims on a page as you suggest would not work well, since
several other orders must take precedence over the order you suggest.
First of all, statements are grouped by property and you don't want to
change this. Hence, you cannot use the order across statements of
on Labs. I would like to use the central dump on labs
instead of downloading my own copy each time, but right now this delays
dump processing further. I was wondering who is providing the central
entity dumps on labs.
Cheers,
Markus
On 10.05.2016 12:05, Markus Krötzsch wrote:
Pushing this up a bit again. The 9 May dump is not available on labs
yet. There is just the empty directory
/public/dumps/public/wikidatawiki/entities/20160509/
I really wonder why it might be taking so long.
Markus
On 02.05.2016 21:36, Markus Kroetzsch wrote:
Hi,
I noticed that there is
Hi,
Yes, 31 days is indeed quite long.
Another workaround for some people might be to use the API
(action=wbgetentities) instead. However, this was not so easy for me
since my JavaScript application complains about the cross-site request
here even though this action is only for reading and
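Since the message mentions action=wbgetentities as a read-only workaround, here is a minimal sketch of building such a request, assuming the standard api.php endpoint. Adding origin=* asks the API to send CORS headers for anonymous reads, which is the usual way around the cross-site complaint in a browser-based JavaScript client:

```python
from urllib.parse import urlencode

API_ENDPOINT = "https://www.wikidata.org/w/api.php"

def wbgetentities_url(ids, props=("labels", "claims")):
    """Build a read-only wbgetentities request URL.

    origin=* asks the API to emit CORS headers for anonymous
    (unauthenticated) requests, so a browser client can read
    data cross-site without credentials.
    """
    params = {
        "action": "wbgetentities",
        "ids": "|".join(ids),
        "props": "|".join(props),
        "format": "json",
        "origin": "*",
    }
    return API_ENDPOINT + "?" + urlencode(params)

# The resulting URL can be fetched with any HTTP client:
url = wbgetentities_url(["Q42", "Q64"])
```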
Hi,
I found that Special:EntityData returns outdated JSON data that is not
in agreement with the page. I have fetched the data using wget to ensure
that no browser cache is in the way. Concretely, I have been looking at
https://www.wikidata.org/wiki/Special:EntityData/Q17444909.json
where I
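A small sketch of the kind of staleness check described above. The "lastrevid" field name is taken from the entity JSON format, and the no-cache header is only an assumption about what might get past intermediate caches:

```python
import json
from urllib.request import Request, urlopen

def entity_revision(doc):
    """Return (entity id, lastrevid) from a Special:EntityData JSON document."""
    ((qid, entity),) = doc["entities"].items()
    return qid, entity.get("lastrevid")

def fetch_entity(qid):
    # Sending Cache-Control: no-cache is only an attempt to bypass
    # intermediate (Squid/Varnish) caches; if a stale copy is served
    # server-side, comparing lastrevid against the page history is
    # the reliable way to detect it.
    req = Request(
        "https://www.wikidata.org/wiki/Special:EntityData/%s.json" % qid,
        headers={"Cache-Control": "no-cache"},
    )
    return json.loads(urlopen(req).read().decode("utf-8"))
```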
objects in Openstreetmap, I
would think they are interesting enough for Wikidata.
Polyglot
2016-02-13 23:13 GMT+01:00 Markus Krötzsch
<mar...@semantic-mediawiki.org>:
Hi Jo,
You are searching for an item that is assigned to the article
Hi Daniel,
I feel that this tries to evade the real issue by making formal rules
about what kind of "breaking" you have to care about. It would be better
to define "breaking change" based on its consequences: if important
services will stop working, then you should make sure you announce it
Hi David,
Those are the ids of statements. They are formed using the statement's
UUID, which typically uses the item id (sometimes with a lower-case "q")
as its first part. However, the exact form of the IDs should not be used
to find out what the thing is or which item it belongs to: all of
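The id structure described above can be illustrated with a tiny helper; the example GUID in the test is hypothetical, and, per the caveat in the message, the prefix must not be trusted as the owning item id:

```python
def split_statement_id(statement_id):
    """Split a statement id like 'Q2$5627445f-...' into (prefix, uuid).

    Per the caveat above: the prefix often equals the owning item id
    (sometimes with a lower-case 'q'), but the exact form is not
    guaranteed, so treat the whole id as opaque when deciding which
    item a statement belongs to.
    """
    prefix, _, rest = statement_id.partition("$")
    return prefix, rest
```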
On 01.09.2015 05:17, Stas Malyshev wrote:
Hi!
I would have thought that the correct approach would be to encode these
values as gYear, and just record the four-digit year.
While we do have a ticket for that
(https://phabricator.wikimedia.org/T92009) it's not that simple since
many triple
On 01.09.2015 16:57, Thiemo Mättig wrote:
Hi,
> I now identified another format for API warnings [...] from action
"watch"
I'm not absolutely sure, but I think this is not really another format.
The "warnings" field contains a list of modules. For each module you can
either have a list of "messages", or an "html" key is
inserted. I got as far as ApiResult.php, where messages end up being
added in addValue(). It seems that this is the same for all modules,
more or less. I lost the trace after this. I have no idea what happens
with the thus "added" mes
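A sketch of walking the warning structure as described here, assuming the "messages"/"html" field names mentioned in the thread:

```python
def collect_warnings(response):
    """Flatten API warnings into (module, text) pairs.

    Assumes the structure described in the thread: a "warnings" map
    from module name to an object carrying either a "messages" list
    or an "html" string.
    """
    out = []
    for module, body in response.get("warnings", {}).items():
        if "messages" in body:
            for msg in body["messages"]:
                out.append((module, msg.get("name", str(msg))))
        elif "html" in body:
            out.append((module, body["html"]))
    return out
```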
A partial answer would also be helpful (maybe some of my questions are
more tricky than others).
Thanks,
Markus
On 28.08.2015 10:41, Markus Krötzsch wrote:
Hi,
I am wondering how errors and warnings are reported through the API, and
which errors and warnings are possible. There is some documentation on
Wikidata errors [1], but I could not find documentation on how the
warning messages are communicated in JSON. I have seen structures like this:
Hi,
How do you delete, say, the English label of an entity via wbeditentity?
I could not find documentation on this. Whatever the answer, I guess it
is the same for descriptions, right?
How about aliases? I know that writing one English alias will delete all
existing aliases, but how can
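For the label-deletion question, one payload shape to try (an assumption, not confirmed in this thread): an entry carrying a "remove" key inside the wbeditentity data argument, with descriptions said to follow the same pattern:

```python
import json

def remove_label_payload(lang):
    """Build the 'data' argument for wbeditentity that drops one label.

    Assumption (not confirmed in this thread): an entry carrying a
    "remove" key deletes the corresponding label; descriptions are
    said to follow the same pattern.
    """
    return json.dumps({"labels": {lang: {"language": lang, "remove": ""}}})
```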
Hi,
I wondered why wbeditentity has a parameter bot. The documentation
says that this will mark the edit as a bot edit, but only if the user is
in the bot group. In other words, users in the bot group can use this
parameter to decide if they want to have their API-based edit flagged as
bot
to simplify the interface.
Cheers,
Markus
Cheers
Addshore
On 26 August 2015 at 10:51, Markus Krötzsch
mar...@semantic-mediawiki.org
wrote:
Hi,
I wondered why wbeditentity has a parameter bot. The documentation
says that this will mark the edit
Hi,
I missed this request. The formatting features of wbformatvalue are
rather specific to Wikibase/Wikidata. For example, all URLs are
formatted in HTML as an "a" tag with nofollow set and several
CSS classes added. I think most applications will want their own, direct data
format that they
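To illustrate the point about applications wanting their own format: a minimal sketch that recovers the plain URL from the anchor markup; the attribute values in the example are illustrative, not verbatim wbformatvalue output:

```python
from html.parser import HTMLParser

class HrefExtractor(HTMLParser):
    """Recover plain URLs from anchor markup, for applications that
    want the raw value instead of HTML."""

    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.hrefs.append(value)

# The attribute values below are illustrative, not verbatim output.
parser = HrefExtractor()
parser.feed('<a rel="nofollow" class="external free" '
            'href="https://example.org">https://example.org</a>')
```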
Hi Bo,
Thanks for the information. More query services are never a bad thing,
and I agree that property graph is closest to Wikidata in terms of data
model. However, in our own tests with Neo4j (at TUD, not at WMF), we were
not so impressed by raw query performance. In particular, there seemed
On 11.03.2015 00:44, Magnus Manske wrote:
To be fair, the discussion is not "what will we do till the end of
time", rather "what do we start with".
I know neither SPARQL nor the data storage engine terribly well, but it
would not be helpful if the service could be DoSed by innocent-looking
queries,
On 11.03.2015 05:59, Tom Morris wrote:
On Tue, Mar 10, 2015 at 6:17 PM, Markus Krötzsch
mar...@semantic-mediawiki.org
wrote:
TL;DR: No concrete issues with SPARQL were mentioned so far; OTOH
many *simple* SPARQL queries are not possible in WDQ
Hi Daniel,
I can understand your thoughts to some extent, but they seem to apply to
any potential solution. Committing to a primary query interface will
always be, well, a commitment. Because of this, I think the big
advantage of SPARQL is exactly that it is a technology standard that is
TL;DR: No concrete issues with SPARQL were mentioned so far; OTOH many
*simple* SPARQL queries are not possible in WDQ; there is still time to
restrict ourselves -- let's give SPARQL a chance before going back.
Hi Daniel,
This discussion is way too abstract. I am missing hard facts about the
On 06.03.2015 15:05, Nikolas Everett wrote:
...
Regarding Markus' points:
...
The obvious question that comes from this point is: why not use
Virtuoso? It is exposed publicly all over the place; you can talk to
the DBpedia folks, who do it, and this is a very compelling argument
As long as
Hi,
Thanks for all the work. I think this is a sensible decision. What
confused me at first is that I did not know BlazeGraph (and when you
google for it, the first thing is an unrelated sourceforge project). An
important insight for me thus was that BlazeGraph is the project that
has up
Hi Thiemo,
Thanks for the background information. I agree with these default
choices -- seems useful. Just two comments:
There may be an option to enter the precision as a number, if requested,
but I don't think this is necessary at this point.
I think the point is simply that it is not
On 12.01.2015 15:59, Daniel Kinzler wrote:
Am 12.01.2015 14:48, schrieb Markus Krötzsch:
Anybody? If the answer is "we have not thought about this yet" then it would be good
to know this, too. Any considerations that have led to the current
implementation are of interest.
The range is limited
Dear all,
I am happy to announce the third release of Wikidata Toolkit [1], the
Java library for programming with Wikidata and Wikibase. The main new
features are:
* Full support for the (now) standard JSON format used by Wikidata
* Huge performance improvements (decompressing and parsing
I've seen three formats proposed so far:
(1) Map + order fields (current format)
(2) Arrays
(3) Map + sort-index inside each map item
The last was proposed by Fredo; I think it got lost a bit. The idea
there would be to store something like "index": 1 in the objects that
are inside the map to
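The three proposals can be sketched with hypothetical label data; the field names "labels-order" and "index" are placeholders for illustration, not the proposals' actual wording:

```python
# Hypothetical label data; field names "labels-order" and "index" are
# placeholders for illustration, not the proposals' actual wording.

# (1) Map + separate order field (current format)
map_plus_order = {
    "labels": {"en": {"language": "en", "value": "x"},
               "de": {"language": "de", "value": "y"}},
    "labels-order": ["de", "en"],
}

# (2) Plain arrays: order is positional
arrays = {"labels": [{"language": "de", "value": "y"},
                     {"language": "en", "value": "x"}]}

# (3) Map + sort index inside each map item (Fredo's proposal)
map_plus_index = {
    "labels": {"en": {"language": "en", "value": "x", "index": 1},
               "de": {"language": "de", "value": "y", "index": 0}},
}
```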
Hi all,
On 20/04/14 01:20, Jeroen De Dauw wrote:
Hey all,
I am happy to announce the 0.7.3 release of Wikibase DataModel.
\o/
On that note, I can also add that I am about to update the documentation
of the data model, so that we also have a written account of these
things. Hopefully this
On 03/03/14 15:50, addshorewiki wrote:
This should probably be directed at
https://github.com/Wikidata/Wikidata-Toolkit/issues ?
Yes, details related to the ongoing development of Wikidata Toolkit
should be discussed elsewhere. In this particular case, part of the
problem can probably be
On 26/02/14 18:41, Jeroen De Dauw wrote:
Hey,
you can create claims with wbsetclaim. But you would need to create
a valid
GUID [1] yourself. The claim-GUID you send with your request needs to be
entityId$GUID (e.g. Q2$5627445f-43cb-ed6d-3adb-760e66bd17ee).
Uh, didn't we fix this a long
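The GUID convention quoted above (entityId$UUID) can be sketched as:

```python
import uuid

def new_claim_guid(entity_id):
    """Create a claim GUID of the entityId$UUID form quoted above,
    e.g. 'Q2$5627445f-43cb-ed6d-3adb-760e66bd17ee'."""
    return "%s$%s" % (entity_id, uuid.uuid4())
```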
Hi Zoltán,
We also plan to support writing API access in Wikidata Toolkit soon [1].
Wikidata Toolkit already has a Java implementation of all Wikidata data
objects, so one can represent statements and claims. We also will soon
start working on JSON serialization of these objects (which you
Hi Fredo,
On 20/02/14 19:59, Fredo Erxleben wrote:
Hello everybody,
Since I am working on the conversion from the dump files to the wdtk
data model, I will have to take apart the refs section of the JSON
representing the stored items.
Now a refs-section most likely looks like this:
(Tried to