it (logs suggest there's virtually no usage
now, but that can change of course), please use the endpoint above.
[1]
https://www.mediawiki.org/wiki/Wikidata_Query_Service/User_Manual#DCAT-AP
--
Stas Malyshev
smalys...@wikimedia.org
___
Wikidata mailing list
or plan to use it and what for. Please either answer here
or, even better, in the task[2] on Phabricator.
[1]
https://www.mediawiki.org/wiki/Wikidata_Query_Service/User_Manual#DCAT-AP
[2] https://phabricator.wikimedia.org/T228297
--
it of course are
welcome.
--
Stas Malyshev
smalys...@wikimedia.org
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata
Hello all!
Here is (at last!) an update on what we are doing to protect the
stability of Wikidata Query Service.
For 4 years we have been offering Wikidata users the Query Service, a
powerful tool that allows anyone to query the content of Wikidata,
without any identification needed. This
/other/wikidata/
>
> 20190624.json.gz returns a File Not Found.
The dump for that week was not produced due to an error. Please wait for
the next week's dump, which should happen quite soon, as I understand.
--
Hi!
On 6/25/19 11:17 PM, Ariel Glenn WMF wrote:
> I think the issue is with the 0624 json dumps, which do seem a lot
> smaller than previous weeks' runs.
Ah, true, I didn't realize that. I think this may be because of that
dumpJson.php issue, which is now fixed. Maybe rerun the dump?
--
media.org/wikidatawiki/entities/20190617/)
But looking at it now, I see wikidata-20190617-all.json.gz is
comparable with the last week, so looks like it's fine now?
--
CCing Ariel to take a look. Probably needs to be
re-run or we can just wait for the next one.
--
rowse/JENA-1077
I will adjust the code in Blazegraph accordingly, so WDQS will comply
with this practice (i.e. result format will be as it was before). This
will be implemented in coming days.
Sorry again for the disruption.
--
te further. You can also watch
https://phabricator.wikimedia.org/T225996 for final resolution of this.
[1] https://phabricator.wikimedia.org/T225996
[2] https://www.w3.org/TR/rdf11-concepts/#section-Graph-Literal
--
ies/latest-all.nt.gz would
still be pointing to the right files, and if all you care is downloading
the latest dump, using these links is always recommended.
We will send another message once the change has been implemented and
deployed.
Thanks,
--
According to
https://meta.wikimedia.org/wiki/User-Agent_policy, all clients should
identify with a valid user agent. We've started enforcing it recently, so
maybe this tool has that issue. If not, please provide the data above.
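As a client-side sketch of the policy, a descriptive User-Agent can be attached like this (the tool name and contact details are hypothetical placeholders; no request is actually sent):

```python
import urllib.request

# Per https://meta.wikimedia.org/wiki/User-Agent_policy, clients should
# identify themselves with a descriptive User-Agent including contact
# details. The tool name and contact below are placeholder assumptions.
USER_AGENT = "MyWikidataTool/1.0 (https://example.org/mytool; mytool@example.org)"

def make_request(url):
    # Attach the custom User-Agent instead of urllib's default, which
    # enforcement of the policy may reject. This only builds the request.
    return urllib.request.Request(url, headers={"User-Agent": USER_AGENT})

req = make_request("https://query.wikidata.org/sparql?query=ASK%20%7B%7D")
```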
--
e queries to ensure they are fast and produce proper
results on the setup you propose, then it can be done. Good luck!
--
efer to help than to ban). Otherwise, we'd be forced to put
more limitations on it that will affect everyone.
--
one of them every time is
unfeasible. Not to mention this JSON is not an accurate representation
of the RDF data model. So I don't think it is worth spending time in
this direction... I just don't see how any query engine could work with
that storage.
--
hat.
Replication could certainly be useful, I think, if it's faster to update a
single server and then replicate than to simultaneously update all servers
(which is what is happening now).
--
st majority of the Wikidata Query Service case.
Would be interesting to see if we can apply anything from the article.
Thanks for the link!
--
kends - KV, document, relational, column store,
whatever you have. The tricky part starts when you need to run millions
of queries on 10B triples database. If your backend is not optimal for
that task, it's not going to perform.
--
aybe getting
some numbers might be useful.
Thanks,
--
We use separate data store for search (ElasticSearch) and probably will
have to have separate one for queries, whatever would be the mechanism.
Thanks,
--
interested if somebody took it upon themselves to model Wikidata in terms of
ArangoDB documents, load the whole data and see what the resulting
performance would be, I am not sure it would be wise for us to invest
our team's - very limited currently - resources into that.
Thanks,
--
Hi!
> Yes, the api is
> at https://www.wikidata.org/w/api.php?action=query&list=search&srsearch=Bush
There's also
https://www.wikidata.org/w/api.php?action=wbsearchentities&search=Bush&language=en&format=json
This is what completion search in Wikidata is using.
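For illustration, that completion-search call can be assembled like this (parameter names as in the wbsearchentities module; no request is actually sent here):

```python
from urllib.parse import urlencode

def wbsearchentities_url(term, language="en"):
    # Build the wbsearchentities request that completion search uses.
    params = {
        "action": "wbsearchentities",
        "search": term,
        "language": language,
        "format": "json",
    }
    return "https://www.wikidata.org/w/api.php?" + urlencode(params)

url = wbsearchentities_url("Bush")
```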
--
Hi!
> and if I enable any of the FILTER lines, it returns 0 results.
> What changed / Why ?
Thanks for reporting, I'll check into it.
--
ut how to do.
--
database and be hosted on the same hardware. This is
especially important for services like Wikidata Query Service where all
data (at least currently) occupies a shared space and can not be easily
separated.
Any thoughts on this?
--
gain I am not sure what's the best way to treat this situation,
since I am not sure how the federation model in SDC works - the code
suggests there should be some kinds of prefixes for entity IDs, but SDC
does not seem to use any.
Any suggestions about the above are welcome.
Thanks,
--
rmation in smaller chunks using LIMIT/OFFSET clauses.
Note that this doesn't speed up the query itself.
4. Use LDF server:
https://www.mediawiki.org/wiki/Wikidata_Query_Service/User_Manual#Linked_Data_Fragments_endpoint
Depending on what data you need, there would probably be options.
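The LIMIT/OFFSET chunking mentioned above can be sketched like this (the query body is a placeholder; each page is a separate request, so this helps with result size, not query speed):

```python
def paged_queries(body, page_size=1000, pages=3):
    # Wrap a SPARQL query body in LIMIT/OFFSET clauses to fetch results
    # in smaller chunks. Each chunk re-runs the query on the server,
    # which is why this does not make the underlying query any faster.
    return [
        f"{body}\nLIMIT {page_size} OFFSET {i * page_size}"
        for i in range(pages)
    ]

queries = paged_queries("SELECT ?item WHERE { ?item wdt:P31 wd:Q146 }")
```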
--
Hi!
There is a discussion going on in W3C SPARQL 1.2 Community Group about
the improvements in SPARQL language. May be interesting to people that
are using SPARQL and those that may have some ideas of how to improve it.
-- Forwarded message -
From: *Andy Seaborne*
Hi!
> Any day can be Tuesday if you really want.
Thanks to Antoine and Brad for figuring out the hhvm crash in
https://phabricator.wikimedia.org/T216689. Finding the cause of random
crashes can be very hard and frustrating, but they figured it out and
resolved it quickly. Thanks!
--
re not included.
Could you provide specific properties and preferably also some Q-ids for
which you expected to find direct-normalized props but didn't?
--
ave a
comment from a developer within X time" - but unless X is very large, I
think it will be unsatisfactory, since getting "yes, it's a very
important bug, thanks for submitting it" comment without the bug being
fixed is IMHO no better than getting n
"WMF is totally wrong in doing this!" is not realistic. Reasonable
people can and do disagree. Reasonable people also can work through
disagreements and find common interests and ways to contribute to mutual
benefit. I think that's what we're trying to do here.
--
"Need volunteer" tag that I think can be used for that.
--
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l
there might be
differences, e.g. for different logged-in users.
Thanks,
--
ady found three bugs we totally missed in our code, just by
upgrading to it.
--
? Would
appreciate any help/pointers to docs/examples.
Thanks,
--
___
QA mailing list
QA@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/qa
that having a
> tutorial that explains and teaches the Query Service will help expand
> Wikidata to new audiences worldwide.
This sounds great, thank you!
--
. So the request above applies to the search parts of the
WikibaseLexeme code also.
If you have any questions/comments, please feel free to ask me, on the
lists or on IRC.
Thanks,
--
avoid encouraging bad code.
> Thanks to you all, and again - sorry for the half message.
This looks useful. I think PHPStorm has this check built in, but having
it in sniffs too wouldn't be a bad thing. I've seen such things happen
when refactorin
ust forgot to create a branch.
One useful command for me would be "check out a change and put it in a
branch named after the topic of that change, overwriting it if it exists".
This allows easy syncing of patches where more than one person
contributes to them.
Thanks,
--
lesson shouldn't be "let's find somebody to punish". I am
not sure if that was the intent, but it kinda felt this way to me. And I
don't think this is warranted either in general or in this particular case.
Thanks,
--
but
> reads are unaffected as far as we can tell.
The incident report for this issue is here:
https://wikitech.wikimedia.org/wiki/Incident_documentation/20190110-WDQS
It will be updated if we have any new developments or new information.
As of now, all servers are working normally.
--
filter(lang(?label)="fr")
> }
Could you describe in a bit more detail what you're trying to do here?
Doing two service calls is not a pattern one would commonly use... It
can be slow if the query optimizer misunderstands such a query, too. I feel
I'd have a bit more insight if I understoo
he
same result. Which depends on the query. So I'd suggest providing some info
about the queries and specific issues you're having, and then we could
see if it's possible to improve it.
--
Hi!
> Also given that it uses oresscores, we recently fixed some performance
> issues caused by it. Do you still have issues with it?
Yes, the issues I have listed still happen. My API calls do not use
ORES. E.g. see:
https://logstash.wikimedia.org/goto/63db4ce68fb5da3cdc7828150de10c59
--
it, I see it
every time I run it on Labs (where Kafka stream is not available). So I
think the issues with RC API on wikidata are still alive.
There's also a parallel issue of
https://phabricator.wikimedia.org/T207718 with RDF fetching, which also
still happens.
--
g to the users as reusing the Q.
No, that would be confusing. If OSM wants its own data type, because a Q item
does not fit - e.g. OSM doesn't want descriptions and sitelinks - then
it should use a separate letter, like MediaInfo uses M. But using L
would not be smart since then this data would not integrate well with
rectly there.
--
w, and RDF data is generated without
the beta prefix. Please tell me if you notice any problems or have any
questions.
--
___
Wikidata-tech mailing list
Wikidata-tech@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata-tech
at would be a bad thing. But I don't think anything we are
discussing here would lead to that happening.
--
h for all of them, so you may never get
a chance to find the basset horn. Also, of course, querying big
downstream hierarchies takes time too, which means a performance hit.
--
class, so we'd have to constantly
update the hierarchy. But this is more of a technical challenge, which
will come after we have some solution for the above.
--
esult, i.e.
either open source code (with reusable license, i.e. no patents banning
reuse etc.) or open publication with freely accessible algorithms and
outcomes (or both?) I don't think it would make sense for us to
cooperate if we'd be unable to benefit from the results.
--
ow
> from OCLC, so if that's not possible, then, yeah, there's no point.
Ah yes, you can combine! Just call the Mediawiki API from inside a SPARQL
query and combine with other clauses:
https://www.mediawiki.org/wiki/Wikidata_Query_Service/User_Manual/MWAPI
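A sketch of what such a combined query might look like, built as a string here (based on the MWAPI manual page above; the "Search" API name and search term are illustrative):

```python
def mwapi_search_query(term):
    # Combine a Mediawiki full-text search with ordinary SPARQL clauses
    # via the wikibase:mwapi SERVICE, as described in the MWAPI manual.
    return (
        'SELECT ?item ?itemLabel WHERE {\n'
        '  SERVICE wikibase:mwapi {\n'
        '    bd:serviceParam wikibase:endpoint "www.wikidata.org" ;\n'
        '                    wikibase:api "Search" ;\n'
        f'                    mwapi:srsearch "{term}" .\n'
        '    ?item wikibase:apiOutputItem mwapi:title .\n'
        '  }\n'
        '  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }\n'
        '}'
    )

q = mwapi_search_query("basset horn")
```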
--
entially) I
decided not to pursue this for now. We'd basically have to duplicate the
work we've done in Mediawiki to compose proper Elastic queries, parse
results, etc. and the best we'd have is the same thing we already have
with Mediawiki API search. So I decided not to duplicate efforts f
essing them would
not be very useful.
--
omment in the task) if you have any questions or
concerns.
[1] https://www.mediawiki.org/wiki/Wikibase/Indexing/RDF_Dump_Format#Header
--
out moving
it to Review status. Sometimes several people may need to cooperate on
WIP patch before it is ready to go. Of course, one can add a reviewer and
then move back to WIP, but it'd be nicer to avoid extra actions.
Thanks!
--
at highlighting issues that volunteers should concentrate
on is a valid need. But I don't think reusing the same mechanism as
ongoing development tracking is using now is going to be good. It may
get very confusing. We should try to find another way to
asks,
Invalid would be when the task is describing something that can not be
done at all (at least by us), or would not produce any desirable result.
Declined is when it describes a valid task in general, but we are not
going to do it because of reasons.
--
the ones that are part of
the ongoing development. And document it in the lifecycle document.
--
are
experienced in ontology creation and maintenance.
> to be chosen that then need to be applied consistently? Is this
> something the community can do, or is some more active direction going
> to need to be applied?
I think this is very much something that the
ot
really enforce any of the rules with regard to classes, property
domain/ranges, etc. and have frequent and numerous exceptions to those.
--
o, as I typed this email.>
> and now it does appear.
Yes, this is how it should work. There were no changes lately, AFAIK,
but it is possible that you hit some glitch or maintenance on your
previous search. If that happens again, please tell me when and with
which search string/URL, I'll try
Hi!
> When will stemming be supported in Search?
In general, I think it already should be, for fields and contexts that
use appropriate analyzers, but I'd like to hear more details:
1. Which search?
2. What are you looking for, i.e. the search string?
3. What do you expect to find?
--
rwise (ccing Markus in case he knows more on the topic).
--
u think it's a dataset others may want to reuse, tabular data on
Commons may be a venue: https://www.mediawiki.org/wiki/Help:Tabular_Data
--
erated but automatic, but still this is quality-related data
which is linked to page content (and different for each revision AFAIU).
Currently if I understand right it has its own storage.
--
about good process, maybe it's because you're guilty" - that's
how this comment sounded to me - is not right.
--
etter to
search engines using well-known metadata vocabularies, I think it would
be a very welcome effort.
--
this: https://phabricator.wikimedia.org/T163642 ?
This is the task to make strings searchable _without_ haswbstatement
keyword.
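For reference, the haswbstatement keyword mentioned here looks like this in a search API call (the property and value are illustrative):

```python
from urllib.parse import urlencode

def statement_search_url(prop, value):
    # Full-text search restricted to entities carrying a given statement,
    # using the CirrusSearch haswbstatement keyword (e.g. P31=Q5 for
    # "instance of: human"). The task above is about making strings
    # searchable without needing this keyword.
    params = {
        "action": "query",
        "list": "search",
        "srsearch": f"haswbstatement:{prop}={value}",
        "format": "json",
    }
    return "https://www.wikidata.org/w/api.php?" + urlencode(params)

url = statement_search_url("P31", "Q5")
```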
--
der "Wikidata" + "Discovery-Search".
There are multiple tasks for it, but if you want to add any, please feel
welcome to browse and add.
--
s might be good). So I'd say public
record while the ban is active is a must, but after that expunging the
record is fine.
--
ific? I'm not sure I see where this is public.
I think it's this one:
https://lists.wikimedia.org/pipermail/wikitech-l/2018-August/090490.html
--
temp.
banned from date A till date B because of comments incompatible with
CoC" doesn't seem to hurt anyone.
--
pecifying it in the same place where the action is described, as per
above. Again, establishing and advertising such place should be
something that CoCC does.
It is clear to me - and I think to anybody seeing the volume of
discussion this generated - that we need improvement here. We can do
better and
me rules around removing
tasks from "High" if it's clear we're not doing it anytime soon.
--
area, not yet sure how
to do it though.
--
ndexing "cast member" would
get you a step closer, but only a tiny step and there are a number of
other steps to take before that can work.
--
Hi!
> The top 1000
> is:
> https://docs.google.com/spreadsheets/d/1E58W_t_o6vTNUAx_TG3ifW6-eZE4KJ2VGEaBX_74YkY/edit?usp=sharing
This one is pretty interesting - how do I extract this data? It may be
useful independently of what we're discussing here.
--
tain random
> text (esp. natural language) since they are prone to be unique and
> impossible to search.
Yes, we definitely should not do that. I tried to exclude such
properties but if you notice more of them, let's add them to exclusion
config.
--
Hi!
> * I would really like dates (mainly, born/died), especially if they work
> for "greater units", that is, I search for a year and get an item back,
> even though the statament is month- or day-precise
What would be the use case for this?
--
ad to hear thoughts on the matter.
Thanks,
--
ery.wikidata.org/sparql.
Thank you! I will take care of it in the next update (next week).
--
ly should write the parameter as:
bd:serviceParam mwapi:search "\"goat cheese\"" .
Probably something like this: http://tinyurl.com/y9fdva5w
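The escaping above can also be produced programmatically; a small sketch (the helper name is mine):

```python
def sparql_phrase_literal(phrase):
    # Escape the inner quotes so the Mediawiki API receives an
    # exact-phrase search, then wrap the whole thing as a single
    # SPARQL string literal: "goat cheese" -> "\"goat cheese\""
    return '"\\"' + phrase + '\\""'

# Reconstructs the serviceParam line from the message above.
param = 'bd:serviceParam mwapi:search ' + sparql_phrase_literal("goat cheese") + ' .'
```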
--
Hi!
> Well... it's not being voted on because, despite being six days till
> the originally planned feature freeze date, neither sponsor has opened
> a vote.
And I think it's a good thing. Both for selfish reasons (I haven't
actually had the time to read the proposal and the mail thread
lly CirrusSearch uses some configs without 'wg' prefix, but in
this case it doesn't seem to be an issue.
> Open for review:
> https://gerrit.wikimedia.org/r/#/q/project:operations/mediawiki-config+topic:cleanup+is:open
This one produces 404.
--
ed
> can't be managed, at least with Wikimedia's current resources.
It's not Wikimedia that will be shouldering the burden, it's every user
of Wikimedia data sets.
--
ut you have to inform the user somehow about this.
I think the easiest way would be to change the error message and add a
pointer to a page which describes the issue and how to work around it. I
imagine changing the error message in phab shouldn't be too hard?
--
getting more information
Wouldn't it be easy to just log out and read any task (or even use
incognito mode/private browsing in the browser)? It is certainly a small
inconvenience, but I am not sure it is very important, given a very
simple workaround.
--
ormat links to Forms? Leszek, do you have any information on this?
Thanks,
--
d getEntityIdForTitle
uses content model to get from Title to ID. So, I could duplicate all
this code but I don't particularly like it. Could we fix
HtmlPageLinkRendererBeginHookHandler instead maybe?
--
past, conjunctive", derived from multiple Q-ids.
Yes, of course.
> Again, I don't think any highlighting is needed.
Not strictly speaking needed, but might be nice.
--
So, does this display look like what we want to produce for Lexemes? Is
there something that needs to be changed or improved? Would like to hear
some feedback.
Thanks,
--
em to be a pressing concern that would
do any harm if not brought into compliance right now. So let's see if we
can reach some consensus here.
--
go by default for Special:Search:
1. Search in Items only
2. Search in Items + Properties
3. Search in Items + Properties + Lexemes
4. Search in Items + Lexemes
5. Any of the above plus some of the article spaces (i.e. Wikidata or Help)
This requires mixed search working (except for 1 and 2) but is a
s
/disabled by default.
Thanks,
--
ing said that, I am curious - what exactly are you doing with this
data set? Why do you need a list of all humans - how is this list going to
be used? Knowing that may help to devise a better specialized strategy for
achieving the same.
--