[Xmldatadumps-l] Re: Is there a problem with the wikidata-dump?

2024-01-13 Thread Platonides
I would probably open a task to have wget available in the Kubernetes
cluster, and another, low-priority one, to investigate why the connection
gets dropped between Toolforge and dumps.w.o.

On Sat, 13 Jan 2024 at 08:42, Wurgl  wrote:

> Hello!
>
> wget was the tool I was using in the jsub environment, but wget is not
> available any more in Kubernetes (with toolforge jobs start …) :-(
>
> $ webservice php7.4 shell
> tools.persondata@shell-1705135256:~$ wget
> bash: wget: command not found
>
>
> Wolfgang
>
>
> On Sat, 13 Jan 2024 at 02:20, Platonides <
> platoni...@gmail.com> wrote:
>
>> Gerhard said that for him the downloading job ran for about 12 hours. It
>> seems the connection was closed.
>> I wouldn't be surprised if this was facing a similar problem as
>> https://phabricator.wikimedia.org/T351876
>>
>> With such a long download time, it isn't that strange that there could be
>> connection errors (still something to look into, though; Toolforge-to-prod
>> shouldn't be suffering that).
>>
>> wget (used by Gerhard) retries automatically; perhaps curl doesn't, and is
>> thus more susceptible to these errors.
>>
>> Try changing your job to
>> wget -O -
>> https://dumps.wikimedia.org/wikidatawiki/latest/wikidatawiki-latest-pages-articles-multistream.xml.bz2
>> | bzip2 -d | tail
>>
>>
___
Xmldatadumps-l mailing list -- xmldatadumps-l@lists.wikimedia.org
To unsubscribe send an email to xmldatadumps-l-le...@lists.wikimedia.org


[Xmldatadumps-l] Re: Is there a problem with the wikidata-dump?

2024-01-12 Thread Platonides
Gerhard said that for him the downloading job ran for about 12 hours. It
seems the connection was closed.
I wouldn't be surprised if this was facing a similar problem as
https://phabricator.wikimedia.org/T351876

With such a long download time, it isn't that strange that there could be
connection errors (still something to look into, though; Toolforge-to-prod
shouldn't be suffering that).

wget (used by Gerhard) retries automatically; perhaps curl doesn't, and is
thus more susceptible to these errors.

Try changing your job to
wget -O -
https://dumps.wikimedia.org/wikidatawiki/latest/wikidatawiki-latest-pages-articles-multistream.xml.bz2
| bzip2 -d | tail
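
If curl is indeed giving up after the first dropped connection, a
retry-and-resume loop can also be written in Python. A minimal sketch,
assuming the requests library is available in the job image; only the URL
comes from this thread:

import os
import time

import requests

URL = ("https://dumps.wikimedia.org/wikidatawiki/latest/"
       "wikidatawiki-latest-pages-articles-multistream.xml.bz2")

def download(url, path, max_retries=10):
    """Download with resume: after a dropped connection, retry with an
    HTTP Range request starting at the bytes already on disk."""
    for attempt in range(max_retries):
        done = os.path.getsize(path) if os.path.exists(path) else 0
        headers = {"Range": "bytes=%d-" % done} if done else {}
        try:
            with requests.get(url, headers=headers, stream=True,
                              timeout=60) as r:
                r.raise_for_status()
                with open(path, "ab") as f:
                    for chunk in r.iter_content(chunk_size=1 << 20):
                        f.write(chunk)
            return
        except requests.RequestException as err:
            print("attempt %d failed: %s; retrying" % (attempt + 1, err))
            time.sleep(30)
    raise RuntimeError("download failed after %d retries" % max_retries)

download(URL, "wikidatawiki-latest-pages-articles-multistream.xml.bz2")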
___
Xmldatadumps-l mailing list -- xmldatadumps-l@lists.wikimedia.org
To unsubscribe send an email to xmldatadumps-l-le...@lists.wikimedia.org


[Cloud] Re: Reminder about the externallinks migration

2023-12-12 Thread Platonides
While I understand the motivation for the change, I think in this case the
label reversing used is more complex than needed. It would have been
simpler (and thus likely more efficient / less bug-prone) to reverse the
whole hostname, storing moc.elpmaxe.www instead of com.example.www.
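
To make the comparison concrete, here is a toy sketch of both schemes (plain
Python, nothing from MediaWiki itself); either one keeps all hosts under the
same domain adjacent in an index:

def reverse_labels(host):
    # The scheme used by the externallinks schema: labels swapped.
    # "www.example.com" -> "com.example.www"
    return ".".join(reversed(host.split(".")))

def reverse_whole(host):
    # The simpler alternative: the whole string reversed.
    # "www.example.com" -> "moc.elpmaxe.www"
    return host[::-1]

assert reverse_labels("www.example.com") == "com.example.www"
assert reverse_whole("www.example.com") == "moc.elpmaxe.www"
# Both transforms are their own inverse:
assert reverse_labels(reverse_labels("www.example.com")) == "www.example.com"
assert reverse_whole(reverse_whole("www.example.com")) == "www.example.com"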

On Mon, 11 Dec 2023 at 15:34, Amir Sarabadani 
wrote:

> Hi Tilman,
> Sorry for the late reply.
>
> Regarding finding the actual link from the row. The recommended way is to
> do processing in code afterwards. That's what MediaWiki does (in
> https://gerrit.wikimedia.org/g/mediawiki/core/+/80790ffc21a49fbe7709eaf5ce634b645798cf47/includes/ExternalLinks/LinkFilter.php#264)
> and you can easily replicate the logic of LinkFilter::reverseIndexes() in
> your programming language of choice. Doing all of data processing in SQL is
> not recommended.
>
> Indeed, these changes are really necessary. For example, with the
> current growth of Wikimedia Commons we will have to resort to more drastic
> actions if its database growth doesn't slow down (see
> https://phabricator.wikimedia.org/T343131 and
> https://phabricator.wikimedia.org/F37157040). Note that database growth
> is not always about the wiki's growth; often it's just heavy use of
> some features of MediaWiki (here, templates and external links).
>
> We will keep in mind to update documentation for further work (and thank
> you for the feedback!). The next will be pagelinks.
>
> Best
> ___
> Cloud mailing list -- cloud@lists.wikimedia.org
> List information:
> https://lists.wikimedia.org/postorius/lists/cloud.lists.wikimedia.org/
>
___
Cloud mailing list -- cloud@lists.wikimedia.org
List information: 
https://lists.wikimedia.org/postorius/lists/cloud.lists.wikimedia.org/


[Cloud] Re: [Cloud-announce] [IMPORTANT] Grid Engine Shutdown Timeline

2023-12-06 Thread Platonides
On Tue, 5 Dec 2023 at 19:13,  wrote:

> * When "grid infrastructure is deleted" on March 14, will there be backups
> of the tools for people who want to migrate them in the future?
>

The "grid infrastructure" is different than the "tools". I expect the tools
would still be there, they would just no longer work if they did so using
the grid.



PS: Thanks for the https://grid-deprecation.toolforge.org link, Magnus. I
had no idea it existed.
___
Cloud mailing list -- cloud@lists.wikimedia.org
List information: 
https://lists.wikimedia.org/postorius/lists/cloud.lists.wikimedia.org/


[Xmldatadumps-l] Re: Need Wikipedia data dump

2023-11-21 Thread Platonides
Hi

You can retrieve those from https://dumps.wikimedia.org/

Regards

On Mon, 20 Nov 2023 at 06:49, venkat ekkuluri 
wrote:

> Hi team,
>
> I need XML data dumps for the Wikipedia website for analysis. Can you
> please give me access to them?
>
>
> Thanks & Regards
> Narayana Ekkuluri
> ___
> Xmldatadumps-l mailing list -- xmldatadumps-l@lists.wikimedia.org
> To unsubscribe send an email to xmldatadumps-l-le...@lists.wikimedia.org
>
___
Xmldatadumps-l mailing list -- xmldatadumps-l@lists.wikimedia.org
To unsubscribe send an email to xmldatadumps-l-le...@lists.wikimedia.org


[List admins] Re: Spam prevention

2023-11-15 Thread Platonides
Hello MusikAnimal

First of all, regarding Google Groups mailing lists: they are somewhat
"elitist".
Supposedly, it is possible to use those mailing lists without a Google
account, but in practice my experience is that it varies between requiring
you to jump through some hoops (*if* you actually find the needed hoops) and
not being possible at all.

Second, if you hosted a Wikimedia mailing list on Google Groups, it would
not be possible to manage the subscription on https://lists.wikimedia.org;
it wouldn't be listed there, nor would its archives...

Not to mention it might have implications to the privacy policy.

As for the actual spam problem: I don't know which list you are referring
to, but is it allowing open posting? Does it need to allow that? Are the
spam messages going through or getting stopped? In my experience, allowing
only subscribers to post to the mailing list is the most effective antispam
measure.

It should be possible to configure SpamAssassin to include the X-Spam-Score
header even for lower scores (and thus allow filtering mails with lower
values), although I would focus first on why it is not giving these messages
a higher score, or on other traits that could allow blocking them.
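
Filtering on that header downstream is then straightforward; a sketch with
Python's standard email module (the 3.0 threshold and filename are made up):

import email
from email import policy

def spam_score(raw_message: bytes) -> float:
    """Return the message's X-Spam-Score header as a float, or 0.0."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    header = msg.get("X-Spam-Score")
    # Some setups append extra annotations after the number.
    return float(header.split()[0]) if header else 0.0

with open("message.eml", "rb") as fh:
    if spam_score(fh.read()) >= 3.0:
        print("would file as spam")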

Regards

PS: I see some outdated pieces on
https://meta.wikimedia.org/wiki/Mailing_lists/Administration
that are no longer applicable to Mailman 3.
___
Listadmins mailing list -- listadmins@lists.wikimedia.org
To unsubscribe send an email to listadmins-le...@lists.wikimedia.org

To request technical changes for a specific list, please instead create a task 
in Phabricator. See https://meta.wikimedia.org/wiki/Mailing_lists

[Xmldatadumps-l] Re: Data about category views or uses

2023-10-17 Thread Platonides
No, that link is about direct page views. Pages beginning with "Categoría:"
are the category pages themselves.

It may be clearer when comparing to a normal page:

https://pageviews.wmcloud.org/pageviews/?project=es.wikipedia.org&platform=all-access&agent=user&redirects=0&range=latest-20&pages=Categor%C3%ADa:Ciencias_naturales|Charles_Darwin

Regards

Miquel Centelles  wrote:

> Thank you very much. Maybe I didn't explain my request correctly. If I'm
> not mistaken, the example you raised corresponds to category pages
> linked to the article pages that have been viewed by the users. That is to
> say: the 101 pageviews of Categoría:Ciencias_Naturales are the sum of the
> views of the article pages categorized in Categoría:Ciencias_Naturales.
>
> The data I want are, directly, views of categories. In fact, my real
> interest is in clickstream data from category pages to article pages.
> But I fear this kind of clickstream data is not provided. Is it?
>
> Regards.
>
> Miquel Centelles
>
> On Sat, 7 Oct 2023 at 18:33, Platonides  wrote:
>
>> Hi Miquel
>>
>> You can use pageviews to view the data for the actual category pages. For
>> example:
>>
>> https://pageviews.wmcloud.org/pageviews/?project=es.wikipedia.org&platform=all-access&agent=user&redirects=0&range=latest-20&pages=Categor%C3%ADa:Ciencias_naturales|Categor%C3%ADa:Historia
>>
>> Does this work for you?
>> (I'm not sure what you want exactly)
>>
>> Regards
>>
>>
>
> --
>
>
> Miquel Centelles
>
> miquel.centel...@ub.edu
>
> https://fima.ub.edu/directori/ficha17
>
> Coordinador Máster de Humanidades Digitales
>
> Facultat d’Informació i Mitjans Audiovisuals. Universitat de Barcelona
>
___
Xmldatadumps-l mailing list -- xmldatadumps-l@lists.wikimedia.org
To unsubscribe send an email to xmldatadumps-l-le...@lists.wikimedia.org


[Xmldatadumps-l] Re: Data about category views or uses

2023-10-07 Thread Platonides
Hi Miquel

You can use pageviews to view the data for the actual category pages. For
example:
https://pageviews.wmcloud.org/pageviews/?project=es.wikipedia.org&platform=all-access&agent=user&redirects=0&range=latest-20&pages=Categor%C3%ADa:Ciencias_naturales|Categor%C3%ADa:Historia

Does this work for you?
(I'm not sure what you want exactly)

Regards
___
Xmldatadumps-l mailing list -- xmldatadumps-l@lists.wikimedia.org
To unsubscribe send an email to xmldatadumps-l-le...@lists.wikimedia.org


[Wikidata] Re: web reference

2023-07-31 Thread Platonides
Yes. It now shows the correct title. Google updated it by itself.

Regards


On Mon, 31 Jul 2023 at 16:19, Dan Brickley  wrote:

> On Mon, 31 Jul 2023 at 16:40, Marie-Claude Lemaire 
> wrote:
>
>> I do not know how to get the complete  url as required by google to
>> suppress it. Their advice is to contact the webmaster of the site.
>>
>
> If you mean the complete URL of the Wikidata entry for Marie-Claude
> Lemaire, the URL is:
>
>
> https://www.wikidata.org/wiki/Q89127676
>
> If I search in Google (using Google Chrome browser, but it should be the
> same using other tools), I see the Wikidata page is, for me at least, the
> 5th result. It looks like this:
>
> [image: image.png]
> Wikidata
> https://www.wikidata.org › wiki
> 
> Marie-Claude. 0 references. family name · Lemaire. 0 references.
> occupation · physicist. 0 references. academic degree · Doctor of
> Philosophy. 0 references.
>
> I believe this shows Google has already updated (re-crawled the Wikidata
> page) and no longer shows the unwanted name that was in the old version of
> Wikidata. That's all I know.
>
> Dan
>
___
Wikidata mailing list -- wikidata@lists.wikimedia.org
Public archives at 
https://lists.wikimedia.org/hyperkitty/list/wikidata@lists.wikimedia.org/message/RFYBT4URH43JRG2X6FPIVK4U2IAPAAAY/
To unsubscribe send an email to wikidata-le...@lists.wikimedia.org


[Wikidata] Re: web reference

2023-07-30 Thread Platonides
Wikidata editors don't have access to that.

On Sun, 30 Jul 2023 at 19:04, Simon Gelfand  wrote:

> Actually, with webmaster tools it can be done by Wikidata - the question is
> who has access to the webmaster tools of wikidata.org.
>
> On Sun, Jul 30, 2023 at 9:52 PM Platonides  wrote:
>
>> Wikidata people know how to remove things from Wikidata. And "Marie-Claude
>> Mallet-Lemaire" is no longer in Wikidata.
>> But to delete from Google, you need Google people, not Wikidata people.
>>
>> That's what we have been trying to explain.
>>
>> Wikidata people may be easier to reach than Google people, but that
>> doesn't make Wikidata people the appropriate contact to remove content from
>> Google.
>>
>> Yesterday I suggested that you use the button "Lancer la demande de
>> suppression" from
>> https://support.google.com/websearch/answer/9673730?hl=fr to request
>> Google to delete "Marie-Claude Mallet-Lemaire".
>>
>> Good luck
>>
>
___
Wikidata mailing list -- wikidata@lists.wikimedia.org
Public archives at 
https://lists.wikimedia.org/hyperkitty/list/wikidata@lists.wikimedia.org/message/6L6CNC6FDNYOOHS7C6WAEG757H6YZTJQ/
To unsubscribe send an email to wikidata-le...@lists.wikimedia.org


[Wikidata] Re: web reference

2023-07-30 Thread Platonides
Wikidata people know how to remove things from Wikidata. And "Marie-Claude
Mallet-Lemaire" is no longer in Wikidata.
But to delete from Google, you need Google people, not Wikidata people.

That's what we have been trying to explain.

Wikidata people may be easier to reach than Google people, but that doesn't
make Wikidata people the appropriate contact to remove content from Google.

Yesterday I suggested that you use the button "Lancer la demande de
suppression" from https://support.google.com/websearch/answer/9673730?hl=fr
to request Google to delete "Marie-Claude Mallet-Lemaire".

Good luck
___
Wikidata mailing list -- wikidata@lists.wikimedia.org
Public archives at 
https://lists.wikimedia.org/hyperkitty/list/wikidata@lists.wikimedia.org/message/7QHJZ6WZ4X4QAFREVJUFEG6U3JAKCIR6/
To unsubscribe send an email to wikidata-le...@lists.wikimedia.org


[Wikidata] Re: web reference

2023-07-29 Thread Platonides
Another resource that may help you to contact Google would be the form to
request from Google the removal of personally identifiable information or
doxxing content.

see https://support.google.com/websearch/answer/9673730?hl=fr and the
button "Lancer la demande de suppresion".

Regards

On Sat, 29 Jul 2023 at 21:47, Platonides  wrote:

> That needs to be requested from Google. As you can see Google is showing a
> result on Wikidata titled "M-C M-L", but the page itself is not named that
> way, but "M-C L".
> Making an analogy: it is not that the book has the author's name misspelled,
> but that *it is misspelled in the library catalog*; when you go to pick
> the book up from the shelf, the name is actually correct on it. In this case,
> complaining to the publisher would not be too helpful. It's the library that
> needs to fix its catalog. In this case, Google.
>
> It is expected that Google will notice and fix the copy automatically...
> but we don't know *when* it will do that.
>
> I see a new function in Google "Send comments to Google" that lets you
> provide feedback for a result item. Maybe you could contact Google that way
> to ask them to fix that entry?
>
> Regards
>
> PS: I would recommend not continuing to state the full name that
> should-not-be-named. Given that there should be zero results for that one,
> and this is a public mailing list, I see a risk of searches for that name
> leading to this very thread.
>
>
___
Wikidata mailing list -- wikidata@lists.wikimedia.org
Public archives at 
https://lists.wikimedia.org/hyperkitty/list/wikidata@lists.wikimedia.org/message/CLUPGK7HOERH24QA27CZFFAM7PTLFNUL/
To unsubscribe send an email to wikidata-le...@lists.wikimedia.org


[Wikidata] Re: web reference

2023-07-29 Thread Platonides
That needs to be requested from Google. As you can see Google is showing a
result on Wikidata titled "M-C M-L", but the page itself is not named that
way, but "M-C L".
Making an analogy: it is not that the book has the author's name misspelled,
but that *it is misspelled in the library catalog*; when you go to pick
the book up from the shelf, the name is actually correct on it. In this case,
complaining to the publisher would not be too helpful. It's the library that
needs to fix its catalog. In this case, Google.

It is expected that Google will notice and fix the copy automatically...
but we don't know *when* it will do that.

I see a new function in Google "Send comments to Google" that lets you
provide feedback for a result item. Maybe you could contact Google that way
to ask them to fix that entry?

Regards

PS: I would recommend not continuing to state the full name that
should-not-be-named. Given that there should be zero results for that one,
and this is a public mailing list, I see a risk of searches for that name
leading to this very thread.
___
Wikidata mailing list -- wikidata@lists.wikimedia.org
Public archives at 
https://lists.wikimedia.org/hyperkitty/list/wikidata@lists.wikimedia.org/message/BPYYROV4Y3422EMXZI3BMLUTJAA5NXMT/
To unsubscribe send an email to wikidata-le...@lists.wikimedia.org


[Wikidata] Re: updating Google indexing on demand

2023-07-29 Thread Platonides
On Sat, 29 Jul 2023 at 16:50, Samuel Klein  wrote:

>
> That would certainly help to correct the
>>> Google hit as well.
>>>
>>
>> See also
>> https://developers.google.com/search/docs/crawling-indexing/ask-google-to-recrawl?hl=en_id=638262447414516079-4071153779=1
>> for Google tooling that Wikidata could use to request URLs to be recrawled.
>> I don't know who is set up (if anyone) to access Google's Search Console on
>> behalf of Wikidata or Wikimedia, but it might potentially help here.
>>
>> Dan
>>
>
> This would be great tooling to have in general: a way for WD editors
> (admins?) to send a "please update now" query to Google and other indexes,
> for cases where time is of the essence.
>

Other tools would perhaps be more suited for this:
https://support.google.com/webmasters/answer/9689846

Nonetheless, take into account that Google treats Wikipedia sites specially
(also probably Wiktionary, Wikisource... not sure if Wikidata as well). It
usually picks up changes really quickly, but it seems to be lagging in this
case.
___
Wikidata mailing list -- wikidata@lists.wikimedia.org
Public archives at 
https://lists.wikimedia.org/hyperkitty/list/wikidata@lists.wikimedia.org/message/CYHGPNB3PFAFIQZZSEVSE75LRZFTTPD3/
To unsubscribe send an email to wikidata-le...@lists.wikimedia.org


[Wikidata] Re: WoolNet: New application to find connections between Wikidata entities

2023-07-26 Thread Platonides
That's great!

... Where can we find this application?

On Wed, 26 Jul 2023 at 19:58, Aidan Hogan  wrote:

> Hi everyone,
>
> Cristóbal Torres, in CC, has been working in the past few months on a
> new web application for Wikidata called WoolNet. The application helps
> users to find and visualise connections between Wikidata entities. We
> are pleased to share this application with you, and would love to hear
> your thoughts. We've prepared a quick survey, which links to the system:
>
> https://forms.gle/sCNqrAtJo98388iC6
>
> We are also happy for you to forward this email or the system to
> whomever you think might be interested.
>
> The application was developed by Cristóbal in the context of his
> undergraduate thesis here in the University of Chile.
>
> Thoughts, comments, questions, etc., very welcome!
>
> Best,
> Aidan
> ___
> Wikidata mailing list -- wikidata@lists.wikimedia.org
> Public archives at
> https://lists.wikimedia.org/hyperkitty/list/wikidata@lists.wikimedia.org/message/3GLYYUHB5J73DJT7E5XD64TOLTECIILN/
> To unsubscribe send an email to wikidata-le...@lists.wikimedia.org
>
___
Wikidata mailing list -- wikidata@lists.wikimedia.org
Public archives at 
https://lists.wikimedia.org/hyperkitty/list/wikidata@lists.wikimedia.org/message/6NTZLGBFZGIAZ6C3H4RVLMLKXDXPSP4X/
To unsubscribe send an email to wikidata-le...@lists.wikimedia.org


[Xmldatadumps-l] Re: Inconsistency of Wikipedia dump exports with content licenses

2023-07-25 Thread Platonides
On Tue, 25 Jul 2023 at 15:14, Dušan Kreheľ  wrote:

> Hello. The Wikipedia export is not correctly licensed. Could this be
> brought into compliance with the licenses? The wording of the violation is
> at https://krehel.sk/Oprava_poruseni_licencei_CC_BY-SA_a_GFDL/ (Slovak).
>
> Dušan Kreheľ


Hello Dušan

I would encourage you to write in English. I have used an automatic
translator to look at your pages, but such machine translation may not
convey correctly what you intended.

Also note that this is not the right venue for some of the issues you
raise.

The main point I think you are missing is that *all the GFDL content is
also under a CC-BY-SA license*, per the license update performed in 2009, as
allowed by GFDL 1.3. All the text is under a CC-BY-SA license (or a
compatible one, e.g. text in the public domain); *most* of it is also under
GFDL, but not all.
It's thus enough to follow the CC-BY-SA terms.

The interpretation is that for webpages it is enough to include a link;
there's no need to include all the extra resources (license text, list of
authors, etc.) *in the same HTTP response*. Just as you don't need to
include all of that on *every* page of a book under that license, but only
once, usually at the end of the book.

Note that the text of the GFDL is included in the dumps by virtue of being
in pages such as
https://en.wikipedia.org/wiki/Wikipedia:Text_of_the_GNU_Free_Documentation_License
(it may not be the best approach, but it *is* included)

Images in the pages are considered an aggregate, and so they are accepted
under a different license than the text.

The statement that you license the text under the *GFDL unversioned, with no
invariant sections, front-cover texts, or back-cover texts* describes how you
agree to license the content that you submit to the site. It does not restrict
the rights granted to you by the license. You could edit a GFDL article and
publish your version on your blog under a specific GFDL version, even
including an invariant section. But that would not be accepted in Wikipedia.

You may have a point about the difference between CC-BY-SA 3.0 and CC-BY-SA
4.0, though. There could be a more straightforward display of the license
for reusers than expecting them to determine the exact version by manually
checking the date of last publication.
___
Xmldatadumps-l mailing list -- xmldatadumps-l@lists.wikimedia.org
To unsubscribe send an email to xmldatadumps-l-le...@lists.wikimedia.org


[Wiki Loves Monuments] Re: RES: Re: Participating without officially registered/published monument list?

2023-06-27 Thread Platonides
You will need a list of what monuments to photograph.

This list *should* come from a somewhat authoritative source, like the
government or some museum... not made up by the Wikimedians or by some
random YouTuber.

In fact, I would say that many lists used by WLM organizers are not 1:1
mappings to the "official lists": they may join multiple lists, remove
monuments that cannot be legally photographed...

The local organizers may use different criteria for local winners than
for photos going to the international contest, and that is fine. A typical
case involves a sponsor providing some specific prize (e.g. suppose Nikon
had a branch in St. Eustatius that provides a prize for the best photo taken
with one of their cameras). You may also have rules about not giving more
than one prize to the same person, etc. It is not uncommon that not all
monuments participate in both the local and the international competition
(e.g. you have two lists for the local competition, but only monuments in
one of them are used for the international contest).
You need to define, and clearly communicate, what your rules will be; other
than that, there is really no issue. You still have enough time to write
them down before the competition begins.

The international jury will generally not interfere with your decisions about
what should be forwarded and whatnot. I guess they might if a local team
had a crazy proposal, but people there are usually reasonable :)

And of course, the list of monuments to photograph should itself be public.
Ideally you would talk with the St. Eustatius government (?) to get permission
to use and publish that so-far unofficial and unpublished list for your
competition.

Regards
___
Wiki Loves Monuments mailing list
To unsubscribe send an email to wikilovesmonuments-le...@lists.wikimedia.org
http://www.wikilovesmonuments.org

[List admins] Re: large number of unsubscribes for bounces this morning

2023-05-03 Thread Platonides
Not really. That makes the time at which the unsubscriptions happen look
more random, but it still requires that delivery to many Yahoo users failed
"5 emails ago", which (unless the list had a large gap between mails then)
should be relatively uncommon.
Many unsubscriptions from the same provider suggest that Yahoo bounced many
emails at that point (an email rejected because it was considered
suspicious, throttling, etc.), which made Mailman think all Yahoo users
failed, and thus ultimately unsubscribe them all at the same time (still,
if it was content-related, the individual mail check should have prevented
the unsubscription...).

Cheers
___
Listadmins mailing list -- listadmins@lists.wikimedia.org
To unsubscribe send an email to listadmins-le...@lists.wikimedia.org

To request technical changes for a specific list, please instead create a task 
in Phabricator. See https://meta.wikimedia.org/wiki/Mailing_lists

[List admins] Re: large number of unsubscribes for bounces this morning

2023-05-01 Thread Platonides
Most likely, these were a consequence of the monthly subscription reminders
bouncing for all those users.

This leads to the question of what caused all those reminders to fail,
though. It looks like some error at their server. Were they all using the
same domain?

I didn't see any unsubscription spike here.
___
Listadmins mailing list -- listadmins@lists.wikimedia.org
To unsubscribe send an email to listadmins-le...@lists.wikimedia.org

To request technical changes for a specific list, please instead create a task 
in Phabricator. See https://meta.wikimedia.org/wiki/Mailing_lists

[Xmldatadumps-l] Re: Which dump to use?

2023-03-02 Thread Platonides
You would need to download them separately. But, are you sure you need them?
Perhaps Kiwix would fit your needs? Or a local MediaWiki install configured
to autofetch images?

What is your goal?

Regards

On Fri, 3 Mar 2023 at 02:08, Research Pro  wrote:

> Hi - we need a complete copy of EN wikipedia - along with all the media
> files for images etc that appear in the pages. We can easily locate the EN
> text. But where do we get all the media files for it?
>
> Thanks
> ___
> Xmldatadumps-l mailing list -- xmldatadumps-l@lists.wikimedia.org
> To unsubscribe send an email to xmldatadumps-l-le...@lists.wikimedia.org
>
___
Xmldatadumps-l mailing list -- xmldatadumps-l@lists.wikimedia.org
To unsubscribe send an email to xmldatadumps-l-le...@lists.wikimedia.org


[Xmldatadumps-l] Re: Custom dump

2023-02-03 Thread Platonides
On Fri, 3 Feb 2023 at 15:36, someone wrote:

> Why is it showing my address? I set it to be hidden.
>

You were emailing me directly, but you now switched to sending it to the
mailing list (so it gets to everyone subscribed).
The name shown in the email is the one your mail client set.
That's completely different from the preference for showing or hiding the
people subscribed to a mailing list.

Regards
___
Xmldatadumps-l mailing list -- xmldatadumps-l@lists.wikimedia.org
To unsubscribe send an email to xmldatadumps-l-le...@lists.wikimedia.org


[Cloud] Re: Scratch database instance?

2023-01-25 Thread Platonides
MediaWiki uses a VARBINARY(255) NOT NULL column for page_title.

If in doubt, you would use VARCHAR(255) over any other length, since that's
the largest value for a VARCHAR (a value larger than 255 would automatically
become a MEDIUMTEXT column).

 > sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1709,
'Index column size too large. The maximum column size is 767 bytes.')

This depends on the column collation. If you store ASCII characters, that's
one byte per character. If you store UTF-8 characters, each of them could
be several bytes (also depending on whether you only support the Basic
Multilingual Plane, as "old utf8", or all planes, "utf8mb4").
It is possible to define the index over the first N bytes of the column
(generally more than enough for what you will need).

But in this case, using VARBINARY should solve your issue.
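
A minimal SQLAlchemy sketch of both options; the table and column names are
made up, and the 191-character prefix is just the usual value that keeps a
utf8mb4 index under the 767-byte limit:

from sqlalchemy import Column, Index, MetaData, Table
from sqlalchemy.dialects.mysql import VARBINARY, VARCHAR

metadata = MetaData()

# Option 1: VARBINARY(255) indexes fine (255 bytes < the 767-byte limit).
pages = Table(
    "my_pages", metadata,
    Column("page_title", VARBINARY(255), nullable=False),
)
Index("ix_my_pages_title", pages.c.page_title)

# Option 2: with utf8mb4, VARCHAR(255) needs 4 * 255 = 1020 bytes, over the
# limit; a prefix index on the first 191 characters (764 bytes) stays under.
titles = Table(
    "my_titles", metadata,
    Column("title", VARCHAR(255, charset="utf8mb4"), nullable=False),
)
Index("ix_my_titles_title", titles.c.title, mysql_length=191)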

Regards
___
Cloud mailing list -- cloud@lists.wikimedia.org
List information: 
https://lists.wikimedia.org/postorius/lists/cloud.lists.wikimedia.org/


[Xmldatadumps-l] Re: dataset for graph

2023-01-24 Thread Platonides
Hello Kiran

Do you realise we have no idea what your research is about, what kind of
dataset you might need, why https://dumps.wikimedia.org/ wouldn't work for
you, etc.?

Moreover, your message comes off quite rude, and it looks as if you didn't
do even the most basic *research*...
___
Xmldatadumps-l mailing list -- xmldatadumps-l@lists.wikimedia.org
To unsubscribe send an email to xmldatadumps-l-le...@lists.wikimedia.org


[Xmldatadumps-l] Re: Custom dump

2023-01-24 Thread Platonides
I did such a script a long time ago.

I should start by asking, how much disk space do you have for this?

Local images are cheap:

enwiktionary
+--------+---------+
| number | size    |
+--------+---------+
|     22 | 2108819 |
+--------+---------+

eswiktionary
+--------+--------+
| number | size   |
+--------+--------+
|     14 | 390539 |
+--------+--------+

frwiktionary
+--------+--------+
| number | size   |
+--------+--------+
|      6 | 702706 |
+--------+--------+

dewiktionary
+--------+---------+
| number | size    |
+--------+---------+
|    104 | 4943453 |
+--------+---------+

enwikiquote
+--------+------+
| number | size |
+--------+------+
|      0 | NULL |
+--------+------+

eswikiquote
+--------+------+
| number | size |
+--------+------+
|      0 | NULL |
+--------+------+

frwikiquote
+--------+------+
| number | size |
+--------+------+
|      0 | NULL |
+--------+------+

dewikiquote
+--------+---------+
| number | size    |
+--------+---------+
|     17 | 2236129 |
+--------+---------+

enwikibooks
+--------+-----------+
| number | size      |
+--------+-----------+
|   2716 | 153301990 |
+--------+-----------+

eswikibooks
+--------+------+
| number | size |
+--------+------+
|      0 | NULL |
+--------+------+

frwikibooks
+--------+---------+
| number | size    |
+--------+---------+
|    169 | 8244783 |
+--------+---------+

dewikibooks
+--------+------------+
| number | size       |
+--------+------------+
|   7697 | 2772943547 |
+--------+------------+



but the commons files will start requiring a bit more space (about 1 TB):

enwiktionary
+--------+--------------+
| number | size         |
+--------+--------------+
| 724855 | 154587222013 |
+--------+--------------+

eswiktionary
+--------+-------------+
| number | size        |
+--------+-------------+
|  18938 | 12918158214 |
+--------+-------------+

frwiktionary
+--------+--------------+
| number | size         |
+--------+--------------+
| 560321 | 146933557784 |
+--------+--------------+

dewiktionary
+--------+-------------+
| number | size        |
+--------+-------------+
| 933737 | 84462331479 |
+--------+-------------+

enwikiquote
+--------+-------------+
| number | size        |
+--------+-------------+
|  55186 | 98435989071 |
+--------+-------------+

eswikiquote
+--------+------------+
| number | size       |
+--------+------------+
|   5712 | 7522618990 |
+--------+------------+

frwikiquote
+--------+-------------+
| number | size        |
+--------+-------------+
|   7348 | 11151209748 |
+--------+-------------+

dewikiquote
+--------+------------+
| number | size       |
+--------+------------+
|   4211 | 6149070638 |
+--------+------------+

enwikibooks
+--------+--------------+
| number | size         |
+--------+--------------+
| 106405 | 133670795995 |
+--------+--------------+

eswikibooks
+--------+--------------+
| number | size         |
+--------+--------------+
|  44309 | 122871671629 |
+--------+--------------+

frwikibooks
+--------+--------------+
| number | size         |
+--------+--------------+
|  98843 | 248629243582 |
+--------+--------------+

dewikibooks
+--------+-------------+
| number | size        |
+--------+-------------+
|  69669 | 54020348470 |
+--------+-------------+

All of them:
+---------+--------------+
| number  | size         |
+---------+--------------+
| 2021173 | 981218142389 |
+---------+--------------+





for wiki in enwiktionary eswiktionary frwiktionary dewiktionary \
    enwikiquote eswikiquote frwikiquote dewikiquote \
    enwikibooks eswikibooks frwikibooks dewikibooks; do
  echo $wiki
  sql "${wiki}_p" "SELECT COUNT(img_name) AS number, SUM(img_size) AS size FROM image;"
  echo
done

for wiki in enwiktionary eswiktionary frwiktionary dewiktionary \
    enwikiquote eswikiquote frwikiquote dewikiquote \
    enwikibooks eswikibooks frwikibooks dewikibooks; do
  echo $wiki
  sql commonswiki_p "SELECT COUNT(img_name) AS number, SUM(img_size) AS size FROM image WHERE img_name IN (SELECT DISTINCT gil_to FROM globalimagelinks WHERE gil_wiki='$wiki')"
  echo
done

SELECT COUNT(img_name) AS number, SUM(img_size) AS size FROM image WHERE
img_name IN (SELECT DISTINCT gil_to FROM globalimagelinks WHERE gil_wiki IN
('enwiktionary', 'eswiktionary', 'frwiktionary', 'dewiktionary',
'enwikiquote', 'eswikiquote', 'frwikiquote', 'dewikiquote', 'enwikibooks',
'eswikibooks', 'frwikibooks', 'dewikibooks'));
___
Xmldatadumps-l mailing list -- xmldatadumps-l@lists.wikimedia.org
To unsubscribe send an email to xmldatadumps-l-le...@lists.wikimedia.org


[Wiki Loves Monuments] Re: Decision of WLM in Ukraine organizers not to submit photos for the international round

2022-12-13 Thread Platonides
Congratulations, WLM-UA team. That's an astonishing achievement, given the
current situation that Ukraine is going through!

May 2023 be a *much* happier year for Ukraine, with all the other countries
having to suffer because its participants no longer have more pressing
concerns than snapping beautiful pics.
(I'm sadly aware the war's consequences are unlikely to disappear so easily,
but one may dream...)

Love,
Platonides
___
Wiki Loves Monuments mailing list
To unsubscribe send an email to wikilovesmonuments-le...@lists.wikimedia.org
http://www.wikilovesmonuments.org

[Wikidata] Re: Can no longer login (with or without VPN)

2022-11-21 Thread Platonides
Someone else could request a password reset email.

Perhaps that message could also be caused by some throttling error,
although I'm not aware of that.
I wonder if the password may somehow be provided differently from your
Chinese system, although that also seems unlikely.

Regards
___
Wikidata mailing list -- wikidata@lists.wikimedia.org
Public archives at 
https://lists.wikimedia.org/hyperkitty/list/wikidata@lists.wikimedia.org/message/XMXMNLMXWJP23EH4NF3UQRFQUJAINABA/
To unsubscribe send an email to wikidata-le...@lists.wikimedia.org


[Wikidata] Re: Can no longer login (with or without VPN)

2022-11-18 Thread Platonides
On Thu, 17 Nov 2022 at 23:26, Thad Guidry  wrote:

> A few things:
>
> This page does say the username is case sensitive.  Is it really?
> https://www.mediawiki.org/wiki/Help:Logging_in
>

It is case sensitive... except for the first letter.

History note: this derives from usernames being a type of page.
Platonides, PLATONIDES and PlAtOnIdEs are different usernames, just as they
would be different article pages on Wikipedia, since the first letter of a
page title is always capitalised there.
There are, however, some wikis (such as the Wiktionaries) configured so that
articles do not get an automatic capital letter. But for consistency, the
"always capitalise the first letter of the username" rule is applied
everywhere.



>
> I do not see a reset password or request for temporary password on the
> login screen as it described on that help page:
>


> What if I forget the password or username?
>
> *Your username is case sensitive.* If you enter an email address when
> signing up for the account, or in your Preferences, you can make a request
> on the login screen for a temporary password to be sent to that address,
> which will allow you to retrieve your account. If you did not enter an
> email address, or the address was out of date, you will have to create a
> new account under a different username.
>

You would have to use https://www.wikidata.org/wiki/Special:PasswordReset




>
> Instead I only see this when typing in my email address in the username
> box and leaving password box empty:
>

Logging in with an email on the username field is not supported. There is
an old request for that, but in addition to old accounts whose names could
contain an at sign, there may be multiple accounts with the same email
(such as a bot and its owner).

Regards
___
Wikidata mailing list -- wikidata@lists.wikimedia.org
Public archives at 
https://lists.wikimedia.org/hyperkitty/list/wikidata@lists.wikimedia.org/message/Q2O5OFPZDA6MP2YMRCIMALQAHKKQXWS6/
To unsubscribe send an email to wikidata-le...@lists.wikimedia.org


[Mediawiki-api] Re: WikiSource extracts

2022-09-19 Thread Platonides
On Mon, 19 Sept 2022 at 17:03, Julius Hamilton 
wrote:

> Hey,
>
> It seems the following API call works for Wikipedia pages:
>
>
> https://en.wikipedia.org/w/api.php?action=query&prop=extracts&exsentences=10&titles=Pet_door
>
> But not for Wikisource pages:
>
>
> https://en.wikisource.org/w/api.php?action=query&prop=extracts&exsentences=10&titles=A_Simplified_Grammar_of_the_Swedish_Language
>
> Is there documentation somewhere about the API not working for Wikisource
> or perhaps only certain actions / props working for certain sites?
>

Did you look at the wikitext of that page?
https://en.wikisource.org/w/index.php?title=A_Simplified_Grammar_of_the_Swedish_Language&action=edit

prop=extracts works, but I would say it's a poor fit for many (most?)
wikisource pages.
https://en.wikisource.org/w/api.php?action=query&prop=extracts&exsentences=10&titles=Wikisource:Community_collaboration/Monthly_Challenge/September_2022


> How can I get the full plaintext from an entire book on Wikisource with the
> API?
>

Plaintext as in wikitext, or as in parsed HTML converted to plaintext?

You could use something like this to fetch every page under
A_Simplified_Grammar_of_the_Swedish_Language:
https://en.wikisource.org/w/api.php?action=query&generator=allpages&gapprefix=A_Simplified_Grammar_of_the_Swedish_Language&prop=revisions&rvprop=content&rvslots=main
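
In Python, the same query with continuation handling might look like the
sketch below (requests library assumed; the "*" key is where the legacy
JSON format puts slot content):

import requests

API = "https://en.wikisource.org/w/api.php"
session = requests.Session()
# Identify your client per the API etiquette (contact address made up):
session.headers["User-Agent"] = "book-fetch-example/0.1 (you@example.org)"

def fetch_book_pages(prefix):
    """Yield (title, wikitext) for every page whose title starts with
    prefix, following API continuation until exhausted."""
    params = {
        "action": "query",
        "format": "json",
        "generator": "allpages",
        "gapprefix": prefix,
        "gaplimit": "max",
        "prop": "revisions",
        "rvprop": "content",
        "rvslots": "main",
    }
    while True:
        data = session.get(API, params=params, timeout=30).json()
        for page in data.get("query", {}).get("pages", {}).values():
            if "revisions" in page:  # continuation batches may omit content
                yield page["title"], page["revisions"][0]["slots"]["main"]["*"]
        if "continue" not in data:
            break
        params.update(data["continue"])

for title, text in fetch_book_pages("A Simplified Grammar of the Swedish Language"):
    print(title, len(text))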


Regards
___
Mediawiki-api mailing list -- mediawiki-api@lists.wikimedia.org
To unsubscribe send an email to mediawiki-api-le...@lists.wikimedia.org


[Mediawiki-i18n] Re: Content translation in sister project

2022-09-17 Thread Platonides
Hello Shakil

Since মোহাম্মদ মারুফ says that everyone on bnwikibooks wants to have this
extension installed, I think you should open a request on
https://phabricator.wikimedia.org linking to that consensus and asking for it
to be installed/enabled. That would be the proper place to track your
request.
However, you should take into account that Content Translation is currently
focused on the translation of Wikipedia articles, and it might not do
a good job of translating the templates used by Wikibooks. Caveat emptor.

Kind regards
___
Mediawiki-i18n mailing list -- mediawiki-i18n@lists.wikimedia.org
To unsubscribe send an email to mediawiki-i18n-le...@lists.wikimedia.org


[Wiki Loves Monuments] Re: New members for the WLM-i team!

2022-08-23 Thread Platonides
Welcome to both!

On Tue, 23 Aug 2022 at 19:53, effe iets anders 
wrote:

> Welcome Lucy and Rubén! Glad to see you both join :)
>
> Lodewijk
>
> On Tue, Aug 23, 2022 at 11:44 AM Ciell Wikipedia <
> ciell.wikipe...@gmail.com> wrote:
>
>> Hello everyone,
>>
>> It is my pleasure to introduce to you Lucy and Rubén, two members that
>> recently joined our WLM-international team.
>>
>> Lucy Iwuala is from Nigeria and joined the Wikimedia movement only
>> recently, but has already done her part for Wiki Loves Africa and Wiki
>> Loves Earth. She has organized photo walks for the Wiki Fan Club for the
>> Igbo Wikimedians UG, and helped out with preparations for the international
>> jury in Wiki Loves Africa. She will be helping with the Social Media for
>> Wiki Loves Monuments. Lucy is a big fan of Twitter, which she uses for
>> communicating Wikimedia-related Igbo initiatives, and is also on Facebook
>> and Instagram.
>>
>> Rubén Ojeda is a Wikimedian from Spain. You may have heard his name
>> before, as he has several years of experience as the project-lead of Wiki
>> Loves Monuments in Spain for the Wikimedia España chapter. Rubén will join
>> the international team in his volunteer capacity, and will help out with
>> all sorts of tasks for Wiki Loves Monuments-i.
>>
>> Both Lucy and Rubén will need some time to onboard the team and get
>> familiar with all the tasks ahead, but at least now if you see their names
>> popup somewhere, you know who they are. :))
>>
>> Best,
>> Ciell
>> ___
>> Wiki Loves Monuments mailing list
>> To unsubscribe send an email to
>> wikilovesmonuments-le...@lists.wikimedia.org
>> http://www.wikilovesmonuments.org
>
> ___
> Wiki Loves Monuments mailing list
> To unsubscribe send an email to
> wikilovesmonuments-le...@lists.wikimedia.org
> http://www.wikilovesmonuments.org
___
Wiki Loves Monuments mailing list
To unsubscribe send an email to wikilovesmonuments-le...@lists.wikimedia.org
http://www.wikilovesmonuments.org

[Pywikipedia-bugs] [Maniphest] [Created] T315045: Wrong permissions on created .pywikibot folder

2022-08-11 Thread Platonides
Platonides created this task.
Platonides added a project: Pywikibot.
Restricted Application added subscribers: pywikibot-bugs-list, Aklapper.

TASK DESCRIPTION
  When pywikibot/config.py creates a ~/.pywikibot folder, it does so with mode 
0600, whereas it should be 0700.
  
  On line 350 it is doing
  
python
os.makedirs(dir_s, mode=private_files_permission)
  
  with private_files_permission defined on line 255 as
  
python
private_files_permission = stat.S_IRUSR | stat.S_IWUSR
  
  These make sense for _files_ permissions, but for a folder it should include 
stat.S_IXUSR (+x) as well
  
  I suppose it would probably need a line
  
python
private_directories_permission = stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR
  
  used for this
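
  A minimal sketch of the suggested fix, using the name proposed above:

python
import os
import stat

# 0600 is right for private files; a directory additionally needs +x to
# be traversable by its owner, i.e. 0700.
private_files_permission = stat.S_IRUSR | stat.S_IWUSR
private_directories_permission = private_files_permission | stat.S_IXUSR

os.makedirs(os.path.expanduser("~/.pywikibot"),
            mode=private_directories_permission, exist_ok=True)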

TASK DETAIL
  https://phabricator.wikimedia.org/T315045

EMAIL PREFERENCES
  https://phabricator.wikimedia.org/settings/panel/emailpreferences/

To: Platonides
Cc: Aklapper, Platonides, pywikibot-bugs-list, PotsdamLamb, Jyoo1011, 
JohnsonLee01, SHEKH, Dijkstra, Khutuck, Zkhalido, Viztor, Wenyi, Tbscho, MayS, 
Mdupont, JJMC89, Dvorapa, Altostratus, Avicennasis, mys_721tx, Xqt, jayvdb, 
Masti, Alchimista
___
pywikibot-bugs mailing list -- pywikibot-bugs@lists.wikimedia.org
To unsubscribe send an email to pywikibot-bugs-le...@lists.wikimedia.org


[Wikitech-ambassadors] Re: yardım

2022-04-24 Thread Platonides
Dear Bilal

This is not the right venue for your concerns. I recommend you write
instead to info...@wikimedia.org

Best regards

On Sun, 24 Apr 2022 at 09:35, Bilal YAZICI  wrote:

>
> Dear Sir or Madam,
> I serve as advisor to our Member of Parliament, Mr. Ali Şahin.
>
>
> https://tr.m.wikipedia.org/wiki/Ali_%C5%9Eahin_(1970_do%C4%9Fumlu_siyaset%C3%A7i)
>
> The edits I made on his page (above) have not been published. Would it be
> possible to let me know where I made a mistake and how to correct it?
>
> Thanks
>
>
>
> 
>
> Bilal YAZICI
> Advisor to a Member of the Turkish Parliament - 27th term
> *Grand National Assembly of Turkey (TBMM)*
> Address: TBMM İdare Amirliği, Bakanlıklar/Ankara
> Phone: +90 (312) 420 63 82 / +90 (312) 420 63 83
> Fax: +90 (312) 420 24 54
>
>
>
> ___
> Wikitech-ambassadors mailing list --
> wikitech-ambassadors@lists.wikimedia.org
> To unsubscribe send an email to
> wikitech-ambassadors-le...@lists.wikimedia.org
>
___
Wikitech-ambassadors mailing list -- wikitech-ambassadors@lists.wikimedia.org
To unsubscribe send an email to wikitech-ambassadors-le...@lists.wikimedia.org


[Cloud] Re: Heartbeat monitoring system?

2022-04-24 Thread Platonides
On Thu, 14 Apr 2022 at 19:41, revi  wrote:

> I personally use healthchecks.io[1] myself, but I'm not sure if it gets
> rate limited with the same IP hitting the external services. (I once
> considered asking for self-hosted options made available at Labs, but if
> the Wikimedia network is down it won't be sending out notifications, so I
> decided not to.)
>
> [1]: https://healthchecks.io/docs/
>

This is something we could host on the WMES server (outside of WMF networks)
for the benefit of the general community.

https://github.com/healthchecks/healthchecks/
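
For reference, the client side of such a service is just a ping to a
per-check URL at the end of each run. A minimal sketch against the
healthchecks.io ping API (the check UUID below is a placeholder):

import requests

# Per-check ping URL; the UUID is made up.
CHECK_URL = "https://hc-ping.com/0a1b2c3d-aaaa-bbbb-cccc-0123456789ab"

def run_job():
    pass  # the actual periodic job goes here

try:
    run_job()
    requests.get(CHECK_URL, timeout=10)            # report success
except Exception:
    requests.get(CHECK_URL + "/fail", timeout=10)  # report failure
    raise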
___
Cloud mailing list -- cloud@lists.wikimedia.org
List information: 
https://lists.wikimedia.org/postorius/lists/cloud.lists.wikimedia.org/


[Xmldatadumps-l] Re: Access imageinfo data in a dump

2022-02-07 Thread Platonides
The metadata used to be included in the image table, but it was moved out
to External Storage about six months ago. See
https://phabricator.wikimedia.org/T275268#7178983


On Fri, 4 Feb 2022 at 20:44, Mitar  wrote:

> Hi!
>
> Will do. Thanks.
>
> After going through the image table dump, it seems not all data is in
> there. For example, page count for Djvu files is missing. Instead of
> metadata in the image table dump, a reference to text table [1] is
> provided:
>
> {"data":[],"blobs":{"data":"tt:609531648","text":"tt:609531649"}}
>
> But that table itself does not seem to be available as a dump? Or am I
> missing something or misunderstanding something?
>
> [1] https://www.mediawiki.org/wiki/Manual:Text_table
>
>
> Mitar
>
> On Fri, Feb 4, 2022 at 6:54 AM Ariel Glenn WMF 
> wrote:
> >
> > This looks great! If you like, you might add the link and a  brief
> description to this page:
> https://meta.wikimedia.org/wiki/Data_dumps/Other_tools  so that more
> people can find and use the library :-)
> >
> > (Anyone else have tools they wrote and use, that aren't on this list?
> Please add them!)
> >
> > Ariel
> >
> > On Fri, Feb 4, 2022 at 2:31 AM Mitar  wrote:
> >>
> >> Hi!
> >>
> >> If it is useful to anyone else, I have added to my library [1] in Go
> >> for processing dumps support for processing SQL dumps directly,
> >> without having to load them into a database. So one can process them
> >> directly to extract data, like dumps in other formats.
> >>
> >> [1] https://gitlab.com/tozd/go/mediawiki
> >>
> >>
> >> Mitar
> >>
> >> On Thu, Feb 3, 2022 at 9:13 AM Mitar  wrote:
> >> >
> >> > Hi!
> >> >
> >> > I see. Thanks.
> >> >
> >> >
> >> > Mitar
> >> >
> >> > On Thu, Feb 3, 2022 at 7:17 AM Ariel Glenn WMF 
> wrote:
> >> > >
> >> > > The media/file descriptions contained in the dump are the wikitext
> of the revisions of pages with the File: prefix, plus the metadata about
> those pages and revisions (user that made the edit, timestamp of edit, edit
> comment, and so on).
> >> > >
> >> > > Width and height of the image, the media type, the sha1 of the
> image and a few other details can be obtained by looking at the
> image.sql.gz file available for download for the dumps for each wiki. Have
> a look at https://www.mediawiki.org/wiki/Manual:Image_table for more info.
> >> > >
> >> > > Hope that helps!
> >> > >
> >> > > Ariel Glenn
> >> > >
> >> > >
> >> > >
> >> > > On Wed, Feb 2, 2022 at 10:45 PM Mitar  wrote:
> >> > >>
> >> > >> Hi!
> >> > >>
> >> > >> I am trying to find a dump of all imageinfo data [1] for all files
> on
> >> > >> Commons. I thought that "Articles, templates, media/file
> descriptions,
> >> > >> and primary meta-pages" XML dump would contain that, given the
> >> > >> "media/file descriptions" part, but it seems this is not the case.
> Is
> >> > >> there a dump which contains that information? And what is
> "media/file
> >> > >> descriptions" then? Wiki pages of files?
> >> > >>
> >> > >> [1] https://www.mediawiki.org/wiki/API:Imageinfo
> >> > >>
> >> > >>
> >> > >> Mitar
> >> > >>
> >> > >> --
> >> > >> http://mitar.tnode.com/
> >> > >> https://twitter.com/mitar_m
> >> > >> ___
> >> > >> Xmldatadumps-l mailing list -- xmldatadumps-l@lists.wikimedia.org
> >> > >> To unsubscribe send an email to
> xmldatadumps-l-le...@lists.wikimedia.org
> >> >
> >> >
> >> >
> >> > --
> >> > http://mitar.tnode.com/
> >> > https://twitter.com/mitar_m
> >>
> >>
> >>
> >> --
> >> http://mitar.tnode.com/
> >> https://twitter.com/mitar_m
>
>
>
> --
> http://mitar.tnode.com/
> https://twitter.com/mitar_m
> ___
> Xmldatadumps-l mailing list -- xmldatadumps-l@lists.wikimedia.org
> To unsubscribe send an email to xmldatadumps-l-le...@lists.wikimedia.org
>
___
Xmldatadumps-l mailing list -- xmldatadumps-l@lists.wikimedia.org
To unsubscribe send an email to xmldatadumps-l-le...@lists.wikimedia.org


[Cloud] Who is ${user_agent} ?

2021-12-19 Thread Platonides
Hello everyone

We have noticed that since September 2020, our WLM-related site
www.wikilm.es has been receiving a couple of meta=siteinfo API queries per
day from the Cloud VPS NAT egress 185.15.56.1, with a user agent of...
${user_agent}
(yes, literally that. Probably a case of single quotes when double were
needed.)
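
As an illustration of that failure mode (Python's string.Template is used
here merely as a stand-in for whatever templating the tool actually uses):

from string import Template

user_agent = "wlm-stats-tool/1.0 (made-up example)"

# Bug: the template string is used without substitution, so the literal
# placeholder ends up in the User-Agent header.
headers_broken = {"User-Agent": "${user_agent}"}

# Intended: substitute before use.
headers_fixed = {
    "User-Agent": Template("${user_agent}").substitute(user_agent=user_agent)
}

print(headers_broken["User-Agent"])  # ${user_agent}
print(headers_fixed["User-Agent"])   # wlm-stats-tool/1.0 (made-up example)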

It's probably some kind of Wiki Loves Monuments statistics tool. I have
looked at some suspects with no luck.

Does anyone have an idea which tool may be doing that?


Regards
___
Cloud mailing list -- cloud@lists.wikimedia.org
List information: 
https://lists.wikimedia.org/postorius/lists/cloud.lists.wikimedia.org/


[List admins] Re: False-positive probe messages?

2021-11-21 Thread Platonides
Well, I would expect that to include the actual bounce message (or, failing
that, a better wording). Probably something like the typical:

  gmail-smtp-in.l.google.com [74.125.71.26]:
 >>> DATA
 <<< 552-5.7.0 This message was blocked because its content presents a
potential
 <<< 552-5.7.0 security issue. Please visit
 <<< 552-5.7.0  https://support.google.com/mail/?p=BlockedMessage to review
our
 <<< 552 5.7.0 message content and attachment content guidelines.
d69si1606333wmd.209 - gsmtp
___
Listadmins mailing list -- listadmins@lists.wikimedia.org
To unsubscribe send an email to listadmins-le...@lists.wikimedia.org

To request technical changes for a specific list, please instead create a task 
in Phabricator. See https://meta.wikimedia.org/wiki/Mailing_lists

[Xmldatadumps-l] Re: still only a partial dump for 20210801 for a lot of wikis

2021-09-01 Thread Platonides
On Wed, 1 Sept 2021 at 12:34, Fernando Ducha  wrote:

> Can someone help me and tell me why I can't download the dump? It says I
> don't have permission. Who's able to give this permission? Is this related
> to @Ariel Glenn WMF's response?
>

Which dump and file are you trying to download? I have tried files from
many wikis, from the latest run as well as from the earlier one, and
received no permission error.

So I guess there's nothing wrong with the file permissions (or perhaps you
tried to download *precisely* one of the few "bad" files)

What's the exact error you are receiving? I guess your IP might be blocked
at a lower level, albeit that'd be unlikely.

Best regards
___
Xmldatadumps-l mailing list -- xmldatadumps-l@lists.wikimedia.org
To unsubscribe send an email to xmldatadumps-l-le...@lists.wikimedia.org


[List admins] Re: Preparing an unsubscribe 101

2021-08-27 Thread Platonides
On Thu, 26 Aug 2021 at 23:05, Kunal Mehta  wrote:

> Hi,
>
> On 8/18/21 5:38 PM, Platonides wrote:
> > Could we have a tutorial on how to unsubscribe?
>
> I created <https://meta.wikimedia.org/wiki/Mailing_lists/Unsubscribing>
> as a quick start, it can surely be improved.
>

Thanks for the start!
I will try to find time to improve it.

> 
> > interface), but the users seem to be misled by the fact that they have
> > to /create/ an account so they can see their subscriptions (for which
> > they signed up in mailman2) and then finally unsubscribe.
> > The interface does say "If you have not previously logged in, you may
> > need to set up an account with the appropriate email address.", but
> > that's apparently not enough (and admittedly, is not intuitive).
>
> Yeah, this is...not great. You can unsubscribe without creating an
> account by using the email gateway of
> -le...@lists.wikimedia.org (in the default footer and in
> List-Unsubscribe), but most people probably want to use the web
> interface. <https://gitlab.com/mailman/postorius/-/issues/189> is the
> ticket requesting this upstream.
>
> > So perhaps we should make some content explicitely addressing how to
> > unsubscribe in mailman3, which could then be placed on
> > https://lists.wikimedia.org/unsubscribe
> > <https://lists.wikimedia.org/unsubscribe> for benefit of all lits.
>
> I'd prefer to have a wiki page just so it's easier for people to edit
> and sysadmins don't stand in the way of getting content updated, and we
> can make it translatable, etc. We can set up a redirect from that URL to
> the wiki page if that would be convenient.
>
> On the Tor-announce list, they start most emails with a sentence like:
>
> > (If you are about to reply saying "please take me off this list",
> > instead please follow these instructions:
> > https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-announce/
>
> Doing something like that for lists that get a lot of manual unsubscribe
> requests might be beneficial.
>
> -- Kunal
>

We have all seen people doing the wrong thing to unsubscribe. The part that
I found concerning was that multiple people mentioned that they had tried
but failed.

Best regards
___
Listadmins mailing list -- listadmins@lists.wikimedia.org
To unsubscribe send an email to listadmins-le...@lists.wikimedia.org

To request technical changes for a specific list, please instead create a task 
in Phabricator. See https://meta.wikimedia.org/wiki/Mailing_lists

[List admins] Preparing an unsubscribe 101

2021-08-18 Thread Platonides
Could we have a tutorial on how to unsubscribe?

I am hearing from multiple confused users who are unable to unsubscribe by
themselves.

A couple of cases I processed today:
Subject: Please unsubscribe me from all of this I’ve tried and I can’t
"Please remove me from everything
Thank you "

"(...) I went to the link as provided for my unsubscription but was found
that no such email existed with an account. Yet I receive multiple such
emails. (...)"

And they aren't the first ones, either. The list does contain a link
"Unsubscribe:
https://lists.wikimedia.org/postorius/lists/daily-image-l.lists.wikimedia.org/"
(this is the best URL that seems to be available in the Postorius
interface), but the users seem to be misled by the fact that they have to
*create* an account so they can see their subscriptions (for which they
signed up in Mailman 2) and then finally unsubscribe.
The interface does say "If you have not previously logged in, you may need
to set up an account with the appropriate email address.", but that's
apparently not enough (and admittedly, it is not intuitive). Users being
unable to unsubscribe is as old as mailing lists, but the interesting point
here is that they apparently try, yet fail.

So perhaps we should make some content explicitly addressing how to
unsubscribe in Mailman 3, which could then be placed at
https://lists.wikimedia.org/unsubscribe for the benefit of all lists.


Thoughts?
___
Listadmins mailing list -- listadmins@lists.wikimedia.org
To unsubscribe send an email to listadmins-le...@lists.wikimedia.org

To request technical changes for a specific list, please instead create a task 
in Phabricator. See https://meta.wikimedia.org/wiki/Mailing_lists

[Cloud] Re: [Cloud-announce] Debian 11.0 'Bullseye' now available on cloud-vps

2021-08-15 Thread Platonides
You're really quick.

Thanks Andrew!
___
Cloud mailing list -- cloud@lists.wikimedia.org
List information: 
https://lists.wikimedia.org/postorius/lists/cloud.lists.wikimedia.org/


[List admins] Re: Help me with the new interface

2021-08-15 Thread Platonides
Hello Ilario

You need to register an account with the email that was set as
owner/moderator of the mailing list (if you already have an account created
with a different email, it is possible to link several email addresses).
Then you will be able to administer the list. Mailman 3 administration
uses actual user accounts, not a single password shared by all the admins.

Best regards


On Sun, 15 Aug 2021 at 15:38, Ilario Valdelli 
wrote:

> Hi all
>
> In Wikimedia CH we have some lists on the WMF servers but at the moment we
> are lost about how to administer them.
>
>
>
> We used to have a simple password to access the administration console,
> but that console no longer seems to exist.
>
>
>
> Maybe access is now done using one's personal login and password?
>
>
>
> --
>
> Ilario Valdelli
>
> Education Program Manager and Community liaison
>
> Wikimedia CH
>
> Verein zur Förderung Freien Wissens
>
> Association pour l’avancement des connaissances libre
>
> Associazione per il sostegno alla conoscenza libera
>
> Switzerland - 8008 Zürich
>
> Tel: +41764821371
>
> http://www.wikimedia.ch
>
>
> ___
> Listadmins mailing list -- listadmins@lists.wikimedia.org
> To unsubscribe send an email to listadmins-le...@lists.wikimedia.org
>
> To request technical changes for a specific list, please instead create a
> task in Phabricator. See https://meta.wikimedia.org/wiki/Mailing_lists
___
Listadmins mailing list -- listadmins@lists.wikimedia.org
To unsubscribe send an email to listadmins-le...@lists.wikimedia.org

To request technical changes for a specific list, please instead create a task 
in Phabricator. See https://meta.wikimedia.org/wiki/Mailing_lists

[Wiki Loves Monuments] Re: banner diet

2021-08-12 Thread Platonides
Great point, Lodewijk

We should ideally be able to measure the relationship between the sitenotice
and contributors. And, if we currently don't have enough data for that, we
should design WLM in such a way that we will be able to measure it for
future contests.

There is probably a chain of # banner impressions → # visits to contest
page → # users actually paying attention → # users participating

While the absolute number of visits to the contest page coming from the
sitenotice should be relatively simple to count, the number of people who
actually saw the banner will depend on the campaign configuration and the
presence of other campaigns at the same time (e.g. fundraising), plus the
actual traffic received from that country (probably complicated, but it
could be calculated), *plus* each user's configuration hiding the banner, or
the diet restricting the number of times it is shown per device.

Then, we have the additional handicap that the reaction won't be
immediate. Users are unlikely to already have photos of listed monuments
that lack an image sitting on their computer, ready to be uploaded. If a
user is going to actually find a suitable nearby monument in the list, plan
a trip with their camera (next weekend, perhaps?), actually go out and shoot
it, come back, locally review their photos, select the good ones and upload
them... it will take several days, so the effect of the impressions won't be
clear.
Furthermore, repeated impressions may be needed to remind them that they
intended to actually take some photos.
Whereas most people will simply think "Oh, nice project" and let other
people do that work, without wishing to be bothered any more about WLM.

I don't think we actually have the resources to properly measure this, with
all its variables. Perhaps we could recruit some help from the Analytics /
Product Analytics teams?

Best regards

[List admins] Re: Updates on bouncing issues

2021-07-06 Thread Platonides
Nice!

I guess those are spamassassin points? Or is it using a different system?

[List admins] Re: Reviving the defunct Wikimedia Hong Kong list

2021-06-20 Thread Platonides
Note you could as well "create a new list" with "the old name" (i.e. clear
all subscribers and start again). You might consider sending a one-off mail
to the old subscribers as well. It has pros and cons.

Best

[List admins] Re: Name connected to an email?

2021-06-14 Thread Platonides
First, I don't think the way it was used was "secure". I think it could be
changed by the user himself.

Second, the field probably still exists in the database, but a way to
change it is not exposed. The names in quotations Risker mentions are
probably that field, migrated from mailman2.

Third, for such private lists I think we should aim for having:
a) A mapping of the private list and the membership condition (e.g. user
needs to belong to either group A on wiki x or group B in wiki Y). This
could live in puppet, a lists repo, etc.
b) A daily cron which automatically unsubscribes from each private list
those mailman3 users who don't have a wiki email linked to a user with the
applicable permission.

This way, even if moderators lost track of someone no longer being an X (or
made a mistake signing up the wrong user), it would be automatically
corrected after at most 24 hours.
Note the user wouldn't need to use the same email address on-wiki and on
mailman. Just to have mailman know that the wiki mail belongs to the same
mailman account.

Bonus would be not to let a user join the list without the needed
requirement, but that seems more complex.
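
Something like this could do (untested sketch: the mailman_* helpers are
hypothetical placeholders for the mailman3 REST API, while list=users is
the real MediaWiki API module):

<?php
// Daily sync: drop subscribers that no longer hold the required right.
$lists = [
    // list name => where and which group to check (example mapping)
    'checkusers-xx' => [
        'api'   => 'https://xx.wikipedia.org/w/api.php',
        'group' => 'checkuser',
    ],
];

foreach ( $lists as $list => $rule ) {
    // Hypothetical helper: returns email => linked wiki username
    foreach ( mailman_subscriber_wiki_accounts( $list ) as $email => $user ) {
        $url = $rule['api'] . '?action=query&list=users&format=json'
            . '&formatversion=2&usprop=groups&ususers=' . rawurlencode( $user );
        $info = json_decode( file_get_contents( $url ), true );
        $groups = $info['query']['users'][0]['groups'] ?? [];
        if ( !in_array( $rule['group'], $groups, true ) ) {
            // Hypothetical helper wrapping the mailman3 REST API
            mailman_unsubscribe( $list, $email );
        }
    }
}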

Best regards

[List admins] Re: Subscription disabled due to a bounce score?

2021-06-14 Thread Platonides
That's probably because that's not a "normal" list to which it is sent. I
think Mailman2 didn't even have the ability to automatically "unsubscribe"
list owners/moderators.

[List admins] Re: Subscription disabled due to a bounce score?

2021-06-13 Thread Platonides
I'm thinking mailman3 should probably keep a list of user bounces. So that
when a user finds out they got unsubscribed, and logs into the interface,
mailman could show them the evidence on why it got disabled: «see: the last
dozen emails I sent you, at dates D₁... D₁₂, failed with reason "no such
user", "mailbox is full", "i'm a teapot", "This looks like spam"... What
did you expect me to do if you don't accept my mails?» So that the user
could confront their mail provider on why it is rejecting the emails they
want to receive.

Best regards

PS: Add me to the list of people congratulating you on the good work
migrating the lists!

[List admins] Re: Subscription disabled due to a bounce score?

2021-06-05 Thread Platonides
First, I doubt you would get someone to change the hotmail filters. It's
not like they added a manual rule. It works on autopilot. The spamfilter
*learns* that such message is bad. Not even the Microsoft guys really know
how it decides that. And even if the ip address was blocked, the
technicians would just be allowed to "mitigate" the ip, and are not allowed
to deviate from the runbook.

Second, the migration to mailman3 probably changed the messages enough that
they "lost" a lot of the reputation gained through the years. So it may
have learnt that a message with certain wikimedia headers was ok, but mm3
emails are different enough that they don't hit.

Third, mailman *is* sending spam messages to the list owners. Those are
requested, but they are being sent, so hotmail does have a point. *Do not
mark them as spam*: even if it is spam sent to -
ow...@list.wikimedia.org, you *want* to receive them; don't make hotmail
learn that email sent by lists.wikimedia.org is spam.

Mails blocked from the list and sent to the list owners are problematic
(and spammy) enough that I think mailman should offer an option not to
include the full email that was not passed to the list, but only its
subject. But that'd be a feature request.
In the end, if you are a list admin, you have requested to receive them,
even if they are spammy. You should be able to configure your mail for
that. However, you may be using a provider which doesn't allow that kind of
flexibility.

By the way, gmail is marking the previous mails of this list from Leon
Haanstra for me as: "This message was not sent to Spam because of a filter
you created." (they are not always obeying these filters ☹, but I am glad
they at least allow them). This may be related to gmail considering list
mail flow more spammy than usual, or simply because it's a yahoo.com email
address and thus it fails the silly DMARC policy they added.


Regards

Re: [Wiki Loves Monuments] Commons app WLM integration - help needed

2021-04-30 Thread Platonides
I think this would need a property mapping a contest with the used
identifiers (basically, that table but in wikidata form).
So for instance,"Wiki Loves Monuments 2021 Sweden" could be defined as
including the "listed building in Sweden (Q328070)" and the "listed
historical ship in Sweden (Q16501309)", which use P1260 and P2317
identifiers respectively (whereas a few years earlier, maybe only Q328070
were included).

Cheers


Re: [List admins] Mailman3 upgrade beginning soon, action needed

2021-04-22 Thread Platonides
Regarding the url change:
>  will change to
> <
https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/>


Is there a technical reason which requires including the list domain in the
name?
It would seem preferable by far to use instead:
https://lists.wikimedia.org/postorius/lists/wikitech-l


and since all (?) our lists are using @lists.wikimedia.org, it's not like we
need the domain there to disambiguate it.

Cheers


Re: [Wiki Loves Monuments] Next year: my role

2021-01-11 Thread Platonides
Thanks Lodewijk for your hard and tireless work on WLM for so many years.
I remember when we started WLM internationally. It was just an idea,
something that had been done in the Netherlands the year before, a crazy
idea to expand it into an international contest. And look at it nowadays:
it's almost taken for granted that there will be a WLM in September (a trap
in itself, participation in further years is easier than the first one, but
still, there's a lot of work!).

Thanks again Lodewijk, and looking forward for the new team


[Wikies-l] Record number of active editors

2020-08-01 Thread Platonides
Hello,

In May 2020 we had the highest number of active editors* in the whole
history of the Spanish Wikipedia (5567):

https://stats.wikimedia.org/#/es.wikipedia.org/contributing/active-editors/normal|line|all|~total|monthly

Other wikis show similar participation.
The French Wikipedia likewise has an all-time high (6946). The English
Wikipedia (44352) hit the maximum of the last 10 years (since March 2011:
44839), and the Portuguese Wikipedia reached this June (2046) another
10-year high (since January 2011: 2157).


* For the purposes of this metric, active editors are those registered
users (not bots) with five or more edits in a given month
https://meta.wikimedia.org/wiki/Research:Wikistats_metrics/Active_editors

Regards


Re: [Cloud] Requesting help or advice regarding security on Toolforge

2020-08-01 Thread Platonides
Hello Keren

Keeping the code of a Wordpress site up-to-date is not generally hard.
However, in this case, and even more relevant given your query, I must ask:
why use wordpress? This seems a static set of pages, so it could be stored
as .html pages without the overhead of running wordpress... or the security
issues that might happen if that was abused. No code = no security issues.*
This could be developed as html pages (you are probably using raw html
markup already), or if you fancy preparing that in wordpress, you could
have a wordpress for editing the tutorial, and then have a process that
generated a static copy of its content before publishing on toolforge.
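
The statify process itself can be tiny; a sketch assuming wget is available
and using placeholder URL and paths:

<?php
// Mirror the (non-public) wordpress into plain files, then publish.
$source = 'http://localhost:8080/tutorial/';   // placeholder URL
$stage  = '/tmp/tutorial-static';              // placeholder path

// --mirror follows links, --adjust-extension writes .html files,
// --convert-links rewrites URLs so the copy works on its own.
exec( 'wget --mirror --adjust-extension --convert-links --no-parent'
    . ' --no-host-directories --directory-prefix=' . escapeshellarg( $stage )
    . ' ' . escapeshellarg( $source ), $out, $status );

if ( $status === 0 ) {
    exec( 'rsync -a --delete ' . escapeshellarg( "$stage/tutorial/" )
        . ' ' . escapeshellarg( $_SERVER['HOME'] . '/public_html/tutorial/' ) );
}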

Best regards


* Or almost. I'm glossing over potential vulnerabilities on the web server,
or vulnerable javascript that didn't properly sanitize input. But removing
the web application makes the system hundreds of times safer.

Bug#966293: libnginx-mod-http-subs-filter not working with Content-Encoding: identity

2020-07-25 Thread Platonides
Package: libnginx-mod-http-subs-filter
Version: 1.14.2-2+deb10u1
Severity: normal

When a page is sent with a Content-Encoding header, mod-http-subs-filter
as packaged by Debian does not process the content, even if the encoding
provided is 'identity' (the use of no transformations whatsoever).

This was reported and fixed by indust in April 2016, [1] with the patch
merged upstream the next day. [2]

However, this is still present in Debian since it is using the code
from 2014 (version v0.6.4), which is the last one provided by the project.

Ideally, upstream would tag a new release, [3] with Debian updating to that
(there were only a few changes in these years, but it would be nice to
have the other ones, too).



1- https://github.com/yaoweibin/ngx_http_substitutions_filter_module/pull/20
2- 
https://github.com/yaoweibin/ngx_http_substitutions_filter_module/commit/bc58cb11844bc42735bbaef7085ea86ace46d05b
3- https://github.com/yaoweibin/ngx_http_substitutions_filter_module/issues/37



Re: [Wiki Loves Monuments] Learning pattern from one of our national jury member

2020-03-11 Thread Platonides
Hello Bodhisattwa

I'm not convinced Montage should be *the* communications platform for the
jurors. Actually, I think that having each juror score the images
independently at first is a good thing, as it avoids potential bias based
on other jurors' scores.
It seems that your national competition used a direct output of the top-10
images as rated by every juror combined, rather than e.g. taking the
resulting top-20 and then having the jurors discuss the merits of each one
to make them into a top-10.

Finally, I am a bit concerned about your scenario of a jury member who
«does not want to join calls or does not want to communicate through other
private channels» and «so that communication does not depend on the will of
organizing team or jury team members». If a jury member refuses to
communicate with other jury members, that's a social problem, not a
technical one.
You could make it easier to find all the tools needed (such as adding a
link to the mailing list from montage), and it can be quite hard to get a
suitable time to get everyone online. In that case asynchronous
communication (like mailing lists) may be needed. But ultimately, your jury
members should be willing to communicate with each other. And the
expectation that they should do so, and may perhaps need to join X
online meetings, should be clear from the start.

What makes you think that someone who refused to communicate with other
members would however be willing to communicate with them if done through
montage?

Cheers

Re: [Cloud] [Cloud-announce] [Toolforge] 2020 Kubernetes cluster migration complete

2020-03-06 Thread Platonides
Great work

Kudos to everyone involved!


On Fri, 6 Mar 2020 at 20:54, Bryan Davis  wrote:

> On 2020-03-03, Brooke completed the automatic migration phase of the
> 2020 Kubernetes migration by moving the last workloads from the legacy
> Kubernetes cluster to the 2020 Kubernetes cluster [0].
>
> All Toolforge tools using `webservice --backend=kubernetes ...` and/or
> manually maintained Kubernetes objects are now running on the 2020
> Kubernetes cluster. The Toolforge admin team is in the process of
> tearing down the legacy cluster and cleaning up various documentation
> and tooling related to it [1].
>
> This project involved a lot of hard work that most of the Toolforge
> community did not see. Brooke and Arturo started planning things over
> a year ago [2] to ensure that the Toolforge admin team would be able
> to complete this migration with a minimum amount of disruption to
> tools and their maintainers. Along the journey they researched
> Kubernetes best practices and recommendations, read and re-read
> numerous tutorial and how-to docs, and designed a completely new
> process to automate the deployment of Kubernetes in Toolforge. They
> also sought and received help from other Toolforge admins, Wikimedia
> Foundation staff, and technical volunteers. This was a truly
> collaborative effort.
>
> I am very happy to say that in my opinion we have a well automated and
> monitored Kubernetes cluster in Toolforge today. There are many more
> features that we will continue to work on as we try to make Kubernetes
> use in Toolforge easier for everyone, but we can only do that work
> because we now have this solid base to build on. I look forward to
> announcements of many more features in the coming months.
>
> Thank you to our alpha and beta testers who found more edge cases and
> made good suggestions for simplifying things. Thank you all for your
> patience and understanding when things did not go quite as planned
> during this process. And finally thank you in advance for the edits
> that will be made to help pages on Wikitech and elsewhere as we all
> work on bug #1 (improving documentation).
>
>
> [0]: https://phabricator.wikimedia.org/T246519
> [1]: https://phabricator.wikimedia.org/T246689
> [2]: https://phabricator.wikimedia.org/T214513
>
> Bryan, on behalf of the Toolforge admin team and the Wikimedia Cloud
> Services team
> --
> Bryan Davis  Technical Engagement  Wikimedia Foundation
> Principal Software Engineer   Boise, ID USA
> [[m:User:BDavis_(WMF)]]  irc: bd808
>
> ___
> Wikimedia Cloud Services announce mailing list
> cloud-annou...@lists.wikimedia.org (formerly
> labs-annou...@lists.wikimedia.org)
> https://lists.wikimedia.org/mailman/listinfo/cloud-announce
> ___
> Wikimedia Cloud Services mailing list
> Cloud@lists.wikimedia.org (formerly lab...@lists.wikimedia.org)
> https://lists.wikimedia.org/mailman/listinfo/cloud

Re: [Wikies-l] Please remove my email address from your mailing list. Thank you.

2020-03-02 Thread Platonides
The instructions for unsubscribing are analogous to the ones you followed
to subscribe:
https://lists.wikimedia.org/mailman/listinfo/wikies-l?language=es#manage
___
Wikies-l mailing list
Wikies-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikies-l


Re: [Mediawiki-api] So 0 is only regular page right?

2020-02-02 Thread Platonides
That is just regular content. You should look at the conventions of the
wiki on what sections to place in a page and the content that should go in
each of them.

On Sun, Feb 2, 2020 at 2:22 PM Furkan Gözükara 
wrote:

> Where can i find descriptions of titles?
>
> for example ===Initialism===
>
> what is this title is used for?
>
> On Sun, Feb 2, 2020 at 3:57 AM Platonides  wrote:
>
>> At the top of the XML you are looking at, you should have an element
>> listing the namespaces of the wiki.
>>
>> On Fri, Jan 31, 2020 at 10:42 PM Furkan Gözükara 
>> wrote:
>>
>>> Betacommand  where can I learn for Turkish one? mean tr.wiktionary.org
>>>
>>> On Fri, Jan 31, 2020 at 6:15 PM Betacommand 
>>> wrote:
>>>
>>>> That 18 can vary depending on the individual wiki you are working on.
>>>> Namespaces are a fairly wide concept
>>>>
>>>> On Fri, Jan 31, 2020 at 7:58 AM Furkan Gözükara <
>>>> monstermmo...@gmail.com> wrote:
>>>>
>>>>> You mean this page i suppose >
>>>>> https://www.mediawiki.org/wiki/Manual:Namespace
>>>>>
>>>>> It says there are 18 name spaces. 0 (Main) "Real" content articles.[1]
>>>>>
>>>>> On Fri, Jan 31, 2020 at 3:54 PM Betacommand 
>>>>> wrote:
>>>>>
>>>>>>
>>>>>> Look up mediawiki namespaces. That’s a fairly wide statement
>>>>>>
>>>>>> On Fri, Jan 31, 2020 at 7:28 AM Furkan Gözükara <
>>>>>> monstermmo...@gmail.com> wrote:
>>>>>>
>>>>>>> If  is other than 0, it means it is some other special page like
>>>>>>> category talk etc right?
>>>>>>>
>>>>>>> Are there any documentation for this?
>>>>>>>
>>>>>>>
>>>>>>> ___
>>>>>>> Mediawiki-api mailing list
>>>>>>> Mediawiki-api@lists.wikimedia.org
>>>>>>> https://lists.wikimedia.org/mailman/listinfo/mediawiki-api
>>>>>>>
>>>>>> ___
>>>>>> Mediawiki-api mailing list
>>>>>> Mediawiki-api@lists.wikimedia.org
>>>>>> https://lists.wikimedia.org/mailman/listinfo/mediawiki-api
>>>>>>
>>>>> ___
>>>>> Mediawiki-api mailing list
>>>>> Mediawiki-api@lists.wikimedia.org
>>>>> https://lists.wikimedia.org/mailman/listinfo/mediawiki-api
>>>>>
>>>> ___
>>>> Mediawiki-api mailing list
>>>> Mediawiki-api@lists.wikimedia.org
>>>> https://lists.wikimedia.org/mailman/listinfo/mediawiki-api
>>>>
>>> ___
>>> Mediawiki-api mailing list
>>> Mediawiki-api@lists.wikimedia.org
>>> https://lists.wikimedia.org/mailman/listinfo/mediawiki-api
>>>
>> ___
>> Mediawiki-api mailing list
>> Mediawiki-api@lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/mediawiki-api
>>
> ___
> Mediawiki-api mailing list
> Mediawiki-api@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/mediawiki-api
>


Re: [Mediawiki-api] So 0 is only regular page right?

2020-02-01 Thread Platonides
At the top of the XML you are looking at, you should have an element
listing the namespaces of the wiki.
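
Alternatively, you can ask the live API; meta=siteinfo returns the same
mapping. A quick sketch:

<?php
// List the namespaces of tr.wiktionary.org straight from the API.
$url = 'https://tr.wiktionary.org/w/api.php'
    . '?action=query&meta=siteinfo&siprop=namespaces&format=json&formatversion=2';
$data = json_decode( file_get_contents( $url ), true );

foreach ( $data['query']['namespaces'] as $ns ) {
    // id 0 is the main (content) namespace; the rest are talk pages,
    // categories, templates, etc.
    printf( "%d\t%s\n", $ns['id'], $ns['name'] === '' ? '(main)' : $ns['name'] );
}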

On Fri, Jan 31, 2020 at 10:42 PM Furkan Gözükara 
wrote:

> Betacommand  where can I learn for Turkish one? mean tr.wiktionary.org
>
> On Fri, Jan 31, 2020 at 6:15 PM Betacommand  wrote:
>
>> That 18 can vary depending on the individual wiki you are working on.
>> Namespaces are a fairly wide concept
>>
>> On Fri, Jan 31, 2020 at 7:58 AM Furkan Gözükara 
>> wrote:
>>
>>> You mean this page i suppose >
>>> https://www.mediawiki.org/wiki/Manual:Namespace
>>>
>>> It says there are 18 name spaces. 0 (Main) "Real" content articles.[1]
>>>
>>> On Fri, Jan 31, 2020 at 3:54 PM Betacommand 
>>> wrote:
>>>

 Look up mediawiki namespaces. That’s a fairly wide statement

 On Fri, Jan 31, 2020 at 7:28 AM Furkan Gözükara <
 monstermmo...@gmail.com> wrote:

> If  is other than 0, it means it is some other special page like
> category talk etc right?
>
> Are there any documentation for this?
>
>
> ___
> Mediawiki-api mailing list
> Mediawiki-api@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/mediawiki-api
>
 ___
 Mediawiki-api mailing list
 Mediawiki-api@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/mediawiki-api

>>> ___
>>> Mediawiki-api mailing list
>>> Mediawiki-api@lists.wikimedia.org
>>> https://lists.wikimedia.org/mailman/listinfo/mediawiki-api
>>>
>> ___
>> Mediawiki-api mailing list
>> Mediawiki-api@lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/mediawiki-api
>>
> ___
> Mediawiki-api mailing list
> Mediawiki-api@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/mediawiki-api
>


Re: [gci-discuss] Is the name of the participant along with the username (nickname) of the participant visible to mentors?

2020-01-25 Thread Platonides
Not at all. Mentors do not see the "real name" field, just the nickname.
Perhaps it is available to Org Admins, but I don't think so.

Look at the user agreement you were presented when signing up (you kept a
copy, right?).
I think you will find that it says the username will be public, but the
"personal information" will be processed by "Google’s trusted service
providers" i.e. not the Organizations.

The organization would need you to tell them in order to know that both
nicknames belong to the same user.

Kind regards


On Fri, Jan 24, 2020 at 8:07 PM Divyansh Agarwal 
wrote:

> Don't worry!! There is nothing like that. Mentors know everything.
>
> Divyansh Agarwal
> :-)
>
> On Sat, Jan 25, 2020, 12:58 AM the innovative one <
> kingwarrior...@gmail.com wrote:
>
>> Hi Team,
>>
>> Actually I have contributed to the Organisation through different
>> channels (like Gitlab) which have a different name (my original name), but
>> I have registered in the competition with a different username (although I
>> entered my actual name in the name field while registering).
>> So, is the name of the participant along with the username (nickname) of
>> the participant visible to mentors?
>>
>> Because if my original name isn't listed with the nickname to the
>> organization, how can they come to know that I have also contributed to
>> their projects outside the contest also.
>>
>> Thanks in advance!
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Google Code-in Discuss" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to gci-discuss+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/gci-discuss/274e9a46-c7a0-4ba6-85e3-27243c65d0f8%40googlegroups.com
>> 
>> .
>>
> --
> You received this message because you are subscribed to the Google Groups
> "Google Code-in Discuss" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to gci-discuss+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/gci-discuss/CAASwXoLFTKdRVhA5R8nfoeChn-VGcdV4YvLLFBwRKa3VoZZoFg%40mail.gmail.com
> 
> .
>



Re: [gci-discuss] Backup of files

2020-01-23 Thread Platonides
Not necessarily. Some may do so, while others won't.

I would expect most contributions to be external to the GCI website (like a
pull request), with most communication on the gci website being ancillary.
If you want some of the data you sent (and you didn't keep a copy), I would
recommend you fetch it before the site is closed.

If by "data" you mean the registration information, rather than the task
submissions, do note that mentors don't have access to that, they were sent
to Google. The Org will only be able to see your nick (plus anythngn you
told them in a message, obviously),

Google usually destroys the website some months after the contest
completely finishes

Kind regards



Re: [Wiki Loves Monuments] Announcement of WLM International Winners

2020-01-12 Thread Platonides
Oh, of course!
One would think that the detailed explanation of Mohammad should have been
enough so that everyone would come up with the right table at first try. I
failed.

Kind regards

Re: [Wiki Loves Monuments] Announcement of WLM International Winners

2020-01-12 Thread Platonides
Thanks to this year's WLM International team!

I had some trouble figuring out the time for each one, from remembering
that 12 a.m. means midnight to not missing any event. So here is the table
I drafted for the convenience of everyone else.

00:00 #25
00:30 #24
01:00 #23
01:30 #22
02:00 #21
02:30 #20
03:00 #19
03:30 #18
04:00 #17
04:30 #16
05:00 #15

08:30 #14
09:00 #13
09:30 #12
10:00 #11
10:30 #10
11:00 #9
11:30 #8
12:00 #7
12:30 #6
13:00 #5
13:30 #4
14:00 #3
14:30 #2
15:00 #1


Cheers

Re: [Cloud] Umlaut issue

2019-12-29 Thread Platonides
Your account is just dhun

Instead of creating a new account, it would be much simpler to just change
your cn and sn to remove the umlaut (maybe you could even change them
yourself). That said, I would very much prefer the Horizon bug to be fixed,
rather than starting to use workarounds.

Kind regards

Re: [gci-discuss] certificate

2019-12-28 Thread Platonides
On Sat, Dec 28, 2019 at 8:36 PM Fatima Atta  wrote:

> when are we going to receive the certificates for the task that we have
> done and are approved?
>

Digital certificates will be sent via email in late January.

https://developers.google.com/open-source/gci/faq#receiveprizes



Re: [gci-discuss] Re: Quality

2019-12-28 Thread Platonides
Note that only the digital certificate and the t-shirt, awarded after
completion of three tasks, are based on the number of tasks completed. The
rest of the prizes are based on evaluation by the mentors (i.e. people),
not by a machine that would blindly look only at the number of tasks.



Re: [Cloud] [Toolforge] New tool to browse the Toolforge Docker registry

2019-12-20 Thread Platonides
Cool. Although it is very confusing to have two different yet so similarly
named tools.
Could they perhaps be renamed to toolforge-docker-registry and
ci.docker-registry?

In the latter, would it be possible to view the contents / Dockerfiles
without actually pulling the image?

Cheers

Re: [gci-discuss] Mentor

2019-11-06 Thread Platonides
On Wed, Nov 6, 2019 at 11:18 PM Dev Vaghani  wrote:

> Can we apply for mentor for GCI for more than one organization?
>

Technically, you can.
However, I don't think you should be applying as a mentor. GCI mentors are
chosen by the respective Org Admins from existing community members. Maybe
you are very active on two organizations chosen in GCI, in which case you
might be contacted by them.
But if nobody at those orgs has ever heard from you, your chances to mentor
in their Org are basically zero, so you should spare both sides that
process.

Kind regards



Re: [List admins] Invitation to test mailing list mirroring and emulation in Wikimedia Space

2019-10-31 Thread Platonides
Hello Quim

Given that this is currently implemented on WMCS, I would think that their
terms would prevail and certain private mailing lists would NOT be allowed
to be copied there at all (for instance a list where checkusers discussed
vandal ranges), since it runs on an "untrusted" network/servers that shall
not contain such private data. Other private mailing lists may qualify,
depending on what is actually discussed there (e.g. I would see no issue
with a signpost editorial comments mailing list). And obviously, the
content of public lists is simply... public.

Best regards

Re: [Wiki Loves Monuments] a peculiar problem with the "newly uploaded" condition

2019-09-27 Thread Platonides
If the new image is exactly the same as the one we had, it makes sense to
delete one, yes. And usually you would remove the last one to be uploaded.
For instance, some people may be using the image under the old name. We
could delete it but place a redirect to the other copy; that seems a good
compromise. Or even merge both copies at the old name, so the uploader
appears in the history.

Kind regards

Re: [Wikidata] Personal news: a new role

2019-09-22 Thread Platonides
Congratulations, Denny!


Re: [Wiki Loves Monuments] permission for contributions

2019-09-12 Thread Platonides
An experienced user may use a different upload form, as long as the image
ends up with the appropriate template.
At least, that's what I have always done where i am involved:
- Let the rules say that the images must use a free license such as *X, Y,
Z...* but explicitly allow any license accepted in Commons.
- Suggest using the WLM upload campaign, but document that they may use a
different one as long as they use {{Wiki Loves Monuments 2019|country-code}}.
- Put the burden on the uploader to ensure that they are using the right
identifier but otherwise allow that it gets fixed by someone else.

By focusing the rules on the result instead of the process, we avoid
needing to verify *how* it was done; only the final result matters. It also
allows advanced users to use other means, such as uploading using Commonist.

Other country contests may -perhaps inadvertently- have set more strict
rules, though.

Kind regards

Re: [Cloud] Downloading Wikipedia Articles with Embedded URLs in Cyrillic

2019-09-04 Thread Platonides
How are you downloading them?

The encoding of the URLs would be the same as the actual article
(UTF-8), so I don't see how you could end up in a situation where the
Cyrillic text is fine but the Cyrillic URLs are not.
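
You can verify that quickly: PHP's rawurlencode(), for instance, produces
exactly the percent-encoded UTF-8 that the wiki URLs use:

<?php
// Titles are just percent-encoded UTF-8 in the URL, nothing more.
$title = 'Москва';
$encoded = rawurlencode( $title );  // %D0%9C%D0%BE%D1%81%D0%BA%D0%B2%D0%B0
echo "https://ru.wikipedia.org/wiki/$encoded\n";

// Decoding gives the original Cyrillic back, byte for byte.
var_dump( rawurldecode( $encoded ) === $title );  // bool(true)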

If you could provide us the steps you are following for downloading the
articles, we may be able to test it.

Best regards

Re: [Cloud] SQL Question

2019-08-13 Thread Platonides
Why not simply do the comparison client-side?

Re: [Cloud] Edits using Proxy Servers

2019-02-04 Thread Platonides
Some geolocation databases have a category for proxies; for instance
MaxMind uses the A1 code for the "country" Anonymous Proxy:
https://dev.maxmind.com/geoip/legacy/codes/iso3166/

(Of course, these categorizations would need to be accurate (both proxy
detection and country detection) to get meaningful results)

I find your consideration that proxy IP addresses are invalid strange,
though. Maybe considering just the edits that were not reverted would be a
more useful metric?
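
For illustration, a sketch using the PECL geoip extension with the legacy
country database ($editIps being whatever list of IP addresses you
extracted):

<?php
// Count edits whose IP geolocates to the "Anonymous Proxy" pseudo-country.
$proxy = 0;
$total = 0;
foreach ( $editIps as $ip ) {
    // Returns a country code like 'ES', 'A1' for anonymous proxies,
    // or false when the address is unknown.
    if ( geoip_country_code_by_name( $ip ) === 'A1' ) {
        $proxy++;
    }
    $total++;
}
printf( "%d of %d edits came from anonymous proxies\n", $proxy, $total );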

[Wikidata-bugs] [Maniphest] [Commented On] T208236: IP unblock requested for 20 new accounts being created at University of Edinburgh Wikidata event.

2018-10-29 Thread Platonides
Platonides added a comment.

If each user is given a different public IP, there shouldn't be throttling
problems...

https://phabricator.wikimedia.org/T208236


Re: [Mediawiki-api] Question about summary regex in api for ML dataset

2018-10-12 Thread Platonides
Are you sure you are getting html from the XML and not *wikitext*?

Assuming you are working with wikitext, and want everything up to the first
heading, and handwaving things like a section set by a template, you could
break at the first line matching /^(={2,6})[ \\t]*(.+?)[ \\t]*\1\\s*$/m
(see the function Parser::doHeadings below).
In practice, splitting at "\n==" will give you the right result on 99% of
articles.

If the library is really giving you html, it's even easier: split the html
at the first <h2>.

Note that the wikitext will contain many non-textual characters like
templates, tables, wikitext formatting, references... that you'd need to
clean up before applying your models.
However, other projects have done this in the past (sorry, I have no links
to them), so I would either do a very basic cleaning, or reuse what
others made.

Best regards

===
public function doHeadings( $text ) {
	for ( $i = 6; $i >= 1; --$i ) {
		$h = str_repeat( '=', $i );
		// Trim non-newline whitespace from headings
		// Using \s* will break for: "==\n===\n" and parse as <h2>=</h2>
		$text = preg_replace( "/^(?:$h)[ \\t]*(.+?)[ \\t]*(?:$h)\\s*$/m",
			"<h$i>\\1</h$i>", $text );
	}
	return $text;
}
===
https://phabricator.wikimedia.org/source/mediawiki/browse/master/includes/parser/Parser.php$1672
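
For completeness, a sketch of the crude split mentioned above (not a full
parser: headings produced by templates will slip through):

<?php
// Everything before the first heading line of the wikitext.
function leadSection( string $wikitext ): string {
    $parts = preg_split( '/^={2,6}[ \t]*.+?[ \t]*={2,6}[ \t]*$/m', $wikitext, 2 );
    return trim( $parts[0] );
}

echo leadSection( $wikitext );  // $wikitext: the page source you fetched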


Re: [Mediawiki-api] Question about summary regex in api for ML dataset

2018-10-12 Thread Platonides
Hello Daniel

I'm afraid I'm not sure what you are trying to do. What exactly do you want
to extract? The section names, the introduction sections (section 0),
something different... ?

Kind regards


Re: [Mediawiki-api] Question about summary regex in api for ML dataset

2018-10-12 Thread Platonides
Those \1\2 are literal bytes. You would do:

$regexp = '/^(.*?)(?=\x01\x02)/s';

But those bytes are not present in the original wikitext; they are set
by ExtractFormatter:
$html = preg_replace( '/\s*(


Re: [Wikies-l] Wikies-l Digest, Vol 163, Issue 2

2018-06-26 Thread Platonides
Hello Roberto

Alejandro sent the first email both to the WM-ES members list and to the
Wikipedia one. Then ecemaml replied from the point of view of a WM-ES
member, but answering to both lists; that's why the reply also arrived
here. There is no forwarding set up between the lists.

Regards


[Wikidata-bugs] [Maniphest] [Commented On] T138371: WordPress plugin to associate tags with Wikidata IDs

2018-05-30 Thread Platonides
Platonides added a comment.

Hello

Thanks @Zeko for his work tackling this development. We have quickly tested
the current plugin and it seems there is still ample room for improvement.
In particular, the entire lists of tags and categories disappear from the
UI each time you access it, and the search for items in Wikidata does not
seem to work. :/

Cheers

https://phabricator.wikimedia.org/T138371


[Wikidata-bugs] [Maniphest] [Commented On] T188045: wdqs1004 broken

2018-03-12 Thread Platonides
Platonides added a comment.

Well, if the server itself is needed, it will be doing its work with a
different IP address than the one of wdqs1004, since it would have been
suffering the same connection issues as wdqs1004 (just the other half of
the time).

https://phabricator.wikimedia.org/T188045


[Wikidata-bugs] [Maniphest] [Edited] T188946: High-level discussion for the direction of structured licensing

2018-03-05 Thread Platonides
Platonides updated the task description. (Show Details)

CHANGES TO TASK DESCRIPTION
...2. Adopt a structured license selection process (like
[[http://rightsstatements.org|rightsstatements.org]]). This allows for very
effective licensing selection, presentation, and reuse, but limits
potential licensing variations.

https://phabricator.wikimedia.org/T188946


Re: [Wiki Loves Monuments] The upcoming couple of weeks

2017-12-08 Thread Platonides
Have you checked that you can publish those photos on instagram?

4. You represent and warrant that: (i) you own the Content posted by you on
or through the Service or otherwise have the right to grant the rights and
licenses set forth in these Terms of Use;

1. Instagram does not claim ownership of any Content that you post on or
through the Service. Instead, you hereby grant to Instagram a
non-exclusive, fully paid and royalty-free, transferable, sub-licensable,
worldwide license to use the Content that you post on or through the
Service, subject to the Service's Privacy Policy, available here
http://instagram.com/legal/privacy/, including but not limited to sections
3 ("Sharing of Your Information"), 4 ("How We Store Your Information"), and
5 ("Your Choices About Your Information").

https://help.instagram.com/478745558852511


The rights they request may be a subset of those already granted by the
copyright author in the chosen license... but I wouldn't be surprised if
there was actually a legal conflict in their terms, which would preclude
third parties from posting them there (e.g. instagram dropping the link to
the license).


Best

Re: [List admins] Dutch mailing list Moderators-nl broken

2017-12-02 Thread Platonides
Yep, that's the right thing to do. Probably related to DMARC configuration
of the list.

Your email to this list arrived without issue :)

Cheers

On Sun, Dec 3, 2017 at 12:00 AM, Trijnstel wp  wrote:

> Hi all,
>
>
> It seems that the mailing list Moderators-nl, meant for administrators on
> the Dutch Wikipedia, is broken. I created a Phabricator ticket:
> https://phabricator.wikimedia.org/T181906 If that wasn't the right thing
> to do, please say so. Thanks.
>
>
> Trijnstel
>
>
>
> ___
> Listadmins mailing list
> Listadmins@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/listadmins
>
>


Re: [Cloud] [Cloud-announce] Wiki Replica c1.labsdb and c3.labsdb to be shutdown 2017-12-13

2017-10-18 Thread Platonides
> It is not a happy thing for us to force anyone to change their
> software, but as explained in the wiki page [0] we can not find a
> reliable method to ensure that the same user created tables are
> available on all three of the new backend servers

Why not provide a fourth host and have those three servers act as slaves of it?

Writes go to the first one, but reads and joins can go to the replicas.

I'm afraid that "make the JOINs in user space" may end up on some
cases with the app fetching all the rows of the tool table or the wiki
replica in order to perform a JOIN that used to be straightforward.
And the tools aren't really the place to implement all kinds of
partial-fetch implementations and query estimates (without EXPLAIN,
even!). Unless you know of such an efficient user space JOIN
implementation, perhaps?
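
For reference, this is roughly what the simplest replacement of a one-line
JOIN looks like (sketch: my_worklist is a made-up tool table, $toolDb and
$replicaDb are open mysqli connections, error handling omitted):

<?php
// User-space "join": read the keys from the tool's own table, then fetch
// the matching rows from the wiki replica in IN() batches.
$ids = [];
$res = $toolDb->query( 'SELECT page_id FROM my_worklist' );
while ( $row = $res->fetch_assoc() ) {
    $ids[] = (int)$row['page_id'];
}

foreach ( array_chunk( $ids, 500 ) as $chunk ) {
    $in = implode( ',', $chunk );
    $pages = $replicaDb->query(
        "SELECT page_id, page_title FROM page WHERE page_id IN ($in)"
    );
    while ( $page = $pages->fetch_assoc() ) {
        // ... merge with the tool-side data here ...
    }
}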

Regards


Re: [Cloud] [Labs-l] chmod 0000 files

2017-10-16 Thread Platonides
How is the python script that creates the files being run? What is its umask?

I suspect it may run under different conditions, and sometimes end up
with a 777 umask.
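
The mode a file ends up with is the requested mode with the umask bits
cleared, so a 777 umask yields exactly those chmod-0000 files. A quick
demonstration (in PHP, but python's os.umask behaves the same):

<?php
// mode_on_disk = requested_mode & ~umask
$old = umask( 0777 );           // pathological umask, as suspected above
touch( '/tmp/umask-demo' );     // asks for 0666
printf( "%04o\n", fileperms( '/tmp/umask-demo' ) & 0777 );  // prints 0000
umask( $old );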


Re: [Cloud] Database queries via PHP

2017-10-07 Thread Platonides
Did you read all results from the second query?

It may be waiting for you to read received data before sending the new query.

What are you using, MYSQLI_USE_RESULT or MYSQLI_STORE_RESULT?
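
With MYSQLI_USE_RESULT the rows stay on the server until you fetch them,
and mysqli will refuse to send a new query ("Commands out of sync") until
the previous result is fully read or freed. A sketch, with $mysqli being
your open connection:

<?php
// With an unbuffered result you must drain or free it before the
// next query on the same connection.
$res = $mysqli->query( 'SELECT rev_id FROM revision', MYSQLI_USE_RESULT );
while ( $row = $res->fetch_assoc() ) {
    // ... process each row ...
}
$res->free();   // without this (or reading to the end), the next call fails

$mysqli->query( 'SELECT 1' );   // now safe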

Regards



Re: [Wikilovesearth] Seeking urgent help for Wiki Loves Africa

2017-09-29 Thread Platonides
Hello Florence

I have set up a campaign based on your data. It is available at
https://commons.wikimedia.org/wiki/Special:UploadWizard?campaign=wla

Please note that you need to create Template:Wiki Loves Africa 2017

Also, tell me if you want a default category to be added to the images.

Should the UploadWizard be asking both the country where the photo was
taken and the uploader's country? What should it do with them? Is
there a template to be used to wrap the answers?

It can be refined later, but hopefully, this should be enough for
getting uploads for the time being :)

Best regards



On Sat, Sep 30, 2017 at 1:32 AM, Florence Devouard
 wrote:
> Hello everyone
>
>
> I need urgent help for the Wiki Loves Africa photo contest.
>
>
> The contest starts in less than 24 hours. Due to family issues, Romaine, who
> set up our site notice and landing pages last years, appears to be missing
> in action.
>
> As a result, we have no site notice set up (which is something annoying, but
> at worse... we will do for a few days without a site notice, just some fun
> on Facebook and Twitter and blogs).
>
> But there is also no dedicated upload wizard (which is significantly more
> annoying as pictures will not get taggued and categorized.
>
>
> So... I am looking for someone who know how to do this and I thought WLE and
> WLM were places where I could find such person maybe...
>
>
> We do not start from nowhere.
>
>
> == Landing pages ==
>
> I know that I do not know how to do it (been there, done that).
> Last year landing pages and messages are listed there :
> https://commons.wikimedia.org/wiki/Commons_talk:Wiki_Loves_Africa_2017#Site_notice_in_2015_and_2016
>
> And I already put the elements that Romaine usually asks me (the language,
> image etc.) here :
> https://commons.wikimedia.org/wiki/Commons_talk:Wiki_Loves_Africa_2017#Site_notice_and_call_in_2017
>
>
> Does anyone know how to do a dedicated upload system ? Does anyone know
> someone who know how to do that ? Who can help us ?
>
>
> == Banners==
>
> Banner wise, we used two banners last year, which may be reused probably in
> a very similar way.
>
> https://meta.wikimedia.org/w/index.php?title=Special:CentralNotice=noticeDetail=WLAfrica+2016
>
> If someone knows how to create the campaign and two banners for 2017 out of
> the 2016 model... that would be awesome.
>
> But I think that I can probably do it myself and if needed, I'll do it
> tomorrow. I am less worried about this one (perhaps I should though ;))
>
>
> Anyway... biggest problem at the moment is really no landing page with
> upload tool.
>
> Your help would be very very very very welcome
>
>
> Florence
>
>
>
> ___
> Wikilovesearth mailing list
> Wikilovesearth@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikilovesearth



Re: [Wiki Loves Monuments] checking commons uploads for malfunctions

2017-09-03 Thread Platonides
Hello Lodewijk

Here is the list of the 13401 images uploaded yesterday that don't
include {{Wiki Loves Monuments 2017}} (and were not flagged as bot):
 https://archivos.wikimedia.es/wikimedia/es/wlm/non-wlm-uploads_20170902.html

For the record, there were 7332 uploads (7451 including bots) that did
embed the {{Wiki Loves Monuments 2017}} template.

I don't know who would volunteer to review them, but if there's
interest, a tool could be built to provide such lists.
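
The core of it is just two API queries; a rough sketch (one request per
file here; a real tool would batch the titles and follow continuations):

<?php
// List recent Commons uploads that don't embed the WLM template.
$api = 'https://commons.wikimedia.org/w/api.php';

$uploads = json_decode( file_get_contents( $api
    . '?action=query&list=logevents&letype=upload&lelimit=50'
    . '&format=json&formatversion=2' ), true );

foreach ( $uploads['query']['logevents'] as $event ) {
    $check = json_decode( file_get_contents( $api
        . '?action=query&prop=templates&titles=' . rawurlencode( $event['title'] )
        . '&tltemplates=' . rawurlencode( 'Template:Wiki Loves Monuments 2017' )
        . '&format=json&formatversion=2' ), true );
    if ( empty( $check['query']['pages'][0]['templates'] ) ) {
        echo $event['title'], "\n";  // not a WLM upload
    }
}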

Best regards


Re: [Xmldatadumps-l] Collecting data on page revisions over time

2017-08-31 Thread Platonides
On Thu, Aug 31, 2017 at 7:50 PM, Jérémie Roquet <jroq...@arkanosis.net>
wrote:

> Hi Platonides,
>
> 2017-08-31 19:40 GMT+02:00 Platonides <platoni...@gmail.com>:
> > On Thu, Aug 31, 2017 at 3:10 PM, Jérémie Roquet <jroq...@arkanosis.net>
> > wrote:
> >>
> >> PS : what could be incredibly useful to dive into articles histories
> >> would be to import them in git², as it would allow the user to see
> >> diffs between revisions the way you see them online, to look for when
> >> a given sentence has been added / removed, etc. There are some very
> >> user-friendly tools to present the histories to non-technical users
> >> once the import has been made.
> >
> > Not as much as you think. I did that once, but the results were worse
> than
> > expected. git (and other scms) diffing is line-based. You have many
> > relatively-independent lines of code, and diff based on that. Whereas on
> > wikipedia articles, each line is a full paragraph, Thus, as soon as
> someone
> > added a sentence (or a word), the full paragraph showed as changed.
>
> Good point, thanks!
>
> Did you try with git's builtin diff UI, or with some other frontend? I
> have never tried on Wikimedia dumps (I really should!) but I have to
> diff XML files with horribly long lines on a regular basis — which is
> something I naively believe to be very close to what diffing Wikimedia
> dumps would look like — and diff-so-fancy and vimdiff do wonders with
> that. Unfortunately, “user-friendly” GUIs like  GitKraken, which I'd
> have recommended to non-technical users, appear to handle diffs as
> poorly as git builtin UI…
>
> Best regards,
>
> --
> Jérémie
>

I think I attempted to use git gui blame, and perhaps git bisect. Not sure
how I finally handled whatever I was looking for; it was a long time ago.
You might be able to get better results with some preprocessing, though.

Cheers


Re: [Wiki Loves Monuments] maps for Wiki Loves Monuments 2017

2017-08-31 Thread Platonides
On Thu, Aug 31, 2017 at 7:23 PM, Cristian Cenci  wrote:

> Have you zoomed until a "very close" level?
> Cause I see monuments in Italy starting from a sub-regional level.
>
> Cristian
>

You are right!
You need to zoom in a lot in order for them to be loaded. Then you can zoom
out and they are still shown even at the full-globe level.
It's probably related to not being able to load too many monuments at once,
but it is confusing. :/

Re: [Wiki Loves Monuments] maps for Wiki Loves Monuments 2017

2017-08-31 Thread Platonides
On Thu, Aug 31, 2017 at 1:01 PM, Mārtiņš Bruņenieks <marti...@gmail.com>
wrote:

> On Thu, Aug 31, 2017 at 12:59 AM, Platonides <platoni...@gmail.com> wrote:
>
>>  However, what surprised me is that got to the map doesn't show any
>> monument :(
>>
>
> This requires to add monuments to Wikidata:
>

The "go to the map" link shows no monument at all. Nothing on Ireland,
nothing on Latvia... whereas browsing differently, they are included.

Re: [Wiki Loves Monuments] maps for Wiki Loves Monuments 2017

2017-08-30 Thread Platonides
Wow, that's really nice.

Great work, Slaporte and yarl!

I guess the "browse countries" and "more countries soon" text will become a
link shortly. However, what surprised me is that got to the map doesn't
show any monument :(


Best regards

Re: [List admins] Fwd: FW: I170821-0616 about "Phidhing scam problem Fwd: [Wikimediauk-l] #4947276 Invoice secondary Notice" has been resolved

2017-08-24 Thread Platonides
"From: ewan.mcand...@ed.ac.uk "
Sigh. Thanks for the mail, Katie.

Here I blame many email clients that show only the real name. «Your name is
"john@example.com"? That's not a problem, I think it will be enough to
show that, no need to additionally mention that your email is
gangs...@evilguys.biz» And there are some really popular ones doing this. :(
Not sure how these will turn out if evilguys.biz set up DMARC, once we
additionally add From-header replacement into the mix.

Obviously, the link from the email leads to a virus download, so be careful
those peeking at it:
https://www.virustotal.com/#/file/17bba5b4fbf997163f1f0f316b5bc08bd1cdde4e8c4211eb8d2bc151b48b546c/detection


Note: I am a bit confused by the mention in the thread of "
ewan.mcand...@ed.ac.uk< liane.eichenber...@buendes-bueroservice.de>": were
there *several* phishing emails with an "Ewan" name?
If they keep impersonating Ewan that way, emails using such a
name could be blocked with a regex in the list config.


Regards


PS: Gmail is complaining that the ed.ac.uk email server doesn't support
STARTTLS. That is something they can implement, unlike avoiding such
messages.


Re: [List admins] Improving Mailman DMARC Compatibility

2017-07-25 Thread Platonides
:(

What will be the behavior when an end user presses reply to one of those
emails from the list?

Should a bofh enable the reject option, will the rejection message properly
explain that «they are sending an email to a public mailing list and their
domain is configured to not allow that, and should they have any issue to
complain to their email provider»?

Quite sad, but rejecting is actually what everybody should be doing instead
of working around it by overwriting the author of the message with the
sender and clobbering the Reply-To (cf. RFC 5322 section 3.6.2), leaving
everyone else with badly mangled emails. The usual rewrite is sketched
below.
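
A sketch of that "Munge From" workaround; all names, addresses and list
names here are hypothetical:

    Original message:
        From: Alice Example <alice@dmarc-strict.example>
    As re-sent by the list:
        From: Alice Example via Examplelist <examplelist@lists.example.org>
        Reply-To: Alice Example <alice@dmarc-strict.example>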

Does it at least leave a pattern that allows a faithful reconstruction?
Could deliveries to the gmane archive be exempted?

There's a place for DMARC, but not at providers whose users post to mailing
lists.

Kind regards
___
Listadmins mailing list
Listadmins@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/listadmins


Re: [List admins] Problem with wikimediafr list

2017-07-16 Thread Platonides
There are some settings that could be used (sketched below), but sadly, the
best solution is not to subscribe to mailing lists with a Yahoo email
address :(
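
The kind of setting meant, assuming Mailman 2.1.18 or later (under Privacy
options → Sender filters); this is one option among several:

    dmarc_moderation_action: Munge From
    # Rewrites the From: header only for posters whose domain publishes a
    # DMARC p=reject policy, as yahoo.com does.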
___
Listadmins mailing list
Listadmins@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/listadmins


[Wikidata-bugs] [Maniphest] [Commented On] T162166: For contintcloud either add RAM or Swap to the instances

2017-04-05 Thread Platonides
Platonides added a comment.
Given that the problem seems to be just the kernel's memory-overcommit check rather than the process actually needing that extra memory, this has a really simple solution:

echo 1 > /proc/sys/vm/overcommit_memory

TASK DETAIL: https://phabricator.wikimedia.org/T162166
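
For reference, a minimal sketch of applying and persisting that setting on a
typical Linux host; the file name under /etc/sysctl.d/ is an arbitrary
choice:

    # Apply immediately (equivalent to the echo above):
    sysctl -w vm.overcommit_memory=1
    # Persist across reboots:
    echo 'vm.overcommit_memory = 1' > /etc/sysctl.d/90-overcommit.conf
    sysctl --system   # reload all sysctl configuration files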
Wikidata-bugs mailing list
Wikidata-bugs@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata-bugs


[MediaWiki-commits] [Gerrit] mediawiki...OpenStackManager[master]: Add also support for ecdsa ssh keys

2017-02-26 Thread Platonides (Code Review)
Platonides has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/340032 )

Change subject: Add also support for ecdsa ssh keys
..

Add also support for ecdsa ssh keys

Related task: T159070

Change-Id: I7da680ed0126fac7cef6740c947c2868efe4
---
M special/SpecialNovaKey.php
1 file changed, 1 insertion(+), 1 deletion(-)


  git pull ssh://gerrit.wikimedia.org:29418/mediawiki/extensions/OpenStackManager refs/changes/32/340032/1

diff --git a/special/SpecialNovaKey.php b/special/SpecialNovaKey.php
index 60bebb0..d115f4a 100644
--- a/special/SpecialNovaKey.php
+++ b/special/SpecialNovaKey.php
@@ -278,7 +278,7 @@
 
	$key = trim( $formData['key'] ); # Because people copy paste it with an accidental newline
	$returnto = Title::newFromText( $formData['returnto'] );
-	if ( !preg_match( '/(^| )ssh-(rsa|dss|ed25519) /', $key ) ) {
+	if ( !preg_match( '/(^| )(ssh-(rsa|dss|ed25519)|ecdsa-sha2-nistp256) /', $key ) ) {
# This doesn't look like openssh format, it's probably a
# Windows user providing it in PuTTY format.
$key = self::opensshFormatKey( $key );
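
For context (not part of the change itself): OpenSSH public key lines begin
with a type token, so the inputs this regex is matching look roughly like
the following, with the base64 key material elided:

    ssh-rsa AAAA... user@host                 # accepted before and after
    ssh-ed25519 AAAA... user@host             # accepted before and after
    ecdsa-sha2-nistp256 AAAA... user@host     # newly accepted by this change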

-- 
To view, visit https://gerrit.wikimedia.org/r/340032
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: I7da680ed0126fac7cef6740c947c2868efe4
Gerrit-PatchSet: 1
Gerrit-Project: mediawiki/extensions/OpenStackManager
Gerrit-Branch: master
Gerrit-Owner: Platonides <platoni...@gmail.com>

___
MediaWiki-commits mailing list
MediaWiki-commits@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/mediawiki-commits


[MediaWiki-commits] [Gerrit] operations/mediawiki-config[master]: Remove outdated comment

2017-02-08 Thread Platonides (Code Review)
Platonides has uploaded a new change for review. ( 
https://gerrit.wikimedia.org/r/336704 )

Change subject: Remove outdated comment
..

Remove outdated comment

The parsoid entry was changed to a codfw service in 6fd6935ac1

Change-Id: Ie88b873ac8b082f97913ec0c58cbb36b37fa1ae4
---
M wmf-config/ProductionServices.php
1 file changed, 1 insertion(+), 1 deletion(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/mediawiki-config refs/changes/04/336704/1

diff --git a/wmf-config/ProductionServices.php 
b/wmf-config/ProductionServices.php
index dcc44ce..bfedb8c 100644
--- a/wmf-config/ProductionServices.php
+++ b/wmf-config/ProductionServices.php
@@ -31,7 +31,7 @@
	'search' => [ 'search.svc.codfw.wmnet' ], # elasticsearch must be accessed by hostname for SSL certificate verification to work
	'ocg' => 'http://ocg.svc.eqiad.wmnet:8000',
	'urldownloader' => 'http://url-downloader.codfw.wikimedia.org:8080',
-	'parsoid' => 'http://parsoid.svc.codfw.wmnet:8000', # Change this once parsoid is up and running in codfw
+	'parsoid' => 'http://parsoid.svc.codfw.wmnet:8000',
	'mathoid' => 'http://mathoid.svc.codfw.wmnet:10042',
	'eventlogging' => 'udp://10.64.32.167:8421',  # eventlog1001.eqiad.wmnet,
'eventbus' => 'http://eventbus.svc.codfw.wmnet:8085',

-- 
To view, visit https://gerrit.wikimedia.org/r/336704
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: Ie88b873ac8b082f97913ec0c58cbb36b37fa1ae4
Gerrit-PatchSet: 1
Gerrit-Project: operations/mediawiki-config
Gerrit-Branch: master
Gerrit-Owner: Platonides <platoni...@gmail.com>

___
MediaWiki-commits mailing list
MediaWiki-commits@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/mediawiki-commits


  1   2   3   4   5   6   7   8   9   10   >