Xqt has uploaded a new change for review.

  https://gerrit.wikimedia.org/r/323704

Change subject: [DOC] Keep -help doc beneath 79 chars
......................................................................

[DOC] Keep -help doc beneath 79 chars

- Help doc lines must not exceed 80 chars; otherwise an additional
  line feed is printed.
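The rule behind this change can be sketched with a small checker. This is a hypothetical helper for illustration only, not part of Pywikibot: it flags docstring lines that would wrap on a common 80-column terminal.

```python
# Minimal sketch (assumption: an 80-column terminal wraps any line longer
# than 79 characters, producing the extra line feed the commit describes).
MAX_WIDTH = 79  # lines up to this length fit an 80-column terminal


def overlong_lines(docstring, limit=MAX_WIDTH):
    """Return (line_number, line) pairs whose length exceeds *limit*."""
    return [(no, line)
            for no, line in enumerate(docstring.splitlines(), start=1)
            if len(line) > limit]


# Example taken from the patched blockpageschecker.py help text:
sample_doc = """\
-always         Doesn't ask every time whether the bot should make the change.
                Do it always.
"""
print(overlong_lines(sample_doc))  # [] -> every line fits
```

Running such a check over the `-help` docstrings is what this patch does by hand: every hunk below rewraps one overlong line.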

Change-Id: Ic8255b49ec52cbb6b271b2669fd030ee44cfdfe3
---
M scripts/archivebot.py
M scripts/blockpageschecker.py
M scripts/checkimages.py
M scripts/coordinate_import.py
M scripts/illustrate_wikidata.py
M scripts/image.py
M scripts/imagecopy_self.py
M scripts/imagerecat.py
M scripts/imageuncat.py
M scripts/interwiki.py
M scripts/patrol.py
M scripts/redirect.py
M scripts/reflinks.py
M scripts/template.py
M scripts/transferbot.py
M scripts/upload.py
16 files changed, 47 insertions(+), 46 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/pywikibot/core refs/changes/04/323704/1

diff --git a/scripts/archivebot.py b/scripts/archivebot.py
index ea20c78..4e28140 100755
--- a/scripts/archivebot.py
+++ b/scripts/archivebot.py
@@ -34,9 +34,9 @@
                      Must be a subpage of the current page. Variables are
                      supported.
 algo                 specifies the maximum age of a thread. Must be in the form
-                     old(<delay>) where <delay> specifies the age in minutes(m),
-                     hours (h), days (d), weeks(w), months (M) or years (y)
-                     like 24h or 5d. Default is old(24h)
+                     old(<delay>) where <delay> specifies the age in
+                     minutes (m), hours (h), days (d), weeks(w), months (M) or
+                     years (y)  like 24h or 5d. Default is old(24h)
 counter              The current value of a counter which could be assigned as
                      variable. Will be actualized by bot. Initial value is 1.
 maxarchivesize       The maximum archive size before incrementing the counter.
diff --git a/scripts/blockpageschecker.py b/scripts/blockpageschecker.py
index e90b0c5..0e9bd35 100755
--- a/scripts/blockpageschecker.py
+++ b/scripts/blockpageschecker.py
@@ -26,8 +26,8 @@
 
 Furthermore, the following command line parameters are supported:
 
--always         Doesn't ask every time if the bot should make the change or not,
-                do it always.
+-always         Doesn't ask every time whether the bot should make the change.
+                Do it always.
 
 -show           When the bot can't delete the template from the page (wrong
                 regex or something like that) it will ask you if it should show
diff --git a/scripts/checkimages.py b/scripts/checkimages.py
index 8fc472b..028b7ca 100755
--- a/scripts/checkimages.py
+++ b/scripts/checkimages.py
@@ -63,10 +63,8 @@
 right parameter.
 
 * Name=     Set the name of the block
-* Find=     Use it to define what search in the text of the image's description,
-            while
-  Findonly= search only if the exactly text that you give is in the image's
-            description.
+* Find=     search this text in the image's description
+* Findonly= search for exactly this text in the image's description
 * Summary=  That's the summary that the bot will use when it will notify the
             problem.
 * Head=     That's the incipit that the bot will use for the message.
diff --git a/scripts/coordinate_import.py b/scripts/coordinate_import.py
index ec7a891..aeb9a4a 100755
--- a/scripts/coordinate_import.py
+++ b/scripts/coordinate_import.py
@@ -11,7 +11,8 @@
 This will work on all pages in the category "coordinates not on Wikidata" and
 will import the coordinates on these pages to Wikidata.
 
-The data from the "GeoData" extension (https://www.mediawiki.org/wiki/Extension:GeoData)
+The data from the "GeoData" extension
+(https://www.mediawiki.org/wiki/Extension:GeoData)
 is used so that extension has to be setup properly. You can look at the
 [[Special:Nearby]] page on your local Wiki to see if it's populated.
 
diff --git a/scripts/illustrate_wikidata.py b/scripts/illustrate_wikidata.py
index 7fa88bd..bd257cd 100755
--- a/scripts/illustrate_wikidata.py
+++ b/scripts/illustrate_wikidata.py
@@ -1,10 +1,11 @@
 #!/usr/bin/python
 # -*- coding: utf-8 -*-
 """
-Bot to add images to Wikidata items. The image is extracted from the page_props.
+Bot to add images to Wikidata items.
 
-For this to be available the PageImages extension
-(https://www.mediawiki.org/wiki/Extension:PageImages) needs to be installed
+The image is extracted from the page_props. For this to be available the
+PageImages extension (https://www.mediawiki.org/wiki/Extension:PageImages)
+needs to be installed
 
 Usage:
 
diff --git a/scripts/image.py b/scripts/image.py
index 9138549..c4f652d 100755
--- a/scripts/image.py
+++ b/scripts/image.py
@@ -26,8 +26,8 @@
 
 Examples:
 
-The image "FlagrantCopyvio.jpg" is about to be deleted, so let's first remove it
-from everything that displays it:
+The image "FlagrantCopyvio.jpg" is about to be deleted, so let's first remove
+it from everything that displays it:
 
     python pwb.py image FlagrantCopyvio.jpg
 
diff --git a/scripts/imagecopy_self.py b/scripts/imagecopy_self.py
index 3b4eb1d..287adf6 100644
--- a/scripts/imagecopy_self.py
+++ b/scripts/imagecopy_self.py
@@ -1,6 +1,6 @@
 # -*- coding: utf-8 -*-
 """
-Script to copy self published files from English Wikipedia to Wikimedia Commons.
+Script to copy self published files from English Wikipedia to Commons.
 
 This bot is based on imagecopy.py and intended to be used to empty out
 http://en.wikipedia.org/wiki/Category:Self-published_work
diff --git a/scripts/imagerecat.py b/scripts/imagerecat.py
index a8d0bba..cf098df 100755
--- a/scripts/imagerecat.py
+++ b/scripts/imagerecat.py
@@ -9,8 +9,8 @@
 
 The following command line parameters are supported:
 
--onlyfilter     Don't use Commonsense to get categories, just filter the current
-                categories
+-onlyfilter     Don't use Commonsense to get categories, just filter the
+                current categories
 
 -onlyuncat      Only work on uncategorized images. Will prevent the bot from
                 working on an image multiple times.
diff --git a/scripts/imageuncat.py b/scripts/imageuncat.py
index 38cdaf8..476bac3 100755
--- a/scripts/imageuncat.py
+++ b/scripts/imageuncat.py
@@ -3,16 +3,17 @@
 """
 Program to add uncat template to images without categories at commons.
 
-See imagerecat.py (still working on that one) to add these images to categories.
+See imagerecat.py to add these images to categories.
 
 This script is working on the given site, so if the commons should be handled,
 the site commons should be given and not a Wikipedia or similar.
 
--yesterday        Go through all uploads from yesterday. (Deprecated here, moved
-                  to pagegenerators)
+-yesterday        Go through all uploads from yesterday. (Deprecated here,
+                  moved to pagegenerators)
 
--recentchanges    Go through the changes made from 'offset' minutes with 'duration'
-                  minutes of timespan. It must be given two arguments as
+-recentchanges    Go through the changes made from 'offset' minutes with
+                  'duration' minutes of timespan. It must be given two
+                  arguments as
                   '-recentchanges:offset,duration'
 
                   Default value of offset is 120, and that of duration is 70
diff --git a/scripts/interwiki.py b/scripts/interwiki.py
index bcc6ab0..d3b3386 100755
--- a/scripts/interwiki.py
+++ b/scripts/interwiki.py
@@ -49,9 +49,9 @@
                    process interrupts again, it saves all unprocessed pages in
                    one new dump file of the given site.
 
-    -continue:     like restore, but after having gone through the dumped pages,
-                   continue alphabetically starting at the last of the dumped
-                   pages. The dump file will be subsequently removed.
+    -continue:     like restore, but after having gone through the dumped
+                   pages, continue alphabetically starting at the last of the
+                   dumped pages. The dump file will be subsequently removed.
 
     -warnfile:     used as -warnfile:filename, reads all warnings from the
                    given file that apply to the home wiki language,
diff --git a/scripts/patrol.py b/scripts/patrol.py
index 10f1db9..eb5b4a2 100755
--- a/scripts/patrol.py
+++ b/scripts/patrol.py
@@ -17,8 +17,8 @@
 which start with the mentioned link (e.g. [[foo]] will also patrol [[foobar]]).
 
 To avoid redlinks it's possible to use Special:PrefixIndex as a prefix so that
it will list all pages which will be patrolled. The page after the slash will be
-used then.
+it will list all pages which will be patrolled. The page after the slash will
+be used then.
 
 On Wikisource, it'll also check if the page is on the author namespace in which
 case it'll also patrol pages which are linked from that page.
diff --git a/scripts/redirect.py b/scripts/redirect.py
index abe9b34..0573014 100755
--- a/scripts/redirect.py
+++ b/scripts/redirect.py
@@ -17,9 +17,9 @@
 
 broken         Tries to fix redirect which point to nowhere by using the last
 br             moved target of the destination page. If this fails and the
-               -delete option is set, it either deletes the page or marks it for
-               deletion depending on whether the account has admin rights. It
-               will mark the redirect not for deletion if there is no speedy
+               -delete option is set, it either deletes the page or marks it
+               for deletion depending on whether the account has admin rights.
+               It will mark the redirect not for deletion if there is no speedy
                deletion template available. Shortcut action command is "br".
 
 both           Both of the above. Retrieves redirect pages from live wiki,
diff --git a/scripts/reflinks.py b/scripts/reflinks.py
index c5daf1d..dc12165 100755
--- a/scripts/reflinks.py
+++ b/scripts/reflinks.py
@@ -3,13 +3,13 @@
 """
 Fetch and add titles for bare links in references.
 
-This bot will search for references which are only made of a link without title,
+This bot will search for references which are only made of a link without title
 (i.e. <ref>[https://www.google.fr/]</ref> or <ref>https://www.google.fr/</ref>)
 and will fetch the html title from the link to use it as the title of the wiki
 link in the reference, i.e.
 <ref>[https://www.google.fr/search?q=test test - Google Search]</ref>
 
-The bot checks every 20 edits a special stop page : if the page has been edited,
+The bot checks every 20 edits a special stop page. If the page has been edited,
 it stops.
 
 DumZiBoT is running that script on en: & fr: at every new dump, running it on
diff --git a/scripts/template.py b/scripts/template.py
index c4bbc6c..32e38fb 100755
--- a/scripts/template.py
+++ b/scripts/template.py
@@ -46,8 +46,8 @@
 
 -addcat:     Appends the given category to every page that is edited. This is
              useful when a category is being broken out from a template
-             parameter or when templates are being upmerged but more information
-             must be preserved.
+             parameter or when templates are being upmerged but more
+             information must be preserved.
 
 other:       First argument is the old template name, second one is the new
              name.
diff --git a/scripts/transferbot.py b/scripts/transferbot.py
index b9a690e..e6ece36 100755
--- a/scripts/transferbot.py
+++ b/scripts/transferbot.py
@@ -22,8 +22,8 @@
 
 Example commands:
 
-Transfer all pages in category "Query service" from the English Wikipedia to the
-Arabic Wiktionary, adding "Wiktionary:Import enwp/" as prefix:
+Transfer all pages in category "Query service" from the English Wikipedia to
+the Arabic Wiktionary, adding "Wiktionary:Import enwp/" as prefix:
 
     python pwb.py transferbot -family:wikipedia -lang:en -cat:"Query service" \
         -tofamily:wiktionary -tolang:ar -prefix:"Wiktionary:Import enwp/"
diff --git a/scripts/upload.py b/scripts/upload.py
index 3f9935d..d4b3e27 100755
--- a/scripts/upload.py
+++ b/scripts/upload.py
@@ -33,17 +33,17 @@
 It is possible to combine -abortonwarn and -ignorewarn so that if the specific
 warning is given it won't apply the general one but more specific one. So if it
 should ignore specific warnings and abort on the rest it's possible by defining
-no warning for -abortonwarn and the specific warnings for -ignorewarn. The order
-does not matter. If both are unspecific or a warning is specified by both, it'll
-prefer aborting.
+no warning for -abortonwarn and the specific warnings for -ignorewarn. The
+order does not matter. If both are unspecific or a warning is specified by
+both, it'll prefer aborting.
 
-If any other arguments are given, the first is either URL, filename or directory
-to upload, and the rest is a proposed description to go with the upload. If none
-of these are given, the user is asked for the directory, file or URL to upload.
-The bot will then upload the image to the wiki.
+If any other arguments are given, the first is either URL, filename or
+directory to upload, and the rest is a proposed description to go with the
+upload. If none of these are given, the user is asked for the directory, file
+or URL to upload. The bot will then upload the image to the wiki.
 
-The script will ask for the location of an image(s), if not given as a parameter,
-and for a description.
+The script will ask for the location of an image(s), if not given as a
+parameter, and for a description.
 """
 #
 # (C) Rob W.W. Hooft, Andre Engels 2003-2004

-- 
To view, visit https://gerrit.wikimedia.org/r/323704
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: Ic8255b49ec52cbb6b271b2669fd030ee44cfdfe3
Gerrit-PatchSet: 1
Gerrit-Project: pywikibot/core
Gerrit-Branch: master
Gerrit-Owner: Xqt <i...@gno.de>

_______________________________________________
MediaWiki-commits mailing list
MediaWiki-commits@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/mediawiki-commits
