Hello community,

here is the log from the commit of package python-Unidecode for openSUSE:Factory checked in at 2019-06-17 10:30:24
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-Unidecode (Old)
 and      /work/SRC/openSUSE:Factory/.python-Unidecode.new.4811 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "python-Unidecode"

Mon Jun 17 10:30:24 2019 rev:9 rq:710078 version:1.1.0

Changes:
--------
--- /work/SRC/openSUSE:Factory/python-Unidecode/python-Unidecode.changes        2018-12-07 14:36:21.427000009 +0100
+++ /work/SRC/openSUSE:Factory/.python-Unidecode.new.4811/python-Unidecode.changes      2019-06-17 10:30:29.537356057 +0200
@@ -1,0 +2,10 @@
+Fri Jun 14 16:48:52 UTC 2019 - Benoît Monin <benoit.mo...@gmx.fr>
+
+- update to Unidecode 1.1.0:
+  * Add more Latin letter variants in U+1F1xx page
+  * Make it possible to use the Unidecode command-line utility via
+    "python -m unidecode" (thanks to Jon Dufresne)
+  * General clean up of code and documentation
+    (thanks to Jon Dufresne)
+
+-------------------------------------------------------------------
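The new ``python -m unidecode`` entry point mentioned in the changelog can be exercised as below (a sketch assuming Unidecode 1.1.0 is installed in the active environment; both invocations run the same command-line utility):

```shell
# New in 1.1.0: run the utility as a module.
echo "hello" | python -m unidecode

# Equivalent invocation via the console script that already existed:
echo "hello" | unidecode
```

The `-e` argument overrides the locale-derived input encoding; see `unidecode --help` for the full option list.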

Old:
----
  Unidecode-1.0.23.tar.gz

New:
----
  Unidecode-1.1.0.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-Unidecode.spec ++++++
--- /var/tmp/diff_new_pack.XbFcdS/_old  2019-06-17 10:30:32.977354158 +0200
+++ /var/tmp/diff_new_pack.XbFcdS/_new  2019-06-17 10:30:32.997354147 +0200
@@ -1,7 +1,7 @@
 #
 # spec file for package python-Unidecode
 #
-# Copyright (c) 2018 SUSE LINUX GmbH, Nuernberg, Germany.
+# Copyright (c) 2019 SUSE LINUX GmbH, Nuernberg, Germany.
 #
 # All modifications and additions to the file contributed by third parties
 # remain the property of their copyright owners, unless otherwise agreed
@@ -18,7 +18,7 @@
 
 %{?!python_module:%define python_module() python-%{**} python3-%{**}}
 Name:           python-Unidecode
-Version:        1.0.23
+Version:        1.1.0
 Release:        0
 Summary:        ASCII transliterations of Unicode text
 License:        GPL-2.0-or-later

++++++ Unidecode-1.0.23.tar.gz -> Unidecode-1.1.0.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Unidecode-1.0.23/ChangeLog new/Unidecode-1.1.0/ChangeLog
--- old/Unidecode-1.0.23/ChangeLog      2018-11-19 08:09:39.000000000 +0100
+++ new/Unidecode-1.1.0/ChangeLog       2019-06-14 16:26:44.000000000 +0200
@@ -1,3 +1,9 @@
+2019-06-14     unidecode 1.1.0
+       * Add more Latin letter variants in U+1F1xx page.
+       * Make it possible to use the Unidecode command-line utility via
+         "python -m unidecode" (thanks to Jon Dufresne)
+       * General clean up of code and documentation (thanks to Jon Dufresne)
+
 2018-11-19     unidecode 1.0.23
        * Improve transliteration of Hebrew letters (thanks to Alon Bar-Lev)
        * Add transliterations for the phonetic block U+1D00 - U+1D7F
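For context on the first changelog item above: the U+1F1xx page is Unicode's "Enclosed Alphanumeric Supplement", which holds squared, negative-squared, and regional-indicator variants of Latin letters. A stdlib-only peek at a few of the code points involved (the ASCII mappings themselves live in Unidecode's tables and are not reproduced here):

```python
import unicodedata

# Three Latin letter variants from the U+1F1xx page; Unidecode 1.1.0
# extends its ASCII mappings for characters like these.
for cp in (0x1F130, 0x1F170, 0x1F1E6):
    print('U+%04X  %s' % (cp, unicodedata.name(chr(cp))))
```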
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Unidecode-1.0.23/PKG-INFO new/Unidecode-1.1.0/PKG-INFO
--- old/Unidecode-1.0.23/PKG-INFO       2018-11-19 08:38:18.000000000 +0100
+++ new/Unidecode-1.1.0/PKG-INFO        2019-06-14 16:35:37.000000000 +0200
@@ -1,6 +1,6 @@
-Metadata-Version: 1.1
+Metadata-Version: 1.2
 Name: Unidecode
-Version: 1.0.23
+Version: 1.1.0
 Summary: ASCII transliterations of Unicode text
 Home-page: UNKNOWN
 Author: Tomaz Solc
@@ -15,13 +15,13 @@
         keyboard, or when constructing ASCII machine identifiers from
         human-readable Unicode strings that should still be somewhat intelligible
         (a popular example of this is when making an URL slug from an article
-        title). 
+        title).
         
-        In most of these examples you could represent Unicode characters as
-        `???` or `\\15BA\\15A0\\1610`, to mention two extreme cases. But that's
-        nearly useless to someone who actually wants to read what the text says.
+        In most of these examples you could represent Unicode characters as ``???`` or
+        ``\\15BA\\15A0\\1610``, to mention two extreme cases. But that's nearly useless
+        to someone who actually wants to read what the text says.
         
-        What Unidecode provides is a middle road: function `unidecode()` takes
+        What Unidecode provides is a middle road: the function ``unidecode()`` takes
         Unicode data and tries to represent it in ASCII characters (i.e., the
         universally displayable characters between 0x00 and 0x7F), where the
         compromises taken when mapping between two character sets are chosen to be
@@ -43,8 +43,8 @@
         example also contain ASCII approximations for symbols and non-Latin
         alphabets.
         
-        This is a Python port of `Text::Unidecode` Perl module by
-        Sean M. Burke <sbu...@cpan.org>.
+        This is a Python port of ``Text::Unidecode`` Perl module by Sean M. Burke
+        <sbu...@cpan.org>.
         
         
         Module content
@@ -79,19 +79,19 @@
             hello
         
         The default encoding used by the utility depends on your system locale. You can
-        specify another encoding with the `-e` argument. See `unidecode --help` for a
-        full list of available options.
+        specify another encoding with the ``-e`` argument. See ``unidecode --help`` for
+        a full list of available options.
         
         Requirements
         ------------
         
-        Nothing except Python itself.
-            
+        Nothing except Python itself. Unidecode supports Python 2.7 and 3.4 or later.
+        
         You need a Python build with "wide" Unicode characters (also called "UCS-4
-        build") in order for unidecode to work correctly with characters outside of
+        build") in order for Unidecode to work correctly with characters outside of
         Basic Multilingual Plane (BMP). Common characters outside BMP are bold, italic,
         script, etc. variants of the Latin alphabet intended for mathematical notation.
-        Surrogate pair encoding of "narrow" builds is not supported in unidecode.
+        Surrogate pair encoding of "narrow" builds is not supported in Unidecode.
         
         If your Python build supports "wide" Unicode the following expression will
         return True::
@@ -100,8 +100,8 @@
             >>> sys.maxunicode > 0xffff
             True
         
-        See PEP 261 for details regarding support for "wide" Unicode characters in
-        Python.
+        See `PEP 261 <https://www.python.org/dev/peps/pep-0261/>`_ for details
+        regarding support for "wide" Unicode characters in Python.
         
         
         Installation
@@ -128,7 +128,7 @@
             Turkish). German text transliterated without the extra "e" is much more
             readable than other languages transliterated using German rules. A
             workaround is to do your own replacements of these characters before
-            passing the string to `unidecode`.
+            passing the string to ``unidecode()``.
         
         Unidecode should support localization (e.g. a language or country parameter, inspecting system locale, etc.)
             Language-specific transliteration is a complicated problem and beyond the
@@ -143,14 +143,23 @@
             you, please consider using other libraries, such as `text-unidecode
             <https://github.com/kmike/text-unidecode>`_.
         
+        Unidecode produces completely wrong results (e.g. "u" with diaeresis transliterating as "A 1/4 ")
+            The strings you are passing to Unidecode have been wrongly decoded
+            somewhere in your program. For example, you might be decoding utf-8 encoded
+            strings as latin1. With a misconfigured terminal, locale and/or a text
+            editor this might not be immediately apparent. Inspect your strings with
+            ``repr()`` and consult the
+            `Unicode HOWTO <https://docs.python.org/3/howto/unicode.html>`_.
+        
         I've upgraded Unidecode and now some URLs on my website return 404 Not Found.
             This is an issue with the software that is running your website, not
             Unidecode. Occasionally, new versions of Unidecode library are released
             which contain improvements to the transliteration tables. This means that
-            you cannot rely that `unidecode` output will not change across different
-            versions of Unidecode library. If you use `unidecode` to generate URLs for
-            your website, either generate the URL slug once and store it in the
-            database or lock your dependency to `unidecode` to one specific version.
+            you cannot rely that ``unidecode()`` output will not change across
+            different versions of Unidecode library. If you use ``unidecode()`` to
+            generate URLs for your website, either generate the URL slug once and store
+            it in the database or lock your dependency of Unidecode to one specific
+            version.
         
         Some of the issues in this section are discussed in more detail in `this blog
         post <https://www.tablix.org/~avian/blog/archives/2013/09/python_unidecode_release_0_04_14/>`_.
@@ -159,19 +168,19 @@
         Performance notes
         -----------------
         
-        By default, `unidecode` optimizes for the use case where most of the strings
+        By default, ``unidecode()`` optimizes for the use case where most of the strings
         passed to it are already ASCII-only and no transliteration is necessary (this
         default might change in future versions).
         
         For performance critical applications, two additional functions are exposed:
         
-        `unidecode_expect_ascii` is optimized for ASCII-only inputs (approximately 5
-        times faster than `unidecode_expect_nonascii` on 10 character strings, more on
-        longer strings), but slightly slower for non-ASCII inputs.
+        ``unidecode_expect_ascii()`` is optimized for ASCII-only inputs (approximately
+        5 times faster than ``unidecode_expect_nonascii()`` on 10 character strings,
+        more on longer strings), but slightly slower for non-ASCII inputs.
         
-        `unidecode_expect_nonascii` takes approximately the same amount of time on
+        ``unidecode_expect_nonascii()`` takes approximately the same amount of time on
         ASCII and non-ASCII inputs, but is slightly faster for non-ASCII inputs than
-        `unidecode_expect_ascii`.
+        ``unidecode_expect_ascii()``.
         
         Apart from differences in run time, both functions produce identical results.
         For most users of Unidecode, the difference in performance should be
@@ -211,7 +220,7 @@
         
         Python code and later additions:
         
-        Copyright 2018, Tomaz Solc <tomaz.s...@tablix.org>
+        Copyright 2019, Tomaz Solc <tomaz.s...@tablix.org>
         
         This program is free software; you can redistribute it and/or modify it
         under the terms of the GNU General Public License as published by the Free
@@ -242,7 +251,9 @@
 Classifier: Programming Language :: Python :: 3.4
 Classifier: Programming Language :: Python :: 3.5
 Classifier: Programming Language :: Python :: 3.6
+Classifier: Programming Language :: Python :: 3.7
 Classifier: Programming Language :: Python :: Implementation :: CPython
 Classifier: Programming Language :: Python :: Implementation :: PyPy
 Classifier: Topic :: Text Processing
 Classifier: Topic :: Text Processing :: Filters
+Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*
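The new ``Requires-Python`` line encodes the supported-version statement from the README (Python 2.7, and 3.4 or later). Read as a predicate, the specifier collapses to the following hand-rolled sketch (real tools evaluate it with PEP 440 specifier machinery instead):

```python
import sys

def version_supported(vi=None):
    # ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" means:
    # any Python 2.7+, or any Python 3.4+.
    major, minor = (vi or sys.version_info)[:2]
    if major == 2:
        return minor >= 7
    return (major, minor) >= (3, 4)

print(version_supported((2, 7)), version_supported((3, 3)), version_supported((3, 7)))
# -> True False True
```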
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Unidecode-1.0.23/README.rst new/Unidecode-1.1.0/README.rst
--- old/Unidecode-1.0.23/README.rst     2018-11-18 19:01:15.000000000 +0100
+++ new/Unidecode-1.1.0/README.rst      2019-05-31 12:56:46.000000000 +0200
@@ -7,13 +7,13 @@
 keyboard, or when constructing ASCII machine identifiers from
 human-readable Unicode strings that should still be somewhat intelligible
 (a popular example of this is when making an URL slug from an article
-title). 
+title).
 
-In most of these examples you could represent Unicode characters as
-`???` or `\\15BA\\15A0\\1610`, to mention two extreme cases. But that's
-nearly useless to someone who actually wants to read what the text says.
+In most of these examples you could represent Unicode characters as ``???`` or
+``\\15BA\\15A0\\1610``, to mention two extreme cases. But that's nearly useless
+to someone who actually wants to read what the text says.
 
-What Unidecode provides is a middle road: function `unidecode()` takes
+What Unidecode provides is a middle road: the function ``unidecode()`` takes
 Unicode data and tries to represent it in ASCII characters (i.e., the
 universally displayable characters between 0x00 and 0x7F), where the
 compromises taken when mapping between two character sets are chosen to be
@@ -35,8 +35,8 @@
 example also contain ASCII approximations for symbols and non-Latin
 alphabets.
 
-This is a Python port of `Text::Unidecode` Perl module by
-Sean M. Burke <sbu...@cpan.org>.
+This is a Python port of ``Text::Unidecode`` Perl module by Sean M. Burke
+<sbu...@cpan.org>.
 
 
 Module content
@@ -71,19 +71,19 @@
     hello
 
 The default encoding used by the utility depends on your system locale. You can
-specify another encoding with the `-e` argument. See `unidecode --help` for a
-full list of available options.
+specify another encoding with the ``-e`` argument. See ``unidecode --help`` for
+a full list of available options.
 
 Requirements
 ------------
 
-Nothing except Python itself.
-    
+Nothing except Python itself. Unidecode supports Python 2.7 and 3.4 or later.
+
 You need a Python build with "wide" Unicode characters (also called "UCS-4
-build") in order for unidecode to work correctly with characters outside of
+build") in order for Unidecode to work correctly with characters outside of
 Basic Multilingual Plane (BMP). Common characters outside BMP are bold, italic,
 script, etc. variants of the Latin alphabet intended for mathematical notation.
-Surrogate pair encoding of "narrow" builds is not supported in unidecode.
+Surrogate pair encoding of "narrow" builds is not supported in Unidecode.
 
 If your Python build supports "wide" Unicode the following expression will
 return True::
@@ -92,8 +92,8 @@
     >>> sys.maxunicode > 0xffff
     True
 
-See PEP 261 for details regarding support for "wide" Unicode characters in
-Python.
+See `PEP 261 <https://www.python.org/dev/peps/pep-0261/>`_ for details
+regarding support for "wide" Unicode characters in Python.
 
 
 Installation
@@ -120,7 +120,7 @@
     Turkish). German text transliterated without the extra "e" is much more
     readable than other languages transliterated using German rules. A
     workaround is to do your own replacements of these characters before
-    passing the string to `unidecode`.
+    passing the string to ``unidecode()``.
 
 Unidecode should support localization (e.g. a language or country parameter, inspecting system locale, etc.)
     Language-specific transliteration is a complicated problem and beyond the
@@ -135,14 +135,23 @@
     you, please consider using other libraries, such as `text-unidecode
     <https://github.com/kmike/text-unidecode>`_.
 
+Unidecode produces completely wrong results (e.g. "u" with diaeresis transliterating as "A 1/4 ")
+    The strings you are passing to Unidecode have been wrongly decoded
+    somewhere in your program. For example, you might be decoding utf-8 encoded
+    strings as latin1. With a misconfigured terminal, locale and/or a text
+    editor this might not be immediately apparent. Inspect your strings with
+    ``repr()`` and consult the
+    `Unicode HOWTO <https://docs.python.org/3/howto/unicode.html>`_.
+
 I've upgraded Unidecode and now some URLs on my website return 404 Not Found.
     This is an issue with the software that is running your website, not
     Unidecode. Occasionally, new versions of Unidecode library are released
     which contain improvements to the transliteration tables. This means that
-    you cannot rely that `unidecode` output will not change across different
-    versions of Unidecode library. If you use `unidecode` to generate URLs for
-    your website, either generate the URL slug once and store it in the
-    database or lock your dependency to `unidecode` to one specific version.
+    you cannot rely that ``unidecode()`` output will not change across
+    different versions of Unidecode library. If you use ``unidecode()`` to
+    generate URLs for your website, either generate the URL slug once and store
+    it in the database or lock your dependency of Unidecode to one specific
+    version.
 
 Some of the issues in this section are discussed in more detail in `this blog
 post <https://www.tablix.org/~avian/blog/archives/2013/09/python_unidecode_release_0_04_14/>`_.
@@ -151,19 +160,19 @@
 Performance notes
 -----------------
 
-By default, `unidecode` optimizes for the use case where most of the strings
+By default, ``unidecode()`` optimizes for the use case where most of the strings
 passed to it are already ASCII-only and no transliteration is necessary (this
 default might change in future versions).
 
 For performance critical applications, two additional functions are exposed:
 
-`unidecode_expect_ascii` is optimized for ASCII-only inputs (approximately 5
-times faster than `unidecode_expect_nonascii` on 10 character strings, more on
-longer strings), but slightly slower for non-ASCII inputs.
+``unidecode_expect_ascii()`` is optimized for ASCII-only inputs (approximately
+5 times faster than ``unidecode_expect_nonascii()`` on 10 character strings,
+more on longer strings), but slightly slower for non-ASCII inputs.
 
-`unidecode_expect_nonascii` takes approximately the same amount of time on
+``unidecode_expect_nonascii()`` takes approximately the same amount of time on
 ASCII and non-ASCII inputs, but is slightly faster for non-ASCII inputs than
-`unidecode_expect_ascii`.
+``unidecode_expect_ascii()``.
 
 Apart from differences in run time, both functions produce identical results.
 For most users of Unidecode, the difference in performance should be
@@ -203,7 +212,7 @@
 
 Python code and later additions:
 
-Copyright 2018, Tomaz Solc <tomaz.s...@tablix.org>
+Copyright 2019, Tomaz Solc <tomaz.s...@tablix.org>
 
 This program is free software; you can redistribute it and/or modify it
 under the terms of the GNU General Public License as published by the Free
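The README's core claim — ``unidecode()`` maps Unicode text to an ASCII approximation — can be illustrated for the simple diacritics case with the standard library alone. This NFKD trick is *not* what Unidecode does internally (Unidecode uses hand-tuned per-block tables and also covers non-Latin scripts, e.g. transliterating Chinese to "Bei Jing "); it is only a minimal sketch of the idea:

```python
import unicodedata

def ascii_approx(text):
    # Decompose accented letters into base letter + combining mark,
    # then drop everything outside ASCII. Handles only Latin
    # diacritics, unlike Unidecode's full transliteration tables.
    decomposed = unicodedata.normalize('NFKD', text)
    return decomposed.encode('ascii', 'ignore').decode('ascii')

print(ascii_approx('ch\u00e2teau'))   # -> chateau
print(ascii_approx('vi\u00f1edos'))   # -> vinedos
```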
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Unidecode-1.0.23/Unidecode.egg-info/PKG-INFO new/Unidecode-1.1.0/Unidecode.egg-info/PKG-INFO
--- old/Unidecode-1.0.23/Unidecode.egg-info/PKG-INFO    2018-11-19 08:38:18.000000000 +0100
+++ new/Unidecode-1.1.0/Unidecode.egg-info/PKG-INFO     2019-06-14 16:35:36.000000000 +0200
@@ -1,6 +1,6 @@
-Metadata-Version: 1.1
+Metadata-Version: 1.2
 Name: Unidecode
-Version: 1.0.23
+Version: 1.1.0
 Summary: ASCII transliterations of Unicode text
 Home-page: UNKNOWN
 Author: Tomaz Solc
@@ -15,13 +15,13 @@
         keyboard, or when constructing ASCII machine identifiers from
         human-readable Unicode strings that should still be somewhat intelligible
         (a popular example of this is when making an URL slug from an article
-        title). 
+        title).
         
-        In most of these examples you could represent Unicode characters as
-        `???` or `\\15BA\\15A0\\1610`, to mention two extreme cases. But that's
-        nearly useless to someone who actually wants to read what the text says.
+        In most of these examples you could represent Unicode characters as ``???`` or
+        ``\\15BA\\15A0\\1610``, to mention two extreme cases. But that's nearly useless
+        to someone who actually wants to read what the text says.
         
-        What Unidecode provides is a middle road: function `unidecode()` takes
+        What Unidecode provides is a middle road: the function ``unidecode()`` takes
         Unicode data and tries to represent it in ASCII characters (i.e., the
         universally displayable characters between 0x00 and 0x7F), where the
         compromises taken when mapping between two character sets are chosen to be
@@ -43,8 +43,8 @@
         example also contain ASCII approximations for symbols and non-Latin
         alphabets.
         
-        This is a Python port of `Text::Unidecode` Perl module by
-        Sean M. Burke <sbu...@cpan.org>.
+        This is a Python port of ``Text::Unidecode`` Perl module by Sean M. Burke
+        <sbu...@cpan.org>.
         
         
         Module content
@@ -79,19 +79,19 @@
             hello
         
         The default encoding used by the utility depends on your system locale. You can
-        specify another encoding with the `-e` argument. See `unidecode --help` for a
-        full list of available options.
+        specify another encoding with the ``-e`` argument. See ``unidecode --help`` for
+        a full list of available options.
         
         Requirements
         ------------
         
-        Nothing except Python itself.
-            
+        Nothing except Python itself. Unidecode supports Python 2.7 and 3.4 or later.
+        
         You need a Python build with "wide" Unicode characters (also called "UCS-4
-        build") in order for unidecode to work correctly with characters outside of
+        build") in order for Unidecode to work correctly with characters outside of
         Basic Multilingual Plane (BMP). Common characters outside BMP are bold, italic,
         script, etc. variants of the Latin alphabet intended for mathematical notation.
-        Surrogate pair encoding of "narrow" builds is not supported in unidecode.
+        Surrogate pair encoding of "narrow" builds is not supported in Unidecode.
         
         If your Python build supports "wide" Unicode the following expression will
         return True::
@@ -100,8 +100,8 @@
             >>> sys.maxunicode > 0xffff
             True
         
-        See PEP 261 for details regarding support for "wide" Unicode characters in
-        Python.
+        See `PEP 261 <https://www.python.org/dev/peps/pep-0261/>`_ for details
+        regarding support for "wide" Unicode characters in Python.
         
         
         Installation
@@ -128,7 +128,7 @@
             Turkish). German text transliterated without the extra "e" is much more
             readable than other languages transliterated using German rules. A
             workaround is to do your own replacements of these characters before
-            passing the string to `unidecode`.
+            passing the string to ``unidecode()``.
         
         Unidecode should support localization (e.g. a language or country parameter, inspecting system locale, etc.)
             Language-specific transliteration is a complicated problem and beyond the
@@ -143,14 +143,23 @@
             you, please consider using other libraries, such as `text-unidecode
             <https://github.com/kmike/text-unidecode>`_.
         
+        Unidecode produces completely wrong results (e.g. "u" with diaeresis transliterating as "A 1/4 ")
+            The strings you are passing to Unidecode have been wrongly decoded
+            somewhere in your program. For example, you might be decoding utf-8 encoded
+            strings as latin1. With a misconfigured terminal, locale and/or a text
+            editor this might not be immediately apparent. Inspect your strings with
+            ``repr()`` and consult the
+            `Unicode HOWTO <https://docs.python.org/3/howto/unicode.html>`_.
+        
         I've upgraded Unidecode and now some URLs on my website return 404 Not Found.
             This is an issue with the software that is running your website, not
             Unidecode. Occasionally, new versions of Unidecode library are released
             which contain improvements to the transliteration tables. This means that
-            you cannot rely that `unidecode` output will not change across different
-            versions of Unidecode library. If you use `unidecode` to generate URLs for
-            your website, either generate the URL slug once and store it in the
-            database or lock your dependency to `unidecode` to one specific version.
+            you cannot rely that ``unidecode()`` output will not change across
+            different versions of Unidecode library. If you use ``unidecode()`` to
+            generate URLs for your website, either generate the URL slug once and store
+            it in the database or lock your dependency of Unidecode to one specific
+            version.
         
         Some of the issues in this section are discussed in more detail in `this blog
         post <https://www.tablix.org/~avian/blog/archives/2013/09/python_unidecode_release_0_04_14/>`_.
@@ -159,19 +168,19 @@
         Performance notes
         -----------------
         
-        By default, `unidecode` optimizes for the use case where most of the strings
+        By default, ``unidecode()`` optimizes for the use case where most of the strings
         passed to it are already ASCII-only and no transliteration is necessary (this
         default might change in future versions).
         
         For performance critical applications, two additional functions are exposed:
         
-        `unidecode_expect_ascii` is optimized for ASCII-only inputs (approximately 5
-        times faster than `unidecode_expect_nonascii` on 10 character strings, more on
-        longer strings), but slightly slower for non-ASCII inputs.
+        ``unidecode_expect_ascii()`` is optimized for ASCII-only inputs (approximately
+        5 times faster than ``unidecode_expect_nonascii()`` on 10 character strings,
+        more on longer strings), but slightly slower for non-ASCII inputs.
         
-        `unidecode_expect_nonascii` takes approximately the same amount of time on
+        ``unidecode_expect_nonascii()`` takes approximately the same amount of time on
         ASCII and non-ASCII inputs, but is slightly faster for non-ASCII inputs than
-        `unidecode_expect_ascii`.
+        ``unidecode_expect_ascii()``.
         
         Apart from differences in run time, both functions produce identical results.
         For most users of Unidecode, the difference in performance should be
@@ -211,7 +220,7 @@
         
         Python code and later additions:
         
-        Copyright 2018, Tomaz Solc <tomaz.s...@tablix.org>
+        Copyright 2019, Tomaz Solc <tomaz.s...@tablix.org>
         
         This program is free software; you can redistribute it and/or modify it
         under the terms of the GNU General Public License as published by the Free
@@ -242,7 +251,9 @@
 Classifier: Programming Language :: Python :: 3.4
 Classifier: Programming Language :: Python :: 3.5
 Classifier: Programming Language :: Python :: 3.6
+Classifier: Programming Language :: Python :: 3.7
 Classifier: Programming Language :: Python :: Implementation :: CPython
 Classifier: Programming Language :: Python :: Implementation :: PyPy
 Classifier: Topic :: Text Processing
 Classifier: Topic :: Text Processing :: Filters
+Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*
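The FAQ entry in the diff above ("u" with diaeresis coming out as "A 1/4 ") describes classic mojibake: UTF-8 bytes decoded as Latin-1 before they ever reach Unidecode. A stdlib-only demonstration of the underlying mis-decoding:

```python
# 'ü' encodes to two bytes in UTF-8; decoding those bytes as Latin-1
# yields two unrelated characters ('Ã' and '¼'), and transliterating
# *those* is what produces output like "A 1/4 ".
data = '\u00fc'.encode('utf-8')        # b'\xc3\xbc'
wrong = data.decode('latin-1')         # 'Ã¼' — two characters, not one
right = data.decode('utf-8')           # 'ü'
print(repr(wrong), repr(right))        # repr() exposes the difference
```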
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Unidecode-1.0.23/Unidecode.egg-info/SOURCES.txt new/Unidecode-1.1.0/Unidecode.egg-info/SOURCES.txt
--- old/Unidecode-1.0.23/Unidecode.egg-info/SOURCES.txt 2018-11-19 08:38:18.000000000 +0100
+++ new/Unidecode-1.1.0/Unidecode.egg-info/SOURCES.txt  2019-06-14 16:35:36.000000000 +0200
@@ -15,6 +15,7 @@
 tests/test_unidecode.py
 tests/test_utility.py
 unidecode/__init__.py
+unidecode/__main__.py
 unidecode/util.py
 unidecode/x000.py
 unidecode/x001.py
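The `unidecode/__main__.py` file added above is what makes ``python -m unidecode`` work: the `-m` switch executes a package's `__main__` submodule. A stand-in package (hypothetical name `demopkg`, not part of Unidecode) shows the mechanism with the stdlib `runpy` module:

```python
import os
import runpy
import sys
import tempfile

# Build a throwaway package with an __init__.py and a __main__.py,
# mirroring the layout change in the SOURCES.txt diff above.
tmp = tempfile.mkdtemp()
pkg = os.path.join(tmp, 'demopkg')
os.mkdir(pkg)
with open(os.path.join(pkg, '__init__.py'), 'w') as fp:
    fp.write('')
with open(os.path.join(pkg, '__main__.py'), 'w') as fp:
    fp.write("result = 'executed demopkg.__main__'\n")

sys.path.insert(0, tmp)
# run_module on a package name executes its __main__ submodule,
# which is exactly what `python -m demopkg` would do.
ns = runpy.run_module('demopkg', run_name='__main__')
print(ns['result'])
```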
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Unidecode-1.0.23/setup.py new/Unidecode-1.1.0/setup.py
--- old/Unidecode-1.0.23/setup.py       2018-11-18 19:01:15.000000000 +0100
+++ new/Unidecode-1.1.0/setup.py        2019-06-14 16:28:38.000000000 +0200
@@ -6,11 +6,12 @@
 
 
 def get_long_description():
-    return open(os.path.join(os.path.dirname(__file__), "README.rst")).read()
+    with open(os.path.join(os.path.dirname(__file__), "README.rst")) as fp:
+        return fp.read()
 
 setup(
     name='Unidecode',
-    version='1.0.23',
+    version='1.1.0',
     description='ASCII transliterations of Unicode text',
     license='GPL',
     long_description=get_long_description(),
@@ -18,6 +19,7 @@
     author_email='tomaz.s...@tablix.org',
 
     packages=['unidecode'],
+    python_requires=">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*",
 
     test_suite='tests',
 
@@ -35,6 +37,7 @@
         "Programming Language :: Python :: 3.4",
         "Programming Language :: Python :: 3.5",
         "Programming Language :: Python :: 3.6",
+        "Programming Language :: Python :: 3.7",
         "Programming Language :: Python :: Implementation :: CPython",
         "Programming Language :: Python :: Implementation :: PyPy",
         "Topic :: Text Processing",
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Unidecode-1.0.23/tests/test_unidecode.py new/Unidecode-1.1.0/tests/test_unidecode.py
--- old/Unidecode-1.0.23/tests/test_unidecode.py        2018-07-19 10:48:51.000000000 +0200
+++ new/Unidecode-1.1.0/tests/test_unidecode.py 2019-01-19 11:34:57.000000000 +0100
@@ -31,17 +31,8 @@
 
 if sys.version_info[0] >= 3:
     _chr = chr
-    def _u(x):
-        return x
 else:
     _chr = unichr
-    def _u(x):
-        try:
-            x.decode('ascii')
-        except UnicodeDecodeError:
-            return x.decode('utf8')
-        else:
-            return x.decode('unicode-escape')
 
 class BaseTestUnidecode():
     @unittest.skipIf(sys.version_info[0] >= 3, "not python 2")
@@ -101,17 +92,26 @@
         wlog.stop()
         self.assertEqual(0xe000-0xd800, len(wlog.log))
 
+    def test_space(self):
+        for n in range(0x80, 0x10000):
+            t = _chr(n)
+            if t.isspace():
+                s = self.unidecode(t)
+                self.assertTrue((s == '') or s.isspace(),
+                        'unidecode(%r) should return an empty string or ASCII space, '
+                        'since %r.isspace() is true. Instead it returns %r' % (t, t, s))
+
     @unittest.skipIf(sys.maxunicode < 0x10000, "narrow build")
     def test_surrogate_pairs(self):
         # same character, written as a non-BMP character and a
         # surrogate pair
-        s = _u('\U0001d4e3')
+        s = u'\U0001d4e3'
 
         # Note: this needs to be constructed at run-time, otherwise
         # a "wide" Python seems to optimize it automatically into a
         # single character.
-        s_sp_1 = _u('\ud835')
-        s_sp_2 = _u('\udce3')
+        s_sp_1 = u'\ud835'
+        s_sp_2 = u'\udce3'
         s_sp = s_sp_1 + s_sp_2
 
         if sys.version_info < (3,4):
@@ -172,49 +172,49 @@
     def test_specific(self):
 
         TESTS = [
-                (_u('Hello, World!'),
+                (u'Hello, World!',
                 "Hello, World!"),
 
-                (_u('\'"\r\n'),
+                (u'\'"\r\n',
                  "'\"\r\n"),
 
-                (_u('ČŽŠčžš'),
+                (u'ČŽŠčžš',
                  "CZSczs"),
 
-                (_u('ア'),
+                (u'ア',
                  "a"),
 
-                (_u('α'),
+                (u'α',
                 "a"),
 
-                (_u('а'),
+                (u'а',
                 "a"),
 
-                (_u('ch\u00e2teau'),
+                (u'ch\u00e2teau',
                 "chateau"),
 
-                (_u('vi\u00f1edos'),
+                (u'vi\u00f1edos',
                 "vinedos"),
 
-                (_u('\u5317\u4EB0'),
+                (u'\u5317\u4EB0',
                 "Bei Jing "),
 
-                (_u('Efficient'),
+                (u'Efficient',
                 "Efficient"),
 
                # https://github.com/iki/unidecode/commit/4a1d4e0a7b5a11796dc701099556876e7a520065
-                (_u('příliš žluťoučký kůň pěl ďábelské ódy'),
+                (u'příliš žluťoučký kůň pěl ďábelské ódy',
                 'prilis zlutoucky kun pel dabelske ody'),
 
-                (_u('PŘÍLIŠ ŽLUŤOUČKÝ KŮŇ PĚL ĎÁBELSKÉ ÓDY'),
+                (u'PŘÍLIŠ ŽLUŤOUČKÝ KŮŇ PĚL ĎÁBELSKÉ ÓDY',
                 'PRILIS ZLUTOUCKY KUN PEL DABELSKE ODY'),
 
                 # Table that doesn't exist
-                (_u('\ua500'),
+                (u'\ua500',
                 ''),
 
                 # Table that has less than 256 entries
-                (_u('\u1eff'),
+                (u'\u1eff',
                 ''),
             ]
 
@@ -228,14 +228,14 @@
 
         TESTS = [
                 # Non-BMP character
-                (_u('\U0001d5a0'),
+                (u'\U0001d5a0',
                 'A'),
 
                 # Mathematical
-                (_u('\U0001d5c4\U0001d5c6/\U0001d5c1'),
+                (u'\U0001d5c4\U0001d5c6/\U0001d5c1',
                 'km/h'),
 
-                (_u('\u2124\U0001d552\U0001d55c\U0001d552\U0001d55b \U0001d526\U0001d52a\U0001d51e \U0001d4e4\U0001d4f7\U0001d4f2\U0001d4ec\U0001d4f8\U0001d4ed\U0001d4ee \U0001d4c8\U0001d4c5\u212f\U0001d4b8\U0001d4be\U0001d4bb\U0001d4be\U0001d4c0\U0001d4b6\U0001d4b8\U0001d4be\U0001d4bf\u212f \U0001d59f\U0001d586 \U0001d631\U0001d62a\U0001d634\U0001d622\U0001d637\U0001d626?!'),
+                (u'\u2124\U0001d552\U0001d55c\U0001d552\U0001d55b \U0001d526\U0001d52a\U0001d51e \U0001d4e4\U0001d4f7\U0001d4f2\U0001d4ec\U0001d4f8\U0001d4ed\U0001d4ee \U0001d4c8\U0001d4c5\u212f\U0001d4b8\U0001d4be\U0001d4bb\U0001d4be\U0001d4c0\U0001d4b6\U0001d4b8\U0001d4be\U0001d4bf\u212f \U0001d59f\U0001d586 \U0001d631\U0001d62a\U0001d634\U0001d622\U0001d637\U0001d626?!',
                 'Zakaj ima Unicode specifikacije za pisave?!'),
             ]
 
@@ -458,17 +458,17 @@
         # Examples from http://www.panix.com/~eli/unicode/convert.cgi
         lower = [
             # Fullwidth
-            _u('\uff54\uff48\uff45 \uff51\uff55\uff49\uff43\uff4b \uff42\uff52\uff4f\uff57\uff4e \uff46\uff4f\uff58 \uff4a\uff55\uff4d\uff50\uff53 \uff4f\uff56\uff45\uff52 \uff54\uff48\uff45 \uff4c\uff41\uff5a\uff59 \uff44\uff4f\uff47 \uff11\uff12\uff13\uff14\uff15\uff16\uff17\uff18\uff19\uff10'),
+            u'\uff54\uff48\uff45 \uff51\uff55\uff49\uff43\uff4b \uff42\uff52\uff4f\uff57\uff4e \uff46\uff4f\uff58 \uff4a\uff55\uff4d\uff50\uff53 \uff4f\uff56\uff45\uff52 \uff54\uff48\uff45 \uff4c\uff41\uff5a\uff59 \uff44\uff4f\uff47 \uff11\uff12\uff13\uff14\uff15\uff16\uff17\uff18\uff19\uff10',
             # Double-struck
-            _u('\U0001d565\U0001d559\U0001d556 \U0001d562\U0001d566\U0001d55a\U0001d554\U0001d55c \U0001d553\U0001d563\U0001d560\U0001d568\U0001d55f \U0001d557\U0001d560\U0001d569 \U0001d55b\U0001d566\U0001d55e\U0001d561\U0001d564 \U0001d560\U0001d567\U0001d556\U0001d563 \U0001d565\U0001d559\U0001d556 \U0001d55d\U0001d552\U0001d56b\U0001d56a \U0001d555\U0001d560\U0001d558 \U0001d7d9\U0001d7da\U0001d7db\U0001d7dc\U0001d7dd\U0001d7de\U0001d7df\U0001d7e0\U0001d7e1\U0001d7d8'),
+            u'\U0001d565\U0001d559\U0001d556 \U0001d562\U0001d566\U0001d55a\U0001d554\U0001d55c \U0001d553\U0001d563\U0001d560\U0001d568\U0001d55f \U0001d557\U0001d560\U0001d569 \U0001d55b\U0001d566\U0001d55e\U0001d561\U0001d564 \U0001d560\U0001d567\U0001d556\U0001d563 \U0001d565\U0001d559\U0001d556 \U0001d55d\U0001d552\U0001d56b\U0001d56a \U0001d555\U0001d560\U0001d558 \U0001d7d9\U0001d7da\U0001d7db\U0001d7dc\U0001d7dd\U0001d7de\U0001d7df\U0001d7e0\U0001d7e1\U0001d7d8',
             # Bold
-            _u('\U0001d42d\U0001d421\U0001d41e \U0001d42a\U0001d42e\U0001d422\U0001d41c\U0001d424 \U0001d41b\U0001d42b\U0001d428\U0001d430\U0001d427 \U0001d41f\U0001d428\U0001d431 \U0001d423\U0001d42e\U0001d426\U0001d429\U0001d42c \U0001d428\U0001d42f\U0001d41e\U0001d42b \U0001d42d\U0001d421\U0001d41e \U0001d425\U0001d41a\U0001d433\U0001d432 \U0001d41d\U0001d428\U0001d420 \U0001d7cf\U0001d7d0\U0001d7d1\U0001d7d2\U0001d7d3\U0001d7d4\U0001d7d5\U0001d7d6\U0001d7d7\U0001d7ce'),
+            u'\U0001d42d\U0001d421\U0001d41e \U0001d42a\U0001d42e\U0001d422\U0001d41c\U0001d424 \U0001d41b\U0001d42b\U0001d428\U0001d430\U0001d427 \U0001d41f\U0001d428\U0001d431 \U0001d423\U0001d42e\U0001d426\U0001d429\U0001d42c \U0001d428\U0001d42f\U0001d41e\U0001d42b \U0001d42d\U0001d421\U0001d41e \U0001d425\U0001d41a\U0001d433\U0001d432 \U0001d41d\U0001d428\U0001d420 \U0001d7cf\U0001d7d0\U0001d7d1\U0001d7d2\U0001d7d3\U0001d7d4\U0001d7d5\U0001d7d6\U0001d7d7\U0001d7ce',
             # Bold italic
-            _u('\U0001d495\U0001d489\U0001d486 \U0001d492\U0001d496\U0001d48a\U0001d484\U0001d48c \U0001d483\U0001d493\U0001d490\U0001d498\U0001d48f \U0001d487\U0001d490\U0001d499 \U0001d48b\U0001d496\U0001d48e\U0001d491\U0001d494 \U0001d490\U0001d497\U0001d486\U0001d493 \U0001d495\U0001d489\U0001d486 \U0001d48d\U0001d482\U0001d49b\U0001d49a \U0001d485\U0001d490\U0001d488 1234567890'),
+            u'\U0001d495\U0001d489\U0001d486 \U0001d492\U0001d496\U0001d48a\U0001d484\U0001d48c \U0001d483\U0001d493\U0001d490\U0001d498\U0001d48f \U0001d487\U0001d490\U0001d499 \U0001d48b\U0001d496\U0001d48e\U0001d491\U0001d494 \U0001d490\U0001d497\U0001d486\U0001d493 \U0001d495\U0001d489\U0001d486 \U0001d48d\U0001d482\U0001d49b\U0001d49a \U0001d485\U0001d490\U0001d488 1234567890',
             # Bold script
-            _u('\U0001d4fd\U0001d4f1\U0001d4ee \U0001d4fa\U0001d4fe\U0001d4f2\U0001d4ec\U0001d4f4 \U0001d4eb\U0001d4fb\U0001d4f8\U0001d500\U0001d4f7 \U0001d4ef\U0001d4f8\U0001d501 \U0001d4f3\U0001d4fe\U0001d4f6\U0001d4f9\U0001d4fc \U0001d4f8\U0001d4ff\U0001d4ee\U0001d4fb \U0001d4fd\U0001d4f1\U0001d4ee \U0001d4f5\U0001d4ea\U0001d503\U0001d502 \U0001d4ed\U0001d4f8\U0001d4f0 1234567890'),
+            u'\U0001d4fd\U0001d4f1\U0001d4ee \U0001d4fa\U0001d4fe\U0001d4f2\U0001d4ec\U0001d4f4 \U0001d4eb\U0001d4fb\U0001d4f8\U0001d500\U0001d4f7 \U0001d4ef\U0001d4f8\U0001d501 \U0001d4f3\U0001d4fe\U0001d4f6\U0001d4f9\U0001d4fc \U0001d4f8\U0001d4ff\U0001d4ee\U0001d4fb \U0001d4fd\U0001d4f1\U0001d4ee \U0001d4f5\U0001d4ea\U0001d503\U0001d502 \U0001d4ed\U0001d4f8\U0001d4f0 1234567890',
             # Fraktur
-            _u('\U0001d599\U0001d58d\U0001d58a \U0001d596\U0001d59a\U0001d58e\U0001d588\U0001d590 \U0001d587\U0001d597\U0001d594\U0001d59c\U0001d593 \U0001d58b\U0001d594\U0001d59d \U0001d58f\U0001d59a\U0001d592\U0001d595\U0001d598 \U0001d594\U0001d59b\U0001d58a\U0001d597 \U0001d599\U0001d58d\U0001d58a \U0001d591\U0001d586\U0001d59f\U0001d59e \U0001d589\U0001d594\U0001d58c 1234567890'),
+            u'\U0001d599\U0001d58d\U0001d58a \U0001d596\U0001d59a\U0001d58e\U0001d588\U0001d590 \U0001d587\U0001d597\U0001d594\U0001d59c\U0001d593 \U0001d58b\U0001d594\U0001d59d \U0001d58f\U0001d59a\U0001d592\U0001d595\U0001d598 \U0001d594\U0001d59b\U0001d58a\U0001d597 \U0001d599\U0001d58d\U0001d58a \U0001d591\U0001d586\U0001d59f\U0001d59e \U0001d589\U0001d594\U0001d58c 1234567890',
         ]
 
         for s in lower:
@@ -478,17 +478,17 @@
 
         upper = [
             # Fullwidth
-            _u('\uff34\uff28\uff25 \uff31\uff35\uff29\uff23\uff2b \uff22\uff32\uff2f\uff37\uff2e \uff26\uff2f\uff38 \uff2a\uff35\uff2d\uff30\uff33 \uff2f\uff36\uff25\uff32 \uff34\uff28\uff25 \uff2c\uff21\uff3a\uff39 \uff24\uff2f\uff27 \uff11\uff12\uff13\uff14\uff15\uff16\uff17\uff18\uff19\uff10'),
+            u'\uff34\uff28\uff25 \uff31\uff35\uff29\uff23\uff2b \uff22\uff32\uff2f\uff37\uff2e \uff26\uff2f\uff38 \uff2a\uff35\uff2d\uff30\uff33 \uff2f\uff36\uff25\uff32 \uff34\uff28\uff25 \uff2c\uff21\uff3a\uff39 \uff24\uff2f\uff27 \uff11\uff12\uff13\uff14\uff15\uff16\uff17\uff18\uff19\uff10',
             # Double-struck
-            _u('\U0001d54b\u210d\U0001d53c \u211a\U0001d54c\U0001d540\u2102\U0001d542 \U0001d539\u211d\U0001d546\U0001d54e\u2115 \U0001d53d\U0001d546\U0001d54f \U0001d541\U0001d54c\U0001d544\u2119\U0001d54a \U0001d546\U0001d54d\U0001d53c\u211d \U0001d54b\u210d\U0001d53c \U0001d543\U0001d538\u2124\U0001d550 \U0001d53b\U0001d546\U0001d53e \U0001d7d9\U0001d7da\U0001d7db\U0001d7dc\U0001d7dd\U0001d7de\U0001d7df\U0001d7e0\U0001d7e1\U0001d7d8'),
+            u'\U0001d54b\u210d\U0001d53c \u211a\U0001d54c\U0001d540\u2102\U0001d542 \U0001d539\u211d\U0001d546\U0001d54e\u2115 \U0001d53d\U0001d546\U0001d54f \U0001d541\U0001d54c\U0001d544\u2119\U0001d54a \U0001d546\U0001d54d\U0001d53c\u211d \U0001d54b\u210d\U0001d53c \U0001d543\U0001d538\u2124\U0001d550 \U0001d53b\U0001d546\U0001d53e \U0001d7d9\U0001d7da\U0001d7db\U0001d7dc\U0001d7dd\U0001d7de\U0001d7df\U0001d7e0\U0001d7e1\U0001d7d8',
             # Bold
-            _u('\U0001d413\U0001d407\U0001d404 \U0001d410\U0001d414\U0001d408\U0001d402\U0001d40a \U0001d401\U0001d411\U0001d40e\U0001d416\U0001d40d \U0001d405\U0001d40e\U0001d417 \U0001d409\U0001d414\U0001d40c\U0001d40f\U0001d412 \U0001d40e\U0001d415\U0001d404\U0001d411 \U0001d413\U0001d407\U0001d404 \U0001d40b\U0001d400\U0001d419\U0001d418 \U0001d403\U0001d40e\U0001d406 \U0001d7cf\U0001d7d0\U0001d7d1\U0001d7d2\U0001d7d3\U0001d7d4\U0001d7d5\U0001d7d6\U0001d7d7\U0001d7ce'),
+            u'\U0001d413\U0001d407\U0001d404 \U0001d410\U0001d414\U0001d408\U0001d402\U0001d40a \U0001d401\U0001d411\U0001d40e\U0001d416\U0001d40d \U0001d405\U0001d40e\U0001d417 \U0001d409\U0001d414\U0001d40c\U0001d40f\U0001d412 \U0001d40e\U0001d415\U0001d404\U0001d411 \U0001d413\U0001d407\U0001d404 \U0001d40b\U0001d400\U0001d419\U0001d418 \U0001d403\U0001d40e\U0001d406 \U0001d7cf\U0001d7d0\U0001d7d1\U0001d7d2\U0001d7d3\U0001d7d4\U0001d7d5\U0001d7d6\U0001d7d7\U0001d7ce',
             # Bold italic
-            _u('\U0001d47b\U0001d46f\U0001d46c \U0001d478\U0001d47c\U0001d470\U0001d46a\U0001d472 \U0001d469\U0001d479\U0001d476\U0001d47e\U0001d475 \U0001d46d\U0001d476\U0001d47f \U0001d471\U0001d47c\U0001d474\U0001d477\U0001d47a \U0001d476\U0001d47d\U0001d46c\U0001d479 \U0001d47b\U0001d46f\U0001d46c \U0001d473\U0001d468\U0001d481\U0001d480 \U0001d46b\U0001d476\U0001d46e 1234567890'),
+            u'\U0001d47b\U0001d46f\U0001d46c \U0001d478\U0001d47c\U0001d470\U0001d46a\U0001d472 \U0001d469\U0001d479\U0001d476\U0001d47e\U0001d475 \U0001d46d\U0001d476\U0001d47f \U0001d471\U0001d47c\U0001d474\U0001d477\U0001d47a \U0001d476\U0001d47d\U0001d46c\U0001d479 \U0001d47b\U0001d46f\U0001d46c \U0001d473\U0001d468\U0001d481\U0001d480 \U0001d46b\U0001d476\U0001d46e 1234567890',
             # Bold script
-            _u('\U0001d4e3\U0001d4d7\U0001d4d4 \U0001d4e0\U0001d4e4\U0001d4d8\U0001d4d2\U0001d4da \U0001d4d1\U0001d4e1\U0001d4de\U0001d4e6\U0001d4dd \U0001d4d5\U0001d4de\U0001d4e7 \U0001d4d9\U0001d4e4\U0001d4dc\U0001d4df\U0001d4e2 \U0001d4de\U0001d4e5\U0001d4d4\U0001d4e1 \U0001d4e3\U0001d4d7\U0001d4d4 \U0001d4db\U0001d4d0\U0001d4e9\U0001d4e8 \U0001d4d3\U0001d4de\U0001d4d6 1234567890'),
+            u'\U0001d4e3\U0001d4d7\U0001d4d4 \U0001d4e0\U0001d4e4\U0001d4d8\U0001d4d2\U0001d4da \U0001d4d1\U0001d4e1\U0001d4de\U0001d4e6\U0001d4dd \U0001d4d5\U0001d4de\U0001d4e7 \U0001d4d9\U0001d4e4\U0001d4dc\U0001d4df\U0001d4e2 \U0001d4de\U0001d4e5\U0001d4d4\U0001d4e1 \U0001d4e3\U0001d4d7\U0001d4d4 \U0001d4db\U0001d4d0\U0001d4e9\U0001d4e8 \U0001d4d3\U0001d4de\U0001d4d6 1234567890',
             # Fraktur
-            _u('\U0001d57f\U0001d573\U0001d570 \U0001d57c\U0001d580\U0001d574\U0001d56e\U0001d576 \U0001d56d\U0001d57d\U0001d57a\U0001d582\U0001d579 \U0001d571\U0001d57a\U0001d583 \U0001d575\U0001d580\U0001d578\U0001d57b\U0001d57e \U0001d57a\U0001d581\U0001d570\U0001d57d \U0001d57f\U0001d573\U0001d570 \U0001d577\U0001d56c\U0001d585\U0001d584 \U0001d56f\U0001d57a\U0001d572 1234567890'),
+            u'\U0001d57f\U0001d573\U0001d570 \U0001d57c\U0001d580\U0001d574\U0001d56e\U0001d576 \U0001d56d\U0001d57d\U0001d57a\U0001d582\U0001d579 \U0001d571\U0001d57a\U0001d583 \U0001d575\U0001d580\U0001d578\U0001d57b\U0001d57e \U0001d57a\U0001d581\U0001d570\U0001d57d \U0001d57f\U0001d573\U0001d570 \U0001d577\U0001d56c\U0001d585\U0001d584 \U0001d56f\U0001d57a\U0001d572 1234567890',
         ]
 
         for s in upper:
@@ -499,7 +499,7 @@
     def test_enclosed_alphanumerics(self):
         self.assertEqual(
             'aA20(20)20.20100',
-            self.unidecode(_u('ⓐⒶ⑳⒇⒛⓴⓾⓿')),
+            self.unidecode(u'ⓐⒶ⑳⒇⒛⓴⓾⓿'),
         )
 
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Unidecode-1.0.23/tests/test_utility.py new/Unidecode-1.1.0/tests/test_utility.py
--- old/Unidecode-1.0.23/tests/test_utility.py  2018-06-19 20:28:46.000000000 +0200
+++ new/Unidecode-1.1.0/tests/test_utility.py   2019-01-19 11:34:57.000000000 +0100
@@ -6,16 +6,9 @@
 import sys
 import tempfile
 
-PY3 = sys.version_info[0] >= 3
 
 here = os.path.dirname(__file__)
 
-if PY3:
-    def _u(x):
-        return x
-else:
-    def _u(x):
-        return x.decode('unicode-escape')
 
 def get_cmd():
     sys_path = os.path.join(here, "..")
@@ -39,7 +32,7 @@
 
 class TestUnidecodeUtility(unittest.TestCase):
 
-    TEST_UNICODE = _u('\u9769')
+    TEST_UNICODE = u'\u9769'
     TEST_ASCII = 'Ge '
 
     def test_encoding_error(self):
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Unidecode-1.0.23/unidecode/__main__.py new/Unidecode-1.1.0/unidecode/__main__.py
--- old/Unidecode-1.0.23/unidecode/__main__.py  1970-01-01 01:00:00.000000000 +0100
+++ new/Unidecode-1.1.0/unidecode/__main__.py   2019-01-19 12:00:08.000000000 +0100
@@ -0,0 +1,3 @@
+from unidecode.util import main
+
+main()
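For context, the three-line `__main__.py` added above is what enables `python -m unidecode`: the interpreter imports the package and executes its `__main__` module. A minimal sketch of the mechanism, using a hypothetical `demopkg` package built in a temporary directory (not part of Unidecode):

```python
import os
import subprocess
import sys
import tempfile

# Build a throwaway package containing an __main__.py, then run it
# with "python -m". The package name and contents are illustrative.
with tempfile.TemporaryDirectory() as d:
    pkg = os.path.join(d, 'demopkg')
    os.mkdir(pkg)
    open(os.path.join(pkg, '__init__.py'), 'w').close()
    with open(os.path.join(pkg, '__main__.py'), 'w') as f:
        f.write("print('transliterating...')\n")
    # "-m" puts the current working directory on sys.path, so the
    # temporary package is importable when cwd is the temp dir.
    result = subprocess.run([sys.executable, '-m', 'demopkg'],
                            cwd=d, capture_output=True, text=True)

print(result.stdout.strip())  # transliterating...
```

In the real package the executed module simply calls `unidecode.util.main()`, as the hunk above shows.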
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Unidecode-1.0.23/unidecode/util.py new/Unidecode-1.1.0/unidecode/util.py
--- old/Unidecode-1.0.23/unidecode/util.py      2018-06-19 20:28:46.000000000 +0200
+++ new/Unidecode-1.1.0/unidecode/util.py       2019-01-19 11:34:57.000000000 +0100
@@ -1,10 +1,9 @@
 # vim:ts=4 sw=4 expandtab softtabstop=4
 from __future__ import print_function
-import optparse
+import argparse
 import locale
 import os
 import sys
-import warnings
 
 from unidecode import unidecode
 
@@ -17,33 +16,34 @@
 def main():
     default_encoding = locale.getpreferredencoding()
 
-    parser = optparse.OptionParser('%prog [options] [FILE]',
+    parser = argparse.ArgumentParser(
             description="Transliterate Unicode text into ASCII. FILE is path to file to transliterate. "
             "Standard input is used if FILE is omitted and -c is not specified.")
-    parser.add_option('-e', '--encoding', metavar='ENCODING', default=default_encoding,
+    parser.add_argument('-e', '--encoding', metavar='ENCODING', default=default_encoding,
             help='Specify an encoding (default is %s)' % (default_encoding,))
-    parser.add_option('-c', metavar='TEXT', dest='text',
+    parser.add_argument('-c', metavar='TEXT', dest='text',
             help='Transliterate TEXT instead of FILE')
+    parser.add_argument('path', nargs='?', metavar='FILE')
 
-    options, args = parser.parse_args()
+    args = parser.parse_args()
 
-    encoding = options.encoding
+    encoding = args.encoding
 
-    if args:
-        if options.text:
+    if args.path:
+        if args.text:
             fatal("Can't use both FILE and -c option")
         else:
-            with open(args[0], 'rb') as f:
+            with open(args.path, 'rb') as f:
                 stream = f.read()
-    elif options.text:
+    elif args.text:
         if PY3:
-            stream = os.fsencode(options.text)
+            stream = os.fsencode(args.text)
         else:
-            stream = options.text
+            stream = args.text
         # add a newline to the string if it comes from the
         # command line so that the result is printed nicely
         # on the console.
-        stream += '\n'.encode('ascii')
+        stream += b'\n'
     else:
         if PY3:
             stream = sys.stdin.buffer.read()
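The optparse-to-argparse migration above boils down to declaring the optional FILE as a positional argument (`nargs='?'`) and reading everything from a single namespace. A runnable sketch of that interface (the `build_parser` helper and the hard-coded default encoding are illustrative, not the actual `unidecode/util.py`):

```python
import argparse

def build_parser(default_encoding='UTF-8'):
    # Mirrors the option surface of the new util.py: -e/--encoding,
    # -c TEXT, and an optional positional FILE.
    parser = argparse.ArgumentParser(
            description="Transliterate Unicode text into ASCII. FILE is path to file to transliterate. "
            "Standard input is used if FILE is omitted and -c is not specified.")
    parser.add_argument('-e', '--encoding', metavar='ENCODING', default=default_encoding,
            help='Specify an encoding (default is %s)' % (default_encoding,))
    parser.add_argument('-c', metavar='TEXT', dest='text',
            help='Transliterate TEXT instead of FILE')
    parser.add_argument('path', nargs='?', metavar='FILE')
    return parser

args = build_parser().parse_args(['-c', 'chateau'])
print(args.text)  # chateau
print(args.path)  # None
```

Unlike `optparse.OptionParser.parse_args()`, which returns an `(options, args)` pair, `ArgumentParser.parse_args()` returns one namespace, which is why the `if args:` / `options.text` branching in the old code collapses into `args.path` / `args.text` checks.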
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Unidecode-1.0.23/unidecode/x1f1.py new/Unidecode-1.1.0/unidecode/x1f1.py
--- old/Unidecode-1.0.23/unidecode/x1f1.py      2018-06-19 20:28:46.000000000 +0200
+++ new/Unidecode-1.1.0/unidecode/x1f1.py       2019-01-19 12:08:18.000000000 +0100
@@ -41,102 +41,102 @@
 '(X)',    # 0x27
 '(Y)',    # 0x28
 '(Z)',    # 0x29
-'',    # 0x2a
-'',    # 0x2b
-'',    # 0x2c
+'<S>',    # 0x2a
+'(C)',    # 0x2b
+'(R)',    # 0x2c
 '',    # 0x2d
 '',    # 0x2e
 '',    # 0x2f
-'',    # 0x30
-'',    # 0x31
-'',    # 0x32
-'',    # 0x33
-'',    # 0x34
-'',    # 0x35
-'',    # 0x36
-'',    # 0x37
-'',    # 0x38
-'',    # 0x39
-'',    # 0x3a
-'',    # 0x3b
-'',    # 0x3c
-'',    # 0x3d
-'',    # 0x3e
-'',    # 0x3f
-'',    # 0x40
-'',    # 0x41
-'',    # 0x42
-'',    # 0x43
-'',    # 0x44
-'',    # 0x45
-'',    # 0x46
-'',    # 0x47
-'',    # 0x48
-'',    # 0x49
+'[A]',    # 0x30
+'[B]',    # 0x31
+'[C]',    # 0x32
+'[D]',    # 0x33
+'[E]',    # 0x34
+'[F]',    # 0x35
+'[G]',    # 0x36
+'[H]',    # 0x37
+'[I]',    # 0x38
+'[J]',    # 0x39
+'[K]',    # 0x3a
+'[L]',    # 0x3b
+'[M]',    # 0x3c
+'[N]',    # 0x3d
+'[O]',    # 0x3e
+'[P]',    # 0x3f
+'[Q]',    # 0x40
+'[R]',    # 0x41
+'[S]',    # 0x42
+'[T]',    # 0x43
+'[U]',    # 0x44
+'[V]',    # 0x45
+'[W]',    # 0x46
+'[X]',    # 0x47
+'[Y]',    # 0x48
+'[Z]',    # 0x49
 '',    # 0x4a
 '',    # 0x4b
 '',    # 0x4c
 '',    # 0x4d
 '',    # 0x4e
 '',    # 0x4f
-'',    # 0x50
-'',    # 0x51
-'',    # 0x52
-'',    # 0x53
-'',    # 0x54
-'',    # 0x55
-'',    # 0x56
-'',    # 0x57
-'',    # 0x58
-'',    # 0x59
-'',    # 0x5a
-'',    # 0x5b
-'',    # 0x5c
-'',    # 0x5d
-'',    # 0x5e
-'',    # 0x5f
-'',    # 0x60
-'',    # 0x61
-'',    # 0x62
-'',    # 0x63
-'',    # 0x64
-'',    # 0x65
-'',    # 0x66
-'',    # 0x67
-'',    # 0x68
-'',    # 0x69
+'(A)',    # 0x50
+'(B)',    # 0x51
+'(C)',    # 0x52
+'(D)',    # 0x53
+'(E)',    # 0x54
+'(F)',    # 0x55
+'(G)',    # 0x56
+'(H)',    # 0x57
+'(I)',    # 0x58
+'(J)',    # 0x59
+'(K)',    # 0x5a
+'(L)',    # 0x5b
+'(M)',    # 0x5c
+'(N)',    # 0x5d
+'(O)',    # 0x5e
+'(P)',    # 0x5f
+'(Q)',    # 0x60
+'(R)',    # 0x61
+'(S)',    # 0x62
+'(T)',    # 0x63
+'(U)',    # 0x64
+'(V)',    # 0x65
+'(W)',    # 0x66
+'(X)',    # 0x67
+'(Y)',    # 0x68
+'(Z)',    # 0x69
 '',    # 0x6a
 '',    # 0x6b
 '',    # 0x6c
 '',    # 0x6d
 '',    # 0x6e
 '',    # 0x6f
-'',    # 0x70
-'',    # 0x71
-'',    # 0x72
-'',    # 0x73
-'',    # 0x74
-'',    # 0x75
-'',    # 0x76
-'',    # 0x77
-'',    # 0x78
-'',    # 0x79
-'',    # 0x7a
-'',    # 0x7b
-'',    # 0x7c
-'',    # 0x7d
-'',    # 0x7e
-'',    # 0x7f
-'',    # 0x80
-'',    # 0x81
-'',    # 0x82
-'',    # 0x83
-'',    # 0x84
-'',    # 0x85
-'',    # 0x86
-'',    # 0x87
-'',    # 0x88
-'',    # 0x89
+'[A]',    # 0x70
+'[B]',    # 0x71
+'[C]',    # 0x72
+'[D]',    # 0x73
+'[E]',    # 0x74
+'[F]',    # 0x75
+'[G]',    # 0x76
+'[H]',    # 0x77
+'[I]',    # 0x78
+'[J]',    # 0x79
+'[K]',    # 0x7a
+'[L]',    # 0x7b
+'[M]',    # 0x7c
+'[N]',    # 0x7d
+'[O]',    # 0x7e
+'[P]',    # 0x7f
+'[Q]',    # 0x80
+'[R]',    # 0x81
+'[S]',    # 0x82
+'[T]',    # 0x83
+'[U]',    # 0x84
+'[V]',    # 0x85
+'[W]',    # 0x86
+'[X]',    # 0x87
+'[Y]',    # 0x88
+'[Z]',    # 0x89
 '',    # 0x8a
 '',    # 0x8b
 '',    # 0x8c
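The x1f1.py hunk above fills previously empty slots of a 256-entry page table with Latin letter variants at offsets 0x30-0x49, 0x50-0x69, and 0x70-0x89. A sketch of how such a page table is consulted, splitting a codepoint into a page (high bits) and an offset (low byte); the table here is rebuilt from the hunk for illustration, not imported from Unidecode:

```python
# 256-entry page table for U+1F1xx, filled only with the entries the
# hunk adds; everything else transliterates to the empty string.
TABLE = [''] * 256
for i, letter in enumerate('ABCDEFGHIJKLMNOPQRSTUVWXYZ'):
    TABLE[0x30 + i] = '[%s]' % letter   # offsets 0x30..0x49 in the hunk
    TABLE[0x50 + i] = '(%s)' % letter   # offsets 0x50..0x69
    TABLE[0x70 + i] = '[%s]' % letter   # offsets 0x70..0x89

def lookup(ch):
    # Page-table addressing: the high bits select the data module
    # (fixed to 0x1f1 in this sketch), the low byte indexes into it.
    codepoint = ord(ch)
    if codepoint >> 8 != 0x1f1:
        raise ValueError('this sketch only covers the U+1F1xx page')
    return TABLE[codepoint & 0xff]

print(lookup('\U0001F170'))  # [A]
print(lookup('\U0001F150'))  # (A)
```

Splitting the table per 256-codepoint page keeps memory use proportional to the pages actually encountered, which is why the change is confined to the single `x1f1.py` data file.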

