Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package python-Unidecode for 
openSUSE:Factory checked in at 2021-10-20 20:23:22
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-Unidecode (Old)
 and      /work/SRC/openSUSE:Factory/.python-Unidecode.new.1890 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "python-Unidecode"

Wed Oct 20 20:23:22 2021 rev:13 rq:925642 version:1.3.1

Changes:
--------
--- /work/SRC/openSUSE:Factory/python-Unidecode/python-Unidecode.changes        
2021-02-11 12:48:18.949569982 +0100
+++ 
/work/SRC/openSUSE:Factory/.python-Unidecode.new.1890/python-Unidecode.changes  
    2021-10-20 20:24:05.505370944 +0200
@@ -1,0 +2,18 @@
+Thu Sep  9 20:38:32 UTC 2021 - Benoît Monin <benoit.mo...@gmx.fr>
+
+- update to version 1.3.1:
+  * Fix issue with wheel package falsely claiming support for
+    Python 2.
+
+-------------------------------------------------------------------
+Mon Sep  6 18:03:00 UTC 2021 - Benoît Monin <benoit.mo...@gmx.fr>
+
+- update to version 1.3.0:
+  * Drop support for Python <3.5.
+  * Improvements to Hebrew and Yiddish transliterations (thanks to
+    Alon Bar-Lev and @eyaler on GitHub)
+- move update-alternative to postun instead of preun:
+  fix rpmlint warning
+- disable python2 build: unsupported by upstream now
+
+-------------------------------------------------------------------
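
A quick way to confirm the Python 2 related changes above from an installed copy: the rebuilt sdist/wheel now declares a Python-3-only requirement (see the PKG-INFO diff below). A minimal sketch, assuming Python >= 3.8 for importlib.metadata and that Unidecode 1.3.1 is installed:

    from importlib.metadata import metadata

    # The updated metadata declares Requires-Python >=3.5, so an installer
    # running on Python 2 will reject the package instead of installing it.
    print(metadata("Unidecode")["Requires-Python"])   # expected: >=3.5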

Old:
----
  Unidecode-1.2.0.tar.gz

New:
----
  Unidecode-1.3.1.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-Unidecode.spec ++++++
--- /var/tmp/diff_new_pack.hOUzkj/_old  2021-10-20 20:24:06.009371255 +0200
+++ /var/tmp/diff_new_pack.hOUzkj/_new  2021-10-20 20:24:06.009371255 +0200
@@ -16,9 +16,10 @@
 #
 
 
+%define skip_python2 1
 %{?!python_module:%define python_module() python-%{**} python3-%{**}}
 Name:           python-Unidecode
-Version:        1.2.0
+Version:        1.3.1
 Release:        0
 Summary:        ASCII transliterations of Unicode text
 License:        GPL-2.0-or-later
@@ -29,7 +30,7 @@
 BuildRequires:  fdupes
 BuildRequires:  python-rpm-macros
 Requires(post): update-alternatives
-Requires(preun): update-alternatives
+Requires(postun):update-alternatives
 BuildArch:      noarch
 %python_subpackages
 
@@ -90,7 +91,7 @@
 %post
 %python_install_alternative unidecode
 
-%preun
+%postun
 %python_uninstall_alternative unidecode
 
 %files %{python_files}

++++++ Unidecode-1.2.0.tar.gz -> Unidecode-1.3.1.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/Unidecode-1.2.0/ChangeLog 
new/Unidecode-1.3.1/ChangeLog
--- old/Unidecode-1.2.0/ChangeLog       2021-02-05 12:27:40.000000000 +0100
+++ new/Unidecode-1.3.1/ChangeLog       2021-09-09 22:19:39.000000000 +0200
@@ -1,3 +1,11 @@
+2021-09-09     unidecode 1.3.1
+       * Fix issue with wheel package falsely claiming support for Python 2.
+
+2021-09-06     unidecode 1.3.0
+       * Drop support for Python <3.5.
+       * Improvements to Hebrew and Yiddish transliterations (thanks to Alon
+         Bar-Lev and @eyaler on GitHub)
+
 2021-02-05     unidecode 1.2.0
        * Add 'errors' argument that specifies how characters with unknown
          replacements are handled. Default is 'ignore' to replicate the
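
The 'errors' argument mentioned in the 1.2.0 entry above is what the modernised tests further down keep exercising. A minimal sketch of its behaviour, assuming Unidecode >= 1.2 is installed; the expected values are taken from tests/test_unidecode.py in this diff:

    from unidecode import unidecode, UnidecodeError

    # U+F0000 is a private-use character with no replacement table entry.
    unidecode("test \U000f0000 test", errors="ignore")    # 'test  test'
    unidecode("test \U000f0000 test", errors="replace")   # 'test ? test'
    try:
        unidecode("test \U000f0000 test", errors="strict")
    except UnidecodeError as exc:
        print(exc.index)                                  # 5, index of the offending character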
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/Unidecode-1.2.0/MANIFEST.in 
new/Unidecode-1.3.1/MANIFEST.in
--- old/Unidecode-1.2.0/MANIFEST.in     2020-12-20 12:43:16.000000000 +0100
+++ new/Unidecode-1.3.1/MANIFEST.in     2021-02-05 14:19:53.000000000 +0100
@@ -2,4 +2,5 @@
 include ChangeLog
 include LICENSE
 include README.rst
+include tox.ini
 recursive-include tests *.py
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/Unidecode-1.2.0/PKG-INFO new/Unidecode-1.3.1/PKG-INFO
--- old/Unidecode-1.2.0/PKG-INFO        2021-02-05 12:51:35.000000000 +0100
+++ new/Unidecode-1.3.1/PKG-INFO        2021-09-09 22:23:28.000000000 +0200
@@ -1,6 +1,6 @@
 Metadata-Version: 1.2
 Name: Unidecode
-Version: 1.2.0
+Version: 1.3.1
 Summary: ASCII transliterations of Unicode text
 Home-page: UNKNOWN
 Author: Tomaz Solc
@@ -60,12 +60,12 @@
         Module content
         --------------
         
-        The module exports a function that takes an Unicode object (Python 
2.x) or
-        string (Python 3.x) and returns a string (that can be encoded to ASCII 
bytes in
-        Python 3.x)::
+        This library contains a function that takes a string object, possibly
+        containing non-ASCII characters, and returns a string that can be 
safely
+        encoded to ASCII::
         
             >>> from unidecode import unidecode
-            >>> unidecode('ko\u017eu\u0161\u010dek')
+            >>> unidecode('kožušček')
             'kozuscek'
             >>> unidecode('30 \U0001d5c4\U0001d5c6/\U0001d5c1')
             '30 km/h'
@@ -113,10 +113,7 @@
         Requirements
         ------------
         
-        Nothing except Python itself. Current release of Unidecode supports 
Python 2.7
-        and 3.4 or later.
-        
-        **Support for versions earlier than 3.5 will be dropped in the next 
release.**
+        Nothing except Python itself. Unidecode supports Python 3.5 or later.
         
         You need a Python build with "wide" Unicode characters (also called 
"UCS-4
         build") in order for Unidecode to work correctly with characters 
outside of
@@ -199,6 +196,14 @@
             ``repr()`` and consult the
             `Unicode HOWTO <https://docs.python.org/3/howto/unicode.html>`_.
         
+        Why does Unidecode not replace \\u and \\U backslash escapes in my 
strings?
+            Unidecode knows nothing about escape sequences. Interpreting these 
sequences
+            and replacing them with actual Unicode characters in string 
literals is the
+            task of the Python interpreter. If you are asking this question 
you are
+            very likely misunderstanding the purpose of this library. Consult 
the
+            `Unicode HOWTO <https://docs.python.org/3/howto/unicode.html>`_ 
and possibly
+            the ``unicode_escape`` encoding in the standard library.
+        
         I've upgraded Unidecode and now some URLs on my website return 404 Not 
Found.
             This is an issue with the software that is running your website, 
not
             Unidecode. Occasionally, new versions of Unidecode library are 
released
@@ -268,7 +273,7 @@
         
         Python code and later additions:
         
-        Copyright 2021, Tomaz Solc <tomaz.s...@tablix.org>
+        Copyright 2021, Tomaž Šolc <tomaz.s...@tablix.org>
         
         This program is free software; you can redistribute it and/or modify it
         under the terms of the GNU General Public License as published by the 
Free
@@ -293,15 +298,14 @@
 Platform: UNKNOWN
 Classifier: License :: OSI Approved :: GNU General Public License v2 or later 
(GPLv2+)
 Classifier: Programming Language :: Python
-Classifier: Programming Language :: Python :: 2
-Classifier: Programming Language :: Python :: 2.7
 Classifier: Programming Language :: Python :: 3
-Classifier: Programming Language :: Python :: 3.4
 Classifier: Programming Language :: Python :: 3.5
 Classifier: Programming Language :: Python :: 3.6
 Classifier: Programming Language :: Python :: 3.7
+Classifier: Programming Language :: Python :: 3.8
+Classifier: Programming Language :: Python :: 3.9
 Classifier: Programming Language :: Python :: Implementation :: CPython
 Classifier: Programming Language :: Python :: Implementation :: PyPy
 Classifier: Topic :: Text Processing
 Classifier: Topic :: Text Processing :: Filters
-Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*
+Requires-Python: >=3.5
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/Unidecode-1.2.0/README.rst 
new/Unidecode-1.3.1/README.rst
--- old/Unidecode-1.2.0/README.rst      2021-02-05 12:27:40.000000000 +0100
+++ new/Unidecode-1.3.1/README.rst      2021-03-06 18:04:08.000000000 +0100
@@ -52,12 +52,12 @@
 Module content
 --------------
 
-The module exports a function that takes an Unicode object (Python 2.x) or
-string (Python 3.x) and returns a string (that can be encoded to ASCII bytes in
-Python 3.x)::
+This library contains a function that takes a string object, possibly
+containing non-ASCII characters, and returns a string that can be safely
+encoded to ASCII::
 
     >>> from unidecode import unidecode
-    >>> unidecode('ko\u017eu\u0161\u010dek')
+    >>> unidecode('kožušček')
     'kozuscek'
     >>> unidecode('30 \U0001d5c4\U0001d5c6/\U0001d5c1')
     '30 km/h'
@@ -105,10 +105,7 @@
 Requirements
 ------------
 
-Nothing except Python itself. Current release of Unidecode supports Python 2.7
-and 3.4 or later.
-
-**Support for versions earlier than 3.5 will be dropped in the next release.**
+Nothing except Python itself. Unidecode supports Python 3.5 or later.
 
 You need a Python build with "wide" Unicode characters (also called "UCS-4
 build") in order for Unidecode to work correctly with characters outside of
@@ -191,6 +188,14 @@
     ``repr()`` and consult the
     `Unicode HOWTO <https://docs.python.org/3/howto/unicode.html>`_.
 
+Why does Unidecode not replace \\u and \\U backslash escapes in my strings?
+    Unidecode knows nothing about escape sequences. Interpreting these 
sequences
+    and replacing them with actual Unicode characters in string literals is the
+    task of the Python interpreter. If you are asking this question you are
+    very likely misunderstanding the purpose of this library. Consult the
+    `Unicode HOWTO <https://docs.python.org/3/howto/unicode.html>`_ and 
possibly
+    the ``unicode_escape`` encoding in the standard library.
+
 I've upgraded Unidecode and now some URLs on my website return 404 Not Found.
     This is an issue with the software that is running your website, not
     Unidecode. Occasionally, new versions of Unidecode library are released
@@ -260,7 +265,7 @@
 
 Python code and later additions:
 
-Copyright 2021, Tomaz Solc <tomaz.s...@tablix.org>
+Copyright 2021, Tomaž Šolc <tomaz.s...@tablix.org>
 
 This program is free software; you can redistribute it and/or modify it
 under the terms of the GNU General Public License as published by the Free
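
Regarding the new FAQ entry about \u and \U backslash escapes added to the README above: a short illustration of the distinction it draws, assuming standard CPython; the backslash sequences here stand for literal eight-character text (e.g. read from a file), not Python escapes:

    from unidecode import unidecode

    raw = "ko\\u017eu\\u0161\\u010dek"        # literal backslash-u text, pure ASCII
    unidecode(raw)                             # returned unchanged: unidecode ignores escape sequences
    decoded = raw.encode("ascii").decode("unicode_escape")   # -> 'kožušček'
    unidecode(decoded)                         # -> 'kozuscek'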
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/Unidecode-1.2.0/Unidecode.egg-info/PKG-INFO 
new/Unidecode-1.3.1/Unidecode.egg-info/PKG-INFO
--- old/Unidecode-1.2.0/Unidecode.egg-info/PKG-INFO     2021-02-05 
12:51:34.000000000 +0100
+++ new/Unidecode-1.3.1/Unidecode.egg-info/PKG-INFO     2021-09-09 
22:23:27.000000000 +0200
@@ -1,6 +1,6 @@
 Metadata-Version: 1.2
 Name: Unidecode
-Version: 1.2.0
+Version: 1.3.1
 Summary: ASCII transliterations of Unicode text
 Home-page: UNKNOWN
 Author: Tomaz Solc
@@ -60,12 +60,12 @@
         Module content
         --------------
         
-        The module exports a function that takes an Unicode object (Python 
2.x) or
-        string (Python 3.x) and returns a string (that can be encoded to ASCII 
bytes in
-        Python 3.x)::
+        This library contains a function that takes a string object, possibly
+        containing non-ASCII characters, and returns a string that can be 
safely
+        encoded to ASCII::
         
             >>> from unidecode import unidecode
-            >>> unidecode('ko\u017eu\u0161\u010dek')
+            >>> unidecode('kožušček')
             'kozuscek'
             >>> unidecode('30 \U0001d5c4\U0001d5c6/\U0001d5c1')
             '30 km/h'
@@ -113,10 +113,7 @@
         Requirements
         ------------
         
-        Nothing except Python itself. Current release of Unidecode supports 
Python 2.7
-        and 3.4 or later.
-        
-        **Support for versions earlier than 3.5 will be dropped in the next 
release.**
+        Nothing except Python itself. Unidecode supports Python 3.5 or later.
         
         You need a Python build with "wide" Unicode characters (also called 
"UCS-4
         build") in order for Unidecode to work correctly with characters 
outside of
@@ -199,6 +196,14 @@
             ``repr()`` and consult the
             `Unicode HOWTO <https://docs.python.org/3/howto/unicode.html>`_.
         
+        Why does Unidecode not replace \\u and \\U backslash escapes in my 
strings?
+            Unidecode knows nothing about escape sequences. Interpreting these 
sequences
+            and replacing them with actual Unicode characters in string 
literals is the
+            task of the Python interpreter. If you are asking this question 
you are
+            very likely misunderstanding the purpose of this library. Consult 
the
+            `Unicode HOWTO <https://docs.python.org/3/howto/unicode.html>`_ 
and possibly
+            the ``unicode_escape`` encoding in the standard library.
+        
         I've upgraded Unidecode and now some URLs on my website return 404 Not 
Found.
             This is an issue with the software that is running your website, 
not
             Unidecode. Occasionally, new versions of Unidecode library are 
released
@@ -268,7 +273,7 @@
         
         Python code and later additions:
         
-        Copyright 2021, Tomaz Solc <tomaz.s...@tablix.org>
+        Copyright 2021, Tomaž Šolc <tomaz.s...@tablix.org>
         
         This program is free software; you can redistribute it and/or modify it
         under the terms of the GNU General Public License as published by the 
Free
@@ -293,15 +298,14 @@
 Platform: UNKNOWN
 Classifier: License :: OSI Approved :: GNU General Public License v2 or later 
(GPLv2+)
 Classifier: Programming Language :: Python
-Classifier: Programming Language :: Python :: 2
-Classifier: Programming Language :: Python :: 2.7
 Classifier: Programming Language :: Python :: 3
-Classifier: Programming Language :: Python :: 3.4
 Classifier: Programming Language :: Python :: 3.5
 Classifier: Programming Language :: Python :: 3.6
 Classifier: Programming Language :: Python :: 3.7
+Classifier: Programming Language :: Python :: 3.8
+Classifier: Programming Language :: Python :: 3.9
 Classifier: Programming Language :: Python :: Implementation :: CPython
 Classifier: Programming Language :: Python :: Implementation :: PyPy
 Classifier: Topic :: Text Processing
 Classifier: Topic :: Text Processing :: Filters
-Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*
+Requires-Python: >=3.5
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/Unidecode-1.2.0/Unidecode.egg-info/SOURCES.txt 
new/Unidecode-1.3.1/Unidecode.egg-info/SOURCES.txt
--- old/Unidecode-1.2.0/Unidecode.egg-info/SOURCES.txt  2021-02-05 
12:51:35.000000000 +0100
+++ new/Unidecode-1.3.1/Unidecode.egg-info/SOURCES.txt  2021-09-09 
22:23:28.000000000 +0200
@@ -5,6 +5,7 @@
 perl2python.pl
 setup.cfg
 setup.py
+tox.ini
 Unidecode.egg-info/PKG-INFO
 Unidecode.egg-info/SOURCES.txt
 Unidecode.egg-info/dependency_links.txt
@@ -15,7 +16,6 @@
 tests/test_unidecode.py
 tests/test_utility.py
 unidecode/__init__.py
-unidecode/__init__.pyi
 unidecode/__main__.py
 unidecode/py.typed
 unidecode/util.py
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/Unidecode-1.2.0/setup.cfg 
new/Unidecode-1.3.1/setup.cfg
--- old/Unidecode-1.2.0/setup.cfg       2021-02-05 12:51:35.000000000 +0100
+++ new/Unidecode-1.3.1/setup.cfg       2021-09-09 22:23:28.000000000 +0200
@@ -1,6 +1,3 @@
-[bdist_wheel]
-universal = 1
-
 [metadata]
 license_file = LICENSE
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/Unidecode-1.2.0/setup.py new/Unidecode-1.3.1/setup.py
--- old/Unidecode-1.2.0/setup.py        2021-02-05 12:27:40.000000000 +0100
+++ new/Unidecode-1.3.1/setup.py        2021-09-09 22:18:48.000000000 +0200
@@ -6,12 +6,12 @@
 
 
 def get_long_description():
-    with open(os.path.join(os.path.dirname(__file__), "README.rst")) as fp:
+    with open(os.path.join(os.path.dirname(__file__), "README.rst"), 
encoding='utf-8') as fp:
         return fp.read()
 
 setup(
     name='Unidecode',
-    version='1.2.0',
+    version='1.3.1',
     description='ASCII transliterations of Unicode text',
     license='GPL',
     long_description=get_long_description(),
@@ -19,8 +19,8 @@
     author_email='tomaz.s...@tablix.org',
 
     packages=['unidecode'],
-    package_data={'unidecode': ['py.typed', '__init__.pyi']},
-    python_requires=">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*",
+    package_data={'unidecode': ['py.typed']},
+    python_requires=">=3.5",
 
     test_suite='tests',
 
@@ -32,13 +32,12 @@
     classifiers=[
         "License :: OSI Approved :: GNU General Public License v2 or later 
(GPLv2+)",
         "Programming Language :: Python",
-        "Programming Language :: Python :: 2",
-        "Programming Language :: Python :: 2.7",
         "Programming Language :: Python :: 3",
-        "Programming Language :: Python :: 3.4",
         "Programming Language :: Python :: 3.5",
         "Programming Language :: Python :: 3.6",
         "Programming Language :: Python :: 3.7",
+        "Programming Language :: Python :: 3.8",
+        "Programming Language :: Python :: 3.9",
         "Programming Language :: Python :: Implementation :: CPython",
         "Programming Language :: Python :: Implementation :: PyPy",
         "Topic :: Text Processing",
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/Unidecode-1.2.0/tests/test_readme.py 
new/Unidecode-1.3.1/tests/test_readme.py
--- old/Unidecode-1.2.0/tests/test_readme.py    2021-01-08 15:45:54.000000000 
+0100
+++ new/Unidecode-1.3.1/tests/test_readme.py    2021-02-05 12:59:14.000000000 
+0100
@@ -2,7 +2,7 @@
 import sys
 
 def additional_tests():
-       if sys.version_info[0] >= 3 and sys.maxunicode >= 0x10000:
+       if sys.maxunicode >= 0x10000:
                return doctest.DocFileSuite("../README.rst")
        else:
                return doctest.DocFileSuite()
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/Unidecode-1.2.0/tests/test_unidecode.py 
new/Unidecode-1.3.1/tests/test_unidecode.py
--- old/Unidecode-1.2.0/tests/test_unidecode.py 2021-01-08 16:24:39.000000000 
+0100
+++ new/Unidecode-1.3.1/tests/test_unidecode.py 2021-02-05 13:05:32.000000000 
+0100
@@ -29,27 +29,7 @@
     def clear(self):
         self.log = []
 
-if sys.version_info[0] >= 3:
-    _chr = chr
-else:
-    _chr = unichr
-
 class BaseTestUnidecode():
-    @unittest.skipIf(sys.version_info[0] >= 3, "not python 2")
-    def test_ascii_warning(self):
-        wlog = WarningLogger()
-        wlog.start("not an unicode object")
-
-        for n in range(0,128):
-            t = chr(n)
-
-            r = self.unidecode(t)
-            self.assertEqual(r, t)
-            self.assertEqual(type(r), str)
-
-        # Passing string objects to unidecode should raise a warning
-        self.assertEqual(128, len(wlog.log))
-        wlog.stop()
 
     def test_ascii(self):
 
@@ -57,7 +37,7 @@
         wlog.start("not an unicode object")
 
         for n in range(0,128):
-            t = _chr(n)
+            t = chr(n)
 
             r = self.unidecode(t)
             self.assertEqual(r, t)
@@ -75,7 +55,7 @@
                 continue
 
             # Just check that it doesn't throw an exception
-            t = _chr(n)
+            t = chr(n)
             self.unidecode(t)
 
     def test_surrogates(self):
@@ -83,7 +63,7 @@
         wlog.start("Surrogate character")
 
         for n in range(0xd800, 0xe000):
-            t = _chr(n)
+            t = chr(n)
             s = self.unidecode(t)
 
             # Check that surrogate characters translate to nothing.
@@ -94,7 +74,7 @@
 
     def test_space(self):
         for n in range(0x80, 0x10000):
-            t = _chr(n)
+            t = chr(n)
             if t.isspace():
                 s = self.unidecode(t)
                 self.assertTrue((s == '') or s.isspace(),
@@ -105,19 +85,16 @@
     def test_surrogate_pairs(self):
         # same character, written as a non-BMP character and a
         # surrogate pair
-        s = u'\U0001d4e3'
+        s = '\U0001d4e3'
 
         # Note: this needs to be constructed at run-time, otherwise
         # a "wide" Python seems to optimize it automatically into a
         # single character.
-        s_sp_1 = u'\ud835'
-        s_sp_2 = u'\udce3'
+        s_sp_1 = '\ud835'
+        s_sp_2 = '\udce3'
         s_sp = s_sp_1 + s_sp_2
 
-        if sys.version_info < (3,4):
-            self.assertEqual(s.encode('utf16'), s_sp.encode('utf16'))
-        else:
-            self.assertEqual(s.encode('utf16'), s_sp.encode('utf16', 
errors='surrogatepass'))
+        self.assertEqual(s.encode('utf16'), s_sp.encode('utf16', 
errors='surrogatepass'))
 
         wlog = WarningLogger()
         wlog.start("Surrogate character")
@@ -136,7 +113,7 @@
         # 1 sequence of a-z
         for n in range(0, 26):
             a = chr(ord('a') + n)
-            b = self.unidecode(_chr(0x24d0 + n))
+            b = self.unidecode(chr(0x24d0 + n))
 
             self.assertEqual(b, a)
 
@@ -151,7 +128,7 @@
                 a = chr(ord('A') + n % 26)
             else:
                 a = chr(ord('a') + n % 26)
-            b = self.unidecode(_chr(n))
+            b = self.unidecode(chr(n))
 
             if not b:
                 empty += 1
@@ -165,56 +142,56 @@
         # 5 consecutive sequences of 0-9
         for n in range(0x1d7ce, 0x1d800):
             a = chr(ord('0') + (n-0x1d7ce) % 10)
-            b = self.unidecode(_chr(n))
+            b = self.unidecode(chr(n))
 
             self.assertEqual(b, a)
 
     def test_specific(self):
 
         TESTS = [
-                (u'Hello, World!',
+                ('Hello, World!',
                 "Hello, World!"),
 
-                (u'\'"\r\n',
+                ('\'"\r\n',
                  "'\"\r\n"),
 
-                (u'ČŽŠčžš',
+                ('ČŽŠčžš',
                  "CZSczs"),
 
-                (u'ア',
+                ('ア',
                  "a"),
 
-                (u'α',
+                ('α',
                 "a"),
 
-                (u'а',
+                ('а',
                 "a"),
 
-                (u'ch\u00e2teau',
+                ('ch\u00e2teau',
                 "chateau"),
 
-                (u'vi\u00f1edos',
+                ('vi\u00f1edos',
                 "vinedos"),
 
-                (u'\u5317\u4EB0',
+                ('\u5317\u4EB0',
                 "Bei Jing "),
 
-                (u'Efﬁcient',
+                ('Efﬁcient',
                 "Efficient"),
 
                 # 
https://github.com/iki/unidecode/commit/4a1d4e0a7b5a11796dc701099556876e7a520065
-                (u'příliš žluťoučký kůň pěl ďábelské ódy',
+                ('příliš žluťoučký kůň pěl ďábelské ódy',
                 'prilis zlutoucky kun pel dabelske ody'),
 
-                (u'PŘÍLIŠ ŽLUŤOUČKÝ KŮŇ PĚL ĎÁBELSKÉ ÓDY',
+                ('PŘÍLIŠ ŽLUŤOUČKÝ KŮŇ PĚL ĎÁBELSKÉ ÓDY',
                 'PRILIS ZLUTOUCKY KUN PEL DABELSKE ODY'),
 
                 # Table that doesn't exist
-                (u'\ua500',
+                ('\ua500',
                 ''),
 
                 # Table that has less than 256 entries
-                (u'\u1eff',
+                ('\u1eff',
                 ''),
             ]
 
@@ -228,14 +205,14 @@
 
         TESTS = [
                 # Non-BMP character
-                (u'\U0001d5a0',
+                ('\U0001d5a0',
                 'A'),
 
                 # Mathematical
-                (u'\U0001d5c4\U0001d5c6/\U0001d5c1',
+                ('\U0001d5c4\U0001d5c6/\U0001d5c1',
                 'km/h'),
 
-                (u'\u2124\U0001d552\U0001d55c\U0001d552\U0001d55b 
\U0001d526\U0001d52a\U0001d51e 
\U0001d4e4\U0001d4f7\U0001d4f2\U0001d4ec\U0001d4f8\U0001d4ed\U0001d4ee 
\U0001d4c8\U0001d4c5\u212f\U0001d4b8\U0001d4be\U0001d4bb\U0001d4be\U0001d4c0\U0001d4b6\U0001d4b8\U0001d4be\U0001d4bf\u212f
 \U0001d59f\U0001d586 
\U0001d631\U0001d62a\U0001d634\U0001d622\U0001d637\U0001d626?!',
+                ('\u2124\U0001d552\U0001d55c\U0001d552\U0001d55b 
\U0001d526\U0001d52a\U0001d51e 
\U0001d4e4\U0001d4f7\U0001d4f2\U0001d4ec\U0001d4f8\U0001d4ed\U0001d4ee 
\U0001d4c8\U0001d4c5\u212f\U0001d4b8\U0001d4be\U0001d4bb\U0001d4be\U0001d4c0\U0001d4b6\U0001d4b8\U0001d4be\U0001d4bf\u212f
 \U0001d59f\U0001d586 
\U0001d631\U0001d62a\U0001d634\U0001d622\U0001d637\U0001d626?!',
                 'Zakaj ima Unicode specifikacije za pisave?!'),
             ]
 
@@ -444,10 +421,7 @@
         }
 
         for utf8_input, correct_output in wp_remove_accents.items():
-            if sys.version_info[0] >= 3:
-                inp = bytes(utf8_input).decode('utf8')
-            else:
-                inp = ''.join(map(chr, utf8_input)).decode('utf8')
+            inp = bytes(utf8_input).decode('utf8')
 
             output = self.unidecode(inp)
 
@@ -458,17 +432,17 @@
         # Examples from http://www.panix.com/~eli/unicode/convert.cgi
         lower = [
             # Fullwidth
-            u'\uff54\uff48\uff45 \uff51\uff55\uff49\uff43\uff4b 
\uff42\uff52\uff4f\uff57\uff4e \uff46\uff4f\uff58 
\uff4a\uff55\uff4d\uff50\uff53 \uff4f\uff56\uff45\uff52 \uff54\uff48\uff45 
\uff4c\uff41\uff5a\uff59 \uff44\uff4f\uff47 
\uff11\uff12\uff13\uff14\uff15\uff16\uff17\uff18\uff19\uff10',
+            '\uff54\uff48\uff45 \uff51\uff55\uff49\uff43\uff4b 
\uff42\uff52\uff4f\uff57\uff4e \uff46\uff4f\uff58 
\uff4a\uff55\uff4d\uff50\uff53 \uff4f\uff56\uff45\uff52 \uff54\uff48\uff45 
\uff4c\uff41\uff5a\uff59 \uff44\uff4f\uff47 
\uff11\uff12\uff13\uff14\uff15\uff16\uff17\uff18\uff19\uff10',
             # Double-struck
-            u'\U0001d565\U0001d559\U0001d556 
\U0001d562\U0001d566\U0001d55a\U0001d554\U0001d55c 
\U0001d553\U0001d563\U0001d560\U0001d568\U0001d55f 
\U0001d557\U0001d560\U0001d569 
\U0001d55b\U0001d566\U0001d55e\U0001d561\U0001d564 
\U0001d560\U0001d567\U0001d556\U0001d563 \U0001d565\U0001d559\U0001d556 
\U0001d55d\U0001d552\U0001d56b\U0001d56a \U0001d555\U0001d560\U0001d558 
\U0001d7d9\U0001d7da\U0001d7db\U0001d7dc\U0001d7dd\U0001d7de\U0001d7df\U0001d7e0\U0001d7e1\U0001d7d8',
+            '\U0001d565\U0001d559\U0001d556 
\U0001d562\U0001d566\U0001d55a\U0001d554\U0001d55c 
\U0001d553\U0001d563\U0001d560\U0001d568\U0001d55f 
\U0001d557\U0001d560\U0001d569 
\U0001d55b\U0001d566\U0001d55e\U0001d561\U0001d564 
\U0001d560\U0001d567\U0001d556\U0001d563 \U0001d565\U0001d559\U0001d556 
\U0001d55d\U0001d552\U0001d56b\U0001d56a \U0001d555\U0001d560\U0001d558 
\U0001d7d9\U0001d7da\U0001d7db\U0001d7dc\U0001d7dd\U0001d7de\U0001d7df\U0001d7e0\U0001d7e1\U0001d7d8',
             # Bold
-            u'\U0001d42d\U0001d421\U0001d41e 
\U0001d42a\U0001d42e\U0001d422\U0001d41c\U0001d424 
\U0001d41b\U0001d42b\U0001d428\U0001d430\U0001d427 
\U0001d41f\U0001d428\U0001d431 
\U0001d423\U0001d42e\U0001d426\U0001d429\U0001d42c 
\U0001d428\U0001d42f\U0001d41e\U0001d42b \U0001d42d\U0001d421\U0001d41e 
\U0001d425\U0001d41a\U0001d433\U0001d432 \U0001d41d\U0001d428\U0001d420 
\U0001d7cf\U0001d7d0\U0001d7d1\U0001d7d2\U0001d7d3\U0001d7d4\U0001d7d5\U0001d7d6\U0001d7d7\U0001d7ce',
+            '\U0001d42d\U0001d421\U0001d41e 
\U0001d42a\U0001d42e\U0001d422\U0001d41c\U0001d424 
\U0001d41b\U0001d42b\U0001d428\U0001d430\U0001d427 
\U0001d41f\U0001d428\U0001d431 
\U0001d423\U0001d42e\U0001d426\U0001d429\U0001d42c 
\U0001d428\U0001d42f\U0001d41e\U0001d42b \U0001d42d\U0001d421\U0001d41e 
\U0001d425\U0001d41a\U0001d433\U0001d432 \U0001d41d\U0001d428\U0001d420 
\U0001d7cf\U0001d7d0\U0001d7d1\U0001d7d2\U0001d7d3\U0001d7d4\U0001d7d5\U0001d7d6\U0001d7d7\U0001d7ce',
             # Bold italic
-            u'\U0001d495\U0001d489\U0001d486 
\U0001d492\U0001d496\U0001d48a\U0001d484\U0001d48c 
\U0001d483\U0001d493\U0001d490\U0001d498\U0001d48f 
\U0001d487\U0001d490\U0001d499 
\U0001d48b\U0001d496\U0001d48e\U0001d491\U0001d494 
\U0001d490\U0001d497\U0001d486\U0001d493 \U0001d495\U0001d489\U0001d486 
\U0001d48d\U0001d482\U0001d49b\U0001d49a \U0001d485\U0001d490\U0001d488 
1234567890',
+            '\U0001d495\U0001d489\U0001d486 
\U0001d492\U0001d496\U0001d48a\U0001d484\U0001d48c 
\U0001d483\U0001d493\U0001d490\U0001d498\U0001d48f 
\U0001d487\U0001d490\U0001d499 
\U0001d48b\U0001d496\U0001d48e\U0001d491\U0001d494 
\U0001d490\U0001d497\U0001d486\U0001d493 \U0001d495\U0001d489\U0001d486 
\U0001d48d\U0001d482\U0001d49b\U0001d49a \U0001d485\U0001d490\U0001d488 
1234567890',
             # Bold script
-            u'\U0001d4fd\U0001d4f1\U0001d4ee 
\U0001d4fa\U0001d4fe\U0001d4f2\U0001d4ec\U0001d4f4 
\U0001d4eb\U0001d4fb\U0001d4f8\U0001d500\U0001d4f7 
\U0001d4ef\U0001d4f8\U0001d501 
\U0001d4f3\U0001d4fe\U0001d4f6\U0001d4f9\U0001d4fc 
\U0001d4f8\U0001d4ff\U0001d4ee\U0001d4fb \U0001d4fd\U0001d4f1\U0001d4ee 
\U0001d4f5\U0001d4ea\U0001d503\U0001d502 \U0001d4ed\U0001d4f8\U0001d4f0 
1234567890',
+            '\U0001d4fd\U0001d4f1\U0001d4ee 
\U0001d4fa\U0001d4fe\U0001d4f2\U0001d4ec\U0001d4f4 
\U0001d4eb\U0001d4fb\U0001d4f8\U0001d500\U0001d4f7 
\U0001d4ef\U0001d4f8\U0001d501 
\U0001d4f3\U0001d4fe\U0001d4f6\U0001d4f9\U0001d4fc 
\U0001d4f8\U0001d4ff\U0001d4ee\U0001d4fb \U0001d4fd\U0001d4f1\U0001d4ee 
\U0001d4f5\U0001d4ea\U0001d503\U0001d502 \U0001d4ed\U0001d4f8\U0001d4f0 
1234567890',
             # Fraktur
-            u'\U0001d599\U0001d58d\U0001d58a 
\U0001d596\U0001d59a\U0001d58e\U0001d588\U0001d590 
\U0001d587\U0001d597\U0001d594\U0001d59c\U0001d593 
\U0001d58b\U0001d594\U0001d59d 
\U0001d58f\U0001d59a\U0001d592\U0001d595\U0001d598 
\U0001d594\U0001d59b\U0001d58a\U0001d597 \U0001d599\U0001d58d\U0001d58a 
\U0001d591\U0001d586\U0001d59f\U0001d59e \U0001d589\U0001d594\U0001d58c 
1234567890',
+            '\U0001d599\U0001d58d\U0001d58a 
\U0001d596\U0001d59a\U0001d58e\U0001d588\U0001d590 
\U0001d587\U0001d597\U0001d594\U0001d59c\U0001d593 
\U0001d58b\U0001d594\U0001d59d 
\U0001d58f\U0001d59a\U0001d592\U0001d595\U0001d598 
\U0001d594\U0001d59b\U0001d58a\U0001d597 \U0001d599\U0001d58d\U0001d58a 
\U0001d591\U0001d586\U0001d59f\U0001d59e \U0001d589\U0001d594\U0001d58c 
1234567890',
         ]
 
         for s in lower:
@@ -478,17 +452,17 @@
 
         upper = [
             # Fullwidth
-            u'\uff34\uff28\uff25 \uff31\uff35\uff29\uff23\uff2b 
\uff22\uff32\uff2f\uff37\uff2e \uff26\uff2f\uff38 
\uff2a\uff35\uff2d\uff30\uff33 \uff2f\uff36\uff25\uff32 \uff34\uff28\uff25 
\uff2c\uff21\uff3a\uff39 \uff24\uff2f\uff27 
\uff11\uff12\uff13\uff14\uff15\uff16\uff17\uff18\uff19\uff10',
+            '\uff34\uff28\uff25 \uff31\uff35\uff29\uff23\uff2b 
\uff22\uff32\uff2f\uff37\uff2e \uff26\uff2f\uff38 
\uff2a\uff35\uff2d\uff30\uff33 \uff2f\uff36\uff25\uff32 \uff34\uff28\uff25 
\uff2c\uff21\uff3a\uff39 \uff24\uff2f\uff27 
\uff11\uff12\uff13\uff14\uff15\uff16\uff17\uff18\uff19\uff10',
             # Double-struck
-            u'\U0001d54b\u210d\U0001d53c 
\u211a\U0001d54c\U0001d540\u2102\U0001d542 
\U0001d539\u211d\U0001d546\U0001d54e\u2115 \U0001d53d\U0001d546\U0001d54f 
\U0001d541\U0001d54c\U0001d544\u2119\U0001d54a 
\U0001d546\U0001d54d\U0001d53c\u211d \U0001d54b\u210d\U0001d53c 
\U0001d543\U0001d538\u2124\U0001d550 \U0001d53b\U0001d546\U0001d53e 
\U0001d7d9\U0001d7da\U0001d7db\U0001d7dc\U0001d7dd\U0001d7de\U0001d7df\U0001d7e0\U0001d7e1\U0001d7d8',
+            '\U0001d54b\u210d\U0001d53c 
\u211a\U0001d54c\U0001d540\u2102\U0001d542 
\U0001d539\u211d\U0001d546\U0001d54e\u2115 \U0001d53d\U0001d546\U0001d54f 
\U0001d541\U0001d54c\U0001d544\u2119\U0001d54a 
\U0001d546\U0001d54d\U0001d53c\u211d \U0001d54b\u210d\U0001d53c 
\U0001d543\U0001d538\u2124\U0001d550 \U0001d53b\U0001d546\U0001d53e 
\U0001d7d9\U0001d7da\U0001d7db\U0001d7dc\U0001d7dd\U0001d7de\U0001d7df\U0001d7e0\U0001d7e1\U0001d7d8',
             # Bold
-            u'\U0001d413\U0001d407\U0001d404 
\U0001d410\U0001d414\U0001d408\U0001d402\U0001d40a 
\U0001d401\U0001d411\U0001d40e\U0001d416\U0001d40d 
\U0001d405\U0001d40e\U0001d417 
\U0001d409\U0001d414\U0001d40c\U0001d40f\U0001d412 
\U0001d40e\U0001d415\U0001d404\U0001d411 \U0001d413\U0001d407\U0001d404 
\U0001d40b\U0001d400\U0001d419\U0001d418 \U0001d403\U0001d40e\U0001d406 
\U0001d7cf\U0001d7d0\U0001d7d1\U0001d7d2\U0001d7d3\U0001d7d4\U0001d7d5\U0001d7d6\U0001d7d7\U0001d7ce',
+            '\U0001d413\U0001d407\U0001d404 
\U0001d410\U0001d414\U0001d408\U0001d402\U0001d40a 
\U0001d401\U0001d411\U0001d40e\U0001d416\U0001d40d 
\U0001d405\U0001d40e\U0001d417 
\U0001d409\U0001d414\U0001d40c\U0001d40f\U0001d412 
\U0001d40e\U0001d415\U0001d404\U0001d411 \U0001d413\U0001d407\U0001d404 
\U0001d40b\U0001d400\U0001d419\U0001d418 \U0001d403\U0001d40e\U0001d406 
\U0001d7cf\U0001d7d0\U0001d7d1\U0001d7d2\U0001d7d3\U0001d7d4\U0001d7d5\U0001d7d6\U0001d7d7\U0001d7ce',
             # Bold italic
-            u'\U0001d47b\U0001d46f\U0001d46c 
\U0001d478\U0001d47c\U0001d470\U0001d46a\U0001d472 
\U0001d469\U0001d479\U0001d476\U0001d47e\U0001d475 
\U0001d46d\U0001d476\U0001d47f 
\U0001d471\U0001d47c\U0001d474\U0001d477\U0001d47a 
\U0001d476\U0001d47d\U0001d46c\U0001d479 \U0001d47b\U0001d46f\U0001d46c 
\U0001d473\U0001d468\U0001d481\U0001d480 \U0001d46b\U0001d476\U0001d46e 
1234567890',
+            '\U0001d47b\U0001d46f\U0001d46c 
\U0001d478\U0001d47c\U0001d470\U0001d46a\U0001d472 
\U0001d469\U0001d479\U0001d476\U0001d47e\U0001d475 
\U0001d46d\U0001d476\U0001d47f 
\U0001d471\U0001d47c\U0001d474\U0001d477\U0001d47a 
\U0001d476\U0001d47d\U0001d46c\U0001d479 \U0001d47b\U0001d46f\U0001d46c 
\U0001d473\U0001d468\U0001d481\U0001d480 \U0001d46b\U0001d476\U0001d46e 
1234567890',
             # Bold script
-            u'\U0001d4e3\U0001d4d7\U0001d4d4 
\U0001d4e0\U0001d4e4\U0001d4d8\U0001d4d2\U0001d4da 
\U0001d4d1\U0001d4e1\U0001d4de\U0001d4e6\U0001d4dd 
\U0001d4d5\U0001d4de\U0001d4e7 
\U0001d4d9\U0001d4e4\U0001d4dc\U0001d4df\U0001d4e2 
\U0001d4de\U0001d4e5\U0001d4d4\U0001d4e1 \U0001d4e3\U0001d4d7\U0001d4d4 
\U0001d4db\U0001d4d0\U0001d4e9\U0001d4e8 \U0001d4d3\U0001d4de\U0001d4d6 
1234567890',
+            '\U0001d4e3\U0001d4d7\U0001d4d4 
\U0001d4e0\U0001d4e4\U0001d4d8\U0001d4d2\U0001d4da 
\U0001d4d1\U0001d4e1\U0001d4de\U0001d4e6\U0001d4dd 
\U0001d4d5\U0001d4de\U0001d4e7 
\U0001d4d9\U0001d4e4\U0001d4dc\U0001d4df\U0001d4e2 
\U0001d4de\U0001d4e5\U0001d4d4\U0001d4e1 \U0001d4e3\U0001d4d7\U0001d4d4 
\U0001d4db\U0001d4d0\U0001d4e9\U0001d4e8 \U0001d4d3\U0001d4de\U0001d4d6 
1234567890',
             # Fraktur
-            u'\U0001d57f\U0001d573\U0001d570 
\U0001d57c\U0001d580\U0001d574\U0001d56e\U0001d576 
\U0001d56d\U0001d57d\U0001d57a\U0001d582\U0001d579 
\U0001d571\U0001d57a\U0001d583 
\U0001d575\U0001d580\U0001d578\U0001d57b\U0001d57e 
\U0001d57a\U0001d581\U0001d570\U0001d57d \U0001d57f\U0001d573\U0001d570 
\U0001d577\U0001d56c\U0001d585\U0001d584 \U0001d56f\U0001d57a\U0001d572 
1234567890',
+            '\U0001d57f\U0001d573\U0001d570 
\U0001d57c\U0001d580\U0001d574\U0001d56e\U0001d576 
\U0001d56d\U0001d57d\U0001d57a\U0001d582\U0001d579 
\U0001d571\U0001d57a\U0001d583 
\U0001d575\U0001d580\U0001d578\U0001d57b\U0001d57e 
\U0001d57a\U0001d581\U0001d570\U0001d57d \U0001d57f\U0001d573\U0001d570 
\U0001d577\U0001d56c\U0001d585\U0001d584 \U0001d56f\U0001d57a\U0001d572 
1234567890',
         ]
 
         for s in upper:
@@ -499,40 +473,39 @@
     def test_enclosed_alphanumerics(self):
         self.assertEqual(
             'aA20(20)20.20100',
-            self.unidecode(u'ⓐⒶ⑳⒇⒛⓴⓾⓿'),
+            self.unidecode('ⓐⒶ⑳⒇⒛⓴⓾⓿'),
         )
 
     @unittest.skipIf(sys.maxunicode < 0x10000, "narrow build")
     def test_errors_ignore(self):
         # unidecode doesn't have replacements for private use characters
-        o = self.unidecode(u"test \U000f0000 test", errors='ignore')
+        o = self.unidecode("test \U000f0000 test", errors='ignore')
         self.assertEqual('test  test', o)
 
     @unittest.skipIf(sys.maxunicode < 0x10000, "narrow build")
     def test_errors_replace(self):
-        o = self.unidecode(u"test \U000f0000 test", errors='replace')
+        o = self.unidecode("test \U000f0000 test", errors='replace')
         self.assertEqual('test ? test', o)
 
     @unittest.skipIf(sys.maxunicode < 0x10000, "narrow build")
     def test_errors_replace_str(self):
-        o = self.unidecode(u"test \U000f0000 test", errors='replace', 
replace_str='[?] ')
+        o = self.unidecode("test \U000f0000 test", errors='replace', 
replace_str='[?] ')
         self.assertEqual('test [?]  test', o)
 
     @unittest.skipIf(sys.maxunicode < 0x10000, "narrow build")
     def test_errors_strict(self):
         with self.assertRaises(UnidecodeError) as e:
-            o = self.unidecode(u"test \U000f0000 test", errors='strict')
+            o = self.unidecode("test \U000f0000 test", errors='strict')
 
         self.assertEqual(5, e.exception.index)
 
         # This checks that the exception is not chained (i.e. you don't get the
         # "During handling of the above exception, another exception occurred")
-        if sys.version_info[0] >= 3:
-            self.assertIsNone(e.exception.__context__)
+        self.assertIsNone(e.exception.__context__)
 
     @unittest.skipIf(sys.maxunicode < 0x10000, "narrow build")
     def test_errors_preserve(self):
-        s = u"test \U000f0000 test"
+        s = "test \U000f0000 test"
         o = self.unidecode(s, errors='preserve')
 
         self.assertEqual(s, o)
@@ -540,12 +513,11 @@
     @unittest.skipIf(sys.maxunicode < 0x10000, "narrow build")
     def test_errors_invalid(self):
         with self.assertRaises(UnidecodeError) as e:
-            self.unidecode(u"test \U000f0000 test", errors='invalid')
+            self.unidecode("test \U000f0000 test", errors='invalid')
 
         # This checks that the exception is not chained (i.e. you don't get the
         # "During handling of the above exception, another exception occurred")
-        if sys.version_info[0] >= 3:
-            self.assertIsNone(e.exception.__context__)
+        self.assertIsNone(e.exception.__context__)
 
 class TestUnidecode(BaseTestUnidecode, unittest.TestCase):
     unidecode = staticmethod(unidecode)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/Unidecode-1.2.0/tests/test_utility.py 
new/Unidecode-1.3.1/tests/test_utility.py
--- old/Unidecode-1.2.0/tests/test_utility.py   2020-12-20 12:43:16.000000000 
+0100
+++ new/Unidecode-1.3.1/tests/test_utility.py   2021-02-05 14:21:36.000000000 
+0100
@@ -10,10 +10,6 @@
 
 here = os.path.dirname(__file__)
 
-# Python 2.7 does not have assertRegex
-if not hasattr(unittest.TestCase, 'assertRegex'):
-    unittest.TestCase.assertRegex = lambda self, text, exp: 
self.assertTrue(re.search(exp, text))
-
 def get_cmd():
     sys_path = os.path.join(here, "..")
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/Unidecode-1.2.0/tox.ini new/Unidecode-1.3.1/tox.ini
--- old/Unidecode-1.2.0/tox.ini 1970-01-01 01:00:00.000000000 +0100
+++ new/Unidecode-1.3.1/tox.ini 2021-04-04 11:37:23.000000000 +0200
@@ -0,0 +1,9 @@
+[tox]
+envlist = py{35,36,37,38,39,py3}
+
+[testenv]
+deps =
+       pytest
+       pytest-cov
+       pytest-mypy
+commands = pytest --mypy --cov=unidecode tests
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/Unidecode-1.2.0/unidecode/__init__.py 
new/Unidecode-1.3.1/unidecode/__init__.py
--- old/Unidecode-1.2.0/unidecode/__init__.py   2021-02-04 08:29:23.000000000 
+0100
+++ new/Unidecode-1.3.1/unidecode/__init__.py   2021-02-05 14:21:36.000000000 
+0100
@@ -3,24 +3,26 @@
 """Transliterate Unicode text into plain 7-bit ASCII.
 
 Example usage:
+
 >>> from unidecode import unidecode
->>> unidecode(u"\u5317\u4EB0")
+>>> unidecode("\u5317\u4EB0")
 "Bei Jing "
 
 The transliteration uses a straightforward map, and doesn't have alternatives
 for the same character based on language, position, or anything else.
 
-In Python 3, a standard string object will be returned. If you need bytes, use:
+A standard string object will be returned. If you need bytes, use:
+
 >>> unidecode("????????????").encode("ascii")
 b'Knosos'
 """
 import warnings
-from sys import version_info
+from typing import Dict, Optional, Sequence
 
-Cache = {}
+Cache = {} # type: Dict[int, Optional[Sequence[Optional[str]]]]
 
 class UnidecodeError(ValueError):
-    def __init__(self, message, index=None):
+    def __init__(self, message: str, index: Optional[int] = None) -> None:
         """Raised for Unidecode-related errors.
 
         The index attribute contains the index of the character that caused
@@ -29,18 +31,11 @@
         super(UnidecodeError, self).__init__(message)
         self.index = index
 
-def _warn_if_not_unicode(string):
-    if version_info[0] < 3 and not isinstance(string, unicode):
-        warnings.warn(  "Argument %r is not an unicode object. "
-                        "Passing an encoded string will likely have "
-                        "unexpected results." % (type(string),),
-                        RuntimeWarning, 2)
-
 
-def unidecode_expect_ascii(string, errors='ignore', replace_str='?'):
+def unidecode_expect_ascii(string: str, errors: str = 'ignore', replace_str: 
str = '?') -> str:
     """Transliterate an Unicode object into an ASCII string
 
-    >>> unidecode(u"\u5317\u4EB0")
+    >>> unidecode("\u5317\u4EB0")
     "Bei Jing "
 
     This function first tries to convert the string using ASCII codec.
@@ -61,34 +56,29 @@
     ASCII!
     """
 
-    _warn_if_not_unicode(string)
     try:
         bytestring = string.encode('ASCII')
     except UnicodeEncodeError:
         pass
     else:
-        if version_info[0] >= 3:
-            return string
-        else:
-            return bytestring
+        return string
 
     return _unidecode(string, errors, replace_str)
 
-def unidecode_expect_nonascii(string, errors='ignore', replace_str='?'):
+def unidecode_expect_nonascii(string: str, errors: str = 'ignore', 
replace_str: str = '?') -> str:
     """Transliterate an Unicode object into an ASCII string
 
-    >>> unidecode(u"\u5317\u4EB0")
+    >>> unidecode("\u5317\u4EB0")
     "Bei Jing "
 
     See unidecode_expect_ascii.
     """
 
-    _warn_if_not_unicode(string)
     return _unidecode(string, errors, replace_str)
 
 unidecode = unidecode_expect_ascii
 
-def _get_repl_str(char):
+def _get_repl_str(char: str) -> Optional[str]:
     codepoint = ord(char)
 
     if codepoint < 0x80:
@@ -124,7 +114,7 @@
     else:
         return None
 
-def _unidecode(string, errors, replace_str):
+def _unidecode(string: str, errors: str, replace_str:str) -> str:
     retval = []
 
     for index, char in enumerate(string):
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/Unidecode-1.2.0/unidecode/__init__.pyi 
new/Unidecode-1.3.1/unidecode/__init__.pyi
--- old/Unidecode-1.2.0/unidecode/__init__.pyi  2021-02-04 08:29:31.000000000 
+0100
+++ new/Unidecode-1.3.1/unidecode/__init__.pyi  1970-01-01 01:00:00.000000000 
+0100
@@ -1,11 +0,0 @@
-from typing import Any, Dict, Optional, Sequence
-
-Cache: Dict[int, Optional[Sequence[Optional[str]]]]
-
-class UnidecodeError(ValueError):
-    index: Optional[int] = ...
-    def __init__(self, message: str, index: Optional[int] = ...) -> None: ...
-
-def unidecode_expect_ascii(string: str, errors: str = ..., replace_str: str = 
...) -> str: ...
-def unidecode_expect_nonascii(string: str, errors: str = ..., replace_str: str 
= ...) -> str: ...
-unidecode = unidecode_expect_ascii
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/Unidecode-1.2.0/unidecode/util.py 
new/Unidecode-1.3.1/unidecode/util.py
--- old/Unidecode-1.2.0/unidecode/util.py       2020-12-20 12:43:16.000000000 
+0100
+++ new/Unidecode-1.3.1/unidecode/util.py       2021-02-05 14:21:36.000000000 
+0100
@@ -1,5 +1,4 @@
 # vim:ts=4 sw=4 expandtab softtabstop=4
-from __future__ import print_function
 import argparse
 import locale
 import os
@@ -7,8 +6,6 @@
 
 from unidecode import unidecode
 
-PY3 = sys.version_info[0] >= 3
-
 def fatal(msg):
     sys.stderr.write(msg + "\n")
     sys.exit(1)
@@ -36,19 +33,13 @@
             with open(args.path, 'rb') as f:
                 stream = f.read()
     elif args.text:
-        if PY3:
-            stream = os.fsencode(args.text)
-        else:
-            stream = args.text
+        stream = os.fsencode(args.text)
         # add a newline to the string if it comes from the
         # command line so that the result is printed nicely
         # on the console.
         stream += b'\n'
     else:
-        if PY3:
-            stream = sys.stdin.buffer.read()
-        else:
-            stream = sys.stdin.read()
+        stream = sys.stdin.buffer.read()
 
     try:
         stream = stream.decode(encoding)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/Unidecode-1.2.0/unidecode/x005.py 
new/Unidecode-1.3.1/unidecode/x005.py
--- old/Unidecode-1.2.0/unidecode/x005.py       2020-12-20 12:43:16.000000000 
+0100
+++ new/Unidecode-1.3.1/unidecode/x005.py       2021-09-06 17:00:02.000000000 
+0200
@@ -175,7 +175,7 @@
 '',    # 0xad
 '',    # 0xae
 '',    # 0xaf
-'@',    # 0xb0
+'',    # 0xb0
 'e',    # 0xb1
 'a',    # 0xb2
 'o',    # 0xb3
@@ -187,14 +187,14 @@
 'o',    # 0xb9
 'o',    # 0xba
 'u',    # 0xbb
-'\'',    # 0xbc
+'',    # 0xbc
 '',    # 0xbd
 '-',    # 0xbe
-'-',    # 0xbf
+'',    # 0xbf
 '|',    # 0xc0
 '',    # 0xc1
 '',    # 0xc2
-':',    # 0xc3
+'.',    # 0xc3
 '',    # 0xc4
 '',    # 0xc5
 'n',    # 0xc6
@@ -214,11 +214,11 @@
 'h',    # 0xd4
 'v',    # 0xd5
 'z',    # 0xd6
-'KH',    # 0xd7
-'t',    # 0xd8
+'H',    # 0xd7
+'T',    # 0xd8
 'y',    # 0xd9
-'k',    # 0xda
-'k',    # 0xdb
+'KH',    # 0xda
+'KH',    # 0xdb
 'l',    # 0xdc
 'm',    # 0xdd
 'm',    # 0xde
@@ -230,7 +230,7 @@
 'p',    # 0xe4
 'TS',    # 0xe5
 'TS',    # 0xe6
-'q',    # 0xe7
+'k',    # 0xe7
 'r',    # 0xe8
 'SH',    # 0xe9
 't',    # 0xea
@@ -238,15 +238,15 @@
 None,    # 0xec
 None,    # 0xed
 None,    # 0xee
-None,    # 0xef
+'YYY',    # 0xef
 'V',    # 0xf0
 'OY',    # 0xf1
-'i',    # 0xf2
+'EY',    # 0xf2
 '\'',    # 0xf3
 '"',    # 0xf4
-'v',    # 0xf5
-'n',    # 0xf6
-'q',    # 0xf7
+None,    # 0xf5
+None,    # 0xf6
+None,    # 0xf7
 None,    # 0xf8
 None,    # 0xf9
 None,    # 0xfa
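
The x005.py table changes above implement the Hebrew transliteration improvements credited in the changelog. A small sketch of the visible effect, assuming Unidecode 1.3.x is installed; the expected strings follow directly from the table entries changed above:

    from unidecode import unidecode

    unidecode("\u05d7")   # HEBREW LETTER HET: now 'H'  (was 'KH' in 1.2.0)
    unidecode("\u05db")   # HEBREW LETTER KAF: now 'KH' (was 'k' in 1.2.0)
    unidecode("\u05e7")   # HEBREW LETTER QOF: now 'k'  (was 'q' in 1.2.0)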
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/Unidecode-1.2.0/unidecode/x0fb.py 
new/Unidecode-1.3.1/unidecode/x0fb.py
--- old/Unidecode-1.2.0/unidecode/x0fb.py       2020-12-20 12:43:16.000000000 
+0100
+++ new/Unidecode-1.3.1/unidecode/x0fb.py       2021-09-06 17:00:02.000000000 
+0200
@@ -28,26 +28,26 @@
 None,    # 0x1a
 None,    # 0x1b
 None,    # 0x1c
-'yi',    # 0x1d
+'i',    # 0x1d
 '',    # 0x1e
-'ay',    # 0x1f
+'AY',    # 0x1f
 '`',    # 0x20
-'',    # 0x21
+'A',    # 0x21
 'd',    # 0x22
 'h',    # 0x23
-'k',    # 0x24
+'KH',    # 0x24
 'l',    # 0x25
 'm',    # 0x26
-'m',    # 0x27
+'r',    # 0x27
 't',    # 0x28
 '+',    # 0x29
-'sh',    # 0x2a
-'s',    # 0x2b
-'sh',    # 0x2c
-'s',    # 0x2d
+'SH',    # 0x2a
+'S',    # 0x2b
+'SH',    # 0x2c
+'S',    # 0x2d
 'a',    # 0x2e
 'a',    # 0x2f
-'',    # 0x30
+'A',    # 0x30
 'b',    # 0x31
 'g',    # 0x32
 'd',    # 0x33
@@ -57,28 +57,28 @@
 None,    # 0x37
 't',    # 0x38
 'y',    # 0x39
-'k',    # 0x3a
-'k',    # 0x3b
+'KH',    # 0x3a
+'KH',    # 0x3b
 'l',    # 0x3c
 None,    # 0x3d
-'l',    # 0x3e
+'m',    # 0x3e
 None,    # 0x3f
 'n',    # 0x40
-'n',    # 0x41
+'s',    # 0x41
 None,    # 0x42
 'p',    # 0x43
 'p',    # 0x44
 None,    # 0x45
-'ts',    # 0x46
-'ts',    # 0x47
+'TS',    # 0x46
+'k',    # 0x47
 'r',    # 0x48
-'sh',    # 0x49
+'SH',    # 0x49
 't',    # 0x4a
-'vo',    # 0x4b
-'b',    # 0x4c
-'k',    # 0x4d
-'p',    # 0x4e
-'l',    # 0x4f
+'o',    # 0x4b
+'v',    # 0x4c
+'KH',    # 0x4d
+'f',    # 0x4e
+'EL',    # 0x4f
 '',    # 0x50
 '',    # 0x51
 '',    # 0x52
