Script 'mail_helper' called by obssrc

Hello community,

here is the log from the commit of package python-tldextract for openSUSE:Factory checked in at 2023-11-07 21:26:16
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-tldextract (Old)
 and      /work/SRC/openSUSE:Factory/.python-tldextract.new.17445 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "python-tldextract"

Tue Nov 7 21:26:16 2023 rev:21 rq:1123695 version:5.1.0

Changes:
--------
--- /work/SRC/openSUSE:Factory/python-tldextract/python-tldextract.changes  2023-10-23 23:40:45.999097212 +0200
+++ /work/SRC/openSUSE:Factory/.python-tldextract.new.17445/python-tldextract.changes  2023-11-07 21:26:56.708809799 +0100
@@ -1,0 +2,10 @@
+Mon Nov 6 23:25:32 UTC 2023 - Mia Herkt <m...@0x0.st>
+
+- Update to 5.1.0:
+Features:
+  * Allow passing in `requests.Session`
+    #gh/john-kurkowski/tldextract#311
+  * Add "-j, --json" option to support output in json format
+    #gh/john-kurkowski/tldextract#313
+
+-------------------------------------------------------------------

Old:
----
  tldextract-5.0.1.tar.gz

New:
----
  tldextract-5.1.0.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-tldextract.spec ++++++
--- /var/tmp/diff_new_pack.vK5GhP/_old  2023-11-07 21:26:57.192827623 +0100
+++ /var/tmp/diff_new_pack.vK5GhP/_new  2023-11-07 21:26:57.192827623 +0100
@@ -18,7 +18,7 @@
 %define oldpython python
 Name: python-tldextract
-Version: 5.0.1
+Version: 5.1.0
 Release: 0
 Summary: Python module to separate the TLD of a URL
 License: BSD-3-Clause

++++++ tldextract-5.0.1.tar.gz -> tldextract-5.1.0.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/tldextract-5.0.1/.travis.yml new/tldextract-5.1.0/.travis.yml
--- old/tldextract-5.0.1/.travis.yml  2023-10-11 10:28:34.000000000 +0200
+++ new/tldextract-5.1.0/.travis.yml  2023-10-28 20:47:14.000000000 +0200
@@ -17,5 +17,7 @@
 - env: TOXENV=lint
 - env: TOXENV=typecheck
   python: "3.10"
-install: pip install tox
+install:
+  - pip install --upgrade pip
+  - pip install --upgrade --editable '.[testing]'
 script: tox
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/tldextract-5.0.1/CHANGELOG.md new/tldextract-5.1.0/CHANGELOG.md
--- old/tldextract-5.0.1/CHANGELOG.md  2023-10-17 22:04:42.000000000 +0200
+++ new/tldextract-5.1.0/CHANGELOG.md  2023-11-06 07:09:30.000000000 +0100
@@ -3,6 +3,17 @@
 After upgrading, update your cache file by deleting it or via `tldextract --update`.
 
+## 5.1.0 (2023-11-05)
+
+* Features
+    * Allow passing in `requests.Session` ([#311](https://github.com/john-kurkowski/tldextract/issues/311))
+    * Add "-j, --json" option to support output in json format ([#313](https://github.com/john-kurkowski/tldextract/issues/313))
+* Docs
+    * Improve clarity of absolute path ([#312](https://github.com/john-kurkowski/tldextract/issues/312))
+* Misc.
+    * Extract all testing deps from tox.ini to pyproject.toml extras ([#310](https://github.com/john-kurkowski/tldextract/issues/310))
+    * Work around responses type union error, in tests
+
 ## 5.0.1 (2023-10-17)
 
 * Bugfixes
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/tldextract-5.0.1/PKG-INFO new/tldextract-5.1.0/PKG-INFO
--- old/tldextract-5.0.1/PKG-INFO  2023-10-17 22:05:39.746655700 +0200
+++ new/tldextract-5.1.0/PKG-INFO  2023-11-06 07:11:58.886822700 +0100
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: tldextract
-Version: 5.0.1
+Version: 5.1.0
 Summary: Accurately separates a URL's subdomain, domain, and public suffix, using the Public Suffix List (PSL). By default, this includes the public ICANN TLDs and their exceptions. You can optionally support the Public Suffix List's private domains as well.
 Author-email: John Kurkowski <john.kurkow...@gmail.com>
 License: BSD-3-Clause
@@ -21,6 +21,17 @@
 Requires-Dist: requests>=2.1.0
 Requires-Dist: requests-file>=1.4
 Requires-Dist: filelock>=3.0.8
+Provides-Extra: testing
+Requires-Dist: black; extra == "testing"
+Requires-Dist: mypy; extra == "testing"
+Requires-Dist: pytest; extra == "testing"
+Requires-Dist: pytest-gitignore; extra == "testing"
+Requires-Dist: pytest-mock; extra == "testing"
+Requires-Dist: responses; extra == "testing"
+Requires-Dist: ruff; extra == "testing"
+Requires-Dist: tox; extra == "testing"
+Requires-Dist: types-filelock; extra == "testing"
+Requires-Dist: types-requests; extra == "testing"
 
 # tldextract
 [](https://badge.fury.io/py/tldextract)
 [](https://app.travis-ci.com/github/john-kurkowski/tldextract)
@@ -210,7 +221,7 @@
 
 ```python
 extract = tldextract.TLDExtract(
-    suffix_list_urls=["file://absolute/path/to/your/local/suffix/list/file"],
+    suffix_list_urls=["file://" + "/absolute/path/to/your/local/suffix/list/file"],
     cache_dir='/path/to/your/cache/',
     fallback_to_snapshot=False)
 ```
@@ -271,7 +282,7 @@
 
 1. `git clone` this repository.
 2. Change into the new directory.
-3. `pip install tox`
+3. `pip install --upgrade --editable '.[testing]'`
 
 ### Running the test suite
 
@@ -293,6 +304,5 @@
 Automatically format all code:
 
 ```zsh
-pip install black
 black .
 ```
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/tldextract-5.0.1/README.md new/tldextract-5.1.0/README.md
--- old/tldextract-5.0.1/README.md  2023-10-11 10:30:22.000000000 +0200
+++ new/tldextract-5.1.0/README.md  2023-10-28 20:47:14.000000000 +0200
@@ -186,7 +186,7 @@
 
 ```python
 extract = tldextract.TLDExtract(
-    suffix_list_urls=["file://absolute/path/to/your/local/suffix/list/file"],
+    suffix_list_urls=["file://" + "/absolute/path/to/your/local/suffix/list/file"],
     cache_dir='/path/to/your/cache/',
     fallback_to_snapshot=False)
 ```
@@ -247,7 +247,7 @@
 
 1. `git clone` this repository.
 2. Change into the new directory.
-3. `pip install tox`
+3. `pip install --upgrade --editable '.[testing]'`
 
 ### Running the test suite
 
@@ -269,6 +269,5 @@
 Automatically format all code:
 
 ```zsh
-pip install black
 black .
 ```
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/tldextract-5.0.1/pyproject.toml new/tldextract-5.1.0/pyproject.toml
--- old/tldextract-5.0.1/pyproject.toml  2023-10-17 21:40:00.000000000 +0200
+++ new/tldextract-5.1.0/pyproject.toml  2023-11-05 01:24:19.000000000 +0100
@@ -29,14 +29,29 @@
     "Programming Language :: Python :: 3.11",
 ]
 requires-python = ">=3.8"
+dynamic = ["version"]
+readme = "README.md"
+
 dependencies = [
     "idna",
     "requests>=2.1.0",
     "requests-file>=1.4",
     "filelock>=3.0.8",
 ]
-dynamic = ["version"]
-readme = "README.md"
+
+[project.optional-dependencies]
+testing = [
+    "black",
+    "mypy",
+    "pytest",
+    "pytest-gitignore",
+    "pytest-mock",
+    "responses",
+    "ruff",
+    "tox",
+    "types-filelock",
+    "types-requests",
+]
 
 [project.urls]
 Homepage = "https://github.com/john-kurkowski/tldextract"
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/tldextract-5.0.1/tests/cli_test.py new/tldextract-5.1.0/tests/cli_test.py
--- old/tldextract-5.0.1/tests/cli_test.py  2023-10-17 21:54:37.000000000 +0200
+++ new/tldextract-5.1.0/tests/cli_test.py  2023-11-06 07:02:50.000000000 +0100
@@ -1,5 +1,6 @@
 """tldextract integration tests."""
 
+import json
 import sys
 
 import pytest
@@ -63,3 +64,25 @@
     stdout, stderr = capsys.readouterr()
     assert not stderr
     assert stdout == " example com\n bbc co.uk\nforums bbc co.uk\n"
+
+
+def test_cli_json_output(
+    capsys: pytest.CaptureFixture[str], monkeypatch: pytest.MonkeyPatch
+) -> None:
+    """Test CLI with --json option."""
+    monkeypatch.setattr(sys, "argv", ["tldextract", "--json", "www.bbc.co.uk"])
+
+    main()
+
+    stdout, stderr = capsys.readouterr()
+    assert not stderr
+    assert json.loads(stdout) == {
+        "subdomain": "www",
+        "domain": "bbc",
+        "suffix": "co.uk",
+        "fqdn": "www.bbc.co.uk",
+        "ipv4": "",
+        "ipv6": "",
+        "is_private": False,
+        "registered_domain": "bbc.co.uk",
+    }
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/tldextract-5.0.1/tests/main_test.py new/tldextract-5.1.0/tests/main_test.py
--- old/tldextract-5.0.1/tests/main_test.py  2023-10-11 21:15:35.000000000 +0200
+++ new/tldextract-5.1.0/tests/main_test.py  2023-10-28 20:47:14.000000000 +0200
@@ -8,6 +8,7 @@
 from collections.abc import Sequence
 from pathlib import Path
 from typing import Any
+from unittest.mock import Mock
 
 import pytest
 import pytest_mock
@@ -449,6 +450,34 @@
     tldextract.suffix_list.find_first_response(cache, [server], 5)
 
 
+@responses.activate
+def test_find_first_response_without_session(tmp_path: Path) -> None:
+    """Test it is able to find first response without session passed in."""
+    server = "http://some-server.com"
+    response_text = "server response"
+    responses.add(responses.GET, server, status=200, body=response_text)
+    cache = DiskCache(str(tmp_path))
+
+    result = tldextract.suffix_list.find_first_response(cache, [server], 5)
+    assert result == response_text
+
+
+def test_find_first_response_with_session(tmp_path: Path) -> None:
+    """Test it is able to find first response with passed in session."""
+    server = "http://some-server.com"
+    response_text = "server response"
+    cache = DiskCache(str(tmp_path))
+    mock_session = Mock()
+    mock_session.get.return_value.text = response_text
+
+    result = tldextract.suffix_list.find_first_response(
+        cache, [server], 5, mock_session
+    )
+    assert result == response_text
+    mock_session.get.assert_called_once_with(server, timeout=5)
+    mock_session.close.assert_not_called()
+
+
 def test_include_psl_private_domain_attr() -> None:
     """Test private domains, which default to not being treated differently."""
     extract_private = tldextract.TLDExtract(include_psl_private_domains=True)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/tldextract-5.0.1/tests/test_parallel.py new/tldextract-5.1.0/tests/test_parallel.py
--- old/tldextract-5.0.1/tests/test_parallel.py  2023-10-11 10:28:34.000000000 +0200
+++ new/tldextract-5.1.0/tests/test_parallel.py  2023-11-06 05:56:54.000000000 +0100
@@ -1,4 +1,5 @@
 """Test ability to run in parallel with shared cache."""
+
 import os
 import os.path
 from multiprocessing import Pool
@@ -19,14 +20,15 @@
     assert sum(http_request_counts) == 1
 
 
-@responses.activate
 def _run_extractor(cache_dir: Path) -> int:
     """Run the extractor."""
-    responses.add(responses.GET, PUBLIC_SUFFIX_LIST_URLS[0], status=208, body="uk.co")
-    extract = TLDExtract(cache_dir=str(cache_dir))
-
-    extract("bar.uk.com", include_psl_private_domains=True)
-    return len(responses.calls)
+    with responses.RequestsMock(assert_all_requests_are_fired=False) as rsps:
+        rsps.add(responses.GET, PUBLIC_SUFFIX_LIST_URLS[0], status=208, body="uk.co")
+        extract = TLDExtract(cache_dir=str(cache_dir))
+
+        extract("bar.uk.com", include_psl_private_domains=True)
+        num_calls = len(rsps.calls)
+    return num_calls
 
 
 @responses.activate
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/tldextract-5.0.1/tldextract/_version.py new/tldextract-5.1.0/tldextract/_version.py
--- old/tldextract-5.0.1/tldextract/_version.py  2023-10-17 22:05:39.000000000 +0200
+++ new/tldextract-5.1.0/tldextract/_version.py  2023-11-06 07:11:58.000000000 +0100
@@ -12,5 +12,5 @@
 __version_tuple__: VERSION_TUPLE
 version_tuple: VERSION_TUPLE
 
-__version__ = version = '5.0.1'
-__version_tuple__ = version_tuple = (5, 0, 1)
+__version__ = version = '5.1.0'
+__version_tuple__ = version_tuple = (5, 1, 0)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/tldextract-5.0.1/tldextract/cli.py new/tldextract-5.1.0/tldextract/cli.py
--- old/tldextract-5.0.1/tldextract/cli.py  2023-10-11 10:30:22.000000000 +0200
+++ new/tldextract-5.1.0/tldextract/cli.py  2023-11-06 07:02:59.000000000 +0100
@@ -1,7 +1,8 @@
 """tldextract CLI."""
 
-
 import argparse
+import dataclasses
+import json
 import logging
 import os.path
 import pathlib
@@ -23,6 +24,13 @@
         "--version", action="version", version="%(prog)s " + __version__
     )
     parser.add_argument(
+        "-j",
+        "--json",
+        default=False,
+        action="store_true",
+        help="output in json format",
+    )
+    parser.add_argument(
         "input", metavar="fqdn|url", type=str, nargs="*", help="fqdn or url"
     )
 
@@ -89,4 +97,15 @@
 
     for i in args.input:
         ext = tld_extract(i)
-        print(f"{ext.subdomain} {ext.domain} {ext.suffix}")
+        if args.json:
+            properties = ("fqdn", "ipv4", "ipv6", "registered_domain")
+            print(
+                json.dumps(
+                    {
+                        **dataclasses.asdict(ext),
+                        **{prop: getattr(ext, prop) for prop in properties},
+                    }
+                )
+            )
+        else:
+            print(f"{ext.subdomain} {ext.domain} {ext.suffix}")
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/tldextract-5.0.1/tldextract/suffix_list.py new/tldextract-5.1.0/tldextract/suffix_list.py
--- old/tldextract-5.0.1/tldextract/suffix_list.py  2023-10-17 21:52:28.000000000 +0200
+++ new/tldextract-5.1.0/tldextract/suffix_list.py  2023-10-28 20:47:14.000000000 +0200
@@ -31,11 +31,16 @@
     cache: DiskCache,
     urls: Sequence[str],
     cache_fetch_timeout: float | int | None = None,
+    session: requests.Session | None = None,
 ) -> str:
     """Decode the first successfully fetched URL, from UTF-8 encoding to Python unicode."""
-    with requests.Session() as session:
+    session_created = False
+    if session is None:
+        session = requests.Session()
         session.mount("file://", FileAdapter())
+        session_created = True
+
+    try:
         for url in urls:
             try:
                 return cache.cached_fetch_url(
@@ -43,6 +48,11 @@
                 )
             except requests.exceptions.RequestException:
                 LOG.exception("Exception reading Public Suffix List url %s", url)
+    finally:
+        # Ensure the session is always closed if it's constructed in the method
+        if session_created:
+            session.close()
+
     raise SuffixListNotFound(
         "No remote Public Suffix List found. Consider using a mirror, or avoid this"
         " fetch by constructing your TLDExtract with `suffix_list_urls=()`."
     )
@@ -65,6 +75,7 @@
     urls: Sequence[str],
     cache_fetch_timeout: float | int | None,
     fallback_to_snapshot: bool,
+    session: requests.Session | None = None,
 ) -> tuple[list[str], list[str]]:
     """Fetch, parse, and cache the suffix lists."""
     return cache.run_and_cache(
@@ -75,6 +86,7 @@
             "urls": urls,
             "cache_fetch_timeout": cache_fetch_timeout,
             "fallback_to_snapshot": fallback_to_snapshot,
+            "session": session,
         },
         hashed_argnames=["urls", "fallback_to_snapshot"],
     )
@@ -85,10 +97,13 @@
     urls: Sequence[str],
    cache_fetch_timeout: float | int | None,
     fallback_to_snapshot: bool,
+    session: requests.Session | None = None,
 ) -> tuple[list[str], list[str]]:
     """Fetch, parse, and cache the suffix lists."""
     try:
-        text = find_first_response(cache, urls, cache_fetch_timeout=cache_fetch_timeout)
+        text = find_first_response(
+            cache, urls, cache_fetch_timeout=cache_fetch_timeout, session=session
+        )
     except SuffixListNotFound as exc:
         if fallback_to_snapshot:
             maybe_pkg_data = pkgutil.get_data("tldextract", ".tld_set_snapshot")
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/tldextract-5.0.1/tldextract/tldextract.py new/tldextract-5.1.0/tldextract/tldextract.py
--- old/tldextract-5.0.1/tldextract/tldextract.py  2023-10-17 21:53:27.000000000 +0200
+++ new/tldextract-5.1.0/tldextract/tldextract.py  2023-10-28 20:47:14.000000000 +0200
@@ -44,6 +44,7 @@
 from functools import wraps
 
 import idna
+import requests
 
 from .cache import DiskCache, get_cache_dir
 from .remote import lenient_netloc, looks_like_ip, looks_like_ipv6
@@ -221,13 +222,19 @@
         self._cache = DiskCache(cache_dir)
 
     def __call__(
-        self, url: str, include_psl_private_domains: bool | None = None
+        self,
+        url: str,
+        include_psl_private_domains: bool | None = None,
+        session: requests.Session | None = None,
     ) -> ExtractResult:
         """Alias for `extract_str`."""
-        return self.extract_str(url, include_psl_private_domains)
+        return self.extract_str(url, include_psl_private_domains, session=session)
 
     def extract_str(
-        self, url: str, include_psl_private_domains: bool | None = None
+        self,
+        url: str,
+        include_psl_private_domains: bool | None = None,
+        session: requests.Session | None = None,
     ) -> ExtractResult:
         """Take a string URL and splits it into its subdomain, domain, and suffix components.
 
@@ -238,13 +245,27 @@
         ExtractResult(subdomain='forums.news', domain='cnn', suffix='com', is_private=False)
         >>> extractor.extract_str('http://forums.bbc.co.uk/')
         ExtractResult(subdomain='forums', domain='bbc', suffix='co.uk', is_private=False)
+
+        Allows configuring the HTTP request via the optional `session`
+        parameter. For example, if you need to use a HTTP proxy. See also
+        `requests.Session`.
+
+        >>> import requests
+        >>> session = requests.Session()
+        >>> # customize your session here
+        >>> with session:
+        ...     extractor.extract_str("http://forums.news.cnn.com/", session=session)
+        ExtractResult(subdomain='forums.news', domain='cnn', suffix='com', is_private=False)
         """
-        return self._extract_netloc(lenient_netloc(url), include_psl_private_domains)
+        return self._extract_netloc(
+            lenient_netloc(url), include_psl_private_domains, session=session
+        )
 
     def extract_urllib(
         self,
         url: urllib.parse.ParseResult | urllib.parse.SplitResult,
         include_psl_private_domains: bool | None = None,
+        session: requests.Session | None = None,
     ) -> ExtractResult:
         """Take the output of urllib.parse URL parsing methods and further splits the parsed URL.
@@ -260,10 +281,15 @@
         >>> extractor.extract_urllib(urllib.parse.urlsplit('http://forums.bbc.co.uk/'))
         ExtractResult(subdomain='forums', domain='bbc', suffix='co.uk', is_private=False)
         """
-        return self._extract_netloc(url.netloc, include_psl_private_domains)
+        return self._extract_netloc(
+            url.netloc, include_psl_private_domains, session=session
+        )
 
     def _extract_netloc(
-        self, netloc: str, include_psl_private_domains: bool | None
+        self,
+        netloc: str,
+        include_psl_private_domains: bool | None,
+        session: requests.Session | None = None,
     ) -> ExtractResult:
         netloc_with_ascii_dots = (
             netloc.replace("\u3002", "\u002e")
@@ -282,9 +308,9 @@
 
         labels = netloc_with_ascii_dots.split(".")
 
-        suffix_index, is_private = self._get_tld_extractor().suffix_index(
-            labels, include_psl_private_domains=include_psl_private_domains
-        )
+        suffix_index, is_private = self._get_tld_extractor(
+            session=session
+        ).suffix_index(labels, include_psl_private_domains=include_psl_private_domains)
 
         num_ipv4_labels = 4
         if suffix_index == len(labels) == num_ipv4_labels and looks_like_ip(
@@ -297,23 +323,27 @@
         domain = labels[suffix_index - 1] if suffix_index else ""
         return ExtractResult(subdomain, domain, suffix, is_private)
 
-    def update(self, fetch_now: bool = False) -> None:
+    def update(
+        self, fetch_now: bool = False, session: requests.Session | None = None
+    ) -> None:
         """Force fetch the latest suffix list definitions."""
         self._extractor = None
         self._cache.clear()
         if fetch_now:
-            self._get_tld_extractor()
+            self._get_tld_extractor(session=session)
 
     @property
-    def tlds(self) -> list[str]:
+    def tlds(self, session: requests.Session | None = None) -> list[str]:
         """
         Returns the list of tld's used by default.
 
         This will vary based on `include_psl_private_domains` and `extra_suffixes`
         """
-        return list(self._get_tld_extractor().tlds())
+        return list(self._get_tld_extractor(session=session).tlds())
 
-    def _get_tld_extractor(self) -> _PublicSuffixListTLDExtractor:
+    def _get_tld_extractor(
+        self, session: requests.Session | None = None
+    ) -> _PublicSuffixListTLDExtractor:
         """Get or compute this object's TLDExtractor.
 
         Looks up the TLDExtractor in roughly the following order, based on the
@@ -332,6 +362,7 @@
             urls=self.suffix_list_urls,
             cache_fetch_timeout=self.cache_fetch_timeout,
             fallback_to_snapshot=self.fallback_to_snapshot,
+            session=session,
         )
 
         if not any([public_tlds, private_tlds, self.extra_suffixes]):
@@ -400,9 +431,13 @@
 
 @wraps(TLD_EXTRACTOR.__call__)
 def extract(  # noqa: D103
-    url: str, include_psl_private_domains: bool | None = False
+    url: str,
+    include_psl_private_domains: bool | None = False,
+    session: requests.Session | None = None,
 ) -> ExtractResult:
-    return TLD_EXTRACTOR(url, include_psl_private_domains=include_psl_private_domains)
+    return TLD_EXTRACTOR(
+        url, include_psl_private_domains=include_psl_private_domains, session=session
+    )
 
 
 @wraps(TLD_EXTRACTOR.update)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/tldextract-5.0.1/tldextract.egg-info/PKG-INFO new/tldextract-5.1.0/tldextract.egg-info/PKG-INFO
--- old/tldextract-5.0.1/tldextract.egg-info/PKG-INFO  2023-10-17 22:05:39.000000000 +0200
+++ new/tldextract-5.1.0/tldextract.egg-info/PKG-INFO  2023-11-06 07:11:58.000000000 +0100
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: tldextract
-Version: 5.0.1
+Version: 5.1.0
 Summary: Accurately separates a URL's subdomain, domain, and public suffix, using the Public Suffix List (PSL). By default, this includes the public ICANN TLDs and their exceptions. You can optionally support the Public Suffix List's private domains as well.
 Author-email: John Kurkowski <john.kurkow...@gmail.com>
 License: BSD-3-Clause
@@ -21,6 +21,17 @@
 Requires-Dist: requests>=2.1.0
 Requires-Dist: requests-file>=1.4
 Requires-Dist: filelock>=3.0.8
+Provides-Extra: testing
+Requires-Dist: black; extra == "testing"
+Requires-Dist: mypy; extra == "testing"
+Requires-Dist: pytest; extra == "testing"
+Requires-Dist: pytest-gitignore; extra == "testing"
+Requires-Dist: pytest-mock; extra == "testing"
+Requires-Dist: responses; extra == "testing"
+Requires-Dist: ruff; extra == "testing"
+Requires-Dist: tox; extra == "testing"
+Requires-Dist: types-filelock; extra == "testing"
+Requires-Dist: types-requests; extra == "testing"
 
 # tldextract
 [](https://badge.fury.io/py/tldextract)
 [](https://app.travis-ci.com/github/john-kurkowski/tldextract)
@@ -210,7 +221,7 @@
 
 ```python
 extract = tldextract.TLDExtract(
-    suffix_list_urls=["file://absolute/path/to/your/local/suffix/list/file"],
+    suffix_list_urls=["file://" + "/absolute/path/to/your/local/suffix/list/file"],
     cache_dir='/path/to/your/cache/',
     fallback_to_snapshot=False)
 ```
@@ -271,7 +282,7 @@
 
 1. `git clone` this repository.
 2. Change into the new directory.
-3. `pip install tox`
+3. `pip install --upgrade --editable '.[testing]'`
 
 ### Running the test suite
 
@@ -293,6 +304,5 @@
 Automatically format all code:
 
 ```zsh
-pip install black
 black .
 ```
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/tldextract-5.0.1/tldextract.egg-info/requires.txt new/tldextract-5.1.0/tldextract.egg-info/requires.txt
--- old/tldextract-5.0.1/tldextract.egg-info/requires.txt  2023-10-17 22:05:39.000000000 +0200
+++ new/tldextract-5.1.0/tldextract.egg-info/requires.txt  2023-11-06 07:11:58.000000000 +0100
@@ -2,3 +2,15 @@
 requests>=2.1.0
 requests-file>=1.4
 filelock>=3.0.8
+
+[testing]
+black
+mypy
+pytest
+pytest-gitignore
+pytest-mock
+responses
+ruff
+tox
+types-filelock
+types-requests
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/tldextract-5.0.1/tox.ini new/tldextract-5.1.0/tox.ini
--- old/tldextract-5.0.1/tox.ini  2023-10-17 21:56:05.000000000 +0200
+++ new/tldextract-5.1.0/tox.ini  2023-10-28 20:47:14.000000000 +0200
@@ -2,34 +2,21 @@
 envlist = py{38,39,310,311,py3},codestyle,lint,typecheck
 
 [testenv]
-deps =
-    pytest
-    pytest-gitignore
-    pytest-mock
-    responses
 commands = pytest {posargs}
+extras = testing
 
 [testenv:codestyle]
 basepython = python3.8
-deps =
-    black
 commands = black --check {posargs:.}
+extras = testing
 
 [testenv:lint]
 basepython = python3.8
-deps =
-    ruff
 commands = ruff check {posargs:.}
+extras = testing
 
 [testenv:typecheck]
 basepython = python3.8
-deps =
-    mypy
-    pytest
-    pytest-gitignore
-    pytest-mock
-    responses
-    types-filelock
-    types-requests
 commands = mypy --show-error-codes tldextract tests
+extras = testing
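The suffix_list.py hunk above is built around a session-ownership pattern: `find_first_response()` closes the `requests.Session` only when it created the session itself, so a caller-supplied session (e.g. one configured with a proxy) remains usable after the call. A minimal sketch of that pattern follows; the `Session` class here is a hypothetical stand-in so the sketch runs without the requests library, and `fetch_first` is an illustrative name, not tldextract's API.

```python
from typing import Optional, Sequence


class Session:
    """Stand-in for requests.Session; only tracks whether close() was called."""

    def __init__(self) -> None:
        self.closed = False

    def get(self, url: str, timeout: Optional[float] = None) -> str:
        # A real session would perform an HTTP GET here.
        return f"response from {url}"

    def close(self) -> None:
        self.closed = True


def fetch_first(urls: Sequence[str], session: Optional[Session] = None) -> str:
    """Return the first successful response, closing only a self-created session."""
    session_created = False
    if session is None:
        session = Session()
        session_created = True
    try:
        for url in urls:
            return session.get(url, timeout=5)
        raise RuntimeError("no URL succeeded")
    finally:
        # Never close a session the caller still owns.
        if session_created:
            session.close()
```

A caller-owned session survives the call, while an internally created one is always cleaned up in the `finally` block, even on the error path — the same guarantee the diff's comment ("Ensure the session is always closed if it's constructed in the method") describes.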