Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package urlwatch for openSUSE:Factory 
checked in at 2023-05-03 12:57:56
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/urlwatch (Old)
 and      /work/SRC/openSUSE:Factory/.urlwatch.new.1533 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "urlwatch"

Wed May  3 12:57:56 2023 rev:25 rq:1084225 version:2.28

Changes:
--------
--- /work/SRC/openSUSE:Factory/urlwatch/urlwatch.changes        2023-04-11 15:54:42.430051044 +0200
+++ /work/SRC/openSUSE:Factory/.urlwatch.new.1533/urlwatch.changes      2023-05-03 12:58:00.176088316 +0200
@@ -1,0 +2,18 @@
+Wed May  3 09:05:29 UTC 2023 - Michael Vetter <mvet...@suse.com>
+
+- Update to 2.28:
+  * Browser jobs: Migrate from Pyppeteer to Playwright (#761, #751)
+
+-------------------------------------------------------------------
+Wed May  3 09:04:35 UTC 2023 - Michael Vetter <mvet...@suse.com>
+
+- Update to 2.27:
+  Added:
+  * css and xpath filters now accept a sort subfilter to sort matched elements lexicographically
+  Fixed:
+  * Rework handling of running from a source checkout, fixes issues with example
+    files when urlwatch was run as /usr/sbin/urlwatch, e.g. on Void Linux (fixes #755)
+  * Add support for docutils >= 0.18, which deprecated frontend.OptionParser (fixes #754)
+  * Browser jobs: Fix support for Python 3.11 with @asyncio.coroutine removal (#759)
+
+-------------------------------------------------------------------

Old:
----
  urlwatch-2.26.tar.gz

New:
----
  urlwatch-2.28.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ urlwatch.spec ++++++
--- /var/tmp/diff_new_pack.Fz7GLU/_old  2023-05-03 12:58:00.712091465 +0200
+++ /var/tmp/diff_new_pack.Fz7GLU/_new  2023-05-03 12:58:00.716091489 +0200
@@ -17,7 +17,7 @@
 
 
 Name:           urlwatch
-Version:        2.26
+Version:        2.28
 Release:        0
 Summary:        A tool for monitoring webpages for updates
 License:        BSD-3-Clause

++++++ urlwatch-2.26.tar.gz -> urlwatch-2.28.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/urlwatch-2.26/CHANGELOG.md 
new/urlwatch-2.28/CHANGELOG.md
--- old/urlwatch-2.26/CHANGELOG.md      2023-04-11 13:28:02.000000000 +0200
+++ new/urlwatch-2.28/CHANGELOG.md      2023-05-03 10:05:47.000000000 +0200
@@ -4,13 +4,32 @@
 
 The format mostly follows [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
+## [2.28] -- 2023-05-03
+
+### Changed
+
+- Browser jobs: Migrate from Pyppeteer to Playwright (#761, by Paul Sattlegger, fixes #751)
+
+## [2.27] -- 2023-05-03
+
+### Added
+
+- `css` and `xpath` filters now accept a `sort` subfilter to sort matched elements lexicographically
+
+### Fixed
+
+- Rework handling of running from a source checkout, fixes issues with example files
+  when `urlwatch` was run as `/usr/sbin/urlwatch`, e.g. on Void Linux (fixes #755)
+- Add support for docutils >= 0.18, which deprecated `frontend.OptionParser` (fixes #754)
+- Browser jobs: Fix support for Python 3.11 with `@asyncio.coroutine` removal (#759, by Faster IT)
+
 ## [2.26] -- 2023-04-11
 
 ### Added
 
 - `browser` job: Add support for specifying `useragent` (#700, by Francesco Versaci)
 - Document how to ignore whitespace changes (PR#707, by Paulo Magalhaes)
-- `shell` reporter: Call a script or program when chanegs are detected (fixes #650)
+- `shell` reporter: Call a script or program when changes are detected (fixes #650)
 - New `separate` configuration option for reporters to split reports into one-per-job (contributed by Ryne Everett)
 - `--change-location` option allowing job location to be changed without losing job history (#739, by trevorshannon)
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/urlwatch-2.26/Dockerfile new/urlwatch-2.28/Dockerfile
--- old/urlwatch-2.26/Dockerfile        2023-04-11 13:28:02.000000000 +0200
+++ new/urlwatch-2.28/Dockerfile        2023-05-03 10:05:47.000000000 +0200
@@ -3,7 +3,7 @@
 
 # Optional python modules for additional functionality
 # https://urlwatch.readthedocs.io/en/latest/dependencies.html#optional-packages
-ENV OPT_PYPKGS="beautifulsoup4 jsbeautifier cssbeautifier aioxmpp"
+ARG OPT_PYPKGS="beautifulsoup4 jsbeautifier cssbeautifier aioxmpp"
 ENV HOME="/home/user"
 
 RUN adduser -D user
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/urlwatch-2.26/docs/source/advanced.rst 
new/urlwatch-2.28/docs/source/advanced.rst
--- old/urlwatch-2.26/docs/source/advanced.rst  2023-04-11 13:28:02.000000000 +0200
+++ new/urlwatch-2.28/docs/source/advanced.rst  2023-05-03 10:05:47.000000000 +0200
@@ -18,7 +18,7 @@
 You can also specify an external ``diff``-style tool (a tool that takes
 two filenames (old, new) as parameter and returns on its standard output
 the difference of the files), for example to use :manpage:`wdiff(1)` to get
-word-based differences instead of line-based difference:
+word-based differences instead of line-based difference, or `pandiff <https://github.com/davidar/pandiff>`_ to get markdown differences:
 
 .. code-block:: yaml
 
@@ -323,12 +323,20 @@
 --------------------------------------------------------
 
 For browser jobs, you can configure how long the headless browser will wait
-before a page is considered loaded by using the `wait_until` option. It can take one of four values:
+before a page is considered loaded by using the ``wait_until`` option.
 
-  - `load` will wait until the `load` browser event is fired (default).
-  - `documentloaded` will wait until the `DOMContentLoaded` browser event is fired.
-  - `networkidle0` will wait until there are no more than 0 network connections for at least 500 ms.
-  - `networkidle2` will wait until there are no more than 2 network connections for at least 500 ms.
+It can take one of four values (see `wait_until docs`_ of Playwright):
+
+   - ``load`` - consider operation to be finished when the load event is fired
+   - ``domcontentloaded`` - consider operation to be finished when the
+     DOMContentLoaded event is fired
+   - ``networkidle`` - **discouraged** consider operation to be finished when there
+     are no network connections for at least 500 ms. Don't use this method for
+     testing, rely on web assertions to assess readiness instead
+   - ``commit`` - consider operation to be finished when network response is
+     received and the document started loading
+
+.. _`wait_until docs`: https://playwright.dev/python/docs/api/class-page#page-goto-option-wait-until
 
 
 Treating ``NEW`` jobs as ``CHANGED``
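The ``wait_until`` values documented in the hunk above go into a browser job definition. As a usage sketch (hypothetical job name and placeholder URL, not taken from the urlwatch docs):

```yaml
# Hypothetical entry in urls.yaml; the URL and name are placeholders.
name: "Example JS-heavy page"
navigate: https://example.org/app
wait_until: domcontentloaded
```

Since ``navigate`` implies ``kind: browser``, no explicit job kind is needed.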
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/urlwatch-2.26/docs/source/conf.py 
new/urlwatch-2.28/docs/source/conf.py
--- old/urlwatch-2.26/docs/source/conf.py       2023-04-11 13:28:02.000000000 +0200
+++ new/urlwatch-2.28/docs/source/conf.py       2023-05-03 10:05:47.000000000 +0200
@@ -27,7 +27,7 @@
 author = 'Thomas Perl'
 
 # The full version, including alpha/beta/rc tags
-release = '2.26'
+release = '2.28'
 
 
 # -- General configuration ---------------------------------------------------
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/urlwatch-2.26/docs/source/dependencies.rst 
new/urlwatch-2.28/docs/source/dependencies.rst
--- old/urlwatch-2.26/docs/source/dependencies.rst      2023-04-11 13:28:02.000000000 +0200
+++ new/urlwatch-2.28/docs/source/dependencies.rst      2023-05-03 10:05:47.000000000 +0200
@@ -48,7 +48,8 @@
 | `stdout` reporter with  | `colorama <https://github.com/tartley/colorama>`__ 
                 |
 | color on Windows        |                                                    
                 |
 
+-------------------------+---------------------------------------------------------------------+
-| `browser` job kind      | `pyppeteer 
<https://github.com/pyppeteer/pyppeteer>`__              |
+| `browser` job kind      | `playwright 
<https://github.com/microsoft/playwright-python>`__     |
+|                         | (since version 2.28)                               
                 |
 
+-------------------------+---------------------------------------------------------------------+
 | Unit testing            | `pycodestyle 
<http://pycodestyle.pycqa.org/en/latest/>`__,          |
 |                         | `docutils <https://docutils.sourceforge.io>`__,    
                 |
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/urlwatch-2.26/docs/source/deprecated.rst 
new/urlwatch-2.28/docs/source/deprecated.rst
--- old/urlwatch-2.26/docs/source/deprecated.rst        2023-04-11 13:28:02.000000000 +0200
+++ new/urlwatch-2.28/docs/source/deprecated.rst        2023-05-03 10:05:47.000000000 +0200
@@ -5,6 +5,20 @@
 here with steps to update your configuration for replacements.
 
 
+``networkidle0`` and ``networkidle2`` for ``wait_until`` in browser jobs (since 2.28)
+-------------------------------------------------------------------------------------
+
+Since version 2.28, execution of browser jobs uses Playwright instead of Pyppeteer.
+
+The previously-supported ``wait_until`` values of ``networkidle0`` and ``networkidle2``
+are not supported anymore. Playwright supports the values ``load``, ``domcontentloaded``,
+``networkidle`` (discouraged) or ``commit`` instead.
+
+Existing settings of ``networkidle0`` and ``networkidle2`` will be mapped to
+``networkidle``, and a warning will be issued. To silence the warning and continue
+to use ``networkidle``, specify ``wait_until: networkidle`` explicitly.
+
+
 Filters without subfilters (since 2.22)
 ---------------------------------------
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/urlwatch-2.26/docs/source/filters.rst 
new/urlwatch-2.28/docs/source/filters.rst
--- old/urlwatch-2.26/docs/source/filters.rst   2023-04-11 13:28:02.000000000 +0200
+++ new/urlwatch-2.28/docs/source/filters.rst   2023-05-03 10:05:47.000000000 +0200
@@ -285,6 +285,30 @@
 item.
 
 
+Fixing list reorderings with CSS Selector or XPath filters
+----------------------------------------------------------
+
+In some cases, the ordering of items on a webpage might change regularly
+without the actual content changing. By default, this would show up in
+the diff output as an element being removed from one part of the page and
+inserted in another part of the page.
+
+In cases where the order of items doesn't matter, it's possible to sort
+matched items lexicographically to avoid spurious reports when only the
+ordering of items changes on the page.
+
+The subfilter for ``css`` and ``xpath`` filters is ``sort``, and can be
+``true`` or ``false`` (the default):
+
+.. code:: yaml
+
+   url: https://example.org/items-random-order.html
+   filter:
+     - css:
+         selector: span.item
+         sort: true
+
+
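The new ``sort`` subfilter documented above boils down to sorting the serialized matches before joining them. A simplified stand-alone sketch of that behavior (illustrative only, not the actual filter code):

```python
def join_matches(elements, sort_items=False):
    """Join matched element strings, optionally sorting them lexicographically."""
    return '\n'.join(sorted(elements) if sort_items else elements)


# Items as they might appear on a randomly ordered page:
matches = [
    '<span class="item">B</span>',
    '<span class="item">D</span>',
    '<span class="item">A</span>',
    '<span class="item">C</span>',
]

stable_output = join_matches(matches, sort_items=True)
```

With ``sort: true`` the filtered output is identical regardless of the order in which the elements were matched, which is what suppresses the spurious diffs.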
 Filtering PDF documents
 -----------------------
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/urlwatch-2.26/docs/source/jobs.rst 
new/urlwatch-2.28/docs/source/jobs.rst
--- old/urlwatch-2.26/docs/source/jobs.rst      2023-04-11 13:28:02.000000000 +0200
+++ new/urlwatch-2.28/docs/source/jobs.rst      2023-05-03 10:05:47.000000000 +0200
@@ -67,21 +67,14 @@
 Browser
 -------
 
-This job type is a resource-intensive variant of "URL" to handle web pages requiring JavaScript in order to render the content to be monitored.
+This job type is a resource-intensive variant of "URL" to handle web pages that
+require JavaScript to render the content being monitored.
 
-The optional ``pyppeteer`` package must be installed to run "Browser" jobs (see :ref:`dependencies`).
+The optional ``playwright`` package must be installed to run Browser jobs
+(see :ref:`dependencies`). You will also need to install the browsers using
+``playwright install`` (see `Playwright Installation`_ for details).
 
-At the moment, the Chromium version used by ``pyppeteer`` only supports
-macOS (x86_64), Windows (both x86 and x64) and Linux (x86_64). See
-`this issue <https://github.com/pyppeteer/pyppeteer/issues/155>`__ in the
-Pyppeteer issue tracker for progress on getting ARM devices supported
-(e.g. Raspberry Pi).
-
-Because ``pyppeteer`` downloads a special version of Chromium (~ 100 MiB),
-the first execution of a ``browser`` job could take some time (and bandwidth).
-It is possible to run ``pyppeteer-install`` to pre-download Chromium.
+.. _`Playwright Installation`: https://playwright.dev/python/docs/intro
 
 .. code-block:: yaml
 
@@ -94,20 +87,23 @@
 
 Job-specific optional keys:
 
-- ``wait_until``:  Either ``load``, ``domcontentloaded``, ``networkidle0``, or ``networkidle2`` (see :ref:`advanced_topics`)
-- ``useragent``:  Change useragent (will be passed to pyppeteer)
-
-As this job uses `Pyppeteer <https://github.com/pyppeteer/pyppeteer>`__
-to render the page in a headless Chromium instance, it requires massively
-more resources than a "URL" job. Use it only on pages where ``url`` does not
-give the right results.
-
-Hint: in many instances instead of using a "Browser" job you can
-monitor the output of an API called by the site during page loading
-containing the information you're after using the much faster "URL" job type.
+- ``wait_until``: Either ``load``, ``domcontentloaded``, ``networkidle``, or
+  ``commit`` (see :ref:`advanced_topics`)
+- ``useragent``: ``User-Agent`` header used for requests (otherwise browser default is used)
+- ``browser``: Either ``chromium``, ``chrome``, ``chrome-beta``, ``msedge``,
+  ``msedge-beta``, ``msedge-dev``, ``firefox``, ``webkit`` (must be installed with ``playwright install``)
+
+Because this job uses Playwright_ to
+render the page in a headless browser instance, it uses massively more resources
+than a "URL" job. Use it only on pages where ``url`` does not return the correct
+results. In many cases, instead of using a "Browser" job, you can monitor the
+output of an API called by the page as it loads, which contains the information
+you're looking for, using the much faster "URL" job type.
 
 (Note: ``navigate`` implies ``kind: browser``)
 
+.. _Playwright: https://playwright.dev/python/
+
 
 Shell
 -----
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/urlwatch-2.26/lib/urlwatch/__init__.py 
new/urlwatch-2.28/lib/urlwatch/__init__.py
--- old/urlwatch-2.26/lib/urlwatch/__init__.py  2023-04-11 13:28:02.000000000 +0200
+++ new/urlwatch-2.28/lib/urlwatch/__init__.py  2023-05-03 10:05:47.000000000 +0200
@@ -12,5 +12,5 @@
 __author__ = 'Thomas Perl <m...@thp.io>'
 __license__ = 'BSD'
 __url__ = 'https://thp.io/2008/urlwatch/'
-__version__ = '2.26'
+__version__ = '2.28'
 __user_agent__ = '%s/%s (+https://thp.io/2008/urlwatch/info.html)' % (pkgname, __version__)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/urlwatch-2.26/lib/urlwatch/browser.py 
new/urlwatch-2.28/lib/urlwatch/browser.py
--- old/urlwatch-2.26/lib/urlwatch/browser.py   2023-04-11 13:28:02.000000000 +0200
+++ new/urlwatch-2.28/lib/urlwatch/browser.py   1970-01-01 01:00:00.000000000 +0100
@@ -1,134 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# This file is part of urlwatch (https://thp.io/2008/urlwatch/).
-# Copyright (c) 2008-2023 Thomas Perl <m...@thp.io>
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions
-# are met:
-#
-# 1. Redistributions of source code must retain the above copyright
-#    notice, this list of conditions and the following disclaimer.
-# 2. Redistributions in binary form must reproduce the above copyright
-#    notice, this list of conditions and the following disclaimer in the
-#    documentation and/or other materials provided with the distribution.
-# 3. The name of the author may not be used to endorse or promote products
-#    derived from this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
-# IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
-# OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
-# IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
-# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
-# NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
-# THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-
-import logging
-
-import pyppeteer
-import asyncio
-import threading
-
-from .cli import setup_logger
-
-logger = logging.getLogger(__name__)
-
-
-class BrowserLoop(object):
-    def __init__(self):
-        self._event_loop = asyncio.new_event_loop()
-        self._browser = self._event_loop.run_until_complete(self._launch_browser())
-        self._loop_thread = threading.Thread(target=self._event_loop.run_forever)
-        self._loop_thread.start()
-
-    @asyncio.coroutine
-    def _launch_browser(self):
-        browser = yield from pyppeteer.launch()
-        for p in (yield from browser.pages()):
-            yield from p.close()
-        return browser
-
-    @asyncio.coroutine
-    def _get_content(self, url, wait_until=None, useragent=None):
-        context = yield from self._browser.createIncognitoBrowserContext()
-        page = yield from context.newPage()
-        opts = {}
-        if wait_until is not None:
-            opts['waitUntil'] = wait_until
-        if useragent is not None:
-            yield from page.setUserAgent(useragent)
-        yield from page.goto(url, opts)
-        content = yield from page.content()
-        yield from context.close()
-        return content
-
-    def process(self, url, wait_until=None, useragent=None):
-        coroutine = self._get_content(url, wait_until=wait_until, useragent=useragent)
-        return asyncio.run_coroutine_threadsafe(coroutine, self._event_loop).result()
-
-    def destroy(self):
-        self._event_loop.call_soon_threadsafe(self._event_loop.stop)
-        self._loop_thread.join()
-        self._loop_thread = None
-        self._event_loop.run_until_complete(self._browser.close())
-        self._browser = None
-        self._event_loop = None
-
-
-class BrowserContext(object):
-    _BROWSER_LOOP = None
-    _BROWSER_LOCK = threading.Lock()
-    _BROWSER_REFCNT = 0
-
-    def __init__(self):
-        with BrowserContext._BROWSER_LOCK:
-            if BrowserContext._BROWSER_REFCNT == 0:
-                logger.info('Creating browser main loop')
-                BrowserContext._BROWSER_LOOP = BrowserLoop()
-            BrowserContext._BROWSER_REFCNT += 1
-
-    def process(self, url, wait_until=None, useragent=None):
-        return BrowserContext._BROWSER_LOOP.process(url, wait_until=wait_until, useragent=useragent)
-
-    def close(self):
-        with BrowserContext._BROWSER_LOCK:
-            BrowserContext._BROWSER_REFCNT -= 1
-            if BrowserContext._BROWSER_REFCNT == 0:
-                logger.info('Destroying browser main loop')
-                BrowserContext._BROWSER_LOOP.destroy()
-                BrowserContext._BROWSER_LOOP = None
-
-
-def main():
-    import argparse
-
-    parser = argparse.ArgumentParser(description='Browser handler')
-    parser.add_argument('url', help='URL to retrieve')
-    parser.add_argument('-v', '--verbose', action='store_true', help='show debug output')
-    parser.add_argument('-w',
-                        '--wait-until',
-                        dest='wait_until',
-                        choices=['load', 'domcontentloaded', 'networkidle0', 'networkidle2'],
-                        help='When to consider a pageload finished')
-    parser.add_argument('-u',
-                        '--useragent',
-                        dest='useragent',
-                        help='Change the useragent (sent by pyppeteer)')
-    args = parser.parse_args()
-
-    setup_logger(args.verbose)
-
-    try:
-        ctx = BrowserContext()
-        print(ctx.process(args.url, wait_until=args.wait_until, useragent=args.useragent))
-    finally:
-        ctx.close()
-
-
-if __name__ == '__main__':
-    main()
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/urlwatch-2.26/lib/urlwatch/cli.py 
new/urlwatch-2.28/lib/urlwatch/cli.py
--- old/urlwatch-2.26/lib/urlwatch/cli.py       2023-04-11 13:28:02.000000000 +0200
+++ new/urlwatch-2.28/lib/urlwatch/cli.py       2023-05-03 10:05:47.000000000 +0200
@@ -47,9 +47,6 @@
 # Check if we are installed in the system already
 (prefix, bindir) = os.path.split(os.path.dirname(os.path.abspath(sys.argv[0])))
 
-if bindir != 'bin':
-    sys.path.insert(0, os.path.join(prefix, bindir, 'lib'))
-
 from urlwatch.command import UrlwatchCommand
 from urlwatch.config import CommandConfig
 from urlwatch.main import Urlwatch
@@ -90,7 +87,7 @@
     if os.path.exists(old_cache_file) and not os.path.exists(new_cache_file):
         cache_file = old_cache_file
 
-    command_config = CommandConfig(sys.argv[1:], pkgname, urlwatch_dir, bindir, prefix,
+    command_config = CommandConfig(sys.argv[1:], pkgname, urlwatch_dir, prefix,
                                    config_file, urls_file, hooks_file, cache_file, False)
     setup_logger(command_config.verbose)
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/urlwatch-2.26/lib/urlwatch/config.py 
new/urlwatch-2.28/lib/urlwatch/config.py
--- old/urlwatch-2.26/lib/urlwatch/config.py    2023-04-11 13:28:02.000000000 +0200
+++ new/urlwatch-2.28/lib/urlwatch/config.py    2023-05-03 10:05:47.000000000 +0200
@@ -52,20 +52,12 @@
 
 class CommandConfig(BaseConfig):
 
-    def __init__(self, args, pkgname, urlwatch_dir, bindir, prefix, config, urls, hooks, cache, verbose):
+    def __init__(self, args, pkgname, urlwatch_dir, prefix, config, urls, hooks, cache, verbose):
         super().__init__(pkgname, urlwatch_dir, config, urls, cache, hooks, verbose)
-        self.bindir = bindir
-        self.prefix = prefix
         self.migrate_cache = migrate_cache
         self.migrate_urls = migrate_urls
 
-        if self.bindir == 'bin':
-            # Installed system-wide
-            self.examples_dir = os.path.join(prefix, 'share', self.pkgname, 'examples')
-        else:
-            # Assume we are not yet installed
-            self.examples_dir = os.path.join(prefix, bindir, 'share', self.pkgname, 'examples')
-
+        self.examples_dir = os.path.join(prefix, 'share', self.pkgname, 'examples')
         self.urls_yaml_example = os.path.join(self.examples_dir, 'urls.yaml.example')
         self.hooks_py_example = os.path.join(self.examples_dir, 'hooks.py.example')
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/urlwatch-2.26/lib/urlwatch/filters.py 
new/urlwatch-2.28/lib/urlwatch/filters.py
--- old/urlwatch-2.26/lib/urlwatch/filters.py   2023-04-11 13:28:02.000000000 +0200
+++ new/urlwatch-2.28/lib/urlwatch/filters.py   2023-05-03 10:05:47.000000000 +0200
@@ -674,6 +674,7 @@
         self.namespaces = subfilter.get('namespaces')
         self.skip = int(subfilter.get('skip', 0))
         self.maxitems = int(subfilter.get('maxitems', 0))
+        self.sort_items = bool(subfilter.get('sort', False))
         if self.method not in ('html', 'xml'):
            raise ValueError('%s method must be "html" or "xml", got %r' % (filter_kind, self.method))
         if self.method == 'html' and self.namespaces is not None:
@@ -777,7 +778,8 @@
             elements = elements[self.skip:]
         if self.maxitems:
             elements = elements[:self.maxitems]
-        return '\n'.join(self._to_string(element) for element in elements)
+        elements = (self._to_string(element) for element in elements)
+        return '\n'.join(sorted(elements) if self.sort_items else elements)
 
 
 LXML_PARSER_COMMON_SUBFILTERS = {
@@ -786,6 +788,7 @@
     'namespaces': 'Mapping of XML namespaces for matching',
     'skip': 'Number of elements to skip from the beginning (default: 0)',
     'maxitems': 'Maximum number of items to return (default: all)',
+    'sort': 'Sort matched items after filtering (default: False)',
 }
 
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/urlwatch-2.26/lib/urlwatch/jobs.py 
new/urlwatch-2.28/lib/urlwatch/jobs.py
--- old/urlwatch-2.26/lib/urlwatch/jobs.py      2023-04-11 13:28:02.000000000 +0200
+++ new/urlwatch-2.28/lib/urlwatch/jobs.py      2023-05-03 10:05:47.000000000 +0200
@@ -34,13 +34,15 @@
 import os
 import re
 import subprocess
-import requests
 import textwrap
-import urlwatch
+
+import requests
 from requests.packages.urllib3.exceptions import InsecureRequestWarning
 
-from .util import TrackSubClasses
+import urlwatch
+
 from .filters import FilterBase
+from .util import TrackSubClasses
 
 requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
 
@@ -406,7 +408,7 @@
 
     __required__ = ('navigate',)
 
-    __optional__ = ('wait_until', 'useragent')
+    __optional__ = ('wait_until', 'useragent', 'browser')
 
     def get_location(self):
         return self.user_visible_url or self.navigate
@@ -414,12 +416,16 @@
     def set_base_location(self, location):
         self.navigate = location
 
-    def main_thread_enter(self):
-        from .browser import BrowserContext
-        self.ctx = BrowserContext()
-
-    def main_thread_exit(self):
-        self.ctx.close()
-
     def retrieve(self, job_state):
-        return self.ctx.process(self.navigate, wait_until=self.wait_until, useragent=self.useragent)
+        from playwright.sync_api import sync_playwright
+        with sync_playwright() as playwright:
+            browser = playwright[self.browser or "chromium"].launch()
+            page = browser.new_page(user_agent=self.useragent)
+
+            if self.wait_until in ('networkidle0', 'networkidle2'):
+                logger.warning(f'wait_until has deprecated value of {self.wait_until}, see docs')
+                # Pyppeteer -> Playwright migration
+                self.wait_until = 'networkidle'
+
+            page.goto(self.navigate, wait_until=self.wait_until)
+            return page.content()
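The deprecated-value handling in ``retrieve()`` above can be sketched as a standalone helper (illustrative only, with hypothetical names; the real code mutates ``self.wait_until`` and logs through the module logger):

```python
import warnings

# Deprecated Pyppeteer-era values and their closest Playwright equivalent.
_WAIT_UNTIL_MIGRATION = {
    'networkidle0': 'networkidle',
    'networkidle2': 'networkidle',
}


def normalize_wait_until(value):
    """Return a Playwright-compatible wait_until value, warning on deprecated input."""
    if value in _WAIT_UNTIL_MIGRATION:
        replacement = _WAIT_UNTIL_MIGRATION[value]
        warnings.warn('wait_until=%r is deprecated, using %r instead'
                      % (value, replacement), DeprecationWarning)
        return replacement
    return value
```

Jobs that already specify ``wait_until: networkidle`` pass through unchanged and produce no warning, matching the behavior documented in deprecated.rst.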
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/urlwatch-2.26/lib/urlwatch/tests/data/filter_documentation_testdata.yaml 
new/urlwatch-2.28/lib/urlwatch/tests/data/filter_documentation_testdata.yaml
--- old/urlwatch-2.26/lib/urlwatch/tests/data/filter_documentation_testdata.yaml        2023-04-11 13:28:02.000000000 +0200
+++ new/urlwatch-2.28/lib/urlwatch/tests/data/filter_documentation_testdata.yaml        2023-05-03 10:05:47.000000000 +0200
@@ -363,6 +363,22 @@
     <div class="cpu">Pentium</div>
 
     <div class="cpu">Pentium MMX</div>
+https://example.org/items-random-order.html:
+  input: |
+    <body>
+      This is a test. <span class="item">B</span>
+      And some other content. <span class="item">D</span>
+      <span class="item">A</span> Sort it please.
+      Thank you. <span class="item">C</span>
+    </body>
+  output: |
+    <span class="item">A</span>
+
+    <span class="item">B</span>
+
+    <span class="item">C</span>
+
+    <span class="item">D</span>
 https://example.net/jobs.json:
   input: |
     [
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/urlwatch-2.26/lib/urlwatch/tests/test_filter_documentation.py 
new/urlwatch-2.28/lib/urlwatch/tests/test_filter_documentation.py
--- old/urlwatch-2.26/lib/urlwatch/tests/test_filter_documentation.py   2023-04-11 13:28:02.000000000 +0200
+++ new/urlwatch-2.28/lib/urlwatch/tests/test_filter_documentation.py   2023-05-03 10:05:47.000000000 +0200
@@ -15,10 +15,15 @@
 
 
 # https://stackoverflow.com/a/48719723/1047040
+# https://stackoverflow.com/a/75996218/1047040
 def parse_rst(text):
     parser = docutils.parsers.rst.Parser()
-    components = (docutils.parsers.rst.Parser,)
-    settings = docutils.frontend.OptionParser(components=components).get_default_values()
+    if hasattr(docutils.frontend, 'get_default_settings'):
+        # docutils >= 0.18
+        settings = docutils.frontend.get_default_settings(docutils.parsers.rst.Parser)
+    else:
+        # docutils < 0.18
+        settings = docutils.frontend.OptionParser(components=(docutils.parsers.rst.Parser,)).get_default_values()
     document = docutils.utils.new_document('<rst-doc>', settings=settings)
     parser.parse(text, document)
     return document
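The fix above selects the docutils API at runtime with ``hasattr``. The same feature-detection pattern can be demonstrated against stand-in objects with simplified signatures (hypothetical names; the real docutils functions take component arguments):

```python
from types import SimpleNamespace


def get_settings(frontend):
    """Prefer the newer get_default_settings() API, fall back to OptionParser."""
    if hasattr(frontend, 'get_default_settings'):
        return frontend.get_default_settings()
    return frontend.OptionParser().get_default_values()


class _LegacyOptionParser:
    def get_default_values(self):
        return 'old-settings'


# Stand-ins for a new-style and an old-style docutils.frontend module:
new_frontend = SimpleNamespace(get_default_settings=lambda: 'new-settings')
old_frontend = SimpleNamespace(OptionParser=_LegacyOptionParser)
```

The same code path then works against both docutils generations without pinning a version.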
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/urlwatch-2.26/lib/urlwatch/tests/test_handler.py 
new/urlwatch-2.28/lib/urlwatch/tests/test_handler.py
--- old/urlwatch-2.26/lib/urlwatch/tests/test_handler.py        2023-04-11 13:28:02.000000000 +0200
+++ new/urlwatch-2.28/lib/urlwatch/tests/test_handler.py        2023-05-03 10:05:47.000000000 +0200
@@ -89,8 +89,7 @@
 
 class ConfigForTest(CommandConfig):
     def __init__(self, config, urls, cache, hooks, verbose):
-        (prefix, bindir) = os.path.split(os.path.dirname(os.path.abspath(sys.argv[0])))
-        super().__init__([], 'urlwatch', os.path.dirname(__file__), bindir, prefix, config, urls, hooks, cache, verbose)
+        super().__init__([], 'urlwatch', os.path.dirname(__file__), root, config, urls, hooks, cache, verbose)
 
 
 @contextlib.contextmanager
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/urlwatch-2.26/share/man/man1/urlwatch.1 
new/urlwatch-2.28/share/man/man1/urlwatch.1
--- old/urlwatch-2.26/share/man/man1/urlwatch.1 2023-04-11 13:28:02.000000000 +0200
+++ new/urlwatch-2.28/share/man/man1/urlwatch.1 2023-05-03 10:05:47.000000000 +0200
@@ -27,7 +27,7 @@
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "URLWATCH" "1" "Apr 11, 2023" "urlwatch 2.26" "urlwatch 2.26 Documentation"
+.TH "URLWATCH" "1" "May 03, 2023" "urlwatch 2.28" "urlwatch 2.28 Documentation"
 .SH NAME
 urlwatch \- Monitor webpages and command output for changes
 .SH SYNOPSIS
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/urlwatch-2.26/share/man/man5/urlwatch-config.5 
new/urlwatch-2.28/share/man/man5/urlwatch-config.5
--- old/urlwatch-2.26/share/man/man5/urlwatch-config.5  2023-04-11 13:28:02.000000000 +0200
+++ new/urlwatch-2.28/share/man/man5/urlwatch-config.5  2023-05-03 10:05:47.000000000 +0200
@@ -27,7 +27,7 @@
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "URLWATCH-CONFIG" "5" "Apr 11, 2023" "" "urlwatch"
+.TH "URLWATCH-CONFIG" "5" "May 03, 2023" "" "urlwatch"
 .SH NAME
 urlwatch-config \- Configuration of urlwatch behavior
 .SH SYNOPSIS
@@ -36,7 +36,7 @@
 .SH DESCRIPTION
 .sp
 The global configuration for urlwatch contains basic settings for the generic
-behavior of urlwatch as well as the reporters\&.
+behavior of urlwatch as well as the \fI\%Reporters\fP\&.
 .SH DISPLAY
 .sp
 In addition to always reporting changes (which is the whole point of urlwatch),
@@ -76,7 +76,7 @@
 current page contents.
 .SH REPORTERS
 .sp
-"Reporters" are the modules that deliver notifications through their
+\(dqReporters\(dq are the modules that deliver notifications through their
 respective medium when they are enabled through the configuration file.
 .sp
 See \fBurlwatch\-reporters(5)\fP for reporter\-specific options.
@@ -197,7 +197,7 @@
 \fBbrowser\fP: Applies only to \fBbrowser\fP jobs (with key \fBnavigate\fP)
 .UNINDENT
 .sp
-See jobs about the different job kinds and what the possible keys are.
+See \fI\%Jobs\fP about the different job kinds and what the possible keys are.
 .SH FILES
 .sp
 \fB$XDG_CONFIG_HOME/urlwatch/urlwatch.yaml\fP
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/urlwatch-2.26/share/man/man5/urlwatch-filters.5 
new/urlwatch-2.28/share/man/man5/urlwatch-filters.5
--- old/urlwatch-2.26/share/man/man5/urlwatch-filters.5 2023-04-11 
13:28:02.000000000 +0200
+++ new/urlwatch-2.28/share/man/man5/urlwatch-filters.5 2023-05-03 
10:05:47.000000000 +0200
@@ -27,7 +27,7 @@
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "URLWATCH-FILTERS" "5" "Apr 11, 2023" "" "urlwatch"
+.TH "URLWATCH-FILTERS" "5" "May 03, 2023" "" "urlwatch"
 .SH NAME
 urlwatch-filters \- Filtering output and diff data of urlwatch jobs
 .SH SYNOPSIS
@@ -67,7 +67,7 @@
 The \fBfilter\fP is only applied to new content, the old content was
 already filtered when it was retrieved. This means that changes to
 \fBfilter\fP are not visible when reporting unchanged contents
-(see configuration_display for details), and the diff output
+(see \fI\%Display\fP for details), and the diff output
 will be between (old content with filter at the time old content was
 retrieved) and (new content with current filter).
 .sp
@@ -147,7 +147,7 @@
 .SH PICKING OUT ELEMENTS FROM A WEBPAGE
 .sp
 You can pick only a given HTML element with the built\-in filter, for
-example to extract \fB<div id="something">.../<div>\fP from a page, you
+example to extract \fB<div id=\(dqsomething\(dq>.../<div>\fP from a page, you
 can use the following in your \fBurls.yaml\fP:
 .INDENT 0.0
 .INDENT 3.5
@@ -189,7 +189,7 @@
 url: https://example.net/version.html
 filter:
   \- html2text
-  \- grep: "Current.*version"
+  \- grep: \(dqCurrent.*version\(dq
   \- strip
 .ft P
 .fi
@@ -252,8 +252,8 @@
 .UNINDENT
 .UNINDENT
 .sp
-This would filter only \fB<li class="unchecked">\fP tags directly
-below \fB<ul id="groceries">\fP elements.
+This would filter only \fB<li class=\(dqunchecked\(dq>\fP tags directly
+below \fB<ul id=\(dqgroceries\(dq>\fP elements.
 .sp
 Some limitations and extensions exist as explained in \fI\%cssselect’s
 documentation\fP 
<\fBhttps://cssselect.readthedocs.io/en/latest/#supported-selectors\fP>\&.
@@ -388,6 +388,33 @@
 the same HTML document, and shows/hides one via CSS depending on the
 viewport size), you can use \fBmaxitems: 1\fP to only return the first
 item.
+.SH FIXING LIST REORDERINGS WITH CSS SELECTOR OR XPATH FILTERS
+.sp
+In some cases, the ordering of items on a webpage might change regularly
+without the actual content changing. By default, this would show up in
+the diff output as an element being removed from one part of the page and
+inserted in another part of the page.
+.sp
+In cases where the order of items doesn\(aqt matter, it\(aqs possible to sort
+matched items lexicographically to avoid spurious reports when only the
+ordering of items changes on the page.
+.sp
+The subfilter for \fBcss\fP and \fBxpath\fP filters is \fBsort\fP, and can be
+\fBtrue\fP or \fBfalse\fP (the default):
+.INDENT 0.0
+.INDENT 3.5
+.sp
+.nf
+.ft C
+url: https://example.org/items\-random\-order.html
+filter:
+  \- css:
+      selector: span.item
+      sort: true
+.ft P
+.fi
+.UNINDENT
+.UNINDENT
 .SH FILTERING PDF DOCUMENTS
 .sp
 To monitor the text of a PDF file, you use the \fIpdf2text\fP filter. It 
requires
@@ -502,7 +529,7 @@
 url: http://example.org/paragraphs.txt
 filter:
   \- sort:
-      separator: "\en\en"
+      separator: \(dq\en\en\(dq
 .ft P
 .fi
 .UNINDENT
@@ -559,7 +586,7 @@
 .UNINDENT
 .sp
 Alternatively, the filter can be specified more verbose with a dict.
-In this example \fB"\en\en"\fP is used to separate paragraphs (items that
+In this example \fB\(dq\en\en\(dq\fP is used to separate paragraphs (items that
 are separated by an empty line):
 .INDENT 0.0
 .INDENT 3.5
@@ -569,7 +596,7 @@
 url: http://example.org/reverse\-paragraphs.txt
 filter:
   \- reverse:
-      separator: "\en\en"
+      separator: \(dq\en\en\(dq
 .ft P
 .fi
 .UNINDENT
@@ -585,7 +612,7 @@
 .ft C
 url: https://github.com/tulir/gomuks/releases
 filter:
-  \- xpath: \(aq(//div[contains(@class,"d\-flex flex\-column flex\-md\-row 
my\-5 flex\-justify\-center")]//h1//a)[1]\(aq
+  \- xpath: \(aq(//div[contains(@class,\(dqd\-flex flex\-column flex\-md\-row 
my\-5 flex\-justify\-center\(dq)]//h1//a)[1]\(aq
   \- html2text: re
   \- strip
 .ft P
@@ -602,7 +629,7 @@
 url: https://github.com/thp/urlwatch/tags
 filter:
   \- xpath:
-      path: //*[@class="Link\-\-primary"]
+      path: //*[@class=\(dqLink\-\-primary\(dq]
       maxitems: 1
   \- html2text:
 .ft P
@@ -618,7 +645,7 @@
 .ft C
 url: https://gitlab.com/chinstrap/gammastep/\-/tags
 filter:
-  \- xpath: (//a[contains(@class,"item\-title ref\-name")])[1]
+  \- xpath: (//a[contains(@class,\(dqitem\-title ref\-name\(dq)])[1]
   \- html2text
 .ft P
 .fi
@@ -667,7 +694,7 @@
 .ft C
 url: https://example.com/regex\-substitute.html
 filter:
-    \- re.sub: \(aq\es*href="[^"]*"\(aq
+    \- re.sub: \(aq\es*href=\(dq[^\(dq]*\(dq\(aq
     \- re.sub:
         pattern: \(aq<h1>\(aq
         repl: \(aqHEADING 1: \(aq
@@ -680,7 +707,7 @@
 .UNINDENT
 .sp
 If you want to enable certain flags (e.g. \fBre.MULTILINE\fP) in the
-call, this is possible by inserting an "inline flag" documented in
+call, this is possible by inserting an \(dqinline flag\(dq documented in
 \fI\%flags in re.compile\fP 
<\fBhttps://docs.python.org/3/library/re.html#re.compile\fP>, here are some 
examples:
 .INDENT 0.0
 .IP \(bu 2
@@ -727,7 +754,7 @@
 .ft C
 url: https://example.net/shellpipe\-grep.txt
 filter:
-  \- shellpipe: "grep \-i \-o \(aqprice: <span>.*</span>\(aq"
+  \- shellpipe: \(dqgrep \-i \-o \(aqprice: <span>.*</span>\(aq\(dq
 .ft P
 .fi
 .UNINDENT
@@ -744,7 +771,7 @@
 .ft C
 url: https://example.net/shellpipe\-awk\-oneliner.txt
 filter:
-  \- shellpipe: awk \(aq{ print FNR " " $0 }\(aq
+  \- shellpipe: awk \(aq{ print FNR \(dq \(dq $0 }\(aq
 .ft P
 .fi
 .UNINDENT
@@ -764,7 +791,7 @@
       # Copy the input to a temporary file, then pipe through awk
       tee $FILENAME | awk \(aq/The numbers for (.*) are:/,/The next draw is on 
(.*)./\(aq
       # Analyze the input file in some other way
-      echo "Input lines: $(wc \-l $FILENAME | awk \(aq{ print $1 }\(aq)"
+      echo \(dqInput lines: $(wc \-l $FILENAME | awk \(aq{ print $1 }\(aq)\(dq
       rm \-f $FILENAME
 .ft P
 .fi
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/urlwatch-2.26/share/man/man5/urlwatch-jobs.5 
new/urlwatch-2.28/share/man/man5/urlwatch-jobs.5
--- old/urlwatch-2.26/share/man/man5/urlwatch-jobs.5    2023-04-11 
13:28:02.000000000 +0200
+++ new/urlwatch-2.28/share/man/man5/urlwatch-jobs.5    2023-05-03 
10:05:47.000000000 +0200
@@ -27,7 +27,7 @@
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "URLWATCH-JOBS" "5" "Apr 11, 2023" "" "urlwatch"
+.TH "URLWATCH-JOBS" "5" "May 03, 2023" "" "urlwatch"
 .SH NAME
 urlwatch-jobs \- Job types and configuration for urlwatch
 .SH SYNOPSIS
@@ -49,7 +49,7 @@
 .sp
 .nf
 .ft C
-name: "This is a human\-readable name/label of the job"
+name: \(dqThis is a human\-readable name/label of the job\(dq
 .ft P
 .fi
 .UNINDENT
@@ -64,8 +64,8 @@
 .sp
 .nf
 .ft C
-name: "urlwatch homepage"
-url: "https://thp.io/2008/urlwatch/";
+name: \(dqurlwatch homepage\(dq
+url: \(dqhttps://thp.io/2008/urlwatch/\(dq
 .ft P
 .fi
 .UNINDENT
@@ -80,7 +80,7 @@
 Job\-specific optional keys:
 .INDENT 0.0
 .IP \(bu 2
-\fBcookies\fP: Cookies to send with the request (see advanced_topics)
+\fBcookies\fP: Cookies to send with the request (see \fI\%Advanced Topics\fP)
 .IP \(bu 2
 \fBmethod\fP: HTTP method to use (default: \fBGET\fP)
 .IP \(bu 2
@@ -96,44 +96,35 @@
 .IP \(bu 2
 \fBheaders\fP: HTTP header to send along with the request
 .IP \(bu 2
-\fBencoding\fP: Override the character encoding from the server (see 
advanced_topics)
+\fBencoding\fP: Override the character encoding from the server (see 
\fI\%Advanced Topics\fP)
 .IP \(bu 2
-\fBtimeout\fP: Override the default socket timeout (see advanced_topics)
+\fBtimeout\fP: Override the default socket timeout (see \fI\%Advanced 
Topics\fP)
 .IP \(bu 2
-\fBignore_connection_errors\fP: Ignore (temporary) connection errors (see 
advanced_topics)
+\fBignore_connection_errors\fP: Ignore (temporary) connection errors (see 
\fI\%Advanced Topics\fP)
 .IP \(bu 2
-\fBignore_http_error_codes\fP: List of HTTP errors to ignore (see 
advanced_topics)
+\fBignore_http_error_codes\fP: List of HTTP errors to ignore (see 
\fI\%Advanced Topics\fP)
 .IP \(bu 2
 \fBignore_timeout_errors\fP: Do not report errors when the timeout is hit
 .IP \(bu 2
-\fBignore_too_many_redirects\fP: Ignore redirect loops (see advanced_topics)
+\fBignore_too_many_redirects\fP: Ignore redirect loops (see \fI\%Advanced 
Topics\fP)
 .UNINDENT
 .sp
 (Note: \fBurl\fP implies \fBkind: url\fP)
 .SH BROWSER
 .sp
-This job type is a resource\-intensive variant of "URL" to handle web pages
-requiring JavaScript in order to render the content to be monitored.
+This job type is a resource\-intensive variant of \(dqURL\(dq to handle web 
pages that
+require JavaScript to render the content being monitored.
 .sp
-The optional \fBpyppeteer\fP package must be installed to run "Browser" jobs
-(see dependencies).
-.sp
-At the moment, the Chromium version used by \fBpyppeteer\fP only supports
-macOS (x86_64), Windows (both x86 and x64) and Linux (x86_64). See
-\fI\%this issue\fP <\fBhttps://github.com/pyppeteer/pyppeteer/issues/155\fP> 
in the
-Pyppeteer issue tracker for progress on getting ARM devices supported
-(e.g. Raspberry Pi).
-.sp
-Because \fBpyppeteer\fP downloads a special version of Chromium (~ 100 MiB),
-the first execution of a \fBbrowser\fP job could take some time (and 
bandwidth).
-It is possible to run \fBpyppeteer\-install\fP to pre\-download Chromium.
+The optional \fIplaywright\fP package must be installed in order to run 
Browser jobs
+(see \fI\%Dependencies\fP). You will also need to install the browsers using
+\fBplaywright install\fP (see \fI\%Playwright Installation\fP 
<\fBhttps://playwright.dev/python/docs/intro\fP> for details).
 .INDENT 0.0
 .INDENT 3.5
 .sp
 .nf
 .ft C
-name: "A page with JavaScript"
-navigate: "https://example.org/";
+name: \(dqA page with JavaScript\(dq
+navigate: \(dqhttps://example.org/\(dq
 .ft P
 .fi
 .UNINDENT
@@ -148,19 +139,21 @@
 Job\-specific optional keys:
 .INDENT 0.0
 .IP \(bu 2
-\fBwait_until\fP:  Either \fBload\fP, \fBdomcontentloaded\fP, 
\fBnetworkidle0\fP, or \fBnetworkidle2\fP (see advanced_topics)
+\fBwait_until\fP: Either \fBload\fP, \fBdomcontentloaded\fP, 
\fBnetworkidle\fP, or
+\fBcommit\fP (see \fI\%Advanced Topics\fP)
 .IP \(bu 2
-\fBuseragent\fP:  Change useragent (will be passed to pyppeteer)
+\fBuseragent\fP: \fBUser\-Agent\fP header used for requests (otherwise browser 
default is used)
+.IP \(bu 2
+\fBbrowser\fP:  Either \fBchromium\fP, \fBchrome\fP, \fBchrome\-beta\fP, 
\fBmsedge\fP,
+\fBmsedge\-beta\fP, \fBmsedge\-dev\fP, \fBfirefox\fP, \fBwebkit\fP (must be 
installed with \fBplaywright install\fP)
 .UNINDENT
 .sp
-As this job uses \fI\%Pyppeteer\fP 
<\fBhttps://github.com/pyppeteer/pyppeteer\fP>
-to render the page in a headless Chromium instance, it requires massively
-more resources than a "URL" job. Use it only on pages where \fBurl\fP does not
-give the right results.
-.sp
-Hint: in many instances instead of using a "Browser" job you can
-monitor the output of an API called by the site during page loading
-containing the information you\(aqre after using the much faster "URL" job 
type.
+Because this job uses \fI\%Playwright\fP 
<\fBhttps://playwright.dev/python/\fP> to
+render the page in a headless browser instance, it uses massively more 
resources
+than a \(dqURL\(dq job. Use it only on pages where \fBurl\fP does not return 
the correct
+results. In many cases, instead of using a \(dqBrowser\(dq job, you can use 
the output
+of an API called by the page as it loads, which contains the information
+you\(aqre looking for, by using the much faster \(dqURL\(dq job type.
 .sp
 (Note: \fBnavigate\fP implies \fBkind: browser\fP)
 .SH SHELL
@@ -173,8 +166,8 @@
 .sp
 .nf
 .ft C
-name: "What is in my Home Directory?"
-command: "ls \-al ~"
+name: \(dqWhat is in my Home Directory?\(dq
+command: \(dqls \-al ~\(dq
 .ft P
 .fi
 .UNINDENT
@@ -221,8 +214,8 @@
 .nf
 .ft C
 command: |
-  echo "Normal standard output."
-  echo "Something goes to stderr, which makes this job fail." 1>&2
+  echo \(dqNormal standard output.\(dq
+  echo \(dqSomething goes to stderr, which makes this job fail.\(dq 1>&2
   exit 0
 stderr: fail
 .ft P
@@ -237,8 +230,8 @@
 .nf
 .ft C
 command: |
-  echo "An important line on stdout."
-  echo "Another important line on stderr." 1>&2
+  echo \(dqAn important line on stdout.\(dq
+  echo \(dqAnother important line on stderr.\(dq 1>&2
 stderr: stdout
 .ft P
 .fi
@@ -249,13 +242,13 @@
 .IP \(bu 2
 \fBname\fP: Human\-readable name/label of the job
 .IP \(bu 2
-\fBfilter\fP: filters (if any) to apply to the output (can be tested with 
\fB\-\-test\-filter\fP)
+\fBfilter\fP: \fI\%Filters\fP (if any) to apply to the output (can be tested 
with \fB\-\-test\-filter\fP)
 .IP \(bu 2
 \fBmax_tries\fP: Number of times to retry fetching the resource
 .IP \(bu 2
 \fBdiff_tool\fP: Command to a custom tool for generating diff text
 .IP \(bu 2
-\fBdiff_filter\fP: filters (if any) to apply to the diff result (can be tested 
with \fB\-\-test\-diff\-filter\fP)
+\fBdiff_filter\fP: \fI\%Filters\fP (if any) to apply to the diff result (can 
be tested with \fB\-\-test\-diff\-filter\fP)
 .IP \(bu 2
 \fBtreat_new_as_changed\fP: Will treat jobs that don\(aqt have any historic 
data as \fBCHANGED\fP instead of \fBNEW\fP (and create a diff for new jobs)
 .IP \(bu 2
@@ -267,7 +260,7 @@
 .UNINDENT
 .SH SETTING KEYS FOR ALL JOBS AT ONCE
 .sp
-The main configuration file has a \fBjob_defaults\fP
+The main \fI\%Configuration\fP file has a \fBjob_defaults\fP
 key that can be used to configure keys for all jobs at once.
 .sp
 See \fBurlwatch\-config(5)\fP for how to configure job defaults.
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/urlwatch-2.26/share/man/man5/urlwatch-reporters.5 
new/urlwatch-2.28/share/man/man5/urlwatch-reporters.5
--- old/urlwatch-2.26/share/man/man5/urlwatch-reporters.5       2023-04-11 
13:28:02.000000000 +0200
+++ new/urlwatch-2.28/share/man/man5/urlwatch-reporters.5       2023-05-03 
10:05:47.000000000 +0200
@@ -27,7 +27,7 @@
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "URLWATCH-REPORTERS" "5" "Apr 11, 2023" "" "urlwatch"
+.TH "URLWATCH-REPORTERS" "5" "May 03, 2023" "" "urlwatch"
 .SH NAME
 urlwatch-reporters \- Reporters for change notifications
 .SH SYNOPSIS
@@ -61,7 +61,7 @@
 .sp
 This will create a test report with \fBnew\fP, \fBchanged\fP, \fBunchanged\fP 
and
 \fBerror\fP notifications (only the ones configured in \fBdisplay\fP in the
-configuration will be shown in the report) and send it via the
+\fI\%Configuration\fP will be shown in the report) and send it via the
 \fBstdout\fP reporter (if it is enabled).
 .sp
 To test if your e\-mail reporter is configured correctly, you can use:
@@ -129,7 +129,7 @@
 .sp
 You can configure urlwatch to send real time notifications about changes
 via \fI\%Pushover\fP <\fBhttps://pushover.net/\fP>\&. To enable this, ensure 
you have the
-\fBchump\fP python package installed (see dependencies). Then edit your config
+\fBchump\fP python package installed (see \fI\%Dependencies\fP). Then edit 
your config
 (\fBurlwatch \-\-edit\-config\fP) and enable pushover. You will also need to
 add to the config your Pushover user key and a unique app key (generated
 by registering urlwatch as an application on your \fI\%Pushover account\fP 
<\fBhttps://pushover.net/apps/build\fP>\&.
@@ -272,7 +272,7 @@
 .UNINDENT
 .UNINDENT
 .sp
-To set up Discord, from your Discord Server settings, select Integration and 
then create a "New Webhook", give the webhook a name to post under, select a 
channel, push "Copy Webhook URL" and paste it into the configuration as seen 
above.
+To set up Discord, from your Discord Server settings, select Integration and 
then create a \(dqNew Webhook\(dq, give the webhook a name to post under, 
select a channel, push \(dqCopy Webhook URL\(dq and paste it into the 
configuration as seen above.
 .sp
 Embedded content might be easier to read and identify individual reports. 
Subject precedes the embedded report and is only used when \fIembed\fP is true.
 .sp
@@ -283,7 +283,7 @@
 .sp
 \fI\%https://ifttt.com/maker_webhooks/settings\fP
 .sp
-The URL shown in "Account Info" has the following format:
+The URL shown in \(dqAccount Info\(dq has the following format:
 .INDENT 0.0
 .INDENT 3.5
 .sp
@@ -340,7 +340,7 @@
 .IP 3. 3
 Set the display name and avatar, if desired.
 .IP 4. 3
-In the settings page, select the "Help & About" tab, scroll down to the bottom 
and click Access
+In the settings page, select the \(dqHelp & About\(dq tab, scroll down to the 
bottom and click Access
 Token: <click to reveal>.
 .IP 5. 3
 Copy the highlighted text to your configuration.
@@ -362,8 +362,8 @@
 .ft C
 matrix:
   homeserver: https://matrix.org
-  access_token: "YOUR_TOKEN_HERE"
-  room_id: "!roomroomroom:matrix.org"
+  access_token: \(dqYOUR_TOKEN_HERE\(dq
+  room_id: \(dq!roomroomroom:matrix.org\(dq
   enabled: true
 .ft P
 .fi
@@ -507,8 +507,8 @@
 .ft C
 xmpp:
   enabled: true
-  sender: "BOT_ACCOUNT_NAME"
-  recipient: "YOUR_ACCOUNT_NAME"
+  sender: \(dqBOT_ACCOUNT_NAME\(dq
+  recipient: \(dqYOUR_ACCOUNT_NAME\(dq
 .ft P
 .fi
 .UNINDENT
@@ -563,7 +563,7 @@
 .UNINDENT
 .UNINDENT
 .sp
-The “subject" field is similar to the subject field in the email, and
+The \(dqsubject\(dq field is similar to the subject field in the email, and
 will be used as the name of the Prowl event. The application is prepended
 to the event and shown as the source of the event in the Prowl App.
 .SH SHELL
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/urlwatch-2.26/share/man/man7/urlwatch-cookbook.7 
new/urlwatch-2.28/share/man/man7/urlwatch-cookbook.7
--- old/urlwatch-2.26/share/man/man7/urlwatch-cookbook.7        2023-04-11 
13:28:02.000000000 +0200
+++ new/urlwatch-2.28/share/man/man7/urlwatch-cookbook.7        2023-05-03 
10:05:47.000000000 +0200
@@ -27,7 +27,7 @@
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "URLWATCH-COOKBOOK" "7" "Apr 11, 2023" "" "urlwatch"
+.TH "URLWATCH-COOKBOOK" "7" "May 03, 2023" "" "urlwatch"
 .SH NAME
 urlwatch-cookbook \- Advanced topics and recipes for urlwatch
 .SH ADDING URLS FROM THE COMMAND LINE
@@ -48,7 +48,7 @@
 You can also specify an external \fBdiff\fP\-style tool (a tool that takes
 two filenames (old, new) as parameter and returns on its standard output
 the difference of the files), for example to use \fBwdiff(1)\fP to get
-word\-based differences instead of line\-based difference:
+word\-based differences instead of line\-based differences, or \fI\%pandiff\fP 
<\fBhttps://github.com/davidar/pandiff\fP> to get markdown differences:
 .INDENT 0.0
 .INDENT 3.5
 .sp
@@ -75,7 +75,7 @@
 .sp
 .nf
 .ft C
-diff_tool: "diff \-\-ignore\-all\-space \-\-unified"
+diff_tool: \(dqdiff \-\-ignore\-all\-space \-\-unified\(dq
 .ft P
 .fi
 .UNINDENT
@@ -86,7 +86,7 @@
 .SH ONLY SHOW ADDED OR REMOVED LINES
 .sp
 The \fBdiff_filter\fP feature can be used to filter the diff output text
-with the same tools (see filters) used for filtering web pages.
+with the same tools (see \fI\%Filters\fP) used for filtering web pages.
 .sp
 In order to show only diff lines with added lines, use:
 .INDENT 0.0
@@ -146,8 +146,8 @@
 .UNINDENT
 .UNINDENT
 .sp
-We want to filter all lines starting with "+" only, but because of
-the headers we also want to filter lines that start with "+++",
+We want to filter all lines starting with \(dq+\(dq only, but because of
+the headers we also want to filter lines that start with \(dq+++\(dq,
 which can be accomplished like so:
 .INDENT 0.0
 .INDENT 3.5
@@ -156,15 +156,15 @@
 .ft C
 url: http://example.com/only\-added.html
 diff_filter:
-  \- grep: \(aq^[+]\(aq      # Include all lines starting with "+"
-  \- grepi: \(aq^[+]{3}\(aq  # Exclude the line starting with "+++"
+  \- grep: \(aq^[+]\(aq      # Include all lines starting with \(dq+\(dq
+  \- grepi: \(aq^[+]{3}\(aq  # Exclude the line starting with \(dq+++\(dq
 .ft P
 .fi
 .UNINDENT
 .UNINDENT
 .sp
 This deals with all diff lines now, but since urlwatch reports
-"changed" pages even when the \fBdiff_filter\fP returns an empty string
+\(dqchanged\(dq pages even when the \fBdiff_filter\fP returns an empty string
 (which might be useful in some cases), you have to explicitly opt out
 by using \fBurlwatch \-\-edit\-config\fP and setting the \fBempty\-diff\fP
 option to \fBfalse\fP in the \fBdisplay\fP category:
@@ -189,7 +189,7 @@
 The output of the custom script will then be the diff result as reported by
 urlwatch, so if it outputs any status, the \fBCHANGED\fP notification that
 urlwatch does will contain the output of the custom script, not the original
-diff. This can even have a "normal" filter attached to only watch links
+diff. This can even have a \(dqnormal\(dq filter attached to only watch links
 (the \fBcss: a\fP part of the filter definitions):
 .INDENT 0.0
 .INDENT 3.5
@@ -337,8 +337,8 @@
 .sp
 .nf
 .ft C
-name: "urlwatch watchdog"
-command: "date"
+name: \(dqurlwatch watchdog\(dq
+command: \(dqdate\(dq
 .ft P
 .fi
 .UNINDENT
@@ -396,7 +396,7 @@
       selector: div#objects_container
       exclude: \(aqdiv.x, #m_more_friends_who_like_this, img\(aq
   \- re.sub:
-      pattern: \(aq(/events/\ed*)[^"]*\(aq
+      pattern: \(aq(/events/\ed*)[^\(dq]*\(aq
       repl: \(aq\e1\(aq
   \- html2text: pyhtml2text
 .ft P
@@ -426,18 +426,24 @@
 .SH CONFIGURING HOW LONG BROWSER JOBS WAIT FOR PAGES TO LOAD
 .sp
 For browser jobs, you can configure how long the headless browser will wait
-before a page is considered loaded by using the \fIwait_until\fP option. It 
can take one of four values:
+before a page is considered loaded by using the \fBwait_until\fP option.
+.sp
+It can take one of four values (see \fI\%wait_until docs\fP 
<\fBhttps://playwright.dev/python/docs/api/class-page#page-goto-option-wait-until\fP>
 of Playwright):
 .INDENT 0.0
 .INDENT 3.5
 .INDENT 0.0
 .IP \(bu 2
-\fIload\fP will wait until the \fIload\fP browser event is fired (default).
+\fBload\fP \- consider operation to be finished when the load event is fired
 .IP \(bu 2
-\fIdocumentloaded\fP will wait until the \fIDOMContentLoaded\fP browser event 
is fired.
+\fBdomcontentloaded\fP \- consider operation to be finished when the
+DOMContentLoaded event is fired
 .IP \(bu 2
-\fInetworkidle0\fP will wait until there are no more than 0 network 
connections for at least 500 ms.
+\fBnetworkidle\fP \- \fBdiscouraged\fP; consider operation to be finished when
+there are no network connections for at least 500 ms. Don\(aqt use this method
+for testing; rely on web assertions to assess readiness instead
 .IP \(bu 2
-\fInetworkidle2\fP will wait until there are no more than 2 network 
connections for at least 500 ms.
+\fBcommit\fP \- consider operation to be finished when network response is
+received and the document started loading
 .UNINDENT
 .UNINDENT
 .UNINDENT
@@ -475,15 +481,15 @@
 .sp
 .nf
 .ft C
-name: "Looking for Thing A"
+name: \(dqLooking for Thing A\(dq
 url: http://example.com/#1
 filter:
-  \- grep: "Thing A"
+  \- grep: \(dqThing A\(dq
 \-\-\-
-name: "Looking for Thing B"
+name: \(dqLooking for Thing B\(dq
 url: http://example.com/#2
 filter:
-  \- grep: "Thing B"
+  \- grep: \(dqThing B\(dq
 .ft P
 .fi
 .UNINDENT
@@ -530,12 +536,12 @@
 .sp
 .nf
 .ft C
-name: "My POST Job"
+name: \(dqMy POST Job\(dq
 url: http://example.com/foo
 data:
-  username: "foo"
-  password: "bar"
-  submit: "Send query"
+  username: \(dqfoo\(dq
+  password: \(dqbar\(dq
+  submit: \(dqSend query\(dq
 .ft P
 .fi
 .UNINDENT
@@ -552,12 +558,12 @@
 .sp
 .nf
 .ft C
-name: "My PUT Request"
+name: \(dqMy PUT Request\(dq
 url: http://example.com/item/new
 method: PUT
 headers:
   Content\-type: application/json
-data: \(aq{"foo": true}\(aq
+data: \(aq{\(dqfoo\(dq: true}\(aq
 .ft P
 .fi
 .UNINDENT
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/urlwatch-2.26/share/man/man7/urlwatch-deprecated.7 
new/urlwatch-2.28/share/man/man7/urlwatch-deprecated.7
--- old/urlwatch-2.26/share/man/man7/urlwatch-deprecated.7      2023-04-11 
13:28:02.000000000 +0200
+++ new/urlwatch-2.28/share/man/man7/urlwatch-deprecated.7      2023-05-03 
10:05:47.000000000 +0200
@@ -27,12 +27,23 @@
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "URLWATCH-DEPRECATED" "7" "Apr 11, 2023" "" "urlwatch"
+.TH "URLWATCH-DEPRECATED" "7" "May 03, 2023" "" "urlwatch"
 .SH NAME
 urlwatch-deprecated \- Documentation of feature deprecation in urlwatch
 .sp
 As features are deprecated and cleaned up, they are documented
 here with steps to update your configuration for replacements.
+.SH NETWORKIDLE0 AND NETWORKIDLE2 FOR WAIT_UNTIL IN BROWSER JOBS (SINCE 2.28)
+.sp
+Since version 2.28, execution of browser jobs uses Playwright instead of 
Pyppeteer.
+.sp
+The previously\-supported \fBwait_until\fP values of \fBnetworkidle0\fP and 
\fBnetworkidle2\fP
+are not supported anymore. Playwright supports the values \fBload\fP, 
\fBdomcontentloaded\fP,
+\fBnetworkidle\fP (discouraged) or \fBcommit\fP instead.
+.sp
+Existing settings of \fBnetworkidle0\fP and \fBnetworkidle2\fP will be mapped 
to
+\fBnetworkidle\fP, and a warning will be issued. To silence the warning and 
continue
+to use \fBnetworkidle\fP, specify \fBwait_until: networkidle\fP explicitly.
 .SH FILTERS WITHOUT SUBFILTERS (SINCE 2.22)
 .sp
 In older urlwatch versions, it was possible to write custom
@@ -45,7 +56,7 @@
 .nf
 .ft C
 class CustomFilter(filters.FilterBase):
-    """My old custom filter"""
+    \(dq\(dq\(dqMy old custom filter\(dq\(dq\(dq
 
     __kind__ = \(aqfoo\(aq
 
@@ -65,7 +76,7 @@
 .nf
 .ft C
 class CustomFilter(filters.FilterBase):
-    """My new custom filter"""
+    \(dq\(dq\(dqMy new custom filter\(dq\(dq\(dq
 
     __kind__ = \(aqfoo\(aq
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/urlwatch-2.26/share/man/man7/urlwatch-intro.7 
new/urlwatch-2.28/share/man/man7/urlwatch-intro.7
--- old/urlwatch-2.26/share/man/man7/urlwatch-intro.7   2023-04-11 
13:28:02.000000000 +0200
+++ new/urlwatch-2.28/share/man/man7/urlwatch-intro.7   2023-05-03 
10:05:47.000000000 +0200
@@ -27,7 +27,7 @@
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "URLWATCH-INTRO" "7" "Apr 11, 2023" "" "urlwatch"
+.TH "URLWATCH-INTRO" "7" "May 03, 2023" "" "urlwatch"
 .SH NAME
 urlwatch-intro \- Introduction to basic urlwatch usage
 .SH QUICK START
@@ -61,14 +61,14 @@
 .IP \(bu 2
 retrieves the output of each job and filters it
 .IP \(bu 2
-compares it with the version retrieved the previous time ("diffing")
+compares it with the version retrieved the previous time (\(dqdiffing\(dq)
 .IP \(bu 2
 if it finds any differences, it invokes enabled reporters (e.g.
 text reporter, e\-mail reporter, ...) to notify you of the changes
 .UNINDENT
 .SH JOBS AND FILTERS
 .sp
-Each website or shell command to be monitored constitutes a "job".
+Each website or shell command to be monitored constitutes a \(dqjob\(dq.
 .sp
 The instructions for each such job are contained in a config file in the 
\fI\%YAML
 format\fP <\fBhttps://yaml.org/spec/\fP>\&. If you have more than one job, you 
separate them with a line
@@ -121,7 +121,7 @@
 See \fBurlwatch\-jobs(5)\fP for detailed information on job configuration.
 .SS Filters
 .sp
-You may use the \fBfilter\fP key to select one or more filters to apply to
+You may use the \fBfilter\fP key to select one or more \fI\%Filters\fP to 
apply to
 the data after it is retrieved, for example to:
 .INDENT 0.0
 .IP \(bu 2
@@ -152,12 +152,12 @@
 .sp
 .nf
 .ft C
-name: "Sample urlwatch job definition"
-url: "https://example.dummy/";
-https_proxy: "http://dummy.proxy/";
+name: \(dqSample urlwatch job definition\(dq
+url: \(dqhttps://example.dummy/\(dq
+https_proxy: \(dqhttp://dummy.proxy/\(dq
 max_tries: 2
 filter:
-  \- xpath: \(aq//section[@role="main"]\(aq
+  \- xpath: \(aq//section[@role=\(dqmain\(dq]\(aq
   \- html2text:
       method: pyhtml2text
       unicode_snob: true
@@ -167,7 +167,7 @@
       ignore_images: true
       pad_tables: false
       single_line_break: true
-  \- grep: "lines I care about"
+  \- grep: \(dqlines I care about\(dq
   \- sort:
 \-\-\-
 .ft P
@@ -181,7 +181,7 @@
 \fIurlwatch\fP can be configured to do something with its report besides
 (or in addition to) the default of displaying it on the console.
 .sp
-reporters are configured in the global configuration file:
+\fI\%Reporters\fP are configured in the global configuration file:
 .INDENT 0.0
 .INDENT 3.5
 .sp
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/urlwatch-2.26/urlwatch new/urlwatch-2.28/urlwatch
--- old/urlwatch-2.26/urlwatch  2023-04-11 13:28:02.000000000 +0200
+++ new/urlwatch-2.28/urlwatch  2023-05-03 10:05:47.000000000 +0200
@@ -1,9 +1,17 @@
 #!/usr/bin/env python3
 # Convenience script to run urlwatch from a Git checkout
-# This is NOT the script that gets installed as part of "setup.py install"
+# This is NOT the script that gets installed as part of "pip install",
+# for that see the definition of "entry_points" in setup.py.
+
 
 import os
 import sys
-sys.path.insert(0, os.path.join(os.path.dirname(os.path.realpath(__file__)), 
'lib'))
-from urlwatch.cli import main
-main()
+
+HERE = os.path.dirname(os.path.realpath(__file__))
+
+sys.path.insert(0, os.path.join(HERE, 'lib'))
+
+from urlwatch import cli
+
+cli.prefix = HERE
+cli.main()
