Hello community,

here is the log from the commit of package python-distributed for openSUSE:Factory checked in at 2020-10-07 14:17:34
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-distributed (Old)
 and      /work/SRC/openSUSE:Factory/.python-distributed.new.4249 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "python-distributed"

Wed Oct  7 14:17:34 2020 rev:36 rq:839705 version:2.29.0

Changes:
--------
--- /work/SRC/openSUSE:Factory/python-distributed/python-distributed.changes    2020-09-21 17:46:04.813063253 +0200
+++ /work/SRC/openSUSE:Factory/.python-distributed.new.4249/python-distributed.changes  2020-10-07 14:17:37.957444944 +0200
@@ -1,0 +2,19 @@
+Mon Oct  5 20:16:12 UTC 2020 - Arun Persaud <a...@gmx.de>
+
+- update to version 2.29.0:
+  * Use pandas.testing (GH#4138) jakirkham
+  * Fix a few typos (GH#4131) Pav A
+  * Return right away in Cluster.close if cluster is already closed
+    (GH#4116) Tom Rochette
+  * Update async doc with example on .compute() vs client.compute()
+    (GH#4137) Benjamin Zaitlen
+  * Correctly tear down LoopRunner in Client (GH#4112) Sergey Kozlov
+  * Simplify Client._graph_to_futures() (GH#4127) Mads
+    R. B. Kristensen
+  * Cleanup new exception traceback (GH#4125) Krishan Bhasin
+  * Stop writing config files by default (GH#4123) Matthew Rocklin
+
+- changes from version 2.28.0:
+  * Fix SSL connection_args for progressbar connect (GH#4122) jennalc
+
+-------------------------------------------------------------------

Old:
----
  distributed-2.27.0.tar.gz

New:
----
  distributed-2.29.0.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-distributed.spec ++++++
--- /var/tmp/diff_new_pack.JXXPVY/_old  2020-10-07 14:17:38.573445434 +0200
+++ /var/tmp/diff_new_pack.JXXPVY/_new  2020-10-07 14:17:38.573445434 +0200
@@ -21,7 +21,7 @@
 # Test requires network connection
 %bcond_with     test
 Name:           python-distributed
-Version:        2.27.0
+Version:        2.29.0
 Release:        0
 Summary:        Library for distributed computing with Python
 License:        BSD-3-Clause

++++++ distributed-2.27.0.tar.gz -> distributed-2.29.0.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-2.27.0/PKG-INFO new/distributed-2.29.0/PKG-INFO
--- old/distributed-2.27.0/PKG-INFO     2020-09-19 04:34:27.506927000 +0200
+++ new/distributed-2.29.0/PKG-INFO     2020-10-03 01:23:00.334293100 +0200
@@ -1,6 +1,6 @@
 Metadata-Version: 1.2
 Name: distributed
-Version: 2.27.0
+Version: 2.29.0
 Summary: Distributed scheduler for Dask
 Home-page: https://distributed.dask.org
 Maintainer: Matthew Rocklin
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-2.27.0/distributed/_version.py new/distributed-2.29.0/distributed/_version.py
--- old/distributed-2.27.0/distributed/_version.py      2020-09-19 04:34:27.507908000 +0200
+++ new/distributed-2.29.0/distributed/_version.py      2020-10-03 01:23:00.335679500 +0200
@@ -8,11 +8,11 @@
 
 version_json = '''
 {
- "date": "2020-09-18T21:33:59-0500",
+ "date": "2020-10-02T18:22:26-0500",
  "dirty": false,
  "error": null,
- "full-revisionid": "ecaf14097f5e69e5b884e9c87b708c85d181a9ef",
- "version": "2.27.0"
+ "full-revisionid": "a80b867cf40b05aa423a19aea1a077764ffba0f4",
+ "version": "2.29.0"
 }
 '''  # END VERSION_JSON
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-2.27.0/distributed/client.py new/distributed-2.29.0/distributed/client.py
--- old/distributed-2.27.0/distributed/client.py        2020-09-19 04:28:28.000000000 +0200
+++ new/distributed-2.29.0/distributed/client.py        2020-10-03 01:12:36.000000000 +0200
@@ -684,7 +684,6 @@
 
         self._connecting_to_scheduler = False
         self._asynchronous = asynchronous
-        self._should_close_loop = not loop
         self._loop_runner = LoopRunner(loop=loop, asynchronous=asynchronous)
         self.io_loop = self.loop = self._loop_runner.loop
 
@@ -1433,7 +1432,7 @@
 
         assert self.status == "closed"
 
-        if self._should_close_loop and not shutting_down():
+        if not shutting_down():
             self._loop_runner.stop()
 
     async def _shutdown(self):
@@ -2569,6 +2568,13 @@
             if actors is not None and actors is not True and actors is not False:
                 actors = list(self._expand_key(actors))
 
+            if restrictions:
+                restrictions = keymap(tokey, restrictions)
+                restrictions = valmap(list, restrictions)
+
+            if loose_restrictions is not None:
+                loose_restrictions = list(map(tokey, loose_restrictions))
+
             keyset = set(keys)
 
             values = {
@@ -2579,55 +2585,60 @@
             if values:
                 dsk = subs_multiple(dsk, values)
 
-            d = {k: unpack_remotedata(v, byte_keys=True) for k, v in dsk.items()}
-            extra_futures = set.union(*[v[1] for v in d.values()]) if d else set()
-            extra_keys = {tokey(future.key) for future in extra_futures}
-            dsk2 = str_graph({k: v[0] for k, v in d.items()}, extra_keys)
-            dsk3 = {k: v for k, v in dsk2.items() if k is not v}
-            for future in extra_futures:
+            # Unpack remote data in `dsk`, which are "WrappedKeys" that are
+            # unknown to `dsk` but known to the scheduler.
+            dsk = {k: unpack_remotedata(v) for k, v in dsk.items()}
+            unpacked_futures = (
+                set.union(*[v[1] for v in dsk.values()]) if dsk else set()
+            )
+            for future in unpacked_futures:
                 if future.client is not self:
                     msg = "Inputs contain futures that were created by another client."
                     raise ValueError(msg)
+                if tokey(future.key) not in self.futures:
+                    raise CancelledError(tokey(future.key))
+            unpacked_futures_deps = {}
+            for k, v in dsk.items():
+                if len(v[1]):
+                    unpacked_futures_deps[k] = {f.key for f in v[1]}
+            dsk = {k: v[0] for k, v in dsk.items()}
 
-            if restrictions:
-                restrictions = keymap(tokey, restrictions)
-                restrictions = valmap(list, restrictions)
-
-            if loose_restrictions is not None:
-                loose_restrictions = list(map(tokey, loose_restrictions))
-
-            future_dependencies = {
-                tokey(k): {tokey(f.key) for f in v[1]} for k, v in d.items()
-            }
-
-            for s in future_dependencies.values():
-                for v in s:
-                    if v not in self.futures:
-                        raise CancelledError(v)
-
+            # Find dependencies for the scheduler,
             dependencies = {k: get_dependencies(dsk, k) for k in dsk}
 
             if priority is None:
-                priority = dask.order.order(dsk, dependencies=dependencies)
+                # Removing all unpacked futures before calling order()
+                unpacked_keys = {future.key for future in unpacked_futures}
+                stripped_dsk = {k: v for k, v in dsk.items() if k not in unpacked_keys}
+                stripped_deps = {
+                    k: v - unpacked_keys
+                    for k, v in dependencies.items()
+                    if k not in unpacked_keys
+                }
+                priority = dask.order.order(stripped_dsk, dependencies=stripped_deps)
                 priority = keymap(tokey, priority)
 
+            # Append the dependencies of unpacked futures.
+            for k, v in unpacked_futures_deps.items():
+                dependencies[k] = set(dependencies.get(k, ())) | v
+
+            # The scheduler expect all keys to be strings
             dependencies = {
                 tokey(k): [tokey(dep) for dep in deps]
                 for k, deps in dependencies.items()
                 if deps
             }
-            for k, deps in future_dependencies.items():
-                if deps:
-                    dependencies[k] = list(set(dependencies.get(k, ())) | deps)
+            dsk = str_graph(dsk, extra_values={f.key for f in unpacked_futures})
 
             if isinstance(retries, Number) and retries > 0:
-                retries = {k: retries for k in dsk3}
+                retries = {k: retries for k in dsk}
 
+            # Create futures before sending graph (helps avoid contention)
             futures = {key: Future(key, self, inform=False) for key in keyset}
             self._send_to_scheduler(
                 {
                     "op": "update-graph",
-                    "tasks": valmap(dumps_task, dsk3),
+                    "tasks": valmap(dumps_task, dsk),
                     "dependencies": dependencies,
                     "keys": list(map(tokey, keys)),
                     "restrictions": restrictions or {},
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-2.27.0/distributed/comm/core.py new/distributed-2.29.0/distributed/comm/core.py
--- old/distributed-2.27.0/distributed/comm/core.py     2020-09-18 22:52:00.000000000 +0200
+++ new/distributed-2.29.0/distributed/comm/core.py     2020-10-03 01:12:36.000000000 +0200
@@ -140,12 +140,12 @@
                     local["pickle-protocol"], remote["pickle-protocol"]
                 )
             }
-        except KeyError:
+        except KeyError as e:
             raise ValueError(
                 "Your Dask versions may not be in sync. "
                 "Please ensure that you have the same version of dask "
                 "and distributed on your client, scheduler, and worker machines"
-            )
+            ) from e
 
         if local["compression"] == remote["compression"]:
             out["compression"] = local["compression"]
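
The ``raise ... from e`` change above is the "Cleanup new exception traceback"
changelog item: chaining records the original ``KeyError`` as the direct cause
of the ``ValueError``, replacing the misleading "During handling of the above
exception, another exception occurred" banner. A minimal standalone sketch of
the same pattern (hypothetical helper, not distributed's actual API):

    def pickle_protocol(handshake: dict) -> str:
        # Chain the ValueError to the KeyError that caused it; the traceback
        # then reads "The above exception was the direct cause of the
        # following exception".
        try:
            return handshake["pickle-protocol"]
        except KeyError as e:
            raise ValueError("handshake is missing 'pickle-protocol'") from e
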
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-2.27.0/distributed/config.py new/distributed-2.29.0/distributed/config.py
--- old/distributed-2.27.0/distributed/config.py        2020-08-25 19:13:46.000000000 +0200
+++ new/distributed-2.29.0/distributed/config.py        2020-10-03 01:12:36.000000000 +0200
@@ -12,7 +12,6 @@
 
 
 fn = os.path.join(os.path.dirname(__file__), "distributed.yaml")
-dask.config.ensure_file(source=fn)
 
 with open(fn) as f:
     defaults = yaml.safe_load(f)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-2.27.0/distributed/deploy/cluster.py new/distributed-2.29.0/distributed/deploy/cluster.py
--- old/distributed-2.27.0/distributed/deploy/cluster.py        2020-09-19 04:28:28.000000000 +0200
+++ new/distributed-2.29.0/distributed/deploy/cluster.py        2020-10-03 01:12:36.000000000 +0200
@@ -91,6 +91,15 @@
         self.status = Status.closed
 
     def close(self, timeout=None):
+        # If the cluster is already closed, we're already done
+        if self.status == Status.closed:
+            if self.asynchronous:
+                future = asyncio.Future()
+                future.set_result(None)
+                return future
+            else:
+                return
+
         with suppress(RuntimeError):  # loop closed during process shutdown
             return self.sync(self._close, callback_timeout=timeout)
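
The early return above makes ``Cluster.close`` idempotent. A hedged usage
sketch (the cluster parameters are illustrative, mirroring the new test file
further down):

    from distributed import LocalCluster

    cluster = LocalCluster(n_workers=1, processes=False, dashboard_address=None)
    cluster.close()
    cluster.close()  # second call is now a no-op: status is already Status.closed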
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-2.27.0/distributed/deploy/tests/test_adaptive.py new/distributed-2.29.0/distributed/deploy/tests/test_adaptive.py
--- old/distributed-2.27.0/distributed/deploy/tests/test_adaptive.py    2020-09-18 22:52:00.000000000 +0200
+++ new/distributed-2.29.0/distributed/deploy/tests/test_adaptive.py    2020-10-03 01:12:36.000000000 +0200
@@ -298,7 +298,7 @@
             start = time()
             while len(cluster.scheduler.workers) != 2:
                 await asyncio.sleep(0.1)
-                assert time() < start + 1
+                assert time() < start + 3
 
 
 @gen_test(timeout=30)
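
The loosened bound above (1 s to 3 s) follows the test suite's
poll-until-deadline idiom; a generic sketch of the pattern, with illustrative
names, extracted from the hunk:

    import asyncio
    from time import time

    async def wait_for_workers(cluster, n, timeout=3.0):
        # Poll until the scheduler sees `n` workers or the deadline passes.
        start = time()
        while len(cluster.scheduler.workers) != n:
            await asyncio.sleep(0.1)
            assert time() < start + timeout
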
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-2.27.0/distributed/diagnostics/progressbar.py new/distributed-2.29.0/distributed/diagnostics/progressbar.py
--- old/distributed-2.27.0/distributed/diagnostics/progressbar.py       2020-09-18 22:52:06.000000000 +0200
+++ new/distributed-2.29.0/distributed/diagnostics/progressbar.py       2020-09-26 05:40:54.000000000 +0200
@@ -256,8 +256,7 @@
             return result
 
         self.comm = await connect(
-            self.scheduler,
-            connection_args=self.client().connection_args if self.client else None,
+            self.scheduler, **(self.client().connection_args if self.client else {})
         )
         logger.debug("Progressbar Connected to scheduler")
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-2.27.0/distributed/semaphore.py new/distributed-2.29.0/distributed/semaphore.py
--- old/distributed-2.27.0/distributed/semaphore.py     2020-09-18 22:52:01.000000000 +0200
+++ new/distributed-2.29.0/distributed/semaphore.py     2020-10-03 01:12:36.000000000 +0200
@@ -386,7 +386,7 @@
         self.client._periodic_callbacks[self._periodic_callback_name] = pc
 
         # Need to start the callback using IOLoop.add_callback to ensure that the
-        # PC uses the correct event lopp.
+        # PC uses the correct event loop.
         self.client.io_loop.add_callback(pc.start)
 
     def register(self):
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-2.27.0/distributed/tests/test_client.py new/distributed-2.29.0/distributed/tests/test_client.py
--- old/distributed-2.27.0/distributed/tests/test_client.py     2020-09-18 22:52:01.000000000 +0200
+++ new/distributed-2.29.0/distributed/tests/test_client.py     2020-10-03 01:12:36.000000000 +0200
@@ -5403,10 +5403,10 @@
     await c.close()
 
 
-def test_client_doesnt_close_given_loop(loop, s, a, b):
-    with Client(s["address"], loop=loop) as c:
+def test_client_doesnt_close_given_loop(loop_in_thread, s, a, b):
+    with Client(s["address"], loop=loop_in_thread) as c:
         assert c.submit(inc, 1).result() == 2
-    with Client(s["address"], loop=loop) as c:
+    with Client(s["address"], loop=loop_in_thread) as c:
         assert c.submit(inc, 2).result() == 3
 
 
@@ -6130,7 +6130,7 @@
 
 
 @pytest.mark.asyncio
-async def test_client_gather_semaphor_loop(cleanup):
+async def test_client_gather_semaphore_loop(cleanup):
     async with Scheduler(port=0) as s:
         async with Client(s.address, asynchronous=True) as c:
             assert c._gather_semaphore._loop is c.loop.asyncio_loop
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-2.27.0/distributed/tests/test_client_loop.py new/distributed-2.29.0/distributed/tests/test_client_loop.py
--- old/distributed-2.27.0/distributed/tests/test_client_loop.py        1970-01-01 01:00:00.000000000 +0100
+++ new/distributed-2.29.0/distributed/tests/test_client_loop.py        2020-10-03 01:12:36.000000000 +0200
@@ -0,0 +1,34 @@
+import pytest
+from distributed import LocalCluster, Client
+from distributed.utils import LoopRunner
+
+
+# Test if Client stops LoopRunner on close.
+@pytest.mark.parametrize("with_own_loop", [True, False])
+def test_close_loop_sync(with_own_loop):
+    loop_runner = loop = None
+
+    # Setup simple cluster with one threaded worker.
+    # Complex setup is not required here since we test only IO loop teardown.
+    cluster_params = dict(n_workers=1, dashboard_address=None, processes=False)
+
+    loops_before = LoopRunner._all_loops.copy()
+
+    # Start own loop or use current thread's one.
+    if with_own_loop:
+        loop_runner = LoopRunner()
+        loop_runner.start()
+        loop = loop_runner.loop
+
+    with LocalCluster(loop=loop, **cluster_params) as cluster:
+        with Client(cluster, loop=loop) as client:
+            client.run(max, 1, 2)
+
+    # own loop must be explicitly stopped.
+    if with_own_loop:
+        loop_runner.stop()
+
+    # Internal loops registry must the same as before cluster running.
+    # This means loop runners in LocalCluster and Client correctly stopped.
+    # See LoopRunner._stop_unlocked().
+    assert loops_before == LoopRunner._all_loops
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-2.27.0/distributed/tests/test_collections.py new/distributed-2.29.0/distributed/tests/test_collections.py
--- old/distributed-2.27.0/distributed/tests/test_collections.py        2020-08-25 19:13:46.000000000 +0200
+++ new/distributed-2.29.0/distributed/tests/test_collections.py        2020-10-03 01:12:36.000000000 +0200
@@ -1,3 +1,5 @@
+from distutils.version import LooseVersion
+
 import pytest
 
 pytest.importorskip("numpy")
@@ -11,7 +13,14 @@
 from distributed.utils_test import client, cluster_fixture, loop  # noqa F401
 import numpy as np
 import pandas as pd
-import pandas.testing as tm
+
+PANDAS_VERSION = LooseVersion(pd.__version__)
+PANDAS_GT_100 = PANDAS_VERSION >= LooseVersion("1.0.0")
+
+if PANDAS_GT_100:
+    import pandas.testing as tm  # noqa: F401
+else:
+    import pandas.util.testing as tm  # noqa: F401
 
 
 dfs = [
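
The guarded import above keeps the ``tm`` alias working across pandas
releases: ``pandas.util.testing`` was deprecated in pandas 1.0 in favour of
``pandas.testing``. A small usage sketch, assuming pandas >= 1.0 is installed:

    import pandas as pd
    import pandas.testing as tm

    # assert_frame_equal raises AssertionError on mismatch, returns None otherwise
    tm.assert_frame_equal(pd.DataFrame({"a": [1, 2]}), pd.DataFrame({"a": [1, 2]}))
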
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-2.27.0/distributed/tests/test_steal.py new/distributed-2.29.0/distributed/tests/test_steal.py
--- old/distributed-2.27.0/distributed/tests/test_steal.py      2020-08-25 19:13:46.000000000 +0200
+++ new/distributed-2.29.0/distributed/tests/test_steal.py      2020-10-03 01:12:36.000000000 +0200
@@ -149,7 +149,7 @@
     def do_nothing(x, y=None):
         pass
 
-    # execute and meassure runtime once
+    # execute and measure runtime once
     await wait(c.submit(do_nothing, 1))
 
     futures = c.map(do_nothing, range(1000), y=x)
@@ -166,7 +166,7 @@
     x = c.submit(slowinc, 1, workers=[b.address])
 
     # If the blacklist of fast tasks is tracked somewhere else, this needs to be
-    # changed. This test requies *any* key which is blacklisted.
+    # changed. This test requires *any* key which is blacklisted.
     from distributed.stealing import fast_tasks
 
     blacklisted_key = next(iter(fast_tasks))
@@ -174,7 +174,7 @@
     def fast_blacklisted(x, y=None):
         # The task should observe a certain computation time such that we can
         # ensure that it is not stolen due to the blacklisting. If it is too
-        # fast, the standard mechansim shouldn't allow stealing
+        # fast, the standard mechanism shouldn't allow stealing
         import time
 
         time.sleep(0.01)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-2.27.0/distributed/utils.py new/distributed-2.29.0/distributed/utils.py
--- old/distributed-2.27.0/distributed/utils.py 2020-09-19 04:28:28.000000000 +0200
+++ new/distributed-2.29.0/distributed/utils.py 2020-10-03 01:12:36.000000000 +0200
@@ -371,10 +371,8 @@
                 # We're expecting the loop to run in another thread,
                 # avoid re-using this thread's assigned loop
                 self._loop = IOLoop()
-            self._should_close_loop = True
         else:
             self._loop = loop
-            self._should_close_loop = False
         self._asynchronous = asynchronous
         self._loop_thread = None
         self._started = False
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-2.27.0/distributed.egg-info/PKG-INFO new/distributed-2.29.0/distributed.egg-info/PKG-INFO
--- old/distributed-2.27.0/distributed.egg-info/PKG-INFO        2020-09-19 04:34:27.000000000 +0200
+++ new/distributed-2.29.0/distributed.egg-info/PKG-INFO        2020-10-03 01:22:59.000000000 +0200
@@ -1,6 +1,6 @@
 Metadata-Version: 1.2
 Name: distributed
-Version: 2.27.0
+Version: 2.29.0
 Summary: Distributed scheduler for Dask
 Home-page: https://distributed.dask.org
 Maintainer: Matthew Rocklin
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-2.27.0/distributed.egg-info/SOURCES.txt new/distributed-2.29.0/distributed.egg-info/SOURCES.txt
--- old/distributed-2.27.0/distributed.egg-info/SOURCES.txt     2020-09-19 04:34:27.000000000 +0200
+++ new/distributed-2.29.0/distributed.egg-info/SOURCES.txt     2020-10-03 01:22:59.000000000 +0200
@@ -223,6 +223,7 @@
 distributed/tests/test_batched.py
 distributed/tests/test_client.py
 distributed/tests/test_client_executor.py
+distributed/tests/test_client_loop.py
 distributed/tests/test_collections.py
 distributed/tests/test_config.py
 distributed/tests/test_core.py
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-2.27.0/docs/source/asynchronous.rst new/distributed-2.29.0/docs/source/asynchronous.rst
--- old/distributed-2.27.0/docs/source/asynchronous.rst 2020-08-25 19:13:46.000000000 +0200
+++ new/distributed-2.29.0/docs/source/asynchronous.rst 2020-10-03 01:12:36.000000000 +0200
@@ -64,6 +64,18 @@
    client.sync(f)
 
 
+.. note: Blocking operations like the .compute() method aren’t ok to use in
+         asynchronous mode. Instead you’ll have to use the Client.compute
+         method
+
+
+.. code-block:: python
+
+    async with Client(asynchronous=True) as client:
+        arr = da.random.random((1000, 1000), chunks=(1000, 100))
+        await client.compute(arr.mean())
+
+
 Example
 -------
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-2.27.0/docs/source/changelog.rst new/distributed-2.29.0/docs/source/changelog.rst
--- old/distributed-2.27.0/docs/source/changelog.rst    2020-09-19 04:33:08.000000000 +0200
+++ new/distributed-2.29.0/docs/source/changelog.rst    2020-10-03 01:21:56.000000000 +0200
@@ -1,6 +1,25 @@
 Changelog
 =========
 
+2.29.0 - 2020-10-02
+-------------------
+
+- Use ``pandas.testing`` (:pr:`4138`) `jakirkham`_
+- Fix a few typos (:pr:`4131`) `Pav A`_
+- Return right away in ``Cluster.close`` if cluster is already closed (:pr:`4116`) `Tom Rochette`_
+- Update async doc with example on ``.compute()`` vs ``client.compute()`` (:pr:`4137`) `Benjamin Zaitlen`_
+- Correctly tear down ``LoopRunner`` in ``Client`` (:pr:`4112`) `Sergey Kozlov`_
+- Simplify ``Client._graph_to_futures()`` (:pr:`4127`) `Mads R. B. Kristensen`_
+- Cleanup new exception traceback (:pr:`4125`) `Krishan Bhasin`_
+- Stop writing config files by default (:pr:`4123`) `Matthew Rocklin`_
+
+
+2.28.0 - 2020-09-25
+-------------------
+
+- Fix SSL ``connection_args`` for ``progressbar`` connect (:pr:`4122`) `jennalc`_
+
+
 2.27.0 - 2020-09-18
 -------------------
 
@@ -1965,3 +1984,5 @@
 .. _`Roberto Panai`: https://github.com/rpanai
 .. _`Dror Speiser`: https://github.com/drorspei
 .. _`Poruri Sai Rahul`: https://github.com/rahulporuri
+.. _`jennalc`: https://github.com/jennalc
+.. _`Sergey Kozlov`: https://github.com/skozlovf
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-2.27.0/docs/source/http_services.rst new/distributed-2.29.0/docs/source/http_services.rst
--- old/distributed-2.27.0/docs/source/http_services.rst        2020-09-18 22:52:01.000000000 +0200
+++ new/distributed-2.29.0/docs/source/http_services.rst        2020-10-03 01:12:36.000000000 +0200
@@ -2,7 +2,7 @@
 ==============
 
 A subset of the following pages will be available from the scheduler or
-workers of a running cluster. The list of currently available endpoins can
+workers of a running cluster. The list of currently available endpoints can
 be found by examining ``/sitemap.json``.
 
 

