Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package python-prompt_toolkit for openSUSE:Factory checked in at 2022-12-15 19:24:05
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-prompt_toolkit (Old)
 and      /work/SRC/openSUSE:Factory/.python-prompt_toolkit.new.1835 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "python-prompt_toolkit"

Thu Dec 15 19:24:05 2022 rev:21 rq:1042877 version:3.0.36

Changes:
--------
--- /work/SRC/openSUSE:Factory/python-prompt_toolkit/python-prompt_toolkit.changes     2022-12-07 17:36:02.528893004 +0100
+++ /work/SRC/openSUSE:Factory/.python-prompt_toolkit.new.1835/python-prompt_toolkit.changes   2022-12-15 19:24:06.931684321 +0100
@@ -1,0 +2,21 @@
+Tue Dec 13 16:16:08 UTC 2022 - Yogalakshmi Arunachalam <yarunacha...@suse.com>
+
+- Update to version 3.0.36 
+  Fixes:
+  - Another Python 3.6 fix for a bug that was introduced in 3.0.34.
+
+- Update to version 3.0.35
+  Fixes:
+  - Fix bug introduced in 3.0.34 for Python 3.6. Use asynccontextmanager
+  implementation from prompt_toolkit itself.
+
+- Update to version 3.0.34
+  Fixes:
+  - Improve completion performance in various places.
+  - Improve renderer performance.
+  - Handle `KeyboardInterrupt` when the stacktrace of an unhandled error is
+  displayed.
+  - Use correct event loop in `Application.create_background_task()`.
+  - Fix `show_cursor` attribute in `ScrollablePane`.
+
+-------------------------------------------------------------------

Old:
----
  prompt_toolkit-3.0.33.tar.gz

New:
----
  prompt_toolkit-3.0.36.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-prompt_toolkit.spec ++++++
--- /var/tmp/diff_new_pack.C3jkba/_old  2022-12-15 19:24:07.551687848 +0100
+++ /var/tmp/diff_new_pack.C3jkba/_new  2022-12-15 19:24:07.559687894 +0100
@@ -19,7 +19,7 @@
 %{?!python_module:%define python_module() python-%{**} python3-%{**}}
 %define skip_python2 1
 Name:           python-prompt_toolkit
-Version:        3.0.33
+Version:        3.0.36
 Release:        0
 Summary:        Library for building interactive command lines in Python
 License:        BSD-3-Clause

++++++ prompt_toolkit-3.0.33.tar.gz -> prompt_toolkit-3.0.36.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/prompt_toolkit-3.0.33/CHANGELOG new/prompt_toolkit-3.0.36/CHANGELOG
--- old/prompt_toolkit-3.0.33/CHANGELOG 2022-11-21 14:39:57.000000000 +0100
+++ new/prompt_toolkit-3.0.36/CHANGELOG 2022-12-06 23:35:25.000000000 +0100
@@ -1,6 +1,33 @@
 CHANGELOG
 =========
 
+3.0.36: 2022-12-06
+------------------
+
+Fixes:
+- Another Python 3.6 fix for a bug that was introduced in 3.0.34.
+
+
+3.0.35: 2022-12-06
+------------------
+
+Fixes:
+- Fix bug introduced in 3.0.34 for Python 3.6. Use asynccontextmanager
+  implementation from prompt_toolkit itself.
+
+
+3.0.34: 2022-12-06
+------------------
+
+Fixes:
+- Improve completion performance in various places.
+- Improve renderer performance.
+- Handle `KeyboardInterrupt` when the stacktrace of an unhandled error is
+  displayed.
+- Use correct event loop in `Application.create_background_task()`.
+- Fix `show_cursor` attribute in `ScrollablePane`.
+
+
 3.0.33: 2022-11-21
 ------------------
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/prompt_toolkit-3.0.33/PKG-INFO new/prompt_toolkit-3.0.36/PKG-INFO
--- old/prompt_toolkit-3.0.33/PKG-INFO  2022-11-21 14:41:26.350330600 +0100
+++ new/prompt_toolkit-3.0.36/PKG-INFO  2022-12-06 23:36:14.484341000 +0100
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: prompt_toolkit
-Version: 3.0.33
+Version: 3.0.36
 Summary: Library for building powerful interactive command lines in Python
 Home-page: https://github.com/prompt-toolkit/python-prompt-toolkit
 Author: Jonathan Slenders
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/prompt_toolkit-3.0.33/src/prompt_toolkit/__init__.py new/prompt_toolkit-3.0.36/src/prompt_toolkit/__init__.py
--- old/prompt_toolkit-3.0.33/src/prompt_toolkit/__init__.py    2022-11-21 14:40:23.000000000 +0100
+++ new/prompt_toolkit-3.0.36/src/prompt_toolkit/__init__.py    2022-12-06 23:35:40.000000000 +0100
@@ -18,7 +18,7 @@
 from .shortcuts import PromptSession, print_formatted_text, prompt
 
 # Don't forget to update in `docs/conf.py`!
-__version__ = "3.0.33"
+__version__ = "3.0.36"
 
 # Version tuple.
 VERSION = tuple(__version__.split("."))
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/prompt_toolkit-3.0.33/src/prompt_toolkit/application/application.py new/prompt_toolkit-3.0.36/src/prompt_toolkit/application/application.py
--- old/prompt_toolkit-3.0.33/src/prompt_toolkit/application/application.py     2022-11-21 14:36:18.000000000 +0100
+++ new/prompt_toolkit-3.0.36/src/prompt_toolkit/application/application.py     2022-12-06 23:09:29.000000000 +0100
@@ -1077,7 +1077,8 @@
 
         This is not threadsafe.
         """
-        task: asyncio.Task[None] = get_event_loop().create_task(coroutine)
+        loop = self.loop or get_event_loop()
+        task: asyncio.Task[None] = loop.create_task(coroutine)
         self._background_tasks.add(task)
 
         task.add_done_callback(self._on_background_task_done)
@@ -1469,7 +1470,10 @@
     session: PromptSession[None] = PromptSession(
         message=wait_text, key_bindings=key_bindings
     )
-    await session.app.run_async()
+    try:
+        await session.app.run_async()
+    except KeyboardInterrupt:
+        pass  # Control-c pressed. Don't propagate this error.
 
 
 @contextmanager
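
[Editor's sketch] The two application.py hunks above make `create_background_task()` prefer the loop the application owns (`self.loop or get_event_loop()`) and swallow `KeyboardInterrupt` in the prompt that is shown after an unhandled exception's stacktrace. Below is a minimal sketch of the task-ownership pattern using plain asyncio; `BackgroundTaskOwner` and `main` are invented names, not prompt_toolkit API.

    import asyncio

    class BackgroundTaskOwner:
        """Hypothetical stand-in for the pattern in Application.create_background_task():
        prefer the loop the "application" was started on, fall back to the current loop."""

        def __init__(self):
            self.loop = None                  # set once the "application" starts running
            self._background_tasks = set()

        def create_background_task(self, coroutine):
            loop = self.loop or asyncio.get_event_loop()
            task = loop.create_task(coroutine)
            self._background_tasks.add(task)
            # Drop the reference once the task finishes so it can be garbage collected.
            task.add_done_callback(self._background_tasks.discard)
            return task

    async def main():
        owner = BackgroundTaskOwner()
        owner.loop = asyncio.get_running_loop()        # the loop the application runs on
        owner.create_background_task(asyncio.sleep(0.1))
        await asyncio.sleep(0.2)                       # give the background task time to finish

    asyncio.run(main())
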
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/prompt_toolkit-3.0.33/src/prompt_toolkit/buffer.py new/prompt_toolkit-3.0.36/src/prompt_toolkit/buffer.py
--- old/prompt_toolkit-3.0.33/src/prompt_toolkit/buffer.py      2022-09-01 17:02:15.000000000 +0200
+++ new/prompt_toolkit-3.0.36/src/prompt_toolkit/buffer.py      2022-12-06 23:32:52.000000000 +0100
@@ -42,6 +42,7 @@
     get_common_complete_suffix,
 )
 from .document import Document
+from .eventloop import aclosing
 from .filters import FilterOrBool, to_filter
 from .history import History, InMemoryHistory
 from .search import SearchDirection, SearchState
@@ -1736,15 +1737,41 @@
                 while generating completions."""
                 return self.complete_state == complete_state
 
-            async for completion in self.completer.get_completions_async(
-                document, complete_event
-            ):
-                complete_state.completions.append(completion)
-                self.on_completions_changed.fire()
+            refresh_needed = asyncio.Event()
+
+            async def refresh_while_loading() -> None:
+                """Background loop to refresh the UI at most 3 times a second
+                while the completion are loading. Calling
+                `on_completions_changed.fire()` for every completion that we
+                receive is too expensive when there are many completions. (We
+                could tune `Application.max_render_postpone_time` and
+                `Application.min_redraw_interval`, but having this here is a
+                better approach.)
+                """
+                while True:
+                    self.on_completions_changed.fire()
+                    refresh_needed.clear()
+                    await asyncio.sleep(0.3)
+                    await refresh_needed.wait()
 
-                # If the input text changes, abort.
-                if not proceed():
-                    break
+            refresh_task = asyncio.ensure_future(refresh_while_loading())
+            try:
+                # Load.
+                async with aclosing(
+                    self.completer.get_completions_async(document, complete_event)
+                ) as async_generator:
+                    async for completion in async_generator:
+                        complete_state.completions.append(completion)
+                        refresh_needed.set()
+
+                        # If the input text changes, abort.
+                        if not proceed():
+                            break
+            finally:
+                refresh_task.cancel()
+
+                # Refresh one final time after we got everything.
+                self.on_completions_changed.fire()
 
             completions = complete_state.completions
 
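[Editor's sketch] The buffer.py hunk above stops firing `on_completions_changed` once per completion and instead runs a small background task that coalesces redraws to roughly three per second while completions stream in. Below is a rough standalone sketch of the same throttling idea; `refresh_ui` and `produce_items` are invented placeholders for the real event and completer.

    import asyncio

    async def consume_with_throttled_refresh() -> None:
        refresh_needed = asyncio.Event()
        items = []

        def refresh_ui() -> None:                 # placeholder for on_completions_changed.fire()
            print(f"refresh: {len(items)} items so far")

        async def refresh_while_loading() -> None:
            # Redraw at most ~3 times per second while items are still arriving.
            while True:
                refresh_ui()
                refresh_needed.clear()
                await asyncio.sleep(0.3)
                await refresh_needed.wait()

        async def produce_items():
            for i in range(1000):
                await asyncio.sleep(0.001)
                yield i

        refresh_task = asyncio.ensure_future(refresh_while_loading())
        try:
            async for item in produce_items():
                items.append(item)
                refresh_needed.set()
        finally:
            refresh_task.cancel()
            refresh_ui()                          # one final refresh with everything loaded

    asyncio.run(consume_with_throttled_refresh())
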
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/prompt_toolkit-3.0.33/src/prompt_toolkit/completion/base.py new/prompt_toolkit-3.0.36/src/prompt_toolkit/completion/base.py
--- old/prompt_toolkit-3.0.33/src/prompt_toolkit/completion/base.py     2022-09-01 17:02:15.000000000 +0200
+++ new/prompt_toolkit-3.0.36/src/prompt_toolkit/completion/base.py     2022-12-06 23:09:29.000000000 +0100
@@ -1,10 +1,14 @@
 """
 """
 from abc import ABCMeta, abstractmethod
-from typing import AsyncGenerator, Callable, Iterable, Optional, Sequence
+from typing import AsyncGenerator, Callable, Iterable, List, Optional, Sequence
 
 from prompt_toolkit.document import Document
-from prompt_toolkit.eventloop import generator_to_async_generator
+from prompt_toolkit.eventloop import (
+    aclosing,
+    generator_to_async_generator,
+    get_event_loop,
+)
 from prompt_toolkit.filters import FilterOrBool, to_filter
 from prompt_toolkit.formatted_text import AnyFormattedText, StyleAndTextTuples
 
@@ -224,10 +228,61 @@
         """
         Asynchronous generator of completions.
         """
-        async for completion in generator_to_async_generator(
-            lambda: self.completer.get_completions(document, complete_event)
-        ):
-            yield completion
+        # NOTE: Right now, we are consuming the `get_completions` generator in
+        #       a synchronous background thread, then passing the results one
+        #       at a time over a queue, and consuming this queue in the main
+        #       thread (that's what `generator_to_async_generator` does). That
+        #       means that if the completer is *very* slow, we'll be showing
+        #       completions in the UI once they are computed.
+
+        #       It's very tempting to replace this implementation with the
+        #       commented code below for several reasons:
+
+        #       - `generator_to_async_generator` is not perfect and hard to get
+        #         right. It's a lot of complexity for little gain. The
+        #         implementation needs a huge buffer for it to be efficient
+        #         when there are many completions (like 50k+).
+        #       - Normally, a completer is supposed to be fast, users can have
+        #         "complete while typing" enabled, and want to see the
+        #         completions within a second. Handling one completion at a
+        #         time, and rendering once we get it here doesn't make any
+        #         sense if this is quick anyway.
+        #       - Completers like `FuzzyCompleter` prepare all completions
+        #         anyway so that they can be sorted by accuracy before they are
+        #         yielded. At the point that we start yielding completions
+        #         here, we already have all completions.
+        #       - The `Buffer` class has complex logic to invalidate the UI
+        #         while it is consuming the completions. We don't want to
+        #         invalidate the UI for every completion (if there are many),
+        #         but we want to do it often enough so that completions are
+        #         being displayed while they are produced.
+
+        #       We keep the current behavior mainly for backward-compatibility.
+        #       Similarly, it would be better for this function to not return
+        #       an async generator, but simply be a coroutine that returns a
+        #       list of `Completion` objects, containing all completions at
+        #       once.
+
+        #       Note that this argument doesn't mean we shouldn't use
+        #       `ThreadedCompleter`. It still makes sense to produce
+        #       completions in a background thread, because we don't want to
+        #       freeze the UI while the user is typing. But sending the
+        #       completions one at a time to the UI maybe isn't worth it.
+
+        # def get_all_in_thread() -> List[Completion]:
+        #   return list(self.get_completions(document, complete_event))
+
+        # completions = await get_event_loop().run_in_executor(None, get_all_in_thread)
+        # for completion in completions:
+        #   yield completion
+
+        async with aclosing(
+            generator_to_async_generator(
+                lambda: self.completer.get_completions(document, complete_event)
+            )
+        ) as async_generator:
+            async for completion in async_generator:
+                yield completion
 
     def __repr__(self) -> str:
         return f"ThreadedCompleter({self.completer!r})"
@@ -306,10 +361,11 @@
 
         # Get all completions in a non-blocking way.
         if self.filter():
-            async for item in self.completer.get_completions_async(
-                document, complete_event
-            ):
-                yield item
+            async with aclosing(
+                self.completer.get_completions_async(document, complete_event)
+            ) as async_generator:
+                async for item in async_generator:
+                    yield item
 
 
 class _MergedCompleter(Completer):
@@ -333,8 +389,11 @@
 
         # Get all completions from the other completers in a non-blocking way.
         for completer in self.completers:
-            async for item in completer.get_completions_async(document, complete_event):
-                yield item
+            async with aclosing(
+                completer.get_completions_async(document, complete_event)
+            ) as async_generator:
+                async for item in async_generator:
+                    yield item
 
 
 def merge_completers(
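
[Editor's sketch] Several hunks above wrap async generators in `aclosing(...)` so that `aclose()` is guaranteed to run even when iteration stops early (for example on `break`). Below is a small illustration of why that matters, using `contextlib.aclosing` from Python 3.10 as a stand-in for the helper prompt_toolkit adds to its own eventloop package; the `numbers()` generator is made up.

    import asyncio
    from contextlib import aclosing   # Python 3.10+; prompt_toolkit ships its own equivalent

    async def numbers():
        try:
            for i in range(100):
                yield i
        finally:
            print("generator cleaned up")   # runs deterministically thanks to aclosing()

    async def main() -> None:
        async with aclosing(numbers()) as gen:
            async for n in gen:
                if n == 3:
                    break                   # early exit: aclosing() still calls gen.aclose()

    asyncio.run(main())
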
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/prompt_toolkit-3.0.33/src/prompt_toolkit/completion/fuzzy_completer.py new/prompt_toolkit-3.0.36/src/prompt_toolkit/completion/fuzzy_completer.py
--- old/prompt_toolkit-3.0.33/src/prompt_toolkit/completion/fuzzy_completer.py  2022-09-01 17:02:15.000000000 +0200
+++ new/prompt_toolkit-3.0.36/src/prompt_toolkit/completion/fuzzy_completer.py  2022-12-06 23:09:29.000000000 +0100
@@ -49,7 +49,7 @@
         WORD: bool = False,
         pattern: Optional[str] = None,
         enable_fuzzy: FilterOrBool = True,
-    ):
+    ) -> None:
 
         assert pattern is None or pattern.startswith("^")
 
@@ -77,7 +77,6 @@
     def _get_fuzzy_completions(
         self, document: Document, complete_event: CompleteEvent
     ) -> Iterable[Completion]:
-
         word_before_cursor = document.get_word_before_cursor(
             pattern=re.compile(self._get_pattern())
         )
@@ -88,27 +87,35 @@
             cursor_position=document.cursor_position - len(word_before_cursor),
         )
 
-        completions = list(self.completer.get_completions(document2, complete_event))
+        inner_completions = list(
+            self.completer.get_completions(document2, complete_event)
+        )
 
         fuzzy_matches: List[_FuzzyMatch] = []
 
-        pat = ".*?".join(map(re.escape, word_before_cursor))
-        pat = f"(?=({pat}))"  # lookahead regex to manage overlapping matches
-        regex = re.compile(pat, re.IGNORECASE)
-        for compl in completions:
-            matches = list(regex.finditer(compl.text))
-            if matches:
-                # Prefer the match, closest to the left, then shortest.
-                best = min(matches, key=lambda m: (m.start(), len(m.group(1))))
-                fuzzy_matches.append(
-                    _FuzzyMatch(len(best.group(1)), best.start(), compl)
-                )
-
-        def sort_key(fuzzy_match: "_FuzzyMatch") -> Tuple[int, int]:
-            "Sort by start position, then by the length of the match."
-            return fuzzy_match.start_pos, fuzzy_match.match_length
+        if word_before_cursor == "":
+            # If word before the cursor is an empty string, consider all
+            # completions, without filtering everything with an empty regex
+            # pattern.
+            fuzzy_matches = [_FuzzyMatch(0, 0, compl) for compl in inner_completions]
+        else:
+            pat = ".*?".join(map(re.escape, word_before_cursor))
+            pat = f"(?=({pat}))"  # lookahead regex to manage overlapping matches
+            regex = re.compile(pat, re.IGNORECASE)
+            for compl in inner_completions:
+                matches = list(regex.finditer(compl.text))
+                if matches:
+                    # Prefer the match, closest to the left, then shortest.
+                    best = min(matches, key=lambda m: (m.start(), len(m.group(1))))
+                    fuzzy_matches.append(
+                        _FuzzyMatch(len(best.group(1)), best.start(), compl)
+                    )
+
+            def sort_key(fuzzy_match: "_FuzzyMatch") -> Tuple[int, int]:
+                "Sort by start position, then by the length of the match."
+                return fuzzy_match.start_pos, fuzzy_match.match_length
 
-        fuzzy_matches = sorted(fuzzy_matches, key=sort_key)
+            fuzzy_matches = sorted(fuzzy_matches, key=sort_key)
 
         for match in fuzzy_matches:
             # Include these completions, but set the correct `display`
@@ -117,7 +124,8 @@
                 text=match.completion.text,
                 start_position=match.completion.start_position
                 - len(word_before_cursor),
-                display_meta=match.completion.display_meta,
+                # We access to private `_display_meta` attribute, because that one is lazy.
+                display_meta=match.completion._display_meta,
                 display=self._get_display(match, word_before_cursor),
                 style=match.completion.style,
             )
@@ -128,37 +136,41 @@
         """
         Generate formatted text for the display label.
         """
-        m = fuzzy_match
-        word = m.completion.text
 
-        if m.match_length == 0:
-            # No highlighting when we have zero length matches (no input text).
-            # In this case, use the original display text (which can include
-            # additional styling or characters).
-            return m.completion.display
-
-        result: StyleAndTextTuples = []
-
-        # Text before match.
-        result.append(("class:fuzzymatch.outside", word[: m.start_pos]))
-
-        # The match itself.
-        characters = list(word_before_cursor)
-
-        for c in word[m.start_pos : m.start_pos + m.match_length]:
-            classname = "class:fuzzymatch.inside"
-            if characters and c.lower() == characters[0].lower():
-                classname += ".character"
-                del characters[0]
-
-            result.append((classname, c))
-
-        # Text after match.
-        result.append(
-            ("class:fuzzymatch.outside", word[m.start_pos + m.match_length :])
-        )
+        def get_display() -> AnyFormattedText:
+            m = fuzzy_match
+            word = m.completion.text
+
+            if m.match_length == 0:
+                # No highlighting when we have zero length matches (no input 
text).
+                # In this case, use the original display text (which can 
include
+                # additional styling or characters).
+                return m.completion.display
+
+            result: StyleAndTextTuples = []
+
+            # Text before match.
+            result.append(("class:fuzzymatch.outside", word[: m.start_pos]))
+
+            # The match itself.
+            characters = list(word_before_cursor)
+
+            for c in word[m.start_pos : m.start_pos + m.match_length]:
+                classname = "class:fuzzymatch.inside"
+                if characters and c.lower() == characters[0].lower():
+                    classname += ".character"
+                    del characters[0]
+
+                result.append((classname, c))
+
+            # Text after match.
+            result.append(
+                ("class:fuzzymatch.outside", word[m.start_pos + m.match_length :])
+            )
+
+            return result
 
-        return result
+        return get_display()
 
 
 class FuzzyWordCompleter(Completer):
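
[Editor's sketch] The fuzzy completer above builds a case-insensitive lookahead regex from the typed characters and keeps, for each candidate, the match that starts earliest and is shortest, skipping the regex entirely when the input is empty. Below is a standalone sketch of that scoring idea, runnable outside prompt_toolkit; `best_fuzzy_match` and the sample words are invented for illustration.

    import re
    from typing import List, Optional, Tuple

    def best_fuzzy_match(needle: str, candidate: str) -> Optional[Tuple[int, int]]:
        """Return (start_pos, match_length) of the best fuzzy match, or None."""
        # Needle characters in order, with anything in between; the lookahead lets
        # overlapping matches be enumerated so the shortest one can be chosen.
        pat = ".*?".join(map(re.escape, needle))
        regex = re.compile(f"(?=({pat}))", re.IGNORECASE)
        matches = list(regex.finditer(candidate))
        if not matches:
            return None
        best = min(matches, key=lambda m: (m.start(), len(m.group(1))))
        return best.start(), len(best.group(1))

    words: List[str] = ["django_migrations.py", "django_admin_log", "api_user", "user_group"]
    print(sorted(w for w in words if best_fuzzy_match("djm", w) is not None))
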
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/prompt_toolkit-3.0.33/src/prompt_toolkit/eventloop/__init__.py new/prompt_toolkit-3.0.36/src/prompt_toolkit/eventloop/__init__.py
--- old/prompt_toolkit-3.0.33/src/prompt_toolkit/eventloop/__init__.py  2022-09-01 17:02:15.000000000 +0200
+++ new/prompt_toolkit-3.0.36/src/prompt_toolkit/eventloop/__init__.py  2022-12-06 23:09:29.000000000 +0100
@@ -1,4 +1,4 @@
-from .async_generator import generator_to_async_generator
+from .async_generator import aclosing, generator_to_async_generator
 from .inputhook import (
     InputHookContext,
     InputHookSelector,
@@ -15,6 +15,7 @@
 __all__ = [
     # Async generator
     "generator_to_async_generator",
+    "aclosing",
     # Utils.
     "run_in_executor_with_context",
     "call_soon_threadsafe",
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/prompt_toolkit-3.0.33/src/prompt_toolkit/eventloop/async_generator.py new/prompt_toolkit-3.0.36/src/prompt_toolkit/eventloop/async_generator.py
--- old/prompt_toolkit-3.0.33/src/prompt_toolkit/eventloop/async_generator.py   2022-09-01 17:02:15.000000000 +0200
+++ new/prompt_toolkit-3.0.36/src/prompt_toolkit/eventloop/async_generator.py   2022-12-06 23:25:27.000000000 +0100
@@ -1,16 +1,62 @@
 """
 Implementation for async generators.
 """
-from asyncio import Queue
-from typing import AsyncGenerator, Callable, Iterable, TypeVar, Union
+from queue import Empty, Full, Queue
+from threading import Event
+from typing import (
+    TYPE_CHECKING,
+    AsyncGenerator,
+    Awaitable,
+    Callable,
+    Iterable,
+    TypeVar,
+    Union,
+)
 
+from .async_context_manager import asynccontextmanager
 from .utils import get_event_loop, run_in_executor_with_context
 
 __all__ = [
+    "aclosing",
     "generator_to_async_generator",
 ]
 
 
+if TYPE_CHECKING:
+    # Thanks: https://github.com/python/typeshed/blob/main/stdlib/contextlib.pyi
+    from typing_extensions import Protocol
+
+    class _SupportsAclose(Protocol):
+        def aclose(self) -> Awaitable[object]:
+            ...
+
+    _SupportsAcloseT = TypeVar("_SupportsAcloseT", bound=_SupportsAclose)
+
+
+@asynccontextmanager
+async def aclosing(
+    thing: "_SupportsAcloseT",
+) -> AsyncGenerator["_SupportsAcloseT", None]:
+    "Similar to `contextlib.aclosing`, in Python 3.10."
+    try:
+        yield thing
+    finally:
+        await thing.aclose()
+
+
+# By default, choose a buffer size that's a good balance between having enough
+# throughput, but not consuming too much memory. We use this to consume a sync
+# generator of completions as an async generator. If the queue size is very
+# small (like 1), consuming the completions goes really slow (when there are a
+# lot of items). If the queue size would be unlimited or too big, this can
+# cause overconsumption of memory, and cause CPU time spent producing items
+# that are no longer needed (if the consumption of the async generator stops at
+# some point). We need a fixed size in order to get some back pressure from the
+# async consumer to the sync producer. We choose 1000 by default here. If we
+# have around 50k completions, measurements show that 1000 is still
+# significantly faster than a buffer of 100.
+DEFAULT_BUFFER_SIZE: int = 1000
+
 _T = TypeVar("_T")
 
 
@@ -19,7 +65,8 @@
 
 
 async def generator_to_async_generator(
-    get_iterable: Callable[[], Iterable[_T]]
+    get_iterable: Callable[[], Iterable[_T]],
+    buffer_size: int = DEFAULT_BUFFER_SIZE,
 ) -> AsyncGenerator[_T, None]:
     """
     Turn a generator or iterable into an async generator.
@@ -28,10 +75,12 @@
 
     :param get_iterable: Function that returns a generator or iterable when
         called.
+    :param buffer_size: Size of the queue between the async consumer and the
+        synchronous generator that produces items.
     """
     quitting = False
-    _done = _Done()
-    q: Queue[Union[_T, _Done]] = Queue()
+    # NOTE: We are limiting the queue size in order to have back-pressure.
+    q: Queue[Union[_T, _Done]] = Queue(maxsize=buffer_size)
     loop = get_event_loop()
 
     def runner() -> None:
@@ -44,19 +93,38 @@
                 # When this async generator was cancelled (closed), stop this
                 # thread.
                 if quitting:
-                    break
+                    return
 
-                loop.call_soon_threadsafe(q.put_nowait, item)
+                while True:
+                    try:
+                        q.put(item, timeout=1)
+                    except Full:
+                        if quitting:
+                            return
+                        continue
+                    else:
+                        break
 
         finally:
-            loop.call_soon_threadsafe(q.put_nowait, _done)
+            while True:
+                try:
+                    q.put(_Done(), timeout=1)
+                except Full:
+                    if quitting:
+                        return
+                    continue
+                else:
+                    break
 
     # Start background thread.
     runner_f = run_in_executor_with_context(runner)
 
     try:
         while True:
-            item = await q.get()
+            try:
+                item = q.get_nowait()
+            except Empty:
+                item = await loop.run_in_executor(None, q.get)
             if isinstance(item, _Done):
                 break
             else:
@@ -67,8 +135,5 @@
         quitting = True
 
         # Wait for the background thread to finish. (should happen right after
-        # the next item is yielded). If we don't do this, and the event loop
-        # gets closed before the runner is done, then we'll get a
-        # `RuntimeError: Event loop is closed` exception printed to stdout that
-        # we can't handle.
+        # the last item is yielded).
         await runner_f
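
[Editor's sketch] The rewritten `generator_to_async_generator` above replaces the unbounded asyncio queue with a bounded `queue.Queue`, so the producer thread blocks (back-pressure) once the async consumer falls behind, and only hops to an executor when the queue is momentarily empty. Below is a simplified sketch of that scheme; `iter_in_thread` is a made-up name and the cancellation handling of the real implementation is omitted.

    import asyncio
    from queue import Empty, Queue
    from threading import Thread
    from typing import AsyncGenerator, Iterable, TypeVar, Union

    _T = TypeVar("_T")

    class _Done:
        pass

    async def iter_in_thread(items: Iterable[_T], buffer_size: int = 1000) -> AsyncGenerator[_T, None]:
        loop = asyncio.get_running_loop()
        q: "Queue[Union[_T, _Done]]" = Queue(maxsize=buffer_size)  # bounded: producer blocks when full

        def producer() -> None:
            for item in items:
                q.put(item)              # back-pressure happens here
            q.put(_Done())

        Thread(target=producer, daemon=True).start()

        while True:
            try:
                item = q.get_nowait()                            # fast path: already buffered
            except Empty:
                item = await loop.run_in_executor(None, q.get)   # wait without blocking the event loop
            if isinstance(item, _Done):
                return
            yield item

    async def main() -> None:
        total = 0
        async for n in iter_in_thread(range(10_000), buffer_size=100):
            total += n
        print(total)

    asyncio.run(main())
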
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/prompt_toolkit-3.0.33/src/prompt_toolkit/layout/menus.py new/prompt_toolkit-3.0.36/src/prompt_toolkit/layout/menus.py
--- old/prompt_toolkit-3.0.33/src/prompt_toolkit/layout/menus.py        2022-09-01 17:02:15.000000000 +0200
+++ new/prompt_toolkit-3.0.36/src/prompt_toolkit/layout/menus.py        2022-12-06 23:09:29.000000000 +0100
@@ -13,6 +13,7 @@
     Union,
     cast,
 )
+from weakref import WeakKeyDictionary, WeakValueDictionary
 
 from prompt_toolkit.application.current import get_app
 from prompt_toolkit.buffer import CompletionState
@@ -163,9 +164,13 @@
             return get_cwidth(completion.display_meta_text)
 
         if self._show_meta(complete_state):
-            return min(
-                max_width, max(meta_width(c) for c in complete_state.completions) + 2
-            )
+            # If the amount of completions is over 200, compute the width based
+            # on the first 200 completions, otherwise this can be very slow.
+            completions = complete_state.completions
+            if len(completions) > 200:
+                completions = completions[:200]
+
+            return min(max_width, max(meta_width(c) for c in completions) + 2)
         else:
             return 0
 
@@ -333,6 +338,16 @@
         self.suggested_max_column_width = suggested_max_column_width
         self.scroll = 0
 
+        # Cache for column width computations. This computation is not cheap,
+        # so we don't want to do it over and over again while the user
+        # navigates through the completions.
+        # (map `completion_state` to `(completion_count, width)`. We remember
+        # the count, because a completer can add new completions to the
+        # `CompletionState` while loading.)
+        self._column_width_for_completion_state: "WeakKeyDictionary[CompletionState, Tuple[int, int]]" = (
+            WeakKeyDictionary()
+        )
+
         # Info of last rendering.
         self._rendered_rows = 0
         self._rendered_columns = 0
@@ -509,11 +524,26 @@
 
         return UIContent(get_line=get_line, line_count=len(rows_))
 
-    def _get_column_width(self, complete_state: CompletionState) -> int:
+    def _get_column_width(self, completion_state: CompletionState) -> int:
         """
         Return the width of each column.
         """
-        return max(get_cwidth(c.display_text) for c in complete_state.completions) + 1
+        try:
+            count, width = self._column_width_for_completion_state[completion_state]
+            if count != len(completion_state.completions):
+                # Number of completions changed, recompute.
+                raise KeyError
+            return width
+        except KeyError:
+            result = (
+                max(get_cwidth(c.display_text) for c in completion_state.completions)
+                + 1
+            )
+            self._column_width_for_completion_state[completion_state] = (
+                len(completion_state.completions),
+                result,
+            )
+            return result
 
     def mouse_handler(self, mouse_event: MouseEvent) -> "NotImplementedOrNone":
         """
@@ -683,7 +713,19 @@
         app = get_app()
         if app.current_buffer.complete_state:
             state = app.current_buffer.complete_state
-            return 2 + max(get_cwidth(c.display_meta_text) for c in state.completions)
+
+            if len(state.completions) >= 30:
+                # When there are many completions, calling `get_cwidth` for
+                # every `display_meta_text` is too expensive. In this case,
+                # just return the max available width. There will be enough
+                # columns anyway so that the whole screen is filled with
+                # completions and `create_content` will then take up as much
+                # space as needed.
+                return max_available_width
+
+            return 2 + max(
+                get_cwidth(c.display_meta_text) for c in state.completions[:100]
+            )
         else:
             return 0
 
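[Editor's sketch] The menus.py hunks above cap how many completions are measured and cache the computed column width per `CompletionState` in a `WeakKeyDictionary`, invalidating the entry when the completion count changes. Below is a tiny illustration of that cache-and-invalidate idea; `State` and `column_width` are invented names.

    from weakref import WeakKeyDictionary
    from typing import List, Tuple

    class State:
        def __init__(self, completions: List[str]) -> None:
            self.completions = completions

    _cache: "WeakKeyDictionary[State, Tuple[int, int]]" = WeakKeyDictionary()

    def column_width(state: State) -> int:
        try:
            count, width = _cache[state]
            if count != len(state.completions):    # more completions arrived: recompute
                raise KeyError
            return width
        except KeyError:
            width = max(len(c) for c in state.completions) + 1
            _cache[state] = (len(state.completions), width)
            return width

    s = State(["foo", "foobar"])
    print(column_width(s))             # computed once: 7
    s.completions.append("foobarbaz")
    print(column_width(s))             # count changed, recomputed: 10
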
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/prompt_toolkit-3.0.33/src/prompt_toolkit/layout/screen.py new/prompt_toolkit-3.0.36/src/prompt_toolkit/layout/screen.py
--- old/prompt_toolkit-3.0.33/src/prompt_toolkit/layout/screen.py       2022-09-01 17:02:15.000000000 +0200
+++ new/prompt_toolkit-3.0.36/src/prompt_toolkit/layout/screen.py       2022-12-06 22:53:48.000000000 +0100
@@ -272,7 +272,7 @@
 
         for y, row in b.items():
             for x, char in row.items():
-                b[y][x] = char_cache[char.char, char.style + append_style]
+                row[x] = char_cache[char.char, char.style + append_style]
 
     def fill_area(
         self, write_position: "WritePosition", style: str = "", after: bool = False
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/prompt_toolkit-3.0.33/src/prompt_toolkit/layout/scrollable_pane.py new/prompt_toolkit-3.0.36/src/prompt_toolkit/layout/scrollable_pane.py
--- old/prompt_toolkit-3.0.33/src/prompt_toolkit/layout/scrollable_pane.py      2022-09-01 17:02:15.000000000 +0200
+++ new/prompt_toolkit-3.0.36/src/prompt_toolkit/layout/scrollable_pane.py      2022-12-06 22:53:48.000000000 +0100
@@ -146,6 +146,7 @@
         # First, write the content to a virtual screen, then copy over the
         # visible part to the real screen.
         temp_screen = Screen(default_char=Char(char=" ", style=parent_style))
+        temp_screen.show_cursor = screen.show_cursor
         temp_write_position = WritePosition(
             xpos=0, ypos=0, width=virtual_width, height=virtual_height
         )
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/prompt_toolkit-3.0.33/src/prompt_toolkit/output/flush_stdout.py new/prompt_toolkit-3.0.36/src/prompt_toolkit/output/flush_stdout.py
--- old/prompt_toolkit-3.0.33/src/prompt_toolkit/output/flush_stdout.py 2022-09-01 17:02:15.000000000 +0200
+++ new/prompt_toolkit-3.0.36/src/prompt_toolkit/output/flush_stdout.py 2022-12-06 23:09:29.000000000 +0100
@@ -27,7 +27,6 @@
             # UnicodeEncodeError crashes. E.g. u'\xb7' does not appear in 'ascii'.)
             # My Arch Linux installation of july 2015 reported 'ANSI_X3.4-1968'
             # for sys.stdout.encoding in xterm.
-            out: IO[bytes]
             if has_binary_io:
                 stdout.buffer.write(data.encode(stdout.encoding or "utf-8", "replace"))
             else:
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/prompt_toolkit-3.0.33/src/prompt_toolkit.egg-info/PKG-INFO new/prompt_toolkit-3.0.36/src/prompt_toolkit.egg-info/PKG-INFO
--- old/prompt_toolkit-3.0.33/src/prompt_toolkit.egg-info/PKG-INFO      2022-11-21 14:41:26.000000000 +0100
+++ new/prompt_toolkit-3.0.36/src/prompt_toolkit.egg-info/PKG-INFO      2022-12-06 23:36:14.000000000 +0100
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: prompt-toolkit
-Version: 3.0.33
+Version: 3.0.36
 Summary: Library for building powerful interactive command lines in Python
 Home-page: https://github.com/prompt-toolkit/python-prompt-toolkit
 Author: Jonathan Slenders
