Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package python-langfuse for openSUSE:Factory 
checked in at 2024-11-20 17:44:43
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-langfuse (Old)
 and      /work/SRC/openSUSE:Factory/.python-langfuse.new.28523 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "python-langfuse"

Wed Nov 20 17:44:43 2024 rev:3 rq:1225319 version:2.54.1

Changes:
--------
--- /work/SRC/openSUSE:Factory/python-langfuse/python-langfuse.changes	2024-11-01 21:06:27.431650520 +0100
+++ /work/SRC/openSUSE:Factory/.python-langfuse.new.28523/python-langfuse.changes	2024-11-20 17:44:44.916578569 +0100
@@ -1,0 +2,103 @@
+Wed Nov 20 15:19:09 UTC 2024 - Dirk Müller <[email protected]>
+
+- update to 2.54.1:
+  * fix(media): allow setting IO media via decorator update
+- update to 2.54.0:
+  * feat(core): add multimodal support
+  * fix(openai): pass parsed_n only if greater 1
+- update to 2.53.9:
+  * perf: move serialization to background threads
+- update to 2.53.8:
+  * fix(datasets): encoding
+- update to 2.53.7:
+  * fix(openai): revert default stream option setting
+- update to 2.53.6:
+  * fix(serializer): reduce log level to debug on failed
+    serialization
+- update to 2.53.5:
+  * fix(serializer): pydantic compat v1 v2
+- update to 2.53.4:
+  * feat(openai): parse usage if stream_options has include_usage
+- update to 2.53.3:
+  * fix(datasets): url encode dataset name and run name
+  * refactor(llama-index): send generation updates directly from
+    event handler
+- update to 2.53.2:
+  * fix(llama-index): CompletionResponse Serialization by
+    @hassiebp
+- update to 2.53.1:
+  * fix: 'NoneType' object has no attribute '__dict__'
+- update to 2.53.0:
+  * feat(client): allow masking event input and output by
+    @shawnzhu and @hassiebp in
+    https://github.com/langfuse/langfuse-python/pull/977
+  * fix(decorator): improve stack access safety
+- update to 2.52.2:
+  * fix(openai): handle NoneType responses gracefully
+  * docs(llama-index): deprecate CallbackHandler and promote
+    Instrumentor
+  * fix(langchain): DeepInfra model parsing
+- update to 2.52.1:
+  * fix(decorators): stack trace on failed auth_check
+  * fix(api): list models parsing errors
+  * fix(langchain): parse tool calls in input
+- update to 2.52.0:
+  * feat(llama-index): add LlamaIndexInstrumentor for use with
+    instrumentation module instead of LlamaIndexSpanHandler
+- update to 2.51.5:
+  * fix(openai): structured output parsing with openai >= 1.50
+  * chore(deps): bump django from 5.0.8 to 5.0.9 in
+    /examples/django_example
+- update to 2.51.4:
+  * fix(serializer): Fix None serialization without langchain
+    installed
+  * fix(langchain): invalid user_id / session_id in TraceBody
+  * chore(deps-dev): bump pytest from 8.2.0 to 8.3.3
+  * chore: automerge dependabot patch PRs
+  * chore(deps): bump pydantic from 2.7.4 to 2.9.2
+  * chore: fix dependabot automerge
+  * chore: fix auto merging
+  * chore: fix auto merging
+- update to 2.51.3:
+  * fix(langchain): time to first token
+- update to 2.51.2:
+  * fix(prompts): remove traceback from fetch error logs
+  * fix(langchain): parse model name for ChatOCIGenAIAgent
+- update to 2.51.0:
+  * feat(langchain): time to first token on streamed generations
+  * feat(langchain): allow passing trace attributes on chain
+    invocation
+  * fix(langchain): python langchain retriever - parent run not
+    found
+  * fix handle_span_events exception
+- update to 2.50.3:
+  * fix(serializer): support numpy scalars
+- update to 2.50.2:
+  * feat(prompts): allow passing kwargs to precompile langchain
+    prompt
+- update to 2.50.1:
+  * fix(langchain): link prompts to nested generations as well
+- update to 2.50.0:
+  * feat(decorator): allow setting parent trace or observation id
+- update to 2.49.0:
+  * feat(langchain): link langfuse prompts to langchain
+    executions
+- update to 2.48.1:
+  * fix(openai): multiple streamed tool calls
+  * fix(openai): time to first token
+- update to 2.48.0:
+  * feat(decorator): Enhance `observe` decorator to support usage
+    without parentheses
+  * fix(llama-index): initialize OpenAI model serializers
+  * fix(langchain): ollama usage parsing
+  * fix(langchain/AzureChatOpenai): unnecessary logs
+  * fix(langchain): batch run to update trace outputs
+- update to 2.47.1:
+  * feat: improve error messages
+- update to 2.47.0:
+  * feat(prompts): optionally disable prompt caching when cache
+    ttl is 0
+  * Docs:
+    https://langfuse.com/docs/prompts/get-started#disable-caching
+
+-------------------------------------------------------------------
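
The 2.48.0 entry above notes that the `observe` decorator now supports usage
without parentheses. A minimal, self-contained sketch of how a decorator can
accept both call forms (an illustrative pattern only, not langfuse's actual
implementation):

```python
import functools

def observe(func=None, *, name=None):
    """Decorator usable both as @observe and as @observe(name="...").

    When applied bare, `func` is the decorated callable; when called with
    keyword arguments, `func` is None and we return the real decorator.
    """
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            label = name or f.__name__
            # A real tracer would open a span here; we just record the label.
            wrapper.last_observed = label
            return f(*args, **kwargs)
        return wrapper

    if func is not None:        # bare @observe form
        return decorator(func)
    return decorator            # @observe(...) form

@observe
def add(a, b):
    return a + b

@observe(name="multiply-op")
def mul(a, b):
    return a * b
```

The trick is that Python passes the function as the single positional
argument in the bare form, so checking `func is not None` distinguishes the
two usages.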

Old:
----
  langfuse-2.44.0.tar.gz

New:
----
  langfuse-2.54.1.tar.gz
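
The 2.47.0 changelog entry mentions optionally disabling prompt caching when
the cache TTL is 0. A self-contained sketch of that behavior (hypothetical
names; not the library's real cache implementation):

```python
import time

class PromptCache:
    """Tiny TTL cache where a ttl of 0 disables caching entirely.

    Illustrative only: sketches the "disable caching when cache ttl is 0"
    idea from the 2.47.0 entry, not langfuse's actual code.
    """
    def __init__(self, fetch, ttl_seconds=60):
        self.fetch = fetch          # callable that loads a prompt by name
        self.ttl = ttl_seconds
        self._store = {}            # name -> (value, expiry timestamp)

    def get(self, name):
        if self.ttl == 0:           # caching disabled: always fetch fresh
            return self.fetch(name)
        hit = self._store.get(name)
        if hit is not None and hit[1] > time.monotonic():
            return hit[0]           # unexpired cache hit
        value = self.fetch(name)
        self._store[name] = (value, time.monotonic() + self.ttl)
        return value

calls = []
def fake_fetch(name):
    calls.append(name)
    return f"prompt:{name}"

cached = PromptCache(fake_fetch, ttl_seconds=60)
cached.get("greet"); cached.get("greet")      # second call hits the cache

uncached = PromptCache(fake_fetch, ttl_seconds=0)
uncached.get("greet"); uncached.get("greet")  # every call fetches fresh
```

Treating ttl 0 as "bypass" rather than "expire immediately" avoids storing
entries that could never be served.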

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-langfuse.spec ++++++
--- /var/tmp/diff_new_pack.UlugqK/_old  2024-11-20 17:44:45.640608729 +0100
+++ /var/tmp/diff_new_pack.UlugqK/_new  2024-11-20 17:44:45.644608896 +0100
@@ -18,11 +18,11 @@
 
 %{?sle15_python_module_pythons}
 Name:           python-langfuse
-Version:        2.44.0
+Version:        2.54.1
 Release:        0
 Summary:        A client library for accessing langfuse
 License:        MIT
-URL:            https://github.com/langfuse/langfuse
+URL:            https://github.com/langfuse/langfuse-python
Source:         https://files.pythonhosted.org/packages/source/l/langfuse/langfuse-%{version}.tar.gz
 BuildRequires:  %{python_module pip}
 BuildRequires:  %{python_module poetry-core >= 1.0.0}

++++++ langfuse-2.44.0.tar.gz -> langfuse-2.54.1.tar.gz ++++++
++++ 26636 lines of diff (skipped)
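
The 2.53.0 entry above adds client-level masking of event input and output.
A self-contained sketch of that pattern, where a user-supplied mask callback
is applied before an event is stored (hypothetical names; not the library's
real API):

```python
def redact_secrets(data):
    """Mask callback: replace values of sensitive-looking keys with ***."""
    SENSITIVE = {"password", "api_key", "token"}
    if isinstance(data, dict):
        return {k: ("***" if k in SENSITIVE else redact_secrets(v))
                for k, v in data.items()}
    if isinstance(data, list):
        return [redact_secrets(v) for v in data]
    return data

class EventRecorder:
    """Applies an optional mask to event input/output before recording."""
    def __init__(self, mask=None):
        self.mask = mask
        self.events = []

    def record(self, name, inp, out):
        if self.mask is not None:
            inp, out = self.mask(inp), self.mask(out)
        self.events.append({"name": name, "input": inp, "output": out})

rec = EventRecorder(mask=redact_secrets)
rec.record("login",
           {"user": "ada", "password": "hunter2"},
           {"token": "abc123"})
```

Masking at record time keeps sensitive values out of every downstream sink,
instead of relying on each consumer to redact them.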