Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package openvino for openSUSE:Factory checked in at 2025-12-04 11:22:31
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/openvino (Old)
 and      /work/SRC/openSUSE:Factory/.openvino.new.1939 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "openvino"

Thu Dec  4 11:22:31 2025 rev:17 rq:1320927 version:2025.4.0

Changes:
--------
--- /work/SRC/openSUSE:Factory/openvino/openvino.changes        2025-09-08 09:58:28.104747799 +0200
+++ /work/SRC/openSUSE:Factory/.openvino.new.1939/openvino.changes      2025-12-04 11:27:17.251551908 +0100
@@ -1,0 +2,50 @@
+Tue Dec  2 22:43:52 UTC 2025 - Alessandro de Oliveira Faria <[email protected]>
+
+- Update to 2025.4.0
+- More GenAI coverage and framework integrations to minimize code
+  changes
+  * New models supported:
+    + On CPUs & GPUs: Qwen3-Embedding-0.6B, Qwen3-Reranker-0.6B, 
+      Mistral-Small-24B-Instruct-2501.
+    + On NPUs: Gemma-3-4b-it and Qwen2.5-VL-3B-Instruct.
+  * Preview: Mixture of Experts (MoE) models optimized for CPUs 
+    and GPUs, validated for Qwen3-30B-A3B.
+  * GenAI pipeline integrations: Qwen3-Embedding-0.6B and 
+    Qwen3-Reranker-0.6B for enhanced retrieval/ranking, and 
+    Qwen2.5-VL-7B for video pipelines.
+- Broader LLM model support and more model compression 
+  techniques
+  * The Neural Network Compression Framework (NNCF) ONNX backend 
+    now supports INT8 static post-training quantization (PTQ) 
+    and INT8/INT4 weight-only compression to ensure accuracy 
+    parity with OpenVINO IR format models. SmoothQuant algorithm 
+    support added for INT8 quantization.
+  * Accelerated multi-token generation for GenAI, leveraging 
+    optimized GPU kernels to deliver faster inference, smarter 
+    KV-cache reuse, and scalable LLM performance.
+  * GPU plugin updates include improved performance with prefix 
+    caching for chat history scenarios and enhanced LLM accuracy 
+    with dynamic quantization support for INT8.
+- More portability and performance to run AI at the edge, in the 
+  cloud, or locally.
+  * Announcing support for Intel® Core Ultra Processor Series 3.
+  * Encrypted blob format support added for secure model 
+    deployment with OpenVINO GenAI. Model weights and artifacts
+    are stored and transmitted in an encrypted format, reducing
+    risks of IP theft during deployment. Developers can deploy 
+    with minimal code changes using OpenVINO GenAI pipelines.
+  * OpenVINO™ Model Server and OpenVINO™ GenAI now extend 
+    support for Agentic AI scenarios with new features such as
+    output parsing and improved chat templates for reliable 
+    multi-turn interactions, and preview functionality for the 
+    Qwen3-30B-A3B model. OVMS also introduces a preview for 
+    audio endpoints.
+  * NPU deployment is simplified with batch support, enabling 
+    seamless model execution across Intel® Core Ultra 
+    processors while eliminating driver dependencies. Models 
+    are reshaped to batch_size=1 before compilation.
+  * The improved NVIDIA Triton Server* integration with 
+    OpenVINO backend now enables developers to utilize Intel
+    GPUs or NPUs for deployment.
+
+-------------------------------------------------------------------
@@ -238 +288 @@
-    Windows and Linux.
+    Linux.
@@ -293,4 +342,0 @@
-  * The OpenVINO Model Server now supports native Windows Server
-    deployments, allowing developers to leverage better
-    performance by eliminating container overhead and simplifying
-    GPU deployment.
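
To illustrate the GenAI pipeline items in the changelog above: a minimal
sketch of running one of the newly supported models through the
openvino_genai LLMPipeline API, with prefix caching enabled for the
chat-history reuse scenario. This is not code from this package; the model
directory is a hypothetical placeholder (an IR export produced beforehand,
e.g. with optimum-cli), and the device string can be swapped as needed.

  import openvino_genai as ov_genai

  # Prefix caching lets a repeated chat prefix hit the KV-cache instead of
  # being recomputed (the "chat history scenarios" noted in the changelog).
  scheduler = ov_genai.SchedulerConfig()
  scheduler.enable_prefix_caching = True

  # "Mistral-Small-24B-Instruct-2501-ov" is a placeholder path to a model
  # already converted to OpenVINO IR; "GPU" could also be "CPU".
  pipe = ov_genai.LLMPipeline("Mistral-Small-24B-Instruct-2501-ov", "GPU",
                              scheduler_config=scheduler)
  print(pipe.generate("What is new in OpenVINO 2025.4?", max_new_tokens=64))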

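Likewise, the NNCF ONNX-backend item corresponds to API usage along these
lines (a sketch under assumptions, not this package's code: the model file,
input name "input", and shapes are hypothetical placeholders):

  import numpy as np
  import onnx
  import nncf

  onnx_model = onnx.load("model.onnx")  # placeholder ONNX model

  # Calibration samples for INT8 static PTQ: dicts keyed by input name, as
  # onnxruntime would consume them. Random data stands in for a real set.
  samples = [{"input": np.random.rand(1, 3, 224, 224).astype(np.float32)}
             for _ in range(8)]
  quantized = nncf.quantize(onnx_model, nncf.Dataset(samples))
  onnx.save(quantized, "model_int8.onnx")

  # INT4 weight-only compression needs no calibration data; INT4_SYM is
  # one of the supported modes.
  compressed = nncf.compress_weights(onnx_model,
                                     mode=nncf.CompressWeightsMode.INT4_SYM)
  onnx.save(compressed, "model_int4.onnx")
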
Old:
----
  openvino-2025.3.0.obscpio

New:
----
  openvino-2025.4.0.obscpio
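
The NPU batching note can be read against the core OpenVINO API: batching
is requested as usual, and per the changelog the plugin reshapes the model
to batch_size=1 internally before compilation rather than relying on
driver support. A sketch with placeholder paths and shapes:

  import openvino as ov

  core = ov.Core()
  model = core.read_model("model.xml")   # placeholder IR file
  model.reshape([8, 3, 224, 224])        # request a batch of 8
  compiled = core.compile_model(model, "NPU")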

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ openvino.spec ++++++
--- /var/tmp/diff_new_pack.O5OKMJ/_old  2025-12-04 11:27:21.703740305 +0100
+++ /var/tmp/diff_new_pack.O5OKMJ/_new  2025-12-04 11:27:21.703740305 +0100
@@ -31,13 +31,13 @@
 %define pythons python3
 %endif
 %define __builder ninja
-%define so_ver 2530
+%define so_ver 2540
 %define shlib lib%{name}%{so_ver}
 %define shlib_c lib%{name}_c%{so_ver}
 %define prj_name OpenVINO
 
 Name:           openvino
-Version:        2025.3.0
+Version:        2025.4.0
 Release:        0
 Summary:        A toolkit for optimizing and deploying AI inference
 # Let's be safe and put all third party licenses here, no matter that we use specific thirdparty libs or not

++++++ _service ++++++
--- /var/tmp/diff_new_pack.O5OKMJ/_old  2025-12-04 11:27:21.747742167 +0100
+++ /var/tmp/diff_new_pack.O5OKMJ/_new  2025-12-04 11:27:21.747742167 +0100
@@ -2,8 +2,8 @@
   <service name="obs_scm" mode="manual">
     <param name="url">https://github.com/openvinotoolkit/openvino.git</param>
     <param name="scm">git</param>
-    <param name="revision">2025.3.0</param>
-    <param name="version">2025.3.0</param>
+    <param name="revision">2025.4.0</param>
+    <param name="version">2025.4.0</param>
     <param name="submodules">enable</param>
     <param name="filename">openvino</param>
     <param name="exclude">.git</param>

++++++ openvino-2025.3.0.obscpio -> openvino-2025.4.0.obscpio ++++++
/work/SRC/openSUSE:Factory/openvino/openvino-2025.3.0.obscpio /work/SRC/openSUSE:Factory/.openvino.new.1939/openvino-2025.4.0.obscpio differ: char 48, line 1

++++++ openvino.obsinfo ++++++
--- /var/tmp/diff_new_pack.O5OKMJ/_old  2025-12-04 11:27:21.819745214 +0100
+++ /var/tmp/diff_new_pack.O5OKMJ/_new  2025-12-04 11:27:21.823745384 +0100
@@ -1,5 +1,5 @@
 name: openvino
-version: 2025.3.0
-mtime: 1756212984
-commit: 44526285f241251e9543276572676365fbe542a4
+version: 2025.4.0
+mtime: 1763052589
+commit: 7a975177ff432c687e5619e8fb22e4bf265e48b7
 
