dependabot[bot] opened a new pull request, #782: URL: https://github.com/apache/opennlp/pull/782
Bumps `onnxruntime.version` from 1.21.1 to 1.22.0.

Updates `com.microsoft.onnxruntime:onnxruntime` from 1.21.1 to 1.22.0.

**Release notes** (sourced from [com.microsoft.onnxruntime:onnxruntime's releases](https://github.com/microsoft/onnxruntime/releases)):

> ## ONNX Runtime v1.22
>
> ## Announcements
> - This release introduces new APIs for the Model Editor, Auto EP infrastructure, and AOT compile.
> - ONNX Runtime GPU packages require CUDA 12.x; packages built for CUDA 11.x are no longer published.
>
> ## GenAI & Advanced Model Features
> - **Constrained Decoding:** Introduced new capabilities for constrained decoding, offering more control over generative AI model outputs.
>
> ## Execution & Core Optimizations
>
> ### Core
> - **Auto EP Selection Infrastructure:** Added foundational infrastructure to enable automatic selection of Execution Providers via selection policies, aiming to simplify configuration and optimize performance. (Pull Request #24430)
> - **Compile API:** Introduced new APIs to support explicit compilation of ONNX models.
>   - See: [OrtCompileApi Struct Reference](https://onnxruntime.ai/docs/api/c/struct_ort_compile_api.html) (assuming a similar link structure for future documentation)
>   - See: [EP Context Design](https://onnxruntime.ai/docs/api/c/struct_ort_ep_context.html) (assuming a similar link structure for future documentation)
> - **Model Editor API:** APIs for creating or editing ONNX models.
>   - See: [OrtModelEditorApi](https://onnxruntime.ai/docs/api/c/struct_ort_model_editor_api.html#details)
>
> ### Execution Provider (EP) Updates
>
> #### CPU EP/MLAS
> - **KleidiAI Integration:** Integrated KleidiAI into ONNX Runtime/MLAS for enhanced performance on Arm architectures.
> - **MatMulNBits Support:** Added support for `MatMulNBits`, enabling matrix multiplication with weights quantized to 8 bits.
> - **GroupQueryAttention** optimizations and enhancements.
>
> #### OpenVINO EP
> - Added support up to OpenVINO 2025.1.
> - Introduced Intel compiler-level optimizations for QDQ models.
> - Added support to select Intel devices based on LUID.
> - `Load_config` feature improvement to support the AUTO, HETERO and MULTI plugins.
> - Miscellaneous bug fixes and optimizations.
> - For detailed updates, refer to Pull Request #24394: [ONNXRuntime OpenVINO - Release 1.22](https://github.com/microsoft/onnxruntime/pull/24394).
>
> #### QNN EP
> - **SDK Update:** Added support for QNN SDK 2.33.2.
> - Operator updates/support for Sum, Softmax, Upsample, Expand, ScatterND and Einsum.
> - QNN EP can be built as a shared or static library.
> - Enabled the QnnGpu backend.
> - For detailed updates, refer to [recent QNN-tagged PRs](https://github.com/microsoft/onnxruntime/pulls?q=is%3Apr+qnn+ep+is%3Aclosed+label%3Aep%3AQNN).
>
> #### TensorRT EP
> - **TensorRT Version:** Added support for TensorRT 10.9.
>   - **Note for onnx-tensorrt open-source parser users:** Please check [here](https://onnxruntime.ai/docs/build/eps.html#note-to-ort-1210-open-sourced-parser-users) for specific requirements (referencing the 1.21 link as a placeholder; this should be updated for 1.22).
> - **New Features:**
>   - EP option to enable the TRT Preview Feature.
>   - Support to load TensorRT V3 plugins.
> - **Bug Fixes:**
>   - Resolved an issue related to multithreading scenarios.

... (truncated)

**Commits:**

- [`f217402`](https://github.com/microsoft/onnxruntime/commit/f217402897f40ebba457e2421bc0a4702771968e) Cherry pick fix for NuGet DML Release package Issue (#24696)
- [`6c8097a`](https://github.com/microsoft/onnxruntime/commit/6c8097a07e4f47e841e290d820a39b447c5a6259) Qnn nuget package update for arm64x (#24690) (#24694)
- [`6b0f7c9`](https://github.com/microsoft/onnxruntime/commit/6b0f7c9c0dd02c415b2231487d429a1e3679133d) Revert "Publish debug symbols for windows (#24643) (#24651)" (#24668)
- [`8fbc5d7`](https://github.com/microsoft/onnxruntime/commit/8fbc5d7880707bc0c9decc5001f67881ee885f43) Publish debug symbols for windows (#24643) (#24651)
- [`d08403c`](https://github.com/microsoft/onnxruntime/commit/d08403c5b7fd04f2214560d6f965c0de854f6f13) Add support for selection policy delegate (based on PR #24653) (#24638)
- [`93f85fb`](https://github.com/microsoft/onnxruntime/commit/93f85fb7b295dca6cde78b5f3c41015423d7e33a) Cherry pick #24629 (QNN prefer npu) into rel-1.22.0 (#24630)
- [`ab9141e`](https://github.com/microsoft/onnxruntime/commit/ab9141ee272f9ebd73dfba22a97d0d8d5ec36454) Cherry pick #24625 into rel-1.22.0 (#24626)
- [`d66cff1`](https://github.com/microsoft/onnxruntime/commit/d66cff1da78951912bdffc7e97d47f5a93e6171f) Cherry-picks into rel-1.22.0 (#24624)
- [`cf92d98`](https://github.com/microsoft/onnxruntime/commit/cf92d982a6f6cabfdfafb36b4e9290e099ae982d) Cherry-picks into rel-1.22.0 (#24611)
- [`ef546e9`](https://github.com/microsoft/onnxruntime/commit/ef546e93d713c5a9d03d678fc70d8bf2651b014e) Cherry-picks into rel-1.22.0 (#24580)
- Additional commits viewable in the [compare view](https://github.com/microsoft/onnxruntime/compare/v1.21.1...v1.22.0).

Updates `com.microsoft.onnxruntime:onnxruntime_gpu` from 1.21.1 to 1.22.0. The release notes and commit list for this artifact are identical to those shown above for `com.microsoft.onnxruntime:onnxruntime`.
href="https://github.com/microsoft/onnxruntime/commit/d66cff1da78951912bdffc7e97d47f5a93e6171f"><code>d66cff1</code></a> Cherry-picks into rel-1.22.0 (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/24624">#24624</a>)</li> <li><a href="https://github.com/microsoft/onnxruntime/commit/cf92d982a6f6cabfdfafb36b4e9290e099ae982d"><code>cf92d98</code></a> Cherry-picks into rel-1.22.0 (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/24611">#24611</a>)</li> <li><a href="https://github.com/microsoft/onnxruntime/commit/ef546e93d713c5a9d03d678fc70d8bf2651b014e"><code>ef546e9</code></a> Cherry-picks into rel-1.22.0 (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/24580">#24580</a>)</li> <li>Additional commits viewable in <a href="https://github.com/microsoft/onnxruntime/compare/v1.21.1...v1.22.0">compare view</a></li> </ul> </details> <br /> Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) </details> -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@opennlp.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org