dependabot[bot] opened a new pull request, #647:
URL: https://github.com/apache/opennlp/pull/647

   Bumps `onnxruntime.version` from 1.18.0 to 1.19.0.
   Updates `com.microsoft.onnxruntime:onnxruntime` from 1.18.0 to 1.19.0
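
   For reference, a bump like this is a one-line change to a shared Maven version property. The following is an illustrative sketch, not the actual OpenNLP POM: the property name `onnxruntime.version` comes from this PR, but the surrounding `<properties>`/`<dependencies>` layout is assumed.

   ```xml
   <!-- pom.xml (sketch): Dependabot rewrites the shared version property -->
   <properties>
     <onnxruntime.version>1.19.0</onnxruntime.version>
   </properties>

   <dependencies>
     <!-- CPU artifact -->
     <dependency>
       <groupId>com.microsoft.onnxruntime</groupId>
       <artifactId>onnxruntime</artifactId>
       <version>${onnxruntime.version}</version>
     </dependency>
     <!-- GPU artifact, driven by the same property -->
     <dependency>
       <groupId>com.microsoft.onnxruntime</groupId>
       <artifactId>onnxruntime_gpu</artifactId>
       <version>${onnxruntime.version}</version>
     </dependency>
   </dependencies>
   ```

   Because both artifacts share one property, a single Dependabot commit keeps the CPU and GPU runtimes in lockstep.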
   <details>
   <summary>Release notes</summary>
   <p><em>Sourced from <a href="https://github.com/microsoft/onnxruntime/releases">com.microsoft.onnxruntime:onnxruntime's releases</a>.</em></p>
   <blockquote>
   <h2>ONNX Runtime v1.19</h2>
   <h2>Announcements</h2>
   <ul>
   <li>Training (pypi) packages are delayed from the package manager release due to some publishing errors. Feel free to contact <a href="https://github.com/maanavd"><code>@maanavd</code></a> if you need release candidates for some workflows ASAP. In the meantime, binaries are attached to this post. This message will be deleted once this ceases to be the case. Thanks for your understanding :)</li>
   <li>Note also that the wrong commit was initially tagged as v1.19.0. The final commit has since been correctly tagged: <a href="https://github.com/microsoft/onnxruntime/commit/26250ae74d2c9a3c6860625ba4a147ddfb936907">https://github.com/microsoft/onnxruntime/commit/26250ae74d2c9a3c6860625ba4a147ddfb936907</a>. This shouldn't affect much, but sorry for the inconvenience!</li>
   </ul>
   <h2>Build System &amp; Packages</h2>
   <ul>
   <li>Added support for NumPy 2.x</li>
   <li>Qualcomm SDK has been upgraded to 2.25</li>
   <li>ONNX has been upgraded from 1.16 to 1.16.1</li>
   <li>Default GPU packages use CUDA 12.x and cuDNN 9.x (previously CUDA 11.x/cuDNN 8.x); the CUDA 11.x/cuDNN 8.x packages have moved to the aiinfra VS feed.</li>
   <li>TensorRT 10.2 support added</li>
   <li>Introduced Java CUDA 12 packages on Maven.</li>
   <li>Discontinued support for Xamarin (Xamarin reached EOL on May 1, 2024)</li>
   <li>Discontinued support for macOS 11 and increased the minimum supported macOS version to 12 (macOS 11 reached EOL in September 2023)</li>
   <li>Discontinued support for iOS 12 and increased the minimum supported iOS version to 13</li>
   </ul>
   <h2>Core</h2>
   <ul>
   <li>Implemented DeformConv</li>
   <li><a href="https://redirect.github.com/microsoft/onnxruntime/pull/21133">Fixed big-endian support</a> and added support for building on AIX</li>
   </ul>
   <h2>Performance</h2>
   <ul>
   <li>Added QDQ support for INT4 quantization in CPU and CUDA Execution 
Providers</li>
   <li>Implemented FlashAttention on CPU to improve performance for GenAI 
prompt cases</li>
   <li>Improved INT4 performance on CPU (X64, ARM64) and NVIDIA GPUs</li>
   </ul>
   <h2>Execution Providers</h2>
   <ul>
   <li>
   <p>TensorRT</p>
   <ul>
   <li>Updated to support TensorRT 10.2</li>
   <li>Removed calls to deprecated APIs</li>
   <li>Enabled refittable embedded engines when the ONNX model is provided as a byte stream</li>
   </ul>
   </li>
   <li>
   <p>CUDA</p>
   <ul>
   <li>Upgraded CUTLASS to 3.5.0 for improved performance of memory-efficient attention.</li>
   <li>Updated MultiHeadAttention and Attention operators to be 
thread-safe.</li>
   <li>Added sdpa_kernel provider option to choose kernel for Scaled 
Dot-Product Attention.</li>
   <li>Expanded op support - Tile (bf16)</li>
   </ul>
   </li>
   <li>
   <p>CPU</p>
   <ul>
   <li>Expanded op support - GroupQueryAttention, SparseAttention (for Phi-3 
small)</li>
   </ul>
   </li>
   <li>
   <p>QNN</p>
   <ul>
   <li>Updated to support QNN SDK 2.25</li>
   <li>Expanded op support - HardSigmoid, ConvTranspose 3d, Clip (int32 data), MatMul (int4 weights), Conv (int4 weights), PRelu (fp16)</li>
   <li>Expanded fusion support - Conv + Clip/Relu fusion</li>
   </ul>
   </li>
   <li>
   <p>OpenVINO</p>
   <ul>
   <li>Added support for OpenVINO 2024.3</li>
   <li>Support for enabling EpContext using session options</li>
   </ul>
   </li>
   <li>
   <p>DirectML</p>
   </li>
   </ul>
   <!-- raw HTML omitted -->
   </blockquote>
   <p>... (truncated)</p>
   </details>
   <details>
   <summary>Commits</summary>
   <ul>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/26250ae74d2c9a3c6860625ba4a147ddfb936907"><code>26250ae</code></a> ORT 1.19.0 Release: Cherry-Pick Round 2 (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/21726">#21726</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/ccf6a28c3cf9242bed312edecf0c7a2985f90a67"><code>ccf6a28</code></a> ORT 1.19.0 Release: Cherry-Pick Round 1 (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/21619">#21619</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/ee2fe87e2df9ffcaf5fae6b933eff59178c0916d"><code>ee2fe87</code></a> ORT 1.19.0 Release: Cherry-Pick Round 0 (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/21609">#21609</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/530a2d7b41b0584f67ddfef6679a79e9dbeee556"><code>530a2d7</code></a> Enable FP16 Clip and Handle Bias in FP16 Depthwise Conv (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/21493">#21493</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/82036b04978b7930185996a70d2146c2895469ea"><code>82036b0</code></a> Remove references to the outdated CUDA EP factory method (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/21549">#21549</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/07d3be5b0e037927c3defd8a7e389e59ec748ad8"><code>07d3be5</code></a> CoreML: Add ML Program Split Op (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/21456">#21456</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/5d78b9a17bb6d126f8ae7fa7eef05cabe4a08dae"><code>5d78b9a</code></a> [TensorRT EP] Update TRT OSS Parser to 10.2 (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/21552">#21552</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/8417c325ec160dc8ee62edaf6d1daf91ad979d56"><code>8417c32</code></a> Keep QDQ nodes w/ nonpositive scale around MaxPool (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/21182">#21182</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/d98581495f996084af65ae1e6600378bed949460"><code>d985814</code></a> Update labeling bot (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/21548">#21548</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/7543dd040b2d32109a2718d7276d3aca1edadaae"><code>7543dd0</code></a> Propagate NaNs in the CPU min and max operators (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/21492">#21492</a>)</li>
   <li>Additional commits viewable in <a href="https://github.com/microsoft/onnxruntime/compare/v1.18.0...v1.19.0">compare view</a></li>
   </details>
   <br />
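
   A minimal sketch of exercising the bumped runtime from Java. `OrtEnvironment` and `OrtSession` are the standard `ai.onnxruntime` entry points; the model path is a placeholder, and the `addCUDA` call applies only when the `onnxruntime_gpu` artifact (CUDA 12.x/cuDNN 9.x as of 1.19.0) is on the classpath.

   ```java
   import ai.onnxruntime.OrtEnvironment;
   import ai.onnxruntime.OrtException;
   import ai.onnxruntime.OrtSession;

   public class OrtSmokeTest {
       public static void main(String[] args) throws OrtException {
           // One shared environment per process
           OrtEnvironment env = OrtEnvironment.getEnvironment();
           try (OrtSession.SessionOptions opts = new OrtSession.SessionOptions()) {
               // Uncomment when running against onnxruntime_gpu:
               // opts.addCUDA(0);
               try (OrtSession session = env.createSession("path/to/model.onnx", opts)) {
                   // Listing input/output names confirms the model loads under 1.19.0
                   System.out.println("Inputs:  " + session.getInputNames());
                   System.out.println("Outputs: " + session.getOutputNames());
               }
           }
       }
   }
   ```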
   
   Updates `com.microsoft.onnxruntime:onnxruntime_gpu` from 1.18.0 to 1.19.0
   (Release notes and commits are identical to those shown above for `com.microsoft.onnxruntime:onnxruntime`.)
   <br />
   
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   <details>
   <summary>Dependabot commands and options</summary>
   <br />
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot show <dependency name> ignore conditions` will show all of 
the ignore conditions of the specified dependency
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   
   
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
