dependabot[bot] opened a new pull request, #3365:
URL: https://github.com/apache/ignite-3/pull/3365

   Bumps [org.rocksdb:rocksdbjni](https://github.com/facebook/rocksdb) from 
8.3.2 to 8.11.3.
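   
   A minimal smoke test of the bumped binding might look like the sketch below; the class name and database path are illustrative placeholders, not anything from this PR or from Ignite itself.
   
   ```java
   import org.rocksdb.Options;
   import org.rocksdb.RocksDB;
   import org.rocksdb.RocksDBException;
   
   public class RocksDbSmokeTest {
       public static void main(String[] args) throws RocksDBException {
           RocksDB.loadLibrary(); // loads the native library bundled in rocksdbjni
   
           // Placeholder path; any scratch directory works for a quick check.
           try (Options options = new Options().setCreateIfMissing(true);
                RocksDB db = RocksDB.open(options, "/tmp/rocksdb-smoke-test")) {
               db.put("key".getBytes(), "value".getBytes());
               System.out.println("read back: " + new String(db.get("key".getBytes())));
           }
       }
   }
   ```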
   <details>
   <summary>Release notes</summary>
   <p><em>Sourced from <a href="https://github.com/facebook/rocksdb/releases">org.rocksdb:rocksdbjni's releases</a>.</em></p>
   <blockquote>
   <h2>RocksDB 8.11.3</h2>
   <h2>8.11.3 (02/27/2024)</h2>
   <ul>
   <li>Correct CMake Javadoc and source jar builds</li>
   </ul>
   <h2>8.11.2 (02/16/2024)</h2>
   <ul>
   <li>Update zlib to 1.3.1 for Java builds</li>
   </ul>
   <h2>8.11.1 (01/25/2024)</h2>
   <h3>Bug Fixes</h3>
   <ul>
   <li>Fix a bug where older data of an ingested key can be returned for reads when universal compaction is used</li>
   <li>Apply appropriate rate limiting and priorities in more places.</li>
   </ul>
   <h2>8.11.0 (01/19/2024)</h2>
   <h3>New Features</h3>
   <ul>
   <li>Add new statistics: <code>rocksdb.sst.write.micros</code> measures the time of each write to an SST file; <code>rocksdb.file.write.{flush|compaction|db.open}.micros</code> measure the time of each write to an SST table (currently only block-based table format) and blob file during flush, compaction, and DB open.</li>
   </ul>
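   
   As a rough illustration of how such timing histograms are read from the Java binding, the sketch below pulls an existing histogram (SST_READ_MICROS) off a Statistics object attached to the DB; whether the new write-side histograms above are exposed as HistogramType constants in this rocksdbjni version is an assumption to verify, so the sketch sticks to a long-standing constant.
   
   ```java
   import org.rocksdb.HistogramData;
   import org.rocksdb.HistogramType;
   import org.rocksdb.Options;
   import org.rocksdb.RocksDB;
   import org.rocksdb.RocksDBException;
   import org.rocksdb.Statistics;
   
   public class StatisticsSketch {
       public static void main(String[] args) throws RocksDBException {
           RocksDB.loadLibrary();
   
           try (Statistics stats = new Statistics();
                Options options = new Options()
                        .setCreateIfMissing(true)
                        .setStatistics(stats); // attach stats collection before opening
                RocksDB db = RocksDB.open(options, "/tmp/rocksdb-stats-sketch")) {
   
               db.put("k".getBytes(), "v".getBytes());
               db.get("k".getBytes());
   
               // Read-side histogram that has long been in the Java API; the 8.11
               // write-side histograms would be read the same way if/when they are
               // exposed as HistogramType constants.
               HistogramData h = stats.getHistogramData(HistogramType.SST_READ_MICROS);
               System.out.printf("sst reads: avg=%.2fus p99=%.2fus%n",
                       h.getAverage(), h.getPercentile99());
           }
       }
   }
   ```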
   <h3>Public API Changes</h3>
   <ul>
   <li>Added another enumerator <code>kVerify</code> to enum class 
<code>FileOperationType</code> in listener.h. Update your <code>switch</code> 
statements as needed.</li>
   <li>Add CompressionOptions to the CompressedSecondaryCacheOptions structure to allow users to specify library-specific options when creating the compressed secondary cache.</li>
   <li>Deprecated several options: 
<code>level_compaction_dynamic_file_size</code>, 
<code>ignore_max_compaction_bytes_for_input</code>, 
<code>check_flush_compaction_key_order</code>, 
<code>flush_verify_memtable_count</code>, 
<code>compaction_verify_record_count</code>, 
<code>fail_if_options_file_error</code>, and 
<code>enforce_single_del_contracts</code></li>
   <li>Exposed options ttl via the C API.</li>
   </ul>
   <h3>Behavior Changes</h3>
   <ul>
   <li><code>rocksdb.blobdb.blob.file.write.micros</code> expands to also measure time writing the header and footer. Therefore the COUNT may be higher and values may be smaller than before. For stacked BlobDB, it no longer measures the time of explicitly flushing the blob file.</li>
   <li>Files will be compacted to the next level if their data age exceeds periodic_compaction_seconds, except for files on the last level (see the Java sketch after this list).</li>
   <li>Reduced the compaction debt ratio trigger for scheduling parallel compactions.</li>
   <li>For leveled compaction with default compaction pri 
(kMinOverlappingRatio), files marked for compaction will be prioritized over 
files not marked when picking a file from a level for compaction.</li>
   </ul>
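   
   To make the periodic_compaction_seconds change above concrete, here is a sketch of setting that option through the Java binding; it assumes the option is exposed on Options as setPeriodicCompactionSeconds (the usual naming for mutable column-family options), and the 30-day value and path are placeholders.
   
   ```java
   import java.util.concurrent.TimeUnit;
   
   import org.rocksdb.Options;
   import org.rocksdb.RocksDB;
   import org.rocksdb.RocksDBException;
   
   public class PeriodicCompactionSketch {
       public static void main(String[] args) throws RocksDBException {
           RocksDB.loadLibrary();
   
           try (Options options = new Options()
                        .setCreateIfMissing(true)
                        // With 8.11, files whose data age exceeds this are compacted to
                        // the next level (except on the last level) rather than rewritten
                        // in place.
                        .setPeriodicCompactionSeconds(TimeUnit.DAYS.toSeconds(30));
                RocksDB db = RocksDB.open(options, "/tmp/rocksdb-periodic-sketch")) {
               db.put("k".getBytes(), "v".getBytes());
           }
       }
   }
   ```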
   <h3>Bug Fixes</h3>
   <ul>
   <li>Fix a bug in auto_readahead_size that, combined with IndexType::kBinarySearchWithFirstKey, caused reads to fail or the iterator to land at a wrong key</li>
   <li>Fixed some cases in which DB file corruption was detected but ignored on 
creating a backup with BackupEngine.</li>
   <li>Fix bugs where <code>rocksdb.blobdb.blob.file.synced</code> included blob files that failed to get synced and <code>rocksdb.blobdb.blob.file.bytes.written</code> included blob bytes that failed to get written.</li>
   <li>Fixed a possible memory leak or crash on a failure (such as I/O error) 
in automatic atomic flush of multiple column families.</li>
   <li>Fixed some cases of in-memory data corruption using mmap reads with 
<code>BackupEngine</code>, <code>sst_dump</code>, or <code>ldb</code>.</li>
   <li>Fixed issues with experimental 
<code>preclude_last_level_data_seconds</code> option that could interfere with 
expected data tiering.</li>
   <li>Fixed the handling of the edge case when all existing blob files become 
unreferenced. Such files are now correctly deleted.</li>
   </ul>
   <h2>RocksDB 8.10.2</h2>
   <h2>8.10.2 (02/16/2024)</h2>
   <ul>
   <li>Update zlib to 1.3.1 for Java builds</li>
   </ul>
   <h2>8.10.1 (01/16/2024)</h2>
   <h3>Bug Fixes</h3>
   <ul>
   <li>Fix a bug in auto_readahead_size that, combined with IndexType::kBinarySearchWithFirstKey, caused reads to fail or the iterator to land at a wrong key</li>
   </ul>
   <h2>RocksDB 8.10.0</h2>
   <h2>8.10.0 (2023-12-15)</h2>
   <h3>New Features</h3>
   <ul>
   <li>Provide support for async_io to trim readahead_size by doing block cache 
lookup</li>
   <li>Added initial wide-column support in <code>WriteBatchWithIndex</code>. 
This includes the <code>PutEntity</code> API and support for wide columns in 
the existing read APIs (<code>GetFromBatch</code>, 
<code>GetFromBatchAndDB</code>, <code>MultiGetFromBatchAndDB</code>, and 
<code>BaseDeltaIterator</code>).</li>
   </ul>
   <!-- raw HTML omitted -->
   </blockquote>
   <p>... (truncated)</p>
   </details>
   <details>
   <summary>Changelog</summary>
   <p><em>Sourced from <a href="https://github.com/facebook/rocksdb/blob/v8.11.3/HISTORY.md">org.rocksdb:rocksdbjni's changelog</a>.</em></p>
   <blockquote>
   <h2>8.11.3 (02/27/2024)</h2>
   <ul>
   <li>Correct CMake Javadoc and source jar builds</li>
   </ul>
   <h2>8.11.2 (02/16/2024)</h2>
   <ul>
   <li>Update zlib to 1.3.1 for Java builds</li>
   </ul>
   <h2>8.11.1 (01/25/2024)</h2>
   <h3>Bug Fixes</h3>
   <ul>
   <li>Fix a bug where older data of an ingested key can be returned for reads when universal compaction is used</li>
   <li>Apply appropriate rate limiting and priorities in more places.</li>
   </ul>
   <h2>8.11.0 (01/19/2024)</h2>
   <h3>New Features</h3>
   <ul>
   <li>Add new statistics: <code>rocksdb.sst.write.micros</code> measures the time of each write to an SST file; <code>rocksdb.file.write.{flush|compaction|db.open}.micros</code> measure the time of each write to an SST table (currently only block-based table format) and blob file during flush, compaction, and DB open.</li>
   </ul>
   <h3>Public API Changes</h3>
   <ul>
   <li>Added another enumerator <code>kVerify</code> to enum class 
<code>FileOperationType</code> in listener.h. Update your <code>switch</code> 
statements as needed.</li>
   <li>Add CompressionOptions to the CompressedSecondaryCacheOptions structure to allow users to specify library-specific options when creating the compressed secondary cache.</li>
   <li>Deprecated several options: 
<code>level_compaction_dynamic_file_size</code>, 
<code>ignore_max_compaction_bytes_for_input</code>, 
<code>check_flush_compaction_key_order</code>, 
<code>flush_verify_memtable_count</code>, 
<code>compaction_verify_record_count</code>, 
<code>fail_if_options_file_error</code>, and 
<code>enforce_single_del_contracts</code></li>
   <li>Exposed options ttl via the C API.</li>
   </ul>
   <h3>Behavior Changes</h3>
   <ul>
   <li><code>rocksdb.blobdb.blob.file.write.micros</code> expands to also measure time writing the header and footer. Therefore the COUNT may be higher and values may be smaller than before. For stacked BlobDB, it no longer measures the time of explicitly flushing the blob file.</li>
   <li>Files will be compacted to the next level if their data age exceeds periodic_compaction_seconds, except for files on the last level.</li>
   <li>Reduced the compaction debt ratio trigger for scheduling parallel compactions.</li>
   <li>For leveled compaction with default compaction pri 
(kMinOverlappingRatio), files marked for compaction will be prioritized over 
files not marked when picking a file from a level for compaction.</li>
   </ul>
   <h3>Bug Fixes</h3>
   <ul>
   <li>Fix a bug in auto_readahead_size that, combined with IndexType::kBinarySearchWithFirstKey, caused reads to fail or the iterator to land at a wrong key</li>
   <li>Fixed some cases in which DB file corruption was detected but ignored on 
creating a backup with BackupEngine.</li>
   <li>Fix bugs where <code>rocksdb.blobdb.blob.file.synced</code> included blob files that failed to get synced and <code>rocksdb.blobdb.blob.file.bytes.written</code> included blob bytes that failed to get written.</li>
   <li>Fixed a possible memory leak or crash on a failure (such as I/O error) 
in automatic atomic flush of multiple column families.</li>
   <li>Fixed some cases of in-memory data corruption using mmap reads with 
<code>BackupEngine</code>, <code>sst_dump</code>, or <code>ldb</code>.</li>
   <li>Fixed issues with experimental 
<code>preclude_last_level_data_seconds</code> option that could interfere with 
expected data tiering.</li>
   <li>Fixed the handling of the edge case when all existing blob files become 
unreferenced. Such files are now correctly deleted.</li>
   </ul>
   <h2>8.10.0 (12/15/2023)</h2>
   <h3>New Features</h3>
   <ul>
   <li>Provide support for async_io to trim readahead_size by doing block cache 
lookup</li>
   <li>Added initial wide-column support in <code>WriteBatchWithIndex</code>. 
This includes the <code>PutEntity</code> API and support for wide columns in 
the existing read APIs (<code>GetFromBatch</code>, 
<code>GetFromBatchAndDB</code>, <code>MultiGetFromBatchAndDB</code>, and 
<code>BaseDeltaIterator</code>).</li>
   </ul>
   <h3>Public API Changes</h3>
   <ul>
   <li>Custom implementations of <code>TablePropertiesCollectorFactory</code> 
may now return a <code>nullptr</code> collector to decline processing a file, 
reducing callback overheads in such cases.</li>
   </ul>
   <h3>Behavior Changes</h3>
   <ul>
   <li>Made ReadOptions.auto_readahead_size default to true, which enables prefetching optimizations for forward scans when iterate_upper_bound and a block_cache are also specified (see the Java sketch after this list).</li>
   <li>Compactions can be scheduled in parallel in an additional scenario: high 
compaction debt relative to the data size</li>
   <li>HyperClockCache now has built-in protection against excessive CPU consumption under the extreme stress condition of no (or very few) evictable cache entries, which can slightly increase memory usage under such conditions. The new option <code>HyperClockCacheOptions::eviction_effort_cap</code> controls the space-time trade-off of the response. The default should be generally well-balanced, with no measurable effect on normal operation.</li>
   </ul>
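   
   The scan shape that the auto_readahead_size default affects is a forward iteration with an iterate_upper_bound and a block cache configured; the sketch below shows that shape from the Java binding (table config, cache size, key range, and path are illustrative placeholders).
   
   ```java
   import org.rocksdb.BlockBasedTableConfig;
   import org.rocksdb.LRUCache;
   import org.rocksdb.Options;
   import org.rocksdb.ReadOptions;
   import org.rocksdb.RocksDB;
   import org.rocksdb.RocksDBException;
   import org.rocksdb.RocksIterator;
   import org.rocksdb.Slice;
   
   public class ForwardScanSketch {
       public static void main(String[] args) throws RocksDBException {
           RocksDB.loadLibrary();
   
           // A block cache is one precondition for the readahead trimming.
           BlockBasedTableConfig tableConfig = new BlockBasedTableConfig()
                   .setBlockCache(new LRUCache(64 * 1024 * 1024));
   
           try (Options options = new Options()
                        .setCreateIfMissing(true)
                        .setTableFormatConfig(tableConfig);
                RocksDB db = RocksDB.open(options, "/tmp/rocksdb-scan-sketch");
                Slice upperBound = new Slice("key-z".getBytes());
                ReadOptions readOptions = new ReadOptions()
                        .setIterateUpperBound(upperBound); // the other precondition
                RocksIterator it = db.newIterator(readOptions)) {
   
               // Forward scan; this is the case where readahead_size is trimmed
               // automatically now that auto_readahead_size defaults to true.
               for (it.seek("key-a".getBytes()); it.isValid(); it.next()) {
                   // process it.key() / it.value()
               }
           }
       }
   }
   ```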
   <h3>Bug Fixes</h3>
   <!-- raw HTML omitted -->
   </blockquote>
   <p>... (truncated)</p>
   </details>
   <details>
   <summary>Commits</summary>
   <ul>
   <li><a href="https://github.com/facebook/rocksdb/commit/c2467b141e840fdba5b3a1810763043e56449fb9"><code>c2467b1</code></a> Update version.h and HISTORY for 8.11.3</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/ef0cadd38f4ba16ef802f9a4d7ec899b524e951e"><code>ef0cadd</code></a> Correct CMake Javadoc and source jar builds (<a href="https://redirect.github.com/facebook/rocksdb/issues/12371">#12371</a>)</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/9d9014d4d0b9ef27abdce0f83d91277027741e69"><code>9d9014d</code></a> Update HISTORY and version for 8.11.2</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/412b7fd2bdf85701897ece22de3719d8d9b566d0"><code>412b7fd</code></a> Update ZLib to 1.3.1 (<a href="https://redirect.github.com/facebook/rocksdb/issues/12358">#12358</a>)</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/8a201ce68a1af02dd89e3b8004a99a9466c35535"><code>8a201ce</code></a> Mark destructors as overridden (<a href="https://redirect.github.com/facebook/rocksdb/issues/12324">#12324</a>)</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/a53094983b30af1e618dfff99d9d0af5f914abc7"><code>a530949</code></a> Version bump and HISTORY for 8.11.1 patch</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/cc20520d5fb6ae80e8d049dac0b4a142d541541e"><code>cc20520</code></a> Pass rate_limiter_priority from SequentialFileReader to FS (<a href="https://redirect.github.com/facebook/rocksdb/issues/12296">#12296</a>)</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/36797a43fe08e4e45a4d147e3d05b1fa177855e5"><code>36797a4</code></a> Rate-limit un-ratelimited flush/compaction code paths (<a href="https://redirect.github.com/facebook/rocksdb/issues/12290">#12290</a>)</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/7c06a1e503d8c6914dcd19456a8913cca12e164e"><code>7c06a1e</code></a> Fix bug of newer ingested data assigned with an older seqno (<a href="https://redirect.github.com/facebook/rocksdb/issues/12257">#12257</a>)</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/d9e7f6a3b9087e67cbbede41abe39d8e10d8b191"><code>d9e7f6a</code></a> Fix UB/crash in new SeqnoToTimeMapping::CopyFromSeqnoRange (<a href="https://redirect.github.com/facebook/rocksdb/issues/12293">#12293</a>)</li>
   <li>Additional commits viewable in <a href="https://github.com/facebook/rocksdb/compare/v8.3.2...v8.11.3">compare view</a></li>
   </ul>
   </details>
   <br />
   
   
   [![Dependabot compatibility 
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=org.rocksdb:rocksdbjni&package-manager=gradle&previous-version=8.3.2&new-version=8.11.3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   <details>
   <summary>Dependabot commands and options</summary>
   <br />
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot show <dependency name> ignore conditions` will show all of 
the ignore conditions of the specified dependency
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   
   
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
