Re: [PR] Core, Spark: Remove dangling deletes as part of RewriteDataFilesAction [iceberg]

2024-05-04 Thread via GitHub


nastra commented on code in PR #9724:
URL: https://github.com/apache/iceberg/pull/9724#discussion_r1589914458


##
spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/actions/TestRemoveDanglingDeleteAction.java:
##
@@ -0,0 +1,438 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.spark.actions;
+
+import static org.apache.iceberg.types.Types.NestedField.optional;
+
+import java.io.File;
+import java.nio.file.Path;
+import java.util.List;
+import java.util.Set;
+import java.util.stream.Collectors;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.iceberg.DataFile;
+import org.apache.iceberg.DataFiles;
+import org.apache.iceberg.DeleteFile;
+import org.apache.iceberg.FileMetadata;
+import org.apache.iceberg.PartitionSpec;
+import org.apache.iceberg.Schema;
+import org.apache.iceberg.Table;
+import org.apache.iceberg.TableProperties;
+import org.apache.iceberg.actions.RemoveDanglingDeleteFiles;
+import org.apache.iceberg.hadoop.HadoopTables;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableList;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableMap;
+import org.apache.iceberg.spark.TestBase;
+import org.apache.iceberg.types.Types;
+import org.apache.spark.sql.Encoders;
+import org.junit.jupiter.api.AfterEach;
+import org.junit.jupiter.api.Assertions;

Review Comment:
   please use Assertions from AssertJ






Re: [PR] feat: add `RollingManifestWriter` [iceberg-python]

2024-05-04 Thread via GitHub


felixscherz commented on PR #650:
URL: https://github.com/apache/iceberg-python/pull/650#issuecomment-2094148537

   Hi, I finally had some time to continue working on this.
   
   Based on your suggestions @geruh, I added a `tell` method to the `OutputStream` protocol that returns the number of bytes written to the stream.
   I then added `__len__` to `AvroOutputFile`, which calls out to either `OutputFile` or `OutputStream` to get the number of bytes written, depending on whether the stream is closed or not.
   Finally, I extended `ManifestWriter` with a `__len__` method that calls into `AvroOutputFile`.
   
   I initially tried to extend `OutputStream` with `__len__`, until I realized that both `FileIO` implementations, `fsspec` and `pyarrow`, offer `OutputStream` implementations that support the `tell` method, while neither supports `__len__`.
   
   If we wanted to go with `__len__` instead of simply using `tell`, we might have to implement custom `FsspecOutputStream` and `PyarrowOutputStream` classes that implement `__len__`. That might well be the cleaner approach, but it would introduce a bit more abstraction.
   
   What do you think?
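   
   A minimal sketch of the approach described above, for illustration only: the class and method names mirror this description rather than the actual PyIceberg code, and the closed-stream branch stands in for asking the `OutputFile` for its length.
   ```python
   from typing import Protocol


   class OutputStream(Protocol):
       """Write protocol, extended with a tell() method as suggested."""

       def write(self, b: bytes) -> int: ...
       def tell(self) -> int: ...  # number of bytes written so far
       def close(self) -> None: ...


   class AvroOutputFile:
       """Illustrative stand-in for the Avro output file wrapper."""

       def __init__(self, stream: OutputStream) -> None:
           self._stream = stream
           self._closed = False
           self._len_at_close = 0

       def close(self) -> None:
           # Record the final size before closing the underlying stream.
           self._len_at_close = self._stream.tell()
           self._stream.close()
           self._closed = True

       def __len__(self) -> int:
           # Open: ask the stream; closed: use the recorded size
           # (standing in for asking the OutputFile for its length).
           return self._len_at_close if self._closed else self._stream.tell()


   class ManifestWriter:
       """__len__ simply delegates to the underlying Avro output file."""

       def __init__(self, output_file: AvroOutputFile) -> None:
           self._output_file = output_file

       def __len__(self) -> int:
           return len(self._output_file)
   ```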
   
   
   





Re: [PR] View Spec implementation [iceberg-rust]

2024-05-04 Thread via GitHub


c-thiel commented on PR #331:
URL: https://github.com/apache/iceberg-rust/pull/331#issuecomment-2094154795

   @Fokko from my side this is good to merge - types are complete and tests are 
passing.





Re: [I] Failing to create a table using pyiceberg [iceberg-python]

2024-05-04 Thread via GitHub


felixscherz commented on issue #692:
URL: https://github.com/apache/iceberg-python/issues/692#issuecomment-2094183984

   Hi, I just tried this on my machine. Could you double-check the properties that you pass to `load_catalog`? I believe the Glue catalog looks for the AWS profile name under `profile_name` instead of `profile` (see [`GlueCatalog`](https://github.com/apache/iceberg-python/blob/7bd5d9e6c32bcc5b46993d6bfaeed50471e972ae/pyiceberg/catalog/glue.py#L280-L295)).
   
   This works for me after setting up an S3 bucket and a Glue database:
   ```python
   # Instantiate glue catalog
   catalog = load_catalog(
       "glue",
       **{
           "type": "glue",
           "s3.region": "",
           "s3.access-key-id": "",
           "s3.secret-access-key": "",
           "profile_name": ""
       },
   )
   ```





Re: [I] Implement the equality delete writer [iceberg-rust]

2024-05-04 Thread via GitHub


Dysprosium0626 commented on issue #341:
URL: https://github.com/apache/iceberg-rust/issues/341#issuecomment-2094225277

   Hi, I have nearly completed adding `EqualityDeleteWriter`, but I've run into a problem.
   My implementation is here: https://github.com/Dysprosium0626/iceberg-rust/blob/add_equality_delete_writer/crates/iceberg/src/writer/base_writer/equality_delete_writer.rs
   
   Basically, in my test case, I build a `ParquetWriterBuilder` from a schema and pass it into `EqualityDeleteFileWriterBuilder`.
   ```rust
   // prepare writer
   let pb = ParquetWriterBuilder::new(
       WriterProperties::builder().build(),
       to_write.schema(),
       file_io.clone(),
       location_gen,
       file_name_gen,
   );
   let equality_ids = vec![1, 3];
   let mut equality_delete_writer = EqualityDeleteFileWriterBuilder::new(pb)
       .build(EqualityDeleteWriterConfig::new(
           equality_ids,
           schema.clone(),
           PARQUET_FIELD_ID_META_KEY,
       ))
       .await?;
   ```
   The `FieldProjector` filters the columns in the schema by the `equality_ids`, and I tried to generate a `delete_schema` from the fields after projection.
   ```rust
   async fn build(self, config: Self::C) -> Result {
       let (projector, fields) = FieldProjector::new(
           config.schema.fields(),
           &config.equality_ids,
           &config.column_id_meta_key,
       )?;
       let delete_schema = Arc::new(arrow_schema::Schema::new(fields));
       Ok(EqualityDeleteFileWriter {
           inner_writer: Some(self.inner.clone().build().await?),
           projector,
           delete_schema,
           equality_ids: config.equality_ids,
       })
   }
   ```
   **The problem is that I cannot pass the `delete_schema` to the `FileWriterBuilder` (`ParquetWriterBuilder` in this case), so the schema for the inner writer is still the old version (without projection) and the inner writer cannot write the file properly.**
   Do you have any ideas? @ZENOTME 





Re: [I] Implement all functions of BoundPredicateVisitor for ManifestFilterVisitor [iceberg-rust]

2024-05-04 Thread via GitHub


marvinlanhenke commented on issue #350:
URL: https://github.com/apache/iceberg-rust/issues/350#issuecomment-2094255209

   @s-akhtar-baig 
   are you still working on this? If you like, I could try to help you out here.





Re: [I] Implement all functions of BoundPredicateVisitor for ManifestFilterVisitor [iceberg-rust]

2024-05-04 Thread via GitHub


s-akhtar-baig commented on issue #350:
URL: https://github.com/apache/iceberg-rust/issues/350#issuecomment-2094259369

   Thanks @marvinlanhenke, but I have made good progress on the task and will 
be creating a pull request in the next few days. 





Re: [PR] Compare `Schema` and `StructType` fields irrespective of ordering [iceberg-python]

2024-05-04 Thread via GitHub


kevinjqliu commented on code in PR #700:
URL: https://github.com/apache/iceberg-python/pull/700#discussion_r1590082827


##
tests/avro/test_resolver.py:
##
@@ -372,7 +372,7 @@ def test_writer_ordering() -> None:
 ),
 )
 
-expected = StructWriter(((1, DoubleWriter()), (0, StringWriter(
+expected = StructWriter(((0, DoubleWriter()), (1, StringWriter(

Review Comment:
   Not sure if this change is semantically correct. This test is affected 
because `resolve_writer` compares the two given schemas (`record_schema` and 
`file_schema`)
   
   
https://github.com/apache/iceberg-python/blob/7bd5d9e6c32bcc5b46993d6bfaeed50471e972ae/pyiceberg/avro/resolver.py#L200-L214
   
   Previously, the comparison returned `False` due to the different ordering.






Re: [PR] Add `InclusiveMetricsEvaluator` [iceberg-rust]

2024-05-04 Thread via GitHub


sdd commented on PR #347:
URL: https://github.com/apache/iceberg-rust/pull/347#issuecomment-2094359196

   FAO @Fokko @liurenjie1024 @marvinlanhenke:
   
   I've finished adding tests for this - it's ready for review, PTAL! 😄 





Re: [I] An exception occurred while writing iceberg data through Spark: org. apache. iceberg. exceptions. CommitFailedException: metadata location has changed [iceberg]

2024-05-04 Thread via GitHub


srcnblgc commented on issue #9178:
URL: https://github.com/apache/iceberg/issues/9178#issuecomment-2094374588

   Is it possible to let me know how I can increase the number of retries in a Glue-backed environment?
   
   Or, how can I set `write.update.isolation-level` and `write.merge.isolation-level` (from https://iceberg.apache.org/docs/1.2.1/configuration/#write-properties) via Python (which will be running on Lambda) with a Glue-backed environment?
   
   Here is what I was trying:
   --
   
   catalog = load_catalog("glue", **{"type": "glue"})
   
   df = pq.read_table("~/Desktop/pyiceberg_py/yellow_tripdata_2023-01.parquet")
   
   tbl = catalog.load_table(identifier="pyiceberg_sb.pyiceberg_nyc_taxi")
   tbl.append(df)
   print(len(tbl.scan().to_arrow()))
   --
   It is failing with `pyiceberg.exceptions.CommitFailedException: Requirement failed: branch main has changed: expected id 3182993835964802089, found 1838899954983424460` with only 2 concurrent inserts.
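   
   For reference, table-level write properties such as the isolation levels above can be set from Python through a table transaction. A minimal sketch, assuming a pyiceberg version whose `Transaction.set_properties` accepts keyword-style property updates, and without confirming whether pyiceberg actually honors retry-related properties at commit time:
   ```python
   from pyiceberg.catalog import load_catalog

   catalog = load_catalog("glue", **{"type": "glue"})
   tbl = catalog.load_table("pyiceberg_sb.pyiceberg_nyc_taxi")

   # Property names come from
   # https://iceberg.apache.org/docs/1.2.1/configuration/#write-properties
   with tbl.transaction() as tx:
       tx.set_properties(**{
           "write.update.isolation-level": "snapshot",
           "write.merge.isolation-level": "snapshot",
           # Assumption: commit retries are driven by this table property;
           # whether pyiceberg reads it at commit time is not confirmed here.
           "commit.retry.num-retries": "10",
       })
   ```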





Re: [PR] Add the build from source section [iceberg-go]

2024-05-04 Thread via GitHub


zeroshade commented on code in PR #70:
URL: https://github.com/apache/iceberg-go/pull/70#discussion_r1590173336


##
README.md:
##
@@ -21,6 +21,19 @@
 
 `iceberg` is a Golang implementation of the [Iceberg table 
spec](https://iceberg.apache.org/spec/).
 
+## Build From Source
+
+### Prerequisites
+
+* Go 1.21 or later
+
+### Build
+
+```shell
+$ git clone https://github.com/apache/iceberg-go.git
+$ cd iceberg-go/cmd/iceberg && go build .
+```

Review Comment:
   That makes sense, and is fine. Though I'm not sure it necessarily needs to be explained for developers who are going to work on it. But I guess it's fine.






Re: [PR] Support partial deletes [iceberg-python]

2024-05-04 Thread via GitHub


HonahX commented on code in PR #569:
URL: https://github.com/apache/iceberg-python/pull/569#discussion_r1584249768


##
pyiceberg/table/__init__.py:
##
@@ -443,6 +468,54 @@ def overwrite(
             for data_file in data_files:
                 update_snapshot.append_data_file(data_file)
 
+    def delete(self, delete_filter: BooleanExpression, snapshot_properties: Dict[str, str] = EMPTY_DICT) -> None:
+        if (
+            self.table_metadata.properties.get(TableProperties.DELETE_MODE, TableProperties.DELETE_MODE_COPY_ON_WRITE)
+            == TableProperties.DELETE_MODE_MERGE_ON_READ
+        ):
+            raise NotImplementedError("Merge on read is not yet supported")
+
+        with self.update_snapshot(snapshot_properties=snapshot_properties).delete() as delete_snapshot:
+            delete_snapshot.delete_by_predicate(delete_filter)
+
+        # Check if there are any files that require an actual rewrite of a data file
+        if delete_snapshot.rewrites_needed is True:
+            # When we want to filter out certain rows, we want to invert the expression
+            # delete id = 22 means that we want to look for that value, and then remove
+            # if from the Parquet file
+            delete_row_filter = Not(delete_filter)

Review Comment:
   How about `preserve_row_filter` or `rows_to_keep_filter`? I find it more straightforward to say "when we want to filter out certain rows, we want to preserve the rest of the rows that do not meet the filter." But this is totally a personal preference.



##
pyiceberg/table/__init__.py:
##
@@ -434,6 +458,9 @@ def overwrite(
         if table_arrow_schema != df.schema:
             df = df.cast(table_arrow_schema)
 
+        with self.update_snapshot(snapshot_properties=snapshot_properties).delete() as delete_snapshot:
+            delete_snapshot.delete_by_predicate(overwrite_filter)

Review Comment:
   My understanding is that, currently, it will
   
   - delete a data file if all of its rows satisfy the `overwrite_filter`
   - append the df to the table.
   
   Is this the expected behavior? It looks like we could even change the `overwrite` below to `append`, because we only `append_data_file`.
   
   I feel it would be reasonable for `overwrite_filter` to have the same functionality as the `delete_filter` below, i.e. we partially overwrite a data file if only some of its rows match the filter.



##
pyiceberg/table/__init__.py:
##
@@ -443,6 +468,54 @@ def overwrite(
             for data_file in data_files:
                 update_snapshot.append_data_file(data_file)
 
+    def delete(self, delete_filter: BooleanExpression, snapshot_properties: Dict[str, str] = EMPTY_DICT) -> None:
+        if (
+            self.table_metadata.properties.get(TableProperties.DELETE_MODE, TableProperties.DELETE_MODE_COPY_ON_WRITE)
+            == TableProperties.DELETE_MODE_MERGE_ON_READ
+        ):
+            raise NotImplementedError("Merge on read is not yet supported")
+
+        with self.update_snapshot(snapshot_properties=snapshot_properties).delete() as delete_snapshot:
+            delete_snapshot.delete_by_predicate(delete_filter)
+
+        # Check if there are any files that require an actual rewrite of a data file
+        if delete_snapshot.rewrites_needed is True:
+            # When we want to filter out certain rows, we want to invert the expression
+            # delete id = 22 means that we want to look for that value, and then remove
+            # if from the Parquet file
+            delete_row_filter = Not(delete_filter)
+            with self.update_snapshot(snapshot_properties=snapshot_properties).overwrite() as overwrite_snapshot:
+                # Potential optimization is where we check if the files actually contain relevant data.
+                files = self._scan(row_filter=delete_filter).plan_files()
+
+                counter = itertools.count(0)
+
+                # This will load the Parquet file into memory, including:
+                #   - Filter out the rows based on the delete filter
+                #   - Projecting it to the current schema
+                #   - Applying the positional deletes if they are there
+                # When writing
+                #   - Apply the latest partition-spec
+                #   - And sort order when added
+                for original_file in files:
+                    df = project_table(
+                        tasks=[original_file],
+                        table_metadata=self._table.metadata,
+                        io=self._table.io,
+                        row_filter=delete_row_filter,
+                        projected_schema=self.table_metadata.schema(),
+                    )
+                    for data_file in _dataframe_to_data_files(
+                        io=self._table.io,
+                        df=df,
+                        table_metadata=self._table.metadata,
+  

Re: [PR] REST: honor OAuth config sent by the server [iceberg]

2024-05-04 Thread via GitHub


flyrain commented on code in PR #10256:
URL: https://github.com/apache/iceberg/pull/10256#discussion_r1590181761


##
core/src/main/java/org/apache/iceberg/rest/RESTSessionCatalog.java:
##
@@ -215,6 +215,12 @@ public void initialize(String name, Map 
unresolved) {
 this.paths = ResourcePaths.forCatalogProperties(mergedProps);
 
 String token = mergedProps.get(OAuth2Properties.TOKEN);
+// re-resolve these variables in case they were overridden by the config 
endpoint
+credential = mergedProps.get(OAuth2Properties.CREDENTIAL);
+scope = mergedProps.getOrDefault(OAuth2Properties.SCOPE, 
OAuth2Properties.CATALOG_SCOPE);
+oauth2ServerUri =
+mergedProps.getOrDefault(OAuth2Properties.OAUTH2_SERVER_URI, 
ResourcePaths.tokens());

Review Comment:
   It's a good idea to get certain configs from the server. Thanks for doing this. Should we add these to the config endpoint spec if we are doing this? Without something in the spec, we are relying on convention. One minor concern is that a property in the config response from the server side may happen to have the same name but a different meaning. For example, a property named `scope` in the config response may mean different things. cc @danielcweeks 






Re: [PR] REST: honor OAuth config sent by the server [iceberg]

2024-05-04 Thread via GitHub


flyrain commented on code in PR #10256:
URL: https://github.com/apache/iceberg/pull/10256#discussion_r1590182083


##
core/src/main/java/org/apache/iceberg/rest/RESTSessionCatalog.java:
##
@@ -215,6 +215,12 @@ public void initialize(String name, Map 
unresolved) {
 this.paths = ResourcePaths.forCatalogProperties(mergedProps);
 
 String token = mergedProps.get(OAuth2Properties.TOKEN);
+// re-resolve these variables in case they were overridden by the config 
endpoint
+credential = mergedProps.get(OAuth2Properties.CREDENTIAL);

Review Comment:
   It seems odd to me that the client gets the credential from the server's config endpoint, which doesn't require authentication. Does that mean any client can access the REST catalog? I think we still need the client to provide its own credential.






Re: [PR] Core: Introduce AuthConfig [iceberg]

2024-05-04 Thread via GitHub


flyrain commented on code in PR #10161:
URL: https://github.com/apache/iceberg/pull/10161#discussion_r1590183580


##
aws/src/main/java/org/apache/iceberg/aws/s3/signer/S3V4RestSignerClient.java:
##
@@ -213,12 +214,13 @@ private AuthSession authSession() {
   expiresAtMillis(properties()),
   new AuthSession(
   ImmutableMap.of(),
-  token,
-  null,
-  credential(),
-  SCOPE,
-  oauth2ServerUri(),
-  optionalOAuthParams(;
+  AuthConfig.builder()
+  .token(token)
+  .credential(credential())
+  .scope(SCOPE)

Review Comment:
   I'd consider `SCOPE` as one of the optional params. It is optional in both 
[token exchange 
flow](https://datatracker.ietf.org/doc/html/rfc8693#name-token-exchange-request-and-)
 and [client credential 
flow](https://datatracker.ietf.org/doc/html/rfc6749#section-4.4).






[PR] Build: Bump software.amazon.awssdk:bom from 2.25.40 to 2.25.45 [iceberg]

2024-05-04 Thread via GitHub


dependabot[bot] opened a new pull request, #10266:
URL: https://github.com/apache/iceberg/pull/10266

   Bumps software.amazon.awssdk:bom from 2.25.40 to 2.25.45.
   
   
   [![Dependabot compatibility 
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=software.amazon.awssdk:bom&package-manager=gradle&previous-version=2.25.40&new-version=2.25.45)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   
   Dependabot commands and options
   
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   
   
   





[PR] Build: Bump nessie from 0.80.0 to 0.81.1 [iceberg]

2024-05-04 Thread via GitHub


dependabot[bot] opened a new pull request, #10267:
URL: https://github.com/apache/iceberg/pull/10267

   Bumps `nessie` from 0.80.0 to 0.81.1.
   Updates `org.projectnessie.nessie:nessie-client` from 0.80.0 to 0.81.1
   
   Updates `org.projectnessie.nessie:nessie-jaxrs-testextension` from 0.80.0 to 
0.81.1
   
   Updates `org.projectnessie.nessie:nessie-versioned-storage-inmemory-tests` 
from 0.80.0 to 0.81.1
   
   Updates `org.projectnessie.nessie:nessie-versioned-storage-testextension` 
from 0.80.0 to 0.81.1
   
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   
   Dependabot commands and options
   
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   
   
   





[PR] Build: Bump com.google.errorprone:error_prone_annotations from 2.27.0 to 2.27.1 [iceberg]

2024-05-04 Thread via GitHub


dependabot[bot] opened a new pull request, #10268:
URL: https://github.com/apache/iceberg/pull/10268

   Bumps [com.google.errorprone:error_prone_annotations](https://github.com/google/error-prone) from 2.27.0 to 2.27.1.
   
   Release notes
   Sourced from com.google.errorprone:error_prone_annotations's releases.
   
   Error Prone 2.27.1
   This release contains all of the changes in 2.27.0, plus a bug fix to ClassInitializationDeadlock (google/error-prone#4378).
   Full Changelog: https://github.com/google/error-prone/compare/v2.27.0...v2.27.1
   
   Commits
   - 464bb93 Release Error Prone 2.27.1
   - bc3309a Flag comparisons of SomeEnum.valueOf(...) to null.
   - 6a8f493 Don't scan into nested enums in ClassInitializationDeadlock
   - c8df502 Make the logic of detecting at least one allowed usage more explicit.
   - See full diff in compare view: https://github.com/google/error-prone/compare/v2.27.0...v2.27.1
   
   
   [![Dependabot compatibility 
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=com.google.errorprone:error_prone_annotations&package-manager=gradle&previous-version=2.27.0&new-version=2.27.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   
   Dependabot commands and options
   
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   
   
   





[PR] Build: Bump net.snowflake:snowflake-jdbc from 3.15.1 to 3.16.0 [iceberg]

2024-05-04 Thread via GitHub


dependabot[bot] opened a new pull request, #10269:
URL: https://github.com/apache/iceberg/pull/10269

   Bumps [net.snowflake:snowflake-jdbc](https://github.com/snowflakedb/snowflake-jdbc) from 3.15.1 to 3.16.0.
   
   Release notes
   Sourced from net.snowflake:snowflake-jdbc's releases.
   
   v3.16.0
   Please Refer to Release Notes at https://docs.snowflake.com/en/release-notes/clients-drivers/jdbc
   
   Changelog
   Sourced from net.snowflake:snowflake-jdbc's changelog.
   
   JDBC Driver 3.16.0, 3.15.1, 3.15.0, 3.14.5, 3.14.4, 3.14.3, 3.14.2, 3.14.1, 3.14.0, 3.13.33, 3.13.32, 3.13.31, 3.13.30
   Each entry: Please Refer to Release Notes at https://docs.snowflake.com/en/release-notes/clients-drivers/jdbc
   ... (truncated)
   
   Commits
   - f12b25a Bump version to 3.16.0 for release (#1741)
   - 2da252d SNOW-1333078: Add explicitly surefire autodetected dependencies (#1739)
   - 6927fff SNOW-1045676: Fix list of reserved keywords (#1670)
   - ed334e6 Structured types backward compatibility for getObject method (#1740)
   - ff0adbd SNOW-1213117: Wrap connection, statement and result set in try with resources...
   - 7cb73ff SNOW-1213117: Wrap connection, statement and result set in try with resources...
   - 8dcd217 SNOW-1157904 write and bindings structured types (#1727)
   - 804ef67 SNOW-1213117: Wrap connection, statement and result set in try with resources...
   - 76a0d3f ... (truncated)

Re: [PR] Build: Bump com.google.cloud:libraries-bom from 26.28.0 to 26.37.0 [iceberg]

2024-05-04 Thread via GitHub


dependabot[bot] closed pull request #10094: Build: Bump 
com.google.cloud:libraries-bom from 26.28.0 to 26.37.0
URL: https://github.com/apache/iceberg/pull/10094





[PR] Build: Bump com.google.cloud:libraries-bom from 26.28.0 to 26.38.0 [iceberg]

2024-05-04 Thread via GitHub


dependabot[bot] opened a new pull request, #10270:
URL: https://github.com/apache/iceberg/pull/10270

   Bumps [com.google.cloud:libraries-bom](https://github.com/googleapis/java-cloud-bom) from 26.28.0 to 26.38.0.
   
   Release notes
   Sourced from com.google.cloud:libraries-bom's releases.
   
   v26.38.0 (GCP Libraries BOM 26.38.0)
   
   Known issues
   The BOM contains a nonexistent artifact com:google:cloud:google-cloud-bigtable-stats:2.39.0. We stopped publishing this artifact because it is not for our SDK users. This will be fixed in the upcoming release (googleapis/java-bigtable#2218).
   
   Here are the differences from the previous version (26.37.0)
   
   New Addition
   - com.google.cloud:google-cloud-backupdr:0.1.0
   
   The group ID of the following artifacts is com.google.cloud.
   
   Notable Changes
   
   google-cloud-bigquery 2.39.1 (prev: 2.38.2)
   - Add ExportDataStats to QueryStatistics (#3244) (e91be80)
   - Add new fields to copy job statistics (#3205) (64bdda8)
   - Add Range object to allow reading range value (#3236) (2c3399d)
   - Add support for inserting Range values (#3246) (ff1ebc6)
   - Add support for ObjectMetadata (#3217) (975df05)
   - Add totalSlotMs to JobStatistics (#3250) (75ea095)
   - Fix BigQuery#listDatasets to include dataset location in the response (#3238) (c50c17b)
   - Remove @InternalApi from TableResult (#3257) (19d92a1)
   - @Nullable annotations on builder methods (#3222) (0c5eed1)
   
   google-cloud-bigquerystorage 3.5.0 (prev: 3.4.0)
   - Add libraries_bom_version in metadata (#1956) (#2463) (b35bd4a)
   
   google-cloud-bigtable 2.39.0 (prev: 2.37.0)
   - Add Data Boost configurations to admin API (f29c5bb)
   - Add feature flag for client side metrics (#2179) (f29c5bb)
   - Migrate to OTEL and enable metrics by default (#2166) (1682939)
   - Admin API changes for databoost (#2181) (3b1886b)
   
   google-cloud-firestore 3.21.0 (prev: 3.20.0)
   - Add Vector Index API (4964982)
   - Add VectorSearch API (4964982)
   
   google-cloud-logging 3.17.0 (prev: 3.16.2)
   - Add Cloud Run Jobs support (#1574) ... (truncated)

[PR] Build: Bump guava from 33.1.0-jre to 33.2.0-jre [iceberg]

2024-05-04 Thread via GitHub


dependabot[bot] opened a new pull request, #10271:
URL: https://github.com/apache/iceberg/pull/10271

   Bumps `guava` from 33.1.0-jre to 33.2.0-jre.
   Updates `com.google.guava:guava` from 33.1.0-jre to 33.2.0-jre
   
   Release notes
   Sourced from com.google.guava:guava's releases.
   
   33.2.0
   
   Android users: Please test recent Guava versions
   If you know of Guava Android users who have not yet upgraded to at least release 33.0.0, please encourage them to upgrade, preferably to today's release, 33.2.0. These releases have begun adding Java 8+ APIs to guava-android. While we don't anticipate problems, we do anticipate that any unexpected problems could force a disruptive rollback. To minimize any disruption, we'd like to catch any such problems early.
   Please let us know of any problems you encounter.
   
   Maven
   - com.google.guava : guava : 33.2.0-jre (or 33.2.0-android)
   
   Jar files
   - 33.2.0-jre.jar: https://repo1.maven.org/maven2/com/google/guava/guava/33.2.0-jre/guava-33.2.0-jre.jar
   - 33.2.0-android.jar: https://repo1.maven.org/maven2/com/google/guava/guava/33.2.0-android/guava-33.2.0-android.jar
   
   Guava requires one runtime dependency, which you can download here:
   - failureaccess-1.0.1.jar: https://repo1.maven.org/maven2/com/google/guava/failureaccess/1.0.1/failureaccess-1.0.1.jar
   
   Javadoc
   - 33.2.0-jre: https://guava.dev/releases/33.2.0-jre/api/docs/
   - 33.2.0-android: https://guava.dev/releases/33.2.0-android/api/docs/
   
   JDiff
   - 33.2.0-jre vs. 33.1.0-jre: https://guava.dev/releases/33.2.0-jre/api/diffs/
   - 33.2.0-android vs. 33.1.0-android: https://guava.dev/releases/33.2.0-android/api/diffs/
   - 33.2.0-android vs. 33.2.0-jre: https://guava.dev/releases/33.2.0-android/api/androiddiffs/
   
   Changelog
   - Dropped testing for Android versions before Lollipop (API Level 21). Guava may stop working under older versions in the future, or it may have done so already.
   - Fixed a GWT compilation breakage under Gradle (#7134). (858caf425c)
   - collect: Made our Collector APIs (e.g., ImmutableList.toImmutableList()) available in guava-android. More Java 8 APIs will follow in future releases. (96fca0b747)
     As always, streams are available to Android code only when that code enables library desugaring or targets a new enough API Level (24 (Nougat) for many stream APIs). (But note that we test only with library desugaring, so we don't currently know if API Level 24 is high enough to use our Collector APIs unless you have also enabled library desugaring.) Guava users who avoid the Collector APIs do not need to meet this requirement.
   - collect: Fixed a potential NullPointerException in ImmutableMap.Builder on a rare code path. (70a98115d8)
   - net: Added HttpHeaders constants Ad-Auction-Allowed, Permissions-Policy-Report-Only, and Sec-GPC. (7dc01ed27b, 41d0d9a833, 38c8017bd44b7919b112f1c99f3d8ce4b058ae5d)
   
   Commits
   - See full diff in compare view: https://github.com/google/guava/commits
   
   Updates `com.google.guava:guava-testlib` from 33.1.0-jre to 33.2.0-jre
   
   Release notes
   Sourced from com.google.guava:guava-testlib's releases (the same 33.2.0 notes as above).
   ... (truncated)

Re: [PR] Build: Bump com.google.cloud:libraries-bom from 26.28.0 to 26.37.0 [iceberg]

2024-05-04 Thread via GitHub


dependabot[bot] commented on PR #10094:
URL: https://github.com/apache/iceberg/pull/10094#issuecomment-2094575869

   Superseded by #10270.





[PR] Build: Bump mkdocs-material from 9.5.19 to 9.5.21 [iceberg]

2024-05-04 Thread via GitHub


dependabot[bot] opened a new pull request, #10272:
URL: https://github.com/apache/iceberg/pull/10272

   Bumps [mkdocs-material](https://github.com/squidfunk/mkdocs-material) from 9.5.19 to 9.5.21.
   
   Release notes
   Sourced from mkdocs-material's releases.
   
   mkdocs-material-9.5.21
   - Fixed #7133: Ensure latest version of Mermaid.js is used
   - Fixed #7125: Added warning for dotfiles in info plugin
   Thanks to @kamilkrzyskow for their contributions
   
   mkdocs-material-9.5.20
   - Fixed deprecation warning in privacy plugin (9.5.19 regression)
   - Fixed #7119: Tags plugin emits deprecation warning (9.5.19 regression)
   - Fixed #7118: Social plugin crashes if fonts are disabled (9.5.19 regression)
   - Fixed #7085: Social plugin crashes on Windows when downloading fonts
   
   Changelog
   Sourced from mkdocs-material's changelog.
   
   mkdocs-material-9.5.21 (2024-05-03)
   - Fixed #7133: Ensure latest version of Mermaid.js is used
   - Fixed #7125: Added warning for dotfiles in info plugin
   
   mkdocs-material-9.5.20 (2024-04-29)
   - Fixed deprecation warning in privacy plugin (9.5.19 regression)
   - Fixed #7119: Tags plugin emits deprecation warning (9.5.19 regression)
   - Fixed #7118: Social plugin crashes if fonts are disabled (9.5.19 regression)
   - Fixed #7085: Social plugin crashes on Windows when downloading fonts
   
   mkdocs-material-9.5.19+insiders-4.53.8 (2024-04-26)
   - Fixed #7052: Preview extension automatically including all pages
   - Fixed #7051: Instant previews mounting on footnote references
   - Fixed #5165: Improved tooltips not mounting in sidebar for typeset plugin
   
   mkdocs-material-9.5.19+insiders-4.53.7 (2024-04-25)
   - Fixed #7060: Incorrect resolution of translation when using static-i18n
   
   mkdocs-material-9.5.19 (2024-04-25)
   - Updated MkDocs to 1.6 and limited version to < 2
   - Updated Docker image to latest Alpine Linux
   - Removed setup.py, now that GitHub fully understands pyproject.toml
   - Improved interop of social plugin with third-party MkDocs themes
   - Fixed #7099: Blog reading time not rendered correctly for Japanese
   - Fixed #7097: Improved resilience of tags plugin when no tags are given
   - Fixed #7090: Active tab indicator in nested content tabs rendering bug
   
   mkdocs-material-9.5.18 (2024-04-16)
   - Refactored tooltips implementation to fix positioning issues
   - Fixed #7044: Rendering glitch when hovering contributor avatar in Chrome
   - Fixed #7043: Highlighted lines in code blocks cutoff on mobile
   - Fixed #6910: Incorrect position of tooltip for page status in sidebar
   - Fixed #6760: Incorrect position and overly long tooltip in tables
   - Fixed #6488: Incorrect position and cutoff tooltip in content tabs
   
   mkdocs-material-9.5.17+insiders-4.53.6 (2024-04-05)
   - Ensure working directory is set for projects when using projects plugin
   - Fixed #6970: Incorrect relative paths in git submodules with projects plugin
   
   mkdocs-material-9.5.17+insiders-4.53.5 (2024-04-02)
   - Fixed social plugin crashing when no colors are specified in palettes
   
   ... (truncated)
   
   Commits
   - d1161b4 ... (truncated)

Re: [PR] feat: add `ExpressionEvaluator` [iceberg-rust]

2024-05-04 Thread via GitHub


sdd commented on code in PR #363:
URL: https://github.com/apache/iceberg-rust/pull/363#discussion_r1590223772


##
crates/iceberg/src/expr/visitors/expression_evaluator.rs:
##
@@ -0,0 +1,819 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+use fnv::FnvHashSet;
+
+use crate::{
+    expr::{BoundPredicate, BoundReference},
+    spec::{DataFile, Datum, PrimitiveLiteral, Struct},
+    Error, ErrorKind, Result,
+};
+
+use super::bound_predicate_visitor::{visit, BoundPredicateVisitor};
+
+/// Evaluates a [`DataFile`]'s partition [`Struct`] to check
+/// if the partition tuples match the given [`BoundPredicate`].
+///
+/// Use within [`TableScan`] to prune the list of [`DataFile`]s
+/// that could potentially match the TableScan's filter.
+#[derive(Debug)]
+pub(crate) struct ExpressionEvaluator {
+    /// The provided partition filter.
+    partition_filter: BoundPredicate,

Review Comment:
   OK, makes sense! 


