MonkeyCanCode commented on issue #625:
URL: https://github.com/apache/polaris/issues/625#issuecomment-2585843537
So this is actually due to a missing region setting on the Spark side rather than
the Polaris side. Here is how to reproduce the issue:
```
scala> spark.sql("INSERT INTO quickstart_table VALUES (1, 'some data')")
...
software.amazon.awssdk.core.exception.SdkClientException: Unable to load
region from any of the providers in the chain
software.amazon.awssdk.regions.providers.DefaultAwsRegionProviderChain@1315b5e8:
[software.amazon.awssdk.regions.providers.SystemSettingsRegionProvider@350a6bf2:
Unable to load region from system settings. Region must be specified either
via environment variable (AWS_REGION) or system property (aws.region).,
software.amazon.awssdk.regions.providers.AwsProfileRegionProvider@54b391ae: No
region provided in profile: default,
software.amazon.awssdk.regions.providers.InstanceProfileRegionProvider@2be11aa0:
Unable to contact EC2 metadata service.]
```
One way to fix this is to set the region on the client side (Spark) via an
environment variable, but this can be problematic once distributed compute kicks
in, since worker nodes won't have it set:
```
➜ spark-3.5.4-bin-hadoop3 export AWS_REGION=us-west-2
...
scala> spark.sql("INSERT INTO quickstart_table VALUES (1, 'some data')")
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further
details.
res7: org.apache.spark.sql.DataFrame = []
scala> spark.sql("select * from quickstart_table limit 10").show()
+---+---------+
| id| data|
+---+---------+
| 1|some data|
+---+---------+
```
Thus, it is better to set the region via a catalog property, such as the following:
```
--conf spark.sql.catalog.quickstart_catalog.client.region=us-west-2
```
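For context, here is a sketch of what a full `spark-shell` launch could look like with that property in place. The catalog name (`quickstart_catalog`), the REST endpoint URI, and the package version are assumptions based on the quickstart setup and may differ in your environment:
```
# Hypothetical spark-shell invocation; catalog name, URI, and versions
# are placeholders from the quickstart, adjust to your deployment.
bin/spark-shell \
  --packages org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.7.1 \
  --conf spark.sql.catalog.quickstart_catalog=org.apache.iceberg.spark.SparkCatalog \
  --conf spark.sql.catalog.quickstart_catalog.type=rest \
  --conf spark.sql.catalog.quickstart_catalog.uri=http://localhost:8181/api/catalog \
  --conf spark.sql.catalog.quickstart_catalog.client.region=us-west-2
```
Because the region is carried in the catalog configuration rather than in an environment variable, it is propagated with the job configuration to executors, so worker nodes resolve the region without relying on the AWS SDK's default provider chain.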
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]