tiagoposse commented on issue #9874:
URL: https://github.com/apache/iceberg/issues/9874#issuecomment-2357678799

I'm facing the same issue.
**Environment**: Spark 3.5.2, Iceberg 1.5.2, Nessie 0.96.1
   
My config is:
```
import os

from pyspark import SparkConf
from pyspark.sql import SparkSession

# `partition_size` and the `self.*` values are defined elsewhere in my class.
conf = SparkConf() \
    .setAppName("Apache Iceberg with PySpark") \
    .setMaster("local[*]") \
    .setAll([
        ('spark.jars.packages', 'org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.2,org.projectnessie.nessie-integrations:nessie-spark-extensions-3.5_2.12:0.96.1,software.amazon.awssdk:bundle:2.28.2,software.amazon.awssdk:url-connection-client:2.28.2'),
        ("spark.driver.memory", "4g"),
        ("spark.executor.memory", "4g"),
        ("spark.sql.shuffle.partitions", f"{partition_size}"),
        ('spark.sql.adaptive.coalescePartitions.initialPartitionNum', f"{os.cpu_count()}"),
        ('spark.sql.adaptive.coalescePartitions.parallelismFirst', 'false'),
        ('spark.sql.files.minPartitionNum', "1"),
        ('spark.sql.files.maxPartitionBytes', '500mb'),

        # Add Iceberg SQL extensions like UPDATE or DELETE in Spark
        ('spark.sql.extensions', 'org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions,org.projectnessie.spark.extensions.NessieSparkSessionExtensions'),

        ('spark.sql.catalog.nessie', 'org.apache.iceberg.spark.SparkCatalog'),
        ('spark.sql.catalog.nessie.uri', self.nessie_uri),
        ('spark.sql.catalog.nessie.ref', 'main'),
        ('spark.sql.catalog.nessie.authentication.type', 'NONE'),
        ('spark.sql.catalog.nessie.catalog-impl', 'org.apache.iceberg.nessie.NessieCatalog'),
        ('spark.sql.catalog.nessie.s3.endpoint', self.s3_endpoint),
        ('spark.sql.catalog.nessie.warehouse', self.warehouse),
        ('spark.sql.catalog.nessie.io-impl', 'org.apache.iceberg.aws.s3.S3FileIO'),
        ('spark.hadoop.fs.s3a.access.key', self.aws_access_key),
        ('spark.hadoop.fs.s3a.secret.key', self.aws_secret_key),
    ])
spark = SparkSession.builder.config(conf=conf).getOrCreate()
```
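
(In case it's relevant: as far as I can tell, nothing in the config above changes Spark's default catalog, so unqualified table names would still resolve against `spark_catalog`. A minimal sketch of pinning the default to the `nessie` catalog configured above, via Spark's `spark.sql.defaultCatalog` setting, would be one more entry in the `setAll` list:)

```
        # Sketch: make unqualified table names resolve against the `nessie`
        # catalog configured above instead of the built-in spark_catalog.
        ('spark.sql.defaultCatalog', 'nessie'),
```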
   
I write like this:
```
df \
    .writeTo(f"{self.table_name}") \
    .using("iceberg") \
    .append()
```
   
Stacktrace:
```
File "/Users/posse/work/meltano/.meltano/loaders/target-iceberg/venv/lib/python3.10/site-packages/target_iceberg/sinks.py", line 143, in write_data
    .append()
File "/Users/posse/work/meltano/.meltano/loaders/target-iceberg/venv/lib/python3.10/site-packages/pyspark/sql/readwriter.py", line 2107, in append
    self._jwriter.append()
File "/Users/posse/work/meltano/.meltano/loaders/target-iceberg/venv/lib/python3.10/site-packages/py4j/java_gateway.py", line 1322, in __call__
    return_value = get_return_value(
File "/Users/posse/work/meltano/.meltano/loaders/target-iceberg/venv/lib/python3.10/site-packages/pyspark/errors/exceptions/captured.py", line 185, in deco
    raise converted from None
pyspark.errors.exceptions.captured.AnalysisException: Cannot write into v1 table: `spark_catalog`.`default`.`genesys`.
```
   
I'm already using `org.apache.iceberg.spark.SparkCatalog`, so that suggestion did not solve it for me.
Any help would be greatly appreciated. I can also provide more info if needed.
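
For what it's worth, the error names `spark_catalog`.`default`.`genesys` rather than the `nessie` catalog, so the write appears to resolve against Spark's built-in catalog. A minimal sketch of qualifying the target explicitly with the configured catalog name (the `default.genesys` namespace/table is just illustrative):

```
# Sketch: qualify the table with the configured catalog name so the write
# resolves against the Nessie/Iceberg catalog instead of spark_catalog.
# `default.genesys` is illustrative; substitute the real namespace/table.
df.writeTo("nessie.default.genesys").append()
```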

