Bhhsaurabh opened a new issue, #12037:
URL: https://github.com/apache/iceberg/issues/12037

   ### Apache Iceberg version
   
   1.3.0
   
   ### Query engine
   
   Spark
   
   ### Please describe the bug 🐞
   
   When attempting to write data to an Iceberg table using the Spark write API, 
a NullPointerException is thrown. The job fails immediately after the write 
operation is initiated. Expected behavior: the write completes successfully, 
persisting the data to the Iceberg table without exceptions.
   
   Steps to reproduce:
   
   1. Set up an Iceberg catalog with the following configuration in Spark:
   
      ```
      spark.sql.catalog.my_catalog = org.apache.iceberg.spark.SparkCatalog
      spark.sql.catalog.my_catalog.type = hadoop
      spark.sql.catalog.my_catalog.warehouse = s3://my-bucket/my-warehouse
      ```
   
   2. Create the table:
   
      ```sql
      CREATE TABLE my_catalog.default.sample_table (
        id INT,
        name STRING
      ) USING iceberg;
      ```
   
   3. Append data using the DataFrame API:
   
      ```scala
      import spark.implicits._
      val data = Seq((1, "Alice"), (2, "Bob")).toDF("id", "name")
      data.writeTo("my_catalog.default.sample_table").append()
      ```
   
   The write fails with:
   
   ```
   java.lang.NullPointerException
       at org.apache.iceberg.<relevant-class>.<relevant-method>(<file>:<line>)
       at org.apache.spark.sql.<relevant-class>.<relevant-method>(<file>:<line>)
       ...
   ```
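   One way to check the catalog wiring independently of any session-level setup 
is to pass the same settings at launch time. A minimal sketch, assuming Spark 
3.4 with Scala 2.12 (the runtime jar coordinate must match your actual Spark 
version); bucket and catalog names are taken from the report:
   
   ```shell
   # Sketch: same catalog settings supplied as --conf flags at launch.
   # The --packages coordinate is an assumption for Spark 3.4 / Scala 2.12.
   spark-shell \
     --packages org.apache.iceberg:iceberg-spark-runtime-3.4_2.12:1.3.0 \
     --conf spark.sql.catalog.my_catalog=org.apache.iceberg.spark.SparkCatalog \
     --conf spark.sql.catalog.my_catalog.type=hadoop \
     --conf spark.sql.catalog.my_catalog.warehouse=s3://my-bucket/my-warehouse
   ```
   
   Note that a `hadoop` catalog with an `s3://` warehouse also needs an S3 
FileSystem implementation (e.g. `hadoop-aws`) on the classpath.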
   
   ### Willingness to contribute
   
   - [x] I can contribute a fix for this bug independently
   - [x] I would be willing to contribute a fix for this bug with guidance from 
the Iceberg community
   - [x] I cannot contribute a fix for this bug at this time


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

