herefree commented on issue #4517:
URL: https://github.com/apache/paimon/issues/4517#issuecomment-2478796389

   
   > I also encountered it. Until the paimon-shade fix lands, we can work around it this way for the time being.
![image](https://private-user-images.githubusercontent.com/55388933/386026920-10c508a7-02ee-4724-989f-96db8e9d5768.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzE2NjQyNTEsIm5iZiI6MTczMTY2Mzk1MSwicGF0aCI6Ii81NTM4ODkzMy8zODYwMjY5MjAtMTBjNTA4YTctMDJlZS00NzI0LTk4OWYtOTZkYjhlOWQ1NzY4LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDExMTUlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQxMTE1VDA5NDU1MVomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWIyNDQ3ZGYzMzdlZmVmOGUxMGFmOTI4MTQ2YzgwNzk1OTI3NTE3NmUyNDE4MDc1NDIzYjFhM2MwNjI3YTlmNzYmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0In0.q8yYQch53WRRKZJrQhgPCSgTarlwLxN1GmjP88bFPfA)

   Running org.apache.paimon.spark.sql.DDLWithHiveCatalogTestBase fails with a similar error. Even after placing paimon-format ahead of parquet on the classpath, the error still occurs:
   ```
   java.lang.BootstrapMethodError: java.lang.NoSuchMethodError: org.apache.parquet.hadoop.ParquetWriter$Builder.withBloomFilterFPP(Ljava/lang/String;D)Lorg/apache/parquet/hadoop/ParquetWriter$Builder;
        at org.apache.paimon.format.parquet.writer.RowDataParquetBuilder.createWriter(RowDataParquetBuilder.java:95)
        at org.apache.paimon.format.parquet.ParquetWriterFactory.create(ParquetWriterFactory.java:52)
        at org.apache.paimon.io.SingleFileWriter.<init>(SingleFileWriter.java:74)
        at org.apache.paimon.io.StatsCollectingSingleFileWriter.<init>(StatsCollectingSingleFileWriter.java:58)
        at org.apache.paimon.io.RowDataFileWriter.<init>(RowDataFileWriter.java:70)
        at org.apache.paimon.io.RowDataRollingFileWriter.lambda$new$0(RowDataRollingFileWriter.java:59)
        at org.apache.paimon.io.RollingFileWriter.openCurrentWriter(RollingFileWriter.java:123)
        at org.apache.paimon.io.RollingFileWriter.write(RollingFileWriter.java:78)
        at org.apache.paimon.append.AppendOnlyWriter$DirectSinkWriter.write(AppendOnlyWriter.java:403)
        at org.apache.paimon.append.AppendOnlyWriter.write(AppendOnlyWriter.java:161)
        at org.apache.paimon.append.AppendOnlyWriter.write(AppendOnlyWriter.java:66)
        at org.apache.paimon.operation.AbstractFileStoreWrite.write(AbstractFileStoreWrite.java:150)
        at org.apache.paimon.table.sink.TableWriteImpl.writeAndReturn(TableWriteImpl.java:175)
        at org.apache.paimon.table.sink.TableWriteImpl.write(TableWriteImpl.java:147)
        at org.apache.paimon.spark.SparkTableWrite.write(SparkTableWrite.scala:40)
        at org.apache.paimon.spark.commands.PaimonSparkWriter.$anonfun$write$2(PaimonSparkWriter.scala:94)
        at org.apache.paimon.spark.commands.PaimonSparkWriter.$anonfun$write$2$adapted(PaimonSparkWriter.scala:94)
        at scala.collection.Iterator.foreach(Iterator.scala:943)
        at scala.collection.Iterator.foreach$(Iterator.scala:943)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
        at org.apache.paimon.spark.commands.PaimonSparkWriter.$anonfun$write$1(PaimonSparkWriter.scala:94)
        at org.apache.spark.sql.execution.MapPartitionsExec.$anonfun$doExecute$3(objects.scala:201)
        at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
        at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
        at org.apache.spark.scheduler.Task.run(Task.scala:131)
        at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1491)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:750)
   ```
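
   A `NoSuchMethodError` like this usually means an older parquet-hadoop jar is shadowing the one Paimon expects. One way to confirm is to ask the failing JVM which jar `ParquetWriter$Builder` is actually loaded from and whether `withBloomFilterFPP(String, double)` is present. This is just a diagnostic sketch (the class and method names are taken from the stack trace above; `ParquetClasspathCheck` is a hypothetical helper, not Paimon code):

   ```java
   // Diagnose the conflict: locate the jar that provides ParquetWriter$Builder
   // and verify the method the stack trace says is missing.
   public class ParquetClasspathCheck {

       public static String check() {
           try {
               Class<?> builder =
                       Class.forName("org.apache.parquet.hadoop.ParquetWriter$Builder");
               java.security.CodeSource cs =
                       builder.getProtectionDomain().getCodeSource();
               String location = (cs == null || cs.getLocation() == null)
                       ? "(unknown location)"
                       : cs.getLocation().toString();
               // Throws NoSuchMethodException on an older parquet-hadoop.
               builder.getMethod("withBloomFilterFPP", String.class, double.class);
               return "OK: withBloomFilterFPP found in " + location;
           } catch (ClassNotFoundException e) {
               return "parquet-hadoop is not on the classpath";
           } catch (NoSuchMethodException e) {
               return "conflict: Builder loaded, but withBloomFilterFPP is missing";
           }
       }

       public static void main(String[] args) {
           System.out.println(check());
       }
   }
   ```

   Run this inside the same JVM (e.g. from the failing test); if it reports "conflict", `mvn dependency:tree` should show which dependency is pulling in the older parquet-hadoop.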

