One other note:
When creating the table, you need `using("iceberg")`. The example should read:

data.writeTo("prod.db.table")
    .using("iceberg")
    .tableProperty("write.format.default", "orc")
    .partitionedBy($"level", days($"ts"))
    .createOrReplace()
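
For reference, a self-contained version of that example (a minimal sketch,
assuming Spark 3.x with the Iceberg runtime jar on the classpath and a
catalog named "prod" already configured; "source_db.events" is a
hypothetical source table):

import org.apache.spark.sql.SparkSession
// days is one of the partition transform functions added in Spark 3.0
import org.apache.spark.sql.functions.days

val spark = SparkSession.builder().getOrCreate()
import spark.implicits._  // enables the $"..." column syntax

val data = spark.table("source_db.events")  // hypothetical source table

data.writeTo("prod.db.table")
    .using("iceberg")
    .tableProperty("write.format.default", "orc")
    .partitionedBy($"level", days($"ts"))
    .createOrReplace()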
- Wing Yew
On Fri, May 27, 2022 at 11:29 AM Wing Yew Poon <[email protected]> wrote:
> That is a typo in the sample code. The doc itself (
> https://iceberg.apache.org/docs/latest/spark-writes/#creating-tables)
> says:
> "Create and replace operations support table configuration methods, like
> partitionedBy and tableProperty"
> You could also have looked up the API in the Spark documentation:
>
> https://spark.apache.org/docs/latest/api/scala/org/apache/spark/sql/DataFrameWriterV2.html
> There you would have found that the method is partitionedBy, not
> partitionBy.
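> For your snippet, the fix would look something like this (a sketch; note
> that days is not in scope by default, which is what your second error is
> about, so it needs to be imported as well):
>
> import org.apache.spark.sql.functions.{col, days}
>
> df_c.writeTo(output_table)
>     .partitionedBy(days(col("last_updated")))
>     .createOrReplace()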
>
> - Wing Yew
>
>
> On Fri, May 27, 2022 at 4:32 AM Saulius Pakalka
> <[email protected]> wrote:
>
>> Hi,
>>
>> I am trying to create a partitioned Iceberg table using the Scala code
>> below, based on the example in the docs.
>>
>> df_c.writeTo(output_table)
>>     .partitionBy(days(col("last_updated")))
>>     .createOrReplace()
>>
>> However, this code does not compile; it produces two errors:
>>
>> value partitionBy is not a member of
>> org.apache.spark.sql.DataFrameWriterV2[org.apache.spark.sql.Row]
>> [error] possible cause: maybe a semicolon is missing before `value
>> partitionBy'?
>> [error] .partitionBy(days(col("last_updated")))
>> [error] ^
>> [error] not found: value days
>> [error] .partitionBy(days(col("last_updated")))
>> [error] ^
>> [error] two errors found
>>
>> Not sure where to look for the problem. Any advice appreciated.
>>
>> Best regards,
>>
>> Saulius Pakalka
>>
>>