I probably wouldn't block on those issues being fixed, but if we are really
worried, we should just disable that functionality.
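For context on the timestamp nanos overflow discussed downthread: here is a
rough back-of-the-envelope sketch in plain Python (not Iceberg code) of why a
64-bit nanosecond count overflows. It assumes the V3 timestamp_ns
representation is a signed 64-bit long of nanoseconds since the Unix epoch;
the specific variable names are illustrative only.

```python
from datetime import datetime, timezone

# A signed 64-bit long counting nanoseconds since the Unix epoch
# bounds the representable timestamp range.
MAX_INT64 = 2**63 - 1

# The largest representable nanosecond count lands in the year 2262.
latest = datetime.fromtimestamp(MAX_INT64 / 1_000_000_000, tz=timezone.utc)
print(latest.year)  # 2262

# A microsecond-precision value in the far future, naively widened to
# nanoseconds, overflows 64-bit arithmetic (the shape of bug the fix targets).
year_9999_us = int(datetime(9999, 1, 1, tzinfo=timezone.utc).timestamp() * 1_000_000)
overflows = year_9999_us * 1_000 > MAX_INT64
print(overflows)  # True
```

So any literal past mid-2262 can't be represented as nanoseconds in a long,
which is why the conversion has to detect the overflow rather than silently
wrap.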

On Wed, Jun 25, 2025 at 12:27 PM Steven Wu <stevenz...@gmail.com> wrote:

> Amogh, thanks for sharing the input. Conceptually, I agree that it is good
> to fix those issues in the 1.10.0 release.
>
> My main concern is that a couple of them are large efforts and still in
> early draft status. It may take a few weeks to get them in. Since those
> bugs weren't introduced by the 1.10.0 release, can they be included in the
> 1.11.0 release?
>
> > we should indeed block for fixing those since it doesn't seem right to
> do another release which would further amplify the problem.
>
> I am not sure a new release further amplifies the problem, since users are
> in the same position whether they pick up a new release of 1.9 or 1.10 and
> use Spark for V3 tables. Definitely interested in hearing others' take on this.
>
> On Mon, Jun 23, 2025 at 7:10 AM Amogh Jahagirdar <2am...@gmail.com> wrote:
>
>> Thanks Steven,
>>
>> While I definitely agree that we don't hold releases for new features, an
>> important aspect to consider, especially now that V3 is ratified, is making
>> sure we've resolved any known issues that would propagate bad V3 metadata.
>> My take is that if there are known issues from 1.9 in the V3 implementation
>> that propagate spec-incompliant metadata, we should indeed block on fixing
>> those, since it doesn't seem right to do another release that would further
>> amplify the problem.
>>
>> Some examples:
>>
>>
>>    - PR <https://github.com/apache/iceberg/pull/13061> for fixing an
>>    issue in row lineage propagation when distributed planning is applied in
>>    the Spark integration. Without this fix, row lineage metadata could get
>>    corrupted.
>>    - Also, as of today Spark doesn't technically support default value
>>    DDLs yet (I have a PR
>>    <https://github.com/apache/iceberg/pull/13107> for that as well, but
>>    I think it'll take a bit more time; I need to look into handling struct
>>    values better). However, today the Spark integration silently accepts
>>    the DDL but doesn't actually do anything. Though it doesn't produce
>>    non-compliant metadata, it still feels like really misleading behavior.
>>    I think at minimum, for the next release we should probably just fail
>>    the DDL if the PR doesn't get updated in time to handle default values
>>    for struct fields more cleanly.
>>    - The timestamp nanos fix https://github.com/apache/iceberg/pull/11775,
>>    which I think was already called out in this thread.
>>    - Preventing orphan DVs <https://github.com/apache/iceberg/pull/13245>
>>    since that's required by the V3 spec
>>
>> So, all in all, before a 1.10 release I'd encourage folks to test out
>> parts of the V3 work; anything that is either a correctness issue or
>> produces spec-incompliant metadata should be surfaced. (Again, IMO it's OK
>> if there are feature implementation gaps, but at the same time we don't
>> want to potentially amplify known incompliance problems by doing a release
>> before they're fixed.)
>>
>> Thanks,
>> Amogh Jahagirdar
>>
>> On Thu, Jun 19, 2025 at 2:36 AM Péter Váry <peter.vary.apa...@gmail.com>
>> wrote:
>>
>>> If possible, I would love to have the File Format API interfaces
>>> approved and merged: https://github.com/apache/iceberg/pull/12774
>>> The effort has been ongoing for half a year now, and not much change has
>>> been requested lately.
>>>
>>> On Thu, Jun 19, 2025, 00:16 Steven Wu <stevenz...@gmail.com> wrote:
>>>
>>>> sorry, I meant 1.10.0 release. Thanks for catching the error, JB!
>>>>
>>>> On Wed, Jun 18, 2025 at 2:29 PM Jean-Baptiste Onofré <j...@nanthrax.net>
>>>> wrote:
>>>>
>>>>> Hi
>>>>>
>>>>> I guess you mean 1.10.0 release :)
>>>>>
>>>>> Regards
>>>>> JB
>>>>>
>>>>> On Wed, Jun 18, 2025 at 11:01 PM Steven Wu <stevenz...@gmail.com>
>>>>> wrote:
>>>>> >
>>>>> > The reference implementations of V3-related features haven't made much
>>>>> progress, which is probably not going to change significantly in the
>>>>> next 1 or 2 weeks. I would propose cutting the release branch by the end
>>>>> of next Friday (June 27). There are a few important features to be
>>>>> released, like Spark 4.0 support, Flink 2.0 support, the Flink dynamic
>>>>> sink, etc. We typically don't want to hold back releases for an extended
>>>>> time to wait for new feature implementations.
>>>>> >
>>>>> >
>>>>> > There are 11 open and 6 closed issues/PRs for the 0.10.0 milestone
>>>>> >
>>>>> > https://github.com/apache/iceberg/milestone/54
>>>>> >
>>>>> >
>>>>> > For the remaining open issues
>>>>> >
>>>>> > Flink: Dynamic Iceberg Sink Contribution. This is a large effort. It
>>>>> seems that Max and Peter have merged all the breakdown PRs, so it is on
>>>>> track.
>>>>> >
>>>>> > Core: Fix numeric overflow of timestamp nano literal. There is still
>>>>> some discussion on the right approach for the short term and the longer
>>>>> term.
>>>>> >
>>>>> > Some of the other issues/PRs may need to be pushed to the next
>>>>> release.
>>>>> >
>>>>> >
>>>>> > Feedback is welcome.
>>>>> >
>>>>> > Thanks,
>>>>> > Steven
>>>>>
>>>>
