Re: [Action required] Default Spark profile changed to 3.2

2023-06-02 Thread Y Ethan Guo
There is a hotfix (https://github.com/apache/hudi/pull/8822) merged
recently to fix the default build.  SPARK_HOME does not matter; the
latest master builds on my end with SPARK_HOME pointing to Spark 3.3.
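
For anyone still seeing failures after pulling that fix: the Maven build
only looks at the active Spark profile, not at SPARK_HOME, so (assuming the
profile ids documented in the repo README) you can either run plain
mvn clean install -DskipTests to exercise the new 3.2 default, or pin a
build explicitly with something like mvn clean install -DskipTests -Dspark3.3.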

On Fri, Jun 2, 2023 at 1:28 AM Vinoth Chandar  wrote:

> Hi,
>
> Just tried running mvn clean install -DskipTests, and the build failed. My
> local SPARK_HOME is pointing to a Spark 3.3 installation.
> Does any of that matter now? It's quite possible this is an issue with my
> setup; just flagging.
>
> Thanks
> Vinoth
>
> On Fri, May 26, 2023 at 8:30 AM Shiyan Xu 
> wrote:
>
> > Hi all,
> >
> > We recently landed a change
> > <https://github.com/apache/hudi/commit/516c3d59404934e6a142ea1c9d97002c065f8a4f>
> > in master switching the default Spark profile from 2.4 to 3.2. If your
> > local Hudi repo is configured to use Spark 2.4, you may need to re-import
> > the IDEA project (this may involve clearing the `.idea/` folder and *.iml
> > files in each Maven module).
> >
> > I'd also like to acknowledge contributions from Forward Xu, Rahil, Zhang
> > Yue, and Danny, who have previously worked on or helped tackle this
> > migration, as it involved a lot of dependency issue wrangling and test
> > fixes.
> >
> > Cheers,
> >
> > --
> > Best,
> > Shiyan
> >
>


Re: [Action required] Default Spark profile changed to 3.2

2023-06-02 Thread Vinoth Chandar
Hi,

Just tried running mvn clean install -DskipTests, and the build failed. My
local SPARK_HOME is pointing to a Spark 3.3 installation.
Does any of that matter now? It's quite possible this is an issue with my
setup; just flagging.

Thanks
Vinoth

On Fri, May 26, 2023 at 8:30 AM Shiyan Xu 
wrote:

> Hi all,
>
> We recently landed a change
> <https://github.com/apache/hudi/commit/516c3d59404934e6a142ea1c9d97002c065f8a4f>
> in master switching the default Spark profile from 2.4 to 3.2. If your
> local Hudi repo is configured to use Spark 2.4, you may need to re-import
> the IDEA project (this may involve clearing the `.idea/` folder and *.iml
> files in each Maven module).
>
> I'd also like to acknowledge contributions from Forward Xu, Rahil, Zhang
> Yue, and Danny, who have previously worked on or helped tackle this
> migration, as it involved a lot of dependency issue wrangling and test
> fixes.
>
> Cheers,
>
> --
> Best,
> Shiyan
>


Re: [DISCUSSION] Simplify code structure for supporting multiple Spark versions in Hudi

2023-06-02 Thread Y Ethan Guo
Hey Shawn, Rahil,

Thanks for raising this issue.  These are good suggestions; I would
recommend simplifying the Hudi Spark code structure incrementally and
gradually making the code less coupled with the Spark engine.

Identify breaking changes introduced by the new Spark version and patch
> affected Hudi classes.


This is important.  No matter how we organize the code structure, we
need to understand breaking changes from Spark that can affect Hudi.  Right
now, only a handful of Spark experts in our community have that knowledge
and understand how Hudi integrates with Spark at the implementation level.

We should document the integration, e.g., in an RFC, to avoid knowledge
gaps.  Based on the discussion, we should also write down the formal
process of supporting a new Spark version in Hudi, with clear testing and
certification criteria.

The current structure involves common code shared by several Spark
> versions, such as hudi-spark-common, hudi-spark3-common,
> hudi-spark3.2plus-common, etc. (a detailed description can be found in the
> readme here:
> https://github.com/apache/hudi/blob/master/hudi-spark-datasource/README.md).
> This setup aims to minimize duplicate code in Hudi. Hudi currently utilizes
> the SparkAdapter to invoke specific code based on the Spark version,
> allowing different Spark versions to trigger different logic.


We took the current approach of having the hudi-spark3.2plus-common module
to deduplicate the code between the Spark 3.2 and Spark 3.3 integrations (
https://issues.apache.org/jira/browse/HUDI-4691), because we anticipated
that it would reduce code duplication going forward.  It has now proved
inflexible: Spark 3.4 changes some APIs, so hudi-spark3.2plus-common
is no longer "common" anymore.

It makes sense to have a Spark version-specific module containing the classes
for version-specific integration logic.  I would still keep common modules for
classes that are more Hudi-centric and applicable to all Spark versions, or to
all Spark 2 or Spark 3 versions.  Exactly where we draw the line depends on the
implementation, and we can make that decision gradually.  This should solve the
problem of applying a general fix across multiple Spark versions.
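
To make the split concrete, here is a rough sketch (illustrative names only,
not the actual Hudi interfaces) of how version-specific construction can sit
behind an adapter trait in per-version modules, while Hudi-side code in a
common module only programs against the trait:

  import org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat

  // Common module: Hudi-side code depends only on this trait.
  trait SparkVersionAdapter {
    def sparkVersion: String
    // One example of an API surface that differs across Spark versions.
    def createParquetFileFormat(appendPartitionValues: Boolean): Option[ParquetFileFormat]
  }

  // Spark-3.3-specific module: any 3.3-only construction logic goes here.
  class Spark33Adapter extends SparkVersionAdapter {
    override def sparkVersion: String = "3.3"
    override def createParquetFileFormat(appendPartitionValues: Boolean): Option[ParquetFileFormat] =
      Some(new ParquetFileFormat())
  }

  // Common module: a general fix made here applies to every Spark version.
  object HoodieFileFormatResolver {
    def resolve(adapter: SparkVersionAdapter): ParquetFileFormat =
      adapter.createParquetFileFormat(appendPartitionValues = false)
        .getOrElse(new ParquetFileFormat())
  }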

For Spark 3.4 integration, I suggest duplicating the code for now in a
separate module, while we figure out a better code structure for Hudi Spark
integration.

Currently I think our integration with Spark is too tight, and brings up
> serious issues when upgrading.
> I will describe one example (however there are many more), but one area is
> we extend Spark's *ParquetFileFormat* in the following classes:
> buildReaderWithPartitionValues method


I agree that Hudi's integration with Spark should not be tightly coupled.
Yet we should also acknowledge that some of the coupling exists for
functionality and performance reasons (which we should clearly document).
We should definitely revisit such coupling points and see if they can be relaxed.

The Hoodie Parquet format classes were introduced mainly to support full
schema evolution in Spark (https://github.com/apache/hudi/pull/4910).
AFAIK, the Spark32HoodieParquetFileFormat class originally used the
default ParquetFileFormat logic when schema evolution was not enabled (
https://github.com/apache/hudi/pull/4910/files#diff-fc11bc10091e5312b58068f263960ef9459b1d01cf08512f33362b76f5554416R61),
meaning that it did not change any Parquet reading logic.  However, that
fallback/default behavior was removed later on.  I think we need to revisit
that decision and bring back the default behavior if appropriate.
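
For reference, the fallback behavior described above looks roughly like the
following (a simplified sketch with made-up names and a made-up flag, not the
actual Hudi class; the point is only the delegation to Spark's default reader):

  import org.apache.hadoop.conf.Configuration
  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.catalyst.InternalRow
  import org.apache.spark.sql.execution.datasources.PartitionedFile
  import org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat
  import org.apache.spark.sql.sources.Filter
  import org.apache.spark.sql.types.StructType

  class SchemaEvolutionAwareParquetFileFormat(schemaEvolutionEnabled: Boolean)
    extends ParquetFileFormat {

    override def buildReaderWithPartitionValues(
        sparkSession: SparkSession,
        dataSchema: StructType,
        partitionSchema: StructType,
        requiredSchema: StructType,
        filters: Seq[Filter],
        options: Map[String, String],
        hadoopConf: Configuration): PartitionedFile => Iterator[InternalRow] = {
      if (!schemaEvolutionEnabled) {
        // Fallback/default path: Spark's own Parquet reading logic, untouched.
        super.buildReaderWithPartitionValues(sparkSession, dataSchema,
          partitionSchema, requiredSchema, filters, options, hadoopConf)
      } else {
        // A schema-evolution-aware reader would be built here; this sketch
        // simply delegates as well to stay self-contained.
        super.buildReaderWithPartitionValues(sparkSession, dataSchema,
          partitionSchema, requiredSchema, filters, options, hadoopConf)
      }
    }
  }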

In the future, we should understand the implications and review such code
changes carefully.

Best,
- Ethan

On Fri, Jun 2, 2023 at 12:28 AM Vinoth Chandar  wrote:

> This is a good topic, thanks for raising this. Overall, our reliance on
> Spark classes/APIs that are declared experimental is an issue on paper. But
> there are few other ways to get the right performance without relying on
> them. This has been the tricky issue, IMO. Thoughts?
>
> I'll review the code organization more carefully and report back.
>
> On Fri, Jun 2, 2023 at 04:23 Rahil C  wrote:
>
> > Thanks Shawn for writing this; I would like to also add on to the Spark
> > discussion.
> >
> > Currently I think our integration with Spark is too tight, and brings up
> > serious issues when upgrading.
> >
> > I will describe one example (however there are many more), but one area is
> > we extend Spark's *ParquetFileFormat* in the following classes.
> >
> >
> >
> https://github.com/apache/hudi/blob/master/hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/HoodieParquetFileFormat.scala
> >
> >
> https://github.com/apache/hudi/blob/master/hudi-spark-datasource/hudi-spark3.2plus-common/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/Spark32PlusHoodieParquetFileFormat.scala
> >
> > and specifically, the main logic change is that we override the
> > *buildReaderWithPartitionValues* method.
> > I understand the pro of reusability of spark's 

Re: [DISCUSSION] Simplify code structure for supporting multiple Spark versions in Hudi

2023-06-02 Thread Vinoth Chandar
This is a good topic, thanks for raising this. Overall, our reliance on
Spark classes/APIs that are declared experimental is an issue on paper. But
there are few other ways to get the right performance without relying on
them. This has been the tricky issue, IMO. Thoughts?

I'll review the code organization more carefully and report back.

On Fri, Jun 2, 2023 at 04:23 Rahil C  wrote:

> Thanks Shawn for writing this; I would like to also add on to the Spark
> discussion.
>
> Currently I think our integration with Spark is too tight, and brings up
> serious issues when upgrading.
>
> I will describe one example (however there are many more), but one area is
> we extend Spark's *ParquetFileFormat* in the following classes.
>
>
> https://github.com/apache/hudi/blob/master/hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/HoodieParquetFileFormat.scala
>
> https://github.com/apache/hudi/blob/master/hudi-spark-datasource/hudi-spark3.2plus-common/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/Spark32PlusHoodieParquetFileFormat.scala
>
> and specifically, the main logic change is that we override the
> *buildReaderWithPartitionValues* method.
> I understand the pro of reusing Spark's code, but the con is that we
> don't then get the latest changes from the latest implementation of these
> methods. This gets more complex as we then need to understand which Spark
> changes are required to cherry-pick over as Spark upgrades, such as these
> issues:
>
> For Spark 3.3.2 we faced several issues, documented here:
> https://github.com/apache/hudi/pull/8082,
> and for Spark 3.4.0 we have encountered several issues as well:
> https://github.com/apache/hudi/pull/8682
>
> We are also not keeping up to date with certain Spark features as a result
> of the integration we have made. I have created a JIRA that goes into this
> in more depth:
> https://issues.apache.org/jira/browse/HUDI-6262
>
> I would be happy to sync with other Hudi Spark committers/experts, or anyone
> interested in revisiting this integration, so that future Spark work will be
> more achievable.
>
> Regards,
> Rahil Chertara
>
> On Tue, May 23, 2023 at 8:16 PM Shawn Chang  wrote:
>
> > Hi Hudi developers,
> >
> > I am writing to discuss the current code structure of the existing
> > hudi-spark-datasource and propose a more scalable approach for supporting
> > multiple Spark versions. The current structure involves common code shared
> > by several Spark versions, such as hudi-spark-common, hudi-spark3-common,
> > hudi-spark3.2plus-common, etc. (a detailed description can be found in the
> > readme here:
> > https://github.com/apache/hudi/blob/master/hudi-spark-datasource/README.md).
> > This setup aims to minimize duplicate code in Hudi. Hudi currently utilizes
> > the SparkAdapter to invoke specific code based on the Spark version,
> > allowing different Spark versions to trigger different logic.
> >
> > However, this code structure proves to be complex and hampers the process
> > of adding support for newer Spark versions. The current approach involves
> > the following steps:
> > 1) Identify breaking changes introduced by the new Spark version and patch
> > affected Hudi classes.
> > 2) Separate affected Hudi classes into different folders so that older
> > Spark versions can continue using the existing logic, while the new Spark
> > version can work with the updated Hudi classes.
> > 3) Connect SparkAdapter to these Hudi classes, enabling Hudi to utilize the
> > correct code based on the Spark version.
> > 4) Collect common code and place it in a new folder, such as
> > hudi-spark3.2plus-common, to reduce duplicate code.
> >
> > This convoluted process has significantly slowed down the pace of adding
> > support for newer Spark versions in Hudi. Fortunately, there is a simpler
> > alternative that can streamline the process. I propose removing the common
> > modules and having only one folder for each Spark version. For example:
> >
> > hudi-spark-datasource/
> > ---hudi-spark2.4.0/
> > ---hudi-spark3.2.0/
> > ---hudi-spark3.3.0/
> > ...
> >
> > With this revised code structure, each Spark version will have its own
> > corresponding Hudi module. The process of adding Spark support will be
> > simplified as follows:
> > 1) Copy the most recent existing Spark-version module to a new module for
> > the new Spark version.
> > 2) Identify breaking changes introduced by the new Spark version and patch
> > affected Hudi classes.
> >
> > Let's consider some pros and cons of this new code structure:
> > *Pros:*
> > -A more readable codebase, with each Spark version having its individual
> > module.
> > -Easier addition of support for new Spark versions by duplicating the most
> > recent module and making necessary modifications.
> > -Simpler implementation of improvements specific to a particular Spark
> > version.
> > *Cons:*
> > -Increased duplicate code (though this shouldn't impact 

Re: [ANNOUNCE] Apache Hudi 0.13.1 released

2023-06-02 Thread Vinoth Chandar
Thanks for driving this!

On Wed, May 31, 2023 at 10:00 Yue Zhang  wrote:

> The Apache Hudi team is pleased to announce the release of Apache Hudi
> 0.13.1
>
> Apache Hudi (pronounced Hoodie) stands for Hadoop Upserts Deletes and
> Incrementals. Apache Hudi manages storage of large analytical datasets
> on DFS (Cloud stores, HDFS or any Hadoop FileSystem compatible
> storage) and provides the ability to query them.
>
> This release comes 3 months after 0.13.0 and 1 month after 0.12.3. This
> release is purely intended to fix stability issues and bugs, and includes
> more than 100 resolved issues. Fixes span many areas, ranging from core
> writer fixes to metadata, timeline, engine-specific fixes, table services, etc.
>
> For details on how to use Hudi, please look at the quick start page located
> at https://hudi.apache.org/docs/quick-start-guide.html
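>
> As a quick smoke test of the release (a minimal sketch based on the quick
> start guide above; the bundle coordinates and option keys below are the
> commonly documented ones, please verify them against the guide):
>
>   // spark-shell --packages org.apache.hudi:hudi-spark3.3-bundle_2.12:0.13.1 \
>   //   --conf "spark.serializer=org.apache.spark.serializer.KryoSerializer"
>   import org.apache.spark.sql.SaveMode.Overwrite
>   val df = spark.range(0, 5).selectExpr("id", "current_timestamp() as ts")
>   df.write.format("hudi").
>     option("hoodie.table.name", "hudi_smoke_test").
>     option("hoodie.datasource.write.recordkey.field", "id").
>     option("hoodie.datasource.write.precombine.field", "ts").
>     mode(Overwrite).
>     save("/tmp/hudi_smoke_test")
>   spark.read.format("hudi").load("/tmp/hudi_smoke_test").show()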
>
> If you'd like to download the source release, you can find it here:
> https://github.com/apache/hudi/releases/tag/release-0.13.1
>
> You can read more about the release (including release notes) here:
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12322822&version=12352250
>
> We welcome your help and feedback. For more information on how to
> report problems, and to get involved, visit the project website at:
> http://hudi.apache.org/
>
> Thanks to everyone involved!