Re: [VOTE] Release Spark 3.2.0 (RC3)

2021-09-21 Thread Venkatakrishnan Sowrirajan
…tails? >> The reason we want to disable the LZ4 test is that it requires the native LZ4 library when running with Hadoop 2.x, which the Spark CI doesn't have. >> On Tue, Sep 21, 2021 at 3:46 PM Venkatakrishnan Sowrirajan <vsowr...@asu.edu
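
[Not part of the original thread: a minimal sketch of how a test like this could be skipped conditionally, using ScalaTest's assume and Hadoop's VersionInfo. The suite and test names are hypothetical, and the assumption that Hadoop 3.x does not need the native LZ4 library (while 2.x does) is inferred from the message above, not confirmed by it.]

    import org.apache.hadoop.util.VersionInfo
    import org.scalatest.funsuite.AnyFunSuite

    // Hypothetical suite; names are illustrative, not from the Spark code base.
    class Lz4CompressionSuite extends AnyFunSuite {
      test("read/write a file with LZ4 compression") {
        // On Hadoop 2.x the LZ4 codec needs the native library, which a CI
        // image may not have. assume() cancels the test (rather than failing
        // it) when the precondition does not hold.
        val hadoopMajor = VersionInfo.getVersion.split("\\.").head.toInt
        assume(hadoopMajor >= 3, "LZ4 requires the native library on Hadoop 2.x")
        // ... exercise the codec here ...
      }
    }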

Re: [VOTE] Release Spark 3.2.0 (RC3)

2021-09-21 Thread Venkatakrishnan Sowrirajan
Hi Chao, But there are tests failing in core as well, e.g. org.apache.spark.FileSuite. These tests pass in 3.1, so why do you think we should disable them for Hadoop versions < 3.x? Regards Venkata krishnan On Tue, Sep 21, 2021 at 3:33 PM Chao Sun wrote: > I just created

Re: Observer Namenode and Committer Algorithm V1

2021-09-20 Thread Venkatakrishnan Sowrirajan
I have created a JIRA (https://issues.apache.org/jira/browse/SPARK-36810) to track this issue and will look into it further in the coming days. Regards Venkata krishnan On Tue, Sep 7, 2021 at 5:57 AM Steve Loughran wrote: > FileContext came in Hadoop 2.x with a cleaner split of client

Re: [VOTE][SPARK-30602] SPIP: Support push-based shuffle to improve shuffle efficiency

2020-09-14 Thread Venkatakrishnan Sowrirajan
+1. Interesting indeed :) Regards Venkata krishnan On Mon, Sep 14, 2020 at 11:14 AM Xingbo Jiang wrote: > +1 This is an exciting new feature! > > On Sun, Sep 13, 2020 at 8:00 PM Mridul Muralidharan > wrote: > >> Hi, >> >> I'd like to call for a vote on SPARK-30602 - SPIP: Support push-based

Re: Output Committers for S3

2017-06-17 Thread Venkatakrishnan Sowrirajan
I think Spark itself doesn't allow DFOC (the direct output committer) when append mode is enabled. So DFOC works only for insert-overwrite queries/overwrite mode, not for append mode. Regards Venkata krishnan On Fri, Jun 16, 2017 at 9:35 PM, sririshindra wrote: > Hi Ryan and Steve, > > Thanks very
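
[Not part of the original thread: a minimal sketch of the overwrite-vs-append distinction described above, using the public DataFrame writer API. The bucket path, object name, and session setup are illustrative only.]

    import org.apache.spark.sql.{SaveMode, SparkSession}

    object WriteModeExample {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("write-modes").getOrCreate()
        import spark.implicits._
        val df = Seq((1, "a"), (2, "b")).toDF("id", "value")

        // Overwrite replaces the target location; this is the write path
        // the direct committer was used with.
        df.write.mode(SaveMode.Overwrite).parquet("s3a://some-bucket/table")

        // Append adds files to an existing location; per the message above,
        // Spark does not allow the direct committer on this path.
        df.write.mode(SaveMode.Append).parquet("s3a://some-bucket/table")

        spark.stop()
      }
    }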

Re: yarn-cluster mode throwing NullPointerException

2015-10-12 Thread Venkatakrishnan Sowrirajan
Hi Rachana, Are you by any chance saying something like this in your code? "sparkConf.setMaster("yarn-cluster");" Creating a SparkContext with the master set to yarn-cluster is not supported. I think you are hitting this bug: https://issues.apache.org/jira/browse/SPARK-7504. This got fixed in Spark 1.4.0,
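
[Not part of the original thread: a minimal sketch of the usual workaround, assuming the master is supplied by spark-submit rather than hard-coded in the application. The object and app names are illustrative.]

    import org.apache.spark.{SparkConf, SparkContext}

    object YarnClusterApp {
      def main(args: Array[String]): Unit = {
        // No setMaster() call here: the master is supplied at submit time, e.g.
        //   spark-submit --master yarn --deploy-mode cluster \
        //     --class YarnClusterApp app.jar
        val conf = new SparkConf().setAppName("yarn-cluster-app")
        val sc = new SparkContext(conf)
        println(sc.parallelize(1 to 100).sum())
        sc.stop()
      }
    }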