for 1.4.1 and 1.5.1 on Hadoop
version 2.
Best Regards,
Christian
I don't need any additional libraries. We just need to change the
core-site.xml.
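
For reference, the change is along these lines (a minimal sketch, assuming
the s3n:// scheme; the values are placeholders for your own AWS credentials,
and an s3a:// setup would use fs.s3a.access.key / fs.s3a.secret.key instead):

  <property>
    <name>fs.s3n.awsAccessKeyId</name>
    <value>YOUR_ACCESS_KEY_ID</value>
  </property>
  <property>
    <name>fs.s3n.awsSecretAccessKey</name>
    <value>YOUR_SECRET_ACCESS_KEY</value>
  </property>
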
-Christian
On Thu, Nov 5, 2015 at 9:35 AM, Nicholas Chammas <nicholas.cham...@gmail.com> wrote:
> Thanks for sharing this, Christian.
>
> What build of Spark are you using? If I understand correctly, when Spark
> is built against Hadoop 2.6+, you need to install additional libraries
> <https://issues.apache.org/jira/browse/SPARK-7481> to access S3. When
> Spark is built against Hadoop 2.4 or earlier, you don't need to do this.
>
> I'm confirming that this is what is happening in your case.
>
> Nick
I created the cluster with the following:
--hadoop-major-version=2
--spark-version=1.4.1
from: spark-1.5.1-bin-hadoop1
Are you saying there might be different behavior if I download
spark-1.5.1-hadoop-2.6 and create my cluster?
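
For concreteness, the launch command looked roughly like this (a sketch;
the key pair, identity file, and cluster name are placeholders):

  ./ec2/spark-ec2 --key-pair=my-keypair \
    --identity-file=my-keypair.pem \
    --spark-version=1.4.1 \
    --hadoop-major-version=2 \
    launch my-cluster
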
On Thu, Nov 5, 2015 at 1:28 PM, Christian <engr...@gmail.com> wrote:
Even with the changes I mentioned above?
On Thu, Nov 5, 2015 at 8:10 PM Nicholas Chammas <nicholas.cham...@gmail.com>
wrote:
> Yep, I think if you try spark-1.5.1-hadoop-2.6 you will find that you
> cannot access S3, unfortunately.
>
> On Thu, Nov 5, 2015 at 3:53 PM Christian <engr...@gmail.com> wrote:
> + unless you
> install additional libraries. The issue is explained in SPARK-7481
> <https://issues.apache.org/jira/browse/SPARK-7481> and SPARK-7442
> <https://issues.apache.org/jira/browse/SPARK-7442>.
>
> On Fri, Nov 6, 2015 at 12:22 AM Christian <engr...@gmail.com> wrote:
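
For anyone hitting this: the "additional libraries" fix from SPARK-7481 and
SPARK-7442 usually amounts to putting the hadoop-aws module and the matching
AWS SDK on the classpath, e.g. (a sketch; the versions shown are illustrative
and need to match the Hadoop version Spark was built against):

  spark-shell --packages com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.6.0

The same --packages flag works with spark-submit.
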
I see the same thing.
A workaround is to put a Thread.sleep(5000) statement before sc.stop().
Let us know how it goes.
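
In code, that looks like this (a sketch; the object and app names are
placeholders, and it assumes the usual SparkContext named sc at the end of
main):

  import org.apache.spark.{SparkConf, SparkContext}

  object MyJob {
    def main(args: Array[String]): Unit = {
      val sc = new SparkContext(new SparkConf().setAppName("my-job"))
      // ... job logic ...

      // Workaround from this thread: sleep briefly before stopping the
      // context (presumably this gives in-flight shutdown work time to
      // finish before the context is torn down).
      Thread.sleep(5000)
      sc.stop()
    }
  }
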
On Sep 17, 2014, at 3:43 AM, wyphao.2007 <wyphao.2...@163.com> wrote:
Hi, when I run a Spark job on YARN and the job finishes successfully, I
find that there are some errors