Or maybe in https://github.com/apache/spark/blob/master/dev/run-tests#L23
On 29 Jul 2017 11:16 am, "Hyukjin Kwon" wrote:
I am sorry, this is just my wild guess because I have no way to
check or look into Jenkins, but I think we might have to set an
explicit Python version in
https://github.com/apache/spark/blob/master/dev/run-tests-jenkins#L29
I guess we set the explicit Python version for running
--
Hao
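If the interpreter Jenkins picks up matters this much, the scripts could fail fast on an old one. A hypothetical sketch (not taken from the actual dev/ scripts), assuming the tests require Python 2.7+:

```python
# Hypothetical version guard, not from dev/run-tests-jenkins: exit early when
# the interpreter is older than 2.7, since the test scripts use syntax
# (e.g. dict comprehensions) that Python 2.6 rejects at parse time.
import sys

if sys.version_info < (2, 7):
    sys.exit("Python 2.7+ required; found %d.%d" % sys.version_info[:2])
print("python version ok: %d.%d" % sys.version_info[:2])
```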
I saw that error in the latest branch-2.1 build failure, too.
https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/spark-branch-2.1-test-sbt-hadoop-2.7/579/console
But, the code was written in Jan 2016. Didn’t we run it on Python 2.6 without
any problem?
ee74498de37
Shashi,
Welcome! There are a lot of ways you can help contribute. There is a page
documenting some of them: http://spark.apache.org/contributing.html
On Fri, Jul 28, 2017 at 1:35 PM, Shashi Dongur
wrote:
Hello All,
I am looking for ways to contribute to Spark repo. I want to start with
helping on running tests and improving documentation where needed.
Please let me know how I can find avenues to help. How can I spot users who
require assistance with testing? Or gathering documentation for any new
Yes, that's my guess just given information here without a close look.
On 28 Jul 2017 11:03 pm, "Sean Owen" wrote:
I see, does that suggest that a machine has 2.6, when it should use 2.7?
On Fri, Jul 28, 2017 at 2:58 PM Hyukjin Kwon wrote:
That apparently comes from a dict comprehension, which, IIRC, is not
allowed in Python 2.6.x. I checked the release notes to be sure before -
https://issues.apache.org/jira/browse/SPARK-20149
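For reference, dict comprehension syntax was added in Python 2.7, so on 2.6 the same mapping has to be built with `dict()` over a generator expression. A minimal sketch of the construct in question (module names invented for illustration, not the real ones from run-tests.py):

```python
# Toy stand-ins for the module dependency graph run-tests.py builds; the real
# module names differ, these are only for illustration.
modules_to_test = ["core", "sql", "streaming"]
dependencies = {"core": [], "sql": ["core"], "streaming": ["core"]}

# Python 2.7+/3.x dict comprehension -- the syntax Python 2.6 rejects with
# "SyntaxError: invalid syntax":
graph = {m: set(dependencies[m]).intersection(modules_to_test)
         for m in modules_to_test}

# Python 2.6-compatible equivalent: dict() over a generator expression.
graph_26 = dict((m, set(dependencies[m]).intersection(modules_to_test))
                for m in modules_to_test)

assert graph == graph_26
```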
On 28 Jul 2017 9:56 pm, "Sean Owen" wrote:
File "./dev/run-tests.py", line 124
{m: set(m.dependencies).intersection(modules_to_test) for m in
modules_to_test}, sort=True)
^
SyntaxError: invalid syntax
It seems like tests are failing intermittently with this type of error.
I think it will be the same, but let me try that.
FYR - https://issues.apache.org/jira/browse/SPARK-19881
On Fri, Jul 28, 2017 at 4:44 PM, ayan guha wrote:
> Try running spark.sql("set yourconf=val")
>
> On Fri, 28 Jul 2017 at 8:51 pm, Chetan Khatri
> wrote:
>
Jörn, both are the same.
On Fri, Jul 28, 2017 at 4:18 PM, Jörn Franke wrote:
> Try sparksession.conf().set
>
> On 28. Jul 2017, at 12:19, Chetan Khatri
> wrote:
Try sparksession.conf().set
Hey Dev/User,
I am working with Spark 2.0.1 and with dynamic partitioning with Hive
facing below issue:
org.apache.hadoop.hive.ql.metadata.HiveException:
Number of dynamic partitions created is 1344, which is more than 1000.
To solve this try to set hive.exec.max.dynamic.partitions to at least 1344.
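The numbers in the exception line up like this; a minimal sketch in plain Python, with the two fixes suggested in the thread shown only as comments, since I cannot verify them against this Spark 2.0.1 setup:

```python
# From the HiveException in the thread: 1344 dynamic partitions were created,
# but hive.exec.max.dynamic.partitions defaults to 1000, so the insert fails.
created_partitions = 1344
default_limit = 1000

needed = None
if created_partitions > default_limit:
    # Hive's own suggestion: raise the cap to at least the number created.
    needed = {"hive.exec.max.dynamic.partitions": created_partitions}

# Either form suggested in the thread would apply this on a Hive-enabled
# SparkSession (untested here against Spark 2.0.1):
#   spark.sql("SET hive.exec.max.dynamic.partitions=1344")        # ayan guha
#   spark.conf.set("hive.exec.max.dynamic.partitions", "1344")    # Jörn Franke
```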