I did this and it seems to work now.
sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 10
Even the CPython version got upgraded. Sorry if you feel I was asking too
many rudimentary questions; setting up Spark has been rough without a
mentor. Thanks :-D
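For anyone following along, a quick way to confirm the switch took effect is to ask the interpreter itself (a minimal sketch; run it with the bare `python` command, since that is what run-tests and flake8 will inherit):

```python
import sys

# run-tests and flake8 inherit whichever CPython the bare "python" name
# resolves to, so check the version of that exact interpreter.
assert sys.version_info >= (3,), "still resolving to Python 2: %s" % sys.version
print("python is CPython %d.%d" % (sys.version_info.major, sys.version_info.minor))
```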
FYI, I'm working
Thanks for the response!! Here is my configuration:
flake8 --version
3.8.1 (mccabe: 0.6.1, pycodestyle: 2.6.0, pyflakes: 2.2.0) CPython 2.7.16 on
Linux
It seems I need to upgrade my CPython...
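One way to sidestep the "which CPython" confusion (a sketch, assuming flake8 is installed for the interpreter in question) is to invoke flake8 as a module of a specific interpreter instead of relying on whatever `flake8` happens to be on PATH:

```python
import sys

# "python3 -m flake8" runs the flake8 installed for that exact interpreter,
# so the "CPython x.y" in its version banner can never disagree with the
# interpreter you meant to lint with.
cmd = [sys.executable, "-m", "flake8", "--version"]
print(" ".join(cmd))  # e.g. /usr/bin/python3 -m flake8 --version
```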
this is the flake8 versioning from a jenkins worker:
$ flake8 --version
3.6.0 (mccabe: 0.6.1, pycodestyle: 2.4.0, pyflakes: 2.0.0) CPython 3.6.8 on
Linux
be sure you've got all the right versions of packages in there.
On Thu, May 14, 2020 at 12:19 PM suddhu wrote:
> Thanks for the response
Thanks for the response Jeff and Sean.
It has been quite frustrating setting up the dev environment without any
help. It's comforting to have some help finally.
I've added "alias python=python3" in my bashrc, so the default python
accessed is 3.7.2. I have flake8 installed in both python2 and
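One caveat worth noting (an assumption on my part about your setup): a bashrc alias only applies to interactive shells, so build scripts, shebang lines, and subprocesses still resolve `python` from PATH, not from the alias. A quick check:

```python
import shutil

# shutil.which consults PATH the same way a shebang or subprocess does;
# a shell alias is invisible here, so this shows the interpreter that
# a script like ./dev/run-tests will actually pick up.
print(shutil.which("python") or "no 'python' on PATH")
```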
Are you positive you set up your Python environment correctly? To me,
those error messages look like you are running Python 2, but it should be
Python 3.
On Thu, May 14, 2020 at 1:34 PM Sudharshann D wrote:
> Hello! ;)
>
> I'm new to spark development and have been trying to set up my dev
>
Hm, flake8 works OK for me locally in master, and on Jenkins it seems.
Could be a version issue?
On Thu, May 14, 2020 at 1:34 PM Sudharshann D wrote:
> Hello! ;)
>
> I'm new to spark development and have been trying to set up my dev
> environment for hours without much success. :-(
>
> Firstly,
Hello! ;)
I'm new to spark development and have been trying to set up my dev
environment for hours without much success. :-(
Firstly, I'm wondering why my ./dev/run-tests fails even though I'm on the
master branch.
This is the error:
flake8 checks failed:
No luck running the full test suites with mvn test from the main folder or
just mvn -pl mllib.
Any other suggestion would be much appreciated.
Thank you.
2017-11-11 12:46 GMT+00:00 Marco Gaido :
> Hi Jorge,
>
> then try running the tests not from the mllib folder, but
Hi Jorge,
then try running the tests not from the mllib folder, but on Spark base
directory.
If you want to run only the tests in mllib, you can specify the project
using the -pl argument of mvn.
Thanks,
Marco
2017-11-11 13:37 GMT+01:00 Jorge Sánchez :
> Hi Marco,
>
>
Hi Dev,
I'm running the MLLIB tests in the current Master branch and the following
Suites are failing due to some classes not being registered with Kryo:
org.apache.spark.mllib.MatricesSuite
org.apache.spark.mllib.VectorsSuite
org.apache.spark.ml.InstanceSuite
I can solve it by registering the
{m: set(m.dependencies).intersection(modules_to_test) for m in
modules_to_test}, sort=True)
Bests,
Dongjoon.
*From: *Hyukjin Kwon <gurwls...@gmail.com>
*Date: *Friday, July 28, 2017 at 7:06 AM
*To: *Sean Owen <so...@cloudera.com>
*Cc: *dev <dev@spark.apache.org>
*Subject: *Re: Tests failing with run-tests.py SyntaxError
Yes, that's my guess just given information here without a close look.
On 28 Jul 2017 11:03 pm, "Sean Owen" wrote:
I see, does that suggest that a machine has 2.6, when it should use 2.7?
On Fri, Jul 28, 2017 at 2:58 PM Hyukjin Kwon wrote:
> That
I see, does that suggest that a machine has 2.6, when it should use 2.7?
On Fri, Jul 28, 2017 at 2:58 PM Hyukjin Kwon wrote:
> That is apparently due to a dict comprehension which is, IIRC, not
> allowed in Python 2.6.x. I checked the release notes before to be sure -
>
That is apparently due to a dict comprehension which is, IIRC, not
allowed in Python 2.6.x. I checked the release notes before to be sure -
https://issues.apache.org/jira/browse/SPARK-20149
On 28 Jul 2017 9:56 pm, "Sean Owen" wrote:
> File "./dev/run-tests.py", line 124
>
File "./dev/run-tests.py", line 124
{m: set(m.dependencies).intersection(modules_to_test) for m in
modules_to_test}, sort=True)
^
SyntaxError: invalid syntax
It seems like tests are failing intermittently with this type of error,
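For reference, the failing line uses a dict comprehension, which only parses on Python 2.7+/3.x; on 2.6 the same mapping has to be spelled with the dict() constructor. A minimal illustration (generic names, not Spark's actual module list):

```python
modules_to_test = ["core", "sql", "mllib"]

# Python 2.7+/3.x: a dict comprehension -- a SyntaxError on CPython 2.6.
deps = {m: len(m) for m in modules_to_test}

# Equivalent 2.6-compatible spelling using the dict() constructor
# over a generator expression.
deps_26 = dict((m, len(m)) for m in modules_to_test)

assert deps == deps_26
print(deps)  # {'core': 4, 'sql': 3, 'mllib': 5}
```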
i confirmed that an Encoder[Array[Int]] is no longer serializable, and with
my spark build from march 7 it was.
i believe the issue is commit 295747e59739ee8a697ac3eba485d3439e4a04c3 and
i send wenchen an email about it.
On Wed, Apr 12, 2017 at 4:31 PM, Koert Kuipers wrote:
i believe the error is related to an
org.apache.spark.sql.expressions.Aggregator where the buffer type (BUF) is
Array[Int]
On Wed, Apr 12, 2017 at 4:19 PM, Koert Kuipers wrote:
> hey all,
> today i tried upgrading the spark version we use internally by creating a
> new
hey all,
today i tried upgrading the spark version we use internally by creating a
new internal release from the spark master branch. last time i did this was
march 7.
with this updated spark i am seeing some serialization errors in the unit
tests for our own libraries. looks like a scala
(adding michael armbrust and josh rosen for visibility)
ok. roughly 9% of all spark test builds (including both PRB builds)
are failing due to GC overhead limits.
$ wc -l SPARK_TEST_BUILDS GC_FAIL
1350 SPARK_TEST_BUILDS
125 GC_FAIL
here are the affected builds (over the past ~2 weeks):
$
On Fri, Jan 6, 2017 at 12:20 PM, shane knapp wrote:
> FYI, this is happening across all spark builds... not just the PRB.
s/all/almost all/
FYI, this is happening across all spark builds... not just the PRB.
i'm compiling a report now and will email that out this afternoon.
:(
On Thu, Jan 5, 2017 at 9:00 PM, shane knapp wrote:
> unsurprisingly, we had another GC:
>
>
unsurprisingly, we had another GC:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70949/console
so, definitely not the system (everything looks hunky dory on the build node).
> It can always be some memory leak; if we increase the memory settings
> and OOMs still happen,
But is there any non-memory-leak reason why the tests should need more
memory? In theory each test should be cleaning up its own Spark Context
etc., right? My memory is that OOM issues in the tests in the past have been
indicative of memory leaks somewhere.
I do agree that it doesn't seem likely
On Thu, Jan 5, 2017 at 4:58 PM, Kay Ousterhout wrote:
> But is there any non-memory-leak reason why the tests should need more
> memory? In theory each test should be cleaning up its own Spark Context
> etc., right? My memory is that OOM issues in the tests in the past
Seems like the OOM is coming from tests, which most probably means
it's not an infrastructure issue. Maybe tests just need more memory
these days and we need to update maven / sbt scripts.
On Thu, Jan 5, 2017 at 1:19 PM, shane knapp wrote:
> as of first thing this morning,
Thanks for looking into this Shane!
On Thu, Jan 5, 2017 at 1:19 PM, shane knapp wrote:
> as of first thing this morning, here's the list of recent GC overhead
> build failures:
> https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70891/
> console
>
preliminary findings: seems to be transient, and affecting 4% of
builds from late december until now (which is as far back as we keep
build records for the PRB builds).
408 builds
16 builds.gc <--- failures
it's also happening across all workers at about the same rate.
and best of all,
nope, no changes to jenkins in the past few months. ganglia graphs
show higher, but not worrying, memory usage on the workers when the
jobs failed...
i'll take a closer look later tonite/first thing tomorrow morning.
shane
On Tue, Jan 3, 2017 at 4:35 PM, Kay Ousterhout
Hi,
Inside mllib I am running tests using:
mvn -Dhadoop.version=2.3.0-cdh5.1.0 -Phadoop-2.3 -Pyarn install
The local tests run fine but the cluster tests are failing...
LBFGSClusterSuite:
- task size should be small *** FAILED ***
org.apache.spark.SparkException: Job aborted due to stage
I have done mvn clean several times...
Consistently, all the mllib tests that use
LocalClusterSparkContext.scala fail!
Try to build the assembly jar first. ClusterSuite uses local-cluster
mode, which requires the assembly jar. -Xiangrui
On Tue, Sep 30, 2014 at 8:23 AM, Debasish Das debasish.da...@gmail.com wrote:
I have done mvn clean several times...
Consistently all the mllib tests that are using
Hi All,
I noticed that all PR tests run overnight had failed due to timeouts. The
patch that updates the netty shuffle, I believe, somehow inflated the
build time significantly. That patch had been tested, but one change was
made before it was merged that was not tested.
I've reverted the patch
Also I think Jenkins doesn't post build timeouts to github. Is there any way
we can fix that?
On Aug 15, 2014 9:04 AM, Patrick Wendell pwend...@gmail.com wrote:
Hi All,
I noticed that all PR tests run overnight had failed due to timeouts. The
patch that updates the netty shuffle I believe
Shivaram,
Can you point us to an example of that happening? The Jenkins console
output, that is.
Nick
On Fri, Aug 15, 2014 at 2:28 PM, Shivaram Venkataraman
shiva...@eecs.berkeley.edu wrote:
Also I think Jenkins doesn't post build timeouts to github. Is there any way
we can fix that?
On
Hey Nicholas,
Yeah so Jenkins has its own timeout mechanism and it will just kill the
entire build after 120 minutes. But since run-tests is sitting in the
middle of the tests, it can't actually post a failure message.
I think run-tests-jenkins should just wrap the call to run-tests in a call
So 2 hours is a hard cap on how long a build can run. Okie doke.
Perhaps then I'll wrap the run-tests step as you suggest and limit it to
100 minutes or something, and cleanly report if it times out.
Sound good?
On Fri, Aug 15, 2014 at 4:43 PM, Patrick Wendell pwend...@gmail.com wrote:
Hey
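The wrapper being discussed could look roughly like this (a sketch only; the real run-tests-jenkins is a shell script, and the 100-minute budget is the figure from this thread, not a project constant):

```python
import subprocess

# Bound the test run safely below Jenkins's 120-minute hard kill, so a
# clean failure message can still be posted instead of the build dying
# mid-stream with no report.
BUDGET_SECONDS = 100 * 60

def run_with_budget(cmd, budget=BUDGET_SECONDS):
    try:
        subprocess.run(cmd, timeout=budget, check=True)
        return "tests finished"
    except subprocess.TimeoutExpired:
        return "tests timed out after %d minutes" % (budget // 60)

print(run_with_budget(["true"]))  # "true" stands in for ./dev/run-tests
```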
Yeah I was thinking something like that. Basically we should just have a
variable for the timeout and I can make sure it's under the configured
Jenkins time.
On Fri, Aug 15, 2014 at 1:55 PM, Nicholas Chammas
nicholas.cham...@gmail.com wrote:
So 2 hours is a hard cap on how long a build can