For me, I would like this if it can be done with relatively small changes.
How about adding more granular options, for example, specifying or
filtering a smaller set of test goals in the run-tests.py script?
I think it'd be a fairly small change, and we could roughly reach this goal if
I understood cor
The Apache Spark on Kubernetes Community Development Project is pleased to
announce the latest release of Apache Spark with a native Scheduler Backend
for Kubernetes! Features provided in this release include:
- Cluster-mode submission of Spark jobs to a Kubernetes cluster
- Support
Ah interesting, looking at our latest docs we imply that it should work
with PyPy 2.3+ -- we might want to update that to 2.5+ since we aren't
testing with 2.3 anymore?
On Mon, Aug 14, 2017 at 3:09 PM, Tom Graves
wrote:
> I tried 5.7 and 2.5.1 so it's probably something in my setup. I'll
> inves
I tried 5.7 and 2.5.1, so it's probably something in my setup. I'll investigate
that more; I wanted to make sure it was still supported because I didn't see
anything about it since the original JIRA that added it.
Thanks,
Tom
On Monday, August 14, 2017, 4:29:01 PM CDT, shane knapp
wrote:
actually, we *have* locked on a particular pypy version for the
jenkins workers: 2.5.1
this applies to both the 2.7 and 3.5 conda environments.
(py3k)-bash-4.1$ pypy --version
Python 2.7.9 (9c4588d731b7fe0b08669bd732c2b676cb0a8233, Apr 09 2015, 02:17:39)
[PyPy 2.5.1 with GCC 4.4.7 20120313 (Red
As Dong says, yes, we do test with PyPy in our CI env, but we expect a
"newer" version of PyPy (although I don't think we ever bothered to write
down what the exact version requirements are for the PyPy support, unlike
regular Python).
On Mon, Aug 14, 2017 at 2:06 PM, Dong Joon Hyun
wrote:
> Hi, To
Hi, Tom.
What version of PyPy do you use?
In the Jenkins environment, the `pypy` tests always pass, just like the Python 2.7 and Python 3.4 ones.
https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/spark-master-test-sbt-hadoop-2.7/3340/consoleFull
==
Does anyone know if PyPy works with Spark? I saw a JIRA saying it was supported back in
Spark 1.2, but I'm getting an error when trying it, and I'm not sure if it's something with
my PyPy version or just something Spark doesn't support.
AttributeError: 'builtin-code' object has no attribute 'co_filename'
Traceback (mo
Say you’re working on something and you want to rerun the PySpark tests,
focusing on a specific test or group of tests. Is there a way to do that?
I know that you can test entire modules with this:
./python/run-tests --modules pyspark-sql
But I’m looking for something more granular, like pytest’
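One workaround (a sketch, not an official run-tests option): because PySpark's tests are plain unittest classes, you can select a single class or method with Python's standard unittest dotted-name syntax, provided pyspark and py4j are importable. The snippet below demonstrates the selection pattern on a throwaway test file; the PySpark module and class names you would substitute (e.g. `pyspark.sql.tests.SQLTests`) are assumptions about your checkout, not something run-tests itself documents.

```shell
# Create a small test file with two test classes to show dotted-name selection.
cat > /tmp/example_tests.py <<'EOF'
import unittest

class FastTests(unittest.TestCase):
    def test_add(self):
        self.assertEqual(1 + 1, 2)

class SlowTests(unittest.TestCase):
    def test_mul(self):
        self.assertEqual(2 * 2, 4)
EOF

# Run only FastTests.test_add, skipping SlowTests entirely.
# For PySpark you would use something like (names assumed):
#   python -m unittest pyspark.sql.tests.SQLTests.test_udf
cd /tmp && python -m unittest -v example_tests.FastTests.test_add
```

The same dotted-name form accepts a module, a class, or a single method, so you can narrow the run as far as one test.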
Hi,
I submitted a PR around two months back to improve the performance of
decision trees by allowing a flexible, user-provided storage class for
intermediate data. I posted a few questions about handling backward
compatibility, but there have been no answers for a long time.
Can anybody help me to move this f