Hi Mike,
I dug into this a little more, and it turns out in this case there is a
pretty trivial fix -- the problem you are seeing is just from integer
overflow before casting to a long in SizeEstimator. I've opened
https://issues.apache.org/jira/browse/SPARK-9437 for this.
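That class of bug is easy to reproduce in isolation. A minimal illustration of overflow-before-widening (this is a toy example, not the actual SizeEstimator code):

```java
// Toy illustration of the bug class described above: a size is
// computed in 32-bit int arithmetic and wraps around before it is
// widened to long. Not the actual SizeEstimator code.
class OverflowDemo {
    // Buggy: the multiplication happens entirely in int; the wrapped
    // result is then widened to long, preserving the wrong value.
    static long sizeBuggy(int numElements, int bytesPerElement) {
        return (long) (numElements * bytesPerElement);
    }

    // Fixed: widen one operand first so the multiply is done in long.
    static long sizeFixed(int numElements, int bytesPerElement) {
        return (long) numElements * bytesPerElement;
    }
}
```

With numElements = 2^20 and bytesPerElement = 2^12, the true product is 2^32, which wraps to 0 in int arithmetic; widening one operand first gives the correct result.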
For now, I think your
newp. still happening, and i'm still looking into it:

https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/38880/console
On Wed, Jul 29, 2015 at 12:20 PM, shane knapp skn...@berkeley.edu wrote:
ok, i think i found the problem and solution to the git timeouts:
https://stackoverflow.com/questions/12236415/git-clone-return-result-18-code-200-on-a-specific-repository
so, on each worker i've run git config --global http.postBuffer
524288000 as the jenkins user and we'll see if this makes a
Hi Imran,
Thanks to you and Shivaram for looking into this, and opening the
JIRA/PR. I will update you once the PR is merged if there are any
other problems that arise from the broadcast.
Mike
On 7/29/15, Imran Rashid iras...@cloudera.com wrote:
I'd suggest using org.apache.spark.sql.hive.test.TestHive as the context in
unit tests. It takes care of creating separate directories for each
invocation automatically.
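The reason separate directories matter: embedded Derby only lets one process boot a given metastore directory, so two suites sharing a path will collide. A rough sketch of the same isolation idea in plain Java (TestHive does this for you; the helper name here is made up for illustration):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of the isolation idea behind TestHive: give each test
// invocation its own metastore/warehouse directory so two suites
// never point embedded Derby at the same path. The helper name is
// hypothetical; TestHive handles this automatically.
class MetastoreDirs {
    static Path freshMetastoreDir() throws IOException {
        // A unique temp directory per invocation; no two suites collide.
        return Files.createTempDirectory("test-metastore-");
    }
}
```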
On Wed, Jul 29, 2015 at 7:02 PM, JaeSung Jun jaes...@gmail.com wrote:
Hi,
I'm working on custom SQL processing on top of Spark SQL, and I'm upgrading
it to work with Spark 1.4.1.
I've hit an error when multiple test suites access the Hive metastore at
the same time:
Cause: org.apache.derby.impl.jdbc.EmbedSQLException: Another instance of
Derby may have
Sure. Will do.
Thanks a lot for the help.
On Wed, Jul 29, 2015 at 12:08 PM, Reynold Xin r...@databricks.com wrote:
This is because YARN's AM client does not remove fulfilled container requests
from its map until the application's AM explicitly calls
removeContainerRequest for them.
Spark 1.2: Spark's YARN AM does not call removeContainerRequest for
fulfilled container requests.
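A toy model of that bookkeeping (the class and method names below mirror the real org.apache.hadoop.yarn.client.api.AMRMClient calls, but this is an invented simulation, not the real API): if fulfilled requests are never removed, the table still shows them as pending and the client keeps asking for containers it has already received.

```java
// Toy model of the AM client request accounting described above.
// Invented for illustration; the real class is
// org.apache.hadoop.yarn.client.api.AMRMClient.
class RequestTable {
    private int outstanding = 0;

    // AM asks for a container.
    void addContainerRequest() {
        outstanding++;
    }

    // What the AM must call once a request is fulfilled; if it never
    // does, the table still counts the request as pending and more
    // containers get requested on every heartbeat.
    void removeContainerRequest() {
        outstanding--;
    }

    int pending() {
        return outstanding;
    }
}
```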
BTW, for 1.5 there is already a now()-like function being added, so it should
work out of the box in 1.5.0, to be released end of Aug/early Sep.
On Tue, Jul 28, 2015 at 11:38 PM, Reynold Xin r...@databricks.com wrote:
Right now, 603 issues have been resolved for 1.5.0. 424 are still
targeted for 1.5.0, of which 33 are marked Blocker and 60 Critical.
This count is not supposed to be 0 at this point, but must
conceptually get to 0 at the time of 1.5.0's release. Most will simply
be un-targeted or pushed down the
We should add UDF0 to it.
For now, can you just create a one-arg UDF and ignore the argument?
On Tue, Jul 28, 2015 at 10:59 PM, Sachith Withana swsach...@gmail.com
wrote:
Hi Reynold,
I'm implementing the interfaces given here (
https://github.com/apache/spark/tree/master/sql/core/src/main/java/org/apache/spark/sql/api/java
).
But currently there is no UDF0 adapter.
Any suggestions? I'm new to Spark and any help would be appreciated.
--
Thanks,
Sachith Withana
Yup - would you be willing to submit a patch to add UDF0?
Should be pretty easy (really just add a new Java class, and then add a new
function to registerUDF)
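A sketch of what such a UDF0 class might look like, by analogy with the existing UDF1..UDF22 interfaces in org.apache.spark.sql.api.java (this is an assumption about the shape of the patch, not the merged code):

```java
import java.io.Serializable;

// Hypothetical zero-argument UDF interface, modeled on the existing
// UDF1..UDF22 interfaces in org.apache.spark.sql.api.java. The class
// actually added by the patch may differ.
interface UDF0<R> extends Serializable {
    R call() throws Exception;
}
```

It can be implemented with a lambda, e.g. a zero-argument function that returns the current time, mirroring how UDF1 implementations are written today.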
On Tue, Jul 28, 2015 at 11:36 PM, Sachith Withana sach...@wso2.com wrote:
That's what I'm doing right now.
I'm implementing UDF1 for the now() UDF, and during UDF registration I'm
registering zero-parameter UDFs as UDF1s.
For the above example, although I add the now() UDF as is, since it's
registered as a UDF1, I need to provide an empty parameter in the query
Hey Patrick,
Any update on this front please?
Thanks,
Bharath
On Fri, Jul 24, 2015 at 8:38 PM, Patrick Wendell pwend...@gmail.com wrote:
Hey Bharath,
There was actually an incompatible change to the build process that
broke several of the Jenkins builds. This should be patched up in the
We tested this out on our dev cluster (Hadoop 2.7.1 + Spark 1.4.0), and it
looks great! I might also be interested in contributing to it when I get a
chance! Keep up the awesome work! :)
Mark.
Hi Ankur,
Thank you! This looks like a nice simplification. There should be some
performance improvement, since newVerts is no longer cached.
I’ve added your patch:
https://issues.apache.org/jira/browse/SPARK-9436
Best regards, Alexander
From: Ankur Dave [mailto:ankurd...@gmail.com]
Sent: