This discussion belongs on the dev list. Please post any replies there.
On Sat, May 23, 2015 at 10:19 PM, Cheolsoo Park piaozhe...@gmail.com
wrote:
Hi,
I've been testing Spark SQL in the 1.4 RC and found two issues. I wanted to
confirm whether these are bugs or not before opening a JIRA.
*1)*
Yup, netlib-lgpl right now is activated through a profile... if we can reuse
the same idea, then csparse could also be added to Spark behind an LGPL flag. But
again, as Sean said, it's tricky. Better to keep it on Spark Packages for
users to try.
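For reference, the profile-based approach mentioned above is how the LGPL netlib bindings are opted into at build time. A minimal sketch of the invocation, assuming a standard Spark source checkout with the bundled Maven wrapper:

```shell
# Build Spark with the optional netlib-lgpl profile enabled.
# The profile is off by default so the default binaries stay
# free of LGPL-licensed code; -Pnetlib-lgpl opts in explicitly.
build/mvn -Pnetlib-lgpl -DskipTests clean package
```

The same pattern (an opt-in Maven profile guarding optionally-licensed dependencies) is what the thread suggests reusing for csparse.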
On May 24, 2015 1:36 AM, Sean Owen so...@cloudera.com wrote:
I am on the May 3rd commit:
commit 49549d5a1a867c3ba25f5e4aec351d4102444bc0
Author: WangTaoTheTonic wangtao...@huawei.com
Date: Sun May 3 00:47:47 2015 +0100
[SPARK-7031] [THRIFTSERVER] let thrift server take SPARK_DAEMON_MEMORY
and SPARK_DAEMON_JAVA_OPTS
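With SPARK-7031, the Thrift server honors the same daemon environment variables as other Spark daemons. A sketch of how they might be set in `conf/spark-env.sh` (the values below are illustrative, not recommendations):

```shell
# conf/spark-env.sh -- daemon settings the Thrift server picks up
# after SPARK-7031. Illustrative values only.
export SPARK_DAEMON_MEMORY=2g                  # heap size for Spark daemons
export SPARK_DAEMON_JAVA_OPTS="-XX:+UseG1GC"   # extra JVM options for daemons
```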
On Sat, May 23, 2015 at 7:54 PM, Josh
Ah right, I misread this. I get it, but I don't think the PR fixes this. Let
me comment there.
On May 24, 2015 3:56 PM, Sean Owen so...@cloudera.com wrote:
Wait, isn't the error message just saying you can't set 8mb buffers? So it
is correctly parsing the args. I don't understand why this has to do with
parsing the value. That much works.
On May 24, 2015 2:04 AM, Debasish Das debasish.da...@gmail.com wrote:
Hi,
I am on last week's master but all
Please update to the following:
commit c2f0821aad3b82dcd327e914c9b297e92526649d
Author: Zhang, Liye liye.zh...@intel.com
Date: Fri May 8 09:10:58 2015 +0100
[SPARK-7392] [CORE] bugfix: Kryo buffer size cannot be larger than 2M
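The fix above concerns parsing of the Kryo buffer sizes. For context, a sketch of how these settings are typically configured in `conf/spark-defaults.conf` (values are illustrative):

```shell
# conf/spark-defaults.conf -- Kryo serializer settings (illustrative values).
# Before SPARK-7392, buffer values larger than 2m could be rejected
# even though the argument itself was parsed correctly.
spark.serializer                 org.apache.spark.serializer.KryoSerializer
spark.kryoserializer.buffer      1m
spark.kryoserializer.buffer.max  128m
```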
On Sun, May 24, 2015 at 8:04 AM, Debasish Das
I don't believe we are talking about adding things to the Apache project,
but incidentally LGPL is not OK in Apache projects either.
On May 24, 2015 6:12 AM, DB Tsai dbt...@dbtsai.com wrote:
I thought LGPL is okay but GPL is not okay for an Apache project.
On Saturday, May 23, 2015, Patrick
Thanks for reporting this.
We intend to support multiple metastore versions in a single
build (hive-0.13.1) by introducing the IsolatedClientLoader, but you're
probably hitting a bug. Please file a JIRA issue for this.
I will keep investigating this as well.
Hao
From: Mark Hamstra
Thank you Hao for the confirmation!
I filed two JIRAs as follows:
https://issues.apache.org/jira/browse/SPARK-7850 (removing hive-0.12.0
profile from pom)
https://issues.apache.org/jira/browse/SPARK-7851 (thrift error with hive
metastore 0.12)
On Sun, May 24, 2015 at 8:18 PM, Cheng, Hao
Hey jameszhouyi,
Since SPARK-7119 is not a regression from earlier versions, we won't
hold the release for it. However, please comment on the JIRA if it is
affecting you... it will help us prioritize the bug.
- Patrick
On Fri, May 22, 2015 at 8:41 PM, jameszhouyi yiaz...@gmail.com wrote:
We
This vote is cancelled in favor of RC2.
On Tue, May 19, 2015 at 9:10 AM, Patrick Wendell pwend...@gmail.com wrote:
Please vote on releasing the following candidate as Apache Spark version
1.4.0!
The tag to be voted on is v1.4.0-rc1 (commit 777a081):
Please vote on releasing the following candidate as Apache Spark version 1.4.0!
The tag to be voted on is v1.4.0-rc2 (commit 03fb26a3):
https://git-wip-us.apache.org/repos/asf?p=spark.git;a=commit;h=03fb26a3e50e00739cc815ba4e2e82d71d003168
The release files, including signatures, digests, etc.
Hi All,
This week I got around to setting up nightly builds for Spark on
Jenkins. I'd like feedback on these and if it's going well I can merge
the relevant automation scripts into Spark mainline and document it on
the website. Right now I'm doing:
1. SNAPSHOT's of Spark master and release
Blocks are replicated immediately, before the driver launches any jobs
using them.
On Thu, May 21, 2015 at 2:05 AM, Hemant Bhanawat hemant9...@gmail.com
wrote:
Honestly, given the length of my email, I didn't expect a reply. :-)
Thanks for reading and replying. However, I have a follow-up