Thanks Michael,
I was wondering how HiveContext.sql() is hooked up to HiveQL... I'll have a
look at it.
much appreciated.
Thanks
Jason
On 15 April 2015 at 04:15, Michael Armbrust mich...@databricks.com wrote:
These are great questions -- I don't know the answer to most of them, but I'll
try to at least give my take on "What should be rejected and why?"
For new features, I'm often really confused by our guidelines on what to
include and what to exclude. Maybe we should ask that all new features
make it clear
JIRA opened: https://issues.apache.org/jira/browse/SPARK-6921
At 2015-04-15 00:57:24, Cheng Lian lian.cs@gmail.com wrote: Would you
mind opening a JIRA for this? I think your suspicion makes sense. Will have
a look at this tomorrow. Thanks for reporting! Cheng On 4/13/15 7:13 PM,
For the record, this is what I came up with (ignoring the configurable port
for now):
spark/sbin/start-master.sh
master_ui_response_code=0
# Poll until the master web UI responds with HTTP 200
# (completed here assuming the default master UI port 8080)
while [ "$master_ui_response_code" -ne 200 ]; do
  sleep 1
  master_ui_response_code=$(
    curl --head --silent --output /dev/null \
      --write-out '%{http_code}' http://localhost:8080)
done
I'd like to close this vote to coincide with the 1.3.1 release,
however, it would be great to have more people test this release
first. I'll leave it open for a bit longer and see if others can give
a +1.
On Tue, Apr 14, 2015 at 9:55 PM, Patrick Wendell pwend...@gmail.com wrote:
+1 from me as well.
Hi,
I ran a very simple operation via ./spark-shell (version 1.3.0):
val data = Array(1, 2, 3, 4)
val distd = sc.parallelize(data)
distd.saveAsTextFile(.. )
When I executed it, I saw that 4 tasks were created in Spark. Each task
created 2 temp files at different stages; there was a 1st tmp file
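For context (my own sketch, not from the thread): when no explicit numSlices is passed, sc.parallelize uses spark.default.parallelism, which in local mode defaults to the number of cores, so 4 elements split across 4 slices yields 4 tasks. The slicing itself can be sketched in pure Scala (simplified from what Spark's ParallelCollectionRDD does; the slice helper below is my own illustrative name):

```scala
// Sketch of how a sequence is cut into numSlices contiguous partitions,
// each of which becomes one task. Simplified; real Spark handles Ranges
// and other collection types specially.
def slice[T](seq: Seq[T], numSlices: Int): Seq[Seq[T]] =
  (0 until numSlices).map { i =>
    val start = (i * seq.length.toLong / numSlices).toInt
    val end = ((i + 1) * seq.length.toLong / numSlices).toInt
    seq.slice(start, end)
  }

// 4 elements over 4 slices: each slice (and hence each task) gets one element.
val parts = slice(List(1, 2, 3, 4), 4)
println(parts)
```

With fewer slices than elements, trailing slices absorb the remainder, e.g. 4 elements over 3 slices gives sizes 1, 1, 2.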
+1 from myself as well
On Mon, Apr 13, 2015 at 8:35 PM, GuoQiang Li wi...@qq.com wrote:
+1 (non-binding)
-- Original --
From: Patrick Wendell <pwend...@gmail.com>
Date: Sat, Apr 11, 2015 02:05 PM
To: dev@spark.apache.org
Subject:
This vote passes with 10 +1 votes (5 binding) and no 0 or -1 votes.
+1:
Sean Owen*
Reynold Xin*
Krishna Sankar
Denny Lee
Mark Hamstra*
Sean McNamara*
Sree V
Marcelo Vanzin
GuoQiang Li
Patrick Wendell*
0:
-1:
I will work on packaging this release in the next 48 hours.
- Patrick
+1 from me as well.
On Tue, Apr 7, 2015 at 4:36 AM, Sean Owen so...@cloudera.com wrote:
I think that's close enough for a +1:
Signatures and hashes are good.
LICENSE, NOTICE still check out.
Compiles for a Hadoop 2.6 + YARN + Hive profile.
JIRAs with target version = 1.2.x look
While working on upgrading to Spark 1.3.x, I noticed that the Client and
ClientArguments classes in the yarn module are now defined as private[spark]. I
know that this code is mostly used by the spark-submit code, but we call the
Yarn client directly (without going through spark-submit) in our spark
Hi Theodore,
I'm currently working on elastic-net regression in ML framework, and I
decided not to have any extra layer of abstraction for now but focus
on accuracy and performance. We may come up with a proper solution
later. Any ideas are welcome.
Sincerely,
DB Tsai
Yep, everything is installed (and I just checked again). The path for
Python 3.4 is /home/anaconda/bin/envs/py3k/bin, which you can enable by
either manually prepending it to the PATH variable or running 'source
activate py3k' in the test.
On Tue, Apr 14, 2015 at 11:41 AM, Davies Liu
(+dev)
Hi Justin,
short answer: no, there is no way to do that.
I'm just guessing here, but I imagine this was done to eliminate
serialization problems (e.g., what if we got an error trying to serialize
the user exception to send from the executors back to the driver?).
Though, actually that
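The failure mode described above can be sketched in plain Scala (class names here are my own illustrations, not from the thread): an exception class is itself Serializable via Throwable, but if it captures a non-serializable field, Java serialization of the whole object fails, which is exactly the kind of error that would surface while shipping a user exception from an executor to the driver.

```scala
import java.io.{ByteArrayOutputStream, NotSerializableException, ObjectOutputStream}

// A hypothetical resource type that does not implement java.io.Serializable.
class NotSerializableResource

// A user exception that (perhaps accidentally) captures that resource.
// Throwable is Serializable, but the captured field is not.
class UserException(val res: NotSerializableResource)
  extends RuntimeException("user error")

// Attempt Java serialization; report whether it succeeded.
def trySerialize(obj: AnyRef): Boolean =
  try {
    val out = new ObjectOutputStream(new ByteArrayOutputStream())
    out.writeObject(obj)
    true
  } catch {
    case _: NotSerializableException => false
  }

println(trySerialize(new RuntimeException("plain")))                      // serializes fine
println(trySerialize(new UserException(new NotSerializableResource)))     // fails
</imports>
```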