What version of Spark are you using? Did you compile your Spark version
and if so, what compile options did you use?
On 11/6/14, 9:22 AM, tridib tridib.sama...@live.com wrote:
Help please!
From: Tridib Samanta <tridib.sama...@live.com>
Date: Thursday, November 6, 2014 at 9:49 AM
To: Terry Siu <terry@smartfocus.com>, u...@spark.incubator.apache.org
I just built the 1.2 snapshot current as of commit 76386e1a23c using:
$ ./make-distribution.sh --tgz --name my-spark --skip-java-test -DskipTests
-Phadoop-2.4 -Phive -Phive-0.13.1 -Pyarn
I drop my Hive configuration files into the conf directory, launch
spark-shell, and then create my
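The message is cut off above, but the setup it describes can be sketched as a couple of shell steps. The paths below are assumptions for illustration only, not the poster's actual layout:

```shell
# Hypothetical: copy the cluster's Hive config into the freshly built
# distribution directory, then start the Spark REPL from it.
cp /etc/hive/conf/hive-site.xml dist/conf/
./dist/bin/spark-shell
```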
Is there any reason why StringType is not a supported type for the GT, GTE, LT,
and LTE operations? I was previously able to have a predicate where my column
type was a string and execute a filter with one of the above operators in
SparkSQL without any problems. However, I synced up to the latest code this
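For context, a minimal sketch of the kind of query being described, assuming a HiveContext `hc` and a hypothetical table `events` with a string column `dt` (both names invented for illustration):

```scala
// Hypothetical reproduction: a comparison operator (>=) applied to a
// string-typed column in a Spark SQL query. Per the thread, this worked
// previously but fails after syncing to the latest code.
val rows = hc.sql("SELECT * FROM events WHERE dt >= '2014-10-01'")
rows.collect().foreach(println)
```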
Thanks, Kousuke. I’ll wait till this pull request makes it into the master
branch.
-Terry
From: Kousuke Saruta <saru...@oss.nttdata.co.jp>
Date: Monday, November 3, 2014 at 11:11 AM
To: Terry Siu <terry@smartfocus.com>,
user
Done.
https://issues.apache.org/jira/browse/SPARK-4213
Thanks,
-Terry
From: Michael Armbrust <mich...@databricks.com>
Date: Monday, November 3, 2014 at 1:37 PM
To: Terry Siu <terry@smartfocus.com>
Cc: user@spark.apache.org
I am synced up to the Spark master branch as of commit 23468e7e96. I have Maven
3.0.5, Scala 2.10.3, and SBT 0.13.1. I’ve built the master branch successfully
previously and am trying to rebuild again to take advantage of the new Hive
0.13.1 profile. I execute the following command:
$ mvn
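The actual mvn invocation is cut off above and cannot be recovered from the message. For reference only, a build command of the rough shape implied by the profiles named elsewhere in this thread (the same -Phadoop-2.4 -Phive -Phive-0.13.1 -Pyarn flags used with make-distribution.sh earlier) might look like the following; treat it as a hedged sketch, not the poster's exact command:

```shell
# Hypothetical reconstruction based on the profiles mentioned in this
# thread; the poster's real flags are not shown in the message.
mvn -Phadoop-2.4 -Phive -Phive-0.13.1 -Pyarn -DskipTests clean package
```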
Thanks for the update, Shivaram.
-Terry
On 10/31/14, 12:37 PM, Shivaram Venkataraman
shiva...@eecs.berkeley.edu wrote:
Yeah looks like https://github.com/apache/spark/pull/2744 broke the
build. We will fix it soon
On Fri, Oct 31, 2014 at 12:21 PM, Terry Siu terry@smartfocus.com
wrote:
I
Found this thread as I am having the same issue. I have exactly the same usage
as shown in Michael's join example. I tried executing a SQL statement against
the joined data set with two columns that have the same name and tried to
disambiguate the column name with the table alias, but I would still get an
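A minimal sketch of the ambiguity being described, with invented table and column names: two tables that share a column name are joined, and the table alias is used to qualify each column in the select list. This assumes a HiveContext `hc`; it is an illustration of the reported situation, not the poster's actual query:

```scala
// Hypothetical: both `orders` and `customers` have a column named `id`.
// Qualifying with the table alias (o.id / c.id) is the disambiguation
// attempt described above, which reportedly still raised an error.
val joined = hc.sql(
  """SELECT o.id, c.id, o.amount
    |FROM orders o JOIN customers c ON o.id = c.id""".stripMargin)
```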
Just to follow up, the queries worked against master and I got my whole flow
rolling. Thanks for the suggestion! Now if only Spark 1.2 will come out with
the next release of CDH5 :P
-Terry
From: Terry Siu <terry@smartfocus.com>
Date: Monday, October 20, 2014
From: Yin Huai <huaiyin@gmail.com>
Date: Thursday, October 16, 2014 at 7:08 AM
To: Terry Siu <terry@smartfocus.com>
Cc: Michael Armbrust <mich...@databricks.com>, user@spark.apache.org
Hi all,
I’m getting a TreeNodeException for unresolved attributes when I do a simple
select from a SchemaRDD generated by a join in Spark 1.1.0. A little background
first: I am using a HiveContext (against Hive 0.12) to grab two tables, join
them, and then perform multiple INSERT-SELECT with
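A rough sketch of the flow being described, using Spark 1.1.0-era APIs and invented table and column names (the real schema is not shown in the thread):

```scala
import org.apache.spark.sql.hive.HiveContext

// Hypothetical setup: `sc` is the SparkContext provided by spark-shell.
val hc = new HiveContext(sc)

// Join two Hive tables and register the result so later statements
// (e.g. the INSERT-SELECTs mentioned above) can select from it.
val joined = hc.sql(
  "SELECT a.key, b.value FROM table_a a JOIN table_b b ON a.key = b.key")
joined.registerTempTable("joined")

// A simple select like this is where the unresolved-attribute
// TreeNodeException was reportedly thrown.
hc.sql("SELECT key FROM joined").collect()
```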
Hi Michael,
Thanks again for the reply. Was hoping it was something I was doing wrong in
1.1.0, but I’ll try master.
Thanks,
-Terry
From: Michael Armbrust <mich...@databricks.com>
Date: Monday, October 20, 2014 at 12:11 PM
To: Terry Siu terry
. Let me know if you need more
information.
Thanks
-Terry
From: Yin Huai <huaiyin@gmail.com>
Date: Tuesday, October 14, 2014 at 6:29 PM
To: Terry Siu <terry@smartfocus.com>
Cc: Michael Armbrust <mich...@databricks.com>
Hi Michael,
That worked for me. At least I’m now further than I was. Thanks for the tip!
-Terry
From: Michael Armbrust <mich...@databricks.com>
Date: Monday, October 13, 2014 at 5:05 PM
To: Terry Siu <terry@smartfocus.com>
Cc: user
I am currently using Spark 1.1.0, compiled against Hadoop 2.3. Our cluster is
CDH 5.1.2, which runs Hive 0.12. I have two external Hive tables that point to
Parquet files (compressed with Snappy), which were converted over from Avro, if
that matters.
I am trying to perform a join with