That can't cause any error, since there is no action in your first
snippet. Even calling count on the result doesn't cause an error. You
must be executing something different.
On Sun, Aug 30, 2015 at 4:21 AM, ashrowty ashish.shro...@gmail.com wrote:
I am running the Spark shell (1.2.1) in local mode
Thanks everyone for the help!
On Sat, Aug 29, 2015 at 2:55 AM, Alexey Grishchenko programme...@gmail.com
wrote:
If the data is already in an RDD, the easiest way to calculate min/max for
each column would be the aggregate() function. It takes two functions as
arguments - the first is used to aggregate
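For illustration, a minimal sketch of that aggregate() pattern (the names and sample data are hypothetical; assumes an RDD of equal-width numeric rows):

val data = sc.parallelize(Seq(Array(1.0, 9.0), Array(4.0, 2.0), Array(3.0, 5.0)))
val width = 2
val (mins, maxs) = data.aggregate(
    (Array.fill(width)(Double.MaxValue), Array.fill(width)(Double.MinValue)))(
  // seqOp: fold one row into the partition-local (min, max) accumulator
  (acc, row) => (acc._1.zip(row).map { case (m, v) => math.min(m, v) },
                 acc._2.zip(row).map { case (m, v) => math.max(m, v) }),
  // combOp: merge the accumulators coming from different partitions
  (a, b) => (a._1.zip(b._1).map { case (x, y) => math.min(x, y) },
             a._2.zip(b._2).map { case (x, y) => math.max(x, y) }))
// mins: Array(1.0, 2.0), maxs: Array(4.0, 9.0)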
Looks like your Jackson package and Spark's are at different versions.
Raghav
On Aug 28, 2015 4:01 PM, Manohar753 manohar.re...@happiestminds.com
wrote:
Hi Team,
I upgraded Spark from an older version to 1.4.1. After the Maven build I
tried to run my simple application, but it failed, giving the
Hi Ajay,
Short story: no, there is no easy way to do that. But if you'd like to
play around with this topic, a good starting point would be this blog post
from SequenceIQ:
http://blog.sequenceiq.com/blog/2014/08/22/spark-submit-in-java/.
I heard rumors that there is some work going on to
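For reference, a minimal sketch of the SparkLauncher API that ships with Spark 1.4+ (the jar path and main class below are placeholders; assumes SPARK_HOME is set in the environment):

import org.apache.spark.launcher.SparkLauncher

val spark = new SparkLauncher()
  .setAppResource("/path/to/my-app.jar") // placeholder: your application jar
  .setMainClass("com.example.MyApp")     // placeholder: your main class
  .setMaster("yarn-cluster")
  .launch()                              // spawns spark-submit as a child process
spark.waitFor()                          // block until it exits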
Hi Ajay,
Are you trying to save to your local file system or to HDFS?
// This would save to HDFS under /user/hadoop/counter
counter.saveAsTextFile("/user/hadoop/counter");
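If it ends up on the wrong file system, the scheme prefix selects the target explicitly (a sketch; assumes HDFS is the default FS configured in core-site.xml):

counter.saveAsTextFile("hdfs:///user/hadoop/counter") // explicitly HDFS
counter.saveAsTextFile("file:///tmp/counter")         // local disk of each worker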
David
On Sun, Aug 30, 2015 at 11:21 AM, Ajay Chander itsche...@gmail.com wrote:
Hi Everyone,
Recently we have installed
I'm not sure how to reproduce it; this code does not produce an error in master.
On Sun, Aug 30, 2015 at 7:26 PM, Ashish Shrowty
ashish.shro...@gmail.com wrote:
Do you think I should create a JIRA?
On Sun, Aug 30, 2015 at 12:56 PM Ted Yu yuzhih...@gmail.com wrote:
I got StackOverflowError
Pranay:
Please take a look at the Redirector class inside:
./launcher/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java
Cheers
On Sun, Aug 30, 2015 at 11:25 AM, Pranay Tonpay pranay.ton...@impetus.co.in
wrote:
Yes, the context is being closed at the end.
From: Akhil Das ak...@sigmoidanalytics.com
Sent: Sunday, August 30, 2015 9:03 AM
To: Pranay Tonpay
Cc: user@spark.apache.org
Subject: Re: spark-submit issue
Did you try putting a sc.stop at the end of your pipeline?
Do you think I should create a JIRA?
On Sun, Aug 30, 2015 at 12:56 PM Ted Yu yuzhih...@gmail.com wrote:
I got StackOverflowError as well :-(
On Sun, Aug 30, 2015 at 9:47 AM, Ashish Shrowty ashish.shro...@gmail.com
wrote:
Yep .. I tried that too earlier. Doesn't make a difference. Are you
Hi David,
Thanks for responding! My main intention was to submit a Spark job/jar to the
YARN cluster from my Eclipse, from within the code. Is there any way I
could pass my YARN configuration somewhere in the code to submit the jar to
the cluster?
Thank you,
Ajay
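One possible approach (a sketch only, with placeholder paths): hand the launcher an environment whose HADOOP_CONF_DIR points at the cluster's YARN configuration, so spark-submit can find yarn-site.xml:

import java.util.{HashMap => JHashMap}
import org.apache.spark.launcher.SparkLauncher

val env = new JHashMap[String, String]()
env.put("HADOOP_CONF_DIR", "/etc/hadoop/conf") // placeholder: dir holding yarn-site.xml
val proc = new SparkLauncher(env)
  .setAppResource("/path/to/my-app.jar")       // placeholder jar
  .setMainClass("com.example.WordCount")       // placeholder main class
  .setMaster("yarn-cluster")
  .launch()
proc.waitFor()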
On Sunday, August 30, 2015, David
Could you please elaborate? Spark classpath in the spark-env.sh file?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Apache-Spark-Suitable-JDBC-Driver-not-found-tp24505p24511.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Hi,
I saw the posting about storing NumPy values in sequence files:
http://mail-archives.us.apache.org/mod_mbox/spark-user/201506.mbox/%3cCAJQK-mg1PUCc_hkV=q3n-01ioq_pkwe1g-c39ximco3khqn...@mail.gmail.com%3e
I’ve had a go at implementing this, and opened a PR at
Hi Everyone,
Recently we installed Spark on YARN in a Hortonworks cluster. Now I am
trying to run a wordcount program in my Eclipse; I
used setMaster("local") and I see the expected results. Now I want
to submit the same job to my YARN cluster from my Eclipse. In Storm,
basically I
Using the Spark shell:

scala> import scala.collection.mutable.MutableList
import scala.collection.mutable.MutableList

scala> val lst = MutableList[(String,String,Double)]()
lst: scala.collection.mutable.MutableList[(String, String, Double)] =
MutableList()

scala>
I see.
What about using the following in place of variable a?
http://spark.apache.org/docs/latest/programming-guide.html#broadcast-variables
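For context, a minimal sketch of that broadcast pattern as it might look in the shell (the map `a` is a stand-in for whatever local data the closure captures):

val a = Map("k1" -> 1.0, "k2" -> 2.0) // hypothetical local lookup data
val bcA = sc.broadcast(a)             // shipped to each executor once
sc.parallelize(1 to 10)
  .map(i => bcA.value.getOrElse("k1", 0.0) * i)
  .sum()                              // closure reads bcA.value, not `a` itself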
Cheers
On Sun, Aug 30, 2015 at 8:54 AM, Ashish Shrowty ashish.shro...@gmail.com
wrote:
@Sean - Agree that there is no action, but I still get the
Manohar:
See if adding the following dependency to your project helps:
<dependency>
  <groupId>com.fasterxml.jackson.core</groupId>
  <artifactId>jackson-databind</artifactId>
  <version>${fasterxml.jackson.version}</version>
</dependency>
<dependency>
Yep .. I tried that too earlier. Doesn't make a difference. Are you able to
replicate on your side?
On Sun, Aug 30, 2015 at 12:08 PM Ted Yu yuzhih...@gmail.com wrote:
I see.
What about using the following in place of variable a?
Hi, can you try something like:

val rowRDD = sc.textFile("/user/spark/short_model").map { line =>
  val p = line.split("\\t")
  if (p.length >= 72) {
    Row(p(0), p(1)…)
  } else {
    throw new RuntimeException(s"failed in parsing $line")
  }
}
From the log
Thanks everyone for your valuable time and information. It was helpful.
On Sunday, August 30, 2015, Ted Yu yuzhih...@gmail.com wrote:
This is related:
SPARK-10288 Add a rest client for Spark on Yarn
FYI
On Sun, Aug 30, 2015 at 12:12 PM, Dawid Wysakowicz
wysakowicz.da...@gmail.com
This is related:
SPARK-10288 Add a rest client for Spark on Yarn
FYI
On Sun, Aug 30, 2015 at 12:12 PM, Dawid Wysakowicz
wysakowicz.da...@gmail.com wrote:
Hi Ajay,
Short story: no, there is no easy way to do that. But if you'd like to
play around with this topic, a good starting point would be
You can also add a System.exit(0) after the sc.stop.
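That is, a sketch of the suggested shutdown sequence at the end of the driver's main method:

sc.stop()       // release the SparkContext and its resources
System.exit(0)  // force the JVM to exit if non-daemon threads linger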
On 30 Aug 2015 23:55, Pranay Tonpay pranay.ton...@impetus.co.in wrote:
yes, the context is being closed at the end.
--
*From:* Akhil Das ak...@sigmoidanalytics.com
*Sent:* Sunday, August 30, 2015 9:03 AM
*To:*
Hi All,
As a developer I understand that certain scenarios can be achieved with both
Spark SQL and Spark programming (RDD transformations). Moreover, I need to
consider the points below:
Performance
Implementation approach
Specific use cases suited to each approach
Could you
I expect it is because the versions are not in the range defined in pom.xml.
You should upgrade your Maven version to 3.3.3 and your JDK to 1.7.
The Spark team already knows about this issue, so you can find more
information on the developer community board.
Kevin