of nonzero values, but you initialize the first element of the pair to
v, not 1, in v => (v, 1). Try v => (1, 1).
On Thu, Oct 9, 2014 at 4:47 PM, HARIPRIYA AYYALASOMAYAJULA
aharipriy...@gmail.com wrote:
I am a beginner to Spark and finding it difficult to implement a very
simple reduce operation. I wrote:
Oh duh, sorry. The initialization should of course be (v) => (if (v > 0) 1 else 0, 1).
This gives the answer you are looking for. I don't see what Part2 is
supposed to do differently.
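A minimal sketch of the fix being discussed, assuming the input is an RDD[(String, Int)] named myRdd and the goal, per the thread, is a per-key pair of (count of nonzero values, total count):

    val counts = myRdd.combineByKey(
      // createCombiner: start the pair at 1-or-0 and 1, not at (v, 1)
      (v: Int) => (if (v > 0) 1 else 0, 1),
      // mergeValue: fold one more value into a partition-local pair
      (acc: (Int, Int), v: Int) => (acc._1 + (if (v > 0) 1 else 0), acc._2 + 1),
      // mergeCombiners: add up the pairs across partitions
      (a: (Int, Int), b: (Int, Int)) => (a._1 + b._1, a._2 + b._2))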
On Thu, Oct 9, 2014 at 6:14 PM, HARIPRIYA AYYALASOMAYAJULA
aharipriy...@gmail.com wrote:
Hello Sean
--
Regards,
Haripriya Ayyalasomayajula
something
like myRdd.map(x => sum += x), is “sum” being accumulated locally in any
way, for each element or partition or node? Is “sum” a broadcast variable?
Or does it only exist on the driver node? How does the driver node get
access to “sum”?
Thanks,
Areg
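For what it's worth, a short sketch of the behavior being asked about, assuming a SparkContext named sc and an RDD[Int] named myRdd. The closure's copy of sum is serialized into each task, so each executor mutates its own copy and the driver's variable never changes; an accumulator is the supported way to get a total back to the driver:

    var sum = 0
    myRdd.foreach(x => sum += x)  // mutates per-task copies only; driver's sum stays 0

    val acc = sc.accumulator(0)   // Spark 1.x accumulator API
    myRdd.foreach(x => acc += x)  // task updates are merged back on the driver
    println(acc.value)            // only the driver may read the value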
--
Regards,
Haripriya
similar behavior with combineByKey(), which will be faster than the
groupByKey() version.
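A sketch of why, assuming an RDD[(String, Int)] named pairs: groupByKey ships every value across the network before anything is summed, while reduceByKey (built on combineByKey) combines map-side first, so far less data is shuffled:

    val viaGroup  = pairs.groupByKey().mapValues(_.sum)  // shuffles all values, then sums
    val viaReduce = pairs.reduceByKey(_ + _)             // partial sums before the shuffle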
On Thu, Oct 9, 2014 at 9:28 PM, HARIPRIYA AYYALASOMAYAJULA
aharipriy...@gmail.com wrote:
Sean,
Thank you. It works. But I am still confused about the function. Can you
kindly throw some light on it?
I was going
BUSINESS.
E: ji...@sellpoints.com
M: 510.303.7751
On Mon, Oct 13, 2014 at 5:39 PM, HARIPRIYA AYYALASOMAYAJULA
aharipriy...@gmail.com wrote:
Hello,
Can you check if the jar
Or if it has something to do with the way you package your files - try
an alternative packaging method and see if it works
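One alternative worth trying, sketched here with a hypothetical path: register the dependency jar on the SparkContext so executors fetch it themselves instead of relying on how the application is packaged:

    sc.addJar("/path/to/dependency.jar")  // hypothetical path; tasks download it from the driver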
On Monday, October 13, 2014, HARIPRIYA AYYALASOMAYAJULA
aharipriy...@gmail.com wrote:
Well, in the cluster, can you try copying the entire folder and then running it?
For example, my
if someone can suggest possible ways to
do it.
Thanks in advance.
--
Regards,
Haripriya Ayyalasomayajula
the
same but I'm still not clear how it works in Scala/Spark.
Thank you for your time.
--
Regards,
Haripriya Ayyalasomayajula
Graduate Student
Department of Computer Science
University of Houston
Contact : 650-796-7112
On Fri, Oct 24, 2014 at 8:52 PM, HARIPRIYA AYYALASOMAYAJULA
aharipriy...@gmail.com wrote:
Hello,
My map function will call the following function (inc) which should yield
multiple values:
def inc(x: Int, y: Int) = {
  if (condition) {  // `condition` is a placeholder from the original post
    for (i <- 0 to 7) yield (x, y + i)
  } else Seq.empty
}
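Since inc returns a collection of pairs rather than a single pair, the call site should use flatMap rather than map, so the yielded sequences are flattened into one RDD. A sketch, assuming an RDD[(Int, Int)] named myRdd:

    val expanded = myRdd.flatMap { case (x, y) => inc(x, y) }  // one output element per yielded pair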
it to Double and on the large file it works till I get the
mapOutput. But when I include the remaining part, it fails.
Can someone please help me understand where I am going wrong?
Thank you for your time.
--
Regards,
Haripriya Ayyalasomayajula
Graduate Student
Department of Computer Science
--
Regards,
Haripriya Ayyalasomayajula
Graduate Student
Department of Computer Science
University of Houston
Contact : 650-796-7112
are you using?
I assume it contains SPARK-1297.
Cheers
On Fri, Mar 13, 2015 at 7:47 PM, HARIPRIYA AYYALASOMAYAJULA
aharipriy...@gmail.com wrote:
Hello,
I am running a HBase test case. I am using the example from the following:
https://github.com/apache/spark/blob/master/examples/src
AM, HARIPRIYA AYYALASOMAYAJULA
aharipriy...@gmail.com wrote:
Hello all,
Thank you for your responses. I did try to include the
zookeeper.znode.parent property in the hbase-site.xml. It still continues
to give the same error.
I am using Spark 1.2.0 and hbase 0.98.9.
Could you please
help. Thank you for your time.
--
Regards,
Haripriya Ayyalasomayajula
Graduate Student
Department of Computer Science
University of Houston
Contact : 650-796-7112
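If editing hbase-site.xml is not taking effect, one thing to try (a sketch; the quorum hosts and znode path here are placeholders) is setting the properties directly on the configuration object the job hands to HBase:

    import org.apache.hadoop.hbase.HBaseConfiguration

    val hbaseConf = HBaseConfiguration.create()
    hbaseConf.set("hbase.zookeeper.quorum", "zk-host1,zk-host2")  // placeholder hosts
    hbaseConf.set("zookeeper.znode.parent", "/hbase")             // must match the cluster's znode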
/apache/hadoop/mapreduce/Job.html#submit()
to submit Spark jobs to a YARN cluster? I see in the examples that
bin/spark-submit is what's out there, but I couldn't find any APIs around it.
Thanks,
Prashant
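One programmatic entry point that does exist is SparkLauncher (added in Spark 1.4), which wraps spark-submit; a sketch, with the jar path and main class as placeholders:

    import org.apache.spark.launcher.SparkLauncher

    val proc = new SparkLauncher()
      .setAppResource("/path/to/app.jar")  // placeholder application jar
      .setMainClass("com.example.MyApp")   // placeholder main class
      .setMaster("yarn-cluster")
      .launch()                            // spawns spark-submit as a child process
    proc.waitFor()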
--
Regards
vybs
--
Regards,
Haripriya Ayyalasomayajula
Graduate Student
Department
of tasks for jobs run on Mesos? This
would be a very simple yet effective way to prevent a job dominating the
cluster.
cheers,
Tom
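One simple lever for this (a sketch; the master URL and the cap are placeholders) is spark.cores.max, which bounds the total cores an application may hold on Mesos or standalone:

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .setMaster("mesos://zk://zk-host:2181/mesos")  // placeholder master URL
      .set("spark.cores.max", "8")                   // cap this application's total cores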
--
Regards,
Haripriya Ayyalasomayajula
--
Regards,
Haripriya Ayyalasomayajula
chiling...@gmail.com wrote:
My experience with Mesos + Spark is not great. I saw one executor with 30
CPUs and the other executor with 6, so I don't think you can easily
configure it without some tweaking of the source code.
Sent from my iPad
On 2015-08-11, at 2:38, Haripriya Ayyalasomayajula
--
Regards,
Haripriya Ayyalasomayajula