y Lam wrote:
> My experience with Mesos + Spark is not great. I saw one executor with 30
> CPUs and the other executor with 6, so I don't think you can easily
> configure it without some tweaking of the source code.
>
> Sent from my iPad
>
> On 2015-08-11, at 2:38, Hari
>> --
>> View this message in context:
>> http://apache-spark-user-list.1001560.n3.nabble.com/Controlling-number-of-executors-on-Mesos-vs-YARN-tp20966.html
>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>
>>
>>
>
>
--
Regards,
Haripriya Ayyalasomayajula
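For context, a minimal sketch of the knobs that existed at the time (assuming
Spark 1.x with Mesos coarse-grained mode; the master URL and values are made
up). spark.cores.max caps the job's total cores, but per-executor sizing was
not directly configurable, which is the imbalance described above:

  import org.apache.spark.{SparkConf, SparkContext}

  // In coarse-grained mode Spark accepts whole Mesos offers until
  // spark.cores.max is reached, so executor sizes can vary widely (e.g. 30 vs 6).
  val conf = new SparkConf()
    .setMaster("mesos://zk://zk1:2181/mesos")   // hypothetical Mesos master
    .set("spark.mesos.coarse", "true")          // one long-lived executor per node
    .set("spark.cores.max", "36")               // caps total cores, not per-executor cores
    .set("spark.executor.memory", "4g")
  val sc = new SparkContext(conf)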
> On May 19, 2015, at 12:39 PM, Thomas Dudziak wrote:
>>> >
>>> > I read the other day that there will be a fair number of improvements
>>> in 1.4 for Mesos. Could I ask for one more (if it isn't already in there):
>>> a configurable limit on the number of tasks for jobs run on Mesos? This
>>> would be a very simple yet effective way to prevent a job from dominating
>>> the cluster.
>>> >
>>> > cheers,
>>> > Tom
>>> >
>>>
>>>
>>
>>
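A hedged illustration of the closest existing control at the time: capping a
single job's total cores so it cannot dominate a shared cluster (the value is
made up; spark.cores.max is a real setting):

  // Per-job ceiling; with fine-grained Mesos this also bounds concurrent task CPUs.
  val conf = new SparkConf()
    .setAppName("bounded-job")
    .set("spark.cores.max", "16")   // this job never holds more than 16 cores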
--
Regards,
Haripriya Ayyalasomayajula
re! Just had a quick question - is there a job submission API
>> such as the one with hadoop
>>
>> https://hadoop.apache.org/docs/r2.3.0/api/org/apache/hadoop/mapreduce/Job.html#submit()
>> to submit Spark jobs to a Yarn cluster? I see in the examples that
>> bin/spark-subm
eatly appreciate any help. Thank you for your time.
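For later readers: Spark 1.4 added a programmatic launcher that plays the role
of Hadoop's Job.submit(). A sketch assuming Spark 1.4+ on the classpath (the
jar path and main class are placeholders):

  import org.apache.spark.launcher.SparkLauncher

  // Builds and forks a spark-submit process; launch() returns a java.lang.Process
  val app = new SparkLauncher()
    .setAppResource("/path/to/my-app.jar")       // hypothetical application jar
    .setMainClass("com.example.MyApp")           // hypothetical main class
    .setMaster("yarn-cluster")
    .setConf(SparkLauncher.DRIVER_MEMORY, "2g")
    .launch()
  app.waitFor()                                  // block until submission finishes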
--
Regards,
Haripriya Ayyalasomayajula
Graduate Student
Department of Computer Science
University of Houston
Contact : 650-796-7112
is a classpath
> issue.
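If hbase-site.xml is not reliably on the executor classpath, one workaround is
to set the property in code. A sketch (the quorum hosts and znode path are
assumptions; the parent znode is commonly /hbase, or a distribution-specific
path, and must match the server side):

  import org.apache.hadoop.hbase.HBaseConfiguration

  val hbaseConf = HBaseConfiguration.create()
  hbaseConf.set("hbase.zookeeper.quorum", "zk1,zk2,zk3")   // hypothetical hosts
  hbaseConf.set("zookeeper.znode.parent", "/hbase")        // must match the cluster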
>
> On Sun, Mar 15, 2015 at 10:04 AM, HARIPRIYA AYYALASOMAYAJULA <
> aharipriy...@gmail.com> wrote:
>
>> Hello all,
>>
>> Thank you for your responses. I did try to include the
>> zookeeper.znode.parent property in the hbas
Which Spark release are you using?
> I assume it contains SPARK-1297
>
> Cheers
>
> On Fri, Mar 13, 2015 at 7:47 PM, HARIPRIYA AYYALASOMAYAJULA <
> aharipriy...@gmail.com> wrote:
>
>>
>> Hello,
>>
>> I am running a HBase test case. I am using the ex
>
>
--
Regards,
Haripriya Ayyalasomayajula
Graduate Student
Department of Computer Science
University of Houston
Contact : 650-796-7112
file.
I changed it to Double, and on the large file it works until I get the
mapOutput. But when I include the remaining part, it fails.
Can someone please help me understand where I am going wrong?
Thank you for your time.
--
Regards,
Haripriya Ayyalasomayajula
Graduate Student
Department of Compute
<- 0 to y-16)
> yield(x+1,j))
> }
> }
>
> On Fri, Oct 24, 2014 at 8:52 PM, HARIPRIYA AYYALASOMAYAJULA
> wrote:
> > Hello,
> >
> > My map function will call the following function (inc) which should yield
> > multiple values:
> >
> >
> > de
array or a list and return the
same, but I'm still not clear on how it works in Scala/Spark.
Thank you for your time.
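For anyone finding this later, a minimal sketch of the idiom in question: a
helper that yields several values per input is paired with flatMap, which
flattens the per-element collections (the helper body and sample data are
reconstructed guesses from the fragment above; sc is an existing SparkContext):

  // inc returns a sequence of pairs for a single input pair
  def inc(x: Int, y: Int): Seq[(Int, Int)] =
    for (j <- 0 to y - 16) yield (x + 1, j)

  val rdd = sc.parallelize(Seq((1, 18), (2, 20)))
  // flatMap (not map) flattens each Seq into the output RDD
  val expanded = rdd.flatMap { case (x, y) => inc(x, y) }
  // expanded contains (2,0), (2,1), (2,2), (3,0), ..., (3,4)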
--
Regards,
Haripriya Ayyalasomayajula
Graduate Student
Department of Computer Science
University of Houston
Contact : 650-796-7112
thin it.
It would be great if someone could suggest possible ways to do this.
Thanks in advance.
--
Regards,
Haripriya Ayyalasomayajula
Or, if it has something to do with the way you package your files, try an
alternative packaging method and see if that works.
On Monday, October 13, 2014, HARIPRIYA AYYALASOMAYAJULA <
aharipriy...@gmail.com> wrote:
> Well, in the cluster, can you try copying the entire folder and then running?
>
> On Mon, Oct 13, 2014 at 5:39 PM, HARIPRIYA AYYALASOMAYAJULA <
> aharipriy...@gmail.com
> > wrote:
>
>> Hello,
>>
otFoundException: ./joda-convert-1.2.jar (Permission denied)
>
> java.io.FileOutputStream.open(Native Method)
>
> java.io.FileOutputStream.<init>(FileOutputStream.java:221)
>
> com.google.common.io.Files$FileByteSink.openStream(Files.java:223)
>
> com.google.common.io.Files$FileByteSink.openStream(Files.java:211)
>
>
> Thanks,
> Andy
>
>
--
Regards,
Haripriya Ayyalasomayajula
> It behaves like combineByKey(), and will be faster than the
> groupByKey() version.
>
> On Thu, Oct 9, 2014 at 9:28 PM, HARIPRIYA AYYALASOMAYAJULA
> wrote:
> > Sean,
> >
> > Thank you. It works. But I am still confused about the function. Can you
> > kin
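For contrast, a sketch of the groupByKey formulation being advised against:
it ships every value for a key across the shuffle before counting, which is
why the combiner-based approach above wins (rdd is assumed to be an
RDD[(String, Int)]):

  // Works, but materializes all values per key before computing anything
  val slow = rdd.groupByKey().mapValues { vs =>
    100f * vs.count(_ > 0) / vs.size
  }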
Let’s say you call something
> like myRdd.map(x => sum += x): is “sum” being accumulated locally in any
> way, for each element, partition, or node? Is “sum” a broadcast variable?
> Or does it only exist on the driver node? And how does the driver node get
> access to “sum”?
> Thanks
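A hedged sketch answering the closure question: a plain local variable
captured in a closure is copied to each executor, so increments never reach
the driver; accumulators exist for exactly this (using the Spark 1.x API of
the period; myRdd is assumed to be an RDD[Int]):

  // WRONG: "sum" is serialized into the closure; each executor mutates its
  // own copy, and the driver's sum stays 0.
  var sum = 0
  myRdd.foreach(x => sum += x)

  // RIGHT: an accumulator is merged back to the driver after tasks finish.
  val acc = sc.accumulator(0)
  myRdd.foreach(x => acc += x)
  println(acc.value)   // only the driver may read .value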
7 PM, Theodore Si wrote:
>>>>
>>>>> Hi all,
>>>>>
>>>>> I want to use two nodes for test, one as master, the other worker.
>>>>> Can I submit the example application included in the Spark source code
>>>>> tarball from the master node and have it run on the worker?
>>>>> What should I do?
>>>>>
>>>>> BR,
>>>>> Theo
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>
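For the archive, a hedged sketch of the usual answer: start a standalone
master on one node and a worker on the other, then submit the bundled example
from the master (the host name and examples jar path are placeholders; the
jar name varies by Spark version):

  ./bin/spark-submit \
    --class org.apache.spark.examples.SparkPi \
    --master spark://master-host:7077 \
    lib/spark-examples-1.2.0-hadoop2.4.0.jar 100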
--
Regards,
Haripriya Ayyalasomayajula
duh, sorry. The initialization should of course be (v) => (if (v >
> 0) 1 else 0, 1)
> This gives the answer you are looking for. I don't see what Part2 is
> supposed to do differently.
>
> On Thu, Oct 9, 2014 at 6:14 PM, HARIPRIYA AYYALASOMAYAJULA
> wrote:
>
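Putting the correction together, a sketch of the full combineByKey call it
implies (the RDD name and element types are assumptions; the pair tracks
(positive count, total count) per key):

  val counts = rdd.combineByKey(
    (v: Int) => (if (v > 0) 1 else 0, 1),            // createCombiner, as corrected above
    (acc: (Int, Int), v: Int) =>
      (acc._1 + (if (v > 0) 1 else 0), acc._2 + 1),  // mergeValue: fold in one more value
    (a: (Int, Int), b: (Int, Int)) =>
      (a._1 + b._1, a._2 + b._2)                     // mergeCombiners: merge partition results
  )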
you're just counting...
>
> On Thu, Oct 9, 2014 at 11:47 AM, HARIPRIYA AYYALASOMAYAJULA <
> aharipriy...@gmail.com> wrote:
>
>>
>> I am a beginner to Spark and am finding it difficult to implement a very
>> simple reduce operation. I read that it is ideal to use combineByKey f
't the problem I think.
> It sounds like you intend the first element of each pair to be a count
> of nonzero values, but you initialize the first element of the pair to
> v, not 1, in v => (v,1). Try v => (1,1)
>
>
> On Thu, Oct 9, 2014 at 4:47 PM, HARIPRIYA AYYALASOMAYA
// Reconstructed signature: the method header was cut off above; this is the
// standard new-API reducer shape implied by the body.
public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
    int acc1 = 0;   // records with a positive delay
    int acc2 = 0;   // total records for this key
    float frac_delay, percentage_delay;
    for (IntWritable val : values) {
        if (val.get() > 0) {
            acc1++;
        }
        acc2++;
    }
    frac_delay = (float) acc1 / acc2;
    percentage_delay = frac_delay * 100;
    pdelay.set(percentage_delay);    // pdelay: a FloatWritable field on the class
    context.write(key, pdelay);
}
}
Please help. Thank you for your time.
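Since the question is about porting this to Spark, a hedged one-to-one
translation (assuming delays is an RDD[(String, Int)] of per-key delay values;
reduceByKey combines map-side, unlike groupByKey):

  // Same computation as the reducer above: percentage of positive delays per key
  val percentageDelay = delays
    .mapValues(d => (if (d > 0) 1 else 0, 1))              // (positive flag, record count)
    .reduceByKey((a, b) => (a._1 + b._1, a._2 + b._2))
    .mapValues { case (pos, total) => 100f * pos / total }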
--
Regards,
Haripriya Ayyalasomayajula
contact : 650-796-7112