Hi,
As far as I know you can create one SparkContext per JVM, but I wanted to
confirm whether it is one per JVM or one per classloader. As in, one
SparkContext created per *.war, with all deployments under one Tomcat instance.
Regards,
Praveen
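For context, a minimal sketch of the restriction being asked about, assuming Spark 1.x: Spark tracks the active context in a static field, so a second SparkContext in the same JVM fails unless the (discouraged) spark.driver.allowMultipleContexts flag is set. How that static scopes under multiple webapp classloaders depends on how the Spark jars are loaded.

```scala
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName("app1").setMaster("local[2]")
val sc1 = new SparkContext(conf)

// A second context in the same JVM throws unless this discouraged
// escape hatch is set first:
//   conf.set("spark.driver.allowMultipleContexts", "true")
// val sc2 = new SparkContext(conf)  // otherwise: IllegalStateException
```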
Sorry, rephrasing:
Can this issue be resolved by having a smaller block interval?
Regards,
Praveen
On 18 Feb 2016 21:30, "praveen S" <mylogi...@gmail.com> wrote:
> Can having a smaller block interval only resolve this?
>
> Regards,
> Praveen
> On 18 Feb
On Thu, Feb 18, 2016 at 9:40 AM, praveen S <mylogi...@gmail.com> wrote:
>
>> Have a look at
>>
>> spark.streaming.backpressure.enabled
>> Property
>>
>> Regards,
>> Praveen
>> On 18 Feb 2016 00:13, "Abhishek Anand" <abhis.ana
Have a look at
spark.streaming.backpressure.enabled
Property
Regards,
Praveen
On 18 Feb 2016 00:13, "Abhishek Anand" wrote:
> I have a spark streaming application running in production. I am trying to
> find a solution for a particular use case when my application has
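A sketch of the property being referenced, assuming Spark 1.5+ where backpressure was introduced; the maxRate value is purely illustrative:

```scala
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("my-stream")  // hypothetical app name
  // Let the receiver rate adapt to processing speed, rather than
  // relying only on a smaller block interval.
  .set("spark.streaming.backpressure.enabled", "true")
  // Optional upper bound on records/sec per receiver; illustrative value.
  .set("spark.streaming.receiver.maxRate", "10000")
```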
I was also trying to launch Spark jobs from a web service,
but I thought you could run Spark jobs in YARN mode only through
spark-submit. Is my understanding incorrect?
Regards,
Praveen
On 15 Feb 2016 08:29, "Sabarish Sasidharan"
wrote:
> Yes you can look at
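One hedged sketch of a programmatic alternative, assuming Spark 1.4+: SparkLauncher drives spark-submit from code, so a web service can submit YARN jobs without managing the CLI itself. The jar path and main class below are hypothetical.

```scala
import org.apache.spark.launcher.SparkLauncher

val sparkProc = new SparkLauncher()
  .setAppResource("/path/to/app.jar")  // hypothetical application jar
  .setMainClass("com.example.MyJob")   // hypothetical main class
  .setMaster("yarn-client")
  .launch()                            // spawns a spark-submit child process
sparkProc.waitFor()                    // block until the job finishes
```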
Hi,
I have 2 questions when running Spark jobs on YARN in client mode:
1) Where is the AM (application master) created:
A) Is it created on the client where the job was submitted, i.e. driver and
AM on the same client?
Or
B) Does YARN decide where the AM should be created?
2) Driver and AM:
Can you explain what happens between the driver and the AM in yarn-client mode?
Regards,
Praveen
On 10 Feb 2016 10:55, "ayan guha" <guha.a...@gmail.com> wrote:
> It depends on yarn-cluster and yarn-client mode.
>
> On Wed, Feb 10, 2016 at 3:42 PM, praveen S <mylogi...@gmail.com> wrote:
>
---
Robin East
*Spark GraphX in Action* Michael Malak and Robin East
Manning Publications Co.
http://www.manning.com/books/spark-graphx-in-action
On 11 Jan 2016, at 12:30, praveen S <mylogi...@gmail.com> wrote:
Yes I was looking som
Sorry, found the API.
On 21 Jan 2016 10:17, "praveen S" <mylogi...@gmail.com> wrote:
> Hi Robin,
>
> I am using Spark 1.3 and I am not able to find the api
> Graph.fromEdgeTuples(edge RDD, 1)
>
> Regards,
> Praveen
> Well you can use a similar tech
Can you give me more details on Spark's JobServer?
Regards,
Praveen
On 18 Jan 2016 03:30, "Jia" wrote:
> I guess all jobs submitted through JobServer are executed in the same JVM,
> so RDDs cached by one job can be visible to all other jobs executed later.
> On Jan 17,
Is using a SparkContext from a web container the right way to process Spark
jobs, or should we invoke spark-submit via a ProcessBuilder?
Are there any pros or cons of using a SparkContext from a web container?
How does Zeppelin trigger Spark jobs from the web context?
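For comparison, a sketch of the spark-submit-in-a-separate-process route (paths and the main class are hypothetical):

```scala
import scala.sys.process._

val exitCode = Seq(
  "/opt/spark/bin/spark-submit",   // hypothetical Spark installation path
  "--master", "yarn-client",
  "--class", "com.example.MyJob",  // hypothetical main class
  "/path/to/app.jar"               // hypothetical application jar
).!                                // runs the process, returns its exit code
```

One trade-off worth noting: keeping the driver in its own process isolates the web container from driver failures and from the one-SparkContext-per-JVM restriction, at the cost of slower per-job startup.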
> Robin East
> *Spark GraphX in Action* Michael Malak and Robin East
> Manning Publications Co.
> http://www.manning.com/books/spark-graphx-in-action
>
>
>
>
>
> On 11 Jan 2016, at 03:19, praveen S <mylogi...@gmail.com> wrote:
>
> Is it possible in graphx to creat
Is it possible in GraphX to create/generate an n x n graph given only the
vertices?
On 8 Jan 2016 23:57, "praveen S" <mylogi...@gmail.com> wrote:
> Is it possible in graphx to create/generate a graph n x n given n
> vertices?
>
Is it possible in GraphX to create/generate an n x n graph given n vertices?
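A sketch of one way to do this, assuming an existing SparkContext `sc`: generate all ordered vertex-id pairs and feed them to Graph.fromEdgeTuples.

```scala
import org.apache.spark.graphx.Graph

val n = 4L
val ids = sc.parallelize(0L until n)  // assumes an existing SparkContext `sc`
// All ordered pairs except self-loops, i.e. a complete directed graph.
val pairs = ids.cartesian(ids).filter { case (a, b) => a != b }
val complete = Graph.fromEdgeTuples(pairs, defaultValue = 1)
// complete.edges.count() should be n * (n - 1)
```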
When I do an rdd.collect(), does the data move back to the driver, or is it
still held in memory across the executors?
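A short sketch of the semantics as I understand them: collect() copies every element into a local Array on the driver; executor-side copies persist only if the RDD was explicitly cached.

```scala
val rdd = sc.parallelize(1 to 100).cache()  // assumes an existing SparkContext `sc`
rdd.count()                // materializes the executor-side cache
val local = rdd.collect()  // Array[Int] in driver memory; beware large RDDs
```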
What does this mean in .setMaster("local[2]")?
Is this applicable only to standalone mode?
Can I do this in a cluster setup, e.g.:
.setMaster("hostname:port[2]")?
Is it the number of threads per worker node?
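A sketch of the difference, to the best of my understanding: the [2] in local[2] is a thread count for local mode only; cluster master URLs take no such suffix, and per-executor parallelism is configured separately. The host and values below are illustrative.

```scala
import org.apache.spark.SparkConf

// Local mode: driver and executors in one JVM, 2 worker threads.
val devConf = new SparkConf().setAppName("dev").setMaster("local[2]")

// Standalone cluster: no thread suffix on the master URL; cores are
// controlled by properties such as spark.executor.cores instead.
val prodConf = new SparkConf()
  .setAppName("prod")
  .setMaster("spark://hostname:7077")  // illustrative host:port
  .set("spark.executor.cores", "2")
```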
Is StringIndexer + VectorAssembler equivalent to HashingTF when converting
documents for analysis?
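To make the comparison concrete, a sketch assuming the spark.ml pipeline API (Spark 1.4+): HashingTF turns token arrays into fixed-width term-frequency vectors, whereas StringIndexer encodes one categorical column at a time, so the two are not interchangeable for free text. Column names are hypothetical.

```scala
import org.apache.spark.ml.feature.{HashingTF, StringIndexer, Tokenizer}

// Free text: tokenize, then hash tokens into a term-frequency vector.
val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
val tf = new HashingTF().setInputCol("words").setOutputCol("features")

// Single categorical column: map each distinct string to a numeric index.
val indexer = new StringIndexer()
  .setInputCol("category")
  .setOutputCol("categoryIndex")
```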
you are trying to solve, and
then the selection may be evident.
On Wednesday, August 5, 2015, praveen S mylogi...@gmail.com wrote:
I was wondering when one should go for MLib or SparkR. What is the
criteria or what should be considered before choosing either of the
solutions for data
I was wondering when one should go for MLlib or SparkR. What are the criteria,
or what should be considered, before choosing either of the solutions for
data analysis?
Or: what are the advantages of Spark MLlib over SparkR, or the advantages of
SparkR over MLlib?
Hi
I wanted to know: what is the difference between
RandomForestModel and RandomForestClassificationModel
in MLlib? Will they yield the same results for a given dataset?
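For what it's worth, the two names come from the two MLlib APIs; a sketch of where each lives:

```scala
// RDD-based API (org.apache.spark.mllib): returned by
// RandomForest.trainClassifier / trainRegressor.
import org.apache.spark.mllib.tree.model.RandomForestModel

// DataFrame-based API (org.apache.spark.ml, 1.4+): produced by fitting
// a RandomForestClassifier, typically inside a Pipeline.
import org.apache.spark.ml.classification.RandomForestClassificationModel
```

Both wrap the same random-forest algorithm, so with identical data, parameters, and seed the results should match, but I have not verified that.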
Hi,
Are SparkR and Spark MLlib the same?