Hi Niranda,

After the offline discussion we had, I tried increasing the Spark cores to
10 and limiting the CarbonAnalytics app's cores to 5, leaving the remaining
5 cores to be used by any external app. Now both apps (the CarbonAnalytics
app and the external app) are shown as "Running" in the Spark Master UI,
but I keep getting the following error on the client side:
"TaskSchedulerImpl: Initial job has not accepted any resources; check your
cluster UI to ensure that workers are registered and have sufficient
memory".
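
For reference, this is roughly how an external client could connect and cap
its core usage; a minimal sketch, assuming the per-app limit is applied via
spark.cores.max (the app name and master URL below are just illustrative):

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class ExternalSparkApp {
    public static void main(String[] args) {
        // Connect to the standalone master and cap this app's share of cores,
        // leaving the remaining cores for other apps (e.g. CarbonAnalytics).
        SparkConf conf = new SparkConf()
                .setAppName("ExternalApp")            // illustrative name
                .setMaster("spark://localhost:7077")  // standalone master URL (default port)
                .set("spark.cores.max", "5");         // assumption: per-app core cap

        JavaSparkContext sc = new JavaSparkContext(conf);
        long count = sc.parallelize(java.util.Arrays.asList(1, 2, 3)).count();
        System.out.println("count = " + count);
        sc.stop();
    }
}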

On further investigation, I found that the worker node listed in the
Spark Master UI points to *localhost:8091*, whereas the worker's console
says "Analytics worker started: [10.100.5.116:4503:*8092*]". (I can access
the worker node UIs on both ports, *localhost:8091* and *localhost:8092*.)
It seems that two worker nodes get initiated, and the jobs are passed to
the wrong worker. From what I found while searching, the error I mentioned
earlier (*"TaskSchedulerImpl: Initial job has not accepted any resources;
check your cluster UI to ensure that workers are registered and have
sufficient memory"*) also appears to be caused by improper connectivity
between the master and the worker.
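
To double-check which worker (and how much memory) the master has actually
registered, something like the following could be run against the master's
web UI; a minimal sketch, assuming the standalone master exposes its status
as JSON at /json on its web UI port (8080 below is the Spark default and
may differ in our setup):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class MasterStatusCheck {
    public static void main(String[] args) throws Exception {
        // Assumption: the standalone master's web UI serves a JSON status
        // page at /json; its "workers" array lists each registered worker's
        // host:port, cores and memory.
        URL statusUrl = new URL("http://localhost:8080/json");
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(statusUrl.openStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}

If the worker listed there doesn't match the 10.100.5.116:4503 one from the
console output, that would line up with the resource error above.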

Could you please take a look at what's happening?

Thanks,
Supun

On Tue, Mar 10, 2015 at 12:22 PM, Niranda Perera <nira...@wso2.com> wrote:

> Okay Anjana, noted!
>
> On Tue, Mar 10, 2015 at 12:10 PM, Anjana Fernando <anj...@wso2.com> wrote:
>
>> Hi Niranda,
>>
>> Once we get the clustering to work properly, which would be before
>> Friday's release, it should be straightforward for ML to point to our
>> master server and submit jobs. So let's wait till we finalize our
>> implementation, and then the ML team can work with us to get their use
>> case working.
>>
>> Cheers,
>> Anjana.
>>
>> On Tue, Mar 10, 2015 at 10:09 AM, Niranda Perera <nira...@wso2.com>
>> wrote:
>>
>>> @Supun, I suggest you use an external Spark cluster for the moment. As of
>>> the current BAM, we still have not finalized the external job submission
>>> strategy in a BAM cluster. I will discuss this with Anjana and get back
>>> to you!
>>>
>>> On Tue, Mar 10, 2015 at 10:02 AM, Supun Sethunga <sup...@wso2.com>
>>> wrote:
>>>
>>>> [looping Nirmal]
>>>>
>>>> On Tue, Mar 10, 2015 at 10:01 AM, Supun Sethunga <sup...@wso2.com>
>>>> wrote:
>>>>
>>>>> [looping Nirmal]
>>>>>
>>>>> Hi Niranda,
>>>>>
>>>>> Thanks for the clarification.
>>>>>
>>>>> @Nirmal: It seems we have to wait till the BAM M2 release for ML-BAM
>>>>> integration :). But we can still use an external Spark cluster for ML in
>>>>> the meantime.
>>>>>
>>>>> Regards,
>>>>> Supun
>>>>>
>>>>> On Tue, Mar 10, 2015 at 9:49 AM, Niranda Perera <nira...@wso2.com>
>>>>> wrote:
>>>>>
>>>>>> cc+ Anjana
>>>>>>
>>>>>> Hi Supun,
>>>>>>
>>>>>> As per the M1 release, we have only the standalone 'local' mode Spark
>>>>>> instance. As you correctly said, we have set the master to 'local' in
>>>>>> BAM; please refer to the init() method here [1]. In the init() method,
>>>>>> we are starting a Spark context.
>>>>>>
>>>>>> We are still working on Spark clustering. The plan is to release it
>>>>>> in M2.
>>>>>>
>>>>>> [1]
>>>>>> https://github.com/wso2/carbon-analytics/blob/master/components/analytics/analytics-processors/org.wso2.carbon.analytics.spark.core/src/main/java/org/wso2/carbon/analytics/spark/core/AnalyticsExecutionService.java
>>>>>>
>>>>>> On Mon, Mar 9, 2015 at 5:48 PM, Supun Sethunga <sup...@wso2.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I tried starting up a BAM server and submitting a Spark job, from an
>>>>>>> external Java client, to the Spark instance built into BAM. In doing
>>>>>>> this, I was not able to create a Spark context using BAM's Spark
>>>>>>> instance as the master. (AFAIK, the way to do this is to create a
>>>>>>> Spark context inside the client, point to BAM's Spark instance as the
>>>>>>> master, and then submit any jobs to the created context.)
>>>>>>>
>>>>>>> A possible reason is that in BAM, the Spark instance (i.e.
>>>>>>> SparkContext) is started locally with one worker thread (setting the
>>>>>>> master to 'local'). Hence the Spark instance cannot be accessed by any
>>>>>>> external client. (Refer to the attached image "BAM-spark.png".)
>>>>>>> AFAIK, to be accessible, BAM's Spark instance needs to be started in
>>>>>>> clustering mode. Then any external client can also use BAM's Spark
>>>>>>> through the "spark.master" URL, which is "spark://localhost:7077" by
>>>>>>> default. (Refer to the attached image "external-spark-cluster.png".)
>>>>>>>
>>>>>>> Is this feature (clustering Spark) currently supported by BAM (in the
>>>>>>> M1 release)?
>>>>>>>
>>>>>>> If so, what's the way forward?
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Supun
>>>>>>>
>>>>>>> --
>>>>>>> *Supun Sethunga*
>>>>>>> Software Engineer
>>>>>>> WSO2, Inc.
>>>>>>> lean | enterprise | middleware
>>>>>>> Mobile : +94 716546324
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> *Niranda Perera*
>>>>>> Software Engineer, WSO2 Inc.
>>>>>> Mobile: +94-71-554-8430
>>>>>> Twitter: @n1r44 <https://twitter.com/N1R44>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> *Supun Sethunga*
>>>>> Software Engineer
>>>>> WSO2, Inc.
>>>>> lean | enterprise | middleware
>>>>> Mobile : +94 716546324
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> *Supun Sethunga*
>>>> Software Engineer
>>>> WSO2, Inc.
>>>> lean | enterprise | middleware
>>>> Mobile : +94 716546324
>>>>
>>>
>>>
>>>
>>> --
>>> *Niranda Perera*
>>> Software Engineer, WSO2 Inc.
>>> Mobile: +94-71-554-8430
>>> Twitter: @n1r44 <https://twitter.com/N1R44>
>>>
>>
>>
>>
>> --
>> *Anjana Fernando*
>> Senior Technical Lead
>> WSO2 Inc. | http://wso2.com
>> lean . enterprise . middleware
>>
>
>
>
> --
> *Niranda Perera*
> Software Engineer, WSO2 Inc.
> Mobile: +94-71-554-8430
> Twitter: @n1r44 <https://twitter.com/N1R44>
>



-- 
*Supun Sethunga*
Software Engineer
WSO2, Inc.
http://wso2.com/
lean | enterprise | middleware
Mobile : +94 716546324
