>
> So how would I start a cluster of 3? SPARK_WORKER_INSTANCES is the only
> way I see to start multiple workers in a standalone cluster, and the only
> place I see to define it is in spark-env.sh. The spark-submit options,
> SPARK_EXECUTOR_INSTANCES and spark.executor.instances, all relate to
> submitting the job, not to starting the cluster.
>
> Any ideas?
>
> Thanks
>
> Assaf
>
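A sketch of what I would try, assuming a plain standalone setup (all values here are illustrative, adjust for your hosts):

```shell
# conf/spark-env.sh on each worker host (illustrative values).
# SPARK_WORKER_INSTANCES controls how many worker JVMs the start
# scripts launch per host -- it is a worker-side setting, not an
# executor setting, which is why the spark-submit options don't help.
export SPARK_WORKER_INSTANCES=3
# With several workers per host, cap what each one advertises:
export SPARK_WORKER_CORES=2
export SPARK_WORKER_MEMORY=4g
```

After that, sbin/start-all.sh (or sbin/start-slaves.sh from the master) should bring up three workers per host; the master web UI (port 8080 by default) shows whether they registered.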
--
Regards,
Ofer Eliassaf
availability in python spark.
Currently only the YARN deployment supports it. Bringing in the huge YARN
installation just for this feature is not fun at all.
Does someone have a time estimation for this?
--
Regards,
Ofer Eliassaf
applications will get the total
amount of cores until a new application arrives...
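On the scheduling point: by default a standalone application grabs every available core, so one common way to leave room for later applications is to cap each one with spark.cores.max. A sketch (the master URL and application file are placeholders, not from this thread):

```shell
# Illustrative: cap this application at 8 cores so the standalone
# scheduler keeps cores free for applications that arrive later.
# spark://master-host:7077 and my_app.py are placeholders.
spark-submit \
  --master spark://master-host:7077 \
  --conf spark.cores.max=8 \
  my_app.py
```

The same key can go in conf/spark-defaults.conf so every application submitted to the cluster is capped.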
--
Regards,
Ofer Eliassaf
>>
>> -
>> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>>
> --
> Cell : 425-233-8271
> Twitter: https://twitter.com/holdenkarau
>
--
Regards,
Ofer Eliassaf
>>>>> View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/PySpark-TaskContext-tp28125.html
>>>>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
--
Regards,
Ofer Eliassaf
Anyone? Please? Is this getting any priority?
On Tue, Sep 27, 2016 at 3:38 PM, Ofer Eliassaf <ofer.elias...@gmail.com>
wrote:
> Is there any plan to support python spark running in "cluster mode" on a
> standalone deployment?
>
> There is this famous survey
hour.
We want to keep the labels and the sample ids around for the next iteration
(N+1), where we join against the new sample window so that samples which
already existed in the previous iteration (N) inherit their labels.
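For what it's worth, the label-inheritance step can be sketched outside Spark; a minimal plain-Python illustration (the names inherit_labels, prev_window, new_window are mine, not an API — in Spark this would be a left outer join of the new window against the previous one, keyed on sample id):

```python
def inherit_labels(prev_window, new_window):
    """Carry labels from iteration N into iteration N+1.

    prev_window: dict mapping sample_id -> label (iteration N)
    new_window:  iterable of sample_ids seen in iteration N+1
    Samples absent from the previous window come back unlabeled (None),
    mirroring the null side of a left outer join.
    """
    return {sid: prev_window.get(sid) for sid in new_window}

# Iteration N: three labeled samples.
prev = {"a": "spam", "b": "ham", "c": "spam"}
# Iteration N+1: "b" and "c" persist, "d" is new.
labels = inherit_labels(prev, ["b", "c", "d"])
print(labels)  # {'b': 'ham', 'c': 'spam', 'd': None}
```

The None entries are exactly the samples that need fresh labeling in iteration N+1.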
--
Regards,
Ofer Eliassaf