distribution of the models. But I wanted to know whether there is any
infrastructure in Spark that specifically addresses such a need.
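
For context, the closest general-purpose mechanism I'm aware of is a
broadcast variable: the driver loads the model once and Spark ships a
read-only copy to every executor. A minimal sketch using pyspark.mllib
(the HDFS path is a placeholder, not a real deployment):

    from pyspark import SparkContext
    from pyspark.mllib.classification import LogisticRegressionModel

    sc = SparkContext(appName="model-broadcast-sketch")

    # Load the current model version on the driver (path is hypothetical).
    model = LogisticRegressionModel.load(sc, "hdfs:///models/lr/v2")

    # Broadcast it: each executor receives one read-only copy.
    bc_model = sc.broadcast(model)

    # Score a toy dataset; predict() runs locally inside the executors.
    features = sc.parallelize([[0.0, 1.0], [1.0, 0.0]])
    print(features.map(lambda v: bc_model.value.predict(v)).collect())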
Thanks.
Cheers,
P.S.: Sorry Jacek, by "ml" I meant "Machine Learning". I thought it was a
fairly widespread acronym; sorry for any confusion.
--
S
Hi,
I have one question:
How is the distribution of ML models done across all the nodes of a Spark
cluster? I'm thinking about scenarios where the pipeline implementation does
not necessarily need to change, but the models have been upgraded.
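
Not an authoritative answer, but one common pattern with spark.ml is to
persist the fitted pipeline and reload it by path, so an upgraded model can
be swapped in without touching the scoring code. A sketch, assuming Spark
>= 1.6 persistence; the paths are hypothetical and train_df / new_df are
assumed to be existing DataFrames:

    from pyspark.ml import Pipeline, PipelineModel
    from pyspark.ml.classification import LogisticRegression
    from pyspark.ml.feature import VectorAssembler

    # Train once and save to a versioned location (path is hypothetical;
    # train_df is an assumed DataFrame with columns x1, x2, label).
    assembler = VectorAssembler(inputCols=["x1", "x2"], outputCol="features")
    lr = LogisticRegression(featuresCol="features", labelCol="label")
    fitted = Pipeline(stages=[assembler, lr]).fit(train_df)
    fitted.save("hdfs:///models/churn/v2")

    # The scoring job just points at the new version; Spark ships the
    # loaded model to the executors with the tasks that use it.
    model = PipelineModel.load("hdfs:///models/churn/v2")
    scored = model.transform(new_df)  # new_df: assumed DataFrame to score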
Thanks in advance.
Best regards,
--
Sergio Fernández
In theory, yes... common sense says that:
volume / resources = time
So more volume on the same processing resources would just take more time.
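
A back-of-the-envelope example of that rule (assuming roughly linear
scaling and no spilling to disk):

    # Hypothetical measured baseline: 10 GB processed in 5 minutes.
    baseline_gb, baseline_min = 10.0, 5.0
    new_gb = 40.0
    # Same resources, 4x the volume -> roughly 4x the time.
    print(baseline_min * (new_gb / baseline_gb))  # 20.0 minutes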
On Jun 15, 2016 6:43 PM, "spR" wrote:
> I have 16 GB RAM, i7
>
> Will this config be able to handle the processing without my
>>
>> Where your bin/python is your actual Python environment with Numpy
>> installed.
>>
>>
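
For anyone finding this thread later: the fix is to make PYSPARK_PYTHON
point at an interpreter that has NumPy, normally by exporting it in the
shell or in conf/spark-env.sh before launching. A sketch for local mode,
where setting it in the driver script before the SparkContext is created
is enough; the path below is a placeholder:

    import os

    # Point the Python workers at an interpreter that has NumPy.
    # Must be set before the SparkContext (and its JVM) is created;
    # the path is a placeholder for your actual environment.
    os.environ["PYSPARK_PYTHON"] = "/opt/myenv/bin/python"

    from pyspark import SparkContext
    sc = SparkContext(appName="numpy-env-sketch")

    # Tasks that import numpy (e.g. anything in pyspark.mllib) should
    # now find it inside the executors' Python workers.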
>> On Jun 1, 2016, at 20:16, Bhupendra Mishra <bhupendra.mis...@gmail.com>
>> wrote:
>>
>> I have numpy installed, but where should I set up P
>>>
>>> File "/opt/mapr/spark/spark-1.6.1/python/lib/pyspark.zip/pyspark/mllib/__init__.py", line 25, in <module>
>>>
>>> ImportError: No module named numpy
>>>
>>>
>>> Thanks in advance!
>>>
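
A quick way to confirm which interpreter the executors actually run, and
whether NumPy is importable there (a diagnostic sketch; assumes an
existing SparkContext sc):

    def probe(_):
        # Runs inside an executor's Python worker.
        import sys
        try:
            import numpy
            version = numpy.__version__
        except ImportError:
            version = "missing"
        return (sys.executable, version)

    print(sc.parallelize(range(4), 2).map(probe).distinct().collect())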